Trust

“Research shows a strong correlation between trust and the wealth of a society. Trust enables cooperation, cooperation enables specialization, and specialization drives productivity.

Fortunately, indicators of declining trust miss a deeper reality: While Americans may increasingly distrust many of their institutions, technology is enabling certain kinds of trust at levels seldom before seen in human history.

[..] Ride-sharing companies created a new way for us to trust a ride with a stranger: a platform accessible from a smartphone where riders can view aggregated driver ratings. In this way, Uber and Lyft are providing an alternative less to taxis than to government.

One key difference between these companies and government is that they provide trust through a decentralized platform rather than a centralized bureaucracy. We don’t trust Uber, per se; we trust the riders who have gone before us. Ride-sharing apps just make trusting other people possible and scalable. Another difference is that they offer a choice. If you don’t like Uber, you can try Lyft. There is only one New York City taxi commission.

[..] Even if you favor government oversight and regulation, this is a good thing. If new suppliers of trust demonstrate that they are more efficient at delivering the trust necessary to make certain transactions happen, this frees up government to stick to areas where it has a comparative advantage.

There is a long history in the U.S. of private actors stepping in to establish public trust. In 18th-century New York, when government refused to enforce brokerage contracts in court, a group of brokers formed the New York Stock Exchange to privately regulate misbehavior. Today, Finra, a private corporation, is the primary regulator of stockbrokers. Some deride self-regulation as foxes guarding the henhouse. The question, however, isn’t whether it’s perfect but whether it’s more efficient than the next best alternative.

[..] The promise of internet platforms is to harness the information of millions of individuals without relying so much on government or corporations, which can take advantage of their position to serve their own narrow interests. With digital platforms tapping into the wisdom of crowds, we can have the best of both worlds: a system based on individuals and voluntary choices that also harnesses more information than was historically available to any one individual, company or bureaucrat.

[..] A five-star consumer rating system is never going to replace the FDA. But the New Deal model of governance—experts based in Washington writing one-size-fits-all rules—is no longer enough. New challenges and technologies—from gene editing to 3-D-printed guns to AI—are proliferating beyond the capacity of our mid-20th-century governance tools.

Workplaces might be made safer through real-time tracking of employee well-being than by OSHA rules. A souped-up version of the job and recruiting site Glassdoor might be a better way of preventing workplace discrimination than lawsuits brought by the EEOC. And algorithms might be better than judges at setting criminal punishments because they can take more data into account and are less likely to suffer from human biases.

Much more work needs to be done, of course, before we let these new social technologies handle the bulk of our regulation. But if history is a guide, new trust technologies will emerge and displace old ones, offering opportunities to cooperate in ways we can’t now imagine.”

How Technology Will Revolutionize Public Trust: Though Americans increasingly distrust their institutions, digital platforms are spurring them to rely on one another like never before (2019.10.17)

“The algorithms provide cheaper and more accurate forecasts, while judges, analysts or policy-makers use their judgment to recognize when unusual circumstances may cause the algorithm to be inaccurate. Such hybridization is appealing because algorithms can help offset well-known human cognitive biases, while humans can address the algorithm’s difficulty in dealing with novel circumstances or recognizing when the algorithm is not functioning well.

But how well hybrid decision-making will work is inherently tied to the degree of trust that humans, and the larger public, have in algorithms to make these forecasts. While these issues have been well studied in aerospace and medicine, much less work has been done in the social sciences, and almost none in public policy. Too much trust in the algorithm (algorithm bias) can result in poor decision-making, potentially ignoring important characteristics of a particular case or subtly adopting the biases built into the algorithm in ways that may reinforce inequities. Too little trust in automation (algorithm aversion) can result in abandoning some of our most promising tools for improving human decision-making. Bringing this into the realm of public policy adds a further aspect to these discussions, as we must deal not only with the trust of an operator (like a judge or intelligence analyst) but also with the trust placed in the algorithm by the general public as they hold officials accountable for decisions made on the basis of the algorithm.

[..] Participants were much more trusting of algorithms than of human or crowd-sourced methods across all of the studies, even when they were explicitly told that algorithms do not perform better than untrained humans at the task, or when the task was ill-suited to algorithmic forecasting. Those with higher general levels of trust in automation were found to be most influenced by algorithmic advice, while the effects of age, education and gender were more mixed across tasks. In terms of preferences among algorithms, we found that items often discussed by designers of algorithmic decision aids (e.g., transparency, relevance of training data, and human involvement) played little role in preferences when compared with the size of the dataset, the errors reported in training data, and the source of the algorithm: areas where many people tend to have difficulty with interpretation.

[..] On the one hand, people seem to trust algorithms to solve problems in the public policy sphere, and there is a small preference for having humans incorporated into decision-making systems. On the other, respondents seem to draw little distinction between areas where algorithms are likely to perform well and areas where they are not. Moreover, preferences about algorithms seem to rest on heuristics that are either not necessarily relevant to an algorithm’s utility, such as how many training cases were used or who made the algorithm, or difficult to interpret without reference to some baseline, like accuracy statistics.

If the future is hybrid decision-making, the need to improve the human end of the hybrid is every bit as important as, if not more important than, the need to continue improving our machines. While programming and algorithms are increasingly being taught in classrooms, that curriculum also needs to cover what makes for quality data, such as the process of sample selection and inference. As others have shown, “big data” does not eliminate traditional concerns about inference and self-selected samples. Similarly, instruction should include discussion of how to evaluate the accuracy of models and the importance of having a baseline for comparison. These norms should carry over to popular discussion of algorithms as well, with emphasis placed on algorithms being transparent, based on well-documented and justified training data, and evaluated against a relevant baseline, rather than on the current media focus on the size of the data used and the prestige of the institution from which the algorithm originated. These lessons are not just relevant for public policy, but their growing importance in this area makes broader algorithmic literacy an increasingly important element of democratic citizenship.”

Trust in Public Policy Algorithms (2018.11.12)
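As a concrete sketch of two ideas in the excerpt above, hybrid decision-making and evaluating an algorithm against a baseline, here is a minimal Python illustration. Everything in it is hypothetical: the toy risk score, the feature names, the 0.5 decision threshold and the sample cases are illustrative assumptions, not material from the paper.

```python
# Minimal sketch of hybrid decision-making with a baseline comparison.
# All features, weights, thresholds and cases below are hypothetical
# illustrations, not taken from the paper.
from statistics import mean


def algorithmic_forecast(case: dict) -> float:
    """Toy stand-in for a statistical model's risk forecast in [0, 1]."""
    return min(1.0, 0.05 * case["prior_incidents"] + 0.3 * case["severity"])


def hybrid_decision(case: dict, human_override: float | None = None) -> float:
    """Use the algorithm's forecast unless a human flags the case as novel
    (circumstances the model was not built for) and supplies an estimate."""
    return human_override if human_override is not None else algorithmic_forecast(case)


def accuracy(predictions: list, outcomes: list) -> float:
    """Share of cases where the 0.5-thresholded prediction matched the outcome."""
    return mean(int(p >= 0.5) == y for p, y in zip(predictions, outcomes))


if __name__ == "__main__":
    # Hypothetical evaluation set: (case features, observed outcome).
    cases = [
        ({"prior_incidents": 6, "severity": 1.0}, 1),
        ({"prior_incidents": 0, "severity": 0.2}, 0),
        ({"prior_incidents": 2, "severity": 0.9}, 1),
        ({"prior_incidents": 1, "severity": 0.1}, 0),
    ]
    predictions = [hybrid_decision(features) for features, _ in cases]
    outcomes = [outcome for _, outcome in cases]

    # The excerpt's baseline point: a raw accuracy number is hard to
    # interpret, so compare it against a naive rule ("always predict the
    # majority class") before crediting the algorithm.
    majority = 1 if mean(outcomes) >= 0.5 else 0
    baseline = [float(majority)] * len(outcomes)
    print(f"hybrid accuracy:   {accuracy(predictions, outcomes):.2f}")
    print(f"baseline accuracy: {accuracy(baseline, outcomes):.2f}")
```

The final comparison makes the excerpt's point: an accuracy figure is informative only relative to what a naive rule achieves on the same cases.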