The Luddites lost the fight to save their livelihoods. As the threat of artificial intelligence looms, can we do any better?
Excerpt – The Luddites rejected the moral and political authority of a system that had abandoned long-held principles of fairness, quality, and mutual obligation. Under feudalism and mercantile capitalism, Britain’s rigid class structure placed the gentry at the top, merchants and professionals (such as doctors, parsons, and lawyers) in the middle, and the vast majority in the “lower orders.” Yet this social hierarchy was accompanied by labor-market regulations—both formal and informal—that provided some measure of reciprocity. Skilled trades were restricted to those who had undergone apprenticeships, and in times of economic distress local authorities offered unemployed workers and their families “outdoor relief” in the form of food, money, and clothing.
Industrial capitalism, by contrast, ushered in a free-market ideology that emphasized employers’ rights and viewed government intervention—whether in wage regulation or in hiring and firing practices—with suspicion. As [the historian E. P.] Thompson [author of “The Making of the English Working Class,” published sixty years ago] observed, Luddites “saw laissez-faire not as freedom, but as ‘foul Imposition.’ ” They rejected the idea that “one man, or a few men, could engage in practices which brought manifest injury to their fellows.”
Even technology optimists acknowledge that A.I. raises questions similar to those that the Luddites once posed. In a 2022 article in Daedalus, Erik Brynjolfsson argued that today’s key challenge is steering A.I. development toward augmenting the efforts of human workers rather than replacing them. “When AI augments human capabilities, enabling people to do things they never could before, then humans and machines are complements,” he wrote. “Complementarity implies that people remain indispensable for value creation and retain bargaining power in labor markets and political decision-making.”
That’s the hopeful scenario. But when A.I. automates human skills outright, Brynjolfsson warned, “machines become better substitutes for human labor,” while “workers lose economic and political bargaining power, and become increasingly dependent on those who control the technology.” In this environment, tech giants—which own and develop A.I.—accumulate vast wealth and power, while most workers are left without leverage or a path to improving their conditions. Brynjolfsson termed this dystopian outcome “the Turing Trap,” after the computing pioneer Alan Turing. [..]
As an example of A.I.’s potential to play a socially productive role, [the MIT economist David] Autor [who helped characterize the “China shock,” the flood of cheap imports that devastated American manufacturing] pointed to health care, now the largest employment sector in the U.S. If nurse practitioners were supported by well-designed A.I. systems, he said, they could take on a broader range of diagnostic and treatment responsibilities, easing the country’s shortage of M.D.s and lowering health-care costs. Similar opportunities exist in other fields, such as education and law, he argued. “The problem in the economy right now is that much of the most valuable work involves expert decision-making, monopolized by highly educated professionals who aren’t necessarily becoming more productive,” he said. “The result is that everyone pays a lot for education, health care, legal services, and design work. That’s fine for those of us providing these services—we pay high prices, but we also earn high wages. But many people only consume these services. They’re on the losing end.”
If A.I. were designed to augment human expertise rather than replace it, it could promote broader economic gains and reduce inequality by providing opportunities for middle-skill work, Autor said. His great concern, however, is that A.I. is not being developed with this goal in mind. Instead of designing systems that empower human workers in real-world environments—such as urgent-care centers—A.I. developers focus on optimizing performance against narrowly defined data sets. “The fact that a machine performs well on a data set tells you little about how it will function in the real world,” Autor said. “A data set doesn’t walk into a doctor’s office and say it isn’t feeling well.”
He cited a 2023 study showing that certain highly trained radiologists, when using A.I. tools, produced diagnoses that were less accurate, in part because they gave too much weight to inaccurate A.I. results. “The tool itself is very good, yet doctors perform worse with it,” he said. His solution? Government intervention to insure that A.I. systems are tested in real-world conditions, with careful evaluation of their social impact. The broader goal, he argued, should be to enable workers without advanced degrees to take on high-value decision-making tasks. “But that message has to filter all the way down to the question of: How do we benchmark success?” he said. “I think it’s feasible—but it’s not simple.”
Full article: J. Cassidy, The New Yorker, April 14, 2025.