Why A.I. Might Not Take Your Job or Supercharge the Economy

“[Roge Karma, senior editor for the “Ezra Klein Show,” a podcast from The New York Times] From Patrick A: ‘[..] So what do you make of the most dire assessments of the risks posed by A.I.? And what level of alarm do you feel about its dangers, and why?’

[Ezra Klein] [..] He [Dan Hendrycks, an A.I. safety researcher] wrote a recent paper [..] called “Natural Selection Favors A.I.s Over Humans,” and the point of his paper, I think, is that it offers a more intuitive idea of how we could get into some real trouble, whether or not you’re thinking about existential risk or just a lot of risk.

And he writes, “as A.I.s become increasingly capable of operating without direct human oversight, A.I.s could one day be pulling high-level strategic levers. And if this happens, the direction of our future will be highly dependent on the nature of these A.I. agents.”

[..] he writes that these A.I.s are basically going to get better. That’s already happening. [..] We’re going to turn over tasks like “make an advertising campaign,” or “analyze this data set,” or “I’m trying to make this strategic decision about what my country or my company should do. Look at all the data, and advise me.”

And he writes, “Eventually, A.I.s will be used to make the high-level strategic decisions, now reserved for C.E.O.s or politicians. At first, A.I.s will continue to do tasks they already assist people with, like writing emails, but as A.I.s improve, as people get used to them, and as staying competitive in the market demands using them, A.I.s will begin to make important decisions with very little oversight.”

[..] What he is getting at here is that, as these programs become better, there’s going to be a market pressure for companies and potentially even countries to hand over more of their operations to them because you’ll be able to move faster. You’ll be able to make more money with your high-speed, A.I.-driven algorithmic trading. You’ll be able to outcompete other players in your industry. Maybe you’ll be able to outcompete other countries.

[..] rather than relying on a moment of intelligence takeoff, it relies on something we understand much better, which is that we have an alignment problem, not just between human beings and computer systems but between human society and corporations, human society and governments, human society and institutions.

[..] I worry sometimes that the way the existential risk conversation goes, it frames it almost entirely as a technical problem when it isn’t. It’s, at least for a while, a coordination problem, and if we get the coordination problems right, we’re going to have a lot more leverage on the technical problems. [..]

[Karma] [..] this letter was calling for a six-month pause on any A.I. development more advanced than GPT-4. [..] We don’t like the incentives at play. We don’t know what we’re creating or how to regulate it. So we need to slow this all down to give us time to think, to reflect.

And so I’m wondering what you think of that effort and whether you think it’s the right approach.

[Klein] [..] if you do a pause and you don’t know what you’re doing with that pause, if that pause takes place — you do a six-month stop — then what?

[..] What are you doing with that time?

Otherwise, you just delay whatever’s going to happen by six months and maybe hand an advantage to worse actors out there. Although I want to be careful with this, because people hear “China,” and I am not sure China is actually a worse actor on this than we are right now. I think we are actually moving a lot faster, and we are using this specter of China to absolve ourselves of having to think about that at all.

[..] I am more inclined to say that what I want is, first, a public vision for A.I. I think this is, frankly, too important to leave to the market.

[..] when I say interpretability, I mean the ability to understand what the A.I. system is doing when it is making decisions, or when it is drawing correlations, or when it is answering a question. When you ask ChatGPT to summarize the evidence on whether or not raising wages reduces health care spending, it’ll give you something, but we don’t really know why or how.

So basically, if you try to spit out what the system is doing, you get a completely incomprehensible series of calculations. There is work happening on trying to make this more interpretable, trying to figure out where in the system a particular answer is coming from, or trying to make the system show more of its work. But that work, that effort, is way, way, way, way behind where the learning systems are right now. So we’ve gotten way better at getting an output from the system than we are at understanding what the inputs were that went into it, or at least what the mid-level calculations were that went into it.
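To make “incomprehensible series of calculations” concrete, here is a minimal sketch, not from the transcript: a toy two-layer network answers a question, and we print its intermediate activations. Every weight and name below is an invented stand-in; the point is only that the middle of the computation is a pile of unlabeled numbers.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy two-layer network standing in for "the system." All weights are
# random placeholders; a production model has billions of learned ones.
W1 = rng.normal(size=(8, 16))   # input  -> hidden
W2 = rng.normal(size=(16, 2))   # hidden -> output ("no" / "yes")

x = rng.normal(size=8)          # stand-in for an encoded question

hidden = np.tanh(x @ W1)        # the "mid-level calculations"
logits = hidden @ W2
answer = ["no", "yes"][int(np.argmax(logits))]

print("hidden activations:", np.round(hidden, 2))
print("answer:", answer)
# The hidden vector is 16 unlabeled numbers. Nothing in them says *why*
# the model answered the way it did; closing that gap, at a scale many
# orders of magnitude larger, is the interpretability problem.
```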

[..] it’s only going to take one of these systems causing some kind of catastrophe that people didn’t expect for a regulatory hammer to come down so hard it might break the entire industry. If you get a couple people killed by an A.I. system for reasons we can’t even explain, do you think that’s going to be good for releasing future A.I. systems? Because I don’t.

That’s one reason we don’t actually have driverless cars all over the road yet. Another is that these systems are being shaped, being constructed, to solve problems in the direction of profit. There are many different kinds of A.I. systems you can create, directed at many different purposes.

The reason we’re ending up seeing a lot of chatbots, a lot of systems designed to fool human beings into feeling like they’re talking to something human, is that this appears to people to be where the money is. You can imagine the money in A.I. companions, and there are startups like Replika trying to do that. You can imagine the money in mimicking human beings when you’re writing up a Word document, or a college application essay, or creating marketing decks, or whatever it might be.

So I don’t know that I think that’s great, actually. There are a lot of purposes you could turn these systems to that might be more exciting for the public. To me, the most impressive thing A.I. has done is still solving the protein-folding problem. That was AlphaFold, a program created by DeepMind.

What if you had a prize system, where we identified 15 or 20 scientific and medical innovations we want, problems we want to see solved by whatever means you can manage, and we think these are big enough that, if you solve one, you get a billion dollars? We’ve thought about prizes in other areas, but let’s put them into things that society really cares about.

Maybe that would lead more A.I. systems to be tuned not in the direction of fooling human beings into thinking they’re human but in the direction of solving important mathematical problems, or of speeding up whatever it is we might want to speed up, things like drug development. So that’s another place, the gap between the goals we actually have publicly and how you can make money off of this, where I would like to see some real regulation.

I don’t think you should be able to make money, just flatly, by using an A.I. system to manipulate behavior to get people to buy things. I think that should be illegal. I don’t think you should be able to feed surveillance-capitalism data into it, get it to know people better, and then try to influence their behavior for profit. I don’t think you should be allowed to do that.

Now, you might want to think about what that regulation actually reads like in practice because I can think of holes in that. But whether I’m right or wrong about those things, these questions should be answered. And at the end of that answering process, I think, is not a vision of less A.I. or more A.I. but a vision of what we want from this technology as a public.

And one thing that worries me is that the purely negative vision — let’s pause it, it’s terrifying, it’s scary — is not going to be that strong a message. And another thing that worries me is that Washington is going to fight the last war and try to treat this like it was social media: we wish we had had somewhat better privacy protections; we wish we had had somewhat better liability, maybe around disinformation, something like that.

But I don’t think just regulating the harms around the edges here, and I don’t think just slowing it down a little bit, is enough. I think you have to actually ask as a society, what are you trying to achieve? What do you want from this technology? If the only question here is what Microsoft or Google wants from the technology, that’s stupid.

That is us abdicating what we actually need to do.

So I’m not against a pause, but I am also not for pause being the message. I think that there has not been nearly enough work done on a positive public vision of A.I., how it is run, what it includes, how the technology is shaped, and to what ends we are willing to see it turned. That’s what I want to see. [..]

[Karma] [..] the most common kind of email we’ve gotten over the past few weeks is people really concerned about how A.I. is going to impact the labor market and, specifically, the kinds of knowledge-work jobs that tend to be done by folks with college degrees.

[..] there are really two levels to this that I’ve seen. One is financial: am I going to have a job? Will I be economically OK? But then also, on a deeper, more existential level, I think there’s a lot of concern, and I feel this, too, about what it would mean for me and for my life, my sense of self-worth, my purpose, my sense of meaning, to have A.I. systems be able to do my job better than I can.

[Klein] [..] there is a lot that doctors currently do that can be done perfectly well by nurses, and nurse practitioners, and physician assistants. But we have created regulatory structures that make that really hard. There are places where it’s incredibly hard just to become a haircutter because of the amount of occupational licensing you need to go through.

In many, many cases, we do not just let jobs get done by anyone. We do let some of them get outsourced, and we’ve done that in, obviously, a lot of cases. But again, think about telehealth and how many strictures are on that. Now we’re seeing a little bit more of it.

So I am skeptical that A.I. is going to diffuse through the economy in a way that leads to a lot of replacement as quickly as people think is likely, both because I don’t think the systems are going to prove, for a while, to be as good as they need to be for that, and because it’s actually very, very hard to catch hallucinations in these systems; I think the liability problems there are going to be a very big deal.

Driverless cars are a good example here, where there’s a lot they can do, but driverless cars are not just going to need to be as safe as human drivers to be put onto the road en masse. They’re going to have to be far, far, far safer. We are going to be — and I think we already are — less tolerant of a driverless car getting in a crash that kills a person than we are of a human being getting in a crash that kills a person.

And you could say, from a consequentialist perspective or a utilitarian perspective, maybe that’s stupid. But that is where we are. We see that already. And it’s a reason driverless cars still seem very far off.

We can have cars. I mean, they’re all around San Francisco. You have these little Waymo cars with their little hats running around. But they are not going to take over the roads any time soon because they need to be not 80 percent reliable, not 90 percent reliable, not 95 percent reliable, but like 99.99999 percent reliable.
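As a quick, purely illustrative piece of arithmetic (the fleet size here is an assumption, not a figure from the conversation), this is what those reliability levels would mean for a hypothetical fleet making a million trips a day:

```python
# Purely illustrative arithmetic: what different reliability levels imply,
# assuming a hypothetical fleet that makes 1 million trips per day.
trips_per_day = 1_000_000

for reliability in (0.80, 0.90, 0.95, 0.9999999):
    failing = trips_per_day * (1 - reliability)
    print(f"{reliability:.7%} reliable -> ~{failing:,.1f} failing trips per day")
```

At 95 percent, that is 50,000 failing trips a day; at seven nines, roughly one every 10 days. That is the scale of the gap being described.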

And these models are not there yet, and that’s going to be true for a lot of things. We’re not going to let one be a doctor. We might let it assist a doctor, but not if we don’t think the doctor knows how to catch the system when it’s getting something wrong. And as of now, given how these are being trained, I don’t see a path to their not getting enough wrong that we would be fully comfortable with them in high-stakes and, frankly, even a lot of low-stakes professions.

[..] Sam Altman, the head of OpenAI, said, to paraphrase, we’re all stochastic parrots. The point being that there’s this idea that these models are stochastic parrots: they parrot back what human beings would say, with no understanding.

And so then people turn and say, maybe that’s all we’re doing, too. Do we really understand how our thinking works, how our consciousness works? These are token-generating machines. They just generate the next token in a sequence: a word, an image, whatever.

We’re token-generating machines. How did I just come up with that next word? I didn’t think about it consciously. Something generated the token. [..]
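A minimal sketch of what “generating the next token” means mechanically, assuming a toy vocabulary and softmax sampling; the words and scores here are invented for illustration, not taken from any real model:

```python
import numpy as np

rng = np.random.default_rng(42)

# A toy vocabulary and made-up model scores (logits) for the word that
# should follow "The cat sat on the". A real model computes these scores
# from billions of learned weights; here they are hand-picked stand-ins.
vocab  = ["mat", "roof", "moon", "idea"]
logits = np.array([3.0, 1.5, 0.2, -2.0])

# Softmax turns scores into a probability distribution over the vocabulary.
probs = np.exp(logits) / np.exp(logits).sum()

# "Generating a token" is just sampling one item from that distribution.
next_token = rng.choice(vocab, p=probs)

print(dict(zip(vocab, np.round(probs, 3))))
print("The cat sat on the", next_token)
# Repeat this step, appending each sampled token to the sequence, and you
# have the whole "stochastic parrot": next-token prediction, one draw at
# a time, with no inner account of understanding anywhere in the loop.
```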

I think the kernel of profound truth in the A.I. dehumanization discourse is that we do dehumanize ourselves and not just in metaphors around A.I. We dehumanize ourselves all the time. We make human beings act as machines all the time.

We tell people a job is creative because we need them to do it. We need them to find meaning in it. But in fact, it isn’t. Or we tell them there’s meaning in it, but the meaning is that we pay them.

So this, I think, is more intuitive when we think about a lot of manufacturing jobs that got automated, where somebody was working as part of the assembly line. And you could put a machine on the assembly line, and you didn’t need the person.

[..] A lot of it [knowledge work] is rules-based, like a lot of the document drafting young lawyers do, and so on. We tell stories about it, but it is not the highest good of a human being to be sitting around doing that stuff. And it has taken a tremendous amount of cultural pressure from capitalism and other forces — from religion — to get people to be comfortable with that lot in life. [..]

If I tell you that my work in life is that I went to law school and now I write contracts for firms trying to take over other firms, well, if I make a bunch of money, you’d be like, great work. You really made it, man. [LAUGHS]

If that law degree came from a good school, and you’re getting paid, and you’re getting that big bonus, and you’re working those 80-hour weeks, fantastic job. You made it. Your parents must be so proud.

If I tell you that I spend a lot of time at the park, that I don’t do much in terms of the economy but I have a wonderful community of friends and spend a lot of time with them, it’s like, well, yeah, but when are you going to do something with your life? You’re just reading these random books all the time in coffee shops.

I think that, eventually, from a certain vantage point, the values of our current society are going to look incredibly sick. And at some point, in my thinking on all this, I do wonder if A.I. won’t be part of a set of technological and cultural shocks that leads to that kind of reassessment.”

Full article: Ezra Klein, The New York Times, April 9, 2023.