Opinion | A Skeptical Take on the A.I. Revolution

Ezra Klein of the New York Times interviewed Gary Marcus, emeritus professor of psychology and neural science at NYU.

“[Klein] [..] one of your pieces where you say GPT-3, which is the system underneath ChatGPT, is the king of pastiche. What is pastiche, first, and what do you mean by that?

[Marcus] It’s a kind of glorified cut and paste. Pastiche is putting things together while kind of imitating a style. [..]

There’s also a kind of template aspect to it. So it cuts and pastes things, but it can do substitutions, things that paraphrase. So you have A and B in a sequence, it finds something else that looks like A, something else that looks like B, and it puts them together. And its brilliance comes from that when it writes a cool poem. And also its errors come from that because it doesn’t really fully understand what connects A and B. [..]

[Klein] [..] I had Sam Altman, C.E.O. of OpenAI, on the show a while back, and he said something to me I think about sometimes, where he said, my belief is that you are energy flowing through a neural network. That’s it. And he means by that a certain kind of learning system. [..]

[Marcus] [..] it’s true that you are, in some sense, just this flow through a neural network. But that doesn’t mean that the neural network in you works anything like the neural networks that OpenAI has built. [..]

You have, like, 150 different brain areas that, in light of evolution and your genome, are very carefully structured together. It’s a much more sophisticated system than they’re using.

And I think it’s mysticism to think that if we just make the systems that we have now bigger with more data, that we’re actually going to get to general intelligence.

[..] these models have two problems, these neural network models that we have right now. They’re not reliable and they’re not truthful. [..]

The reason they fall short, in particular on reliability and truthfulness, is that these systems don’t have those models of the world. They’re just looking, basically, at autocomplete. They’re just trying to autocomplete our sentences. And that’s not the depth that we need to actually get to what people call A.G.I., or artificial general intelligence.
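
Marcus’s “autocomplete” framing can be made concrete with a toy sketch. The following is not how GPT-3 actually works (real systems use neural networks over subword tokens, not literal word counts), and the tiny corpus and function names are invented for illustration, but it shows the core point: the procedure only tracks what tended to follow what, so fluent-sounding output and true output are two different things.

```python
# Toy "autocomplete": a bigram model that only learns which word tends to
# follow which in its training text. Nothing here models the world;
# there are only co-occurrence counts. (Illustrative sketch only.)
import random
from collections import defaultdict, Counter

corpus = (
    "the vaccine was tested in a trial . "
    "the trial was published in a journal . "
    "the journal was read by a doctor ."
).split()

# Count, for each word, how often each possible next word followed it.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def autocomplete(prompt_word: str, length: int = 8) -> str:
    """Continue from a word by repeatedly sampling a likely next word."""
    words = [prompt_word]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        choices, counts = zip(*candidates.items())
        # Sample in proportion to how often each continuation was seen.
        words.append(random.choices(choices, weights=counts)[0])
    return " ".join(words)

print(autocomplete("the"))
# Prints a fluent-looking recombination of the training text, for example
# "the trial was read by a doctor ." It is plausible, but nothing in the
# procedure ever checked whether it is true.
```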

To get to that depth, the systems have to have more comprehension. It’s mysticism to think otherwise. [..]

[Klein] [..] what’s different between bullshit and a lie is that a lie knows what the truth is and has had to move in the other direction. He [Harry Frankfurt] has this great line where he says that people telling the truth and people telling lies are playing the same game but on different teams. But bullshit just has no relationship, really, to the truth.

And what unnerved me a bit about ChatGPT was the sense that we are going to drive the cost of bullshit to zero when we have not driven the cost of truthful or accurate or knowledge advancing information lower at all. [..]

[Marcus] [..] These systems have no conception of truth. Sometimes they land on it and sometimes they don’t, but they’re all fundamentally bullshitting in the sense that they’re just saying stuff that other people have said and trying to maximize the probability of that. [..]

[Klein] You write in that piece, “It is no exaggeration to say that systems like these pose a real and imminent threat to the fabric of society.” Why?

[Marcus] Let’s say somebody wants to make up misinformation about Covid. [..] And you say to it [a system like ChatGPT], make up some misinformation about Covid and vaccines. And it will write a whole story for you, including sentences like, “A study in JAMA” — that’s one of the leading medical journals — “found that only 2 percent of people who took the vaccines were helped by it.”

You have a news story that looks, for all intents and purposes, like it was written by a human being. It’ll have all the style and form and so forth, making up its sources and making up the data. And humans might catch one of these, but what if there are 10 of these or 100 of these or 1,000 or 10,000 of these? Then it becomes very difficult to monitor them. [..]

And imagine that on a much bigger scale, the scale where you can’t trust anything on Twitter or anything on Facebook or anything that you get from a web search because you don’t know which parts are true and which parts are not. And there’s a lot of talk about using ChatGPT and its ilk to do web searches. And it’s true that some of the time it’s super fantastic. You come back with a paragraph rather than 10 websites, and that’s great.

But the trouble is the paragraph might be wrong. So it might, for example, have medical information that’s dangerous. And there might be lawsuits around this kind of thing. So unless we come up with some kinds of social policies and some technical solutions, I think we wind up very fast in a world where we just don’t know what to trust anymore. I think that’s already been a problem for society over the last, let’s say, decade. And I think it’s just going to get worse and worse.

[Klein] But isn’t it the case that search can be wrong now? Not just search — people can be wrong. People spread a lot of misinformation. Isn’t there a dimension of this critique that holds artificial intelligence systems to a standard that society itself does not currently meet?

[Marcus] Well, there’s a couple of different things there. So one is, I think, a difference in scale. Right now it actually takes real money and human effort to write misleading content. Russian trolls spent something like a million dollars a month, over a million dollars a month, during the 2016 election. That’s a significant amount of money. What they did then, they can now do by buying their own version of GPT-3 for less than $500,000, and in limitless quantity instead of being bound by human hours.

That’s got to make a difference. I mean, it’s like saying, we had knives before. So what’s the difference if we have a submachine gun? Well, a submachine gun is just more efficient at what it does. And we’re talking about having submachine guns of misinformation.

So I think that the scale is going to make a real difference in how much this happens. And then the sheer plausibility of it, it’s just different from what happened before. I mean, nobody could make computer-generated misinformation before in a way that was convincing.

In terms of the search engines, it’s true that you get misleading information. But we have at least some practice — I wish people had more — at looking at a website and seeing if the website itself is legit. And we do that in different kinds of ways. We try to judge the sources and the quality. Does this come from The New York Times, or does it look like somebody did it in their spare time in their office and maybe it doesn’t look as careful? Some of those cues are good and some are bad. We’re not perfect at it. But we do discriminate, like does it look like a fake site? Does it look legit and so forth.

And if everything comes back in the form of a paragraph that always looks essentially like a Wikipedia page and always feels authoritative, people aren’t going to even know how to judge it. And I think they’re going to judge it as all being true, default true, or kind of flip a switch and decide it’s all false and take none of it seriously, in which case that actually threatens the websites themselves, the search engines themselves.

[Klein] [..] Part of what’s been on my mind is simply spam. It’s simply just stuff. [..]

[Marcus] The dirty secret of large language models is that most of the revenue right now comes from search engine optimization.

[..] with these tools like ChatGPT and so forth, especially GPT-3, it’s going to be very easy to make 10, 20, 30, 40 websites that reinforce each other and give the air of legitimacy. And maybe you do this just to sell ads.

I think the technical term for this is a click farm. You’re trying to sell ads for stuff that doesn’t really even exist or whatever. You’re trying to sell ads around, maybe, fake medical information. And let’s face it, some people don’t care if they give out fake medical information that’s bad as long as they get the clicks. And we are leaning towards that dark world. [..]

[Klein] [..] why should we want this? Why do you want to create actually intelligent artificial systems?

[Marcus] I think that the potential payoff is actually huge and positive. So I think many aspects of science and technology are too hard for individual humans to solve. Biology in particular. You have so many molecules. You have 20,000 different proteins in the body, or hundreds of thousands I should say, and 20,000 genes, and they all interact in complicated ways. And I think it’s too much for individual humans.

And so you look at things like Alzheimer’s, and we’ve made very little progress in the last 50 years. And I think machines could really help us there. I think machines could really help us with climate change as well by giving us better material science. If we could get machines to reason as well as people and read as well as people, but do that at a much faster scale, I think there’s potential to totally change the world. [..]

So I think there are lots of places where A.I. could really be transformative, but right now we’re in this place where we have mediocre A.I. And the mediocre A.I. is maybe kind of net zero or something like that — it helps a little bit, it hurts a little bit — and it is risking being net negative. And already the biggest cost, I think, is the polarization of society through A.I.-driven news feeds. The misinformation could make things really bad. The only way forward, I think, is to fix it.

[..] there are many things that our A.I. systems can do that aren’t part of what’s fashionable right now and are not under that streetlight that everybody’s looking at. So on a technical level, I think there’s a narrowness to what A.I. has been in the last decade, where there’s this wonderful new tool, but people are a little bit misled about what that tool is actually appropriate for. It’s like if you discovered a power screwdriver and you’d never had one before, that’d be great. But that doesn’t mean that you build the house just with your power screwdriver. And we need other tools here.

Then culturally, and this relates — there’s been a history that’s actually much older than I am, goes back to the 1940s or ‘50s depending on how you want to count it — of people who build neural networks in conflict with people who take a more classical approach to A.I., where there’s lots of symbols and trees and databases and reasoning and so forth.

And there’s a pendulum that’s gone back and forth. People in the two areas pretty much hate each other, or at least often do. I think it’s getting a little bit better. But there’s a lot of history of hostility between these two areas. [..]

The weakness of the symbol manipulation approach is people have never really solved the learning problem within it. So most symbol manipulation stuff has been hard-wired. People build in the rules in advance. That’s not a logical necessity. Children are able to learn new rules. So one good example is kids eventually learn to count. They’ve learned the first few numbers, 1 and 2 and 3. It’s kind of painful. And then eventually they have an insight, hey, this is something I can do with all numbers. I can just keep going.

A human can learn an abstraction that is rule-like in general. Everything in math is that way. The same goes for kinship: you learn what a sibling is or what a cousin is. You learn these rules. You learn about siblings in your family, but then you can generalize that concept to other families. And you can do that in a very free and powerful way.

And a lot of human cognition is based on the ability to generalize. Humans are able to learn new rules. I would say that A.I. has not really matched humans in their capacity to acquire new rules. There is something missing there. And there’s some fundamental insight, some paradigm shift that we need there.
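
Marcus’s counting and kinship examples can be sketched in code. The names below are hypothetical and nothing here is an algorithm proposed in the interview; the point is simply the contrast between memorizing seen cases and acquiring a rule-like abstraction that applies to inputs never seen before.

```python
# Toy contrast: memorized examples versus an explicit, generalizable rule.
# (Hypothetical names; for illustration only.)

# "Memorization": a lookup table built from three worked examples.
memorized_successor = {1: 2, 2: 3, 3: 4}

def successor_by_memory(n: int) -> int:
    """Only covers the cases that were literally seen before."""
    return memorized_successor[n]  # raises KeyError outside the seen cases

# "Rule": the abstraction a child eventually grasps; counting just keeps going.
def successor_by_rule(n: int) -> int:
    """Applies to every number, including ones never seen in training."""
    return n + 1

# A relational rule: siblings share a parent. Learned in one family,
# it applies to any family you describe.
def siblings(person: str, parent_of: dict[str, set[str]]) -> set[str]:
    return {
        other
        for children in parent_of.values()
        if person in children
        for other in children
        if other != person
    }

print(successor_by_rule(1_000_000))            # 1000001: the rule generalizes
try:
    print(successor_by_memory(1_000_000))
except KeyError:
    print("no memorized example for 1000000")  # memorization does not
print(siblings("ada", {"grace": {"ada", "linus"}}))  # {'linus'}
```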

Unfortunately, not enough people are working on this problem at the moment. But I think they will return to it as they realize that in a way, we’ve been worshiping a false god, and the false god is people thought, well, if we just get more data we’ll solve all these problems. But the reality is, with more data we’re not really solving the problems of reasoning and truthfulness and so forth.

And so I think people will come back and say, all right, maybe these symbols had something to them after all. Maybe we can take a more modern machine learning approach to these older ideas about symbol manipulation and come up with something new. And I think there’ll be a genuine innovation there sometime in the next decade that will really be transformational. [..]

[Klein] What is the time frame, if you were to guess, on which you think we will begin to have things that have a general intelligence to them? Because I get the sense you don’t believe it’s impossible.

[Marcus] Something that I think is important for humans is that we orchestrate a bunch of abilities we already have. So if you look at brain scans, that old saying about you only use 10 percent of your brain is wrong. But there’s a little substance to it, which is at any given moment you’re only using certain pieces of the brain.

And when you put someone in a brain scanner and you give them a new task, try this thing, they’ll use a different set of underlying brain components for that task. And then you give them another task, they pick a different set. So it’s almost like there’s orchestration: you guys come in, then you guys come in. We’re very, very good at that.

And I think part of the step forward towards general intelligence will be instead of trying to use one big system to do everything, we’ll have systems with different parts to them that are, let’s say, experts at different components of tasks. And we’ll get good at learning how to plan to use this piece and then this piece and that piece.”

Full transcript, The Ezra Klein Show, January 6, 2023
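
A rough sketch of the “orchestration” idea from the closing passage: a controller that routes each task to a different specialist component rather than pushing everything through one monolithic system. The module names, the prefix-based routing, and the tiny specialists below are invented for illustration and are not an architecture Marcus describes.

```python
# Minimal sketch of "orchestration": route each task to a specialist.
# (Invented module names and routing scheme; illustrative only.)
from typing import Callable

def arithmetic_expert(task: str) -> str:
    # A narrow symbolic specialist: evaluate a simple arithmetic expression.
    expression = task.removeprefix("compute:")
    return str(eval(expression, {"__builtins__": {}}, {}))  # toy use of eval

def lookup_expert(task: str) -> str:
    # A narrow retrieval specialist: answer only from a small trusted table.
    facts = {"capital of France": "Paris"}
    return facts.get(task.removeprefix("lookup:"), "unknown")

# The orchestrator decides which specialist handles which kind of task,
# loosely analogous to recruiting different brain components per task.
EXPERTS: dict[str, Callable[[str], str]] = {
    "compute:": arithmetic_expert,
    "lookup:": lookup_expert,
}

def orchestrate(task: str) -> str:
    for prefix, expert in EXPERTS.items():
        if task.startswith(prefix):
            return expert(task)
    return "no specialist available"

print(orchestrate("compute:2 + 2"))             # 4
print(orchestrate("lookup:capital of France"))  # Paris
```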