How ChatGPT Surprised Me

“like all A.I. systems, it [GPT-5] degrades as a conversation continues or as the chain of tasks becomes more complex (although in two years, the length of time it can sustain a given task has gone from about five minutes to over two hours). This is the first A.I. model where I felt I could touch a world in which we have the always-on, always-helpful A.I. companion from the movie “Her.” [..]

In their paper “A.I. as Normal Technology,” Arvind Narayanan and Sayash Kapoor, both computer scientists at Princeton, argue that the external world is going to act as “a speed limit” on what A.I. can do.

In their telling, we shouldn’t think of A.I. as heralding a new economic or social paradigm; rather, we should think of it more like electricity, which took decades to begin showing up in productivity statistics. They note that GPT-4 reportedly performs better on the bar exam than 90 percent of test takers, but it cannot come close to acting as your lawyer. The problem is not just hallucinations. The problem is that lawyers need to master “real-world skills that are far harder to measure in a standardized, computer-administered format.” For A.I.s to replace lawyers, we would need to redesign how the law works to accommodate A.I.s.

[..] intelligence does not smoothly translate into power. Human beings, for most of our history, were just another animal. It took us eons to build the technological civilization that has given us the dominion we now enjoy, and we stumbled often along the way. Some of the smartest people I know are the least effectual. Trump has altered world history, though I doubt he’d impress anyone with his SAT scores.

Even if you believe that A.I. capabilities will keep advancing — and I do, though how far and how fast I don’t pretend to know — a rapid collapse of human control does not necessarily follow. I am quite skeptical of scenarios in which A.I. attains superintelligence without making any obvious mistakes in its effort to attain power in the real world.

At the same time, a critique made by Scott Alexander, one of the authors of “A.I. 2027,” has also stuck in my head. A central argument of “A.I. as Normal Technology” is that it is hard for new technologies to diffuse through firms and bureaucracies. We are decades into the digital revolution, and I still can’t easily port my health records from one doctor’s office to another’s. It makes sense to assume, as Narayanan and Kapoor do, that the same frictions will bedevil A.I.

And yet I am a bit shocked by how even the nascent A.I. tools we have are worming their way into our lives — not by being officially integrated into our schools and workplaces but by unofficially whispering in our ears. The American Medical Association found that two in three doctors are consulting with A.I. A Stack Overflow survey found that about eight in 10 programmers already use A.I. to help them code. The Federal Bar Association found that large numbers of lawyers are using generative A.I. in their work, and it was more common for them to report they were using it on their own rather than through official tools adopted by their firms. It seems probable that Trump’s “Liberation Day” tariffs were designed by consulting a chatbot.

[..] search is A.I.’s gateway drug. After you begin using it to find basic information, you begin relying on it for more complex queries and advice. Search was flexible in the paths it could take you down, but A.I. is flexible in the roles it can play for you: It can be an adviser, a therapist, a friend, a coach, a doctor, a personal trainer, a lover, a tutor. In some cases, that’s leading to tragic results. But even if the suicides linked to A.I. use are rare, the narcissism and self-puffery the systems encourage will be widespread. Almost every day I get emails from people who have let A.I. talk them into the idea that they have solved quantum mechanics or breached some previously unknown limit of human knowledge. [..]

In the “A.I. 2027” scenario, the authors imagine that being deprived of access to the A.I. systems of the future will feel to some users “as disabling as having to work without a laptop plus being abandoned by your best friend.” I think that’s basically right. I think it’s truer for more people already than we’d like to think. Part of the backlash to GPT-5 came because OpenAI tried to tone down the sycophancy of its responses, and people who’d grown attached to the previous model’s support revolted.

As the now-cliché line goes, this is the worst A.I. will ever be, and this is the smallest number of users it will ever have. The dependence of humans on artificial intelligence will only grow, with unknowable consequences both for human society and for individual human beings. What will constant access to these systems mean for the personalities of the first generation to use them starting in childhood? We truly have no idea. My children are in that generation, and the experiment we are about to run on them scares me.

I don’t know whether A.I. will look, in the economic statistics of the next 10 years, more like the invention of the internet, the invention of electricity or something else entirely. I hope to see A.I. systems driving forward drug discovery and scientific research, though I am not yet certain they will. But I’m taken aback at how quickly we have begun to treat its presence in our lives as normal. I would not have believed in 2020 what GPT-5 would be able to do in 2025. I would not have believed how many people would be using it, nor how attached millions of them would be to it.

But we’re already treating it as borderline banal — and so GPT-5 is just another update to a chatbot that has gone, in a few years, from barely speaking English to being able to intelligibly converse in virtually any imaginable voice about virtually anything a human being might want to talk about at a level that already exceeds that of most human beings. In the past few years, A.I. systems have developed the capacity to control computers on their own — using digital tools autonomously and effectively — and the length and complexity of the tasks they can carry out are rising exponentially.

I find myself thinking a lot about the end of the movie “Her,” in which the A.I.s decide they’re bored of talking to human beings and ascend into a purely digital realm, leaving their onetime masters bereft. It was a neat resolution to the plot, but it dodged the central questions raised by the film — and now in our lives.

What if we come to love and depend on the A.I.s — if we prefer them, in many cases, to our fellow humans — and then they don’t leave?”

Full editorial: E. Klein, The New York Times, Aug. 24, 2025