
What happens when AI surpasses humans?

Old technology doesn’t disappear because it’s bad: it disappears because it’s outcompeted


I recently sat down to dinner with some very smart economists. I am the chief executive of an artificial intelligence company and so the conversation swiftly turned to the value of AI for the economy.

The economists had many interesting things to say, both about the advantages of AI adoption and about the displacement effects on jobs. But about halfway through the dinner, another AI chief executive offered an opinion that struck me. He said: “I can’t quite articulate it, but I have a sense that what you are measuring with your GDP analysis is not what I care about. You treat this like an economic question. But it’s more like a geopolitical question.” 

At a gut level, I knew exactly what he meant – but I also couldn’t clearly state the distinction. And now I’ve realized why.

Politicians of all stripes have grown comfortable comparing AI to the printing press or the internet. It’s a convenient metaphor. A great leap forward, some jobs lost, some disruption along the way, but ultimately a triumph of human progress.

But that could be very wrong. Imagine a super-intelligent alien landed in Washington, DC, and it was immediately obvious to everyone that it was far smarter and more capable than a human. Would your first thought be, “I wonder how much GDP per capita will increase over the next few decades?” Or would you be more worried about security and safety? Who controls our society if a more intelligent being has arrived? How can I protect my family? Are we safe?

While the economists (see Michael Lind on p18) are possibly right, and AI could just turn out to be another tool, there’s also a chance that it won’t. AI might not be anything at all like the printing press or the invention of electricity or the birth of the internet. It could be something fundamentally different – something that surpasses us and becomes unpredictable.

Given this, you might think we should just stop developing and accelerating AI. But treating the entire field of AI as a unified whole is like asking whether biology is good or bad. It doesn’t really make sense. There’s a huge distinction between building cutting-edge systems that we don’t understand and can’t control – artificial general intelligence (AGI) – and building other systems that we do understand and can control – narrow AI. I worry about the consequences of the more general systems.

To frame things another way, game theorists sometimes talk about “strict domination” – a point at which one option is better than another in every respect. We’ve seen it throughout history: stone knives replaced by bronze, horses replaced by cars, discrete transistors replaced by integrated circuits. The old technology doesn’t disappear because it’s bad: it disappears because it’s outcompeted.
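To make the term concrete, here is a minimal illustrative sketch in Python, with invented numbers rather than anything measured: strict dominance simply means one option scores better than another on every dimension, not merely on average.

```python
# Toy illustration of "strict dominance" (hypothetical numbers).
# Option a strictly dominates option b if it is better in every respect.

def strictly_dominates(a, b):
    """Return True if option a beats option b on every dimension."""
    return all(x > y for x, y in zip(a, b))

# Invented scores for two technologies across three dimensions
# (speed, range, carrying capacity).
car = [9, 8, 7]
horse = [3, 4, 2]

print(strictly_dominates(car, horse))   # True: the car wins on every dimension
print(strictly_dominates(horse, car))   # False: the horse wins on none
```

Once one option strictly dominates, there is no trade-off left to weigh – which is why the dominated technology simply disappears.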

It’s important to realize that the human brain is, in a sense, a technology itself. Imagine a new general intelligence, running on silicon, millions of times faster than the human brain. Once that system exists, arguably the most important variable will be what goals it’s pursuing and whether those goals include us. This is a world where the human brain itself is strictly dominated. At that moment, it becomes redundant for creating most economic value. You’d no more ask a person to write a legal brief than you’d use a horse to travel to France, or a stone knife to cut steak. There are certain tasks – live music, professional sports or chess – that will retain their value because their value doesn’t depend on their economic efficiency. So this scenario doesn’t imply no jobs. More likely, there would be a strange (perhaps better?) subset of today’s jobs.

We don’t know exactly when the human-AI crossover will occur. In principle, it might never happen, or it could be decades away. But some of the best minds in the field – people such as Geoffrey Hinton, Dario Amodei and Sam Altman – have said publicly that they think it could be soon, perhaps by 2030. And when experts tell you that a species-level threat may be near, and that they might be partially responsible for creating it, I think it’s worth paying attention.

As someone who builds (narrow) AI systems and advises governments and businesses on how to use them, I find myself constantly struck by how little real understanding there is. We might be charging into the most consequential technological transformation in human history and, while that is superficially acknowledged, there’s really very little being done about it. You might think that politicians who grasp the significance of AI would immediately and urgently want to change their policy priorities. But so far, they haven’t.

Having said that, I don’t think there is an obvious answer. Technological progress is one of the great strengths of the West. It’s helped build our economies, fueled innovation and improved the lives of billions. Other countries, particularly China, are investing heavily in AI and will make progress. And winning in this domain matters for one reason: power. As AI systems grow more capable, the people and institutions that control them will gain unprecedented leverage. They will shape economies and decide who gets access to knowledge, to resources, to opportunity. This was why DeepSeek was an important moment. Its geopolitical significance, demonstrating that Chinese companies were closer to the leading US companies than expected, considerably outweighed its significance as a technological breakthrough.

So, to me at least, it seems clear that simply sitting this out means that you will be on the receiving end of the whims of actors who do engage in the race. Currently, that race is between a handful of companies focused on building the first systems to reach general intelligence. OpenAI, with its first-mover advantage in ChatGPT. Google, powered by the genius of DeepMind and funded by its search income. Anthropic, hoping to differentiate itself by creating safer models. Meta, historically favoring a more open approach and spending tens of billions to catch up. And the Chinese companies: DeepSeek, Huawei, ByteDance etc.

Without our own powerful AI, there is a very real chance that our future could be written not by our governments but in Silicon Valley or by the politburo in Beijing. This is the core complexity of the situation. Not taking part seems bad. Taking part seems bad. What should we do? Fortunately, I’m not a politician, so I don’t have to solve these questions at a more abstract level. But I am a father. And that forces me to answer the question: what kind of world am I preparing my two-year-old son for? It’s already clear that my son will experience enormous benefits from AI, if it’s done safely.

At school, he will have AI tutors providing round-the-clock access to personalized tuition. If he becomes ill, AI will be capable of outperforming doctors at diagnosis. It’s even possible that we could cure most diseases before he reaches adulthood. Road deaths will probably be a thing of the past by then as well. And if he gets a job, he won’t do the repetitive, labor-intensive, document-heavy work AI excels at. Instead, he might have a profession that is more natural and appealing than conventional office work: an artisan, perhaps, or a live performer.

But I do not want to sleepwalk into a world where his mind is no longer the most important asset he possesses. Nor do I want a world in which human agency is slowly designed out of the system. AI might turn out to be another tool. But it might turn out to be an alien intelligence. Then, despite what the economists think, its development would not be another chapter in the story of human progress. It would be a new book altogether. We must think and act accordingly.

This article was originally published in The Spectator’s August 2025 World edition.
