This summer, two of the leading contenders in the great AI race have suddenly, alarmingly, declared that the endgame is in sight and that they’re now spending vast amounts of time and money to try to ensure that their own AIs beat the others.
What does winning mean? It means that their models (you know them perhaps as GPT, Claude and Gemini) first reach AGI (human-level intelligence), then superintelligence. No one quite knows what superintelligence will do (we're not smart enough), but it's clear that whoever owns the winning model will wield unimaginable power. They'll dominate the world. A new Alexander the Great.
The first to show his hand was Sam Altman, the chief executive and co-founder of OpenAI, a company he once shared with his former friend Elon Musk. Altman is the proud father of GPT and, in a blog post entitled "The Gentle Singularity," he addressed humanity in the manner of the world president he aspires to be: "We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence. We (the whole industry, not just OpenAI) are building a brain for the world… we have a lot of work in front of us, but most of the path in front of us is now lit and the dark areas are receding fast." He concluded: "May we scale smoothly, exponentially and uneventfully through superintelligence."
In July, just weeks after Altman's blog post, his contemporary, the Facebook billionaire Mark Zuckerberg, made a strikingly similar announcement. Zuckerberg signaled a major revamp of his AI operations, putting the company's collection of AI businesses and projects under the umbrella of a newly created organization called Meta Superintelligence Labs, or MSL. He said: "I believe this will be the beginning of a new era for humanity and I am fully committed to doing what it takes for Meta to lead the way."
Sam A and Zuck are focused on accelerating as fast as possible toward superintelligence, but another AI rival, Dario Amodei, chief executive of Anthropic, has a different motive. Anthropic was founded by former OpenAI employees, all desperately worried that Altman was moving too quickly and too carelessly. It's all very well saying superintelligence could solve humanity's problems, but will it want to? Will it care about humanity at all? Anthropic's raison d'être is to build a safer, more biddable AI. Its current answer is Claude, which competes directly with Altman's GPT.
Coming up on the inside is Gemini, the AI model built by a Brit, Sir Demis Hassabis, co-founder and chief executive of DeepMind, now owned by Google. DeepMind has traditionally focused on solving complex scientific problems such as protein folding (with AlphaFold) and has been less commercially aggressive than OpenAI or Meta. Hassabis worries that superintelligence could be dangerous if misaligned with human interests, yet he launched Gemini to compete directly with GPT and Claude. For all the talk of safety, DeepMind is just as ambitious as the rest.
Entirely unbothered by safety is China's DeepSeek. It is not, strictly, a government project (it was spun out of the hedge fund High-Flyer), but because it's China, DeepSeek has access to massive data pools, state-sponsored research initiatives and a frighteningly well-trained engineering workforce. Unlike its western counterparts, DeepSeek is, of course, tightly integrated with China's strategic priorities. The CCP absolutely understands, in a way US politicians struggle to, that national security and economic dominance will depend entirely on AI.
There are two dark horses: Elon Musk's relatively new xAI and Safe Superintelligence, run by another OpenAI refugee, the brilliant Ilya Sutskever. Sutskever was OpenAI's chief scientist and helped develop GPT. In November 2023, with several other OpenAI board members, Sutskever dramatically voted to oust Altman from his own company. What happened next remains unclear, but just a week later the unsinkable Altman bobbed back up to the surface, took control of OpenAI again and Sutskever stepped down, first from the board and then from the company. Very little is known about Safe Superintelligence, but everyone in the AI world agrees that it would be a mistake to bet against Ilya.
This article was originally published in The Spectator’s August 2025 World edition.