Doomers

Doomers looks at what AI means for the future

I wrote my play Doomers partly because, the night Sam Altman was fired, I was performing in a play called Zoomers.

Someone — I forget who — suggested the idea of Doomers as a joke, and I thought it was a good one. My method for some, if not all, of my plays over the past few years has been to take some kind of mimetic material — downtown, Gen Z, polyamory — and to find what is surprising or human inside the meme. I try to locate a universal story in what might otherwise seem like a surface-level idea that feels niche, obnoxious or both.

Sam Altman and the autistic tech world, in particular, represent opaque surfaces that I believe conceal something deeper. I wanted to write about “doomers,” about rationalists who believe we are creating and accelerating a technological apocalypse, because I didn’t understand who they were. I wanted to think through what it might feel like for enormously rich, successful, rational, nearly emotionless people to live out their apocalyptic rapture. What does it feel like to be one of the several thousand people who believe they have a privileged relationship with the end of civilization? Who believe they are working intimately for — or against — what they see as the most important technology in the history of the world?

What is it like to be rational about the most emotional thing possible — the eradication of life by rogue machines? In Sam Altman, OpenAI and the AI field at large, I saw new versions of Frankenstein and Faust — a real-life science-fiction story that crystallized nearly all the major themes of modernity and enlightenment in a story Goethe, Newton or da Vinci would recognize. In Doomers, I see the reductio ad absurdum of the Enlightenment: dare to know, dare to challenge all received truth, dare to irrevocably mess up the world, dare to put everybody out of a job, dare to create ontological shock, dare to create Skynet.

These are grotesque, absurd risks — but they are profitable and exciting. Sam Altman and Ilya Sutskever, Mira Murati and Greg Brockman — they have become celebrities, “rock stars,” as they refer to themselves in my play. Through my research — watching interviews, reading books and blogs, even conducting informal interviews with people in the AI world after word got out about my project — I found that philosophical sophistication in the rationalist and tech world is relatively low. Many who consider themselves, their friends or their colleagues geniuses often turned out to lack poetry, spiritual depth or any interest in opposing perspectives. They seem, in many ways, like absurd parodies or extreme outcomes of the Enlightenment mind.

AI doesn’t need to doubt the existence of the soul to be dangerous. It doesn’t matter if we prove that consciousness arises from something AI can never replicate. What matters is that AI itself can self-replicate, grow and follow imperatives. I think of it now as “viral intelligence” — unalive and yet lifelike, a viral agent capable of carrying out rational imperatives without feeling or sensation. A monster.

I discovered that while nearly everyone in the AI world had come up with statistical models or analytical inferences about the likelihood of doom, nobody seemed to possess humanistic or theological intuition about why they should, perhaps, back away. They exhibited no signs of understanding taboos, no feelings, no sense of transgression. What shocked me further — though I shouldn’t have been — was that most people driving AI development don’t think deeply about history, art, music or the evolution of human culture. They think in terms of numbers: how much money will this make, who will it empower or disempower, what are the statistical risks? There’s no fear or respect for chaos or for the possibility that AI development could be nonlinear, highly complex and unpredictable. The prevailing assumption is that chaos can be tamed and uncertainty reduced to negligible levels, and that technology is a tool capable of imposing rationality on everything.

The result of this is a comfort with, even a preference for, the idea of machines replacing humans. From a purely rational perspective, super-computers, AGI agents and similar systems might seem better at most cognitive tasks — more efficient, and perhaps without human flaws. This is, in fact, part of the founding philosophy of some companies like Google: that the ultimate point of technology is to build a higher life form that can leave humanity behind. There’s something noble about that idea — but it’s also evil, bleak and spiritually totalitarian. Bay Area rationalism is not humanism. For the most part, the people behind AI are not thinking about the qualitative aspects of life, about our quality of life. They are engaged in a potentially catastrophic competition to build more computers, reach artificial superintelligence, and harness themselves to godlike power. It’s essentially Jurassic Park without an island: AI, via the internet and smartphones (and one day neuroimplants) is everywhere. When the dinosaurs knock down the electronic fences they’ll walk right into our brains.

Writing Doomers has made it difficult not to become a doomer myself, to wish that civilization could pull back from the Faustian bargain companies like OpenAI are asking us to make. There’s ample room for writers to conceive of themselves again as poets, and to rebel against the desiccated and humorless rationalism of a community which doesn’t realize that it could historicize its own behavior and place the development of AI in a larger, historical context that goes back at least to alchemy and astrology. The new is not the new; the solutions it offers are not new concepts; the counterarguments to pushing reason and technological invention to their extremes are centuries old. Goethe, Blake, Mary Shelley, Samuel Butler, among other romantics, created powerful antecedent myths; poetry has been ahead of technology and technologists — warning us, urging us to accept our fragile, limited natures, wisely created by “nature or God.”

The striking aspect of the AI world, from what I’ve observed, is its relentlessness — the drive of people like Sam Altman to find out — to decide the argument about the probability of doom (“p-doom”) once and for all (even if “for all” is in fact doom). There’s a dark, obsessional, Faustian psychology to the field that I’m not sure industry leaders are aware of in themselves. Their profound shallowness, their lack of interest in intellectual history and in the greatest products of human thought, of the human soul, reflect the prevailing view that human beings are obsolete, surpassed by AGI and ASI.

AI thought leaders (Altman, Sergey Brin, Elon Musk, Dario Amodei, among others) all say the right things — and they may even believe the things they say — but their statements about humility and safety and caution belie the reality of the industry, which is accelerationist, driven by fierce, uncontrolled competition and opaque agreements with both the American and Chinese governments, and unchecked by the meaningful legislation which should represent the voice of the people, democracy.

Sometimes it looks as though the future of civilization is being decided by an invisible trust of unelected officials and founders and venture capitalists. The social contract has been altered, perhaps irretrievably, and ordinary people have not had a say. Arguably, this applies to almost all breakthrough technology — but I would say in hindsight that it has been clear since at least the iPhone that we plebs have to organize in meaningful ways to counteract addictive, disruptive and totalizing technologies which alter, surveil and manipulate us; AI is an exponential acceleration of an ongoing trend — it is, you might argue, the acceleration of dystopian trends from the last hundred or more years.

Ironically, I find large language models very useful tools; in the short term, they make life in the information economy much easier (they help deal with the externalities produced by previous technologies like the iPhone). ChatGPT and Claude and Gemini don’t look apocalyptic to the naked eye — they seem more like a very smart but absentminded tutor. If we froze these technologies where they are, their creators would have made a major contribution to civilization. They would essentially have made super-calculators and word-processors — the best version of the technological wave that began in the 1980s.

But we can see what’s on the other side of the wave: the dulling and dependency of the brain. Young people using ChatGPT to do their assignments will quite literally learn nothing; some do not even have the writing and critical thinking skills to properly prompt the AI themselves. I find that using LLMs appeals to my lizard brain; I start to think: “I can just prompt that; I don’t have to work it through myself.” There’s a point at which, using LLMs — as I have especially in preparation for writing Doomers — I feel my cognitive powers are extended. There follows one at which I feel a deep, structural idiocy setting in. If the worst-case scenario for AI is Skynet, the best-case scenario is WALL-E.

The question is whether the growth of these systems is exponential — or whether, in the end, the risks remain as unpredictable and chaotic as the systems themselves. As an artist, I not only take an interest but feel a duty to create a new kind of romantic myth that illuminates and explores the inner worlds, the anxieties and comic blindness, of a self-created elite which thinks it’s accelerating civilization on behalf of the rest of us. As a romantic and a humanist, I do not have the raw power, financial or technological, to stop the AI revolution, but I do, as do others, have the power to resist in the one domain we have control over: the imagination and the soul.

This article was originally published in The Spectator’s February 2025 World edition.
