The AI panic is overblown

It’s not hard to find cool uses of ChatGPT-4-enabled AI already

Hikari Azuma, a character for the Gatebox virtual home robot, is displayed on the business day of the Tokyo Game Show 2019 in Chiba, Japan (Getty)

I refuse to get an Amazon Alexa, and never use Siri, because I find the concept of human-style interactions with robots somewhere between the unhealthy and the grotesque. They are also almost always more hassle than they’re worth, because they don’t actually “understand” what you’re telling them.

But I don’t find them sinister, and find myself skeptical of the growing panic about AI since ChatGPT-4 launched in March. Two weeks ago, seventy-five-year-old British scientist Geoffrey Hinton made a dramatic exit from Google so that he could speak freely about the dangers of the technology he’d helped create. His fears seem to revolve around the “hive mind” function of AI, whereby everything one robot learns, they all learn. For Hinton it is too much knowledge, replicating too quickly, especially given that AI is being trained not just on language but on video.

“I’ve come to the conclusion that the kind of intelligence we’re developing is very different from the intelligence we have,” he said. “So it’s as if you had 10,000 people and whenever one person learned something, everybody automatically knew it. And that’s how these chatbots can know so much more than any one person.”


These seem like observations both garbled and obvious, which is odd coming from a man who knows, or knew, the technology so well. He seems genuinely scared of Google too, which, he suggests, is no longer a trustworthy “steward” now that Microsoft has released an AI-augmented version of its search engine Bing (in the real world, this translated into people saying Bing had suddenly become quite good, which made Google executives worry).

Lots of scientists are panicking, leaping on the apparent certainty that we are living in a world created by Frankenstein author Mary Shelley. In March, the august group of scientists at the Future of Life Institute penned an open letter, “Pause Giant AI Experiments: An Open Letter.” It has a fairly measly 27,565 signatures as I write, but with mass publicity it certainly upped the volume of the panic discourse.

“AI systems with human-competitive intelligence can pose profound risks to society and humanity,” they write. This comes on top of the dire warning from the Asilomar AI Principles, another Future of Life project: “Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.”

The word “could” recurs frequently in these panic stations, which is, of course, what panic — sometimes justified — is about. But what “could” hides is the substantial chance of “could not.” The rhetoric echoes the climate movement’s obsession with apocalyptic outcomes, such as the extinction of the human race, which serious scientists dismiss as a possible result of global warming as we now know it; there is similarly little evidence to suggest that AI is going to run amok. I want to hear how ChatGPT and the future of AI could be great: more along the lines, in other words, of what Sal Khan has recently said about how AI will transform education, giving “every student a personalized tutor.”

I want to hear more thought about how humans are likely to control AI — not how we “could” be taken over by evil monsters created by our own hand. If we could come up with the blueprint for AI, we can come up with the safeguards too. I am more inclined to side with the clever clogs at the Warp Institute, who wrote a “Pause AI Doomster Pessimism” open letter in response.

It’s not hard to find cool uses of ChatGPT-4-enabled AI already. Take Hikari Azuma, an anime-style chatbot who looks like a girl fairy, has a high-pitched voice and lives in a hologram. She keeps many men company; and now, with her advanced language skills, she has become so delectable that some Japanese men have married her. I watched the promotional video of Hikari in action, and while it was queasy-making in just the ways you would expect — not least because these “wives” look like schoolgirls — there was also something touching about it: these lonely bachelors feeling “looked after.” Better than one grim alternative for isolated men who can’t cope with flesh-and-blood women: becoming an incel.

I confess I began to imagine the tables turning: the perfect hologram husband. Gorgeous, lithe, adorable, part assistant, part best friend, part therapist. Maybe someone — or thing — to fantasize about, which would be no worse than many sexual kinks indulged in by men legally and regularly. OK, “he” wouldn’t be real, but that doesn’t mean he couldn’t brighten my day and simulate a feeling of psychological succor — best of all, he could revolutionize my personal admin. 

I find Alexa and Siri more trouble than they’re worth, and creepy, but it seems to me their problem is only partly their imitation of a servant; it is more a failure of the technology to expand in ways that would make them not necessarily unstoppable, but irresistible.

This article was originally published on The Spectator’s UK website.
