Much of the magic of Curb Your Enthusiasm comes from the show being plotted but not scripted. The direction of the conversation is agreed in advance, after which the cast — mostly stand-up comedians and hence naturally good at extemporizing — improvise the lines on the fly. This makes the show engagingly realistic even in the rare moments when it isn’t being funny.
In such “high-context” communication, there is always a side-channel alongside the words which determines their real meaning, whether through tone of voice, facial or hand gestures or shared knowledge. This is why policing language is so dangerous — it is too easy to strip words of their context. Everything becomes a version of the 1952 Derek Bentley case, where five words, “Let him have it, Chris” (which could mean either “fire the gun” or “hand it over”), can be framed to suit the prosecution. In such a world it is dangerous to use irony, metaphor, sarcasm, exaggeration or affectionate rudeness. This is why comedians, who depend on such things, mostly oppose restrictions on free speech.
Which brings me to ChatGPT and the question of the Turing Test, the rather arbitrary but interesting threshold for what might be described as computational intelligence, set by Alan Turing in 1950. This requires that in two open-ended conversations, one between a questioner and another human, the other between the questioner and a computer, the questioner is unable to tell which is which.
The obvious flaw in this test is that all of us know humans who fail the Turing Test. By which I mean you could read their transcribed words for many hours and still not be confident they weren’t produced by a machine.
Nonetheless, ChatGPT is remarkable. The ability almost instantly to repurpose and condense information into plausible, coherent sentences is impressive. It might make you believe it is human, though its extreme literal-mindedness leaves it a long way from convincing you it is British.
Against this, I must list a few criticisms. For one thing, it uses a canny psychological trick (known as the labor illusion) by delivering its answers one word at a time, like a fast teleprinter, which makes it seem much more impressive than if it were simply to vomit up a page of text instantaneously. It is unaware of anything that has happened since 2021, and so believes that “Nicola Sturgeon is sure to have a long future in politics.”
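The effect is easy to demonstrate. Here is a minimal sketch in plain Python (the `answer` string and the 40-millisecond delay are invented for illustration) that drip-feeds a pre-computed reply one word at a time, the way the ChatGPT interface does:

```python
import sys
import time

def stream_reply(text: str, delay: float = 0.04) -> None:
    """Print a pre-computed answer one word at a time.

    The full text already exists before the first word appears;
    the pause between words is pure theater (the labor illusion).
    """
    for word in text.split():
        sys.stdout.write(word + " ")
        sys.stdout.flush()   # show each word immediately
        time.sleep(delay)    # the teleprinter effect
    sys.stdout.write("\n")

answer = "Work that appears effortful is judged more valuable."
print(answer)          # dumped at once: looks cheap
stream_reply(answer)   # drip-fed: looks like thinking
```

The same sentence, delivered both ways, reads very differently: the second version appears to be working for you.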
Its probabilistic approach also means it’s not averse to making stuff up. For some deluded reason it believes that I was awarded an Order of the British Empire in 2018. Eh? A colleague in New York asked it to list academic papers in support of its findings, and it simply invented three papers which sounded believable but didn’t exist. It is also programmed to have a paranoid fear of anything resembling a right-wing opinion.
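The fabrication is a side effect of how the text is produced. A toy sketch of next-token sampling (the vocabulary and the probabilities below are entirely made up for illustration; this is not ChatGPT’s actual model) shows why “plausible” and “true” come apart:

```python
import random

# Invented toy distribution over continuations of
# "Rory Sutherland was awarded the ..."
next_token_probs = {
    "OBE": 0.4,        # sounds right, happens to be false
    "CBE": 0.3,
    "Nobel": 0.05,
    "nothing": 0.25,   # the true answer gets no special weight
}

def sample_next(probs: dict[str, float]) -> str:
    """Pick one token in proportion to its probability.

    Nothing here consults a fact; the sampler only asks which
    word sounds likeliest, never which word is correct.
    """
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

print("Rory Sutherland was awarded the", sample_next(next_token_probs))
```

A system built this way will confidently assert an honour, or a citation, whenever it is the statistically comfortable thing to say.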
Andrew Orlowski, writing in the Telegraph, brilliantly argues that the last thing a bureaucratic world needs is the ability to generate yet more text in huge quantities. He’s right. But I see another problem. There will be hundreds of social and professional situations where it will be necessary to prove that we ourselves wrote the words being sent rather than outsourcing them to ChatGPT. And — that Turing Test again — the only way to do this will be to use words ChatGPT won’t. As it explains: “I adhere to ethical and legal standards, and I will not generate content that is harmful, discriminatory, or offensive in nature or otherwise unethical.”
This means that, to send a letter or write an article without the suspicion it has been machine-generated, we’ll need to fill it with xenophobic right-wing profanities. So Fraser, you Jock bastard, here’s your 650 words for that hotbed of recusancy that is your magazine. Send the usual pittance to the Cayman account. Viva il Duce!
This article was originally published in The Spectator’s UK magazine.