We’ve long known that computers can beat us at chess, so does it matter if they have started to beat us at more verbal and collaborative games such as Diplomacy? It certainly does, and it suggests a future in which artificial intelligence may play a growing role across the whole spectrum of international affairs, from crafting communiqués to resolving disputes and analyzing intelligence briefings.
Diplomacy, a strategic board game that was a favorite of both Henry Kissinger and John F. Kennedy, is set in Europe before World War One. The objective is to gain control of at least half the board by negotiating alliances via private one-to-one conversations. There are no binding agreements, so players can misrepresent their plans and double-deal. To play, let alone win, requires not only the capacity to understand the other players’ motivations but also the ability to negotiate with them in a natural and flexible way, eliciting their trust only to betray it at the right moment. It’s a game of guile, not mechanics.
So Cicero, an AI developed at Mark Zuckerberg’s Meta that plays Diplomacy better than 90 percent of human players, demonstrates that computers can learn how to talk, understand and scheme like the rest of us. Of course, Diplomacy is just a game, but it exercises the core skills needed for real diplomacy, as well as its shadowy younger brother, intelligence. It suggests that AI, already being presented as the next big thing in war-fighting, may also play its part in peacemaking.
AI systems simulate learning, problem-solving and decision-making processes that have hitherto been the preserve of human intelligence. Already they are in widespread use, from high-frequency stock-trading platforms to the speech-recognition systems on our phones. But it is one thing to crunch numbers very fast and quite another to second-guess, engage with and understand human intent.
The company OpenAI recently unveiled ChatGPT, a chatbot built on a large language model and trained to converse like a human. You can find it online and ask it to draft anything from a history essay to a marriage proposal. For a while it was a phenomenon, with journalists claiming that it could even replace journalists. But its limitations soon became clear, not least a pedestrian sameness to its replies. What, though, did ChatGPT have to say when asked whether AI could revolutionize international diplomacy?
Its answers all revolved around improved analysis and decision-making: “AI algorithms could be used to analyze vast amounts of data and provide insights and recommendations on complex diplomatic issues,” ChatGPT replied. “This could help diplomats make more informed and strategic decisions and improve their ability to navigate complex international situations.”
Analysis is indeed one of the two main areas in which, for the present, AI is beginning to be employed. The other is routine drafting. It has long been normal for machines to take over dull, repetitive work; it started with stamping out identical components, then assembling cars. So why not automate some of the boilerplate messaging that is part of diplomacy? Official condolences and congratulations, speeches at the opening of a cultural center here or a graduation there: all these need to be fairly standard, yet personalized enough not to be insultingly so. This is precisely the kind of thing at which AI excels. A British diplomat in Washington enthused to me about ChatGPT: “The amount of time this could have saved me in my career, finding slightly different ways of saying the same damn thing…”
Hard-pressed analysts, scarcely able to cope with the sheer volume of information available to them, can also use AI to look for correlations, hunt out the anomalies that merit human attention or even trace sources back to their origins. In the late 1990s, I was briefly attached to what was then the UK Foreign and Commonwealth Office’s research analysts department. I could spend all day reading everything from media stories to intelligence materials without writing a word or giving a single briefing, and still not catch up with everything available. The situation in today’s data-saturated world is even more extreme.
There has never been more to digest. As individuals, whether we like it or not, we live in a surveillance society. Smartphones pinpoint our locations, bank card transactions reveal our indulgences and cameras watch our movements. It has never been harder for someone to hide something, and this also applies to governments. Consider how the open-source sleuths of Bellingcat used leaked Russian databases to identify the would-be assassins who tried to poison Sergei Skripal in Salisbury. Their challenge was not so much acquiring the information as processing it all. AIs have the untiring speed of a computer but also the capacity to learn to make the kind of analytic leaps we would call human intuition. Before their cybernetic gaze, even states may become naked.
Already, AI is tiptoeing into the world of intelligence. MI5, for example, has for the past five years been working with the Alan Turing Institute (named after the man who was effectively the father of the concept of AI) on unspecified projects. MI5 has said that the UK faces a “range of threats, with the clues hidden in ever more fragmented data,” so it is turning to AI to help look for those clues. This will likely include the use of big-data analysis and voice-recognition systems to track suspected terrorists and foreign spies.
In the pages of this magazine, Henry Kissinger himself raised the specter of “autonomous weapons… capable of defining, assessing and targeting their own perceived threats and thus in a position to start their own war.” “How can leaders exercise control,” he asked, “when computers prescribe strategic instructions on a scale and in a manner that inherently limits and threatens human input?” However, it is worth noting that autonomous weapons remain, for now, very limited, largely confined to anti-missile defenses designed to shoot down targets moving so quickly that keeping a human “in the loop” would slow the response too much. Another use is in “loitering munitions” that may go after a target meeting pre-programmed criteria. In both cases, a human will have had to switch on the system or launch the munition.
When the ChatGPT bot is asked about AI and diplomacy, it signs off with an important sentence: “It is important to carefully consider the potential ethical and societal implications of using AI in this context.” There is certainly a valid concern about the degree to which bringing AI into altogether fuzzier realms involving human interaction — such as diplomacy and politics — may begin to distort the process and disempower the humans.
The idea, therefore, is that AIs could become advisors, supporting but not replacing their human partners. Five years ago, at a World Trade Organization meeting, a “Cognitive Trade Advisor” was showcased: a system designed to provide quick answers to complex questions about the arcane intricacies of global trade, a job that otherwise takes time and an array of experts. Likewise, the UK Foreign Office has adopted AI tools that monitor public data and flag up potential crises, in the hope that they can be prevented (or at least prepared for). The German Foreign Ministry has followed suit.
All well and good, but any student of politics (or watcher of Yes, Prime Minister) knows that real power lies with those who frame the choices. Might humans become dependent on their AI advisors? Computers can be both phenomenally smart and astonishingly stupid. Their capacity to collate, digest and assess a huge range of data quickly is offset by the biases and assumptions built into the algorithms they use to interpret the world and learn. These may be flawed at the outset, or even deliberately manipulated.
Seven years ago, Microsoft introduced Tay, an AI Twitter chatbot that learned conversational gambits and new vocabulary through its interactions with users. It immediately became something of a sport to get it to tweet offensive terms and opinions, and after just sixteen hours Tay was withdrawn. No one expects government AI to be as vulnerable, but at a time when hacking has become another tool of great-power rivalry, what might happen if Chinese or Russian intelligence agencies could tweak the algorithms?
We are nowhere near the age of autonomous AI diplomats writing treaties and issuing démarches on their own initiative, nor would we want to be. As with military AI, except in very specific circumstances we will for the foreseeable future want humans in the loop, setting the parameters of policy and vetoing the less appealing suggestions from the systems.
Nonetheless, as AIs begin to pass the so-called Turing Test, communicating in a way that cannot reliably be distinguished from a human’s, their role in diplomacy and politics will inevitably grow. As with quantum computing, an AI arms race is under way as countries seek to steal a march on their rivals and change the global balance of power.
As Vladimir Putin grandiloquently put it in 2017: “Artificial intelligence is the future, not only of Russia, but of all mankind… whoever becomes the leader in this sphere will become the ruler of the world.” Maybe not (and in any case it doesn’t look as if it will be Russia), but if computers are now beating humans at Diplomacy, that success suggests we’re approaching a new stage. In the future it will not just be autonomous “slaughterbots” and similar violent applications that demonstrate the power of AI, but also diplomats’ cybernetic aides, analysts, copywriters and protocol officers. For some time, robots have been helping us fight wars. The next challenge could be helping us avoid them.
This article was originally published in The Spectator’s UK magazine.