
AI and the new way of war

The killer robots are coming


What is happening in Gaza now provides a glimpse of how all wars may be fought in the future — with artificial intelligence (AI). In The Spectator last November, I wrote about an Israeli airstrike that brought down a six-story building in Gaza City, reportedly killing more than forty civilians. One of the residents, a man named Mahmoud Ashour, dug through the rubble with his bare hands, trying to find his daughter and her four children, a girl of eight and three boys aged six years, two years and six months — all killed. The Israeli military would not tell me why the building was hit, beyond saying that Gaza’s armed groups put their military infrastructure amid civilians, but Amnesty International said there had been a single member of Hamas living there. What we now know about the war in Gaza suggests there is a good chance the people in that building were effectively sentenced to death by artificial intelligence.

We have learned about the central role of artificial intelligence in Israel’s Gaza offensive largely thanks to an investigation by the Tel Aviv magazine +972. One of their journalists, Yuval Abraham, reported that an automated system was being used to track Palestinian targets to the family home, allowing human operators to send a bomb or missile there. The system is allegedly called, rather ghoulishly, “Where’s Daddy.” The target list is said to come from a different system with another creepy name: “Lavender.” Abraham writes that Lavender poured out kill lists with a total of 37,000 names, all suspected members of the military wings of Hamas and Palestinian Islamic Jihad: “The army gave sweeping approval for officers to adopt Lavender’s kill lists, with no requirement to thoroughly check why the machine made those choices or to examine the raw intelligence data on which they were based.”

Abraham’s investigation drew on interviews with six Israeli army officers who said they had been involved in the AI assassination program in Gaza. As you would expect, no names were published. One source said that the system was built to look for Palestinian militants at their homes because it was “much easier to bomb a family’s home” than a military installation, which would probably be underground. Another of those quoted in the article said that human personnel were often just a “rubber stamp” for the machine’s decisions, spending “twenty seconds” on a target before authorizing an attack — just long enough to make sure the target was male. It had to work like this, the officer went on, because of the huge number of suspected militants the machine was finding to kill, many times more than any team of flesh and blood intelligence analysts could have identified. “Once you go automatic, target generation goes crazy.”

It’s a terrifying prospect to be marked for death by a line of code. And it is claimed that Lavender made errors: one in ten of those it selected were later judged to be unconnected to any militant group. But those who want to see more AI used on the battlefield argue that, on average, machines can do a better job than humans of accurately finding targets. One senior officer quoted by the magazine, defending the program, said he had more trust in a “statistical mechanism” than in a soldier who had lost a friend two days ago. This applied to everyone — himself included — because everyone serving had lost people on October 7, when Hamas launched a pogrom that killed some 1,200 Israelis. So data, not emotion, ruled the process of identifying the enemy in Gaza. The senior officer said: “The machine did it coldly. And that made it easier.”


The Israeli military disputed the +972 story, denying that tens of thousands of Palestinians had been put on computer-generated kill lists. The Israel Defense Forces issued a statement saying there was no “system,” only a database used by analysts to cross-reference information from different intelligence sources. This was an “information management tool in the target identification process,” not an AI to predict whether a person was a terrorist. “According to IDF directives, analysts must conduct independent examinations, in which they verify that the identified targets meet the relevant definitions in accordance with international law.” The statement neither confirmed nor denied the existence of the dystopian codenames “Lavender” and “Where’s Daddy.” And it made no mention of the part of Israel’s military intelligence directorate thought to be home to the soldiers using AI, Unit 8200.

Unit 8200 has been called “probably the foremost technical intelligence agency in the world.” Soldiers are not allowed to disclose that they are part of the unit, or their role in it. Adding to the mystique, it is said to recruit teenage hackers and computer geniuses straight out of high school, or even while they are still there. A year ago, before the current war began, the chief of Unit 8200’s AI department, a “Colonel Yoav,” appeared in public on the podium of a conference at Tel Aviv University. He spoke about how a Hamas missile squad had been “thwarted” — presumably killed — through the use of AI systems named “Gospel” and “Alchemist.” What he described then sounds very much like the account in +972 magazine: “The machine,” he said, “was able to find the terrorists from a vast pool of people and transfer the information…to the intelligence department’s researchers.” The AI could compare images from the battlefield with lists of “dangerous” people previously fed into the system in “seconds… which in the past would have taken hundreds of researchers several weeks to do.”

This was foreshadowed in 2021 in a book called The Human-Machine Team, written by an Israeli military officer named on the cover only as Brigadier General Y.S. In April, through an embarrassing security lapse that linked his Gmail account to the book’s Amazon page, he was unmasked as Brigadier General Yossi Sariel — the head of Unit 8200. His book is a blueprint for an AI-driven war, predicting that thousands of human intelligence officers will be replaced by AI machines. “Big data will be the key to finding and understanding rivals and enemies. Data from hundreds of thousands of drones will be part of the basic information about everything.” Sariel describes an army of the very near future — perhaps the army now fighting in Gaza — as machine and human “bouncing data, ideas, and insights off each other.” This “synergetic learning,” he writes, creates the potential for “super-cognition” on the battlefield.

Such “super-cognition” might restore the advantages that First World armies once had over the armies of poorer nations. In 1898, at the battle of Omdurman, a British-Egyptian expeditionary force equipped with the latest machine guns killed some 10,000 Sudanese warriors armed with spears, swords and antique rifles. The British-led force, which included one Lieutenant Winston Churchill, suffered only forty-eight dead. Today, everyone has machine guns and so, in Fallujah in 2004, I followed a unit of the US Marines as they went house to house to clear out insurgents who were wearing flip-flops and using Kalashnikovs. At that point in the battle, America’s vast technological advantage had been all but neutralized: an engagement often came down to one scared nineteen-year-old with a gun against another. But what if the Marines could have sent in a swarm of small drones, never setting foot in Fallujah? Fighting this way would dramatically reduce their own casualties. It might even work so well that it would cause another kind of problem, tempting Western governments and their publics into yet more rash foreign adventures. We might also be more willing to go to war believing that AIs could make all our wars good ones, killing only combatants and no civilians.

Taniel Yusef, of the UK Campaign to Stop Killer Robots, says this is an illusion. It is “actually incredibly difficult” for machines to accurately tell the difference between combatants and civilians, she says, and the more data you generate — from your thousands of drones on the battlefield — the more you confuse the AI. She tells me that there has been “ludicrous overhype” of AI weapons by the people making money out of selling them. “This isn’t fairy dust, it’s not magic, it’s not special: it’s just math.” She believes there must always be a place in war for human emotion and empathy, and human ways of understanding the world, honed over millions of years of evolution. “A battlefield is the most hectic, chaotic, surprising environment, by design. Remember, it’s designed to surprise. So we can’t train a computer system for every single surprising occurrence.”

Yet it is probably inevitable that armed combat will increasingly be automated, with many weapons systems removing humans from the decision-making altogether. An incident in 2007 — ancient history in tech time — shows why. Soldiers in the South African army were training with a robotic antiaircraft gun. The weapon was designed to use radars and a laser to lock on to aircraft and missiles, feeding the targeting data directly to a pair of 35mm cannons with no human intervention, even reloading on its own when the magazines emptied. For some reason — at the time, a “software bug” or a “computer glitch” was blamed — the machine opened fire on its own, killing nine soldiers. There was nothing anyone could do. It took an eighth of a second from start to finish.

The last bottleneck in the kill chain is the human who pulls the trigger or pushes the button, and a machine will always beat a human to the draw. Armies that automate their killing will beat those that don’t. Everyone will be forced to fight this way, if they have the technology. But while the robotic gun in South Africa was an automatic system, it did not think for itself. An AI does. Self-driving cars, the computers that are the world champions of chess and Go, and the so-called large language models that gave us ChatGPT are all self-taught. Humans gave them some minimal guidance and they figured the rest out for themselves, using massive amounts of data. The Go computer played 1.3 million games against itself before going on to beat every human opponent. The trouble is that the programmers have little idea what these machines are “thinking” or how. Such AIs may be difficult to predict, or to control.

The dangerous marriage between AI and robotics is already happening, creating autonomous killing machines that can work with little or no human oversight. Last year, the Ukrainian drone company Saker claimed it had deployed a fully autonomous weapon that used AI to decide on its own when to shoot and whom to kill. South Korea has guard robots on its border with the capability to detect, track and fire on intruders without human intervention. Even an enemy in flip-flops has access to similar tech. A $500 drone bought off the shelf can be sent to drop a grenade on a $100 million fighter aircraft parked on the ground, destroying it. So, whether America’s next war is against a sophisticated Chinese military in Taiwan, or a more rudimentary Hezbollah in Lebanon, an AI arms race is inevitable. The Pentagon is said to have some 800 AI programs in the works. In November 2023 the Associated Press reported: “There is little dispute among scientists, industry experts and Pentagon officials that the US will within the next few years have fully autonomous lethal weapons.”

The widespread use of slaughterbots on the battlefield would have huge implications, but these would be tactical weapons, killing people a few at a time. The risks from autonomous weapons are many orders of magnitude greater when it comes to nuclear arms. The United States and the United Kingdom insist it will always take a human to give the order to drop the Bomb. But Russia and China may already have some kind of automation, an AI version of the so-called “dead hand” switch, where early-warning systems trigger a reflex to launch missiles in return… unless an order to stop comes from the country’s leadership. The dead hand is supposed to deter a decapitating first strike by the enemy, but with autonomous systems in the chain of command, the fate of humanity might be in the hands of machines we don’t really understand.

The rational thing to do now would be for everyone to stop developing autonomous weapons. But just as with AI itself, we are all stuck in the classic prisoners’ dilemma. Cooperation would be better for everyone, but no single actor can take the chance that another won’t break the pact, gaining an unbeatable advantage. As Vladimir Putin said in 2017, the world will belong to the country that wins the AI race. The most benign version of this future is that eventually we all become mere spectators of warfare, not participants; the worst version — familiar from a hundred sci-fi movies — is that our creations will destroy us. At the very least, we can expect war to become more deadly, more unpredictable, fought at ever greater speeds and with chilling, machine efficiency.

This article was originally published in The Spectator’s June 2024 World edition.
