How Will Artificial Intelligence Affect the Risk of Nuclear War?

As technology has progressed, humans have become ever more powerful. With this power comes great opportunity and great risk. Nowhere is this clearer than in the potential of artificial intelligence. But a new report from the RAND Corporation suggests that our misconceptions about what the technology can do may be as dangerous as the technology itself.

If you’re a singularity believer, according to the RAND report, “Superintelligence would render the world unrecognizable and either save or destroy humanity in the process.” A world with human-level AI could be unimaginably different to the world of today—and difficult to make predictions about.

Yet society is trying to adjust to the smart algorithms (“weak AI”) that increasingly influence our lives. A recent report outlined the potential for AI capabilities to be used by bad actors.

Nuclear weapons remain, perhaps, foremost in people’s minds as an existential threat. The report focuses on how lesser AI might alter the shaky nuclear equilibrium we’ve been living in since the Trinity Test gave birth to the nuclear age.

You might initially imagine there’s a risk that a cyberattack, enhanced by AI, could hack into nuclear missiles. There was an alarming moment in 2010 when the US Air Force “lost contact” with missiles briefly. But this is not a major concern, at least not yet. Although it may seem alarming that the US nuclear arsenal still operates on 40-year-old computers with floppy disks, it means that the control structure is “air-gapped.” A closed network, with no access to the internet, is much more difficult to hack.

Stephen Schwartz, an expert on nuclear policy, told me in an interview (40:00): “The system as currently employed and operating is relatively invulnerable to a cyberattack directly.” But he raised a far more chilling concern, one shared by the RAND report: “Keep in mind that the nuclear system depends on military communications…and those are vulnerable. To the extent that those could be attacked and manipulated, particularly during a crisis, we may have a problem.”

The key thing to understand about the nuclear weapons command and control infrastructure is when it’s designed to be used. For mutually assured destruction—viewed as necessary for an effective deterrent—you need to be able to launch your retaliation within a matter of minutes. Otherwise, the thousands of nuclear missiles headed towards you could wipe out the chain of command in a decapitation strike, or destroy your ability to retaliate. You have moments to decide. There’s not a great deal of time to double-check.

Given how quickly decisions have to be made, there’s not a lot of time for humans to judge, react, and calculate. This is why computerized early-warning systems have been in use since they first became possible. As AI develops, “artificially intelligent advisers” will be a huge temptation for the military—algorithms that can assess the nuclear threat and automatically plan an optimal response in the minutes that are available. But this will bring with it new risks.

The computers that actually control the missiles are far less vulnerable to error or attack than the communications to and from the humans involved in making decisions. The scariest Cold War moments have often come from such misunderstandings. In 1983, Stanislav Petrov was monitoring the Soviet early warning system when he saw an alert: incoming missiles had been fired by the United States. Had Petrov followed correct military protocol, he would have raised the alarm. But Petrov thought it unlikely that the US would attack with only a small number of missiles, and chose not to, potentially averting nuclear war. This is just one incident: similar close calls have happened again and again and again.

We have been incredibly fortunate that all of these errors were spotted before a nuclear war began. But what if the miscommunication were more convincing? If, for example, deepfake technology were used to imitate the president ordering a nuclear strike? Such are the scenarios nuclear strategists have to ponder.

Misconceptions about what artificial intelligence can do can be just as dangerous as AI itself. If people believe their communications can be hacked—even if they’re perfectly secure—how can they trust the orders they’re receiving?

Similar concerns were raised by RAND about assured destruction: the report states, “Both Russia and China appear to believe that the United States is attempting to leverage AI to threaten the survivability of their strategic nuclear forces, stoking mutual distrust that could prove catastrophic in a crisis.” If smart algorithms can scan satellite imagery to determine the location of nuclear silos—or just analyze smartphone app data—might the side with better technology be at an advantage, disrupting the balance of power? What if one side believes the other will soon be able to reliably intercept nuclear missiles?

Others at the workshops behind the RAND report were more sanguine about this prospect. They pointed out that adversarial examples—slight distortions to input data that are cleverly constructed to fool a machine-learning algorithm—could always be used to combat an algorithm that’s scanning for retaliatory forces.
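
To make the idea concrete, here is a minimal sketch of an adversarial perturbation using the fast gradient sign method, written in Python with PyTorch. The classifier, input image, class index, and epsilon value are all hypothetical placeholders for illustration; nothing here is drawn from the RAND report itself.

```python
# Minimal sketch of an adversarial example (fast gradient sign method).
# `model`, `image`, `true_label`, and `epsilon` are illustrative placeholders.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.03):
    """Return a slightly distorted copy of `image` crafted to fool `model`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Nudge every pixel a tiny step in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Hypothetical usage: a classifier scanning satellite tiles for missile silos
# might label the perturbed tile differently, even though a human would see
# no meaningful change.
# adv_tile = fgsm_perturb(silo_classifier, tile, torch.tensor([SILO_CLASS]))
```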

But this raises a new concern: any “AI adviser” to the military on nuclear weapons would also be vulnerable to such attacks. If the machine-learning algorithm scanning the skies for nuclear launches can be fooled, it could feed incorrect information to the humans in the command and control structure. Human error may be the biggest risk, but trusting automated systems and algorithms too much could also prove catastrophic.

The adversarial nuclear relationship between the US and the USSR in the Cold War was defined by both sides trying to second-guess the strategy, intentions, and capabilities of the other side. Misconceptions about what the other side was trying to do, or what its technology was capable of, were key to the geopolitical decisions that were made. As progress in artificial intelligence accelerates, confusion about what it makes possible could reignite these fears, leading to nuclear weapons on hair-trigger alert, concern about an “AI gap,” and an arms race. Arms races tend to prioritize speed over safety, which is why many are concerned about races for a superintelligence or autonomous weapons.

At an accelerating rate, important societal functions are being carried out by technologies that only a few people understand. Traditional institutions feel the need to react to this acceleration, but can jump to dangerous conclusions. The new US Nuclear Posture Review suggests using nuclear weapons to respond to cyberattacks; but when “cyberattack” is a poorly defined term, and the origins of such attacks can take a long time to trace, is this policy realistic?

It is clear that states will not want to divulge their military secrets. Indeed, a certain level of mystery about what can be achieved may well help deter attacks. But we would all benefit from broader understanding of what is and isn’t possible with artificial intelligence. Nuclear policy is just another area where the black-box nature of algorithms that few understand can act to destabilize a shaky equilibrium. Now more than ever, we need our experts to communicate with our leaders.

“We escaped the Cold War without a nuclear holocaust by some combination of skill, luck, and divine intervention, and I suspect the latter in greatest proportion,” said General George Lee Butler of the US Strategic Air Command.

Can we trust in luck and divine intervention for the next arms race?

Image Credit: maradon 333 / Shutterstock.com

Thomas Hornigold
Thomas Hornigold is a physics student at the University of Oxford. When he's not geeking out about the Universe, he hosts a podcast, Physical Attraction (http://www.physicalattraction.libsyn.com/), which explains physics, one chat-up line at a time.