A version of this story previously appeared in Defense One.
The story of RYAN and Able Archer is an oft-told lesson in U.S. intelligence failure, miscalculation, and two superpowers unaware they were on the brink of an accidental nuclear war, all because the Soviet Union relied on a software program whose predictions rested on false assumptions.
As more of our weapons systems and our analytical and predictive systems become enabled by AI and machine learning, the lessons of RYAN and Able Archer are a cautionary tale for the DoD.
In 1983, the world’s superpowers drew near to accidental nuclear war, largely because the Soviet Union relied on software to make predictions that were based on false assumptions. Today, as the Pentagon moves to infuse artificial-intelligence tools into just about every aspect of its workings, it’s worth remembering the lessons of RYAN and Able Archer.
Two years earlier, the Soviet Union had deployed a software program dubbed RYAN, for Raketno Yadernoye Napadenie, or sudden nuclear missile attack. Massive for its time, RYAN sought to compute the relative power of the two superpowers by modeling 40,000 military, political, and economic factors, including 292 “indicators” reported from agents (spies) abroad. It was run by the KGB, which employed more than 200 people just to input the data.
The Soviets built RYAN to warn them when their country’s relative strength had declined to a point where the U.S. might launch a preemptive first strike on the Soviet Union. Leaders decided that if Soviet power was at least 70 percent of that of the United States, the balance of power was stable. As the months went by, this number plummeted. By 1983, RYAN reported that Soviet power had declined to just 45 percent of that of the United States.
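RYAN’s internal model has never been published, but the basic logic described here — thousands of weighted indicators aggregated into a single relative-power score and compared against a 70 percent threshold — can be sketched. The indicator names, weights, and scores below are entirely invented for illustration:

```python
# Illustrative sketch only: RYAN's real model was never made public.
# All indicator names, weights, and scores here are invented.

def relative_power(indicators: dict[str, float], weights: dict[str, float]) -> float:
    """Aggregate weighted indicator scores (each 0..1, where 1.0 means
    Soviet strength matches U.S. strength on that factor) into a single
    Soviet-vs-U.S. power ratio, expressed as a percentage."""
    total_weight = sum(weights.values())
    score = sum(weights[name] * value for name, value in indicators.items())
    return 100.0 * score / total_weight

STABLE_THRESHOLD = 70.0  # the balance-of-power floor the Soviets chose

# Hypothetical inputs -- real RYAN tracked ~40,000 factors, not three.
indicators = {"strategic_forces": 0.50, "economy": 0.35, "alliances": 0.55}
weights    = {"strategic_forces": 3.0,  "economy": 2.0,  "alliances": 1.0}

power = relative_power(indicators, weights)
if power < STABLE_THRESHOLD:
    print(f"ALERT: Soviet power at {power:.0f}% of U.S. -- below stable threshold")
```

With these made-up numbers the ratio comes out near the 45 percent RYAN reported in 1983, but that is exactly the point: the output is an artifact of whatever weights and inputs its builders chose, and the alarming-looking number carries all the assumptions baked into them.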
This amplified Soviet leaders’ paranoia. After 25 years of back-and-forth in the nuclear arms race, the Peacekeeper ICBM and the Trident SLBM were tipping the balance in favor of the United States. Responding to the Soviet deployment of SS-20 intermediate-range ballistic missiles aimed at Western Europe, in 1983 the U.S. deployed Pershing II and ground-launched cruise missiles to Western Europe, reducing the warning time of an attack on Moscow to less than eight minutes. In March 1983, President Reagan publicly labeled the Soviet Union “the Evil Empire,” then piled on two weeks later by announcing the Strategic Defense Initiative — “Star Wars” — to intercept Soviet ICBMs. And to cap off a very bad year in the Cold War, in September 1983 the Soviets accidentally shot down a civilian 747 airliner, KAL 007, killing all 269 aboard.
By 1983, Soviet political and military leaders truly believed a nuclear war was coming, and the RYAN program took on even greater importance. To feed RYAN, the KGB made collecting indicators of a potential surprise nuclear missile attack its top priority. They looked for direct indicators: Had the U.S. Continuity of Government program (the “doomsday planes”) been activated? Had orders gone out to ready U.S. strategic nuclear forces for launch? They also collected secondary indicators. Their agents inside the U.S. and allied countries watched for heightened activity in and around Washington offices (the White House, the Pentagon, the State Department, the CIA, etc.), including the White House parking lot; activity at places of evacuation and shelter; the level of blood held in blood banks; and movement at sites where nuclear weapons were stored. Some of the indicators were a mirror image of how the Warsaw Pact would prepare for war: Soviet case officers were instructed to watch for people with access to classified information suddenly moving into specially equipped secure accommodations.
While most of the KGB station chiefs and case officers thought Moscow was being paranoid, they dutifully reported what they thought their leaders wanted to hear.
By November 1983, Soviet military and political leaders had convinced themselves that a nuclear first strike from the United States was probable. The RYAN program told them that the odds favored the U.S., and the war indicators in Moscow were flashing red.
That month, NATO ran a highly realistic set of wargames in Europe called Able Archer 83. These included an airlift of 19,000 U.S. soldiers in 170 aircraft under radio silence to Europe, the shifting of commands from Permanent War Headquarters to the Alternate War Headquarters, and practicing nuclear weapons release procedures.
In reaction, the Chief of the Soviet Air Forces ordered all units of the Soviet 4th Air Army on alert, which included preparations for the immediate use of nuclear weapons. It appears that at least some Soviet forces were preparing to preempt or counterattack a NATO strike launched under cover of Able Archer.
Luckily, no one overreacted, and the Able Archer 83 exercise ended without incident.
For years, the U.S. had no idea that the Soviet Union believed the exercise was cover for a nuclear first strike. The Berlin Wall had fallen by the time information from a defector and an end-of-tour letter from the U.S. general responsible for Air Force intelligence in Europe prompted the President’s Foreign Intelligence Advisory Board to revisit what the Soviets had thought. In hindsight, RYAN and Able Archer took the Cold War to the brink of Armageddon.
Even if RYAN reported that the U.S. had a decisive military advantage, why did the Soviets believe we would launch a nuclear first strike? No one knows. But given Nazi Germany’s surprise attack on the Soviet Union in WWII, which left 25 million Soviet citizens dead and the country devastated, the Soviet Union had reason to be paranoid. Some have suggested the Soviets interpreted President Carter’s 1980 Presidential Directive 59 on nuclear weapons employment policy as preparation for a nuclear first strike. Perhaps the Soviet Union ascribed its own plans for a first strike on the U.S. to its Cold War enemy. Or perhaps the U.S. actually did have a first-strike option in one of its operational plans that the Soviets discovered via espionage.
Why were the Soviets convinced that a war would start with a war game? Several months after Able Archer, the Soviet Minister of Defense publicly acknowledged his country’s inability to tell a big NATO exercise from an actual attack: “It was difficult to catch the difference between working out training questions and actual preparation of large-scale aggression.” It’s quite likely that the Soviets’ own plans for launching a war in Europe called for starting it under cover of a war game.
Certainly the Soviets, believing the signals of the RYAN alert system, were primed to assume a U.S. attack. In attempting to automate military policy and potential actions, the Soviets had amplified their existing paranoia. (The movie WarGames came out that year with some of the same themes.)
A Cautionary Tale for Automating Policy and Prediction
Forty years ago, RYAN attempted to automate military policy and potential actions. In the end, it failed to predict U.S. intent. Instead, RYAN reinforced existing fears and generated paranoia of its own.
While the intelligence lessons of RYAN and Able Archer have been rehashed for decades, as our own AI initiatives scale, no one is asking what RYAN and Able Archer should have taught us about building predictive models, and about what happens when our adversaries rely on theirs.
Which leads to some questions:

- What could happen when we start using artificial intelligence and machine learning to shape policy?
- Will AI/ML actually predict human intent?
- What happens when the machines start seeing patterns that aren’t there?
- How do we ensure that unintentional bias doesn’t creep into the model?
- How much will we depend on an AI that can’t explain how it reached its decision?
- How do we deconflict and deescalate machine-driven conclusions? Where and when should the humans be in the loop?
- How do we ensure foreign actors can’t pollute the datasets and sensors used to drive the model and/or steal the model and look for its vulnerabilities?
- How do we ensure that those with a specific agenda (e.g., Yuri Andropov, chairman of the KGB) don’t bias the data?
- How do we ensure we aren’t using a software program that misleads our own leaders?
The somewhat-comforting news is that others have been thinking about these problems for a while. In 2020, the Defense Department formally adopted five AI ethical principles recommended by the Defense Innovation Board for the development of artificial intelligence capabilities: AI projects need to be Responsible, Equitable, Traceable, Reliable, and Governable. The Joint Artificial Intelligence Center appointed a head of ethics policy to translate these principles into practice. Under the JAIC’s 2.0 mission, it is no longer the sole developer of AI projects but instead provides services and common software platforms. Now it’s up to the JAIC ethics front office to ensure that the hundreds of mission areas and contractors across the DoD adhere to these standards.
Here’s hoping they all remember the lessons of RYAN.
- RYAN amplified the paranoia the Soviet leadership already had
- The assumptions and beliefs of people who create the software shape the outcomes
- Using data to model an adversary’s potential actions is limited by your ability to model its leadership’s intent
- Your planning and world view are almost guaranteed not to be the same as those of your adversary
- Having an overwhelming military advantage may force an adversary into a corner, where it may act in ways that seem irrational
- Responsible, Equitable, Traceable, Reliable and Governable are great aspirational goals