Nuclear Experts Say Mixing AI and Nuclear Weapons Is Inevitable

Experts in nuclear warfare are confident that artificial intelligence will soon be woven into the world’s deadliest weapons. No one is quite sure what that means.

In mid-July, Nobel laureates gathered at the University of Chicago to listen to nuclear experts discuss the end of the planet. Over two days of closed sessions, scientists, former government officials, and retired military personnel briefed the laureates. The goal was for some of the world’s most respected people to learn about one of the most horrifying weapons ever created. At the end, the laureates were asked to make policy recommendations to leaders around the world on how to avoid nuclear war.

AI is on everyone’s mind. “We are entering a world of artificial intelligence and emerging technologies that influence our daily lives, but also the nuclear world in which we live,” Scott Sagan, a Stanford professor known for his research on nuclear disarmament, said at a press conference held at the end of the talks.

Implicit in that statement is the assumption that governments will mix AI and nuclear weapons, an assumption shared by everyone I spoke to in Chicago.

“It’s like electricity,” says Bob Latiff, a retired US Air Force major general and a member of the Bulletin of the Atomic Scientists’ Science and Security Board, the body that sets the Doomsday Clock. AI, he says, will eventually find its way into all aspects of our lives.

The conversation about AI and nuclear weapons is hampered by two major problems. The first: “What does it mean to give AI control of a nuclear weapon?” asks Jon Wolfsthal, a nonproliferation specialist who is director of global risk at the Federation of American Scientists and was previously a special adviser to Barack Obama.

The second: no one quite agrees on what AI is. “Part of the issue is that large language models have taken over the discussion,” says Herb Lin, a Stanford professor and Doomsday Clock alum.

First, the good news: No one thinks that ChatGPT or Grok will be getting the nuclear codes any time soon. Wolfsthal told me that although nuclear experts have plenty of “theological” differences, they are united on this front. “In this area, almost everyone says that we want human control over nuclear weapons decision-making,” he says. But Wolfsthal has heard whispers about other, more concerning uses of LLMs at the heart of American power. “A number have said that they want an interactive computer so the president can figure out what Putin or Xi is going to do: I can produce this dataset very reliably. I can get everything Xi and Putin have ever said or written about anything, and have a statistically higher probability of reflecting what Putin has said,” he says.

“And I was like, that’s great, but how do you know Putin believes what he has said or written? It’s not that the probability is incorrect; it’s that it’s based on a premise that cannot be tested,” Wolfsthal explains. “Quite honestly, I don’t think many of the people looking at this have been in a meeting with a president. I don’t pretend to be close to a president, but I’ve been in the room with them when they discuss these things, and they don’t trust anyone with this stuff.”

Anthony Cotton, the general in charge of America’s nuclear arsenal as head of US Strategic Command, has spoken at length about the importance of AI.

The idea that a rogue AI could start a nuclear conflict is not what keeps Wolfsthal up at night. “I worry that someone will say we need to automate this system, or parts of it, and that doing so will create vulnerabilities an adversary could exploit, or will produce data and recommendations that people can’t understand, leading to bad decision-making,” he says.

Launching a nuclear weapon is not as simple as a leader in Washington, Moscow, or Beijing pushing a button. Nuclear command and control is a complex web of early-warning radars, satellites, and other computer systems, all monitored by humans. If the president orders a launch, two crew members in a silo must turn their keys in concert to fire the missile. The launch of a nuclear weapon in the United States is the result of a thousand small decisions, every one of them made by a human.

How will AI change this process? What happens when AI monitors the early-warning radar instead of a human? “How do you confirm that we are under nuclear attack? Can you rely on anything other than visual confirmation?” Wolfsthal says. US nuclear policy requires what’s called “dual phenomenology” to confirm that a strike has been launched: an attack must be detected by both satellite and radar systems to be considered genuine. “Can one of those phenomenologies be artificial intelligence? I would argue that, at this stage, the answer is no.”

The main reason: we don’t understand how many of these AI systems work. They are black boxes. And even if they weren’t, experts say, integrating them into the nuclear decision-making process would still be a bad idea.

Latiff is concerned that AI systems reinforce confirmation bias. “Even if control is nominally retained by a human, I worry about how meaningful that control really is,” he says. “I have been a commander. I know what it means to be accountable for my decisions. You need that. You need to be able to assure the people you work for that somebody is responsible. If Johnny gets killed, who do I blame?”

AI systems cannot be held accountable when they fail, and they are bound by their training data, guardrails, and programming. They cannot see beyond themselves. Whatever their capacity to learn and think, they are trapped within boundaries made by humans.

Lin points to Stanislav Petrov, the Soviet Air Defence Forces lieutenant colonel who, in 1983, arguably saved the world by refusing to pass a false nuclear alert up the chain of command.

“Let’s say he had passed the message up the chain of command, as he was supposed to, instead of sitting on it, and a world holocaust had ensued. Where is the failure there?” Lin asks. “The first mistake was made by the machine. The second mistake was that the human didn’t recognize the machine had made an error. How is a human supposed to know when a machine has made a mistake?”

Petrov did not know the machine had made a mistake; he made an educated guess. He knew that an American attack would be all or nothing, and his screens showed only five missiles launched. Five was not a large number. The computer system was new, and he did not fully trust it. He made a judgment call.

“Can we expect humans to do that routinely? Is that a reasonable expectation?” Lin asks. “The point is that you have to go outside your training data to be able to say: no, what my data is telling me is wrong. By definition, [AI] cannot do that.”

Donald Trump, the Pentagon, and the White House have made AI a priority, and have invoked nuclear arms races to do so. In May, in a post on X, the Department of Energy declared that “AI is the next Manhattan Project, and THE UNITED STATES WILL WIN.”

“I find it awful,” Lin says. “I knew when the Manhattan Project was completed, and I could tell you when it succeeded: we exploded a nuclear weapon. I don’t understand what it means to have a Manhattan Project for AI.”


