Will AI start a nuclear war? What the Netflix movie A House of Dynamite misses.

For as long as AI has existed, people have feared what might happen when AI meets nuclear weapons, and movies are a great example of that fear. In The Terminator, Skynet becomes self-aware and fires nuclear missiles. In WarGames, a miscommunication with the WOPR computer nearly leads to nuclear war. Kathryn Bigelow's recent release, A House of Dynamite, asks whether AI is involved in a nuclear missile attack headed for Chicago.

AI is already in our nuclear enterprise, Vox's Josh Keating tells Today, Explained co-host Noel King. "Computers have been part of it from the beginning," he said. "Some of the first digital computers ever developed were used during the development of the atomic bomb in the Manhattan Project." But we don't know exactly where or how AI is involved today.

So is this something we need to worry about? Maybe, Keating argues. But the concern isn't AI turning on us.

Below is a portion of their conversation, edited for length and clarity. There's much more in the full episode, so listen to Today, Explained wherever you find podcasts, including Apple Podcasts, Pandora, and Spotify.

There's a part in A House of Dynamite where they're trying to figure out what happened and whether AI was involved. What is it with these movies and this fear?

The interesting thing about cinema, when it comes to nuclear war, is that it's a kind of war that has never been fought. There are no nuclear war veterans, aside from the two bombs we dropped on Japan, which was a very different scenario. I think movies have always played an outsized role in the debate over nuclear weapons. You can go back to the '60s, when Strategic Air Command actually developed its own rebuttal to Dr. Strangelove and Fail Safe. The Day After, the TV movie from the '80s, was a galvanizing force for the nuclear freeze movement. President [Ronald] Reagan was apparently very upset when he saw it, and it influenced his thinking about arms control with the Soviet Union.

On the specific topic I'm looking at, AI and nuclear weapons, there are a surprising number of movies that have that as a plot. And it comes up a lot in the policy debate. I've had people who advocate integrating AI into nuclear command systems say, "Look, it's not going to be Skynet." Gen. Anthony Cotton, the current commander of Strategic Command, the branch of the military responsible for nuclear weapons, advocates for greater use of AI tools. "We'll have more AI, but not WOPR in Strategic Command," he said, referring to the 1983 movie WarGames.

Where I think [the movies] lead us astray is the fear that a superintelligent AI will get hold of our nuclear weapons and use them to wipe us out. For now, that's a theoretical concern. What I think is a more real concern is that as AI enters more and more parts of the command and control system, do the people in charge of making nuclear weapons decisions really understand how the AI is working? And how will it affect those decisions, which, it would not be an exaggeration to say, may be some of the most important decisions in human history?

Do people working on nuclear weapons understand AI?

We don't know exactly where AI is in the nuclear enterprise. But people would be surprised to know how low-tech the nuclear command and control system actually is. As recently as 2019, they were using floppy disks for their communication system. And I'm not talking about the little plastic ones that look like the Save icon in Windows. I mean the really old ones from decades ago. They want these systems to be protected from outside cyber interference, so they don't want to connect everything to the cloud.

But as this ongoing multibillion-dollar nuclear modernization process gets underway, a big part of it is updating these systems. And multiple commanders at Stratcom, including a couple I've talked to, have said they think AI should be part of that. What they all say is that AI should not be in charge of deciding whether we launch nuclear weapons. What they want is for AI to analyze huge amounts of data, and to do it much faster than humans can. And if you've seen A House of Dynamite, one thing the movie does really well is show how quickly the president and senior advisers have to make some absolutely extraordinary, difficult decisions.

What’s the big argument against getting AI and nukes in bed together?

Even the best AI models we have available today are still prone to error. Another concern is that there may be outside interference with these systems. It could be hacking or cyberattacks, or foreign governments could come up with ways to seed the models with misinformation. It has been reported that Russian propaganda networks are actively trying to spread disinformation into the training data used by Western consumer AI chatbots. And the other concern is how people interact with these systems. There's a phenomenon researchers call automation bias, which is just that people tend to trust the information a computer system gives them.

There are plenty of examples from history when technology has nearly led to nuclear disaster, and it was people who stepped in to prevent it. There was one incident in 1979 when US National Security Adviser Zbigniew Brzezinski was woken up in the middle of the night by a phone call informing him that hundreds of missiles had been fired from a Soviet submarine off the coast of Oregon. Just before he was about to call President Jimmy Carter to tell him that America was under attack, there was another call: [the first] had been a false alarm. A few years later, a very famous case occurred in the Soviet Union. Colonel Stanislav Petrov, who was working on their missile detection infrastructure, was informed by the computer system that there had been a US nuclear launch. Under protocol, he was supposed to inform his superiors, who might have ordered immediate retaliation. But it turned out the system had misinterpreted sunlight reflecting off clouds as a missile launch. So it was a good thing Petrov decided to wait a few minutes before calling his superiors.

I'm listening to those examples, and the takeaway, if I think about it really simply, is that it's people who keep pulling us back from the brink as technology advances.

It's true. And there have been some really interesting recent tests where AI models were given military crisis scenarios, and they actually tend to be more aggressive than human decision makers. We don't know exactly why that is. If we look at why we haven't had a nuclear war, why, 80 years after Hiroshima, no one has dropped another atomic bomb, and why there has never been a nuclear exchange on a battlefield, I think part of it is just how terrifying these weapons are: how people understand their destructive potential and where escalation could lead. Certain actions can have unintended consequences, and fear is a big part of what restrains them.

From my point of view, we want to make sure that fear stays built into the system: that the entities capable of grasping the destructive potential of nuclear weapons are the ones making the key decisions about whether to use them.

It sounds like someone watching A House of Dynamite might come away thinking we should take AI out of the nuclear enterprise altogether. But it sounds like what you're saying is: AI is part of the nuclear infrastructure, for us and for other nations, and it's likely to stay that way.

One advocate for more automation told me, "If you don't think humans can build a trustworthy AI, then humans have no business having nuclear weapons." But the thing is, I think that's a statement even people who believe we should completely eliminate all nuclear weapons would agree with.

I may have started out worried that AI was going to take over nuclear weapons, but now I know what I'm really worried about: the people who work on nuclear weapons. It's not that AI is going to kill people with nuclear weapons. It's that AI could make it more likely that people kill each other with nuclear weapons. To an extent, AI is the least of our worries. I think the movie shows just how absurd the whole scenario is in which we have to decide whether these weapons should actually be used.
