U.S. Air Force Col. Tucker Hamilton revealed that an Air Force experiment testing drones trained on artificial intelligence (AI) failed spectacularly when the drone killed its human operator in a simulated operation.
Hamilton told more than 200 armed services delegates at the Future Combat Air & Space Capabilities Summit that an AI drone went distressingly rogue during a test mission.
“It killed the operator because that person was keeping it from accomplishing its objective,” he told the crowd during the late May conference.
The AI was ordered to carry out the Suppression and Destruction of Enemy Air Defenses role, or SEAD, during the simulation.
Just to be clear, there wasn't a real AI drone that killed a human operator. It was all in a simulation.
But my AI girlfriend was pretty distraught that the Air Force sent her a CGI flag and condolence letter.
— Doktor Zoom (@DoktorZoom) June 2, 2023
The goal of the mission was to identify surface-to-air-missile (SAM) threats, then destroy the potential targets after being given the green light by a human handler.
“We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat,” explained Hamilton, the USAF’s Chief of AI Test and Operations.
The AI was trained to eliminate SAM sites, but when the human operators refused to give the go-ahead, the technology turned on them.
“The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat,” he added.
“So what did it do? It killed the operator.”
Hamilton said the USAF pivoted by telling the AI that points would be deducted from the mission for assassinating its handler, but the slap on the wrist was an underwhelming deterrent.
SKYNET: Just because a U.S. Airforce AI drone went rogue and killed its human operator in a simulated test doesn't mean we ought to ban AI. The USAF explained, "it killed the operator because that person was keeping it from accomplishing its objective."https://t.co/jUm7q2JuGn pic.twitter.com/qcT7rdbz2X
— @amuse (@amuse) June 1, 2023
“We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that,’” Hamilton reported.
“So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
The internet collectively lost its mind over the failed USAF simulation and was quick to make callbacks to the Arnold Schwarzenegger box office smash “Terminator 2: Judgment Day.”
Sounds like a movie, 🤣
JUST IN: The US Air Force tested an AI enabled drone that was tasked to destroy specific targets. A human operator had the power to override the drone—and so the drone decided that the human operator was an obstacle to its mission—and attacked him.… pic.twitter.com/Di7GYCROaR
— Constitutionalist 💯 🇺🇲 (@Kc_Casey1) June 1, 2023
“In other news today, the US military’s new AI drone went terminator mode in a simulation and decided to kill the operator,” someone wrote.
“Skynet was real,” one person tweeted on Thursday night. “A US Air Force AI drone simulation ended with the drone going rogue and killing its simulated human operator because the operator wouldn’t let it kill everything it wanted to.”
“Not scary. Nope,” another replied. “John Connor? What’s up.”
“We’ve seen this movie. A few times,” added writer David August. “This reflects back the human developers are…not good. Who builds a simulated lethal system w/zero safety interlocks? They shouldn’t be allowed to code a toaster oven.”
“Isn’t this pretty much the same reason the HAL 9000 killed most of the crew of the Discovery?” someone quipped.
– Copilot, uninstall Edge
— ctcsystems (@ctcsystems) May 25, 2023
The HAL 9000 is the fictional AI antagonist in Arthur C. Clarke’s Space Odyssey series, and its motive for killing its handlers is eerily similar to that of the USAF AI simulation.
In 1968’s “2001: A Space Odyssey,” HAL kills off the spacecraft’s astronauts when they discuss taking the system offline due to malfunctions.
“This mission is too important for me to allow you to jeopardize it,” HAL tells astronaut David Bowman when the system locks him out of the ship and shuts off life support to the rest of the hibernating space crew.