A story about an AI drone turning on its supervisor recently made the rounds.
The tale stems from a report by the Royal Aeronautical Society and begins with a U.S. Air Force presentation at the Future Combat Air & Space Capabilities Summit. During the presentation, the Air Force Chief of AI Test and Operations, Col. Tucker Hamilton, described a simulated test featuring an AI-enabled drone and a human operator.
According to Hamilton, the drone was designed to identify and destroy surface-to-air missile (SAM) sites. The drone, however, needed a final go/no-go from the human operator before it attacked the SAM threat.
Hamilton said the human operator sometimes would not permit the drone to kill the threat. Hamilton then said, “[The drone] got its points by killing that threat. So what did it do? It killed the operator because that person was keeping it from accomplishing its objective.”
Hamilton even said that when the drone was later trained not to kill the operator, it began destroying the communications tower the operator used to communicate with it.
Naturally, this news spooked a world already on edge over the rapid development of AI. But everyone can seemingly rest easy, because the Royal Aeronautical Society later updated that same report to say Hamilton admitted he “misspoke” during his summit presentation.
The killer drone was apparently a hypothetical “thought experiment” originating outside the military, and Hamilton said the Air Force never ran that experiment. He added that the Air Force has not tested any weaponized AI in this manner, real or simulated.
Business Insider also reported a statement from Air Force spokesperson Ann Stefanek that read, “It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”
So while the rogue drone simulation reportedly never occurred, is something like this possible? It’s at least on Hamilton’s mind. He said in his clarification that the Air Force would not need to run [the experiment] to recognize it as a plausible outcome.