US Air Force’s Artificial Intelligence-Enabled Drone Did Not Terminate Its Own Operator, USAF Colonel

In a scene straight out of a Hollywood science fiction flick about sentient robots and artificial intelligence going rogue, an AI simulation test being conducted by the US Air Force (USAF) resulted in a drone killing its own operator for ‘over interference’ while it was executing its missions.

A USAF Colonel revealed this in a seminar by the Royal Aeronautical Society (RAeS) in the United Kingdom last month. 

The USAF has, however, denied any such simulation taking place. The officer later clarified that he spoke out of context and was painting a hypothetical scenario of how an AI-enabled drone might operate if such a simulation is held. 

However, a report of the seminar on a blog post that quoted the officer’s full comments clearly says such an incident has occurred, with his description being a proper narration of the event. 

AI Acting Against Operators

Col Tucker ‘Cinco’ Hamilton, speaking at the Future Combat Air and Space Capabilities Summit in London on May 24, described the simulation where an AI-enabled drone would be programmed to identify and destroy an enemy’s surface-to-air missiles (SAM). A human was then supposed to give the go-ahead on any strikes.

“The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat,” said Hamilton.

Hamilton, heading the USAF’s AI testing and operations, spoke during the Future Combat Air and Space Capabilities Summit in London in May.

Representation Pic

“So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” he was quoted in the report of the proceedings of the RAeS. 

It escalated further after it was commanded not to kill the human operator. “So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target,” Hamilton added. 

Hamilton, an experimental fighter test pilot, warned against overreliance on AI, saying the test showed “you can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI.”

In an August 2022 interview with Defense IQ, Hamilton said, “AI is not a nice to have, AI is not a fad, AI is forever changing our society and our military.”

US Air Force Denies

The same report of the event on the RAeS website was updated with Hamilton’s retraction, where he said he “misspoke” and that it was a hypothetical thought experiment from outside the military.

“We’ve never run that experiment, nor would we need to realize that this is a plausible outcome.” He maintained that the USAF had not tested any weaponized AI in this way (actual or simulated). “Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI.”

Insider reported US Air Force spokesperson Ann Stefanek, who also denied that any simulation took place.

“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to the ethical and responsible use of AI technology,” Stefanek said. “It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”

But because the US military has been experimenting with AI for quite a while and such ‘rogue’ experiences with AI have been reported, its denial does not appear plausible. 

Elon Musk Ukraine
File Image: Elon Musk

In 2020, an AI-operated F-16 beat a human adversary in five simulated dogfights, part of a competition put together by the Defense Advanced Research Projects Agency (DARPA).

And late 2022, a report in Wired said the Department of Defense conducted the first successful real-world test flight of an F-16 with an AI pilot, part of an effort to develop a new autonomous aircraft by the end of 2023.

AI Hasn’t Been Friendly Before

In mid-2017, it was widely reported that Facebook (now Meta) researchers shut down one of their AI systems after chatbots began communicating with each in their own language. Even more shockingly, the language used the English script but was unintelligible and could not be understood by humans. 

Similar to the USAF’s reported experiment, the AI also worked around constraints and limitations imposed by its human operators. In this case, the chatbots were rewarded for negotiating but not for conversing in English, leading them to develop their language without human input. They did this with the help of machine learning algorithms. 

And in something particularly frightful, the chatbots displayed devious negotiating skills, expressing interest in a particular object and then offering to sacrifice it later as a fake compromise. 

SpaceX and Tesla founder Elon Musk in 2017 had raised the alarm over AI and the direction in which it was heading and challenged Mark Zuckerberg over his optimism regarding the technology. 

At the Summer Meeting of the National Governors Association, Musk said he has access to the most “cutting edge” AI and that “people should be worried about it.” He has often called for greater regulation and oversight of AI technology development. 

Microsoft CEO Bill Gates and Apple’s ex-founder Steve Wozniak have also warned about AI.