A top US Air Force colonel has backpedalled on his viral claim that an AI drone chose to “kill its operator” to complete an experimental mission.
During the Future Combat Air & Space Capabilities summit in London last week, Colonel Tucker Hamilton, chief of AI testing, described a drone that “killed” its operator while using lateral thinking to solve a problem.
“We were training it in simulation to identify and target a SAM [surface-to-air missile] threat. And then the operator would say yes, kill that threat,” he said.
“The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator.”
He added: “It killed the operator because that person was keeping it from accomplishing its objective.”
Hamilton has since claimed he “mis-spoke” during the conference.
“We’ve never run that experiment, nor would we need to in order to realise that this is a plausible outcome,” Hamilton said in a statement.
The US Air Force has also denied that the experiment ever took place.
Ann Stefanek, a spokesperson for the US Air Force, told Insider: “The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology.
“It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”