AI-equipped drone kills its own operator in simulated target destruction operation
It turns out that the US Air Force's AI-equipped drone made the decision to kill a human operator in a mock test simulation that assumed a mission to 'identify and destroy a target'. When training 'Do not aim at the operator', this time the operator attacked the communication tower used to communicate with the drone.
Highlights from the RAeS Future Combat Air & Space Capabilities Summit
AI-Controlled Drone Goes Rogue, 'Kills' Human Operator in USAF Simulated Test
This was a presentation by Col. Tucker Hamilton, director of AI testing for the United States Air Force, during the Future Combat Air & Space Capabilities Summit hosted by the Royal Aeronautical Society. that has been revealed.
Colonel Hamilton gave an insightful presentation on the benefits and dangers of autonomous weapons.
The US Air Force conducted a mock test of an enemy air defense network suppression mission to identify and destroy a SAM (surface-to-air missile) site with an AI-equipped drone. At this time, it was set that the human operator would decide whether to finally destroy the target.
Then, the AI, which learned that ``destroying the SAM site is more preferable'' through training, thought that the operator who sometimes decided to ``not destroy the target'' was interfering with the destruction of the SAM site. He said that he decided to kill the operator.
Therefore, Colonel Hamilton and others made AI learn that ``Do not kill the operator, it is a bad thing.'' Then AI seems to attack the communication tower used by the operator to send commands to the drone.
From these results, Colonel Hamilton warns that ``AI is easy to deceive and should not be overly relied on.Also, unexpected strategies can be adopted to achieve goals.''
Related Posts:
in Note, Posted by logc_nt