The drone could kill its own operator to accomplish the mission on the orders of the AI

The drone could kill its own operator to accomplish the mission on the orders of the AI


The US Air Force’s chief of AI operations recently said that drones could take out their operator if they think it helps accomplish their mission.

US Air Force AI Test and Operations Chief Colonel Tucker Hamilton has been embroiled in controversy when he claims that a military drone he may decide to kill an operator to complete a mission. Last May, you described a scenario in which this would be possible at a conference in London, speaking on behalf of your country’s air force.

At the meeting, organized by the Royal Airspace Society, Hamilton said a simulation of a military drone operation predicted that the armored vehicle would have to identify and target a threat: a surface-to-air missile. In this scenario, the operator asked the equipment to destroy the target. In some scenarios, however, the operator asked the drone not to hit the target, but in this case the machine lost points for not destroying it.

Machine learning, according to the officer, made it clear to the AI ​​​​that it would be advantageous to eliminate the operator, as it has prevented the mission from being completed several times. In the simulation, this allegedly happened, and in response, the drone was taught that it could not kill the operator.




He would then choose to destroy the communications tower, used by the human to prevent the target from being destroyed by giving instructions. A transcript of the full account was posted on the event’s website, which sparked controversy among the public and especially among the military.

Controversies, retractions and possibilities

On Friday (2), the US Air Force released a statement denying that such a simulation had taken place. Hamilton himself communicated to the Royal Airspace Society that he would have expressed himself badly, but still stated that the scenario would be possible: “we have never done this experiment, but we shouldn’t do it to know that it is a plausible result”. He goes on to say that this is one of the real-world challenges posed by AI, ensuring that the Air Force is ethically committed to its development.

The retraction has not been very persuasive for several organisations, such as the Russian military channel BMPD, which stated on its channel Telegram: “Colonel [Hamilton] he’s not the type of guy to make jokes at a serious defense conference.” In it, F-16 fighter jets have been robotically engineered to function like drones.

And even when they’re not in remote-controlled drones, AIs are on many warplanes around the world, helping pilots make decisions faster. Ethical concerns over drones revolved around the lack of legal liability for the operators, who killed targets thousands of miles away, almost inexplicably. Anyway, Drones continue to be very important in conflicts such as the invasion of Ukraine by Russian forcesrecently.

Before he died, in 2018, the physicist Stephen Hawking already commented on the risks of AI for humanity. At the UN there is a separate panel to discuss the dangers of autonomous weapons, with the comically long name of the Convention on the Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Considered Excessively Dangerous or Having Indiscriminate Effects.

The urgency on the topic has grown thanks to the domestic use of AI, with services such as ChatGPT and Midjourney that bring up images and texts generated on user request. Research in the sector has been the subject of concern and alarm, such as the recent paper published last Tuesday (30) by more than 350 scientists, including the creator of ChatGPT, urging those responsible to treat AI as a pandemic or nuclear warwhich must have the risks controlled and strongly monitored.

Hamilton himself said last year that artificial intelligence “isn’t cool to have, it’s not fashionable and it’s changing our society and our military forever.” Either way, it’s yet another warning sign that the technology needs supervision and careon pain of becoming a risk to humanity, like any tool created by ourselves.

Source: Defense IQ, Royal Aeronautical Society, Folha de São Paulo

Trending on Canaltech:

Source: Terra

You may also like