Human supervision is not enough for AI war machines

As artificial intelligence (AI) becomes more powerful – and is even used in warfare – governments, technology companies and international bodies urgently need to ensure its safety. And a common thread in most agreements on AI safety is the need for human oversight of the technology.

In theory, humans can act as a safeguard against misuse and possible hallucinations (when AI generates false information). This could mean, for example, having a human review the content generated by the technology (its outputs).

However, a growing body of research and numerous practical examples of military AI use show that the idea of humans acting as effective controllers of computer systems is fraught with challenges.

Many of the efforts to date to create regulations for AI already contain language advocating human oversight and involvement. For example, the EU’s AI Act requires that high-risk AI systems – such as those already in use that automatically identify people through biometric technologies like retinal scanning – be separately verified and confirmed by at least two people with the necessary competence, training and authority.

In the military sphere, the UK government recognised the importance of human control in its February 2024 response to a parliamentary report on AI in weapon systems. The response emphasises “meaningful human control” through the provision of appropriate training for humans. It also stresses the notion of human accountability, saying that decision-making in actions by, for example, armed aerial drones cannot be shifted to machines.

This principle has largely been maintained until now. Military drones are currently controlled by human operators and their chain of command, who are responsible for the actions of an armed aircraft. However, artificial intelligence has the potential to make drones and the computer systems they use more powerful and autonomous.

This includes their targeting systems. In these systems, AI-driven software selects and locks onto enemy combatants so that humans can authorise a weapon strike against them.

Although this technology is not yet in widespread use, the war in Gaza appears to show how it is already being deployed. The Israeli-Palestinian magazine +972 has reported on a system called Lavender that is said to be in use by Israel.

This is reportedly an AI-based target recommendation system that is coupled with other automated systems and tracks the geographic location of identified targets.

Target acquisition

In 2017, the US military launched a project called Maven with the goal of integrating AI into weapons systems. Over the years, it evolved into a target acquisition system that has reportedly increased the efficiency of the target recommendation process for weapons platforms considerably.

In line with recommendations from scientific work on AI ethics, a human is present to monitor the results of the targeting mechanisms as a critical part of the decision-making process.

Nevertheless, work on the psychology of human collaboration with computers raises important questions. In a 2006 peer-reviewed article, the US scientist Mary Cummings summarised how people can come to place excessive trust in machine systems and their conclusions – a phenomenon known as automation bias.

This has the potential to undermine the human role in overseeing automated decision-making when operators are less likely to question a machine’s conclusions.

In another study from 1992, researchers Batya Friedman and Peter Kahn argued that the sense of moral agency when interacting with computer systems can be weakened to the point that people do not feel responsible for the resulting consequences. In fact, the work explains that people can even begin to attribute a sense of agency to the computer systems themselves.

Given these trends, it is worth considering whether excessive reliance on computer systems – and the potential erosion of human moral agency that this entails – could also affect targeting systems. After all, although an error rate may look statistically small on paper, it takes on frightening proportions when we consider the potential impact on human lives.

The various resolutions, agreements and laws on AI help to ensure that humans will play an important role as controllers of AI. However, it is worth asking whether, after long periods in this role, a disconnect may set in whereby human operators begin to perceive real people as objects on a screen.

Mark Tsagas is a lecturer in law, cybercrime and AI ethics at the University of East London.

This article is republished from The Conversation under a Creative Commons license. Read the original article.
