Artificial Intelligence (AI) holds great potential for the military domain but is also seen as prone to data bias and as lacking transparency and explainability. To advance the trustworthiness of AI-enabled systems, a dynamic approach to the development, deployment and use of AI systems is required. This approach, when it incorporates ethical principles such as lawfulness, traceability, reliability and bias mitigation, is called 'Responsible AI'. This article describes the challenges of using AI responsibly in the military domain from a human factors and ergonomics perspective. Many of the ironies of automation originally described by Bainbridge still apply to AI, but there are also unique challenges and requirements to consider, such as a greater up-front emphasis on ethical risk analyses and on validation and verification, as well as on moral situation awareness during the deployment and use of AI in military systems.
Keywords: Artificial Intelligence; ethics; explainability; human-machine teaming; military systems; testing and evaluation; transparency; validation and verification.
‘Responsible AI’ is a relatively novel transdisciplinary field that incorporates ethical principles into the development and use of AI in military systems. I describe the prospects and challenges of Responsible AI from a human factors and ergonomics perspective. In particular, there is a need for new methods for the testing and evaluation, validation and verification, and explainability and transparency of AI, as well as for new forms of Human-AI Teaming.