
The Law and Artificial Intelligence in the Defence Sector

By Yasmin Underwood, defence consultant and member of the National Association of Licensed Paralegals (NALP)

Any technology can go wrong. In most cases, it is relatively simple to assign responsibility, but with artificial intelligence it is far from simple. Who should be held accountable if and when something goes wrong with an AI system? The developer? The owner of the technology? The end user?

Unfortunately, the nature of most modern AI systems (layers of interconnected nodes designed to process and transform data in a hierarchical manner) prevents anyone from establishing the exact reasoning behind a system’s predictions or decisions. This makes it almost impossible to assign legal responsibility in the event of an accident or error caused by the system.
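To make this concrete, here is a minimal sketch in Python (my own illustration, not drawn from any system cited in this article) of such a network: a single hidden layer of interconnected nodes trained on a trivial task. The trained model is accurate, but its only record of ‘reasoning’ is a matrix of numbers.

```python
# A minimal sketch (illustrative only) of why neural networks are hard to
# audit: after training, the "reasoning" exists only as numeric weights
# with no human-readable rationale attached.
import numpy as np

rng = np.random.default_rng(0)

# Toy task: decide whether the sum of two inputs exceeds 1.
X = rng.random((200, 2))
y = (X.sum(axis=1) > 1.0).astype(float).reshape(-1, 1)

# One hidden layer of interconnected nodes, as described above.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):  # plain gradient descent on cross-entropy loss
    h = sigmoid(X @ W1 + b1)   # hidden layer transforms the data...
    p = sigmoid(h @ W2 + b2)   # ...and the output layer decides
    grad_out = (p - y) / len(X)
    grad_h = grad_out @ W2.T * h * (1 - h)
    W2 -= h.T @ grad_out
    b2 -= grad_out.sum(0)
    W1 -= X.T @ grad_h
    b1 -= grad_h.sum(0)

# The model is accurate, but its "explanation" is just numbers:
print("accuracy:", ((p > 0.5) == y).mean())
print("hidden-layer weights:\n", W1.round(2))
```

Scaled up to millions of weights, this opacity is exactly what frustrates any attempt to reconstruct, after the fact, why a system made a particular prediction.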


Unease about AI in defence

The rise in the use of autonomous and unmanned vehicles (commonly known as drones) in both military and commercial settings has been accompanied by heated debate over how legislation can keep pace with such rapid advances in technology, and over the ethical implications of their use in a military setting.

There is a risk that delegating tasks or decisions to AI systems could create a ‘responsibility gap’ between the systems that make decisions or recommendations and the human operators responsible for them. Crimes may go unpunished, and if lawmakers cannot agree on some form of universal legislation, the entire structure of the laws of war, along with its deterrent value, may eventually be significantly weakened.

Bias and discrimination

Although the media might have you believe otherwise, we are nowhere near a world where AI thinks and makes decisions entirely of its own accord. The reality is that AI systems are only as good as the data they are trained on, and while machine learning can produce incredibly powerful tools, it is not immune to bad data or human tampering, whether that means flawed or incomplete training data, technological limitations, or misuse by bad actors. It is all too easy to embed unconscious biases into decision-making, and without legislation addressing how these biases can be mitigated or avoided, there is a risk that AI systems will perpetuate discrimination or unequal treatment; the sketch below makes this concrete.
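To illustrate how bias enters through data rather than through the algorithm itself, here is a minimal hypothetical sketch (the task, groups and thresholds below are invented purely for illustration): a simple model trained on historically skewed approval decisions learns to reproduce the skew.

```python
# A minimal hypothetical sketch of how skewed training data yields a skewed
# model: a classifier trained on historical decisions that favoured group A
# simply learns to keep favouring group A.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
group = rng.integers(0, 2, n)   # 0 = group A, 1 = group B (invented groups)
skill = rng.random(n)           # the attribute that *should* matter

# Historical labels: equally skilled candidates from group B were approved
# far less often -- the bias lives in the data, not in the algorithm.
approved = (skill > np.where(group == 0, 0.5, 0.8)).astype(float)

# Fit a simple logistic model on (group, skill, bias term) by gradient descent.
X = np.column_stack([group, skill, np.ones(n)])
w = np.zeros(3)
for _ in range(5000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - approved) / n

# Same skill, different group -> different predicted approval probability.
for g in (0, 1):
    prob = 1 / (1 + np.exp(-(np.array([g, 0.7, 1.0]) @ w)))
    print(f"group {'A' if g == 0 else 'B'}, skill 0.7 -> approval prob {prob:.2f}")
```

No malicious logic is needed; the model faithfully learns the discrimination already present in its training data, which is precisely why mitigation has to happen at the data and design stage.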

To alleviate these issues, industry experts have been considering an ‘ethics by design’ approach to developing AI. Could legal responsibility be moved up the chain to the developer? Should there be rules of development as well as rules of engagement?


Where do we go from here?

In 2021, the European Commission proposed the first-ever legal framework on AI, addressing the risks that artificial intelligence poses. The proposed regulation aims to establish harmonised rules for the development, deployment and use of AI systems in the European Union, and takes a risk-based approach that separates AI into four categories: unacceptable risk, high risk, limited risk and minimal risk, with each category subject to a different level of regulatory scrutiny and compliance requirements.

This innovative new framework led to the 2022 proposal for an ‘AI Liability Directive’, which aims to address the specific difficulties of legal proof and accountability linked to AI. Although the directive is still only a proposal at this stage, it offers a glimmer of hope to legal professionals and to victims of AI-induced harm by introducing two primary safeguards:

  1. Presumption of Causality: if a victim can show that someone was at fault for failing to comply with a relevant obligation, and that a causal link with the AI system’s performance is reasonably likely, the court can presume that the non-compliance caused the damage.
  2. Access to Relevant Evidence: victims of AI-related damage may ask the court to order disclosure of information about high-risk AI systems. This should help to identify the person or persons who may be held liable, and potentially provide insight into what went wrong.

While one might argue that this proposed legislation would not solve all our legal issues, it is certainly a step in the right direction.

In addition, there are policy papers such as the UK’s 2022 Defence Artificial Intelligence Strategy and the US Department of Defense’s 2022 Responsible Artificial Intelligence Strategy and Implementation Pathway.

These provide important guidance to both tech developers and their military end users on adhering to international law and upholding ethical principles in the development and use of AI technology across defence. They also present opportunities for data scientists, engineers and manufacturers to adopt ethics-by-design approaches when creating new AI technology, aligning development with the relevant legal and regulatory frameworks so that AI and autonomous systems are developed and deployed in defence in a manner that is safe, effective, and consistent with legal and ethical standards.


ABOUT THE AUTHOR

Yasmin Underwood is a defence consultant at Araby Consulting and a member of the National Association of Licensed Paralegals (NALP), a non-profit membership body and the only paralegal body recognised as an awarding organisation by Ofqual (the regulator of qualifications in England). Through its centres around the country, NALP offers accredited and recognised professional paralegal qualifications to those seeking a career as a paralegal professional.

Web: http://www.nationalparalegals.co.uk

Twitter: @NALP_UK

Facebook: https://www.facebook.com/NationalAssocationsofLicensedParalegals/

LinkedIn: https://www.linkedin.com/company/national-association-of-licensed-paralegals/

References

Ambitious, safe, responsible: Our approach to the delivery of AI-enabled capability in Defence. UK Ministry of Defence, June 2022. Available at: https://www.gov.uk/government/publications/ambitious-safe-responsible-our-approach-to-the-delivery-of-ai-enabled-capability-in-defence/ambitious-safe-responsible-our-approach-to-the-delivery-of-ai-enabled-capability-in-defence

Artificial Intelligence and the Future of Warfare. M. L. Cummings, Chatham House, January 2017. Available at: https://www.chathamhouse.org/sites/default/files/publications/research/2017-01-26-artificial-intelligence-future-warfare-cummings-final.pdf

Defence Artificial Intelligence Strategy. UK Ministry of Defence, June 2022. Available at: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1082416/Defence_Artificial_Intelligence_Strategy.pdf

A pro-innovation approach to AI regulation. Department for Science, Innovation & Technology, March 2023. Available at: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1146542/a_pro-innovation_approach_to_AI_regulation.pdf

Government refuses to rule out development of lethal autonomous weapons in parliamentary debate. United Nations Association – UK, November 2021. Available at: https://una.org.uk/news/government-refuses-rule-out-development-lethal-autonomous-weapons-parliamentary-debate

AI’s black box problem: Challenges and solutions for a transparent future. Shiraz Jagati, Cointelegraph, May 2023. Available at: https://cointelegraph.com/news/ai-s-black-box-problem-challenges-and-solutions-for-a-transparent-future

7 Times Machine Learning Went Wrong. Megan Ellis, MakeUseOf, April 2023. Available at: https://www.makeuseof.com/examples-machine-learning-artificial-intelligence-went-wrong/

UN fails to agree on ‘killer robot’ ban as nations pour billions into autonomous weapons research. The Conversation, December 2021. Available at: https://theconversation.com/un-fails-to-agree-on-killer-robot-ban-as-nations-pour-billions-into-autonomous-weapons-research-173616

Convenient Killing: Armed Drones and the ‘Playstation’ Mentality. Chris Cole, Mary Dobbing and Amy Hailwood, Drone Wars UK, September 2010. Available at: https://dronewarsuk.files.wordpress.com/2010/10/conv-killing-final.pdf

The EU Artificial Intelligence Act. June 2023. Available at: https://www.artificial-intelligence-act.com/

Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts. European Commission, Brussels, 21 April 2021. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206

Artificial Intelligence Liability Directive. EU Legislation in Progress briefing, European Parliamentary Research Service, February 2023. Available at: https://www.europarl.europa.eu/RegData/etudes/BRIE/2023/739342/EPRS_BRI(2023)739342_EN.pdf

Responsible Artificial Intelligence Strategy and Implementation Pathway. U.S. Department of Defense, June 2022. Available at: https://media.defense.gov/2022/Jun/22/2003022604/-1/-1/0/Department-of-Defense-Responsible-Artificial-Intelligence-Strategy-and-Implementation-Pathway.PDF
