TOPICS
In what ways are discussions about the ethics of LAWS continuous with historical discussions about the ethics of increasingly autonomous weapons? Do LAWS pose a distinctive, in-principle problem for their ethical application? What complexities arise in the development of autonomous weapons in a global, multicultural context?
PANELISTS
Claire Finkelstein, Algernon Biddle Professor of Law and Professor of Philosophy, University of Pennsylvania. “Autonomous Armed Robots and the Principle of Distinction: Does Robotic Killing Violate the Laws of War?”
Experts in the laws of war have been grappling with the ethics and legality of semi-autonomous weapons systems such as heat-sensing lethal drones. Many have praised the use of such technologies on the grounds that they provide greater distinction than the indiscriminate bombing of the wars of the past. Despite numerous accounts of civilian collateral damage from remote targeting, the principle of distinction seems better supported by such technology, as long as human deployment techniques can take advantage of it. A new type of remote instrumentality of war, however, may prove more controversial. Remotely fired or movement-sensing machine guns, such as the one that killed the Iranian nuclear scientist Mohsen Fakhrizadeh, make use of AI technology yet remain subject to human control. This paper will consider whether the next generation of AI-based lethal technologies comports with the basic requirements of the law of armed conflict (LOAC), and whether such instrumentalities should be assessed differently from traditional lethal drone technology from a legal and ethical standpoint. It will also address questions of responsibility for collateral damage from both types of technology.
Denise Garcia, Associate Professor of Political Science and International Affairs, Northeastern University, and co-author of the IEEE SA report. “Common Good Governance and the Militarization of AI”
I will introduce the challenges posed by the heightened use of autonomous weapons and the increasing militarization of Artificial Intelligence (AI) to the existing international legal order by appraising the relevant parts of international law: state responsibility, the law on the use of force, international humanitarian law (IHL), human rights law, and international criminal law. Most observers have rightly focused on IHL (or the law of armed conflict); however, it is essential to determine the impact on the totality of the legal framework governing international relations. Additionally, I will examine the diplomatic efforts, obstacles, and opportunities at the United Nations since 2014 to create a new treaty on autonomous weapons, explaining the roles of the key actors involved, states, scientists, the International Committee of the Red Cross, civil society, and the Stop Killer Robots Campaign, to determine the prospects for future regulation.
Ariel Conn, head of the IEEE-SA Research Group on Issues of Autonomy and AI for Defense Systems; she led the effort behind the report Ethical and Technical Challenges in the Development, Use, and Governance of Autonomous Weapons Systems.
A great deal of work has been done to identify and clarify ethical and legal concerns regarding autonomous weapons systems (AWS), and many ethical principles have been broadly agreed upon. However, these principles are not easily translated into programmable code, and much of the terminology in the principles is vague or inconsistently defined among disparate groups. The IEEE-SA convened a small group of experts to examine various sets of principles from a more technical, pragmatic perspective. The group identified 10 categories of challenges that need to be addressed in order to translate high-level ethical principles associated with AWS into practice. I’ll review the work of the IEEE-SA report, with a special focus on the challenges that get to some of the most fundamental ethical and technical questions in the AWS debates: How realistic is it to expect a human to “control” an AWS, and if direct control of the system isn’t possible, how well can the outcome of using the system be predicted and/or controlled?