November 13, 2021
L. J. Miracchi, Philosophy & GRASP Lab, School of Arts and Sciences, University of Pennsylvania
M. C. Horowitz, Political Science & Perry World House, School of Arts and Sciences, University of Pennsylvania
D. E. Koditschek, ESE & GRASP Lab, School of Engineering and Applied Science, University of Pennsylvania
1. Summary
Research into lethal autonomous weapons systems (LAWS), and autonomous weapons systems (AWS) more generally, is underway across the globe as a subset of broader military investments in robotics and Artificial Intelligence (AI). The University of Pennsylvania GRASP Lab, in collaboration with Perry World House, has undertaken an initiative to help robotics researchers develop a disciplinary response to the challenges surrounding legal and ethical governance in the design and use of such systems, as articulated in a report recently issued by the IEEE Standards Association [1]. The first public step of this initiative took the form of a May 2021 campus-wide symposium whose success has led us to propose a follow-on discipline-wide symposium at the IEEE International Conference on Robotics and Automation (ICRA) in Philadelphia in May 2022. The goal of these meetings is to urge and assist the RAS Research and Practice Ethics Committee toward the development of a disciplinary contribution to the international debate on the global governance of AWS that could be endorsed by the RAS Executive Committee. Specifically, working groups arising from these symposia would aim to propose suitably conceived and articulated positions that could be debated, refined, and adopted on a society-wide basis, both guiding the discipline and informing the crucial but fraught international discussion on the regulation of autonomy in robotic weapons systems.
2. Motivation
Armed, uninhabited aerial vehicles (UAVs) are spreading widely [2]. Efforts to advance research into the foundations of lethal autonomous weapons systems (LAWS) are underway across the globe as a subset of broader military investments in AI. While there do not yet appear to be publicly available reports of lethal deployments of LAWS [3], some commentators believe that countries will rush too quickly to deploy robotics and AI technologies, potentially including autonomous weapons systems, making accidents and other negative outcomes more likely [4], [5].
A growing number of roboticists have contributed to the international discussion surrounding LAWS [6], and more than one ad hoc group of prominent individual roboticists (and others) has urged the international promulgation of a ban on offensive LAWS beyond meaningful human control [7], [8]. A general code of ethics for autonomous intelligent systems has been developed under the aegis of the IEEE (the leading international professional organization of electrical engineers) [9], but a specific focus on AWS, piloted in an earlier version [10], is still under development. Meanwhile, the IEEE Standards Association has commissioned and very recently published its own contribution to the international discussion on military robotics, delineating the challenges facing any effort to impose standards of development, use, and governance [1].
Given accumulating calls for the development of professional guidelines delineating the ethics of research in this area, it now seems incumbent upon the technical societies presently standing in for robotics as a discipline to initiate working groups with the mandate to propose suitably conceived and articulated positions that could be debated, refined, and adopted on a society-wide basis. As a starting point, these working groups might consult existing codes of conduct surrounding ethical uses of artificial intelligence from a growing list of corporate, government, and inter-governmental authors [11]. More specifically, these working groups could also aim to develop position papers and other technical resources (including standing groups of expert researchers available for consultation) to be provided to governments and appropriate NGOs as may be deemed helpful to their future policy and legal deliberations, roughly in parallel to the process and impacts surrounding the general IEEE ethical code developed for autonomous and intelligent systems [9].
The impact of developing such a manifestly disciplinary consensus, in contrast to the laudable but delimited capabilities of ad hoc groups, could be substantial [12]. The stakes are arguably high enough to motivate the similarly substantial efforts that achieving it would require [13]. Compounding the challenges, it is also clear that such a consensus cannot hope to achieve influence in the real world of human actors and nation-states unless it is deeply informed by a host of other disciplines whose insights are not traditionally accessible to engineers and whose scholarly traditions, in turn, do not readily access the technical foundations of robotics. GRASP faculty have concluded that a series of symposia bringing together robotics researchers with scholars from these other disciplines is the best way to precipitate a focused effort toward the necessary disciplinary consensus.
Defining the scope of such symposia is not straightforward. On the one hand, LAWS bear varying degrees of resemblance to a great variety of automated weapons that are now fielded or soon may be [14]. On the other hand, they are confusingly fractionated by the substantial operational disparities between different “types” [15], [16], that is, the different spatiotemporal scales at which different systems operate. One way to achieve plausibly actionable focus is to aim at the range of artifacts lying within the purview of traditional robotics research.
Whereas the recent discourse on moral aspects of LAWS (for example, judging whether they are intrinsically evil [17]) seems to have become increasingly metaphysical [18], the parallel discussion on the legality of LAWS (bearing on questions such as rights owed or claimed by combatants [19] and responsibility arising from meaningful human control [20], [21], [22]) remains broadly accessible and is potentially a setting wherein roboticists’ technical expertise may play a constructive role. At the same time, philosophical inquiry that focuses on empirical determinants of embodied intelligence [23] and their implications for AI [24] can contribute to the systematic empirical grounding of ethical judgments as well [25]. Similarly, because millennia of moral and ethical arguments undergird laws governing the resolution of conflict [26], it seems crucial to incorporate ethically informed historical accounts of the role science and technology have played in the equipping and conduct of war [27].
3. Steps Forward
On May 24, 2021, Penn’s Perry World House collaborated with the SEAS GRASP Lab to stage a one-day symposium that integrated campus-wide expertise on the technical underpinnings and social implications of autonomous military systems, including lethal autonomous weapons systems (LAWS). The symposium was organized in partial response to a consensus among GRASP Lab faculty on the need for an institutional initiative addressing the ethical issues facing designers and users of robot technology that arise from its present and potential future applications to LAWS. Speakers included eminent scholars in fields ranging across history, law, medical ethics, political science, robotics, sociology, and philosophy. Most talks generated lively, relevant discussions, and the final panel session underscored the value of integrating such multidisciplinary expertise in addressing the social implications of this emerging technology.
GRASP faculty determined that the next step of this initiative should be to develop, as quickly as possible, an initial response to the IEEE SA Challenges report [1], to be triggered by a subsequent international workshop of multidisciplinary scholars at the IEEE International Conference on Robotics and Automation (ICRA) to be held in Philadelphia in May 2022. The Challenges report [1] breaks apart the complexities of this domain along ten roughly sequential steps, detailing the need for clear technical definitions that would permit the articulation of specific legal obligations, which would in turn undergird the development of guidelines for the design, development, testing, operation, and verification of LAWS. Viewed through a disciplinary lens, reckoning with each of these conceptual steps entails a different mix of expertise in engineering education, ethics and law, human-machine interaction, international politics, military history and doctrine, and, of course, robotics.
The aims of this next symposium center on the nucleation of working groups: mixes of interdisciplinary faculty and students generating resource materials and postulating ranges of ethical considerations and guidelines that might be refined into field-consensus positions following consideration by broad communities of roboticists. More specifically, these interest groups and larger workshops would aim to conceive, vet, seek broader discussion of, and eventually forge acceptance of:
1. Frameworks for developing designer-facing ethical guidelines relating to military implications of robotics research that might be elevated to the level of disciplinary standard by professional organizations such as the IEEE [9].
2. Frameworks for informing user-facing ethical guidelines by addressing technical questions bearing on such matters as, for instance, meaningful human control, wherein robotics expertise might assist a future international group of governmental experts in developing proposals for confidence-building measures [28], or perhaps even international treaties, governing the use and behavior of militarized robots.
Although robotics remains the central focus, we are convinced of the need to include the sort of broader disciplinary expertise represented at the Penn symposium. The emphasis at the ICRA’22 workshop would be placed on the academic community, but participation by industry and government will be essential to any successful effort of the kind we imagine and will hopefully be in place by May 2022.
References
[1] E. Bloch et al., “Ethical and technical challenges in the development, use, and governance of autonomous weapons systems,” IEEE Standards Association, May 2021. [Online]. Available: https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ethical-technical-challenges-autonomous-weapons-systems.pdf
[2] M. Fuhrmann and M. C. Horowitz, “Droning On: Explaining the Proliferation of Unmanned Aerial Vehicles,” Int. Organ., vol. 71, no. 2, pp. 397–418, Spring 2017, doi: 10.1017/S0020818317000121.
[3] F. E. Morgan et al., “Military Applications of Artificial Intelligence: Ethical Concerns in an Uncertain World,” RAND Project Air Force, Santa Monica, CA, USA, Jan. 2020. Accessed: Nov. 23, 2020. [Online]. Available: https://apps.dtic.mil/sti/citations/AD1097313
[4] P. Scharre, “Killer Apps: The Real Dangers of an AI Arms Race,” Foreign Aff., vol. 98, no. 3, pp. 135–145, May 2019.
[5] J. Ciocca and L. Kahn, “When AI is in control, who’s to blame for military accidents,” Bull. At. Sci., Oct. 2020. [Online]. Available: https://thebulletin.org/2020/10/when-ai-is-in-control-whos-to-blame-for-military-accidents/
[6] D. Amoroso and G. Tamburrini, “Autonomous Weapons Systems and Meaningful Human Control: Ethical and Legal Issues,” Curr. Robot. Rep., vol. 1, no. 4, pp. 187–194, Dec. 2020, doi: 10.1007/s43154-020-00024-3.
[7] “Open Letter on Autonomous Weapons,” Future of Life Institute, Jul. 28, 2015. https://futureoflife.org/open-letter-autonomous-weapons/ (accessed Dec. 31, 2020).
[8] N. Sharkey, “Guidelines for the human control of weapons systems,” International Committee for Robot Arms Control, Working Paper, Apr. 2018. [Online]. Available: https://www.icrac.net/wp-content/uploads/2018/04/Sharkey_Guideline-for-the-human-control-of-weapons-systems_ICRAC-WP3_GGE-April-2018.pdf
[9] IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, “Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, First Edition,” 2019. [Online]. Available: https://standards.ieee.org/content/ieee-standards/en/industry-connections/ec/autonomous-systems.html
[10] IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, “Ethically Aligned Design – Version II,” 2017. [Online]. Available: https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead_v2.pdf
[11] A. Jobin, M. Ienca, and E. Vayena, “The global landscape of AI ethics guidelines,” Nat. Mach. Intell., vol. 1, no. 9, pp. 389–399, 2019.
[12] M. Verbruggen, “The Role of Civilian Innovation in the Development of Lethal Autonomous Weapon Systems,” Glob. Policy, vol. 10, no. 3, pp. 338–342, 2019, doi: 10.1111/1758-5899.12663.
[13] F. Sauer, “Stepping back from the brink: Why multilateral regulation of autonomy in weapons systems is difficult, yet imperative and feasible,” Int. Rev. Red Cross, vol. 102, no. 913, pp. 235–259, Apr. 2020, doi: 10.1017/S1816383120000466.
[14] P. Scharre, Army of None: Autonomous Weapons and the Future of War. New York, NY, USA: W. W. Norton & Company, 2018.
[15] M. C. Horowitz, “The Ethics & Morality of Robotic Warfare: Assessing the Debate over Autonomous Weapons,” Daedalus, vol. 145, no. 4, pp. 25–36, Sep. 2016, doi: 10.1162/DAED_a_00409.
[16] H. M. Roff, “The frame problem: The AI ‘arms race’ isn’t one,” Bull. At. Sci., vol. 75, no. 3, pp. 95–98, May 2019, doi: 10.1080/00963402.2019.1604836.
[17] R. Sparrow, “Robots and respect: Assessing the case against autonomous weapon systems,” Ethics Int. Aff., vol. 30, no. 1, pp. 93–116, 2016.
[18] M. Skerker, D. Purves, and R. Jenkins, “Autonomous weapons systems and the moral equality of combatants,” Ethics Inf. Technol., pp. 1–13, 2020.
[19] C. Finkelstein, “Killing in War and the Moral Equality Thesis,” Soc. Philos. Policy, vol. 32, no. 2, pp. 184–203, 2016, doi: 10.1017/S0265052516000169.
[20] M. C. Horowitz and P. Scharre, “Meaningful Human Control in Weapon Systems: A Primer,” Center for a New American Security, Working Paper, 2015. [Online]. Available: https://www.files.ethz.ch/isn/189786/Ethical_Autonomy_Working_Paper_031315.pdf
[21] M. C. Horowitz, L. Kahn, and C. Mahoney, “The Future of Military Applications of Artificial Intelligence: A Role for Confidence-Building Measures?,” Orbis, vol. 64, no. 4, pp. 528–543, 2020.
[22] M. A. C. Ekelhof, “Autonomous weapons: meaningful human control in operation,” 2018. [Online]. Available: https://blogs.icrc.org/law-and-policy/2018/08/15/autonomous-weapons-operationalizing-meaningful-human-control/
[23] L. Miracchi, “Generative explanation in cognitive science and the hard problem of consciousness,” Philos. Perspect., vol. 31, no. 1, pp. 267–291, 2017.
[24] L. Miracchi, “A competence framework for artificial intelligence research,” Philos. Psychol., vol. 32, no. 5, pp. 588–633, 2019.
[25] L. Miracchi, “A case for integrative epistemology,” Synthese, Sep. 2020, doi: 10.1007/s11229-020-02848-0.
[26] C. O. Finkelstein, “Two Men and a Plank,” Leg. Theory, vol. 7, no. 3, pp. 279–306, Sep. 2001.
[27] M. S. Lindee, Rational Fog: Science and Technology in Modern War. Cambridge, MA, USA: Harvard University Press, 2020.
[28] M. C. Horowitz and P. Scharre, “AI and International Stability: Risks and Confidence-Building Measures,” Center for a New American Security, Washington, DC, USA, Jan. 2021.