Schedule
| Time | Session | Slides |
| --- | --- | --- |
| 9:00am | Welcome & Introduction: Mark Yim, Asa Whitney Professor of Mechanical Engineering; Director, GRASP Lab | |
| 9:15am | Michael C. Horowitz, Richard Perry Professor and Director, Perry World House | Slide deck of presentation |
| 9:45am | Susan Lindee, Janice and Julian Bers Professor of History and Sociology of Science | Slide deck of presentation |
| 10:15am | Claire Finkelstein, Algernon Biddle Professor of Law and Professor of Philosophy | |
| 10:45am | Break | |
| 11:00am | Vijay Kumar, Nemirovsky Family Dean of Penn Engineering | |
| 11:30am | Jonathan D. Moreno, David and Lyn Silfen University Professor; Professor of Medical Ethics and Health Policy and of History and Sociology of Science; Penn Integrates Knowledge (PIK) Professor | Slide deck of presentation |
| 12:00pm | Lunch | |
| 1:30pm | C.J. Taylor, Raymond S. Markowitz President’s Distinguished Professor of Computer and Information Science; Associate Dean, Diversity, Equity and Inclusion | Slide deck of presentation |
| 2:00pm | Jessa Lingel, Associate Professor of Communication | Slide deck of presentation |
| 2:30pm | Break | |
| 2:45pm | Lisa Miracchi, Assistant Professor of Philosophy | Slide deck of presentation |
| 3:15pm | Daniel E. Koditschek, Alfred Fitler Moore Professor of Electrical and Systems Engineering | Slide deck of presentation |
| 3:45pm | Break | |
| 4:00pm | Panel Discussion; Moderators: Dr. Mark Yim & Dr. Michael Horowitz | |
| 5:00pm | Adjournment | |
Abstracts
9:15am – 9:40am
Michael C. Horowitz, Richard Perry Professor and Director, Perry World House
This talk will review the prospects for international agreements relating to constraints that might be imposed upon the designers and users of lethal autonomous weapons (LAWS). Although the advent of these new technologies seems to introduce new features to the mix of challenges and opportunities, future prospects for successful negotiations are usefully informed by examining the history of past efforts, focusing particularly on aspects of military technologies that have proven to be escalating or de-escalating over the last century of industrialized human conflict. The talk will conclude with thoughts about the ways government agencies and experts seeking international accords limiting the dangers of LAWS might benefit from engineers’ technical insights.
9:45am – 10:10am
Susan Lindee, Janice and Julian Bers Professor of History and Sociology of Science
Technologies vex social orders, and none more so than improving weapons systems, which increase asymmetrical risks while deskilling their use (for example, affording a peasant the capacity to kill a samurai). UAVs further accelerate these historical disparities of asymmetry, skill, and specificity (the targeting of specific individuals vs. countries, cities, or forts), as they move ever more baroque levels of interdisciplinary expertise into the machine itself. In this respect they resonate with earlier technologies that also challenged expectations about risk and vulnerability, though they also raise new questions about autonomy and violence. This presentation explores the rise of robotics and AI technologies with special attention to the medical and psychological elements “built in” to their use, as these apparently autonomous machines operate as surrogate minds. Unlike earlier technologies such as muskets, submarines, or even nuclear weapons, all of which challenged the social order in various ways, robotics/AI technologies literally mean to replace the individual body of the soldier, in ways that undermine the logic and moral grounding of modern war. As products of information and computing, they furthermore operate in the precise domain where so much daily life unfolds, and where so much vulnerability is therefore situated.
10:15am – 10:40am
Claire Finkelstein, Algernon Biddle Professor of Law and Professor of Philosophy
This talk will examine the status of lethal autonomous weapons (LAWS) under the Law of Armed Conflict (LOAC). From the standpoint of military effectiveness, one of the most potentially useful aspects of LAWS is their ability to enhance follow-through on threats and reduce response time. A response to a nuclear strike is a frequently cited example, but a more common instance would be the potential for accelerated responses to cyber-attacks, where speed of response is also critical. Removing the human element of indecision and hesitation from our military responses may provide a significant advantage in establishing credibility with our adversaries and deterring first strikes. Any such autonomous or automated decision to engage in preemptive force, however, poses complications from the standpoint of the legality of our responses under LOAC. In this talk, I will examine the applicability of traditional principles of jus ad bellum to autonomy and automaticity with regard to the initiation of force. I will analyze situations in which a manual military response would violate the proportionality principle under jus ad bellum, but in which rendering the response autonomous may allow us to assess proportionality from the point of design rather than the point of deployment. This would allow some actions that would otherwise violate proportionality to pass legal, and perhaps ethical, muster. But is this legitimate? If we conclude that this sort of “bootstrapping” with regard to proportionality violates legal and moral principles, that would provide a basis for insisting on a human in the loop at the point of deployment, and not just at the point of design.
11:00am – 11:25am
Vijay Kumar, Nemirovsky Family Dean of Penn Engineering
A comprehensive engineering education requires a balance of technical training in scientific foundations and design experience, in concert with ethical foundations bearing on the social obligations of tool builders and technology creators. This talk will introduce the Penn Symposium on the Social Implications of Autonomous Military Systems as a timely component of the latter aspect of engineering pedagogy, seen from the perspective of an engineering school dean. I will comment on my three decades of experience as a robotics researcher supported by a mix of National Science Foundation, Department of Defense and industrial funding. I will then draw on my briefer experience serving in the White House Office of Science and Technology Policy to consider the broader issues of how engineering researchers’ roles as ethical actors and teachers comport with their responsibilities to society in general and their sponsors in particular. I will close by speculating on how engineers’ insights into the technology can help inform society’s emerging norms and the ethical obligations to be imposed upon robotics.
11:30am – 11:55am
Jonathan D. Moreno, David and Lyn Silfen University Professor; Professor of Medical Ethics and Health Policy and of History and Sociology of Science; Penn Integrates Knowledge (PIK) Professor
A first goal of technology-focused ethicists weighing in on LAWS would be to understand the terms of discussion. After some preliminaries about relevant concepts such as just war and dual use, I will trace the key elements of the debate as it has unfolded in the U.S. over the last decade, especially by contrasting automatic with autonomous systems and the modes and senses of autonomy in weapons design. U.S. planners have given assurances that no fully autonomous system is contemplated, that in any case full autonomy is a misleading idea, and that the practical question is rather how humans and machines will function as teams. But how can the implications of human-machine teaming be assessed, especially for warfighters themselves? I will conclude by describing a recently funded project that will set out the parameters for ethical experiments on warfighters and AI-enabled technology.
1:30pm – 1:55pm
C.J. Taylor, Raymond S. Markowitz President’s Distinguished Professor of Computer and Information Science
This talk will open by offering some background examples of the sort of engineering research problems that have been supported by military and civilian sponsors over the past few decades of US robotics. The focus will then narrow to assess the present state of the art in autonomous intelligent systems that can be reliably fielded in unstructured outdoor environments. A final more speculative set of remarks will consider the likely near-term future for such systems, setting out the key foundational problems that remain to be solved as well as the chief hurdles to overcome in practice.
2:00pm – 2:25pm
Jessa Lingel, Associate Professor of Communication
Human users of contemporary information technologies are shaped in their behaviors and capabilities by the design and nature of the systems and devices they use. This talk will review some of the ways people have tinkered with such systems and devices, altering their capabilities or applicability. Such considerations may suggest guidelines for designers that promote ethical use or constrain abuses their designers did not intend.
2:45pm – 3:10pm
Lisa Miracchi, Assistant Professor of Philosophy
Perspectives from cognitive science and the philosophy of mind support the view that ethical issues around LAWS should be conceptualized not as ethical considerations for a certain kind of artificial agent but rather as ethical considerations for the use of certain kinds of tools. These issues broadly divide into three distinct domains: ethical difficulties related to actions (here, the use of tools) under uncertainty; imperatives of human dignity when human agents are removed from immediate action; and the complexities arising from cultural variation around the use of tools and weapons under uncertainty. In the first regard, given that planning and action are already deferred to existing automated weapons systems, their increasing autonomy increases the urgency of questions concerning what kind and extent of uncertainty is allowed. For example, must LAWS be as reliable as humans, more reliable, or may they be less so? Questions related to human dignity seem best informed by considering the long history of weapons that have been supposed to deny human dignity. Applications of global feminist work to the LAWS context can help inform the development of universal standards free of colonialist impositions. For example, what are our obligations to other countries whose people have different cultural views and practices about action under uncertainty and questions of human dignity?
3:15pm – 3:40pm
Daniel E. Koditschek, Alfred Fitler Moore Professor of Electrical and Systems Engineering
This talk will consider the role that robotics researchers might play in helping advance ethical guidelines or international agreements governing the development, deployment and use of LAWS. For context, I will provide a brief glimpse of the nature and extent of US military funding for academic research in robotics by reference to the situation in my own lab. I will then review the scope and content of a recently developed IEEE statement on Ethically Aligned Design as an example of how disciplinary consensus can be developed and expressed relative to ethical responsibilities arising from the social implications of emerging technologies for autonomy. I will close with a speculative sketch of how an international disciplinary effort within robotics informed by allied disciplines might achieve a comparable statement more specifically addressing LAWS.