Human-Machine Ethics: Experiments in Moral Responsibility

Author: David Burke, MS, Principal Scientist, Galois, Inc.

Abstract Background: The success of any human-crewed interstellar mission depends on the existence of effective human-machine relationships. We anticipate that during such a mission, machines won't simply play a supporting, background role, like an autopilot. Instead, meeting the demands of such a mission means that machines must be equal ethical partners with humans, making decisions under conditions of irreducible uncertainty, in scenarios with potentially grave consequences.

Abstract Objectives: The objective of our work is to identify the salient factors that either encourage or discourage effective partnerships between humans and machines in mission-critical scenarios. Our hypothesis is that there must be ethical congruence between human and machine: specifically, machines must not only understand the concept of moral responsibility; they must also be able to convey to humans that they will make decisions accordingly.

Abstract Methods: Using participants recruited through the Amazon Mechanical Turk program, we conducted experiments to tease out salient differences between how trust is granted to humans and to machines, adapting the well-known trolley problem. In this scenario, a trusted advisor (either human or machine) gives ethical guidance in a situation that turns out catastrophically. We then looked for differences in how blame (moral responsibility) was apportioned.
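As an illustration of the kind of comparison involved, the following is a minimal sketch of how blame apportionment might be contrasted across the two advisor conditions. The counts, condition labels, and response categories below are hypothetical placeholders, not the study's actual design or data; the abstract does not specify the analysis used.

# Hypothetical sketch: comparing blame attribution across advisor conditions.
# All counts below are invented placeholders for illustration only.
from scipy.stats import chi2_contingency

# Rows: advisor condition (human advisor, robot advisor)
# Columns: participants' blame responses (decision-maker alone, blame shared with advisor)
observed = [
    [18, 42],  # human-advisor condition (placeholder counts)
    [47, 13],  # robot-advisor condition (placeholder counts)
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")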

Abstract Results: The empirical differences were unambiguous and striking: in scenarios with a trusted human advisor, blame was most often shared among the humans involved, but when the advisor was a trusted, intelligent robot, the advisor was assigned no accountability. These results demonstrate that, as things stand today, people are not willing to grant moral responsibility to intelligent machines, even when the behavior in question is identical to that of a human.

Abstract Conclusions: Our experiments demonstrate that intelligent behavior, by itself, is not sufficient to establish the ethical congruence necessary for humans to grant moral agency to machines. Our future research focuses on devising conceptual approaches that enable machines to demonstrate that they understand the stakes involved for humans when confronted with an ethical dilemma or violation. Only then will machines be considered equal decision-making partners.
