Date(s) - 02/20/2020
3:30 PM - 4:30 PM
100 New Engineering Building
Autonomous vehicles (AVs) could soon drive as well as or better than humans. Because human drivers regularly make important ethical decisions, AVs must also be designed to behave ethically. Ethical decisions for AVs are often presented as “trolley problems,” in which an AV must choose between two costly outcomes. A popular method for finding ethical algorithms for AVs elicits human judgments about what AVs should do in these scenarios, in the hope of designing AVs to do what most human drivers would approve of; we call this the ‘Trolley-Preferences’ method. One of its limitations is that it assumes that AVs have the same options and observational powers as human drivers.
We present a new method, the ‘Data-Theories’ method, which models three scenarios in which an AV is exposed to risk on the road, determines the AV’s possible actions in each, and calculates the expected injuries of each action from historical car accident data. We then appeal to ethical theories to determine the important ethical considerations for and against each of the AV’s options. We show that this method is better suited than the Trolley-Preferences method both to determining an AV’s full set of options and to capturing all relevant ethical considerations.
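The expected-injury step of the method described above amounts to a simple expected-value computation over each option’s possible outcomes. The sketch below illustrates the idea; the option names, probabilities, and injury severities are hypothetical placeholders for estimates that would, in practice, come from historical accident data, not figures from this talk.

```python
def expected_injuries(outcomes):
    """Expected injury score for one option: sum of probability * severity."""
    return sum(p * severity for p, severity in outcomes)

# Hypothetical options, each a list of (probability, injury-severity) pairs.
# In the Data-Theories method these would be estimated from historical
# car accident data for the modeled scenario.
options = {
    "brake_in_lane":      [(0.7, 0.0), (0.3, 2.0)],  # moderate rear-end risk
    "swerve_to_shoulder": [(0.9, 0.0), (0.1, 5.0)],  # rarer but more severe
}

scores = {name: expected_injuries(o) for name, o in options.items()}
least_harmful = min(scores, key=scores.get)
```

Note that the least-harmful option by expected injuries is only one input: the method then weighs each option against considerations drawn from ethical theories, which may favor a different choice.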
Our results demonstrate that AVs have options that human drivers do not, so designing AVs to mimic even the most ethical human driver would not ensure that they do the right thing. Our method can identify an AV’s full range of options as well as their likely consequences, both of which are crucial inputs to the design of ethical algorithms for AVs. Moreover, our method appeals to ethical theories rather than ethical preferences, and so is designed to produce ethical algorithms backed by theory rather than public opinion. While ethical theories often disagree about what should be done, disagreement can be reduced and compromises found with a more complete understanding of the AV’s choices and their consequences.