Friday, May 27, 2016

Self-Driving Cars, Run-Away Trams, & “Unavoidable” Accidents

How should the (self-driving) car be programmed to act in the event of an unavoidable accident? Should it minimize the loss of life, even if it means sacrificing the occupants, or should it protect the occupants at all costs? Should it choose between these extremes at random? -- Why Self-Driving Cars Must Be Programmed to Kill -- MIT Technology Review, October 22, 2015

A primary cause of philosophical illness: a one-sided diet. One nourishes one’s thinking with only one kind of example. --- L. Wittgenstein (1967) Philosophische Untersuchungen, §593 (Suhrkamp: Frankfurt a. Main) p. 189. (My translation. - EGR)

Who Broke the Copy Machine? And Other Follies.
Vignette, the First: In the main office of his company, Larry waits patiently for a colleague to finish using the common copy machine. Larry needs to duplicate a report for a later meeting. When his turn comes, Larry loads the machine and pushes the copy button. A strange, loud groan issues from the machine. The attention of everyone in the office is now diverted to an astonished Larry. The machine goes dead and smoke issues from it. The office manager comes over and fretfully asks Larry, “What have you done?” In the ensuing atmosphere of reproach no one mentions -- if they even know -- that the office manager himself put off maintenance for the copier, doubling the service interval, at the behest of the company president that expenses be reduced wherever possible.

Vignette, the Second: Mack drives a tram (trolley car) whose movement is constrained by a track. Mack can only make it stop, go, speed up, slow down, or switch over to a connected track. One day, while approaching a blind curve, Mack realizes that his braking mechanisms don’t work: he is in a runaway vehicle. Rounding the curve while picking up speed on a descending incline, Mack sees a group of people ahead on the track. He fears the worst. Luckily, there is a parallel track he can switch over to. Alas, there is already a single person ahead on that track. It is, it seems, a matter of life or death. Switch or not switch. Kill many or one? Which should he choose to do? (See, for example, Thomas Cathcart (2013) The Trolley Problem)

Vignette, the Third: A group of automotive engineers is perplexed. They are constructing the control programs for a self-driving car. No passenger will be a driver, i.e. make driving decisions. They are aware of some form of Vignette, the Second. (They don’t stop to think of examples like Vignette, the First.) The engineers jump to a dismaying preliminary conclusion: in the (likely rare?) event that an unavoidable accident threatens human injury and death, the car must, willy-nilly, be programmed for a lethal decision. If faced with the choice of Vignette, the Second, the car must be programmed to kill, whether many or one. But, in the aftermath, who -- they worry -- will bear responsibility for the damage? (See Will Knight, “How to Help Self-Driving Cars Make Ethical Decisions”)
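To see what dismays them, it helps to write the rule down. Below is a minimal sketch, in Python, of the “lesser of two evils” rule they fear having to ship; every name in it is hypothetical, an illustration of the dilemma rather than anyone’s actual vehicle code:

    # A deliberately bare sketch of a programmed "lethal decision."
    # All names are hypothetical; this illustrates the dilemma only.

    def choose_path(paths):
        """Among unavoidable-collision paths, pick the one with the
        fewest expected casualties -- a utilitarian rule in one line."""
        return min(paths, key=lambda p: p["expected_casualties"])

    # Vignette, the Second, restated for a machine:
    options = [
        {"action": "stay on track", "expected_casualties": 5},
        {"action": "switch tracks", "expected_casualties": 1},
    ]
    print(choose_path(options)["action"])  # prints: switch tracks

The rule itself is trivial; that is the point. The hard question is not how to write such a line but who answers for it once it is shipped.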

Responsibility & Intervention. The three vignettes appear to have similar underlying structures. However, there are substantial differences among them. In the first and second vignettes, the actors whose choice behavior we are considering are caught up in a chain of events which they have had no part in structuring. They are, so to speak, minor officers on the Titanic. In the third vignette, the actors are Planners not confronted with choices that immediately threaten catastrophe. The Planners -- and the analysts cited above, i.e. Thomas Cathcart and Will Knight -- mistakenly take the second vignette as a model for the first and third vignettes, respectively, by conflating important distinctions:
a. they confuse “Role” with “Function;”
b. they confuse “Choice” with “Responsibility;”
c. they invoke a vague notion of “unavoidable” that allows equivocation between “Intervention” and “Prevention.” (See Intervention: helping, interfering or just being useless?)
In Vignette, the First, Larry’s role, for which he is responsible, is that of a procedure-follower. If machinery is involved, Larry is, shall we say, an Engager. He is not himself a duplicate-producer. He employs the copier; it is the copier which produces duplicates. The casual speech of office politics may confound these distinctions, but morally and legally, should it come to that, his having followed procedures may protect him, even if the machinations of other organizational members would have him take the blame. (These distinctions are often disregarded in cases of organizational “disruption” based on what masquerades as “performance evaluation.”)

Once the copying had started, Larry was likely unable to intervene in the rapid self-destruction of the copier, even had he realized what was happening. Prevention, which is part of neither Larry’s role nor his responsibility, had been deliberately left undone by the office manager -- a risk taken that failed. The destruction was practically unavoidable, once initiated. But Larry, despite facing issues of responsibility, had no practical choice in the matter.

In Vignette, the Second, Mack is also a procedure-follower, an Engager, whose personal responsibility is delimited by the well-functioning of his machine. The “choice” between killing many and killing one does not relieve those responsible for accident-prevention procedures of their responsibility. This consideration is regularly ignored by those who take advantage of some person’s being a link in a chain of events ending in catastrophe to charge that person with major responsibility for the catastrophe. Mack has a choice, but no fair responsibility.

In Vignette, the Third, the automotive engineers worry, as they should, that in order to make their autonomous cars salable, they may end up being held responsible, to some extent, for the damage that results. Just consider the Volkswagen scandal, in which vehicles were programmed to produce false emissions readings to enhance salability. Many other examples of vehicle recalls and successful lawsuits come easily to mind. The engineers are, by role AND function, responsible actors in that they contribute to the production of the outcomes, although not so much as their organizational leadership, who may determine what the engineers are permitted to do.

NOTE WELL: It is at this point that the vagueness of the term “unavoidable” comes into play. In the debate on autonomous vehicles, invoking “unavoidable accidents” functions to buffer those in leadership positions from the moral hazards they run as they give considerations other than product safety higher priority. “Unavoidability” provides a shield for morally questionable leadership. (See Buffering: Enhancing Moral Hazard in Decision-Making?)

Avoidability. To call something “unavoidable” is not to make a simple empirical judgment. Just consider: if an event is called “unavoidable,” what observations, by themselves -- even with trustworthy instruments -- could support that judgment? It’s a slightly different matter if we modify “unavoidable” with adverbs such as “practically” or “economically.” These allow us to consider narrower options on which we may more easily find consensus as to what is desirable or not. But some theoretical argument will likely still be necessary.

Basically, to call something “unavoidable” is to make either of two negative possibility claims (sketched formally after this list):
1. It cannot be intervened in so that its potential effects are nullified; or
2. It cannot be prevented so that its potential effects are nullified.
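Put schematically -- a sketch only; the predicate names are mine, not drawn from any of the sources cited here:

    \[ \mathrm{Unavoidable}(e) \;\Longrightarrow\; \neg\,\mathrm{Intervenable}(e) \,\lor\, \neg\,\mathrm{Preventable}(e) \]

The vagueness lies in the disjunction: ordinary usage rarely specifies which disjunct is being claimed, and so permits the very sliding between Intervention and Prevention noted above.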
At this stage of our knowledge (2015), examples of practically non-intervenable events would be -- once initiated -- an atomic bomb explosion, the short-range discharge of a bullet, or the ingestion of cyanide. However, all of these could be, and are, prevented prior to their initiation.

It would seem that, logically, any intervenable event is preventable. But there are some things we cannot logically prevent prior to their initiation, because the character which prompts us to see them as needing prevention does not manifest itself prior to the event: e.g. mayhem, having developed from mere rough-housing; or disregard for traffic signals; or possible future adult criminality in children. Attempts at pre-emptive treatment in such cases risk violations of legal and other rights.

Solutions? There is yet another rub: many interventions or preventive actions may be seen as too monetarily costly for individuals, or even for large communities -- for example, providing closely monitored, limited-access roadways to which the use of self-driving cars would be confined.

In the end, self-driving cars may be treated as many in the U.S. would like to treat firearms: with vetted ownership licenses and heavy usage restrictions. This is all the more likely since human-driven cars kill more people every year than firearms do. (See Gun Fun or Safe Kids? Must We Make Tradeoffs?)

Unlike with firearms, there is at this time in the U.S. no widespread, traditional use of autonomous vehicles (AVs) to damp the enthusiasm for imposing restrictions. Nor is there any Constitutional basis for arguing every citizen’s right to own AVs. But what counters the enthusiasm for highly restricted use of self-driving cars -- e.g. consumer mistrust of their programming, and designers’ intentions to block owner attempts to intervene at will -- will also counter market enthusiasm to invest in their manufacture.

I will close the present considerations with a more sanguine opinion:
The wide adoption of self-driving, Autonomous Vehicles (AVs) promises to dramatically reduce the number of traffic accidents. Some accidents, though, will be inevitable, because some situations will require AVs to choose the lesser of two evils. For example, running over a pedestrian on the road or a passer-by on the side; or choosing whether to run over a group of pedestrians or to sacrifice the passenger by driving into a wall. -- Bonnefon, J.-F. et al. (2015) “Autonomous Vehicles Need Experimental Ethics: Are We Ready for Utilitarian Cars?” arXiv:1510.03346 [cs.CY]
Afterthoughts. The moral hazards risked by the leadership on issues pertaining to self-driving cars may well become the physical hazards of those affected by that leadership’s decisions. And with it, American worship of the automobile may come to acknowledge the blood sacrifice it has long been paying. Greater evils may be avoided at the cost of creating lesser ones. But it may not be a matter of the length of casualty lists.

To pursue these issues, see Leadership vs. Morality: An Unavoidable Conflict?

Cordially, -- EGR