Can self-driving cars be moral?


Caltech physicist Sean Carroll presents one of the best science podcasts, called Mindscape, and a recent episode featured philosopher Derek Leben. Leben has been researching how self-driving vehicles might be programmed to handle situations that, with a human behind the wheel, we would typically call moral dilemmas. For instance, if there were no choice but either to swerve the car into a brick wall, possibly killing a passenger, or to run over a pedestrian, how should the car “choose” between those options?

This hour-long podcast winds up reviewing the most common strains of moral philosophy, but it does quickly devolve into “philosopher-speak,” so I thought a brief set of definitions might be in order before you listen to this fascinating conversation. I call these philosophies “vectors” rather than categories because they tend to defy nice, clear boundary lines and instead “point in different directions,” with their variants further “blurring the lines” between similar moral philosophies.

I wrote about all of these categories in a series of posts from last year called “Good People Disagree.” My thesis there is that these vectors arise, causing “good people to disagree” about important questions of moral impact, because different parts of our brains have evolved these different approaches as alternative ways to improve our chances of survival and procreation of the species, and those brain parts compete with each other for dominance.

Sometimes one mode of “moral decision-making” works better than another, and in the end human survival has depended on the relative probabilities of each leading to offspring. In short, if your parents made really bad moral choices to the point of blunting their ability to procreate, then you probably would not be around to be reading this. I will link to key posts on each vector for further elaboration.

The deontology vector

When you hear the term deontology, think instead “rules.” Immanuel Kant’s name is invariably brought up in any deontology conversation, and indeed it comes up in this podcast. Kant proposed that we can define a mutually-agreeable set of rules to get us through the most difficult moral choices. Prolific science writer Isaac Asimov famously articulated his “Three Laws of Robotics” as a first stab at robot deontology. [1] Both Carroll and Leben dismiss these rules as too simplistic, although Leben sees them as a good starting point.

The common perception is that self-driving cars will make these critical life-or-death decisions based on a series of “If-then” programmed rules. This approach is likely insufficient. Someone will not be happy with a given rule, especially when the outcome turns out to be damaging to someone with the power to sue.
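
To make that limitation concrete, here is a minimal sketch of what such a rule table might look like. Everything in it (the scenario fields, the rule ordering, the Python framing) is my own hypothetical illustration, not anything proposed in the podcast.

```python
# A minimal, hypothetical sketch of "if-then" (deontological) rules for a
# swerve-or-stay choice. The scenario fields and the rule ordering are
# invented for illustration; a real system would be vastly more complex.

def choose_action(scenario):
    """Return 'swerve' or 'stay' by walking a fixed list of rules."""
    # Rule 1: never leave the roadway if pedestrians are on the shoulder.
    if scenario["pedestrians_on_shoulder"]:
        return "stay"
    # Rule 2: otherwise, swerve if a pedestrian is directly ahead.
    if scenario["pedestrian_ahead"]:
        return "swerve"
    # Default rule: hold course and brake.
    return "stay"

print(choose_action({"pedestrians_on_shoulder": False, "pedestrian_ahead": True}))
# -> 'swerve', even if swerving into a wall endangers the passenger: the
#    rules never weigh how bad each outcome is; someone simply decided
#    the ordering in advance.
```

Whoever writes Rule 1 has, in effect, already made the moral call, which is exactly why someone will not be happy with a given rule.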

The consequentialist/utilitarian vector

Consequentialism and utilitarianism follow similar paths down an “ends-based” vector. In the first case you are trying to maximize the good consequences and minimize the bad consequences of the choice facing you; in the second you are trying to maximize an economic and moral “end point” called utility. The major difference from deontology is that deontology tries to set up the rules from the outset, while these two approaches try to “get to a good end” regardless of the rules.

The issue here with self-driving cars is who defines that “utility” factor or the “optimal consequence”? Carroll and Leben discuss the “tyranny of the majority” and how some people wind up getting valued less than others in any “ends-based” moral discussion.
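
For a rough sense of what an “ends-based” calculation involves, here is a hedged sketch of an expected-utility comparison. The probabilities and the per-person “utility” weights are invented placeholders; deciding who gets to set those numbers is exactly where the “tyranny of the majority” problem lives.

```python
# A hypothetical expected-utility comparison between two candidate actions.
# The probabilities and harm "utilities" below are invented for illustration;
# a utilitarian controller would pick whichever action scores highest.

def expected_utility(outcomes):
    """Sum the probability-weighted utilities for one candidate action."""
    return sum(p * u for p, u in outcomes)

candidate_actions = {
    # (probability of harm, utility assigned to that harm)
    "stay":   [(0.9, -100)],              # likely serious harm to the pedestrian
    "swerve": [(0.3, -100), (0.7, -10)],  # possible harm to the passenger
}

best = max(candidate_actions, key=lambda a: expected_utility(candidate_actions[a]))
print(best)  # 'swerve' under these weights; change the weights, change the "morality"
```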

The virtue vector

The virtue conversation typically harks back to Aristotle, who defined “the moral life” as embracing a set of optimal qualities, or virtues. For example, courage would be an optimal virtue in the middle ground between cowardice on one end and rashness on the other. The idea here is to somehow program a self-driving vehicle to model the best human virtues. Besides the difficulty in doing that, it is not clear that this approach would arrive at any better rules or outcomes than the first two vectors described above.

The Rawls “Maximin” Principle

Dr. Leben comes out in the end as an advocate for the contractualist approach and the maximin principle championed by the philosopher John Rawls in his seminal text A Theory of Justice (1971). Under this principle, justice is defined as the set of actions that are of “the greatest benefit to the least-advantaged members of society.” Rawls’s famous thought experiment called “the veil of ignorance” suggests that we imagine what kind of world we would want if we had no idea where and under what circumstances we might be born.

If we did not know whether we would come out rich or poor, or with some sort of disability, or perhaps be born into a socially-repressed racial or ethnic group, how would we want society to treat us? The “maximin principle” attempts to define the “best possible minimum” that society can guarantee to the least-advantaged. Leben suggests that this principle can, to a large extent, be programmed, especially if self-driving cars are able to process a huge amount of data within a short amount of time in order to determine the “best of the worst outcomes” when applying the maximin principle.
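
A hedged sketch of that “best of the worst outcomes” logic might look like the following. The harm scores are invented placeholders, not numbers from Leben or Rawls, but the selection rule itself is the maximin idea: pick the action whose worst-case outcome is the least bad.

```python
# A minimal maximin sketch, assuming the car can score the worst-case harm
# each candidate action imposes on everyone involved. The harm scores
# (0 = unharmed, -100 = fatal) are invented placeholders for illustration.

def maximin_choice(actions):
    """Pick the action whose worst possible outcome is the least bad."""
    return max(actions, key=lambda a: min(actions[a]))

candidate_actions = {
    "stay":   [-100, 0],    # pedestrian likely killed, passenger unharmed
    "swerve": [-40, -40],   # both parties put at moderate risk
}

print(maximin_choice(candidate_actions))
# -> 'swerve': its worst case (-40) beats staying's worst case (-100), so the
#    least-advantaged person in the scenario is made as well off as possible.
```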

The moral conversation

In my series from last year, I concluded by suggesting that human moral dilemmas need to draw upon all of these vectors (and some others) in a continual “conversation,” weeding out the worst features of each model and getting the best outcomes for human society “on the table.” In the end, moral decision-making is a communal effort, as we try to live together in some sort of “just” society.

The ability to fully program autonomous vehicles to implement these often-messy vectors is probably still several leaps of technology away, but this exercise can get us past simplistic “trolley problems” [2] and on to tackling some real-world moral dilemmas with better information. In the meantime, somebody is programming these vehicles, and it seems like the rest of “the community” ought to know how some critical decisions will be made.

The conversation does pose the interesting question of whether our human inventions can be “moral” in any sense. Let me suggest that we have often let our created systems “define their own morality,” quickly getting out of our control. The internet, with its social media effects on popular culture and our democratic institutions, is probably the clearest example of our willing handover of human values. Self-driving cars and trucks are already here, and they are already making life-or-death decisions that we humans have previously allocated to our collective “moral sphere.”


Notes:

  1. The Three Laws of Robotics are (briefly): (1) a robot may not harm a human being or, through inaction, allow one to come to harm; (2) a robot must obey human orders unless they conflict with the First Law; and (3) a robot must protect its own existence unless that conflicts with the first two laws.
  2. “Trolley problems” are hypothetical moral-dilemma “thought experiments” in which you are forced to choose, say, between letting a runaway trolley run into several people or diverting it onto another track where it will hit one lone person.

