“Life can only be understood backwards; but it must be lived forwards.” – Søren Kierkegaard (1813–1855)
A major flaw in the “teleological” (end-based) ethical models I have been presenting in recent posts is that how you intend for the ethical dilemma to turn out, and how it actually does turn out, may well be two different things. In my earlier example of what was likely the first ethical dilemma in human experience, you must choose whether to fight the stranger or instead extend a welcoming hand. In either case, no matter how good your intention, the outcome depends on outside, and often probabilistically random, conditions beyond your control.
This probabilistic view of “end states” was given the name “game theory” by economists, which I introduced in an earlier post about the “North Korea problem.” It was so named because games proved to be the best-understood, limited-scope examples for illustrating this line of reasoning. But in reality, this is the biological essence of survival in the animal world. Those organisms that best predict the most favorable end states going forward are more likely to survive and reproduce. Those organisms with “hard-wired,” inflexible survival rules, as well as those with rose-colored views of the “good end,” paradoxically, often meet a “bad end.”
When you apply this “prediction of a good end” to human experience, you can explain a lot of our conception of volition (or free will or choice). The multiple decision-making portions of our brains point us to the biological consensus of the most probable way to achieve our preferred “end state.” This usually works, but because of basic probability math and outside forces beyond our control, sometimes it doesn’t.
Humorist Richard Armour once joked about Henry Clay, who famously proclaimed, “I’d rather be right than be President,” that the nation’s voters took him at his word in each of the presidential elections of 1824, 1832, and 1844. [1] He never reached his desired “end state” despite his principled intentions and multiple attempts.
In the U.S. variant of football, Monday water-cooler talk at places of employment is often an exercise in economic game theory, in the guise of re-hashing all the mistakes of the losing quarterback in the preceding weekend’s contest. Inevitably, the losing team’s quarterback made some low-probability-of-success calls that indeed failed to produce. But he also likely made some high-probability-of-success calls that failed anyway. Both are frustrating, but for different reasons.
But “Monday morning quarterbacking” only works in hindsight; the next game will be equally uncertain at its most critical moments. And so is almost every important thing in life.
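To make that frustration concrete, here is a minimal sketch in Python, using made-up success probabilities (the 70% and 30% figures are my own illustrative assumptions, not real football statistics). It simply simulates many game situations and counts how often each kind of call fails: even the “right,” high-probability call still fails about three times in ten, which is exactly what the hindsight critics see on Monday.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical play calls with assumed probabilities of success.
# These numbers are invented for illustration only.
plays = {
    "high-probability call": 0.70,
    "low-probability call": 0.30,
}

trials = 10_000  # simulated game situations per play type

for name, p_success in plays.items():
    failures = sum(random.random() > p_success for _ in range(trials))
    print(f"{name}: failed {failures / trials:.1%} of the time")

# Even the "right" (high-probability) call fails roughly 30% of the time,
# which is what Monday-morning critics see only in hindsight.
```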
Despite the best of our intentions, any attempt at directing a “good end” outcome is usually probabilistic in its odds of actually coming to fruition, and those probabilities are not always under our control. Thus, unintended consequences may well occur whenever we attempt a teleological, goal-directed solution to an ethical dilemma.
For instance, with the best of intentions, we allocate public funds to help alleviate poverty, but a portion of the funds may get misused, or the process may even exacerbate long-term poverty for some recipients while literally saving the lives of others. Or we give a tax break to land a new retailer on Main Street with the expectation of new jobs created in the community, and then a competitor down the road collapses and goes out of business.
We may even let the possibility of those unintended consequences prevent us from taking any action when faced with a moral dilemma. So sometimes you need to step out and “do the right thing” even at the risk of failing. It may simply be the probabilities at work, although you are best off if you know what those probabilities are from the outset. Martin Luther said to “Sin boldly,” but let me suggest that it also helps to know the math and “Sin wisely.”
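What “knowing the math” can look like, in its simplest form, is a comparison of expected values. The sketch below is only a toy: every probability and payoff is a hypothetical number I have chosen to show the arithmetic, not a model of any real policy choice. It illustrates why acting can be the wiser bet on average even when it sometimes backfires.

```python
# A toy expected-value comparison for "knowing the math" before acting.
# All payoffs and probabilities below are hypothetical, chosen only to
# illustrate the arithmetic, not to model any real decision.

def expected_value(outcomes):
    """Sum of probability * payoff over all possible outcomes."""
    return sum(p * payoff for p, payoff in outcomes)

# Option A: act (e.g., fund the program) -- mostly helps, sometimes backfires.
act = [
    (0.7, +100),   # the intended good end
    (0.3, -40),    # the unintended consequence
]

# Option B: do nothing -- no risk of backfiring, but the underlying harm continues.
do_nothing = [
    (1.0, -20),
]

print("Act:        ", expected_value(act))         # 0.7*100 - 0.3*40 = 58
print("Do nothing: ", expected_value(do_nothing))  # -20

# On these assumed numbers, acting is the better bet on average,
# even though it will sometimes fail.
```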
And so, here is the sometimes-fatal flaw in teleological thinking. The desired “good end” cannot stand alone, just as the best deontological rules can drown in their own exceptions. Ethicist Richard Niebuhr once noted that, when driving a car, neither a knowledge of the rules of the road nor a knowledge of your destination is alone sufficient to get you there. In reality, you need both rules and ends (and more) to get where you want to go.
Notes:
1. Armour, Richard. It All Started with Columbus: Being an Unexpurgated, Unabridged, and Unlikely History of the United States from Christopher Columbus to the Present for Those Who, Having Perused a Volume of History in School, Swore They Would Never Read Another. McGraw-Hill, 1953.
For additional posts on probability, volition and ethics, follow the Dice icon back or forward where it appears.