A long-standing battle between two different philosophies of “doing statistics” plays itself out in trying to understand how the human brain exercises volition (or “choice,” or “free will”), and it appears that our brains have decided to stand with the “Bayesians.”
All life on this planet naturally seeks to “change the odds” of its own survival and procreation. Even plants “learn” (in a rudimentary sense) how to bend toward the light or to extend their root systems toward water. The more successful “learners” reproduce. We humans can often watch ourselves doing exactly that as well, changing the odds of our own survival, and we have the added ability to articulate that process as we experience it and then pass that experience on to others. This is an important component of what we call human volition, or “free will.”
But we also make these “choices” unconsciously every second of every day, as do all other living things on the planet. And these choices look very probabilistic when you examine them closely, which brings this topic in line with a very common theme in this blog.
Inferential versus Bayesian statistics
In classic inferential statistics, which you likely learned and quickly forgot in college, you parse out generalizations, predictions, and estimations based on samples from a larger data set. A classic classroom example would be to try to infer the color makeup of billiard balls hidden in a shopping bag by drawing out a random sample of those balls. This is the statistics of most political polling. What does a random sample of 1000 potential voters tell us about the much larger population of those who will actually go on later to cast a vote? Often these predictions can be incredibly accurate. And sometimes not.
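For a concrete sense of what that sampling math looks like, here is a minimal sketch in Python, using invented poll numbers and the standard normal approximation for a 95% confidence interval:

```python
import math

# Hypothetical poll: 540 of 1,000 randomly sampled voters favor Candidate A.
sample_size = 1000
in_favor = 540

p_hat = in_favor / sample_size                     # point estimate of the true proportion
std_err = math.sqrt(p_hat * (1 - p_hat) / sample_size)
margin = 1.96 * std_err                            # ~95% confidence margin (normal approximation)

print(f"Estimate: {p_hat:.1%} +/- {margin:.1%}")   # Estimate: 54.0% +/- 3.1%
```

That plus-or-minus figure is the familiar “margin of error” reported with every poll, and it assumes the sample really was random, which is where real-world polls most often go wrong.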
This is also the mode of much applied scientific research. You look at a mass of disease data searching for correlations, which hopefully (although not always) lead to fundamental causations. For instance, you can analyze collected medical data looking for correlations between, say, smoking and lung cancer. And while correlation does not necessarily indicate causation, inferential statistics have taken us a long way in human progress toward disease detection and eradication.
The alternative Bayesian approach to statistics, named after the British minister and amateur mathematician Thomas Bayes (1701?–1761), starts instead with a presumption of prior probability, say of that correlation between smoking and lung cancer, and interprets any subsequent data found as either improving the statistical probability of that relationship or reducing those odds, to come up with a more trusted posterior probability. The underlying math is called Bayes Theorem in Bayes’ honor. The controversy around this approach has largely been based on that “initial presumption” and its validity. Often the “initial prior” begins at arbitrary 50-50 odds (i.e., “I don’t know”), which has long been anathema to statistical purists. [1]
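To see the mechanics, here is a minimal sketch of a single Bayesian update in Python; the likelihood numbers are hypothetical, chosen only for illustration:

```python
# One Bayesian update: P(H|E) = P(E|H) * P(H) / P(E).
# All numbers are hypothetical, chosen only to show the mechanics.

prior = 0.5                # the "I don't know" 50-50 starting point
p_e_if_true = 0.8          # chance of seeing this evidence if the hypothesis is true
p_e_if_false = 0.3         # chance of seeing it anyway if the hypothesis is false

# Total probability of the evidence, then the posterior via Bayes Theorem.
p_evidence = p_e_if_true * prior + p_e_if_false * (1 - prior)
posterior = p_e_if_true * prior / p_evidence

print(f"Prior: {prior:.2f} -> Posterior: {posterior:.2f}")   # Prior: 0.50 -> Posterior: 0.73
```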
Regardless of the statistical theory debate, what makes “Bayesian inference” a good model for understanding brain behavior is that it is an evolving process over time. In other words, as time moves on and more information is gathered, Bayesian inference often moves toward an incrementally more accurate explanation or outcome, as the “posterior probability” loops back to become the new “prior,” and continuously gets updated. Classic inferential statistics, on the other hand, sometimes gets lost in what is called “P-hacking,” which is slicing the growing reams of available data every way possible until one of the analyses yields a “P-value” of statistical significance, suggesting correlation. A lot of bad scientific research can be attributed to researchers P-hacking through reams of data until a publishable result is found. [2]
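To make that “posterior becomes the new prior” loop concrete, here is a continuation of the sketch above, applying the same hypothetical evidence five times in a row:

```python
def bayes_update(prior, p_e_if_true, p_e_if_false):
    """One turn of the loop: yesterday's posterior is today's prior."""
    p_evidence = p_e_if_true * prior + p_e_if_false * (1 - prior)
    return p_e_if_true * prior / p_evidence

belief = 0.5  # start from "I don't know"
for n in range(1, 6):  # five consecutive pieces of supporting evidence
    belief = bayes_update(belief, p_e_if_true=0.8, p_e_if_false=0.3)
    print(f"After observation {n}: belief = {belief:.3f}")
# belief climbs: 0.727, 0.877, 0.950, 0.981, 0.993
```

Each pass starts from wherever the last one ended, so confidence builds (or erodes) incrementally rather than being re-derived from scratch.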
The Bayesian approach to statistics is largely responsible for the improvement in weather forecasting, as one example. New data constantly update old models, and our ability to forecast weather farther into the future grows with them.
The brain as an “inference engine”
Why do we have a brain at all? The human brain has its evolutionary roots in the brains that evolved in the earliest vertebrates, such as fish. [3] Philosopher Daniel Dennett and neuroscientist Karl Friston are two of the best-known theorists who describe the vertebrate brain, whether it be in a fish or a human, as an inference engine. In other words, your brain’s primary role is to take inputs from your sensory organs (eyes, ears, skin, etc.), combine those inputs with stored memories, and then make probabilistic inferences, generating a bodily action that is most likely to prolong your survival, at least until you can procreate. [4]
Finally, those same actions initiated by the brain constantly feed back as memories (although not necessarily conscious ones) for use as input for improving the process the next time around, just as in the Bayesian updating loop described above. Thus, the Bayesian process of “continually improving the odds” plays itself out. If you are a gazelle trying to escape an attacking lion, your brain is processing all of this input and making a probabilistic “guess” as to which way to leap next. If the guess is a good one, you “probably” will now get to add one more reinforcing “good memory” to the Bayesian process for improving the probability evaluation the next time around. If the guess was wrong, well then, those genetics won’t propagate.
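As a toy illustration (and nothing more than that), the gazelle’s choice might be sketched like this, with invented “memory” counts standing in for reinforced experience:

```python
import random

# Toy model of the gazelle's "choice": pick the leap direction with the better
# remembered success rate, then fold the outcome back in as a new memory.
# The starting counts are invented, standing in for reinforced experience.

escapes  = {"left": 3, "right": 5}
attempts = {"left": 5, "right": 10}

def choose_direction():
    # The "prior" for each action is its remembered success rate.
    return max(escapes, key=lambda d: escapes[d] / attempts[d])

def record_outcome(direction, escaped):
    # Feedback: the outcome becomes a memory that shifts the next choice.
    attempts[direction] += 1
    if escaped:
        escapes[direction] += 1

direction = choose_direction()   # "left": 3/5 = 0.60 beats 5/10 = 0.50
record_outcome(direction, escaped=(random.random() < 0.6))
```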
A less graceful and more random example is found in the many tiny brown anole lizards that scatter every time I leave my Florida home. Which way will they turn to avoid the heavy footfall or the automobile tires? Most of the time, their tiny-brain Bayesian probability calculation “wins,” but sometimes it fails. In evolutionary terms, these migrants, likely introduced from the Caribbean islands, are but geological minutes into a survival challenge from threats that did not exist for their prior million years of existence (i.e., cars and humans). They are still “working out the new math.”

I have written in a prior post about the human example of a baseball player constantly learning how to be a better hitter. I like this example because the probabilities are so well documented, whereas we usually don’t see ourselves acting probabilistically when we, say, scratch our nose. A player who “learns” how to consistently improve his “hit percentage” from 1-out-of-4 to 1-out-of-3 “successes” each time at bat (going from a .250 to a .333 batting average) advances through the professional ranks and gets richer in the process. The hitter is not usually consciously thinking in terms of probabilities (the ball comes far too fast for that) but the brain, according to the Dennett-Friston Bayesian model, is “doing the math” anyway in a less-conscious, “hard-wired” version.
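One standard Bayesian way to model a batting average (not necessarily how the brain does it) is the beta-binomial: treat the hitter’s true hit probability as a Beta distribution and update it with each at-bat. A sketch, with hypothetical prior counts:

```python
# Beta-binomial sketch: model the hitter's true hit probability as a Beta
# distribution and update it after every at-bat. Prior counts are hypothetical.

prior_hits, prior_outs = 25, 75   # prior belief: roughly a .250 hitter

def updated_average(new_hits, new_at_bats):
    """Posterior mean of the hit probability after more at-bats."""
    hits = prior_hits + new_hits
    outs = prior_outs + (new_at_bats - new_hits)
    return hits / (hits + outs)

# A hot streak (40 hits in 100 at-bats, a .400 clip) nudges the estimate
# toward .333, but the prior keeps it from over-reacting to a small sample.
print(f"{updated_average(40, 100):.3f}")   # 0.325
```

Notice how the prior acts as ballast: a hot week doesn’t instantly make a .250 hitter into a .400 hitter, but sustained improvement steadily moves the estimate.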
Minimizing “surprise”
The aforementioned Karl Friston has built on the work of the 19th-century physicist and mathematician Hermann von Helmholtz, who first proposed that our brains perceive and act in a probabilistic manner based on all sensory and memory inputs available. Our brains, Friston says, act in ways that reduce the difference between our Bayesian “prior probability” (the initial assumption based on “the last time you were here”) and the newest inputs from our senses and memories. Friston calls this process “minimizing surprise.” [5] Our brains are constantly trying to minimize “prediction error,” whether in hitting a baseball, choosing something good to eat, or leaping to avoid an attacking lion.
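Friston’s full formal apparatus is well beyond a blog post, but one small piece of it is the information-theoretic notion of “surprisal”: the less probable your internal model says an observation is, the more surprising it is when it arrives. A minimal sketch:

```python
import math

# Information-theoretic "surprise": the less probable your internal model says
# an observation is, the more surprising (and costly) it is when it arrives.

def surprisal(probability):
    return -math.log2(probability)   # measured in bits

# A well-updated model expects the fastball; a stale model does not.
print(f"{surprisal(0.80):.2f} bits")   # 0.32 bits: expected, little surprise
print(f"{surprisal(0.05):.2f} bits")   # 4.32 bits: a large prediction error
```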
The neurochemical process behind this “decision-making” is well beyond the scope of this post, and is still under scientific investigation. However, as I wrote about in past posts, we already know that our brain neurons are primed to do a kind of “logarithmic math” in interpreting the intensity and frequency of sound, as well as the brightness of light. Compared to that, Bayes Theorem is likely a piece of cake; its math is relatively simple. [6]
Habits good and bad
Which brings us to the idea of habits. When you interpret the brain as acting probabilistically, whether consciously or not, the idea of a “habit” is simply behavior that has become so routine that its “probability choice” becomes more predictable and unconscious. The conflict in humans is that classic natural selection works primarily to get a creature to the point of procreation; genes that select for a better, longer life after that point may not make it through the selection process.
One might see the conflict between “productive” short-term behavior, say sexual activity, and group-beneficial behavior, better for the survival of the species in general, as a root defining force in human ethical and social mores. “Bad habits” are often those very behaviors that “don’t play well” in long-term social relationships, but do have short-term benefits.
I have written in the past about the classic introductory psychology example of the “fruit versus cake test.” Presented with a plate of attractive fruit along with some delicious-looking cake, we each likely “instinctively” reach for one or the other. Indeed, our personal “cake-vs-fruit percentage” probably tells a lot about us. Yet repeated studies show that a stressor as small as trying to remember a seven-digit number can “tip the probability” for many of us toward the more short-term pleasure offered by the cake (Guilty as charged!).
And so, we can think of bad habits as the self-destructive “prior probabilities” deeply ingrained in our brains that can likely be changed only through repeated and successful “Bayesian challenges” that overcome that “prior” with new and better reinforcement memories. “Good habits,” on the other hand, are similar probabilistic behaviors that keep us in better long-term personal and “community” health, that likewise need to be continually strengthened through repeated “Bayes testing.”
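As a toy calculation (with invented likelihoods), a deeply ingrained prior takes several consistent contrary experiences to overturn:

```python
def bayes_update(prior, p_e_if_true, p_e_if_false):
    # The same single update as in the earlier sketch.
    p_evidence = p_e_if_true * prior + p_e_if_false * (1 - prior)
    return p_e_if_true * prior / p_evidence

belief = 0.9   # the entrenched habit's deeply ingrained "prior probability"
updates = 0
while belief > 0.5:
    # Each contrary "good" experience is evidence against the old habit.
    belief = bayes_update(belief, p_e_if_true=0.3, p_e_if_false=0.8)
    updates += 1

print(f"Prior overturned after {updates} contrary experiences")   # 3
```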
Australian singer-songwriter Billy Field had a top hit in his own country in 1981 with a great song about “Bad Habits,” which has a more recent cover recorded by Lake Street Dive’s lead singer Rachael Price.
Notes:
- A good source for more information on Bayes Theorem is McGrayne, Sharon Bertsch. The Theory That Would Not Die: How Bayes’ Rule Cracked the Enigma Code, Hunted down Russian Submarines, and Emerged Triumphant. Yale University Press, 2012.
- Neurologist and science podcaster Steven Novella is a frequent critic of P-hacking. See Novella, Steven. “Statistical Significance and Toxicity.” Science Based Medicine, 27 Mar. 2019.
- One of the best books on this subject is Shubin, Neil. Your Inner Fish: The Amazing Discovery of Our 375-Million-Year-Old Ancestor. Penguin Books, 2009.
- Note that this process is not necessarily “designed” or “intentioned”; rather, it is the mathematical result of slight advantages to procreation that turn into sizable changes over millions of years. My favorite simplification of this math is that “if your parents didn’t live long enough to procreate, then you won’t either.”
- Raviv, Shaun. “The Genius Neuroscientist Who Might Hold the Key to True AI.” Wired, 19 Nov. 2018.
- Basically, Bayes Theorem calculates the probability that Event A will occur given that Event B has already occurred:

  P(A|B) = P(B|A) × P(A) / P(B)
For additional posts on probability, volition and ethics, follow the Dice icon back or forward where it appears.