Say you are playing a card game and someone hands you a shuffled deck of cards to examine, and while going through the deck you see three cards of the same value in a row, say, three kings. You will likely say, *“Whoa, this deck is not properly shuffled; the cards aren’t random.”* But you may well be wrong.

It *could be* a bad shuffle, mind you, but in a well-shuffled deck any *particular* sequence of three cards is exactly as likely as any other. You just don’t notice when, say, a three of hearts, a six of spades, and a queen of clubs appear together. Randomness in the real world is always more “clumpy” than you think it is. [1]
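To see how common these runs really are, here is a quick simulation sketch (plain Python; the helper names are my own) estimating how often a well-shuffled 52-card deck contains three consecutive cards of the same value:

```python
import random

def has_three_in_a_row(deck):
    """True if any three consecutive cards share the same value."""
    return any(deck[i] == deck[i + 1] == deck[i + 2]
               for i in range(len(deck) - 2))

def estimate_clump_rate(trials=20_000, seed=1):
    """Fraction of random shuffles containing a three-of-a-kind run."""
    rng = random.Random(seed)
    deck = [value for value in range(13) for _ in range(4)]  # 13 values x 4 suits
    hits = 0
    for _ in range(trials):
        rng.shuffle(deck)
        hits += has_three_in_a_row(deck)
    return hits / trials

print(f"shuffles with a three-in-a-row run: {estimate_clump_rate():.1%}")
```

A back-of-the-envelope check: there are 50 starting positions for a run, each matching with probability (3/51)(2/50), so we expect about 0.12 runs per shuffle, i.e., a three-of-a-kind run in roughly one shuffle in nine. Far from rare.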

The primary practical implication of this example of “clumpiness” is that much of the profit model of the gaming industry is based on convincing you that you are seeing a “pattern in the randomness,” when you are more often just seeing “natural clumping.” Slot machines and dice games are the best examples of profiting off imagined patterns that are really just random clumps.

This post looks at several ways in which we confuse coincidence and fate when we observe the world around us. Here are a couple more of my favorites, with some real-world applications and misapplications:

**The expert coin flipper in your midst**

In a variant of an example I first came across in an early edition of Burton Malkiel’s *A Random Walk Down Wall Street*, I have had classes of about 30 students look for the “expert coin flipper” in the room, the person who can flip four or more heads in a row. The students all stand up and each flips a coin. Those who flip a tail must sit down, and those still standing flip again. After four flips, there will almost always be at least one student standing, and often you will see five or six heads in a row in this process, starting with just 30 initial players.

In truly random coin flips, each flip is *independent* of the ones before it, and so each flip has a fifty-fifty outcome no matter how many heads came before. For any *one* person, flipping four heads in a row does indeed seem “unlikely” or “lucky,” and it is: a 1-in-16 chance. However, if you start with 30 people, 15 on average will remain after the first flip, 7 or 8 after the second, 3 or 4 after the third, and so on. The small individual probability becomes a near-certainty because of *the law of large numbers* (which are not even so large in this case).
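The arithmetic above can be checked directly. This sketch (plain Python, my own helper names) computes the exact chance that at least one of 30 players flips four heads in a row, and simulates one elimination game:

```python
import random

def run_elimination(players=30, rounds=4, rng=None):
    """Everyone flips; tails sit down. Return how many remain standing."""
    rng = rng or random.Random()
    standing = players
    for _ in range(rounds):
        standing = sum(rng.random() < 0.5 for _ in range(standing))
    return standing

# Exact chance that at least one of 30 players flips four heads in a row:
p_one = 0.5 ** 4               # 1/16 for any single player
p_any = 1 - (1 - p_one) ** 30  # about 0.86
print(f"P(at least one 'expert' among 30) = {p_any:.3f}")
print(f"one simulated game leaves {run_elimination(rng=random.Random(0))} standing")
```

So the classroom “expert” shows up about six games in seven, with no expertise involved.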

I have written before about the problem of “P-hacking” in some scientific research, which is the coin-flipper phenomenon in practice. Researchers may lean on the statistical “P-value” of an analysis to argue that a proposed relationship, say between a particular “cause” and a subsequent medical condition, is *not* “just random.”

But if you keep analyzing the *same* large data set in multiple ways, you are effectively starting the “30-player coin flip,” and the odds are good that at least one of your tests will show a “positive correlation” when you are really just watching that “expert coin flipper” in action. The adjustment for this error is usually pretty simple: gather a fresh data set, re-run the tests, and see whether the *same* “expert coin flipper” shows up as the winner. That *could* happen, but probably (and usually) not.

**Cooking the books**

Here is the test: You have been given a list of 100 individual bank account balances, and, without looking at that list, you make up a second list of 100 “random” balances for the same people off the top of your head, trying to spread the amounts around. You then hand both lists to an independent evaluator and ask her to predict which of the two is “faked.” If your evaluator knows a statistical property of “clumped randomness” called *Benford’s Law*, she can likely pick out the real list every time.

It turns out that all the evaluator needs to do is count occurrences of the *first digit* of each account balance. In other words, count the total number of balances between $100.00 and $199.99, or between $1000.00 and $1999.99, or between $10,000.00 and $19,999.99, etc., and record each as a “one.” Counter-intuitively, in a natural collection of bank balances the first digit will be a *one* about 30% of the time (about 30 of these accounts), a *two* about 17-18% of the time, and an *eight* only about 5% of the time.
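The evaluator’s first-digit count, and Benford’s predicted shares, can be sketched as follows (a toy example, assuming balances that grow multiplicatively at a steady 3% per step, which is my own stand-in for real account data):

```python
import math
from collections import Counter

def first_digit(x):
    """Leading digit of a number >= 1."""
    while x >= 10:
        x /= 10
    return int(x)

# Benford's predicted share for leading digit d: log10(1 + 1/d)
benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

# Balances that grow multiplicatively (like money) track the law closely:
balances = [100 * 1.03 ** k for k in range(500)]  # steady 3% growth steps
observed = Counter(first_digit(b) for b in balances)

for d in range(1, 10):
    print(f"{d}: Benford {benford[d]:5.1%}  observed {observed[d] / 500:5.1%}")
```

The shares fall off logarithmically from digit one to digit nine, which is exactly the “clump” a naive faker fails to reproduce.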

That’s *Benford’s Law*, which I have found mystifies even many people who are good at math. I wrote an explanation of it in an earlier post. A financial analyst warned authorities about Bernie Madoff’s massive fraud using this technique years before it became public in 2008, but he was ignored. In making up your own “fake” account balances, you (and Bernie Madoff) would likely try to “balance out” the numbers across the spectrum of digits from one to nine, but you would not really be acting like “nature’s randomness,” which is much clumpier than you think. [2]

**Do bad things happen in threes?**

I also wrote about this phenomenon a while back, but I still cringe a bit whenever I hear this expression. The short answer is *“No, they don’t,”* but there is a very natural occurrence called *Poisson clumping* that seems to “trick our mind.”

*Poisson processes* (pronounced “*pwa-sahn*”) are time-based random events that occur frequently in biology, like cell replication, and in human experience, like telephone calls coming into a call center, or the deaths of celebrities. These processes exhibit a long-term “average rate” of occurrence over time, and over a long period with many observations they appear statistically “normal,” exhibiting the familiar bell-shaped curve.

However, because the time between successive occurrences has a “hard stop” at *zero* days, minutes, or seconds (“time zero” being right now), the normal curve gets “squished” toward the zero end when we measure short periods and small numbers of events. Even if each celebrity death is “statistically independent” of the others, this squashed probability curve makes short gaps the most common ones, so our short attention spans will often perceive Poisson clumping as, say, three important deaths within a short (and arbitrary) amount of time. [3]
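A rough simulation of this squeezing effect (the rate and the two-week window are arbitrary assumptions of mine): generate a Poisson process by summing exponential gaps, then count three-event clumps:

```python
import random

def poisson_arrivals(rate_per_year=10.0, years=50.0, rng=None):
    """Event times from a Poisson process: exponential gaps between events."""
    rng = rng or random.Random(7)
    times, t = [], 0.0
    while True:
        t += rng.expovariate(rate_per_year)
        if t > years:
            return times
        times.append(t)

def triple_clumps(times, window=14 / 365):
    """Count spots where three successive events fit in a two-week window."""
    return sum(1 for i in range(len(times) - 2)
               if times[i + 2] - times[i] <= window)

events = poisson_arrivals()
print(f"{len(events)} events over 50 years, "
      f"{triple_clumps(events)} three-in-two-weeks clumps")
```

With these assumed numbers (ten “celebrity deaths” per year), the Erlang math in note [3] predicts such a clump roughly 28 times per 50 years on average, even though every event is independent.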

We don’t typically ascribe a “pattern” to just two near-simultaneous events, and the odds drop off significantly for four or more deaths within our notoriously short attention spans. Add to this our very imprecise definition of “celebrity” and a fuzzy relevant time frame, and we are basically “spooked” by very natural happenings.

**Visual clumps**

*Pareidolia* is essentially a “brain misfire” when it encounters “image clumping.” When our brain perceives random patterns that have “clumped” to resemble characteristics of the human face, it becomes very hard *not* to interpret the image as a face, such as with the famous “face on Mars”:

When viewed in different lighting and from a different direction, however, the effect disappears:

Evolutionary biologists have proposed various survival advantages to having this “brain misfire,” and they all lend credence to the conception of the brain as primarily a “probability evaluator” or “inference engine” that is usually correct, but sometimes wrong. We are “hard wired” to look for visual “clumps” in nature. Better that we *think* we see a lion in the grass and are in fact wrong than to *ignore* the face-like visual clumps and be wrong “just once.” Only one of these humans is likely to pass on genes to the next generation.

The short message here is that, to resurrect a popular but crude car bumper sticker from the 1990s, *“Shit happens,”* but it happens *probabilistically*, and more often in clumps than we would think.

Notes:

1. This is an example of the law of large numbers. The odds of *any* particular three-card sequence occurring are tiny. Indeed, it has been estimated that there are more shuffle combinations of a normal 52-card deck than there are atoms on Earth. But as with lotteries, “winners” *will* happen because of the massive number of “tickets sold.”
2. The short explanation here is that humans count in evenly-spaced “tens,” but nature does not. Money and living objects tend more often to grow *logarithmically*.
3. Technically, the *Erlang distribution* models the continuous time spans between events, while the discrete *Poisson distribution* counts the number of events within a particular time span.
