A recent post looked at the concept of *Markov chains* to help us see the process by which people change allegiance from one restaurant to another, or one political position to another. This post follows up with some of the math behind Markov chains and gives you access to a spreadsheet to let you experiment with the concept.

If any math pedants are reading this, I am intentionally presenting my own very simplified overview and examples of this concept. I know it gets much more complicated pretty quickly.

As a review, the mathematics behind Markov chains was developed by the Russian mathematician Andrey Markov (1856–1922). Markov chains look at how the “states” of dynamic systems change over time, where “time” might mean years, days, seconds, or even nanoseconds. The basic idea is that what happens in the immediate future, **“time t+1”**, depends on the current state, **“time t”**, adjusted by the changes impacting that state. [1]

**A “Two Node” simulation**

The simplest examples have just two states, like the aforementioned restaurant example, where we either stay with **Restaurant A** for our next visit or switch to its startup competitor, **Restaurant B**, with the two probabilities adding up to 1 (or 100%). In math terms, the probability of staying loyal is **P(A|A)**, the probability of being at **Restaurant A** at **“time t+1”** given we were at **Restaurant A** at **“time t”**. The alternative is **P(B|A)**, the probability of switching to **Restaurant B**, given that we were at **Restaurant A** last time. I call this latter probability the *defection rate.*

And then **Restaurant B** also has its own “loyalty rate,” **P(B|B)**, and its own defection rate, **P(A|B)**, as shown below; these also add up to 1:

In this example, 10% of **Restaurant A’s** customers will “defect” to the new **Restaurant B** each day, while 90% remain loyal. But once there, 5% of **Restaurant B’s** customers will “defect back” to **Restaurant A**, with the remaining 95% now loyal instead to the new **Restaurant B**. The question, then, is what will happen to each restaurant over time, as each day brings a new set of defections in each direction?

There is a direct mathematical solution to a simple Markov chain like this one, but I wrote a spreadsheet [see Note 2 below] that walks through each “new day” until the business at each restaurant stabilizes (and it will, if the defection rates don’t change). If we start the existing **Restaurant A** at 1000 customers on the first day, and the new **Restaurant B** at zero customers, then the days map out like this (rounding to whole customers, which slightly affects our final answer but leaves bodies intact):
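For readers who prefer code to cells, here is a minimal Python sketch of what the spreadsheet does each “new day,” using the 10% and 5% defection rates from the example (the variable names are my own):

```python
# Two-node Markov chain: each day, a fixed fraction of each restaurant's
# customers defects to the other one.
defect_a_to_b = 0.10  # 10% of Restaurant A's customers switch to B each day
defect_b_to_a = 0.05  # 5% of Restaurant B's customers switch back to A

a, b = 1000, 0  # Day 1: Restaurant A has all 1000 customers, B has none
for day in range(2, 41):
    moved_to_b = round(a * defect_a_to_b)  # round to whole customers
    moved_to_a = round(b * defect_b_to_a)
    a = a - moved_to_b + moved_to_a
    b = b - moved_to_a + moved_to_b

print(a, b)  # settles close to the one-third / two-thirds split
```

Exact final values can differ by a customer or two depending on the rounding rule used (Python rounds halves to even, while spreadsheets typically round halves up), but the split lands in the same place.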

Note that one hundred people (10% of Day 1’s customers) defect to **Restaurant B** on Day 2, and 90 more (10% of the remaining 900) on Day 3. However, by Day 3, 5% of the first 100 defectors (5 people) “defect back” to **Restaurant A**. This back-and-forth continues, with **Restaurant B** “winning” the defection war, but by a declining margin each day. By Day 36, the numbers stabilize, at the point where the *total number* of defections from each side is equal, even though the *defection rates* are different.

Because we didn’t dissect people in this simulation, our **Restaurant A** number stabilizes at 335 people, but if we didn’t round each day’s new “state,” it would stabilize at one-third of the total, or 333.33 people, because of the relative defection rates in this case. Both the time it takes to stabilize and the final stabilization numbers will vary, depending on the size of the initial states, the absolute size of the defection rates, and their relative difference.
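That one-third figure isn’t a coincidence. For a two-node chain, the long-run split can be computed directly: a restaurant’s steady-state share is the *other* side’s defection rate divided by the sum of both defection rates (the point where defections in each direction balance out). A quick check with the 10%/5% rates from the example:

```python
defect_a_to_b = 0.10
defect_b_to_a = 0.05
total_customers = 1000

# At equilibrium the flows balance: share_a * 0.10 == share_b * 0.05,
# and share_a + share_b == 1, so share_a = 0.05 / (0.10 + 0.05) = 1/3.
share_a = defect_b_to_a / (defect_a_to_b + defect_b_to_a)
print(share_a * total_customers)  # about 333.33 customers
```

Note that the starting states drop out of the formula entirely: the steady-state split depends only on the two defection rates.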

In that earlier post, I suggested that this same approach can be used to project how people “change their minds” on political candidates and positions. We may not even realize it, but each new piece of political information that comes into our daily information “feed” can subtly change the “loyalty rate” and the “defection rate.” Minds are changed one at a time. Add up a few thousand people, however, with the changes in each direction balancing against one another, and you can see how candidates gain voter share, lose it, and stabilize their standing over time.

You can click on this spreadsheet to play with these numbers yourself. Just enter values into the boxed cells for defection rates and the initial number of people for each “restaurant” or “political position.”

**A Three Node simulation**

What happens if we have *three* restaurants or political candidates to choose among? The aforementioned spreadsheet also has a tab for a three-node simulation. The complexity is that we now have three pairs of “defection rates” to figure out. Let’s assume that, instead of days as our time period, we will track weeks. With Markov chains, the time between “states” is whatever we want it to be.

In this example, assume that there are three political candidates, each starting with 1000 supporters. We have to estimate (or poll) how many of **Candidate A’s** supporters defect to **Candidate B** and **Candidate C** *each week*, respectively, and repeat the pairings for the other two candidates. Here are our estimates:

Note that for any candidate, the sum of the two defection rates plus the “loyalty rate” must be 1 (or 100%). Here are the results of the first ten weeks (again rounding to whole people):

This simulation stabilizes at about 35 weeks, unless something changes the defection rates along the way:
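The same weekly update generalizes naturally to three (or more) nodes using a transition matrix, where each row holds one candidate’s loyalty rate and two defection rates, and each row sums to 1. The rates below are illustrative stand-ins of my own, not the estimates from the table above:

```python
# transition[i][j] = fraction of candidate i's supporters who are with
# candidate j one week later; each row must sum to 1 (loyalty + defections).
transition = [
    [0.90, 0.06, 0.04],  # Candidate A: 90% loyal, 6% -> B, 4% -> C
    [0.05, 0.85, 0.10],  # Candidate B: 85% loyal, 5% -> A, 10% -> C
    [0.03, 0.04, 0.93],  # Candidate C: 93% loyal, 3% -> A, 4% -> B
]

supporters = [1000.0, 1000.0, 1000.0]  # Week 1 starting states
for week in range(2, 36):
    # Next week's count for each candidate j is the sum of inflows
    # (and retained loyalists) from every candidate i.
    supporters = [
        sum(supporters[i] * transition[i][j] for i in range(3))
        for j in range(3)
    ]

print([round(s) for s in supporters])
```

The total number of supporters stays fixed at 3000 because every row of the matrix sums to 1; the simulation only moves people around.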

Given our assumptions, **Candidate C** is going to get the plurality of votes if we can hold out for enough weeks. We could bump the number of candidates up, and the same Markov chain math works, but the number of defection rates grows quadratically (n candidates require n × (n − 1) of them), quickly getting unwieldy.

In all these cases, note that a major factor is the net “capture.” If we want to win people to our side, we need not only to attract them to our position or restaurant, but also to keep them coming back.

Notes:

1. I have explored the idea of **“time t+1”** in a non-mathematical way in an earlier post about how ants make decisions.
2. Click here to download an Excel spreadsheet with tabs for both the two-node and three-node Markov chain simulations above. Enter your own “defection rate” data and initial state values in the boxed cells in the spreadsheet and then view the changes to the results.