The 100% problem with the 90% solution
One of my favorite quotes is from a gentleman named Olin Miller: “In order to be absolutely certain about something, one must know everything or nothing about it.” In almost all of my training courses, keynote speeches, and presentations, I include this quote on a slide near the end.
I have been reminded of the wisdom of Mr. Miller’s insight recently as I’ve been reading two books: The Signal and the Noise by Nate Silver (which I just finished), and How to Measure Anything by Douglas Hubbard (which I’m halfway through). One common thread between the two books is Bayesian thinking.
The Rev. Thomas Bayes was an 18th-century Englishman who is well known among statistics geeks. Rev. Bayes came up with a very simple formula for how we should update our beliefs when we receive new information. In fact, David C. Knill and Alexandre Pouget of the University of Rochester published a paper in Trends in Neurosciences back in 2004 making the case that our brains use Bayesian logic all the time when trying to formulate a sensible explanation for what’s going on in the world around us, based on the sensory information the brain is receiving at the time. We are constantly weighing the probabilities of different possible explanations for what we are seeing, hearing, feeling, smelling, etc. This is how optical illusions work: the explanation our brain settles on as the “most likely” isn’t the correct explanation at all.
Silver and Hubbard both recommend Bayesian thinking as a way to formulate and modify our beliefs about the world as we gain new information. Hubbard even defines a “measurement” as an observation that reduces our uncertainty regarding some parameter or event. Since Bayesian logic is all about how to modify probabilities in the light of new information, that definition dovetails nicely with Hubbard’s enthusiasm for Bayesian thinking.
For example, if you believe that the probability of war breaking out between India and Pakistan is 20%, and you read a news article about Pakistan sending additional troops to the Indian border, how should you revise your estimate of the probability of war? It will almost certainly go up, but by how much? Or if you believe there’s a 90% probability that increasing foot patrols by police will cause a decrease in violent crime, and you read about a city in which foot patrols were increased but violent crime rose anyway, by how much should you adjust your belief in the effectiveness of this approach? In this case, the new information runs counter to our theory, so our estimate of the probability that the theory is correct should go down. But by how much? Reverend Bayes has the answers.
In the foot patrol/violent crime example, you have to ask yourself, “If the theory is true – i.e., if there really is a cause-and-effect relationship between increased foot patrols and decreasing violent crime – what is the probability of an increase in violent crime occurring anyway from one year to the next, despite the increase in foot patrols?” This could happen (after all, there are factors at work other than just foot patrols), but given that the cause-and-effect relationship is real, you might think such an increase would be unlikely – say, a 20% chance of occurrence.
However, you also have to consider the possibility that the theory is not true – that increased foot patrols have no effect on violent crime. If that is the case, then the probability of an increase in violent crime following an increase in foot patrols should be 50% – a coin flip. Given these estimates, Bayes’s equation (shown at the end of this posting) tells us that if we do observe an increase in violent crime in the wake of an increase in foot patrols, we should revise our estimate of the probability that the theory is true from 90% down to 78% – perhaps not as big a drop as most people would expect.
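(For the record, that 78% comes straight from the equation at the end of this posting, with A standing for “the theory is true” and B for “violent crime rose anyway”: P(A|B) = (0.90*0.20) / (0.90*0.20 + 0.50*0.10) = 0.18/0.23 ≈ 0.78.)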
That’s fine, but why am I boring you with all of this statistical geekiness? Well, suppose you are so sure that foot patrols help to decrease violent crime that your estimate of the probability of a rise in violent crime following an increase in foot patrols isn’t 20%; suppose you are so confident that you think there’s only a 5% chance of that happening. If you now see an example of a city in which violent crime did indeed rise following an increase in foot patrols, Bayes’s Law says you must revise your belief that foot patrols decrease violent crime all the way down to 47%. This makes sense: if violent crime should almost never rise in the wake of increased foot patrols when the theory is true, then seeing exactly that happen makes it much less likely that the original premise (that foot patrols help to decrease violent crime) is true.
Worse yet, suppose you are absolutely convinced that increasing foot patrols decreases violent crime – i.e., your original estimate for the probability that the theory is true is not 90%, but 100%. In that case, Bayes’s equation says that no amount of evidence can change your mind. Now, we’re not really controlled by equations (regardless of Knill and Pouget’s research), but I find it frightening how many people these days make up their minds on a topic (often without really educating themselves on the subject) and then refuse to budge, regardless of the data put before them. All sorts of rationalizations, including absurd conspiracy theories, are put forward to explain why the evidence does not support the individual’s beliefs. Part of this is due to cognitive dissonance (about which I’ve written in previous postings), but part of it seems to be because once you’ve declared yourself to be absolutely certain about something, backing down becomes enormously difficult.
Overconfidence is currently a plague upon our national policy discussions. People listen only to news outlets that share their political perspectives, becoming ever more certain of their beliefs. People who are absolutely certain of the correctness of their beliefs refuse to modify their positions, regardless of the evidence. John Maynard Keynes is famously quoted as saying, “When my information changes, I alter my conclusions. What do you do, sir?” Unfortunately, altering one’s conclusions to conform to the evidence seems to have gone out of fashion.
In the absence of well-established, unambiguous, statistically valid scientific evidence, Nate Silver encourages people to back off a bit on their certainty. Even moving from 100% confidence to 90% confidence opens the door to changing one’s mind. That’s an important door to open. A mind that won’t change isn’t really a thinking entity.
Bayes’s Equation: P(A|B) = [P(A)*P(B|A)] / [P(A)*P(B|A) + P(notA)*P(B|notA)]
Where
- P(A) = the probability of “A” being true or occurring
- The symbol “|” means “given”
- “notA” means “A” is false or does not occur
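If you’d like to check these numbers yourself, here is a minimal Python sketch of the equation above (the function and variable names are my own, purely for illustration). It reproduces the figures discussed earlier in this posting:

```python
# A minimal sketch of Bayes's equation as written above.
def bayes_update(p_a, p_b_given_a, p_b_given_not_a):
    """Return P(A|B): the updated probability that A is true after observing B."""
    numerator = p_a * p_b_given_a
    denominator = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)
    return numerator / denominator

# 90% prior that the theory is true, a 20% chance of a crime increase even if it is,
# and a 50% chance (a coin flip) if it is not:
print(bayes_update(0.90, 0.20, 0.50))  # ~0.78

# Same prior, but only a 5% chance of a crime increase if the theory is true:
print(bayes_update(0.90, 0.05, 0.50))  # ~0.47

# Absolute certainty: a 100% prior cannot be budged by any evidence.
print(bayes_update(1.00, 0.20, 0.50))  # 1.0
```

As the last line shows, a prior of 100% leaves no room for the evidence to matter, which is exactly the problem with absolute certainty.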