Having a strong opinion about an issue can make it hard to take in new information about it, or to consider other options when they're presented. Thankfully, there’s an old rule that can help us avoid this problem — and even help us make good decisions when we’re uncertain. Here’s how Bayesian reasoning works, and why it can make you a better thinker.
To find out more about this topic, we spoke to mathematician Spencer Greenberg, co-founder of Rebellion Research and a contributing member of AskAMathematician where he answers questions on math and physics. He has also created a free Bayesian thinking module that's available online.
Bayes’s Rule comes from a mathematical formula, but as we learned from Greenberg, you don’t need to know the equation or do fancy math to apply the principle to daily life.
“It’s usually not that useful writing out Bayes’s equation,” he told io9. “And in fact, in order for this line of thinking to be useful in day to day life, you have to be able to think about it without having to sit down and write out a formula.” Fortunately, says Greenberg, there is a way to do that.
The Power of Probabilistic Reasoning
Bayes’s Rule is a theorem in probability theory that answers the question, "When you encounter new information, how much should it change your confidence in a belief?" It’s essentially about making decisions under uncertainty, and about how we should update or revise our beliefs as new evidence emerges. It can help us reach sound conclusions when very few observations or pieces of evidence are available, and it can help us avoid common mistakes and fallacies in our thinking.
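For reference, the rule itself fits on one line. In standard notation, where H is a hypothesis and E is a piece of evidence:

```latex
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}
```

Here P(H) is your prior (how confident you were before the evidence), P(E | H) is how likely the evidence is if the hypothesis is true, and P(H | E) is your updated confidence afterward. As Greenberg notes, though, you rarely need to plug numbers into this directly.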
Bayesianism is a great example of math applied to daily life. It’s derived from a widely accepted and uncontroversial formula that’s been around for hundreds of years.
“But it gets philosophically interesting when you start to interpret its implications,” says Greenberg.
The 18th-century mathematician and theologian Thomas Bayes came up with the formula, and it has been used in a variety of applications ever since. Today, it’s used to analyze sequences of data in fields such as finance, artificial intelligence, engineering, medicine, and philosophy.
The key to Bayesianism is in understanding the power of probabilistic reasoning. But unlike games of chance, in which there’s no ambiguity and everyone agrees on what’s going on (like the roll of a die), Bayesians use probability to express their degree of belief about something.
Degrees of Belief
When it comes to the confidence we have in our beliefs — which can be expressed in terms of probability — we can’t just make up any number we want. There’s only one consistent way to handle those degrees of belief.
“And that’s what Bayes’s Rule tells us: if we have a certain degree of belief in something, and then we get some evidence, the Rule tells us how to change that degree of belief to come up with a new, or updated, strength of belief.”
And this, says Greenberg, is where Bayes gets really interesting.
“When you believe something is 20% likely, and then you get a new piece of information, it can tell you whether you now should think it's 10% likely, or 40% likely — it basically tells you how to process that information.”
In the strictest sense, of course, this requires a bit of mathematical knowledge. But Greenberg says there’s still an easy way to use this principle in daily life, one that can be put in plain English.
“It’s not the easiest thing in the world, because we don’t make up the rules of evidence,” he says. “The rules of evidence are inherent in the way that probability works — perhaps even in the way the universe works — so we don’t get to choose how we actually process evidence that’s given to us."
Question of Evidence
Greenberg presents us with an example, one that involves a “belief” and what he calls the “question of evidence.”
Let’s take John. John is 20% certain that his exercise routine is giving him more energy throughout the day. After performing the routine, he notices that he feels energetic, and the question arises in his mind: how strong is that piece of evidence? Strong? Moderate? Weak? It’s here where Bayes can help.
Greenberg says this is where John should apply the question of evidence, which goes like this:
Assuming that our hypothesis is true, how much more plausible, or likely, is the evidence than it would be if the hypothesis were not true?
So, John needs to ask himself: how often do I feel energetic anyway? If he feels energetic one out of every three days regardless, then today’s burst of energy is relatively weak evidence; it’s not much more likely that he’d feel this way if his belief were true than if it weren’t. But if he normally has low energy virtually every day of the year, then he can say the evidence is very strong.
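To see the mechanics behind John's reasoning, here's a minimal sketch in Python of the update in odds form. The 20% prior comes from the example; the likelihood ratios are hypothetical numbers chosen only to illustrate weak versus strong evidence:

```python
def update_belief(prior: float, likelihood_ratio: float) -> float:
    """Bayesian update in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# John starts at 20% confidence that his exercise routine boosts his energy.
prior = 0.20

# The "question of evidence": how much more likely is feeling energetic today
# if the belief is true than if it isn't? (Hypothetical ratios.)
weak = update_belief(prior, likelihood_ratio=1.5)   # he often feels energetic anyway
strong = update_belief(prior, likelihood_ratio=20)  # he almost never does otherwise

print(f"after weak evidence:   {weak:.0%}")    # ~27%
print(f"after strong evidence: {strong:.0%}")  # ~83%
```

Either way John's confidence rises, but weak evidence barely moves it, while strong evidence moves it a lot.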
“It’s important to note that the idea here is not to answer the question in a precise way — like saying that it’s 3.2 times more likely — rather, it’s to get a rough sense. Is it a high number, a modest number, or a small number?”
To make Bayes practical, we have to start with a belief about how likely something is. Then we ask the question of evidence, and decide whether to increase or decrease the confidence in our belief, and by how much.
“It doesn’t mean you should change your mind necessarily,” he adds.
“Let’s say you get some evidence that might actually be legitimate evidence,” he says. “Much of the time people will automatically try to shoot down evidence, but you can get evidence for things that are not true. Just because you have evidence doesn’t mean you should change your mind. But it does mean that you should change your degree of belief.”
Greenberg's point is that if you used to believe something had a 1 in 1,000 chance of being true, then, armed with that new evidence, you might now think it has a 1 in 100 chance of being true.
“You still think it’s unlikely,” says Greenberg, “just less unlikely.”
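In odds form, the arithmetic behind a jump like that is a one-liner. A sketch, assuming the evidence is roughly ten times more likely if the claim is true than if it isn't (a figure chosen to match the example):

```latex
\text{posterior odds} = \text{prior odds} \times \text{likelihood ratio}
\approx \frac{1}{999} \times 10 = \frac{10}{999} \approx \frac{1}{100}
```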
Correcting Glitches in Our Thinking
Bayesian inference can also help with common fallacies and errors in thinking.
A typical area in which people make mistakes is in assessing evidence, particularly when confounding it with irrelevant data, or when starting with the wrong initial belief.
Greenberg offers the example of a cancer test that is said to be 98% accurate. The cancer in question is rare: people have a one in a million chance of having it.
“But if you ask most people the probability that you have cancer given that the test comes out positive, most will say 98%, because the test is 98% accurate,” says Greenberg. The problem, he says, is that people start from the wrong initial belief: it should have been the one-in-a-million base rate. Their answer, and their line of reasoning, would be the same no matter how rare the cancer is, and that is precisely the mistake.
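Running the actual numbers shows how far off the intuitive answer is. A quick sketch, assuming that "98% accurate" means both a 98% true-positive rate and a 2% false-positive rate (the example doesn't pin this down):

```python
# Base rate: one in a million people have this cancer.
prior = 1 / 1_000_000

# Assumed reading of "98% accurate": the test catches 98% of real cases
# and wrongly flags 2% of healthy people.
p_pos_given_cancer = 0.98
p_pos_given_healthy = 0.02

# Bayes's Rule: P(cancer | positive) = P(positive | cancer) * P(cancer) / P(positive)
p_positive = p_pos_given_cancer * prior + p_pos_given_healthy * (1 - prior)
posterior = p_pos_given_cancer * prior / p_positive

print(f"P(cancer | positive) = {posterior:.4%}")  # ~0.0049%, about 1 in 20,000
```

So even after a positive result, the chance of actually having the cancer is around 1 in 20,000, nowhere near 98%: the rarity of the disease swamps the accuracy of the test.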
Greenberg also describes the representativeness heuristic, in which people judge how likely something is by how similar it looks to what they have in mind.
For example, we hear about a person who wears a pocket protector and is a bit nerdy, someone who fits the stereotype of an accountant. We then ask: what is the probability that this person is an accountant? How representative is this person of our mental image of an accountant?
But as Greenberg points out, this is not sufficient to make a determination.
“To get people to make the wrong judgment on tests, to get their probabilities way, way off, choose a very rare profession,” says Greenberg, “but make it sound like something stereotypical of that profession.”
Clearly, this is a problem that feeds into stereotyping, but one that Bayes can help to remedy.
The more pertinent question to ask is how common or rare the profession, in this case accountancy, actually is. Greenberg says we need to start with the base rate of the trait in the population, and only then weigh how well the person matches the stereotype.
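The same odds arithmetic shows why the base rate dominates. Suppose, hypothetically, that only 1 person in 1,000 works in the profession being described, and that the nerdy description is 20 times more likely to fit someone in that profession than someone outside it:

```latex
\text{posterior odds} = \frac{1}{999} \times 20 = \frac{20}{999} \approx \frac{1}{50}
```

Even with a strong stereotype match, there's only about a 2% chance the person actually holds the profession; similarity alone is nowhere near enough.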
No Absolute Beliefs
Greenberg also says that we should shy away from phrases like, “I believe,” or “I don’t believe.”
“That’s the wrong way to frame it,” he says. “We should think about things in terms of how probable they are. You almost never have anything close to perfect certainty.”
Framing things as ‘I believe’ or ‘I don’t believe,’ he says, can also be very dangerous.
“Let’s say you believe that your nutrition supplement works,” he told us. “Then you get a small amount of evidence against it working, and you completely write that evidence off because you say, ‘well, I still believe it works because it’s just a small amount of evidence.’ But then you get more evidence that it doesn’t work. If you were an ideal reasoner, you’d see that accumulation of evidence, and every time you get that evidence, you should believe less and less that the nutritional supplements are actually working.”
Eventually, says Greenberg, the accumulating evidence should tip the scales until you no longer believe. But instead, many of us end up never changing our minds.
Greenberg says we should use phrases like, “I think this is likely,” or “I think this is very likely.” We can “believe” things to some degree, but not to an unlimited degree.
We should also refrain from claiming to have absolute certainty. Even the smallest amount of skepticism is necessary; it’s okay to say that something is incredibly, incredibly probable, but not that it is 100% certain.
“You should never say that you have absolute certainty, because it closes the door to being able to revise your certainty in light of new information,” Greenberg told io9. “And the same thing can be said for having zero percent certainty about something happening. If you’re at 100% certainty, then the correct way of updating is to stay at 100% forever, and no amount of evidence can tip you.”
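Greenberg's point falls straight out of the formula: a prior of 100% (or 0%) is a fixed point of the update, so no evidence can ever move it. Assuming the evidence isn't outright impossible under the hypothesis:

```latex
P(H) = 1 \;\Rightarrow\; P(H \mid E) = \frac{P(E \mid H) \cdot 1}{P(E \mid H) \cdot 1 + P(E \mid \neg H) \cdot 0} = 1
```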
Greenberg compares certainty to the speed of light.
"You can keep adding and adding information, getting more and more confident, but you can never get all the way to 100%,” he says. We’re dealing with finite amounts of evidence.
“There's always some chance that we've misunderstood something somewhere, or have made a reasoning error, or failed to grasp some unknown unknown, or even gone mad and don't realize it. This creates wiggle room, making it hard to ever justify believing with 100% certainty. You would have to be 100% certain that you aren't making some tiny error somewhere, and 100% certain that you understand all the relevant facts, etc., which isn't attainable in practice.”
Probability Relativity
Lastly, he says that probabilities can depend on the observer, a kind of probability relativity. We all have access to different information, so different people should assign different probabilities to the same claim, based on their different sets of evidence.
For example, take the flip of a coin. One person has seen the outcome and the other has not. The one who has seen it is 99.999999% certain that it’s heads, because she watched it land (the slight margin for error allows for a memory glitch or a visual mistake). The other person has to settle for 50% certainty.
So that’s Bayes’s Rule. Now go out there and start updating your beliefs as new evidence emerges!