Sapiens' Health Statistical Illiteracy

November 9, 2019 | Leadership & Management | Written by Mohamed Soliman

Haunted by decision options, do we naturally understand statistical probabilities well enough to make the right decisions?

KEY POINTS

  • Despite encountering them every day, we do not intuitively understand risks and probabilities; we suffer from a “collective statistical illiteracy” that can have serious consequences for our decisions, especially when it comes to our health.
  • Consider absolute rather than relative risks.

***

“The fool doth think he is wise, but the wise man knows himself to be a fool.”

-William Shakespeare, As You Like It, Act 5, Scene 1

Whenever the term statistics comes up in casual conversation, the most common response is distaste. Most of us cringe at tasks involving statistics. We all seem to have a hard time understanding risks and probabilities, which can be attributed in part to our inherited cognitive biases. Some suggest that we suffer from what has been dubbed “collective statistical illiteracy,”* the inability to critically assess the probabilities and statistical information we encounter daily. This phenomenon extends even to highly educated professionals.

A classic example is the 1995 Contraceptive Pill Scare, in which the UK Committee on Safety of Medicines published a warning that third-generation oral contraceptive pills increased the risk of fatal blood clots “by 100%.” The warning created a wave of panic among the public and confusion among physicians, and many women abruptly stopped taking the pills altogether. The result of this “100%” increased-risk announcement was an estimated 13,000 additional abortions in 1996 and an added cost at the time of $70 million to the UK NHS!

What went wrong? The committee presented the statistics to physicians and the public as a relative risk rather than an absolute risk. The data comparing the side effects of second- and third-generation contraceptive pills showed that 2 in 7,000 women on a third-generation pill suffered major clots, compared with 1 in 7,000 women on a second-generation pill. The “100%” relative increase was not a lie, but the absolute increase of 1 extra case per 7,000 women is extremely small at the population level. What would have happened if the warning had been framed as an absolute risk? Could that have meant fewer abortions and unplanned pregnancies?
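The two framings can be computed side by side from the same data. This is just a sketch of the arithmetic using the 1-in-7,000 and 2-in-7,000 figures quoted above; the function name and structure are mine, for illustration:

```python
def risk_summary(cases_baseline, cases_new, population):
    """Compare two risks both ways: relative increase and absolute increase."""
    p_base = cases_baseline / population
    p_new = cases_new / population
    relative_increase = (p_new - p_base) / p_base  # 1.0 means "100% higher"
    absolute_increase = p_new - p_base             # extra cases per person
    return relative_increase, absolute_increase

# The pill-scare numbers: 1 vs 2 major clots per 7,000 women.
rel, abs_inc = risk_summary(1, 2, 7000)
print(f"Relative increase: {rel:.0%}")                             # 100%
print(f"Absolute increase: {abs_inc * 7000:.0f} extra case per 7,000 women")
```

The same underlying data yields an alarming "100%" or a reassuring "1 extra case per 7,000," depending only on the denominator chosen for the headline.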

Another example is screening tests. Most of us will undergo a medical screening test at some point in our lives. When we hear the statement “regular screening tests every year in your age group will prevent the risk of certain types of cancer,” we commonly interpret it as “undergoing this screen will decrease my risk of getting a certain disease.” We know that screening detects existing disease at an early stage, but we overlook the fact that early detection does not reduce the chance of getting the disease. For example, a regular mammogram every 1-2 years does not affect the risk of getting breast cancer, nor will it prevent it. Screening is meant for early detection, which can lead to better therapeutic outcomes. Yet almost three-quarters of a random sample of German women who underwent mammogram screening believed that screening reduces the risk of developing breast cancer. Similar responses were reported in the US and UK, among other developed countries.

A third example is media statements such as “five-year survival among patients in this facility continued to increase,” which need to be interpreted with caution. It is not uncommon to see intentionally misleading ads that play “mortality rates” against “5-year survival rates” to confuse the audience. Consider the following hypothetical scenario.

Suppose a group of patients is diagnosed with disease X at a median age of 55, but all are expected to die by age 57 due to disease complications and limited treatment options. On average, they survive 2 years. What is the 5-year survival of this group? It is 0%: no one lives to age 60. Now suppose the same group had been diagnosed 5 years earlier, at age 50, by a sensitive screening test. They receive the same treatment and still live, on average, to age 57 (remember, the only difference between the two groups is when the disease was detected). What is the 5-year survival rate of this group, measured from diagnosis at age 50? It is 100%. This perfect survival rate appears even though nothing changed about the average time of death, and no one lived any longer because of the early detection of disease X. Simply by shifting the time frame to an earlier point of diagnosis, the survival rate jumped from 0% to 100%.

What to do, then? When making choices about treatment options, we are better off evaluating the annual mortality rate of a disease rather than 5-year survival. In the scenario above, the annual mortality rate is the same in both groups. An improved 5-year survival rate does not mean more lives saved.
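The jump from 0% to 100% survival in that scenario can be reproduced in a few lines. This is a minimal sketch of the hypothetical cohort described above (the cohort size of 1,000 and the function name are my own choices for illustration):

```python
def five_year_survival(ages_at_diagnosis, ages_at_death):
    """Fraction of patients still alive 5 years after their diagnosis."""
    alive_at_5y = [death - dx >= 5
                   for dx, death in zip(ages_at_diagnosis, ages_at_death)]
    return sum(alive_at_5y) / len(alive_at_5y)

deaths = [57] * 1000      # everyone dies at 57 regardless of screening
late_dx = [55] * 1000     # diagnosed at 55, when symptoms appear
early_dx = [50] * 1000    # diagnosed at 50 by a sensitive screening test

print(five_year_survival(late_dx, deaths))   # 0.0
print(five_year_survival(early_dx, deaths))  # 1.0
```

Identical deaths, identical treatment: only the diagnosis date moved, yet the 5-year survival statistic went from 0% to 100%. This is the lead-time effect the scenario illustrates.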

The obvious problem with relative-risk thinking is the overestimation of benefits over risks, or vice versa, which can lead to misinformed decisions. Whenever we are told “the risk of this operation is…,” “the risk of taking this medication is…,” or “the risk of dying from this disorder is…,” there is a natural tendency to think in relative rather than absolute terms. Our brains are wired to compare: a “50% reduction” is easier to grasp than “2 lives saved instead of 1 for every 1,000 people.” Yet a relative risk reduction is of limited value for understanding whether an intervention will yield a benefit if the base rate is not known. This relative-risk thinking extends to professional fields we might assume are less prone to the bias. According to a study published in the prestigious medical journal JAMA, only 25 of 360 original studies published in top-tier medical journals such as JAMA, NEJM, The BMJ, and The Lancet in the 1990s reported absolute risk reduction. By 2007, the share of articles reporting absolute risk reduction had risen to about half in The BMJ, JAMA, and The Lancet.

Lastly, some of the most confusing representations of statistics involve single-event probabilities: probabilities stated for an individual event rather than for a cohort of people. For example, when we are told there is a 15% probability of a side effect from a treatment, many think this means they will suffer the side effect 15% of the time they take their medication. What it really means is that out of every 100 individuals taking this medication, 15 will experience the side effect. This form of probability can be the deal breaker in deciding whether or not to take a medication.
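Translating a single-event probability into the frequency statement it actually encodes is mechanical. A minimal sketch (the helper name is mine, and 100 is just a convenient reference group size):

```python
def as_natural_frequency(probability, group_size=100):
    """Restate a single-event probability as 'x out of N people'."""
    return f"{probability * group_size:.0f} out of {group_size} people"

# "15% probability of a side effect" really means:
print(as_natural_frequency(0.15))  # 15 out of 100 people
```

Stating risks as natural frequencies like this removes the ambiguity about what the reference class is: people, not occasions of taking the pill.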

Despite encountering statistics every day, we do not intuitively understand risks and probabilities. Our “collective statistical illiteracy” biases our decisions, especially when it comes to our health. Because data can be packaged in different ways, we need to be cautious when interpreting published statistics. Asking about absolute rather than relative risks, about mortality rates rather than survival rates, and about the meaning of single-event probabilities is essential for making an educated decision.

* * *

*Gigerenzer, G. (2009). "Making sense of health statistics."

Gigerenzer, G., et al. (2007). "Helping doctors and patients make sense of health statistics."