Thursday, June 13, 2013

Some notes on medical statistics

Over the past year, I've been reading more and more about the causes of obesity and the (related) epidemic of diabetes, since both run in my family. In my reading, I've encountered a lot of dodgy statistics used to bolster research claims.

Statistics allows us to make statements like 'the chance that these dice are unfair is less than 1%', based on throwing them n times and observing the results. We call such results 'significant', where the threshold for significance is often set at a 5% chance that the results arose randomly rather than from some real effect.

(and for the statistics professionals, I know my terminology is sloppy. Have this comic to make up for it:)

The world of medical research also tries hard to do statistics, and by and large fails at it. Partly this is due to a misunderstanding of how statistics work, and partly it is a problem of language.

For example, a pill that causes a 1% absolute reduction in the number of heart attacks in a population can easily produce a 'statistically significant effect'. This is because we might be *very* sure that the "odds ratio" of having a heart attack is 0.99 and not 1: "p < 0.05". That number is not clinically significant though, or more concretely, it is an irrelevant number.

Public relations departments, funding considerations and industry relations, however, practically scream to turn this mathematical, statistical significance into a bold press release announcing an actual significant medical advance.

However, since heart attacks are rare, hundreds of people would have to spend decades taking this particular pill before a single heart attack was actually prevented. And who knows how many side effects there would have been! So: statistical significance does not equal practical significance.
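The gap between statistical and practical significance is easy to demonstrate numerically. Below is a minimal sketch with made-up trial numbers (10,000 people per arm, a 2% baseline heart-attack rate, and the 1-point absolute reduction from the example above); a standard two-proportion z-test declares the effect highly significant, even though 100 people must be treated to prevent a single heart attack:

```python
import math

# Illustrative, made-up trial: 10,000 people per arm.
# Control arm: 2% have a heart attack; treated arm: 1%,
# i.e. a 1-percentage-point absolute risk reduction.
n = 10_000
p_control, p_treated = 0.02, 0.01

# Two-proportion z-test with a pooled standard error.
p_pool = (p_control + p_treated) / 2
se = math.sqrt(p_pool * (1 - p_pool) * (2 / n))
z = (p_control - p_treated) / se
print(f"z = {z:.1f}")  # well beyond 1.96, so p << 0.05: "significant!"

# Yet the practical effect is small: the NNT is
# 1 / absolute risk reduction = 1 / 0.01 = 100.
nnt = 1 / (p_control - p_treated)
print(f"NNT = {nnt:.0f}")  # 100
```

With large enough groups, almost any nonzero effect clears the p < 0.05 bar; the NNT stays honest about the actual size of the effect.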

A far better metric is the Number Needed to Treat (NNT): the number of people who must be treated for one of them to benefit. For example, the NNT of common painkillers for treating a normal headache is very close to 1, since they almost always work.

The NNT is far more powerful than "relative statistical significance". For example, although 25% of the over-45 population in the US is now prescribed statin pills, their NNT for preventing a heart attack in people without prior heart disease is 300 person-years. Described differently: if 60 of those people take statins for 5 years, 59 of them receive no benefit, yet all 60 are exposed to potential side effects.

The NNT for preventing a *fatal* heart attack in this population is in fact immeasurably high ('infinite'). For people who have already had a heart attack, the NNT for preventing death is around 80 over 5 years.

There is also the "NNT for harm", which for statins is about 10 after 5 years. In other words, of those 60 people treated for 5 years, 6 would experience a serious side effect.
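Both numbers fall out of simple arithmetic: the NNT is 1 divided by the absolute risk reduction, and the NNT for harm (often written NNH) is 1 divided by the absolute risk increase. A small sketch using the statin figures quoted above:

```python
# NNT = 1 / absolute risk reduction (ARR)
# NNH = 1 / absolute risk increase  (ARI)
# Using the statin figures from the text, over a 5-year period:
arr = 1 / 60   # 1 of 60 people treated avoids a heart attack
ari = 6 / 60   # 6 of 60 people treated get a serious side effect

nnt = 1 / arr  # treat 60 people for 5 years to prevent one heart attack
nnh = 1 / ari  # one serious side effect for every 10 people treated

print(round(nnt), round(nnh))  # 60 10
```

The same two divisions work for any trial that reports absolute event rates in the treated and untreated groups.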

The NNT and the NNT for harm are medical statistics done right; it is therefore no surprise that these numbers are exceedingly unpopular in press releases and articles.

So next time you read about a medical breakthrough, look beyond the reported statistical "significance" and see if you can find the NNT.

Some good links for further reading:


  1. Reaction from a volunteer-work colleague who is a doctor-in-training:

    PS1: what I believe is not in the blog post is that the number-needed-to-treat equals 1 divided by the absolute risk reduction?

    PS2: and that researchers/pharmaceutical companies massage data by also presenting relative risk reductions?

  2. Hi Folkert, regarding PS1: that's right, the Wikipedia article covers it well. Regarding PS2: that's the next blog post ;-) Eventually I'm working toward a summary of - possibly also worthwhile for your colleague!

    Thanks for your comment!