Full text of British physician Ben Goldacre’s talk titled “What doctors don’t know about the drugs they prescribe” at TED conference. In this talk, Ben explains why unreported instances of negative drug trial results are especially misleading and dangerous.
Ben Goldacre – British physician, academic, and science writer
Hi. So, this chap here, he thinks he can tell you the future. His name is Nostradamus, although here the Sun have made him look a little bit like Sean Connery.
And like most of you, I suspect, I don’t really believe that people can see into the future. I don’t believe in precognition, and every now and then, you hear that somebody has been able to predict something that happened in the future, and that’s probably because it was a fluke.
And we only hear about the flukes and about the freaks. We don’t hear about all the times that people got stuff wrong.
Now we expect that to happen with silly stories about precognition. But the problem is, we have exactly the same problem in academia and in medicine, and in this environment, it costs lives.
So firstly, thinking just about precognition, as it turns out, just last year a researcher called Daryl Bem conducted a piece of research where he found evidence of precognitive powers in undergraduate students. And this was published in a peer-reviewed academic journal and most of the people who read this just said, “Okay, well, fair enough, but I think that’s a fluke, that’s a freak, because I know that if I did a study where I found no evidence that undergraduate students had precognitive powers, it probably wouldn’t get published in a journal.”
And in fact, we know that that’s true, because several different groups of research scientists tried to replicate the findings of this precognition study, and when they submitted it to the exact same journal, the journal said, “No, we’re not interested in publishing replication. We’re not interested in your negative data.”
So this is already evidence of how, in the academic literature, we will see a biased sample of the true picture of all of the scientific studies that have been conducted.
But it doesn’t just happen in the dry academic field of psychology. It also happens in, for example, cancer research. So in March 2012, just one month ago, some researchers reported in the journal Nature how they had tried to replicate 53 different basic science studies looking at potential treatment targets in cancer.
And out of those 53 studies, they were only able to successfully replicate six. Forty-seven out of those 53 were unreplicable. And they say in their discussion that this is very likely because freaks get published. People will do lots and lots and lots of different studies, and the occasions when it works they will publish, and the ones where it doesn’t work they won’t.
And their first recommendation of how to fix this problem, because it is a problem, because it sends us all down blind alleys, their first recommendation of how to fix this problem is to make it easier to publish negative results in science, and to change the incentives so that scientists are encouraged to post more of their negative results in public.
But it doesn’t just happen in the very dry world of preclinical basic science cancer research. It also happens in the very real, flesh-and-blood world of academic medicine. So in 1980, some researchers did a study on a drug called lorcainide, and this was an anti-arrhythmic drug, a drug that suppresses abnormal heart rhythms. The idea was that after people have had a heart attack, they’re quite likely to have abnormal heart rhythms, so if we give them a drug that suppresses abnormal heart rhythms, this will increase the chances of them surviving.
Early on in its development, they did a very small trial, just under a hundred patients. Fifty patients got lorcainide, and of those patients, 10 died. Another 50 patients got a dummy placebo sugar pill with no active ingredient, and only one of them died. So they rightly regarded this drug as a failure, and its commercial development was stopped.
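(As an aside, not part of the talk: the question of whether a 10-versus-1 death split could plausibly be chance can be checked with a standard statistical test. Below is an illustrative sketch using only the Python standard library; the function name is mine, and the same result can be had from `scipy.stats.fisher_exact`.)

```python
from math import comb

def fisher_one_sided(deaths_a, n_a, deaths_b, n_b):
    """One-sided Fisher exact test: the probability of seeing at least
    `deaths_a` deaths in group A, given the row and column totals are fixed."""
    total_n = n_a + n_b
    total_deaths = deaths_a + deaths_b
    denom = comb(total_n, total_deaths)
    # Sum hypergeometric probabilities for outcomes as or more extreme.
    p = 0.0
    for k in range(deaths_a, min(n_a, total_deaths) + 1):
        p += comb(n_a, k) * comb(n_b, total_deaths - k) / denom
    return p

# Figures from the 1980 lorcainide trial described above:
# 10 of 50 deaths on the drug vs. 1 of 50 on placebo.
p = fisher_one_sided(10, 50, 1, 50)
print(f"one-sided p = {p:.4f}")  # well below the conventional 0.05 threshold
```

Under this test, a difference that large is very unlikely to be a fluke in a single trial, which is exactly why the unpublished result mattered.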
And because its commercial development was stopped, this trial was never published.
Unfortunately, over the course of the next five to 10 years, other companies had the same idea about drugs that would prevent arrhythmias in people who have had heart attacks. These drugs were brought to market. They were prescribed very widely, because heart attacks are a very common thing.
And it took so long for us to find out that these drugs also caused an increased rate of death that before we detected that safety signal, over 100,000 people died unnecessarily in America from the prescription of anti-arrhythmic drugs.
Now actually, in 1993, the researchers who did that 1980 study, that early study, published a mea culpa, an apology to the scientific community, in which they said, “When we carried out our study in 1980, we thought that the increased death rate that occurred in the lorcainide group was an effect of chance.”
The development of lorcainide was abandoned for commercial reasons, and this study was never published; it’s now a good example of publication bias. That’s the technical term for the phenomenon where unflattering data gets lost, goes unpublished, is left missing in action.