Next up, you could compare your drug against a useless control. Many people would argue, for example, that you should never compare your drug with placebo, because it proves nothing of clinical value. In the real world, nobody cares if your drug is better than a sugar pill; people care only if it is better than the best currently available treatment. But you’ve already spent hundreds of millions of dollars bringing your drug to market, so stuff that: do lots of placebo-controlled trials, and make a big fuss about them, because they practically guarantee some positive data. Again, this is universal, because almost all drugs will be compared against placebo at some stage in their lives, and drug reps, the people employed by big pharma to bamboozle doctors, love the unambiguous positivity of the graphs these studies can produce.
Then things get more interesting. If you do have to compare your drug with one produced by a competitor—to save face or because a regulator demands it—you could try a sneaky underhand trick: use an inadequate dose of the competing drug, so that patients on it don’t do very well; or give a very high dose of the competing drug, so that patients experience lots of side effects; or give the competing drug in the wrong way (perhaps orally when it should be intravenous, and hope most readers don’t notice); or you could increase the dose of the competing drug much too quickly, so that the patients taking it get worse side effects. Your drug will shine by comparison.
You might think no such thing could ever happen. If you follow the references in the back, you will find studies in which patients were given really rather high doses of old-fashioned antipsychotic medication (which made the new-generation drugs look as if they were better in terms of side effects) and studies using doses of SSRI antidepressants that some might consider unusual, to name just a couple of examples. I know. It’s slightly incredible.
Of course, another trick you could pull with side effects is simply not to ask about them, or rather—since you have to be sneaky in this field—you could be careful about how you ask. Here is an example. SSRI antidepressant drugs cause sexual side effects, fairly commonly, including anorgasmia. We should be clear (I’m trying to phrase this as neutrally as possible): I really enjoy the sensation of orgasm. It’s important to me, and everything I experience in the world tells me that this sensation is important to other people too. Wars have been fought, essentially, for the sensation of orgasm. There are evolutionary psychologists who would try to persuade you that the entirety of human culture and language is driven, in large part, by the pursuit of the sensation of orgasm. Losing it seems like an important side effect to ask about.
And yet various studies have shown that the reported prevalence of anorgasmia in patients taking SSRI drugs varies between 2 percent and 73 percent, depending primarily on how you ask: a casual, open-ended question about side effects, for example, or a careful and detailed inquiry. One three-thousand-subject review on SSRIs simply did not list any sexual side effects on its twenty-three–item side effect table. Twenty-three other things were more important, according to the researchers, than losing the sensation of orgasm. I have read them. They are not.
But back to the main outcomes. And here is a good trick: instead of a real-world outcome, like death or pain, you could always use a “surrogate outcome,” which is easier to attain. If your drug is supposed to reduce cholesterol and so prevent cardiac deaths, for example, don’t measure cardiac deaths; measure reduced cholesterol instead. That’s much easier to achieve than a reduction in cardiac deaths, and the trial will be cheaper and quicker to do, so your result will be cheaper and more positive. Result!
Now you’ve done your trial, and despite your best efforts, things have come out negative. What can you do? Well, if your trial has been good overall, but has thrown out a few negative results, you could try an old trick: don’t draw attention to the disappointing data by putting it on a graph. Mention it briefly in the text, and ignore it when drawing your conclusions. (I’m so good at this I scare myself. Comes from reading too many rubbish trials.)
If your results are completely negative, don’t publish them at all, or publish them only after a long delay. This is exactly what the drug companies did with the data on SSRI antidepressants: they hid the data suggesting they might be dangerous, and they buried the data showing them to perform no better than placebo. If you’re really clever and have money to burn, then after you get disappointing data, you could do some more trials with the same protocol, in the hope that they will be positive; then try to bundle all the data up together, so that your negative data is swallowed up by some mediocre positive results.
Or you could get really serious, and start to manipulate the statistics. For two pages only, this book will now get quite nerdy. I understand if you want to skip it, but know that it is here for the doctors who bought the book to laugh at homeopaths. Here are the classic tricks to play in your statistical analysis to make sure your trial has a positive result.
Ignore the Protocol Entirely
Always assume that any correlation proves causation. Throw all your data into a spreadsheet program, and report—as significant—any relationship between anything and everything if it helps your case. If you measure enough, some things are bound to be positive just by sheer luck.
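To see just how cheap this trick is, here is a toy simulation in Python (my own sketch, with entirely invented data, not taken from any real trial): a drug that does nothing at all, tested against placebo on forty different outcomes, will still come out “significant” on a couple of them by luck alone.

```python
# A toy simulation (invented data, no real trial): a drug with no effect at all,
# tested against placebo on forty separate outcomes, still crosses p < 0.05
# on some of them by sheer luck.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_patients, n_outcomes = 100, 40      # hypothetical trial size and outcome count

drug = rng.normal(size=(n_patients, n_outcomes))      # drug group: pure noise
placebo = rng.normal(size=(n_patients, n_outcomes))   # placebo group: pure noise

p_values = ttest_ind(drug, placebo, axis=0).pvalue    # one t-test per outcome
lucky = (p_values < 0.05).sum()
print(f"'significant' outcomes with a useless drug: {lucky} of {n_outcomes}")
# On average about two of the forty outcomes come out 'positive' by chance alone.
```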
Play with the Baseline
Sometimes, when you start a trial, quite by chance the treatment group is already doing better than the placebo group. If so, leave it like that. If, on the other hand, the placebo group is already doing better than the treatment group at the start, adjust for the baseline in your analysis.
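Here is a hypothetical sketch of the game in numbers (invented figures, no real trial): the drug does nothing, but chance hands its group a healthier starting point, so the unadjusted comparison of final scores flatters the drug while the baseline-adjusted one does not. You simply report whichever suits.

```python
# Hypothetical numbers, purely illustrative: the drug does nothing, but by chance
# its group starts off healthier. Comparing final scores inherits that head start;
# adjusting for baseline removes it. The trick is to report whichever flatters.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
n = 80
baseline_drug = rng.normal(52, 10, n)       # drug group happens to start better off
baseline_placebo = rng.normal(48, 10, n)    # placebo group happens to start worse off

# No true treatment effect: final score is just baseline plus noise.
final_drug = baseline_drug + rng.normal(0, 5, n)
final_placebo = baseline_placebo + rng.normal(0, 5, n)

p_unadjusted = ttest_ind(final_drug, final_placebo).pvalue        # inherits the head start
p_adjusted = ttest_ind(final_drug - baseline_drug,
                       final_placebo - baseline_placebo).pvalue   # head start removed
print(f"unadjusted p = {p_unadjusted:.3f}, baseline-adjusted p = {p_adjusted:.3f}")
```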
Ignore Dropouts
People who drop out of trials are statistically much more likely to have done badly and much more likely to have had side effects. They will only make your drug look bad. So ignore them: make no attempt to chase them up, do not include them in your final analysis.
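A quick back-of-the-envelope illustration, with made-up numbers, shows what is at stake: a completers-only analysis, which quietly forgets the dropouts, can make the response rate look far healthier than an intention-to-treat count that keeps every randomized patient in the denominator.

```python
# Hypothetical counts, purely illustrative: 200 patients randomized to the drug,
# 60 of whom dropped out (disproportionately the ones it failed or harmed).
randomized = 200
dropped_out = 60
responders = 84                      # responders among the 140 who completed

completers_only = responders / (randomized - dropped_out)    # ignore the dropouts
intention_to_treat = responders / randomized                 # count them, conservatively, as failures

print(f"completers-only response rate:    {completers_only:.0%}")     # 60%
print(f"intention-to-treat response rate: {intention_to_treat:.0%}")  # 42%
```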
Clean Up the Data
Look at your graphs. There will be some anomalous “outliers,” or points that lie a long way from the others. If they are making your drug look bad, just delete them. But if they are helping your drug look good, even if they seem to be spurious results, leave them in.
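A toy example, with invented scores, makes the point: delete only the fluke that flatters placebo, keep the fluke that flatters your drug, and the averages drift obligingly your way.

```python
# Invented numbers, purely illustrative: higher score means more improvement.
import numpy as np

drug = np.array([5, 6, 5, 7, 6, 5, 6, 22])      # 22 is a fluke, but it helps: keep it
placebo = np.array([5, 6, 6, 5, 7, 6, 5, 21])   # 21 is a fluke that helps placebo: call it "spurious"

print(f"honest means:   drug {drug.mean():.1f}, placebo {placebo.mean():.1f}")
print(f"after cleaning: drug {drug.mean():.1f}, placebo {placebo[:-1].mean():.1f}")
# Honest: roughly 7.8 versus 7.6. After selective "cleaning": 7.8 versus 5.7.
```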
“The Best of Five…No…Seven…No…Nine!”
If the difference between your drug and placebo becomes significant four and a half months into a six-month trial, stop the trial immediately and start writing up the results; things might get less impressive if you carry on. Alternatively, if at six months the results are “nearly significant,” extend the trial by another three months.
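The damage is easy to simulate. In the sketch below (pure noise, no real drug effect, my own toy example), peeking at the accumulating data every ten patients and stopping the moment p dips below 0.05 pushes the false-positive rate to several times the advertised 5 percent.

```python
# A toy simulation (no real drug, pure noise in both arms): peek at the data every
# ten patients per arm and stop the moment p < 0.05, and the false-positive rate
# balloons well past the nominal 5 percent.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(2)
n_simulated_trials, max_n = 2000, 120
peeks = range(30, max_n + 1, 10)          # interim looks at 30, 40, ... 120 patients per arm

false_positives = 0
for _ in range(n_simulated_trials):
    drug = rng.normal(size=max_n)         # no real effect in either arm
    placebo = rng.normal(size=max_n)
    if any(ttest_ind(drug[:n], placebo[:n]).pvalue < 0.05 for n in peeks):
        false_positives += 1              # "stop the trial and start writing up"

print(f"false-positive rate with repeated peeking: {false_positives / n_simulated_trials:.1%}")
# Typically comes out in the region of 15-20 percent rather than 5.
```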
Torture the Data
If your results are bad, ask the computer to go back and see if any particular subgroups behaved differently. You might find that your drug works very well in Chinese women aged fifty-two to sixty-one. “Torture the data, and it will confess to anything,” as they say at Guantánamo Bay.
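A toy simulation, with invented patients and covariates, shows how obligingly the data confesses: slice a trial of a completely useless drug into forty subgroups and, on average, a couple of them will come out “significant” by luck alone.

```python
# A toy simulation (invented patients and covariates): the drug does nothing, but
# slice the trial into 2 x 5 x 4 = 40 subgroups and some slice will usually confess.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(3)
n = 2000
outcome = rng.normal(size=n)                      # no true drug effect anywhere
treated = rng.integers(0, 2, n).astype(bool)      # randomized 50/50 to drug or placebo
sex = rng.integers(0, 2, n)
age_band = rng.integers(0, 5, n)
ethnicity = rng.integers(0, 4, n)

confessions = []
for s in range(2):
    for a in range(5):
        for e in range(4):
            in_subgroup = (sex == s) & (age_band == a) & (ethnicity == e)
            p = ttest_ind(outcome[in_subgroup & treated],
                          outcome[in_subgroup & ~treated]).pvalue
            if p < 0.05:
                confessions.append((s, a, e, round(p, 3)))

print(f"{len(confessions)} of 40 subgroups 'respond' to a drug that does nothing")
# On average about two subgroups cross p < 0.05 by luck alone.
```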
Try Every Button on the Computer
If you’re really desperate and analyzing your data the way you planned does not give you the result you wanted, just run the figures through a wide selection of other statistical tests, even if they are entirely inappropriate, at random.
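Here is a hypothetical sketch of the button-mashing: one null comparison, run through a grab bag of tests whether they are appropriate or not, with only the most flattering p-value making it into the paper.

```python
# A hypothetical sketch: one null comparison, half a dozen tests, and only the
# most flattering p-value gets reported.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
drug = rng.normal(size=60)          # no real difference between the groups
placebo = rng.normal(size=60)

p_values = {
    "Student t-test": stats.ttest_ind(drug, placebo).pvalue,
    "Welch t-test": stats.ttest_ind(drug, placebo, equal_var=False).pvalue,
    "Mann-Whitney U": stats.mannwhitneyu(drug, placebo).pvalue,
    "Kolmogorov-Smirnov": stats.ks_2samp(drug, placebo).pvalue,
    "Mood's median test": stats.median_test(drug, placebo)[1],
}
best = min(p_values, key=p_values.get)
print(f"most flattering: {best}, p = {p_values[best]:.3f}")
```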
And when you’re finished, the most important thing, of course, is to publish wisely. If you have a good trial, publish it in the biggest journal you can possibly manage. If you have a positive trial, but it was a completely unfair test, which will be obvious to everyone, then put it in an obscure journal (published, written, and edited entirely by the industry). Remember, the tricks we have just described hide nothing: they will be obvious to anyone who reads your paper attentively, so it’s in your interest to make sure it isn’t read beyond the abstract. Finally, if your finding is really embarrassing, hide it away somewhere, and cite “data on file.” Nobody will know the methods, and it will be noticed only if someone comes pestering you for the data to do a systematic review. Hopefully, that won’t be for ages.
How Can This Be Possible?
When I explain this abuse of research to friends from outside medicine and academia, they are rightly amazed. “How can this be possible?” they say. Well, first, much bad research comes down to incompetence. Many of the methodological errors described above can come about through wishful thinking as much as through mendacity. But is it possible to prove foul play?
On an individual level, it is sometimes quite hard to show that a trial has been deliberately rigged to give the right answer for its sponsors. Overall, however, the picture emerges very clearly. The issue has been studied so frequently that in 2003 a systematic review found thirty separate studies looking at whether funding in various groups of trials affected the findings. Overall, studies funded by a pharmaceutical company were found to be four times more likely to give results that were favorable to the company than were independent studies.
One review of bias tells a particularly Alice in Wonderland story. Fifty-six different trials comparing painkillers like ibuprofen, diclofenac, and so on were found. People often invent new versions of these drugs in the hope that they might have fewer side effects or be stronger (or stay in patent and make money). In every single trial the sponsoring manufacturer’s drug came out as better than, or equal to, the others in the trial. On not one occasion did the manufacturer’s drug come out worse. Philosophers and mathematicians talk about “transitivity”: if A is better than B, and B is better than C, then C cannot be better than A. To put it bluntly, this review of fifty-six trials exposed a singular absurdity: all these drugs were better than one another.
But there is a surprise waiting around the corner. Astonishingly, when the methodological flaws in studies are examined, it seems that industry-funded trials actually turn out to have better research methods, on average, than independent trials. The most that could be pinned on the drug companies were some fairly trivial howlers, things like using inadequate doses of the competitor’s drug (as we said above) or making claims in the conclusions section of the paper that exaggerated a positive finding. But these, at least, were transparent flaws; you only had to read the trial to see that the researchers had given a miserly dose of a painkiller, and you should always read the methods and results section of a trial to decide what its findings are, because the discussion and conclusion pages at the end are like the comment pages in a newspaper. They’re not where you get your news from.
How can we explain, then, the apparent fact that industry-funded trials are so often so glowing? How can all the drugs simultaneously be better than all of the others? The crucial kludge may happen after the trial is finished.
Publication Bias and Suppressing Negative Results
Publication bias is a very interesting and very human phenomenon. For a number of reasons, positive trials are more likely to get published than negative ones. It’s easy enough to understand, if you put yourself in the shoes of the researcher. First, when you get a negative result, it feels as if it’s all been a bit of a waste of time. It’s easy to convince yourself that you found nothing when in fact you discovered a very useful piece of information: that the thing you were testing doesn’t work.
Rightly or wrongly, finding out that something doesn’t work probably isn’t going to win you a Nobel Prize—there’s no justice in the world—so you might feel unmotivated about the project, or prioritize other projects ahead of writing up and submitting your negative finding to an academic journal, and so the data just sits, rotting, in your bottom drawer. Months pass. You get a new grant. The guilt niggles occasionally, but Monday’s your day in the clinic, so Tuesday’s the beginning of the week really, and there’s the departmental meeting on Wednesday, so Thursday’s the only day you can get any proper work done, because Friday’s your teaching day, and before you know it, a year has passed, your supervisor retires, the new guy doesn’t even know the experiment ever happened, and the negative trial data is forgotten forever, unpublished. If you are smiling in recognition at this paragraph, then you are a very bad person.
Even if you do get around to writing up your negative finding, it’s hardly news. You’re probably not going to get it into a big-name journal, unless it was a massive trial on something everybody thought was really whizbang until your negative trial came along and blew it out of the water, so as well as this being a good reason for you not bothering, it means the whole process will be heinously delayed: it can take a year for some of the slacker journals to reject a paper. Every time you submit to a different journal you might have to reformat the references (hours of tedium). If you aim too high and get a few rejections, it could be years until your paper comes out, even if you are being diligent; that’s years of people not knowing about your study.