Authors: James Davies
In 2005, a report by the British government's Health Committee identified some of these strategies. The report's authors stated that they had heard evidence that lax regulation and oversight had allowed pharmaceutical companies to engage in a number of practices that clearly acted against the public interest. The strategies brought to their attention included:
… that clinical trials were not adequately designed, that they could be designed to show the new drug in the best light, and sometimes fail to indicate the true effects of a medicine on health outcomes relevant to the patient. We were informed of several high-profile cases of suppression of trial results. We also heard of selective publication strategies and ghost-writing. The suppression of negative clinical trial findings leads to a body of evidence that does not reflect the true risk:benefit profile of the medicine in question.
103
The former chief editor of the British Medical Journal, Dr. Richard Smith, also highlighted the proliferation of these practices in a paper titled “Medical Journals Are an Extension of the Marketing Arm of Pharmaceutical Companies.” Here he described how pharmaceutical companies have manipulated drug-trial data in ways so initially undecipherable that, as he confessed, it took “almost a quarter of a century editing for the BMJ to wake up to what was happening.”
104
Smith, like the authors of the 2005 government report, also outlined some of the strategies he had witnessed companies using to get the results they wanted.
It is not only the editor of the British Medical Journal who has spoken out. Richard Horton, editor of the Lancet, wrote in 2005 that “Journals have devolved into information-laundering operations for the pharmaceutical industry.” This position is also supported by the former editor of the New England Journal of Medicine, Marcia Angell, who lambasted the industry for becoming “primarily a marketing machine” and co-opting “every institution that might stand in its way.” In fact, the situation has so deteriorated that editors at PLoS Medicine have now openly committed to not becoming “part of the cycle of dependency … between journals and the pharmaceutical industry,” a cycle that sees journals sometimes publishing research biased in favor of company interests.
106
While these editors' complaints are reassuring, the problem is far from being solved. Companies still engage in research strategies that by most accounts massage the facts, and journals are still often publishing this research. But to allow you to truly appreciate the extent of this problem, let me first show you in a little more detail some of the unprincipled practices that cause such consternation in senior editorial ranks.
3
In May 2000, Dr. S. Charles Schulz, a psychiatrist at the very height of his powers, walks up to a podium at the annual meeting of the APA and announces a breakthrough in antipsychotic research. The breakthrough amounts to the development of a new drug that has “dramatic benefits” over its competitors. Its name is Seroquel, and because of its superiority, “patients must receive these medications first,” as he later wrote in the press release.
Two months before this commanding announcement, the company that manufactures Seroquel, AstraZeneca, was in disarray. Its latest research had revealed that the drug was far less effective than its archrival Haldol. The document containing this finding had been circulated among senior staff at the company, who were unsure what to do. An internal e-mail written at the time (released later by the company during litigation) captured the mood:
From: Tumas, John T A
Sent: Thursday, March, 23rd, 2000, 10:05AM
To: Goldstein, Jeffery JM; Murry, Michael MF
Subject: FW: Meta Analyses
Importance: High
Jeff and Mike,
Here's the analyses I got from Emma. I've also attached a message I sent to her yesterday asking for clarification.
The data don't look good. In fact, I don't know how we can get a paper out of this.
My guess is that we all (including Schulz) saw the good stuff, ie the meta analyses of responder rates that showed we were superior to placebo and haloperidol, and then thought further analyses would be supportive and that a paper was in order. What seems to be the case is that we were highlighting only the good stuff, and that our own analysis [now] support[s] the “view out there” that we are less effective than haloperidol and our competitors.
Once you have a chance to digest this, let's get together (or teleconference) and discuss where to go from here. We need to do this quickly because Schulz needs to get a draft ready for APA and he needs any additional analyses we can give him well before then.
Thanks.
107
In this e-mail, the publications manager at AstraZeneca casts about for a solution. He knows the data on Seroquel “don't look good,” yet Schulz has to present a paper on Seroquel at the APA's meeting in two months' time. If Schulz reports the negative data, the drug is presumably doomed. A way out is needed, fast.
What does the company do? How in just two months does it move from private despair over the failings of Seroquel to making a public declaration about its exceptional advantages? Does the company rapidly undertake a new study that finally secures Seroquel's superiority? Does it re-analyze the old data only to discover that the previous, negative interpretation was wrong? The company does neither. There is no time. Even if there were time, the existing data are definitive. The drug is weaker than its competitors; that, it seems, is plain for all to see.
At this point you'd probably expect the company to cut its losses and with regret publish the whole truth. But the company does not take that route. Presumably there is too much money at stake, and anyway, perhaps there's another way out. Sure, it's not an ideal route to take, or even an honest one, but given the dollars that could be lost it has to be worth a go. The company therefore opts for a strategy known in drug research as “cherry-picking.”
Cherry-picking is the name given to the process by which only some of the data from a clinical trial are “picked” for publication while the rest are ignored. The huge advantage of proceeding in this way is that you can simply “pick” the data that make the drug look effective, while leaving aside the data that don't. This was the solution AstraZeneca chose in early 2000. Rather than admit that after a year on Seroquel patients suffered more relapses and worse ratings on various symptom scales than patients on Haldol (not to mention that they also gained on average eleven pounds in weight, which put them at increased risk of diabetes),
108
the company instead homed in on one shred of positive data: the drug fared slightly better on some measures of cognitive functioning. And it was on the basis of these data that public claims were made that Seroquel has “greater efficacy than Haloperidol [Haldol],” a fact the company hoped would lead physicians “[to] better understand the dramatic benefits of newer medications like Seroquel.”
The company seemed to have favored the practice of cherry-picking for some time. Indeed, in the following internal e-mail, again released during litigation, we hear how cherry-picking had been used in a previously buried trial called Trial 15:
From: Tumas John T A
Sent: Monday, December 06, 1999, 11:45PM
To: Owens Judith J; Jones Martin AM â PHMS; Litherland Steve S; Gavin Jin JP
Cc: Holdsworth Debbie D; Togend Georgia GL; Czupryna Michael MJ; Gorman Andrew AP; Wilkie Allison AM; Murry Michael MF; Rak Ihor IW; O'Brian Shawn SP; Denerely Paul PM; Goldstein Jeffery JM; Woods Paul PM; De Vriese Geert; Shadwell Pamela PG
Subject: RE: EPS Abstracts for APA
Please allow me to join the fray.
There has been a precedent set regarding “cherry picking” of data. This would be the recent Velligan presentations of cognitive function data from Trial 15 (one of the buried trials). Thus far, I am not aware of any repercussions regarding interest in the unreported data.
That does not mean that we should continue to advocate this practice. There is growing pressure from outside the industry to provide access to all data resulting from clinical trials conducted by the industry. Thus far we have buried Trials 15, 31, 56 and are now considering COSTAR.
The larger issue is how do we face the outside world when they begin to criticize us for suppressing data. One could say that our competitors indulge in this practice. However, until now, I believe we have been looked upon by the outside world favorably with regard to ethical behavior. We must decide if we wish to continue to enjoy this distinction.
Best regards.
109
Obviously, AstraZeneca decides not to plump for the ethical option. Rather, it continues to risk its reputation and the health of patients by cherry-picking the positive data and burying the negative data in order to talk up the advantages of Seroquel over Haldol. This finally backfired in 2010, when so many people taking Seroquel were suffering such awful side effects that about 17,500 of them officially claimed the company had lied about the risks of the drug. These claims were vindicated when AstraZeneca paid out $191 million to settle a class action out of court for defrauding the public.
110
4
Cherry-picking is just one practice amid a variety of goal-moving techniques employed by pharmaceutical companies. Another of the most common strategies is something called “salami-slicing.” This is when companies not only keep negative studies hidden from professionals and the general public but also publish positive studies many times over in different forms and locations. The problem with this practice is obvious: it creates the false impression that many studies have been conducted, all showing positive results, when in actual fact all the positive studies stem from only one “data set” or piece of research.
To illustrate the highly subtle way in which salami-slicing can operate, just consider for a moment a recent study that investigated how salami-slicing can work by making use of what are called “pooled analyses.” A pooled analysis is a study that literally “pools” or bundles together the results of many separate and previous clinical trials, rather like a meta-analysis does. The crucial difference between a meta-analysis and a “pooled analysis,” however, is that a meta-analysis has to include all the relevant studies that address a particular question, whereas a pooled analysis may “pool” only those studies a company chooses to include. The danger here is obvious: a company ends up picking and choosing those studies which, when pooled together, convey a desirable outcome from the company's point of view.
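The selection effect described above can be sketched numerically. The trial effect sizes, sample sizes, and selection rule below are all invented for illustration (this is not any company's real data); the point is simply that a sample-size-weighted pool of a favorable subset can look very different from a pool of all the trials:

```python
# Hypothetical effect sizes (drug minus placebo, in arbitrary units) and
# sample sizes for eight imaginary trials of the same drug.
trials = [
    (0.40, 200), (0.35, 150), (0.30, 180),   # favorable results
    (0.05, 220), (-0.10, 160), (0.00, 250),  # unfavorable results
    (0.25, 120), (-0.05, 190),
]

def pooled_effect(subset):
    """Sample-size-weighted average effect across the chosen trials."""
    total_n = sum(n for _, n in subset)
    return sum(effect * n for effect, n in subset) / total_n

# A meta-analysis must include ALL relevant trials.
meta = pooled_effect(trials)

# A selective "pooled analysis" may include only the trials that flatter
# the drug -- here, everything with an effect of at least 0.25.
cherry = pooled_effect([t for t in trials if t[0] >= 0.25])

print(f"all eight trials: {meta:+.2f}")   # → +0.14
print(f"favorable pool:   {cherry:+.2f}") # → +0.33
```

With these made-up numbers the selective pool reports more than double the effect of the full evidence base, yet every individual trial it cites is reported accurately.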
To give you an example of this strategy at work, a recent study examined forty-three “pooled analyses” conducted by Eli Lilly for its antidepressant Cymbalta (duloxetine). It revealed that several pooled analyses were based on greatly overlapping clinical trials and presented efficacy and safety data that did not answer unique research questions, and thus appeared to qualify as “salami” publications. The authors also found that six clinical trials were used in more than twenty pooled analyses that were each published separately, meaning that data from six trials were disseminated in more than twenty different places.
111
The authors exposing these tactics declared that “such redundant publications add little to scientific understanding,” and rather “better serve the curricula vitae of researchers and, potentially, goals of drug marketers” than they do “science and patient care.”
112
An equally suspect strategy is called “washing out.” This is when a company conducts a trial comparing a placebo to an active drug, but before the trial begins, it first puts all the patients on a placebo for a specified period of time. It then removes from the prospective trial all those patients who got better on the placebo. In other words, if the placebo makes you feel better, you won't be included in the trial. This practice is justified on the grounds that anyone who responds to a placebo is either not “ill” enough or has already recovered. But this justification does not even begin to address the core problem with the washout: by not allowing people who are helped by the placebo to enter the trial, you artificially inflate the numbers of people responding to the active drug compared to the placebo. This dubious practice is common in psychiatric drug research.
113
If you still doubt that pharmaceutical research into psychiatric drugs is less honest than we would like to believe, consider this final study, published in the British Medical Journal, which compared the outcomes of studies funded by the pharmaceutical industry with those funded from other sources. Overall, the company-funded studies were found to be four times more likely to show results favorable to company drugs than were studies funded from other sources.
114