pro-industry results. Each took a slightly different approach to finding research papers, and both found that industry-funded trials were, overall, about four times more likely to report positive results. 4 A further review in 2007 looked at the new studies that had been published in the four years after these two earlier reviews: it found twenty more pieces of work, and all but two showed that industry-sponsored trials were more likely to report flattering results. 5
I am setting out this evidence at length because I want to be absolutely clear that there is no doubt on the issue. Industry-sponsored trials give favourable results, and that is not my opinion, or a hunch from the occasional passing study. This is a very well-documented problem, and it has been researched extensively, without anybody stepping up to take effective action, as we shall see.
There is one last study I’d like to tell you about. It turns out that this pattern of industry-funded trials being vastly more likely to give positive results persists even when you move away from published academic papers, and look instead at trial reports from academic conferences, where data often appears for the first time (in fact, as we shall see, sometimes trial results only appear at an academic conference, with very little information on how the study was conducted).
Fries and Krishnan studied all the research abstracts presented at the 2001 American College of Rheumatology meetings which reported any kind of trial, and acknowledged industry sponsorship, in order to find out what proportion had results that favoured the sponsor’s drug. There is a small punch-line coming, and to understand it we need to cover a little of what an academic paper looks like. In general, the results section is extensive: the raw numbers are given for each outcome, and for each possible causal factor, but not just as raw figures. The ‘ranges’ are given, subgroups are perhaps explored, statistical tests are conducted, and each detail of the result is described in table form, and in shorter narrative form in the text, explaining the most important results. This lengthy process is usually spread over several pages.
In Fries and Krishnan [2004] this level of detail was unnecessary. The results section is a single, simple, and – I like to imagine – fairly passive-aggressive sentence:
The results from every RCT (45 out of 45) favored the drug of the sponsor.
This extreme finding has a very interesting side effect, for those interested in time-saving shortcuts. Since every industry-sponsored trial had a positive result, that’s all you’d need to know about a piece of work to predict its outcome: if it was funded by industry, you could know with absolute certainty that the trial found the drug was great.
How does this happen? How do industry-sponsored trials almost always manage to get a positive result? It is, as far as anyone can be certain, a combination of factors. Sometimes trials are flawed by design. You can compare your new drug with something you know to be rubbish – an existing drug at an inadequate dose, perhaps, or a placebo sugar pill that does almost nothing. You can choose your patients very carefully, so they are more likely to get better on your treatment. You can peek at the results halfway through, and stop your trial early if they look good (which is – for interesting reasons we shall discuss – statistical poison). And so on.
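The statistics of stopping early are discussed properly later, but as a rough illustration only, here is a minimal simulation sketch (in Python, mine rather than the book's) of why peeking at interim results is statistical poison. Even when the drug does nothing at all, checking for significance at several interim points and stopping at the first 'win' roughly doubles the false-positive rate compared with a single planned analysis. The sample sizes and the schedule of looks are arbitrary choices made for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

N_SIMS = 5000                 # simulated trials, each with NO true drug effect
N_PER_ARM = 200               # patients per arm at full enrolment
LOOKS = [50, 100, 150, 200]   # interim analyses after this many patients per arm

false_pos_fixed = 0    # "significant" at the single planned final analysis
false_pos_peeking = 0  # "significant" at ANY of the interim looks

for _ in range(N_SIMS):
    # Outcomes in both arms are drawn from the same distribution:
    # any "significant" difference is pure chance.
    drug = rng.normal(0, 1, N_PER_ARM)
    control = rng.normal(0, 1, N_PER_ARM)

    # Fixed design: one test at the end, as planned.
    _, p_final = stats.ttest_ind(drug, control)
    if p_final < 0.05:
        false_pos_fixed += 1

    # Peeking design: test at every look, stop at the first "win".
    for n in LOOKS:
        _, p = stats.ttest_ind(drug[:n], control[:n])
        if p < 0.05:
            false_pos_peeking += 1
            break

print(f"false-positive rate, fixed design: {false_pos_fixed / N_SIMS:.3f}")
print(f"false-positive rate, with peeking: {false_pos_peeking / N_SIMS:.3f}")
# Typical output: about 0.05 for the fixed design, around 0.11 with four looks.
```

The effect grows with the number of looks: each extra peek is another chance for chance alone to dip below the p < 0.05 line, which is why trials that genuinely need interim analyses use corrected stopping rules rather than the naive check sketched here.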
But before we get to these fascinating methodological twists and quirks, these nudges and bumps that stop a trial from being a fair test of whether a treatment works or not, there is something very much simpler at hand.
Sometimes drug companies conduct lots of trials, and when they see that the results are unflattering, they simply fail to publish them. This is not a new problem, and it’s not limited to medicine. In fact, this issue of negative results that go missing in action cuts into almost every corner of science.