total – were flatly contradicted by subsequent research; and for a further seven studies, follow-up research had found that the benefits originally identified were present, but more modest than first thought.
This looks like a reasonably healthy state of affairs to me: there probably are true tales of dodgy peer reviewers delaying publication of findings they don’t like, but overall, things are routinely proven to be wrong in academic journals. Equally, the other side of this coin is not to be neglected: we often turn out to be wrong, even with giant, classic papers. So it pays to be cautious with dramatic new findings; if you blink you might miss a refutation; and there’s never an excuse to stop monitoring outcomes.
How Myths Are Made
Guardian, 8 August 2009
Much of what we cover in this column revolves around the idea of a ‘systematic review’, where the literature is surveyed methodically, following a predetermined protocol, to find all the evidence on a given question. As we saw in another column,1 for example, the Soil Association would rather have the freedom to selectively reference only the research that supports their case than confront the totality of the evidence.
Two disturbing news stories demonstrate how this rejection of best practice can also cut to the core of academia.
Firstly, the Public Library of Science in the US this week successfully used a court order to obtain a full trail of evidence showing how pharmaceutical company Wyeth employed commercial ‘ghost writers’ to produce what were apparently academic review articles, published in academic journals, under the names of academic authors. These articles, published between 1998 and 2005, stressed the benefits of taking hormones to protect against problems like heart disease, dementia and ageing skin, while playing down the risks. Stories like this, sadly, are commonplace; but to understand the full damage that these distorted reviews can do, we need to understand a little about the structure of academic knowledge.
In a formal academic paper, every claim is referenced to another academic paper: either an original research paper, describing a piece of primary research in a laboratory or on patients; or a review paper which summarises an area. This convention gives us an opportunity to study how ideas spread, and myths grow, because in theory you could trace who references what, and how, to see an entire belief system evolve from the original data. Such an analysis was published this month in the British Medical Journal, and it is quietly seminal.
Steven Greenberg from Harvard Medical School focused on an arbitrary hypothesis: the specifics are irrelevant to us, but his case study was the idea that a protein called β amyloid is produced in the skeletal muscle of patients who have a condition called ‘inclusion body myositis’ (IBM). Hundreds of papers have been written on this, with thousands of citations between them. Using network theory, Greenberg produced a map of interlocking relationships to demonstrate who cited what.
By looking at this network of citations he could identify the intersections with the most incoming and outgoing traffic. These are the papers with the greatest ‘authority’ (Google uses the same principle to rank webpages in its search results). All of the ten most influential papers expressed the view that β amyloid is produced in the muscle of patients with IBM. In reality, this is not supported by the totality of the evidence. So how did this situation arise?
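To make that principle concrete, here is a minimal sketch of authority scoring on a citation network, using Python’s networkx library and its PageRank implementation. This is an illustration of the general idea described above, not Greenberg’s actual analysis, and the paper names and citation links are invented for demonstration.

```python
# A minimal sketch of PageRank-style "authority" scoring on a citation
# network. Illustrative only: the paper IDs and citation links below
# are invented, and this is not Greenberg's actual method.
import networkx as nx

# Directed graph: an edge A -> B means "paper A cites paper B".
citations = [
    ("review_2003", "lab_positive_1"),
    ("review_2003", "lab_positive_2"),
    ("review_2005", "review_2003"),
    ("review_2005", "lab_positive_1"),
    ("primary_2006", "review_2005"),
]
G = nx.DiGraph(citations)

# A lab paper that was published but never cited stays isolated from
# the web of citation traffic, so it accumulates no authority.
G.add_node("lab_negative_1")

# PageRank treats a citation as a vote: papers cited by other
# well-cited papers score highest.
authority = nx.pagerank(G, alpha=0.85)
for paper, score in sorted(authority.items(), key=lambda kv: -kv[1]):
    print(f"{paper:16s} {score:.3f}")
```

Run with networkx installed, the well-cited positive papers float to the top of the ranking while the uncited negative paper sinks to the bottom, which is exactly the mechanism by which a citation network can come to misrepresent the underlying evidence.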
Firstly, we can trace how basic laboratory work was referenced. Four lab papers did find β amyloid in IBM patients’ muscle tissue, and these were among the top ten most influential papers. But looking at the whole network, there were also six very similar primary research papers, describing similar lab experiments, which were isolated from the interlocking web of citation traffic, meaning that they received few or no citations. These papers,