Identifying all the relevant evidence for systematic reviews


Identifying all the relevant evidence for systematic reviews – irrespective of the language or format of the relevant reports – always presents a substantial challenge, not least because some relevant evidence has never been made public.

Under-reporting stems principally from researchers not writing up or submitting reports of their research for publication because they were disappointed with the results. Pharmaceutical companies suppress studies that do not favour their products. Journals, too, have shown bias by rejecting submitted reports whose results they deem insufficiently ‘exciting’. [3]

Biased under-reporting of research is unscientific and unethical, and there is now widespread acceptance that this is a serious problem. In particular, people trying to decide which treatments to use can be misled: studies that have yielded ‘disappointing’ or ‘negative’ results are less likely to be reported than others, whereas studies with exciting results are more likely to be ‘over-reported’.

The extent of under-reporting is astonishing: at least half of all clinical trials are never fully reported. This biased under-reporting applies to large as well as small clinical trials. One measure taken to tackle the problem has been to establish arrangements for registering trials at inception and to encourage researchers to publish the protocols for their studies. [3]

Biased under-reporting of research can even be lethal. To their great credit, some British researchers decided in 1993 to report the results of a clinical trial that had been done thirteen years earlier. It concerned a new drug for reducing heart rhythm abnormalities in patients experiencing heart attacks. Nine patients taking the drug had died, whereas only one had died in the comparison group.

‘When we carried out our study in 1980,’ they wrote, ‘we thought that the increased death rate in the drug group was an effect of chance… The development of the drug [lorcainide] was abandoned for commercial reasons, and this study was therefore never published; it is now a good example of “publication bias”. The results described here…might have provided an early warning of trouble ahead’. [4]

The ‘trouble ahead’ to which they were referring was that, at the peak of their use, drugs similar to the one they had tested were causing tens of thousands of premature deaths every year in the USA alone. [5]