Comparisons: are they fair and reliable?

Well-informed treatment decisions require systematic reviews of fair comparisons of treatments, i.e. comparisons designed to minimise the risk of systematic and random errors. Non-systematic summaries can be misleading, and not all comparisons of treatments are fair comparisons.


Confidence Intervals – CASP

The p-value gives no direct indication of how large or important the estimated effect size is. So, confidence intervals are often preferred.

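To make that point concrete, here is a minimal Python sketch (not taken from the CASP resource, using made-up trial numbers) showing how a confidence interval conveys the size of an effect in a way a p-value alone does not:

```python
# Minimal sketch (hypothetical numbers): the same sort of p-value can arise from
# effects of very different sizes, which is why a confidence interval for the
# effect itself is often more informative.
import math

def risk_difference_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Risk difference between two trial arms, its 95% CI, and a two-sided p-value."""
    p_a, p_b = events_a / n_a, events_b / n_b
    diff = p_a - p_b
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    # Two-sided p-value from a normal approximation to the z-test
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(diff / se) / math.sqrt(2))))
    return diff, (diff - z * se, diff + z * se), p_value

# Hypothetical trial: 30/100 events with treatment vs 45/100 with control
print(risk_difference_ci(30, 100, 45, 100))
```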

Know Your Chances

This book has been shown in two randomized trials to improve people's understanding of risk in the context of health care choices.


Sunn Skepsis

This portal is intended to give you, as a patient, advice on quality criteria for health information, and access to research-based information.


Dodgy academic PR

Ben Goldacre: 58% of all press releases by academic institutions lacked relevant cautions and caveats about the methods and results reported.


The certainty of chance

Ben Goldacre reminds readers how associations may simply reflect the play of chance, and describes Deming’s illustration of this.


How Science Works

Definitions of terms that students have to know for 'How Science Works' and associated coursework, ISAs, etc.


The Systematic Review

This blog explains what a systematic review is, the steps involved in carrying one out, and how the review should be structured.


The Bias of Language

Publication of research findings in a particular language may be prompted by the nature and direction of the results.


Defining Bias

This blog explains what is meant by ‘bias’ in research, focusing particularly on attrition bias and detection bias.


Data Analysis Methods

A discussion of two approaches to data analysis in trials, 'as treated' and 'intention to treat', and some of the pros and cons of each.

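As a rough illustration, the following Python sketch (with hypothetical participant records, not drawn from the blog) shows how 'intention-to-treat' and 'as-treated' analyses can give different event rates when people cross over between arms:

```python
# Minimal sketch (hypothetical records): analysing by the group people were
# randomised to ("intention to treat") versus by the treatment they actually
# received ("as treated") can give different answers when people cross over.
participants = [
    # (randomised_to, actually_received, had_event)
    ("treatment", "treatment", False),
    ("treatment", "control",   True),   # stopped treatment, then had the event
    ("control",   "control",   True),
    ("control",   "treatment", False),  # crossed over to treatment
]

def event_rates(records, analysis):
    """Event rate per group, grouping by randomisation ('itt') or by treatment received."""
    groups = {}
    for randomised, received, event in records:
        group = randomised if analysis == "itt" else received
        n, events = groups.get(group, (0, 0))
        groups[group] = (n + 1, events + event)
    return {group: events / n for group, (n, events) in groups.items()}

print("intention to treat:", event_rates(participants, "itt"))
print("as treated:        ", event_rates(participants, "as_treated"))
```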

Defining Risk

This blog defines ‘risk’ in relation to health, and discusses some of the difficulties in applying estimates of risk to a given individual.


P Values – CASP

Statistical significance is usually assessed by appeal to a p-value, a probability that can take any value between 0 (impossible) and 1 (certain).

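The simulation below is a small illustrative sketch (not part of the CASP resource) of that description: when there is truly no difference between groups, p-values from repeated simulated comparisons scatter anywhere between 0 and 1 purely through the play of chance.

```python
# Minimal sketch (simulated data): a p-value is a probability between 0 and 1.
# With no true difference between the groups, repeated "trials" still produce
# p-values spread across that whole range by chance alone.
import math
import random
import statistics

def two_sample_p(n=50, effect=0.0):
    """Two-sided p-value from a normal-approximation z-test on two simulated samples."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(effect, 1) for _ in range(n)]
    se = math.sqrt(statistics.variance(a) / n + statistics.variance(b) / n)
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(1)
print([round(two_sample_p(), 2) for _ in range(5)])  # five "null" comparisons
```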

Common Sources of Bias

Bias (the conscious or unconscious influencing of a study and its results) can occur in different ways and renders studies less dependable.


Testing Treatments

Testing Treatments is a book to help the public understand why fair tests of treatments are needed, what they are, and how to use them.


Eureka!

Cherry-picking the results of people in sub-groups can be misleading.


Goldilocks

Cartoon and blog about how poorly performed systematic reviews and meta-analyses may misrepresent the truth.


Cherry Picking

Cherry-picking results that only support your own conclusion may mean ignoring important evidence that refutes a treatment claim.


Forest Plot Trilogy

Synthesising the results of similar but separate fair comparisons (meta-analysis) may help by yielding statistically more reliable estimates.

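For illustration, here is a minimal inverse-variance (fixed-effect) pooling sketch in Python, using invented trial results rather than anything from the trilogy itself; the pooled estimate has a smaller standard error, and hence a narrower confidence interval, than any single trial on its own.

```python
# Minimal sketch (hypothetical trial results): pooling similar fair comparisons
# with inverse-variance (fixed-effect) weighting gives a combined estimate that
# is more precise than any of the individual trials.
import math

trials = [  # (effect estimate, standard error) on a common scale, e.g. log odds ratio
    (-0.30, 0.25),
    (-0.10, 0.30),
    (-0.45, 0.40),
]

weights = [1 / se ** 2 for _, se in trials]
pooled = sum(w * est for (est, _), w in zip(trials, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
print(round(pooled, 3), [round(x, 3) for x in ci])  # narrower CI than any single trial
```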

False Precision

The use of p-values to indicate the probability of something occurring by chance may be misleading.


CEBM – Study Designs

A short article explaining the relative strengths and weaknesses of different types of study design for assessing treatment effects.


DISCERN online

A questionnaire providing a valid and reliable way of assessing the quality of written information on treatment choices.


Means vs. Medians

Keith Bower’s 3-min video explaining how means (averages) and medians can be presented misleadingly.

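A tiny Python sketch with made-up figures (not taken from the video) shows how a single extreme value pulls the mean well away from the median:

```python
# Minimal sketch (made-up data): with skewed values, such as lengths of hospital
# stay, the mean and the median can tell quite different stories.
import statistics

lengths_of_stay = [1, 2, 2, 3, 3, 4, 30]  # one very long stay skews the data
print("mean:  ", round(statistics.mean(lengths_of_stay), 1))  # about 6.4 days
print("median:", statistics.median(lengths_of_stay))          # 3 days
```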

Mega-trials

In this 5 min audio resource, Neeraj Bhala discusses systematic reviews and the impact of mega-trials.


The placebo effect

A video by NHS Choices explaining what the placebo effect is, and describing its role in medical research and the pharmaceutical industry.


Stroke

Another example of unnecessary research, yet again because the results of preceding studies had not been gathered together and analyzed, […]


Psychiatric disorders

Regrettably, research is not always well done or relevant. Take the example of a distressing condition known as tardive dyskinesia. […]


HIV infection in children

The results of good research are also making a real difference to children infected with HIV (human immunodeficiency virus), the […]


Pre-eclampsia in pregnant women

Another outstanding example of good research concerns pregnant women. Worldwide, about 600,000 women die each year of pregnancy-related complications. Most […]


Stroke

Stroke is a leading cause of death and long-term disability. The death rate is between one in six and two […]


In an ideal world

‘In an ideal world, wherever possible, we could be gathering anonymised outcome data and comparing this against medication history, making […]


Marketing-based medicine

‘Internal documents from the pharmaceutical industry suggest that the publicly available evidence base may not accurately represent the underlying data […]


Why did you start?

‘Few principles are more fundamental to the scientific and ethical validity of clinical research than that studies should address questions […]


Is one study ever enough?

The simple answer is ‘hardly ever’. Very seldom will one fair treatment comparison yield sufficiently reliable evidence on which to […]


Comparing like with like

In this sub-section: Comparisons are key (this page); Treatments with dramatic effects; Treatments with moderate but important effects. Comparisons are […]

