- Does treatment make an important difference? (BMJ 2015)
- How to use the RevMan calculator for missing Standard Deviations for continuous outcomes in parallel arm trials
- Network Meta-analysis comparing treatments to prevent asthma attacks in adults (BMJ 2014)
- Inhaled corticosteroids in COPD – quantifying risks and benefits (Thorax 2013)
- What is heterogeneity and is it important? (BMJ 2011)
- Odds ratios explained (Update Article 2006)
- Odds versus Risks (Update Article 2006)
- Retrospective studies (Update Article 2005)
- Study Design (Update Article 2005)
- Absolute treatment effects from systematic reviews (Update Article 2005)
- Relatively Absolute (Update Article 2005)
- P values and confidence intervals (Update Article 2005)
- Stats made easy (Update Article 2005)
- Understanding Statistics BMJ Learning 2004
- Systematic Reviews and Meta-analyses (Prescriber 2003)
- Subgroups compared (BMJ 2003)
- Measuring the costs and effectiveness of treatment (Prescriber 2003)
- Combining the results from Clinical Trials (Pulse Article 2001)
- The perils and pitfalls of sub-group analysis (Pulse Article 2001)
- Antibiotics ‘no use’ for acute cough: an example of biased reporting (Pulse Article 1999)
The first question that we can ask when looking at the results of a clinical trial is whether the results might just be due to the play of chance (with no true difference between the treatment and control). If the likelihood that this is a chance finding is less than 5% (P < 0.05), the results are regarded as statistically significant. But is the benefit (or harm) from the treatment big enough to be clinically important?
We cannot tell how much difference a treatment makes from the P value! We need to compare the size of the treatment effect with a measure that has been shown to be big enough to make a difference to patients. This is known as the Minimum Important Difference (MID).
It may be tempting to conclude that a treatment that makes an average difference of less than the MID is not clinically worthwhile. However, this is not necessarily so, and if you want to find out more please have a look at the paper by Charlotta Karner and myself published in the BMJ in November 2015.
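The point can be sketched numerically. Here is a minimal Python sketch using made-up numbers (not taken from the paper) to show how a mean difference below the MID can still have a confidence interval that reaches clinically important territory:

```python
# Hypothetical numbers for illustration only (not from the BMJ paper).
mid = 0.5                                # minimum important difference (arbitrary units)
mean_diff, ci_low, ci_high = 0.3, 0.05, 0.55   # mean difference and its 95% CI

below_mid_on_average = mean_diff < mid   # the average effect is below the MID...
importance_ruled_out = ci_high < mid     # ...but importance is only ruled out if
                                         # the whole CI lies below the MID

print(below_mid_on_average)   # True
print(importance_ruled_out)   # False: the CI still reaches beyond the MID
```

So a mean difference below the MID does not, on its own, rule out a clinically important effect for some patients.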
Cates C, Karner C. Clinical importance cannot be ruled out using mean difference alone. BMJ 2015;351:h5496. doi:10.1136/bmj.h5496
There is an interesting example of a network meta-analysis that was published by Loymans and colleagues in the BMJ in May 2014 (Loymans RJB, Gemperli A, Cohen J, Rubinstein SM, Sterk PJ, Reddel HK, et al. Comparative effectiveness of long term drug treatment strategies to prevent asthma exacerbations: network meta-analysis. BMJ 2014;348:g3009). http://www.bmj.com/content/348/bmj.g3009
In an accompanying editorial at http://www.bmj.com/content/348/bmj.g3148, I have outlined how the network meta-analysis combines evidence from the included randomised trials of maintenance treatments for adults with asthma.
The strength (and weakness) of the network approach is that treatment effects estimated from the direct randomised comparisons within each trial are combined with indirect non-randomised comparisons of the treatment effects between the trials. Although the authors tested the consistency of the direct and indirect comparisons, the power to find inconsistency is low when there is only a small amount of direct evidence. This is pointed out by the authors and demonstrated by the wide confidence intervals in Figures 5 and 6. Note that these figures are on a log scale, so 2 on the scale represents a Rate Ratio (RR) of e², which is equivalent to RR = 7.39!
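To translate a value on the natural-log scale back into a rate ratio, simply exponentiate it; in Python:

```python
import math

# Figures 5 and 6 are on the natural-log scale, so an axis value of 2
# corresponds to a rate ratio of e^2.
log_rr = 2.0
rate_ratio = math.exp(log_rr)
print(round(rate_ratio, 2))  # 7.39
```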
The problem is that indirect comparisons are subject to potential confounding by differences in the participants, outcome measurements and trial designs between the individual trials. The assumption is that the patients are similar enough that they could reasonably have been found with equal likelihood in any of the trials. The technical term for this is the “transitivity assumption”.
One way to assess whether the network meets this transitivity assumption is to look at the asthma exacerbation (attack) rates across both arms of each of the trials to see if these are broadly similar. The Loymans network is very well reported and includes full documentation of the exacerbation rates in each trial in supplementary table S1. This shows considerable discrepancy between the attack rates in the different trials.
The variation in the frequency of the asthma attacks between studies causes two problems. Firstly the results cannot readily be applied in clinical practice, as the treatments have been compared in a wide variety of different severities of asthma. Secondly, since the frequency of asthma attacks is likely to be an important effect modifier when comparing the effects of the different maintenance treatments, the indirect comparisons may be confounded by this.
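As a rough illustration of this kind of check (using hypothetical rates, not the actual figures from supplementary table S1), one could compare the spread of exacerbation rates across the trials:

```python
# Hypothetical annual exacerbation rates (events per person-year) for
# three trials; the real figures are in supplementary table S1.
trial_rates = {"Trial A": 0.15, "Trial B": 0.45, "Trial C": 0.90}

spread = max(trial_rates.values()) / min(trial_rates.values())
print(round(spread, 1))  # 6.0
```

A six-fold spread of this sort would suggest that the trial populations differ in ways that could confound the indirect comparisons.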
One example of the possible impact of such confounding is explored further in my editorial, and may explain why combination with lower dose of inhaled corticosteroids (ICS) is ranked in the network as likely to be more effective than combination therapy with higher dose ICS. In general the trials using higher dose ICS were on people who had more severe asthma and more frequent attacks (as you might expect), so this is not a fair indirect comparison of the higher and lower dose combination therapy. There was very little direct evidence comparing the high and low dose ICS combination treatments (i.e. randomised within the same trial).
Finally, it is also not too surprising that current best practice comes out as least likely to lead to withdrawal of treatment. The authors suggest that this outcome gives an indication of the safety of the asthma treatments. However, it may simply be that all the other treatments required some change for the participants. When treatments are changed we would expect some people to dislike the new inhaler for one reason or another. Again this is not really a fair comparison.
It is also a shame that the safety of the different treatments is discussed in the text without much mention of serious adverse events, which are relegated to a supplementary table!
For a more in-depth discussion of the methodology of the network approach, you might find the following helpful: Cipriani A, Higgins JPT, Geddes JR, Salanti G. Conceptual and technical challenges in network meta-analysis. Ann Intern Med 2013;159:130-7.
Calculating Numbers Needed to Treat (NNT) may not be as straightforward as you might think! A good example of how the calculation can be quite problematic concerns the treatment of Chronic Obstructive Pulmonary Disease (COPD) with inhaled corticosteroids (ICS). The evidence from randomised trials shows that treatment with ICS increases the risk of pneumonia, whilst reducing the risk of exacerbations of COPD. The occurrence of exacerbations in the trials was more common than episodes of pneumonia, and this difference in the rate of the two conditions means that the NNT early in the trials favours treatment with ICS, but the tables appear to turn the other way after a couple of years.
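The arithmetic behind this can be sketched with illustrative (made-up) annual risks, assuming constant yearly event rates. The NNT for benefit (fewer exacerbations) and the NNH for harm (more pneumonia) then shift as follow-up lengthens:

```python
def cumulative_risk(annual_risk, years):
    """Risk of at least one event over `years`, assuming independent years."""
    return 1 - (1 - annual_risk) ** years

for years in (1, 2, 3):
    # Benefit: exacerbations (made-up risks: control 40%/year vs ICS 30%/year)
    arr = cumulative_risk(0.40, years) - cumulative_risk(0.30, years)
    # Harm: pneumonia (made-up risks: control 3%/year vs ICS 6%/year)
    ari = cumulative_risk(0.06, years) - cumulative_risk(0.03, years)
    nnt_benefit = 1 / arr   # number needed to treat to prevent one exacerbation
    nnh_harm = 1 / ari      # number needed to harm (one extra pneumonia)
    print(years, round(nnt_benefit), round(nnh_harm))
# year 1: NNT 10 vs NNH 33; by year 3 the NNH has fallen to 12,
# so the balance of benefit and harm narrows as follow-up lengthens.
```

With these assumed rates the NNH falls much faster than the NNT, which is the same pattern that makes the early NNT favour ICS while the longer-term picture looks less favourable.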
To find out more have a look at the online-ahead-of-print version of a Thorax Editorial that I have written about this (due out in print in early 2013).
There is no heterogeneity between steroids and antihistamines, as I argue in my response to this BMJ article in 2011:
What is an Odds ratio and how is it different from a Risk ratio?
How Odds are different from Risks and why they are used in statistical analysis
The second in a series on study design, this article looks at retrospective studies
An introduction to different types of study design