Network Meta-analysis comparing treatments to prevent asthma attacks in adults (BMJ 2014)
An interesting example of a network meta-analysis was published by Loymans and colleagues in the BMJ in May 2014 (Loymans RJB, Gemperli A, Cohen J, Rubinstein SM, Sterk PJ, Reddel HK, et al. Comparative effectiveness of long term drug treatment strategies to prevent asthma exacerbations: network meta-analysis. BMJ 2014; 348: g3009). http://www.bmj.com/content/348/bmj.g3009
In an accompanying editorial at http://www.bmj.com/content/348/bmj.g3148, I have outlined how the network meta-analysis combines evidence from the included randomised trials of maintenance treatments for adults with asthma.
The strength (and weakness) of the network approach is that treatment effects estimated from the direct randomised comparisons within each trial are combined with indirect non-randomised comparisons of the treatment effects between the trials. Although the authors tested the consistency of the direct and indirect comparisons, the power to find inconsistency is low when there is only a small amount of direct evidence. This is pointed out by the authors and demonstrated by the wide confidence intervals in Figures 5 and 6. Note that these figures are on a log scale, so 2 on the scale represents a Rate Ratio (RR) of e², which is equivalent to RR = 7.39!
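The back-transformation from the log scale is easy to verify; a one-line sketch (only the axis value of 2 from the figures is taken as given):

```python
import math

# Figures 5 and 6 report effects on the natural log scale, so a value
# of 2 on the axis back-transforms to a rate ratio of e^2.
log_rr = 2.0
rate_ratio = math.exp(log_rr)
print(round(rate_ratio, 2))  # 7.39
```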
The problem is that indirect comparisons are subject to potential confounding by differences in the participants, outcome measurements and trial designs between the individual trials. The underlying assumption is that the patients are similar enough that they could plausibly have been enrolled in any of the trials. The technical term for this is the “transitivity assumption”.
One way to assess whether the network meets this transitivity assumption is to look at the asthma exacerbation (attack) rates across both arms of each of the trials to see if these are broadly similar. The Loymans network is very well reported and includes full documentation of the exacerbation rates in each trial in supplementary table S1. This shows considerable discrepancy between the attack rates in the different trials.
The variation in the frequency of the asthma attacks between studies causes two problems. Firstly the results cannot readily be applied in clinical practice, as the treatments have been compared in a wide variety of different severities of asthma. Secondly, since the frequency of asthma attacks is likely to be an important effect modifier when comparing the effects of the different maintenance treatments, the indirect comparisons may be confounded by this.
One example of the possible impact of such confounding is explored further in my editorial, and may explain why combination with lower dose of inhaled corticosteroids (ICS) is ranked in the network as likely to be more effective than combination therapy with higher dose ICS. In general the trials using higher dose ICS were on people who had more severe asthma and more frequent attacks (as you might expect), so this is not a fair indirect comparison of the higher and lower dose combination therapy. There was very little direct evidence comparing the high and low dose ICS combination treatments (i.e. randomised within the same trial).
Finally, it is also not too surprising that current best practice comes out as least likely to lead to withdrawal of treatment. The authors suggest that this outcome gives an indication of the safety of the asthma treatments. However, it may simply be that all the other treatments required some change for the participants. When treatments are changed we would expect some people to dislike the new inhaler for one reason or another. Again this is not really a fair comparison.
It is also a shame that the safety of the different treatments is discussed in the text without much mention of serious adverse events, which are relegated to a supplementary table!
For a more in-depth discussion of the methodology of the network approach, you might find the following helpful: Cipriani A, Higgins JPT, Geddes JR, Salanti G. Conceptual and technical challenges in network meta-analysis. Ann Intern Med 2013; 159: 130-7.
How to make sense of a Cochrane systematic review (BREATHE 2014)
This free access 2014 article in BREATHE unpacks what goes into systematic reviews and gives pointers on how to appraise them. We use the example of the Cochrane systematic review comparing spacers with nebulisers to deliver salbutamol in acute asthma in adults and children.
Twenty years of the Cochrane Collaboration (Video)
Update Magazine Short Statistical Articles (2005/6)
Combining the results from Clinical Trials (Pulse Article 2001)
This article is part of a series on Critical Reading.
In an article on sub-group comparisons I warned about the danger of paying too much attention to results from patients in particular sub-groups of a trial, arguing that the overall treatment effect is usually the best measure for all the patients.
In the same way, when the results of all available clinical trials are combined in a Systematic Review (for example in a Cochrane review) care is still required in the interpretation of the results from each individual trial, and the main focus is on the pooled result giving the average from all the trials. The results are often displayed in a forest plot as demonstrated below. The result of each trial is represented by a rectangle (which is larger for the bigger trials) and the horizontal lines indicate the 95% confidence interval of each trial. The diamond at the bottom is the pooled result and its confidence interval is the width of the diamond.
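The pooling behind a forest plot can be sketched with inverse-variance weighting, where bigger trials (smaller standard errors) get more weight; the trial figures below are made up for illustration and are not taken from the review:

```python
import math

# Hypothetical trial results as (log odds ratio, standard error) pairs;
# these values are invented for illustration only.
trials = [
    (math.log(0.8), 0.5),
    (math.log(1.2), 0.4),
    (math.log(0.9), 0.3),
]

# Inverse-variance fixed-effect pooling: weight = 1 / SE^2,
# so larger (more precise) trials dominate the average.
weights = [1 / se**2 for _, se in trials]
pooled_log_or = sum(w * y for (y, _), w in zip(trials, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# The 95% confidence interval is the width of the diamond in a forest plot.
lo = math.exp(pooled_log_or - 1.96 * pooled_se)
hi = math.exp(pooled_log_or + 1.96 * pooled_se)
print(f"pooled OR = {math.exp(pooled_log_or):.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

Note how the pooled confidence interval is narrower than any individual trial's, which is exactly the shrinking of uncertainty the forest plot displays.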
As hospital admissions for acute asthma were rare in each trial (shown in the columns of data for Holding Chamber and nebuliser), the uncertainty of the individual trials is seen in wide confidence intervals, but when these are pooled together the uncertainty shrinks to a much narrower estimate. The pooled odds ratio of one indicates no demonstrated difference between delivery methods for beta-agonists in acute asthma as far as admission rates are concerned, but the estimate is still imprecise and compatible with both a halving and a doubling of the odds of being admitted to hospital. So we have to say that we do not know whether there is a difference in the rate of admissions between the two delivery methods.
Before all the results are combined it is wise to carry out statistical tests to look for Publication Bias. There is evidence that positive results from clinical trials are more likely to be published in major journals, and in the English language, than similar trials reporting negative results. When published studies are combined, this leads to a tendency to overestimate the benefits of treatment. The easiest way to look for this is a funnel plot, where the result of each trial is plotted against the size of the study. Chance variation means that small studies should show more random scatter in both directions around the pooled result. If all the small studies show positive results, there is a suspicion that other small studies with negative results exist but were not published. The funnel plot shown below is taken from a Cochrane review of the use of Nicotine gum for smoking cessation and is reasonably symmetrical.
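The idea behind reading a funnel plot can be sketched as a crude asymmetry check (this is only an illustration, not a formal test such as Egger's regression, and the trial values are invented):

```python
import math

# Hypothetical (log odds ratio, standard error) pairs: one large precise
# trial near the pooled effect, plus several small imprecise trials that
# all happen to fall on the "positive" side -- the funnel-plot warning sign.
trials = [
    (0.10, 0.10),
    (0.05, 0.15),
    (0.60, 0.45),
    (0.55, 0.50),
    (0.70, 0.55),
]

# Inverse-variance pooled effect (the centre line of the funnel).
pooled = sum(y / se**2 for y, se in trials) / sum(1 / se**2 for y, se in trials)

# Small studies (large SE) should scatter evenly above and below the
# pooled result; a one-sided excess raises suspicion of publication bias.
small = [y for y, se in trials if se > 0.3]
above = sum(y > pooled for y in small)
print(f"pooled log OR = {pooled:.2f}; {above} of {len(small)} small trials above it")
```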
A further important check is to look for Heterogeneity. The individual trials will again show chance variation in their results, and in a Systematic Review it is usual to test whether the differences are larger than those expected by chance alone. The Forest plot above shows that the Heterogeneity in this set of trials is quite low. However, if significant Heterogeneity is shown (in other words, the results are more diverse than expected) it is recommended to explore the reasons why this may be. Although statistical adjustments can be made to incorporate such Heterogeneity (using a so-called Random Effects Model), this should not be accepted uncritically. It may be more sensible not to try to combine the trial results at all.
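The usual heterogeneity statistics, Cochran's Q and I², can be computed in a few lines; the trial values below are invented to show a low-heterogeneity case like the forest plot above:

```python
import math

# Hypothetical (log odds ratio, standard error) pairs -- illustrative only.
trials = [(-0.2, 0.3), (0.1, 0.25), (-0.05, 0.35)]

# Inverse-variance weights and fixed-effect pooled estimate.
w = [1 / se**2 for _, se in trials]
pooled = sum(wi * y for (y, _), wi in zip(trials, w)) / sum(w)

# Cochran's Q: weighted squared deviations of each trial from the pool.
q = sum(wi * (y - pooled)**2 for (y, _), wi in zip(trials, w))
df = len(trials) - 1

# I^2: the share of variation beyond what chance alone would produce
# (floored at zero when Q falls below its degrees of freedom).
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
print(f"Q = {q:.2f} on {df} df, I^2 = {i2:.0f}%")
```

Here Q is smaller than its degrees of freedom, so I² is zero: the trials differ no more than chance predicts, and pooling is uncontroversial. A large I² would be the signal to investigate before reaching for a Random Effects Model.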
An example of this can be found in the BMJ in October 1999 in which a group from Toronto published a meta-analysis of Helicobacter eradication (1). The statistical tests showed considerable Heterogeneity between the trials that was largely ignored by the authors. Inspection of the trials shows that there were two types; some with outcomes measured at six weeks using single treatments and others using triple therapy and measuring dyspepsia at one year. There is no good clinical reason to put these together and this may well explain the diversity of the results (2).
The message is to use your common sense when deciding whether the differences between the outcomes measured and the treatments used in each trial mean that it is safer not to calculate a single average result (not least because the average is not easy to interpret and apply to clinical practice).
Reference:
1. Jaakimainen RL, Boyle E, Tuciver F. Is Helicobacter pylori associated with non-ulcer dyspepsia and will eradication improve symptoms? A meta-analysis. BMJ 1999;319:1040-4
2. Cates C. Studies included in meta-analysis had heterogeneous, not homogeneous, results. BMJ 2000;320:1208