In a previous article I explained the advantages of reporting the results of studies as an effect size with confidence intervals (usually 95%). The interval expresses how precisely the study estimates the true average effect of a treatment if it were given to everyone in the world with the condition in question. In the same issue of the BMJ in which Simon Chapman eloquently exposed the misuse of the lower confidence limit of data presented about the risk of passive smoking, a systematic review of the evidence on antibiotics for acute cough was published.
Numbers Needed to Treat (NNT)
The review “Quantitative systematic review of randomised controlled trials comparing antibiotic with placebo for acute cough in adults” (Fahey T, Stocks N, Thomas T. BMJ 1998; 316: 906-910) carefully collected the data from trials addressing this important question. Nine trials were found, but one was excluded because it did not meet the inclusion criteria, leaving eight trials with around 700 patients whose results could be analysed. The results are clearly presented as numbers needed to treat and to harm in the Implications section of the paper, and the authors calculated that “for every 100 people treated with antibiotic nine would report an improvement after 7-10 days if they visited their general practitioner but at the expense of seven who would have side-effects from the antibiotic. The resolution of illness in the remaining 84 people would not be affected by treatment with antibiotic.”
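The arithmetic behind these figures is simply the reciprocal of the absolute risk difference. A minimal sketch, using only the review's rounded figures quoted above (nine extra improvements and seven extra side-effects per 100 patients treated; the exact pooled rates are in the paper):

```python
def number_needed(risk_difference):
    """Number needed to treat (or harm): the reciprocal of the
    absolute risk difference between the two arms."""
    return 1.0 / risk_difference

# 9 extra improvements per 100 treated -> NNT
nnt = number_needed(9 / 100)
# 7 extra side-effects per 100 treated -> NNH
nnh = number_needed(7 / 100)

print(f"NNT = {nnt:.0f}")  # about 11 patients treated for one extra improvement
print(f"NNH = {nnh:.0f}")  # about 14 patients treated for one extra side-effect
```

So roughly 11 patients must be treated for one extra improvement, while roughly 14 treated patients yield one extra side-effect, which is why the two effects sit so close together.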
This information could be extremely useful in discussing with patients whether they need an antibiotic for their acute cough, although it should be noted that most of the trials used doxycycline or co-trimoxazole, which are perhaps no longer first-choice antibiotics in this group of patients. Unfortunately the reporting of the results earlier in the paper is not quite so elegant, and I wonder whether the authors have been striving to push the figures into the form they wanted in order to obtain statistical significance. The diagram below shows the results from the meta-analysis for clinical improvement at day 7-11 and for side effects of antibiotics.
To my mind these two effects are quite well balanced and fit the description of the results as numbers needed to treat above. The authors, however, take a different view. They report the benefit of giving an antibiotic as none (presumably because the 95% confidence interval includes the possibility of no difference, as shown), whilst the possibility of side effects is reported as a non-significant increase. In view of the symmetry shown above, this is not exactly even-handed. Moreover, they then proceed to adjust the data by removing the only trial that showed an excess of side effects in the placebo group (which might be expected by chance in some trials with small numbers), and suddenly the non-significant trend reaches statistical significance!
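A confidence interval that includes zero is what turns a real-looking trend into "no difference". A minimal sketch with hypothetical counts (none of these numbers come from the review) showing how a risk difference and its 95% Wald interval are computed, and how the interval can straddle zero:

```python
import math

def risk_difference_ci(a, n1, b, n2, z=1.96):
    """Risk difference between two arms with a 95% Wald confidence
    interval; an interval crossing zero means 'no difference'
    cannot be ruled out at the 5% level."""
    p1, p2 = a / n1, b / n2
    rd = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return rd, rd - z * se, rd + z * se

# Hypothetical counts: 60/100 improved on antibiotic vs 51/100 on placebo.
rd, lo, hi = risk_difference_ci(60, 100, 51, 100)
print(f"RD = {rd:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```

Here the point estimate favours treatment, but the interval runs from below zero to well above it, so the same data can honestly be described either as "no benefit shown" or as "a non-significant trend towards benefit" depending on the message the authors wish to convey.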
All this makes me suspicious that the authors were keen to deliver the message that antibiotics are not much use in acute cough, and perhaps they have been a little biased in the way the results are displayed. This may not always be easy to spot in a paper, but it is certainly worth examining how results are reported when they take the form of a trend that does not reach significance, as this may give clues about the authors' views on the data.
Sensitivity Analysis, Sub-group Analysis and Heterogeneity
Sensitivity analysis is an expected part of meta-analysis: it involves excluding lower-quality data to see whether the overall result changes. It is also possible to carry out sub-group analysis to look for differences between groups of patients or treatments; for example, the data could have been divided into trials using erythromycin as one sub-group, doxycycline as a second and co-trimoxazole as a third. There are, however, dangers in data dredging, and it is safest when a small number of sub-groups is specified in advance. It should also be pointed out that sub-groups do not randomise one treatment against another, so the protection against bias is lost in this type of comparison.
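The mechanics of a sensitivity analysis can be sketched as re-pooling the trials with one of them left out. This is a minimal illustration with made-up trial counts (trial "C" plays the role of the odd one out, as in the side-effects example above; none of the numbers are from the review), using simple fixed-effect inverse-variance pooling of risk differences:

```python
# Hypothetical trials: name -> (events_tx, n_tx, events_ctrl, n_ctrl)
trials = {
    "A": (12, 50, 8, 50),
    "B": (20, 80, 14, 80),
    "C": (5, 40, 9, 40),   # the trial whose result points the other way
}

def rd_and_var(a, n1, b, n2):
    """Risk difference and its variance for one trial."""
    p1, p2 = a / n1, b / n2
    return p1 - p2, p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2

def pooled_rd(data):
    """Fixed-effect (inverse-variance weighted) pooled risk difference."""
    w_sum = rd_sum = 0.0
    for a, n1, b, n2 in data.values():
        rd, v = rd_and_var(a, n1, b, n2)
        w = 1.0 / v
        w_sum += w
        rd_sum += w * rd
    return rd_sum / w_sum

print(f"All trials:  {pooled_rd(trials):+.3f}")
without_c = {k: v for k, v in trials.items() if k != "C"}
print(f"Excluding C: {pooled_rd(without_c):+.3f}")
```

Dropping the one contrary trial pulls the pooled estimate noticeably further from zero, which is exactly the manoeuvre the review is criticised for above: legitimate as a pre-specified sensitivity analysis, suspect when it is what delivers significance.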
A final reason to split up the data is if significant heterogeneity is shown between the trials; normally this would be presented as a chi-squared statistic for each outcome, which should be accompanied by its P value. A simple shortcut when looking at the graphical display of the trials is to check whether the 95% confidence intervals all overlap; if they do not, there are probably significant differences between the trials.
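The chi-squared statistic referred to here is usually Cochran's Q: the weighted sum of squared deviations of each trial's result from the pooled estimate, compared against a chi-squared distribution with one fewer degree of freedom than there are trials. A minimal sketch with hypothetical per-trial risk differences and variances (not taken from the review):

```python
# Hypothetical (risk_difference, variance) per trial.
results = [
    (0.080, 0.0063),
    (0.075, 0.0041),
    (-0.100, 0.0071),  # the trial pointing the other way
]

weights = [1.0 / v for _, v in results]
pooled = sum(w * rd for (rd, _), w in zip(results, weights)) / sum(weights)

# Cochran's Q: weighted squared deviations from the pooled estimate.
q = sum(w * (rd - pooled) ** 2 for (rd, _), w in zip(results, weights))
df = len(results) - 1

# Compare q against the chi-squared critical value on df degrees of freedom.
print(f"Q = {q:.2f} on {df} df")
```

A large Q relative to its degrees of freedom signals heterogeneity, mirroring the graphical shortcut: trials whose confidence intervals fail to overlap contribute the large deviations that inflate Q.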