Measuring the costs and effectiveness of treatment (Prescriber 2003)

The Prescriber series on evidence-based medicine aims to provide the reader with an easy-to-follow guide to a complex topic. Using practical examples, the articles will help you apply evidence-based medicine to daily practice. In this final article, we look at how cost-effectiveness is calculated.

Previous articles in this series have described the statistical methods used to find out whether treatments are effective in clinical trials. Before embarking on a cost-effectiveness analysis it is wise to check first that there is good evidence that the treatment works: as this article demonstrates, considering costs introduces many extra levels of uncertainty, and there is little value in building a cost-effectiveness analysis on a treatment that has not reliably been shown to be better than placebo.

Let us take as an example the recent report on the benefit of ramipril (Tritace) in the secondary prevention of stroke from the Heart Outcomes Prevention Evaluation (HOPE) investigators.1 This was a large study of 9297 high-risk patients aged over 55 who were treated with either 10mg ramipril daily or placebo for an average of 4.5 years. The study showed a highly statistically significant 32 per cent reduction in the risk of stroke (95 per cent confidence interval (CI) of 16-44 per cent reduction). The risk of fatal stroke was reduced by 61 per cent (95 per cent CI of 33-78 per cent reduction) over the 4.5 years of the study.

So here we have convincing evidence that ramipril was better than placebo. Subsequent correspondence, however, has pointed out that the presentation of the results concentrates on relative rather than absolute benefits, and that there is no mention of the potential costs involved in preventing strokes with ramipril.2 The letters raise various points, both about the way the results are presented and about the remaining uncertainty over whether the effect is specific to ramipril (or to ACE inhibitors in general) or is a general benefit of blood pressure reduction.

Cost-effectiveness analysis

In order to carry out a cost-effectiveness analysis, the consequences of a treatment must be measurable in suitable units (those that measure an important outcome),3 so in this case the unit could be one stroke. By making the unit of analysis ‘one stroke prevented’, the costs of caring for stroke can be set on one side and the costs of different treatments can be calculated. There is debate about whether non-NHS costs should be included, but for simplicity we will restrict ourselves to the ingredient costs of the drugs used for stroke prevention and ignore the costs of blood tests for monitoring.

We find that, because strokes occurred in only 4.9 per cent of the patients in the placebo group, the impressive 32 per cent relative risk reduction actually translates into an absolute risk reduction of 1.5 per cent and a number needed to treat (NNT) of 66 people (95 per cent CI of 49 to 128) for 4.5 years to prevent one stroke.
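As a quick check of the arithmetic, the following short calculation (a sketch in Python, using only the figures quoted above) shows how the absolute risk reduction and the NNT follow from the placebo-group risk and the relative risk reduction; the small difference from the published NNT of 66 simply reflects rounding in the quoted percentages.

# Sketch: deriving the absolute risk reduction and NNT from the figures quoted above
baseline_risk = 0.049            # stroke risk in the placebo group over 4.5 years
relative_risk_reduction = 0.32   # 32 per cent reduction reported with ramipril

absolute_risk_reduction = baseline_risk * relative_risk_reduction   # about 1.6 per cent
nnt = 1 / absolute_risk_reduction                                   # about 64

print(f"ARR = {absolute_risk_reduction:.1%}, NNT = {nnt:.0f}")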

The cost of the drug for 66 people for this length of time is about £58 000 (£196 per year each), and the confidence intervals of the NNT translate into a range of £43 000 to £113 000 to prevent one stroke. The temptation to make direct cost comparisons with the results of other drugs in reducing stroke is strong but care needs to be exercised.
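These cost figures can be reproduced in the same way (again just a sketch, with the annual drug cost of £196 and the NNT confidence interval taken from the text):

# Sketch: drug cost per stroke prevented, for the NNT point estimate and CI limits
annual_drug_cost = 196.0   # pounds per patient per year, as quoted above
duration_years = 4.5

for nnt in (66, 49, 128):  # point estimate, then lower and upper 95 per cent CI limits
    cost = nnt * annual_drug_cost * duration_years
    print(f"NNT {nnt}: about £{cost:,.0f} to prevent one stroke")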

There is a recognised difficulty in comparing NNTs for treatments of different durations,4 and this can be overcome by looking at cost-effectiveness instead, since the duration of treatment is then taken into account. More events will be prevented with longer trial durations, but the costs of treatment rise in parallel, so the cost per event prevented should stay the same whatever duration is considered.
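A simple illustration of why this works (hypothetical figures, assuming events are prevented at a constant rate): doubling the duration roughly halves the NNT but doubles the per-patient drug cost, so the cost per event prevented is essentially unchanged.

# Sketch: cost per event prevented is roughly independent of treatment duration,
# assuming a constant rate of events prevented (hypothetical rate of 0.33 per cent per year)
annual_drug_cost = 196.0
events_prevented_per_year = 0.0033

for duration_years in (2.0, 4.5, 9.0):
    nnt = 1 / (events_prevented_per_year * duration_years)
    cost_per_event = nnt * annual_drug_cost * duration_years
    print(f"{duration_years:g} years: NNT {nnt:.0f}, about £{cost_per_event:,.0f} per event prevented")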

There is, however, a residual problem with any kind of absolute treatment effect (including cost-effectiveness). The size of the absolute benefit is closely related to the baseline risk of the patient being treated, so high-risk patients will tend to show lower NNTs and lower costs per event prevented. This is because the relative risk reduction tends to be fairly consistent across different levels of baseline risk. This is demonstrated in the ramipril results, where the relative risk reduction is very similar for patients with high and normal blood pressure,1 but those with higher blood pressure have higher absolute risks of stroke and therefore derive more benefit from treatment.
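The same arithmetic shows the effect of baseline risk (hypothetical risk levels, holding the 32 per cent relative risk reduction constant): the higher the baseline risk, the lower the NNT and the lower the cost per event prevented.

# Sketch: with a fixed relative risk reduction, higher baseline risk means a lower
# NNT and a lower cost per event prevented (hypothetical baseline risks)
relative_risk_reduction = 0.32
annual_drug_cost = 196.0
duration_years = 4.5

for baseline_risk in (0.02, 0.05, 0.10):
    nnt = 1 / (baseline_risk * relative_risk_reduction)
    cost = nnt * annual_drug_cost * duration_years
    print(f"baseline risk {baseline_risk:.0%}: NNT {nnt:.0f}, about £{cost:,.0f} per event prevented")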

A further example of this relates to the cost of using statins. To prevent one cardiovascular event, fewer patients need to be given a statin when they are used for secondary prevention (where the baseline risk is high) than for primary prevention (lower baseline risk). For this reason, before comparing costs between trials or meta-analyses of different treatments against placebo, it is important to check that the baseline risks of the patients in the placebo groups are similar. In fact the patients included in the Heart Protection Study5 did have a similar baseline risk of stroke to those in HOPE, and a similar duration of treatment.

Here it is reasonable to compare the costs of using a statin, and this works out as more expensive, at around £100 000 to prevent one stroke; this allows for the fact that some placebo arm patients ended up on a statin and not all the actively treated patients stayed on treatment. Aspirin is roughly two orders of magnitude cheaper, at around £500 per stroke prevented, but hopefully most patients will be receiving this already.

Head-to-head comparisons of different interventions in a single trial can overcome the above difficulties, but in order to generate the power required to reliably detect small differences, prohibitively large numbers of patients need to be recruited. This in turn raises a further question about whether the costs of finding the answer outweigh the benefits of knowing it!

Cost minimisation

It is a mistake to think that economic analysis is only about minimising the costs of the treatment itself; if this were the only concern all asthmatics would be treated with oral steroids (the cheapest option). Clearly this ignores the known risks of long-term systemic treatment with oral steroids and would be entirely unethical.

In some situations, however, there is enough reliable information to persuade us that different treatments lead to similar outcomes, and in this instance a cost-minimisation approach can be used. An example of this is the use of different delivery devices in asthma. A systematic search of the literature6 found little evidence that any of the devices produces superior outcomes in clinical trials, so a cost-minimisation analysis was carried out in which the costs of the devices were directly compared. Since a metered-dose inhaler with spacer is the cheapest method available, this is the preferred first-line delivery method to try, although of course some patients will still need dry powder devices or breath-activated inhalers.

Cost-utility analysis

In some cases treatments cannot be directly compared using one of the simpler methods above because they alter both the quality and the quantity of life. Many of the treatments used in cancer fall into this category, and assessments have to be made that incorporate both mortality and quality of life (QoL).

One way of judging how much people value their current health status is the standard gamble technique. Patients are asked to consider the theoretical possibility of having a treatment for their condition that has a chance of leaving them in perfect health or of causing death; the probability of each outcome is adjusted until they are unsure whether or not to accept the treatment, and this indifference point can be used to rate their current QoL. This information can then be turned into quality-adjusted-life-years (QALYs) to allow the results of treatments for different diseases to be compared.
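As a purely illustrative sketch (the figures are hypothetical, not taken from any study): if a patient is indifferent to a gamble offering a 70 per cent chance of perfect health and a 30 per cent chance of death, their current state is valued at a utility of 0.7, and years lived in that state can be weighted accordingly.

# Sketch: turning a standard-gamble result into QALYs (hypothetical values throughout)
indifference_probability = 0.7       # chance of perfect health at which the patient is unsure
utility = indifference_probability   # utility assigned to the current health state

years_gained = 10                    # extra years a treatment is assumed to provide
qalys_gained = utility * years_gained   # 7 QALYs
treatment_cost = 14_000                 # hypothetical cost in pounds

print(f"{qalys_gained:.1f} QALYs gained, about £{treatment_cost / qalys_gained:,.0f} per QALY")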

Sensitivity analysis

Since all economic analyses require assumptions to be made about the costs of treatments and the value of outcomes, it is usual to carry out a sensitivity analysis to see how much the results vary when the assumptions are altered. In particular, it may be necessary to predict what would happen beyond the timescale of the trials by using modelling techniques. If the results are very unstable when the assumptions are adjusted, this should be made clear and the reader will need to interpret the analysis with more caution.
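A minimal sketch of a one-way sensitivity analysis, re-running the ramipril cost calculation while varying one assumption at a time (the ranges tried here are illustrative rather than taken from any published analysis):

# Sketch: one-way sensitivity analysis of the cost per stroke prevented,
# varying each assumption separately around the base case (illustrative ranges)
duration_years = 4.5
base_annual_cost = 196.0
base_nnt = 66

def cost_per_stroke(annual_cost, nnt):
    return nnt * annual_cost * duration_years

print(f"base case: £{cost_per_stroke(base_annual_cost, base_nnt):,.0f}")

for annual_cost in (150.0, 250.0):   # vary the assumed drug cost
    print(f"drug cost £{annual_cost:.0f}/year: £{cost_per_stroke(annual_cost, base_nnt):,.0f}")

for nnt in (49, 128):                # vary the NNT across its 95 per cent CI
    print(f"NNT {nnt}: £{cost_per_stroke(base_annual_cost, nnt):,.0f}")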

Decisions have to be made

In the real world medical needs will always exceed the ability of any healthcare system to meet them, and hard choices have to be made every day about how best to use the resources that are available to us. To make those choices, the best available evidence of treatment efficacy (usually from a systematic review of the results of randomised controlled trials) has to be combined with an economic analysis.

These are the processes used by the National Institute for Clinical Excellence (NICE), and they should be as transparent as possible so that we can see how the decisions were reached, even if we do not agree with all of them.

Table 1. Glossary of terms

Cost-effectiveness analysis

A form of economic study design in which consequences of different interventions may vary but can be expressed in identical natural units; competing interventions are compared in terms of cost per unit of consequence

Cost-minimisation analysis

An economic study design in which the consequences of competing interventions are the same and in which only inputs are taken into consideration; the aim is to decide which is the cheapest way of achieving the same outcome

Cost-utility analysis

A form of economic study design in which interventions producing different consequences in both quality and quantity of life are expressed as utilities; the best known utility measure is the quality-adjusted-life-year or QALY; competing interventions can be compared in terms of cost per QALY

Sensitivity analysis

A technique that repeats the comparison between inputs and consequences, varying the assumptions underlying the estimates – in doing so, sensitivity analysis tests the robustness of the conclusions by varying the items around which there is uncertainty

Acknowledgement

I would like to thank Professor Miranda Mugford for permission to use the glossary of terms from Elementary Economic Evaluation in Health Care and for helpful comments on this article.

References

1. Bosch J, Yusuf S, Pogue J, et al. Use of ramipril in preventing stroke: double blind randomised trial. BMJ 2002;324:699-702.
2. Badrinath P, Wakeman AP, Wakeman JG, et al. Preventing stroke with ramipril. BMJ 2002;325:439.
3. Jefferson TO, Demicheli V, Mugford M. Elementary economic evaluation in health care. 2nd ed. London: BMJ Books, 2000;132.
4. Smeeth L, Haines A, Ebrahim S. Numbers needed to treat derived from meta-analyses – sometimes informative, usually misleading. BMJ 1999;318:1548-51.
5. Heart Protection Study Collaborative Group. MRC/BHF Heart Protection Study of cholesterol lowering with simvastatin in 20 536 high-risk individuals: a randomised placebo-controlled trial. Lancet 2002;360:7-22.
6. Brocklebank D, Ram F, Wright J, et al. Comparison of the effectiveness of inhaler devices in asthma and chronic obstructive airways disease: a systematic review of the literature. Health Technol Assess 2001;5:1-149.