Background Many randomized controlled trials (RCTs) collect cost-effectiveness data. Without appropriate sample size calculations, patient recruitment may cease before the cost-effectiveness of the intervention can be established, or continue after it has been established beyond doubt.
Purpose We determined the frequency with which cost-effectiveness is considered in sample size calculations and whether RCT-based economic evaluations are likely to come to inconclusive results at odds with the clinical findings.
Methods We searched the National Health Service Economic Evaluation Database (NHS EED) to identify RCT-based cost-utility analyses. RCTs that collected individual patient data on costs and quality-adjusted life years (QALYs) were eligible. Studies using models to extrapolate the results of RCTs or with insufficient information on incremental costs and QALYs were excluded.
Results In total, 38 trials met the eligibility criteria. Only one considered cost-effectiveness in its sample size calculations. RCTs were less likely to reach definitive conclusions based on the cost-effectiveness results than on the primary clinical outcome (15.8% vs. 42.1%; McNemar's test, p = 0.01). In trials that provided sufficient data, exploratory analysis indicated that the median power to detect important differences was 29.5% for QALYs, 94.1% for costs, and 78.7% for the primary clinical outcome. In three trials (7.9%), the definitively more effective intervention was found to be more expensive and probably not cost-effective.
Limitations Our results reflect trials where authors considered within-trial estimates of cost-effectiveness to be meaningful. In focusing on one primary clinical outcome from each RCT, we have simplified the clinical effectiveness results, although the primary outcome will usually be one that policy makers use in judging the ‘success’ of the intervention.
Conclusions Economic evaluations conducted alongside RCTs are valuable but often present inconclusive evidence. Trial results may send discordant messages when the most effective intervention is probably not the most cost-effective. Despite methodological advances, trialists rarely assessed the extent to which their trial might resolve the key uncertainties about the cost-effectiveness of interventions. We recommend that grant funders do more to encourage trialists to include economic end points in sample size calculations, particularly when the majority of the costs and benefits of the intervention occur within the time frame of the trial.
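To illustrate the kind of calculation the Conclusions call for, the sketch below sizes a trial on an economic end point, the incremental net benefit (INB = willingness-to-pay × ΔQALYs − Δcost), using the standard two-sample normal approximation. This is not a method reported in the study; it is a minimal sketch, and all inputs in the example (QALY gain, incremental cost, threshold, SD of net benefit) are hypothetical.

```python
import math
from statistics import NormalDist


def sample_size_inb(delta_e, delta_c, wtp, sd_inb, alpha=0.05, power=0.8):
    """Per-arm sample size to detect an incremental net benefit (INB).

    delta_e : expected QALY gain of the intervention
    delta_c : expected incremental cost
    wtp     : willingness-to-pay threshold per QALY
    sd_inb  : per-patient SD of net benefit, assumed equal in both arms
    """
    inb = wtp * delta_e - delta_c  # expected net benefit of the intervention
    if inb <= 0:
        raise ValueError("intervention not expected to be cost-effective "
                         "at this threshold")
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * sd_inb ** 2 / inb ** 2)


# Hypothetical example: 0.05 QALY gain, £500 extra cost,
# £20,000/QALY threshold, SD of net benefit £4,000.
n_per_arm = sample_size_inb(delta_e=0.05, delta_c=500,
                            wtp=20000, sd_inb=4000)
print(n_per_arm)
```

Because the expected INB here (£500) is small relative to its per-patient variability, the required sample is large; this mirrors the abstract's finding that trials powered only on clinical outcomes often had very low power (median 29.5%) for QALYs.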