The use of cost-effectiveness modeling to prioritize healthcare spending has become a cornerstone of UK government policy. Although the preferred method of evaluation, cost-utility analysis, is not without its critics, it represents a standard approach that can arguably be used to assess relative value for money across a range of disease types and interventions. A key limitation of economic modeling, however, is that its conclusions hinge on input assumptions, many of which are derived from randomized controlled trials or meta-analyses that cannot be reliably linked to the real-world performance of treatments in a broader clinical context. Spending decisions are therefore frequently based on artificial constructs whose projected costs and benefits may differ significantly from those achievable in practice. There is a clear need for some form of predictive validation of model claims, both to assess whether past spending decisions can be justified post hoc and to ensure that budgetary expenditure continues to be allocated in the most rational way. To date, however, no timely, effective system for such testing has been implemented, with the consequence that there is little objective evidence as to whether prioritization decisions are actually living up to expectations. This article reviews two initiatives carried out in the UK over the past 20 years, each of which had the potential to address this objective, and considers why they failed to deliver the expected outcomes.