Background: The burden of atherosclerosis has led to treatment being prioritized for high-risk individuals without established cardiovascular disease, selected on the basis of risk estimates. We investigated the effects of biological variation in risk factors on the accuracy of risk estimates and whether current primary prevention screening (risk assessment) models correctly categorize patients.
Methods: A population of 10 000 'perfect' individuals, each with 100 simulants affected by biological and analytical variation in systolic blood pressure, total cholesterol and high-density lipoprotein (HDL) cholesterol, was mathematically modelled. Coronary heart disease (CHD) risks were calculated using the Framingham study algorithm and the mathematical properties of the screening system were evaluated.
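The simulant approach described above can be sketched as follows. This is a minimal illustration only: the true risk-factor values and the combined biological-plus-analytical coefficients of variation (CVs) used here are assumed for demonstration and are not the study's actual parameters.

```python
import random
import statistics

random.seed(42)

# One hypothetical 'perfect' individual: fixed true risk-factor values
# (illustrative, not taken from the study).
TRUE_VALUES = {"sbp_mmHg": 140.0, "tc_mmol_L": 6.0, "hdl_mmol_L": 1.2}

# Assumed combined biological + analytical CVs (illustrative values).
CV = {"sbp_mmHg": 0.08, "tc_mmol_L": 0.07, "hdl_mmol_L": 0.08}

def simulants(n=100):
    """Generate n simulated measurement sets for the 'perfect' individual,
    each value perturbed by Gaussian noise scaled to its assumed CV."""
    return [
        {k: random.gauss(v, v * CV[k]) for k, v in TRUE_VALUES.items()}
        for _ in range(n)
    ]

sets = simulants()
sbp = [s["sbp_mmHg"] for s in sets]
print(round(statistics.mean(sbp), 1), round(statistics.stdev(sbp), 1))
```

Each simulant would then be fed through the Framingham algorithm to yield a distribution of calculated risks around the individual's 'true' risk.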
Results: At internationally recommended 10-year CHD risk treatment thresholds of 15, 20 and 30%, the 95% confidence intervals were ±5.1, ±6.0 and ±6.9% for single-point (singlicate) estimates, ±3.6, ±4.2 and ±4.9% for duplicate estimates, and ±2.8, ±3.3 and ±3.9% for triplicate estimates, respectively (i.e. for a singlicate 15% risk, the 95% confidence interval is 9.9-20.1%). Consequently, using the 30% risk threshold from the National Service Framework (NSF) for CHD with singlicate estimation, 30% of patients who should receive treatment would be denied it and 20% would receive treatment unnecessarily. Multiple measurements improve precision but cannot absolutely define risk. Blood pressure should be measured to the greatest accuracy possible and not rounded prior to averaging.
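The narrowing of the confidence intervals with replicate measurement is broadly consistent with the standard error of a mean shrinking as 1/sqrt(n). The sketch below applies that approximation to the reported singlicate half-widths; it is a simplification (the full model's behaviour is not exactly 1/sqrt(n), so the triplicate values differ slightly from those reported).

```python
import math

# Reported singlicate 95% CI half-widths (threshold % -> ± half-width %).
SINGLICATE_HALF_WIDTH = {15: 5.1, 20: 6.0, 30: 6.9}

def half_width(threshold, n):
    """Approximate 95% CI half-width when averaging n replicate
    measurements, assuming the half-width scales as 1/sqrt(n)."""
    return SINGLICATE_HALF_WIDTH[threshold] / math.sqrt(n)

for t in (15, 20, 30):
    widths = [round(half_width(t, n), 1) for n in (1, 2, 3)]
    print(f"{t}% threshold: ±{widths[0]}, ±{widths[1]}, ±{widths[2]}")
```

For the 15% threshold this gives ±5.1, ±3.6 and ±2.9, close to the reported singlicate, duplicate and triplicate figures.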
Conclusions: This study suggests that biological variation in cardiovascular risk factors has profound consequences for calculated risk in therapeutic decision-making. Current guidelines recommending multiple measurements are usually ignored. Triplicate measurement is required for risk to be estimated with acceptable precision, and clinical judgement must be exercised in interpreting the results.