Background: Various methodologic approaches have been used to estimate the role of risk factors in explaining the social gradient in coronary heart disease (CHD). Our objective was to examine whether there is a discrepancy in results obtained using the relative and absolute approaches.
Methods: Data are from the Whitehall II prospective cohort study of 5363 men aged 40 to 62 years at the start of the 11-year follow-up period.
Results: One or more of the 4 conventional risk factors examined (smoking, hypertension, high cholesterol, and diabetes) were present in 77% of individuals in the low socioeconomic group compared with 68% in the high socioeconomic group. The relative risk for incident CHD in the low socioeconomic group was 1.66 (95% confidence interval = 1.20 to 2.29) compared with the high group. Standardizing the distribution of risk factors in the low and high socioeconomic groups to the overall study sample reduced the relative risk by 16% and the absolute risk by 14%. We also computed the population attributable risk (PAR) to indicate the reduction in CHD if the risk factors were completely removed from the population. The PAR associated with having at least one risk factor was 41% (95% confidence interval = 33% to 57%) in the high socioeconomic group and 58% (13% to 91%) in the low socioeconomic group.
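The abstract does not state the formula used for the PAR calculation; a common choice is Levin's formula, which combines the prevalence of exposure with the relative risk of disease given exposure. The sketch below, with purely hypothetical input values, illustrates that calculation; the function name and numbers are assumptions, not the authors' method or data.

```python
def levin_par(prevalence, rr):
    """Population attributable risk via Levin's formula (illustrative sketch):
    the expected proportional reduction in disease incidence if the exposure
    were completely removed from the population.

    prevalence -- proportion of the population exposed (e.g. having at
                  least one risk factor)
    rr         -- relative risk of disease among the exposed
    """
    excess = prevalence * (rr - 1.0)
    return excess / (1.0 + excess)

# Hypothetical illustration: 70% exposed, relative risk 2.0
print(round(levin_par(0.70, 2.0), 3))
```

If no one is exposed (prevalence 0) or the exposure carries no excess risk (rr = 1), the PAR is zero, matching the interpretation of PAR as the incidence reduction achievable by removing the exposure.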
Conclusions: When the goal is to remove social differences in the distribution of risk factors, conventional risk factors explain a similar proportion of the social gradient in CHD whether the relative or the absolute approach to change in risk is used. This is not comparable to population attributable risk calculations, in which the goal is to completely remove the risk factors from the population. Failure to recognize that these methods address different questions appears to be the reason for discrepancies in previous results.