Review

PLoS Med. 2012;9(5):1-12. doi: 10.1371/journal.pmed.1001221. Epub 2012 May 22.

Reporting and Methods in Clinical Prediction Research: A Systematic Review

Walter Bouwmeester et al.

Abstract

Background: We investigated the reporting and methods of prediction studies, focusing on aims, designs, participant selection, outcomes, predictors, statistical power, statistical methods, and predictive performance measures.

Methods and findings: We used a full hand search to identify all prediction studies published in 2008 in six high impact general medical journals. We developed a comprehensive item list to systematically score conduct and reporting of the studies, based on recent recommendations for prediction research. Two reviewers independently scored the studies. We retrieved 71 papers for full text review: 51 were predictor finding studies, 14 were prediction model development studies, three addressed an external validation of a previously developed model, and three reported on a model's impact on participant outcome. Study design was unclear in 15% of studies, and a prospective cohort was used in most studies (60%). Descriptions of the participants and definitions of predictor and outcome were generally good. Despite many recommendations against doing so, continuous predictors were often dichotomized (32% of studies). The number of events per predictor as a measure of statistical power could not be determined in 67% of the studies; of the remainder, 53% had fewer than the commonly recommended value of ten events per predictor. Methods for a priori selection of candidate predictors were described in most studies (68%). A substantial number of studies relied on a p-value cut-off of p<0.05 to select predictors in the multivariable analyses (29%). Predictive model performance measures, i.e., calibration and discrimination, were reported in 12% and 27% of studies, respectively.
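Two of the items scored above lend themselves to a small worked illustration: events per variable (EPV) as a crude check of statistical power, and discrimination and calibration as predictive performance measures. The following is a minimal sketch, not taken from the reviewed studies; the simulated cohort, the variable names, and the use of scikit-learn are all illustrative assumptions.

    # Illustrative sketch (not from the paper): EPV, discrimination, calibration.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)

    # Simulated cohort: 500 participants, 8 candidate predictors, binary outcome.
    n, n_predictors = 500, 8
    X = rng.normal(size=(n, n_predictors))
    true_logit = -2.0 + X[:, 0] + 0.5 * X[:, 1]
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_logit)))

    # Events per variable: outcome events divided by candidate predictors.
    # The review flags studies with EPV below the commonly recommended ten.
    epv = y.sum() / n_predictors
    print(f"EPV = {epv:.1f} ({'adequate' if epv >= 10 else 'below 10'})")

    # Fit a prediction model and compute the two performance measures that
    # were rarely reported in the reviewed studies.
    model = LogisticRegression(max_iter=1000).fit(X, y)
    p_hat = model.predict_proba(X)[:, 1]

    # Discrimination: c-statistic, i.e., the area under the ROC curve.
    print(f"c-statistic = {roc_auc_score(y, p_hat):.3f}")

    # Calibration slope: refit the outcome on the model's linear predictor
    # (log-odds of the predicted probabilities); a slope near 1 suggests good
    # calibration. Evaluated on the development data this is optimistic.
    linear_predictor = np.log(p_hat / (1.0 - p_hat)).reshape(-1, 1)
    slope = LogisticRegression(max_iter=1000).fit(linear_predictor, y).coef_[0, 0]
    print(f"calibration slope = {slope:.2f}")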

Conclusions: The majority of prediction studies in high impact journals do not follow current methodological recommendations, limiting their reliability and applicability.

Conflict of interest statement

The authors have declared that no competing interests exist.

Figures

Figure 1. Flowchart of included studies.
(a) The hand search included only studies with an abstract, published in 2008 in The New England Journal of Medicine, The Lancet, JAMA: the Journal of the American Medical Association, Annals of Internal Medicine, BMJ, and PLoS Medicine. The following publication types were excluded beforehand: editorials, bibliographies, biographies, comments, dictionaries, directories, festschrifts, interviews, letters, news, and periodical indexes.
(b) Studies, generally conducted in a still-healthy population, aimed at quantifying a causal relationship between a particular determinant or risk factor and an outcome, adjusting for other risk factors (i.e., confounders).
(c) For example, see .


References

    1. Altman DG, Riley RD. Primer: an evidence-based approach to prognostic markers. Nat Clin Pract Oncol. 2005;2:466–472.
    2. Altman DG. Prognostic models: a methodological framework and review of models for breast cancer. In: Lyman GH, Burstein HJ, editors. Breast cancer: translational therapeutic strategies. New York: Informa Healthcare; 2007. pp. 11–26.
    3. Altman DG, Lyman GH. Methodological challenges in the evaluation of prognostic factors in breast cancer. Breast Cancer Res Treat. 1998;52:289–303.
    4. McShane LM, Altman DG, Sauerbrei W, Taube SE, Gion M, et al. Reporting recommendations for tumor marker prognostic studies (REMARK). J Natl Cancer Inst. 2005;97:1180–1184.
    5. Rothwell PM. Prognostic models. Pract Neurol. 2008;8:242–253.