JAMA Cardiol. 2016 Dec 1;1(9):1014-1020.
doi: 10.1001/jamacardio.2016.3236.

Comparison of Approaches for Heart Failure Case Identification From Electronic Health Record Data


Saul Blecker et al. JAMA Cardiol.

Abstract

Importance: Accurate, real-time case identification is needed to target interventions to improve quality and outcomes for hospitalized patients with heart failure. Problem lists may be useful for case identification but are often inaccurate or incomplete. Machine-learning approaches may improve accuracy of identification but can be limited by complexity of implementation.

Objective: To develop algorithms that use readily available clinical data to identify patients with heart failure while in the hospital.

Design, setting, and participants: We performed a retrospective study of hospitalizations at an academic medical center. Hospitalizations for patients 18 years or older who were admitted after January 1, 2013, and discharged before February 28, 2015, were included. From a random 75% sample of hospitalizations, we developed 5 algorithms for heart failure identification using electronic health record data: (1) heart failure on problem list; (2) presence of at least 1 of 3 characteristics: heart failure on problem list, inpatient loop diuretic, or brain natriuretic peptide level of 500 pg/mL or higher; (3) logistic regression of 30 clinically relevant structured data elements; (4) machine-learning approach using unstructured notes; and (5) machine-learning approach using structured and unstructured data.
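Algorithm 2 is a simple disjunction of three criteria, so it can be expressed directly as a rule. A minimal sketch, assuming hypothetical field names for the electronic health record data (the article does not specify a data schema):

```python
def algorithm2_flags_heart_failure(hospitalization):
    """Flag a hospitalization as heart failure (algorithm 2) if any of
    three criteria holds: heart failure on the problem list, an
    inpatient loop diuretic, or a BNP level of 500 pg/mL or higher.

    `hospitalization` is a dict with hypothetical keys; missing keys
    are treated as criterion-not-met."""
    return (
        hospitalization.get("hf_on_problem_list", False)
        or hospitalization.get("loop_diuretic_ordered", False)
        or (hospitalization.get("bnp_pg_ml") or 0) >= 500
    )

# Example: a BNP of 612 pg/mL alone triggers the rule.
print(algorithm2_flags_heart_failure({"bnp_pg_ml": 612}))  # True
```

A rule of this form trades precision for coverage: any single criterion suffices, which is why algorithm 2 raises sensitivity relative to the problem list alone while lowering positive predictive value.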

Main outcomes and measures: Heart failure diagnosis based on discharge diagnosis and physician review of sampled medical records.

Results: A total of 47 119 hospitalizations were included in this study (mean [SD] age, 60.9 [18.15] years; 23 952 female [50.8%], 5258 black/African American [11.2%], and 3667 Hispanic/Latino [7.8%] patients). Of these hospitalizations, 6549 (13.9%) had a discharge diagnosis of heart failure. Inclusion of heart failure on the problem list (algorithm 1) had a sensitivity of 0.40 and a positive predictive value (PPV) of 0.96 for heart failure identification. Algorithm 2 improved sensitivity to 0.77 at the expense of a PPV of 0.64. Algorithms 3, 4, and 5 had areas under the receiver operating characteristic curves of 0.953, 0.969, and 0.974, respectively. With a PPV of 0.9, these algorithms had associated sensitivities of 0.68, 0.77, and 0.83, respectively.
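The sensitivities reported at a PPV of 0.9 reflect a threshold choice on each algorithm's continuous score. A minimal sketch of that trade-off on made-up scores and labels (not the study's data or code): compute sensitivity and PPV from predictions, then pick the lowest threshold whose PPV meets the target, which maximizes sensitivity subject to the PPV constraint.

```python
def sensitivity_and_ppv(predicted, actual):
    """Sensitivity (recall) and positive predictive value (precision)
    from parallel lists of booleans."""
    tp = sum(p and a for p, a in zip(predicted, actual))
    fn = sum((not p) and a for p, a in zip(predicted, actual))
    fp = sum(p and (not a) for p, a in zip(predicted, actual))
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    ppv = tp / (tp + fp) if tp + fp else 0.0
    return sensitivity, ppv

def threshold_for_target_ppv(scores, labels, target_ppv):
    """Lowest score threshold whose PPV meets the target.
    Since sensitivity is non-increasing in the threshold, the lowest
    qualifying threshold gives the highest sensitivity."""
    for t in sorted(set(scores)):
        pred = [s >= t for s in scores]
        sens, ppv = sensitivity_and_ppv(pred, labels)
        if ppv >= target_ppv:
            return t, sens, ppv
    return None

# Toy data: 8 hospitalizations with model scores and true HF labels.
scores = [0.95, 0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2]
labels = [True, True, True, False, True, False, False, False]
print(threshold_for_target_ppv(scores, labels, 0.75))  # (0.6, 1.0, 0.8)
```

This is the same operating-point logic behind the reported results: a higher target PPV forces a higher threshold and thus a lower sensitivity, and better-ranked scores (higher AUC) lose less sensitivity for the same PPV target.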

Conclusions and relevance: The problem list is insufficient for real-time identification of hospitalized patients with heart failure. The high predictive accuracy of machine learning using free text demonstrates that support of such analytics in future electronic health record systems can improve cohort identification.

Figures

Figure 1
Receiver operating characteristic (ROC) curves for three algorithms to classify patients with heart failure: logistic regression of structured data (algorithm 3), machine learning of unstructured data (algorithm 4), and machine learning of a combination of structured and unstructured data (algorithm 5). Also included are points for the two algorithms that represent binary classification: heart failure on problem list (algorithm 1) and presence of 1 of 3 clinical characteristics (algorithm 2).
Figure 2
Number of patients identified as having heart failure by each algorithm, among hospitalizations in the validation set with a discharge diagnosis of heart failure whose patients were not compliant with one of three quality metrics. The quality metrics were: assessment of ejection fraction (EF) with echocardiography; discharge medication of an ACE inhibitor or ARB for patients with documented EF ≤40%; and discharge medication of a heart failure-specific beta-blocker for patients with documented EF ≤40%. The figure displays true positives and does not account for false positives; for instance, false positives for EF measurement were 6, 318, 41, 56, and 71, with corresponding positive predictive values (PPVs) of 0.92, 0.30, 0.71, 0.71, and 0.67 for algorithms 1 through 5, respectively. The algorithms were: heart failure on problem list (algorithm 1), presence of 1 of 3 clinical characteristics (algorithm 2), logistic regression of structured data (algorithm 3), machine learning of unstructured data (algorithm 4), and machine learning of a combination of structured and unstructured data (algorithm 5).
