Objective: Determine how varying longitudinal historical training data can impact prediction of future clinical decisions. Estimate the "decay rate" of clinical data source relevance.
Materials and methods: We trained a clinical order recommender system, analogous to Netflix or Amazon's "Customers who bought A also bought B..." product recommenders, based on a tertiary academic hospital's structured electronic health record data. We used this system to predict future (2013) admission orders based on different subsets of historical training data (2009 through 2012), relative to existing human-authored order sets.
Results: Predicting future (2013) inpatient orders is more accurate with models trained on just one month of recent (2012) data than with 12 months of older (2009) data (ROC AUC 0.91 vs. 0.88, precision 27% vs. 22%, recall 52% vs. 43%, all P < 10⁻¹⁰). Algorithmically learned models from even the older (2009) data were still more effective than existing human-authored order sets (ROC AUC 0.81, precision 16%, recall 35%). Training with more longitudinal data (2009-2012) was no better than using only the most recent (2012) data, unless applying a decaying weighting scheme with a "half-life" of data relevance of about 4 months.
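The decaying weighting scheme can be sketched as follows. This is a minimal illustration, assuming an exponential half-life decay (the function name and parameterization here are illustrative, not taken from the study):

```python
def decay_weight(age_months, half_life_months=4.0):
    """Relevance weight for a training example that is `age_months` old.

    The weight halves every `half_life_months`, mirroring the roughly
    4-month half-life of data relevance reported in the results.
    """
    return 0.5 ** (age_months / half_life_months)

# A fresh record gets full weight; a 4-month-old record half; a year-old
# record one-eighth.
print(decay_weight(0))   # 1.0
print(decay_weight(4))   # 0.5
print(decay_weight(12))  # 0.125
```

Under such a scheme, older longitudinal data still contributes, but recent practice patterns dominate the learned model.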
Discussion: Clinical practice patterns (automatically) learned from electronic health record data can vary substantially across years. Gold standards for clinical decision support are elusive moving targets, reinforcing the need for automated methods that can adapt to evolving information.
Conclusions and relevance: Prioritizing small amounts of recent data is more effective than using larger amounts of older data for predicting future clinical decisions.
Keywords: Collaborative filtering; Data mining; Electronic health records; Practice variability; Prediction models.
Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.