Improving Prediction of Low-Prior Clinical Events with Simultaneous General Patient-State Representation Learning

Artif Intell Med Conf Artif Intell Med (2005-). 2021 Jun;12721:479-490. doi: 10.1007/978-3-030-77211-6_57. Epub 2021 Jun 8.

Abstract

Low-prior targets are common among important clinical events, which makes it difficult to gather enough data to train their predictive models. Many prior works address this problem by first building a general patient-state representation model and then adapting it to a new low-prior prediction target. In this scheme, predictive performance can be hindered by misalignment between the general patient-state model and the target task. To overcome this challenge, we propose a new method that simultaneously optimizes a shared model through multi-task learning of both the low-prior supervised target and a general-purpose patient-state representation (GPSR). More specifically, our method improves prediction performance on a low-prior task by jointly optimizing a shared model whose loss combines the target event with a broad range of generic clinical events. We study the approach in the context of Recurrent Neural Networks (RNNs). Through extensive experiments on multiple clinical event targets using MIMIC-III [8] data, we show that including general patient-state representation tasks during model training improves the prediction of individual low-prior targets.
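The combined objective described above can be sketched as a weighted sum of the low-prior target loss and the mean loss over the auxiliary GPSR prediction heads. The following is a minimal NumPy illustration, not the paper's implementation; the function names (`bce`, `joint_loss`) and the mixing weight `alpha` are assumptions for exposition, and the actual model would compute these losses on the outputs of a shared RNN.

```python
import numpy as np

def bce(y_true, y_pred, eps=1e-7):
    """Binary cross-entropy, averaged over samples."""
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return float(-np.mean(y_true * np.log(y_pred)
                          + (1 - y_true) * np.log(1 - y_pred)))

def joint_loss(target_true, target_pred, gpsr_true, gpsr_pred, alpha=0.5):
    """Weighted multi-task loss (illustrative): `alpha` weights the
    low-prior target; (1 - alpha) weights the mean loss over the
    generic patient-state (GPSR) heads."""
    l_target = bce(target_true, target_pred)
    l_gpsr = np.mean([bce(t, p) for t, p in zip(gpsr_true, gpsr_pred)])
    return alpha * l_target + (1 - alpha) * l_gpsr

# Toy usage: one low-prior target head plus two generic event heads.
loss = joint_loss(
    target_true=np.array([1.0, 0.0]),
    target_pred=np.array([0.9, 0.1]),
    gpsr_true=[np.array([1.0, 1.0]), np.array([0.0, 1.0])],
    gpsr_pred=[np.array([0.9, 0.9]), np.array([0.1, 0.9])],
    alpha=0.5,
)
```

With `alpha = 1` the objective reduces to the target-only loss, recovering the single-task baseline; intermediate values trade off the target against the shared representation tasks.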

Keywords: General Patient-State Representation; LSTM; Low-Prior Events; RNN; Simultaneous Learning; Weighted Loss.