Deep advantage learning for optimal dynamic treatment regime

Stat Theory Relat Fields. 2018;2(1):80-88. doi: 10.1080/24754269.2018.1466096. Epub 2018 May 16.

Abstract

Recently, deep learning has achieved state-of-the-art performance on many difficult tasks. Deep neural networks outperform many popular existing methods in the field of reinforcement learning, and they can identify important covariates automatically. Parameter sharing in convolutional neural networks (CNNs) greatly reduces the number of parameters in the network, which allows for high scalability. However, little research has been done on deep advantage learning (A-learning). In this paper, we present a deep A-learning approach to estimating the optimal dynamic treatment regime. A-learning models the advantage function, which is of direct relevance to the goal. We use an inverse probability weighting (IPW) method to estimate the difference between potential outcomes; this approach does not require any model assumption on the baseline mean function. We implement several architectures of deep CNNs and convexified convolutional neural networks (CCNNs). The proposed deep A-learning methods are applied to data from the STAR*D trial and are shown to outperform the penalized least squares estimator with a linear decision rule.
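
To make the IPW step concrete, one standard construction (a generic sketch in common notation, not necessarily the authors' exact estimator) transforms the observed outcome so that its conditional mean equals the contrast of potential outcomes:
\[
\tilde{Y} \;=\; \frac{A\,Y}{\pi(X)} \;-\; \frac{(1-A)\,Y}{1-\pi(X)},
\qquad
E\bigl[\tilde{Y} \mid X\bigr] \;=\; E\bigl[Y^{*}(1)-Y^{*}(0) \mid X\bigr],
\]
where \(A \in \{0,1\}\) is the treatment indicator, \(Y\) the observed outcome, \(X\) the covariates, \(\pi(X) = P(A=1 \mid X)\) the propensity score, and \(Y^{*}(a)\) the potential outcome under treatment \(a\). A deep network fit to \(\tilde{Y}\) then estimates the advantage function directly, with no model needed for the baseline mean \(E[Y^{*}(0) \mid X]\).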

Keywords: Advantage Learning; Convexified Convolutional Neural Networks; Convolutional Neural Networks; Dynamic Treatment Regime; Inverse Probability Weighting.