Interpretable Machine Learning Techniques for Causal Inference Using Balancing Scores as Meta-features

Annu Int Conf IEEE Eng Med Biol Soc. 2018 Jul:2018:4042-4045. doi: 10.1109/EMBC.2018.8513026.

Abstract

Estimating individual causal effects is important for decision making in many fields, especially for medical interventions. We propose an interpretable and accurate algorithm for estimating causal effects from observational data. The proposed scheme combines multiple predictors' outputs with an interpretable predictor, such as a linear model or if-then rules. We secure interpretability by using the interpretable predictor together with balancing scores, a standard construct in causal inference studies, as meta-features. We secure accuracy by adapting machine learning algorithms to calculate the balancing scores. We analyze the effect of t-PA therapy for stroke patients using real-world data comprising 64,609 records with 362 variables and interpret the results. The results show that the cross-validation AUC of the proposed scheme is slightly lower than that of the original machine learning scheme; however, the proposed scheme provides the interpretable finding that t-PA therapy is effective for severe patients.
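The abstract's two-stage idea (a flexible learner for the balancing score, an interpretable learner on top) can be sketched in scikit-learn. This is a minimal illustration under assumed synthetic data, not the authors' implementation: a gradient-boosted classifier estimates a propensity-type balancing score, which then serves as a meta-feature (alongside treatment and their interaction) in a logistic regression whose coefficients remain readable.

```python
# Hedged sketch of "balancing scores as meta-features" (not the paper's code).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))  # synthetic covariates (stand-in for patient variables)

# Treatment assignment depends nonlinearly on covariates (confounding).
p_treat = 1.0 / (1.0 + np.exp(-(X[:, 0] + X[:, 1] ** 2 - 1.0)))
t = (rng.random(n) < p_treat).astype(int)

# Outcome: treatment effect varies with a severity-like covariate X[:, 0].
p_out = 1.0 / (1.0 + np.exp(-(0.5 * t * X[:, 0] - X[:, 1])))
y = (rng.random(n) < p_out).astype(int)

# Stage 1: flexible ML model estimates the balancing (propensity) score.
ps_model = GradientBoostingClassifier(random_state=0).fit(X, t)
ps = ps_model.predict_proba(X)[:, 1]

# Stage 2: interpretable linear predictor over meta-features:
# treatment, balancing score, and their interaction. Its coefficients
# can be inspected directly, unlike the stage-1 black box.
meta = np.column_stack([t, ps, t * ps])
combiner = LogisticRegression().fit(meta, y)
print(combiner.coef_)  # readable weights on [treatment, score, interaction]
```

Replacing the logistic regression with a shallow rule learner would mirror the paper's if-then-rule variant; the key point is that only the small meta-feature model needs to be interpreted.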

MeSH terms

  • Algorithms*
  • Humans
  • Machine Learning*