Causal inference and interpretable machine learning for personalised medicine

Parbhoo, Sonali. Causal inference and interpretable machine learning for personalised medicine. 2019, Doctoral Thesis, University of Basel, Faculty of Science.

Available under License CC BY-NC-ND (Attribution-NonCommercial-NoDerivatives).


Official URL: http://edoc.unibas.ch/diss/DissB_71348

In this thesis, we discuss the importance of causal knowledge in healthcare for tailoring treatments to a patient's needs. We propose three different causal models for reasoning about the effects of medical interventions on patients with HIV and sepsis, based on observational data. Both application areas are challenging because of patient heterogeneity and the presence of confounding that influences patient outcomes. Our first contribution is a treatment policy mixture model that combines non-parametric, kernel-based learning with model-based reinforcement learning to reason about a series of treatments and their effects. Each of these methods has its own strengths: non-parametric methods accurately predict treatment effects where there are overlapping patient instances or where data is abundant, while model-based reinforcement learning generalises better in outlier situations by learning a belief-state representation of confounding. The overall policy mixture model learns a partition of the space of heterogeneous patients so that we can personalise treatments accordingly. Our second contribution incorporates knowledge from kernel-based reasoning directly into a reinforcement learning model by learning a combined belief-state representation. In doing so, we can use the model to simulate counterfactual scenarios, reasoning about what would happen to a patient if we intervened in a particular way and how their specific outcomes would change. As a result, we may tailor therapies to patient-specific scenarios.
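The mixing idea behind the first contribution can be illustrated with a minimal sketch: trust a nearest-neighbour (kernel-based) policy where the new patient falls in a dense region of previously observed patients, and fall back on a model-based policy otherwise. All names here, the mean-distance threshold `tau`, and the fallback `model_policy` are illustrative assumptions, not the thesis's actual formulation.

```python
import numpy as np

def kernel_policy(x, X_train, a_train, k=3):
    """Non-parametric policy: recommend the majority action among the
    k nearest recorded patients; also return the mean neighbour distance
    as a crude measure of local data density."""
    dists = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(dists)[:k]
    actions, counts = np.unique(a_train[idx], return_counts=True)
    return actions[np.argmax(counts)], dists[idx].mean()

def mixture_policy(x, X_train, a_train, model_policy, k=3, tau=1.0):
    """Defer to the kernel policy in dense regions of patient space
    (mean neighbour distance below tau); otherwise use the model-based
    policy, which generalises better to outlier patients."""
    a_knn, mean_dist = kernel_policy(x, X_train, a_train, k)
    if mean_dist < tau:
        return a_knn
    return model_policy(x)
```

In this toy form the partition of patient space is a hard distance threshold; a learned, soft partition (as the abstract describes) would replace the `if` with a data-dependent weighting of the two policies.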
Our third contribution is a reformulation of the information bottleneck problem for learning an interpretable, low-dimensional representation of confounding for medical decision-making. The approach uses the relevance of information to perform a sufficient reduction of confounding. Based on this reduction, we learn equivalence classes among groups of patients, such that we may transfer knowledge to patients with incomplete covariate information at test time. By conditioning on the sufficient statistic we can accurately infer treatment effects on both a population and subgroup level. Our final contribution is the development of a novel regularisation strategy that can be applied to deep machine learning models to enforce clinical interpretability. We specifically train deep time-series models such that their predictions have high accuracy while being closely modelled by small decision trees that can be audited easily by medical experts. Broadly, our tree-based explanations can be used to provide additional context in scenarios where reasoning about treatment effects may otherwise be difficult. Importantly, each of the models we present is an attempt to bring about more understanding in medical applications to inform better decision-making overall.
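The tree-regularisation idea in the final contribution can be sketched as a penalty measuring how poorly a model's predictions are mimicked by a small decision tree. For brevity this sketch uses a depth-1 tree (a decision stump) on binary predictions, and takes the stump's disagreement rate as the penalty; the function names and this choice of penalty are illustrative assumptions, and a practical training procedure would need a differentiable surrogate of such a penalty.

```python
import numpy as np

def fit_stump(X, y_hat):
    """Fit a depth-1 decision tree (stump) that mimics the model's
    binary predictions y_hat. Returns (error, feature, threshold,
    left_label, right_label) for the best axis-aligned split."""
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            mask = X[:, j] <= t
            if mask.all() or not mask.any():
                continue  # degenerate split
            left = round(y_hat[mask].mean())
            right = round(y_hat[~mask].mean())
            pred = np.where(mask, left, right)
            err = np.mean(pred != y_hat)
            if best is None or err < best[0]:
                best = (err, j, t, left, right)
    return best

def tree_regulariser(X, y_hat):
    """Proxy penalty: the disagreement rate between the model's
    predictions and the best small tree mimicking them. Added to the
    training loss, it pushes the model towards decision boundaries a
    clinician can audit via the fitted tree."""
    err, *_ = fit_stump(X, y_hat)
    return err
```

A model whose predictions follow a single threshold incurs zero penalty, whereas predictions that no stump can reproduce are penalised, which is the intuition behind preferring deep models that small trees can audit.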
Advisors: Roth, Volker and Beerenwinkel, Niko
Faculties and Departments: 05 Faculty of Science > Departement Mathematik und Informatik > Informatik > Biomedical Data Analysis (Roth)
UniBasel Contributors: Parbhoo, Sonali and Roth, Volker
Item Type: Thesis
Thesis Subtype: Doctoral Thesis
Thesis no: 71348
Thesis status: Complete
Number of Pages: 1 online resource (xviii, 128 pages)
Last Modified: 06 Sep 2019 04:30
Deposited On: 05 Sep 2019 08:52