A Reinforcement Learning Framework for Dynamic Mediation Analysis

Bibliographic Details
Title: A Reinforcement Learning Framework for Dynamic Mediation Analysis
Authors: Ge, Lin; Wang, Jitao; Shi, Chengchun; Wu, Zhenke; Song, Rui
Publication Status: Preprint
Publisher Information: arXiv, 2023.
Publication Year: 2023
Keywords: FOS: Computer and information sciences, Computer Science - Machine Learning, Machine Learning (stat.ML), Machine Learning (cs.LG), Methodology (stat.ME), Statistics - Machine Learning, Statistics - Methodology, 01 natural sciences, 0101 mathematics, 03 medical and health sciences, 0302 clinical medicine
Funding: DMS-2003637; R01 NR013658 (to JW & ZW); EP/W014971/1; R01 MH101459 (to ZW)
Description: Mediation analysis learns the causal effect transmitted via mediator variables between treatments and outcomes and receives increasing attention in various scientific domains to elucidate causal relations. Most existing works focus on point-exposure studies where each subject only receives one treatment at a single time point. However, there are a number of applications (e.g., mobile health) where the treatments are sequentially assigned over time and the dynamic mediation effects are of primary interest. Proposing a reinforcement learning (RL) framework, we are the first to evaluate dynamic mediation effects in settings with infinite horizons. We decompose the average treatment effect into an immediate direct effect, an immediate mediation effect, a delayed direct effect, and a delayed mediation effect. Upon the identification of each effect component, we further develop robust and semi-parametrically efficient estimators under the RL framework to infer these causal effects. The superior performance of the proposed method is demonstrated through extensive numerical studies, theoretical results, and an analysis of a mobile health dataset.
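As a reading aid, the decomposition named in the abstract can be sketched in LaTeX as below; the symbols ATE_t, IDE_t, IME_t, DDE_t, and DME_t are assumed shorthand for illustration, not the paper's exact notation or definitions.

% Illustrative sketch only: the average treatment effect at time t is
% split into the four components described in the abstract.
\[
  \mathrm{ATE}_t
  = \underbrace{\mathrm{IDE}_t}_{\text{immediate direct}}
  + \underbrace{\mathrm{IME}_t}_{\text{immediate mediation}}
  + \underbrace{\mathrm{DDE}_t}_{\text{delayed direct}}
  + \underbrace{\mathrm{DME}_t}_{\text{delayed mediation}}
\]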
Document Type: Article
DOI: 10.48550/arxiv.2301.13348
Access URL: http://arxiv.org/abs/2301.13348
http://eprints.lse.ac.uk/120776/
Rights: arXiv Non-Exclusive Distribution
Document ID: edsair.doi.dedup.....869b9c102129fd6d6e1db326800fcff3
Database: OpenAIRE