In this paper we propose a smoothed Q-learning algorithm for estimating optimal dynamic treatment regimes. In contrast to the Q-learning algorithm, for which inference is non-regular, we show that under the assumptions adopted in this paper the proposed smoothed Q-learning estimator is asymptotically normally distributed even when the Q-learning estimator is not, and that its asymptotic variance can be consistently estimated. As a result, inference based on the smoothed Q-learning estimator is standard. We derive the optimal smoothing parameter and propose a data-driven method for estimating it. The finite-sample properties of the smoothed Q-learning estimator are studied and compared with those of several existing estimators, including the Q-learning estimator, via an extensive simulation study. We illustrate the new method by analyzing data from the Clinical Antipsychotic Trials of Intervention Effectiveness-Alzheimer's Disease (CATIE-AD) study.
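To make the smoothing idea concrete, the following is a minimal Python sketch of the core device in a two-stage setting: the stage-1 pseudo-outcome in hard-max Q-learning contains a positive-part term max(0, u) whose kink at zero causes the non-regularity, and smoothing replaces the indicator 1{u > 0} with a smooth distribution function. The normal-CDF kernel, the pseudo-outcome form, and all variable names here are illustrative assumptions for exposition, not the authors' exact construction.

```python
import numpy as np
from scipy.stats import norm

def smoothed_pseudo_outcome(y1, h2_main, h2_contrast, beta2_main, beta2_contrast, h):
    """Stage-1 pseudo-outcome with the stage-2 hard max smoothed out.

    Hard-max Q-learning forms  y1 + h2_main'beta + max(0, h2_contrast'psi);
    the kink at h2_contrast'psi = 0 drives non-regular inference. The
    smoothed version replaces the indicator 1{u > 0} with the normal CDF
    Phi(u / h), so max(0, u) = u * 1{u > 0} becomes u * Phi(u / h).
    Here h > 0 is the smoothing parameter (bandwidth).
    """
    u = h2_contrast @ beta2_contrast       # estimated stage-2 treatment contrast
    smooth_plus = u * norm.cdf(u / h)      # smooth surrogate for max(0, u)
    return y1 + h2_main @ beta2_main + smooth_plus

# Toy usage with simulated stage-2 quantities (all values hypothetical):
rng = np.random.default_rng(0)
n = 5
y1 = rng.normal(size=n)
h2_main = rng.normal(size=(n, 2))          # stage-2 main-effect covariates
h2_contrast = rng.normal(size=(n, 2))      # stage-2 treatment-interaction covariates
pseudo = smoothed_pseudo_outcome(y1, h2_main, h2_contrast,
                                 np.array([0.5, -0.2]), np.array([0.3, 0.1]),
                                 h=0.5)
```

As h shrinks to zero the smooth surrogate approaches the hard max, so the choice of h trades off bias against the regularity of the resulting estimator; the paper derives the optimal h and a data-driven rule for estimating it.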
Keywords: Asymptotic normality; Exceptional law; Optimal smoothing parameter; Sequential randomization; Wald-type inference
Citation: Fan, Yanqin; He, Ming; Su, Liangjun; and Zhou, Xiao-Hua (2016). A smoothed Q-learning algorithm for estimating optimal dynamic treatment regime. Research Collection School of Economics, 1-45. Available at: http://ink.library.smu.edu.sg/soe_research/2044
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.