With the proliferation of sensors, such as accelerometers, in mobile devices, activity and motion tracking has become a viable technology for understanding and creating an engaging user experience. This paper proposes a fast adaptation and learning scheme for activity tracking policies when user statistics are unknown a priori, vary with time, and are inconsistent across users. In our stochastic optimization, user activities must be synchronized with a backend under a cellular data limit to avoid overcharges from cellular operators. The mobile device is charged intermittently, using wireless or wired charging, to receive the energy required for transmission and sensing operations. Firstly, we propose an activity tracking policy by formulating a stochastic optimization as a constrained Markov decision process (CMDP). Secondly, we prove that the optimal policy of the CMDP has a threshold structure, using a Lagrangian relaxation approach and the submodularity concept. We accordingly present a fast Q-learning algorithm that exploits this policy structure to improve the convergence speed over that of conventional Q-learning. Finally, simulation examples are presented to support the theoretical findings of this paper.
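The structured Q-learning idea summarized above can be illustrated with a minimal sketch. Everything below is a hypothetical toy model, not the paper's actual CMDP: the state is the backlog of unsynchronized activity samples, the actions are "wait" or "transmit", and the costs and arrival probability are invented for illustration. The sketch learns Q-values and then projects the greedy policy onto the threshold class that the paper proves optimal.

```python
import random

# Hypothetical toy model (illustrative assumptions, not the paper's setup):
# states 0..N-1 count buffered activity samples; action 1 transmits (fixed
# cost, clears the buffer), action 0 waits (holding cost grows with backlog).
N_STATES, ACTIONS = 6, (0, 1)
TX_COST, HOLD_COST = 2.0, 1.0

def step(state, action, rng):
    """One transition of the toy chain; a new sample arrives w.p. 0.5."""
    arrival = rng.random() < 0.5
    if action == 1:
        return (1 if arrival else 0), TX_COST            # buffer cleared
    return min(state + arrival, N_STATES - 1), HOLD_COST * state

def q_learning(steps=5000, gamma=0.9, eps=0.1, seed=0):
    """Plain tabular Q-learning (cost minimization) on the toy chain."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    state = 0
    for t in range(steps):
        a = (rng.choice(ACTIONS) if rng.random() < eps
             else min(ACTIONS, key=lambda x: q[state][x]))
        nxt, cost = step(state, a, rng)
        alpha = 1.0 / (1 + t // 100)                     # decaying step size
        q[state][a] += alpha * (cost + gamma * min(q[nxt]) - q[state][a])
        state = nxt
    return q

def threshold_projection(q):
    """Exploit the threshold structure: transmit for every state at or above
    the first state where transmitting is greedy-optimal."""
    greedy = [0 if q[s][0] <= q[s][1] else 1 for s in range(N_STATES)]
    thr = next((s for s, a in enumerate(greedy) if a == 1), N_STATES)
    return [1 if s >= thr else 0 for s in range(N_STATES)]

policy = threshold_projection(q_learning())
```

Restricting the learned policy to the threshold class is what speeds up convergence: instead of estimating an arbitrary state-to-action map, the learner effectively searches over a single threshold parameter.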
Activity tracking, fast adaptation, Internet of Things, Markov decision processes, wireless charging
Computer Sciences | Software Engineering
Software and Cyber-Physical Systems
IEEE Transactions on Vehicular Technology
Institute of Electrical and Electronics Engineers (IEEE)
ALSHEIKH, Mohammad Abu; NIYATO, Dusit; LIN, Shaowei; TAN, Hwee-Pink; and KIM, Dong In.
Fast adaptation of activity sensing policies in mobile devices. (2017). IEEE Transactions on Vehicular Technology. 66, (7), 1-14. Research Collection School Of Information Systems.
Available at: http://ink.library.smu.edu.sg/sis_research/3858
Creative Commons License
This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License.