Publication Type
Conference Proceeding Article
Version
acceptedVersion
Publication Date
12-2021
Abstract
One of the main challenges in real-world reinforcement learning is to learn successfully from limited training samples. We show that in certain settings, the available data can be dramatically increased through a form of multi-task learning, by exploiting a permutation-invariance property of the tasks. We provide a theoretical performance bound for the gain in sample efficiency under this setting. This motivates a new approach to multi-task learning, which involves the design of an appropriate neural network architecture and a prioritized task-sampling strategy. We demonstrate empirically the effectiveness of the proposed approach on two real-world sequential resource allocation tasks where this invariance property occurs: financial portfolio optimization and meta federated learning.
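To illustrate the kind of architecture the abstract refers to, the following is a minimal, hypothetical PyTorch sketch, not the architecture used in the paper: a shared per-resource encoder combined with mean pooling yields allocation weights that are equivariant to reordering the resources (e.g., assets in a portfolio or clients in federated learning). All class, function, and parameter names below are illustrative assumptions.

import torch
import torch.nn as nn

class PermutationEquivariantAllocator(nn.Module):
    """Illustrative sketch (not the paper's exact architecture): a policy network
    whose allocation weights respect permutations of the resources. Each resource
    is scored by a shared encoder conditioned on a permutation-invariant pooled
    summary, so relabeling the resources simply permutes the output weights."""

    def __init__(self, feature_dim: int, hidden_dim: int = 64):
        super().__init__()
        # Shared encoder applied independently to every resource's features.
        self.item_encoder = nn.Sequential(
            nn.Linear(feature_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # Scorer sees each item's embedding plus the mean-pooled context
        # and emits one logit per resource.
        self.scorer = nn.Linear(2 * hidden_dim, 1)

    def forward(self, item_features: torch.Tensor) -> torch.Tensor:
        # item_features: (batch, num_items, feature_dim)
        h = self.item_encoder(item_features)                    # (B, N, H)
        context = h.mean(dim=1, keepdim=True).expand_as(h)      # invariant summary
        logits = self.scorer(torch.cat([h, context], dim=-1))   # (B, N, 1)
        # Softmax over items yields allocation weights summing to one.
        return torch.softmax(logits.squeeze(-1), dim=-1)

if __name__ == "__main__":
    net = PermutationEquivariantAllocator(feature_dim=8)
    x = torch.randn(2, 5, 8)      # 2 states, 5 resources, 8 features each
    w = net(x)
    perm = torch.randperm(5)
    # Permuting the resources permutes the weights: equivariance check.
    print(torch.allclose(net(x[:, perm]), w[:, perm], atol=1e-6))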
Keywords
training, conferences, neural networks, reinforcement learning, multitasking, collaborative work, resource management
Discipline
Numerical Analysis and Scientific Computing
Research Areas
Intelligent Systems and Optimization
Areas of Excellence
Digital transformation
Publication
Proceedings of the 60th IEEE Conference on Decision and Control, CDC 2021, Austin, TX, USA, December 14-17
First Page
2270
Last Page
2275
ISBN
9781665436595
Identifier
10.1109/CDC45484.2021.9683491
Publisher
IEEE
City or Country
Piscataway, NJ
Citation
CAI, Desmond; LIM, Shiau Hong; and WYNTER, Laura. Efficient reinforcement learning in resource allocation problems through permutation invariant multi-task learning. (2021). Proceedings of the 60th IEEE Conference on Decision and Control, CDC 2021, Austin, TX, USA, December 14-17. 2270-2275.
Available at: https://ink.library.smu.edu.sg/sis_research/10363
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Additional URL
https://doi.org/10.1109/CDC45484.2021.9683491