Although distributing a single learning task across distributed nodes has been studied intensively in previous research, multi-task learning over distributed networks remains largely unexplored, especially in decentralized settings. The challenge is that different tasks may have different optimal learning weights, while communication through the distributed network forces all tasks to converge to a unique classifier. In this paper, we present a novel algorithm that overcomes this challenge and enables learning multiple tasks simultaneously on a decentralized distributed network. Specifically, the learning framework consists of two phases: (i) in the first phase, multi-task information is shared within each node; (ii) in the second phase, communication between nodes drives the whole network to converge to a common minimizer. Theoretical analysis shows that our algorithm achieves a (Formula presented.) regret bound compared with the best classifier in hindsight, which is further validated by experiments on both synthetic and real-world datasets.
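The two-phase framework sketched in the abstract can be illustrated with a short example. The Python snippet below is a minimal, hypothetical rendering and not the authors' actual update rules: the hinge-loss subgradient, the mean-regularized coupling between tasks in phase (i), the doubly stochastic mixing matrix used for the consensus step in phase (ii), and the function name decentralized_multitask_step are all assumptions made purely for illustration.

```python
import numpy as np

def decentralized_multitask_step(W, X, y, A, eta=0.1, lam=0.1):
    """One illustrative round of the two-phase update on every node (a sketch).

    W : (n_nodes, n_tasks, d) current per-node, per-task weight vectors
    X : (n_nodes, n_tasks, d) feature vectors received in this round
    y : (n_nodes, n_tasks)    labels in {-1, +1}
    A : (n_nodes, n_nodes)    doubly stochastic mixing matrix of the network
    """
    n_nodes, n_tasks, d = W.shape
    W_new = W.copy()

    # Phase (i): share multi-task information within each node.
    for i in range(n_nodes):
        mean_w = W[i].mean(axis=0)  # information pooled across this node's tasks
        for k in range(n_tasks):
            margin = y[i, k] * (W[i, k] @ X[i, k])
            # Hinge-loss subgradient (assumed loss for this sketch).
            grad = -y[i, k] * X[i, k] if margin < 1 else np.zeros(d)
            # Pull each task's weights toward the node-level mean.
            grad = grad + lam * (W[i, k] - mean_w)
            W_new[i, k] = W[i, k] - eta * grad

    # Phase (ii): communicate between nodes via consensus averaging over neighbours.
    return np.einsum('ij,jkl->ikl', A, W_new)

# Tiny usage example: 4 nodes, 3 tasks, 5 features, fully connected mixing matrix.
rng = np.random.default_rng(0)
n_nodes, n_tasks, d = 4, 3, 5
W = np.zeros((n_nodes, n_tasks, d))
A = np.full((n_nodes, n_nodes), 1.0 / n_nodes)   # doubly stochastic
for t in range(100):
    X = rng.normal(size=(n_nodes, n_tasks, d))
    y = np.sign(X[..., 0] + 1e-12)               # toy labels in {-1, +1}
    W = decentralized_multitask_step(W, X, y, A)
```

In this toy version, the mixing matrix plays the role of the inter-node communication that drives all nodes toward a common minimizer; the actual regret analysis and update rules are given in the paper.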
Decentralized distributed learning, Multi-task learning, Online learning, Classification (of information), Learning systems, Distributed learning, Distributed networks, Learning frameworks, Multiple tasks, Multitask learning, Novel algorithm, Real-world datasets, E-learning
Computer Sciences | Online and Distance Education | Software Engineering
Springer Verlag (Germany)
ZHANG, Chi; ZHAO, Peilin; HAO, Shuji; SOH, Yeng Chai; LEE, Bu Sung; MIAO, Chunyan; and HOI, Steven C. H.
Distributed multi-task classification: A decentralized online learning approach. (2017). Machine Learning. 1-21. Research Collection School Of Information Systems.
Available at: http://ink.library.smu.edu.sg/sis_research/3841
This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License.