Active Crowdsourcing for Annotation
Conference Proceeding Article
Crowdsourcing has shown great potential for obtaining large-scale, inexpensive labels for a variety of tasks. However, obtaining reliable labels is challenging for several reasons, such as noisy annotators and limited budgets. State-of-the-art approaches either suffer in noisy scenarios or rely on unlimited resources to acquire reliable labels. In this article, we adopt the learning-with-expert-advice framework (where an expert corresponds to a worker in crowdsourcing) to robustly infer accurate labels by considering the reliability of each worker. However, to accurately estimate the reliability of each worker, traditional learning with expert advice consults external oracles (i.e., domain experts) for the true label of every instance. To reduce the cost of consultation, we propose two active learning approaches: one based on the prediction margin and one based on the weighted difference of the workers' advice. Meanwhile, to address the problem of a limited annotation budget, we propose a reliability-based assignment approach that actively decides which worker should annotate the next instance based on each worker's cumulative performance. Experimental results on both real and simulated datasets show that our algorithms achieve robust and promising performance in both normal and noisy scenarios under a limited budget.
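The abstract does not spell out the algorithmic details, but the three ingredients it names (weighted-majority inference under expert advice, margin-based oracle queries, and reliability-based worker assignment) can be illustrated with a minimal sketch. The snippet below is a hypothetical illustration, not the authors' exact algorithm: the Hedge-style multiplicative update, the budget parameter, the margin threshold, and all variable names are assumptions introduced for exposition.

```python
# A minimal, hypothetical sketch of the ideas the abstract describes:
# weighted-majority label inference, margin-based oracle consultation,
# and reliability-based worker assignment under a per-instance budget.
import numpy as np

rng = np.random.default_rng(0)
n_workers, n_rounds, budget_k, margin_tau, eta = 8, 300, 3, 0.3, 0.5
true_reliability = rng.uniform(0.5, 0.95, n_workers)  # hidden from the learner
weights = np.ones(n_workers) / n_workers              # estimated reliability
oracle_queries = 0

for t in range(n_rounds):
    y = rng.choice([-1, 1])                           # latent true label
    # Reliability-based assignment: with a budget of k annotations per
    # instance, ask only the k workers with the highest current weights.
    assigned = np.argsort(weights)[-budget_k:]
    votes = np.where(rng.random(budget_k) < true_reliability[assigned], y, -y)
    # Learning with expert advice: weighted-majority vote over workers.
    score = float(np.dot(weights[assigned], votes))
    y_hat = 1 if score >= 0 else -1
    margin = abs(score) / weights[assigned].sum()
    # Margin-based active learning: consult the oracle only when the
    # weighted vote is nearly tied, then penalize erring workers with a
    # Hedge-style multiplicative update.
    if margin < margin_tau:
        oracle_queries += 1
        losses = (votes != y).astype(float)
        weights[assigned] *= np.exp(-eta * losses)
        weights /= weights.sum()

print(f"oracle consultations: {oracle_queries}/{n_rounds}")
print("weight/reliability correlation:",
      round(float(np.corrcoef(weights, true_reliability)[0, 1]), 2))
```

Run as-is, the sketch queries the oracle on only a fraction of the rounds, and the learned weights correlate with the hidden worker reliabilities, which is the qualitative behavior the abstract claims.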
Active Learning, Crowdsourcing, Online Learning
Databases and Information Systems
Data Management and Analytics
IEEE/WIC/ACM Web Intelligence 2015 and IEEE/WIC/ACM Intelligent Agent Technology 2015 (WI-IAT 2015): Proceedings: December 6-8, Singapore
Singapore
HAO, Shuji; MIAO, Chunyan; HOI, Steven C. H.; and ZHAO, Peilin.
Active Crowdsourcing for Annotation. (2015). IEEE/WIC/ACM Web Intelligence 2015 and IEEE/WIC/ACM Intelligent Agent Technology 2015 (WI-IAT 2015): Proceedings: December 6-8, Singapore. 1-8. Research Collection School Of Information Systems.
Available at: http://ink.library.smu.edu.sg/sis_research/3173