Publication Type
Conference Proceeding Article
Version
publishedVersion
Publication Date
6-2023
Abstract
Weakly Supervised Video Anomaly Detection (WSVAD) is challenging because the binary anomaly label is given only at the video level, while the output requires snippet-level predictions. Hence, Multiple Instance Learning (MIL) is the prevailing approach in WSVAD. However, MIL is notorious for producing many false alarms, because the snippet-level detector is easily biased towards abnormal snippets with simple context, confused by normal snippets that share the same bias, and prone to missing anomalies with different patterns. To this end, we propose a new MIL framework, Unbiased MIL (UMIL), to learn unbiased anomaly features that improve WSVAD. At each MIL training iteration, we use the current detector to divide the samples into two groups with different context biases: the most confident abnormal/normal snippets and the remaining ambiguous ones. Then, by seeking the features that are invariant across the two groups, we can remove the variant context biases.
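To make the training loop described in the abstract concrete, below is a minimal, hypothetical PyTorch-style sketch of one such iteration. It is not the authors' implementation: the confidence threshold `conf_thresh`, the detector interface, and the REx-style variance penalty used as a stand-in for "seeking invariant features across the two sample groups" are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def umil_style_step(detector, feats, video_labels, conf_thresh=0.9, inv_weight=1.0):
    """One illustrative training iteration (not the paper's actual code).

    feats:        (N, D) snippet features
    video_labels: (N,)   video-level anomaly labels broadcast to snippets (0 or 1)
    """
    # Snippet-level anomaly probabilities from the current detector.
    probs = torch.sigmoid(detector(feats)).squeeze(-1)

    # Group 1: snippets the current detector is most confident about.
    # Group 2: the remaining ambiguous snippets.
    confident = (probs > conf_thresh) | (probs < 1.0 - conf_thresh)
    ambiguous = ~confident

    risks = []
    if confident.any():
        # Confident snippets take the detector's own hard pseudo-labels.
        pseudo = (probs[confident] > 0.5).float().detach()
        risks.append(F.binary_cross_entropy(probs[confident], pseudo))
    if ambiguous.any():
        # Ambiguous snippets fall back to the (weak) video-level label.
        risks.append(F.binary_cross_entropy(probs[ambiguous], video_labels[ambiguous].float()))

    risks = torch.stack(risks)
    # Penalising the variance of the per-group risks (a REx-style surrogate) stands in
    # for learning features that stay predictive across both sample groups.
    penalty = risks.var(unbiased=False) if risks.numel() > 1 else risks.sum() * 0.0
    return risks.mean() + inv_weight * penalty
```

A real implementation would also update the feature backbone and operate on per-video bags with the usual MIL ranking objective, rather than the flat snippet batch assumed here.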
Discipline
Databases and Information Systems | Graphics and Human Computer Interfaces
Research Areas
Data Science and Engineering
Publication
Proceedings of the 2023 Conference on Computer Vision and Pattern Recognition, Vancouver, Canada, 2023 June 18-22
First Page
8022
Last Page
8031
Publisher
CVPR
City or Country
Vancouver
Citation
LYU, Hui; YUE, Zhongqi; SUN, Qianru; LUO, Bin; CUI, Zhen; and ZHANG, Hanwang.
Unbiased multiple instance learning for weakly supervised video anomaly detection. (2023). Proceedings of the 2023 Conference on Computer Vision and Pattern Recognition, Vancouver, Canada, 2023 June 18-22. 8022-8031.
Available at: https://ink.library.smu.edu.sg/sis_research/8101
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Included in
Databases and Information Systems Commons, Graphics and Human Computer Interfaces Commons