Publication Type
Conference Proceeding Article
Version
publishedVersion
Publication Date
6-2022
Abstract
Recent Vision Transformer (ViT) models have demonstrated encouraging results across various computer vision tasks, thanks to their competence in modeling long-range dependencies of image patches or tokens via self-attention. These models, however, usually assign similar receptive fields to each token feature within each layer. Such a constraint inevitably limits the ability of each self-attention layer to capture multi-scale features, thereby leading to performance degradation in handling images with multiple objects of different scales. To address this issue, we propose a novel and generic strategy, termed shunted self-attention (SSA), that allows ViTs to model the attentions at hybrid scales per attention layer. The key idea of SSA is to inject heterogeneous receptive field sizes into tokens: before computing the self-attention matrix, it selectively merges tokens to represent larger object features while keeping certain tokens to preserve fine-grained features. This novel merging scheme enables the self-attention to learn relationships between objects of different sizes, and simultaneously reduces the token number and the computational cost. Extensive experiments across various tasks demonstrate the superiority of SSA. Specifically, the SSA-based transformer achieves 84.0% Top-1 accuracy and outperforms the state-of-the-art Focal Transformer on ImageNet with only half the model size and computation cost, and surpasses Focal Transformer by 1.3 mAP on COCO and 2.9 mIoU on ADE20K under similar parameter and computation costs.
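For intuition, the token-merging idea described in the abstract can be illustrated in a few lines of PyTorch. The sketch below is a hypothetical simplification, not the authors' released code: the class name, the (1, 2) merge ratios, the strided-convolution token merger, and the even split of heads across scales are all assumptions made for illustration.

```python
import torch
import torch.nn as nn


class ShuntedAttentionSketch(nn.Module):
    """Hypothetical minimal sketch of shunted self-attention (SSA).

    Heads are split into groups; each group attends over keys/values whose
    tokens were merged (spatially downsampled) at a different rate, so a
    single layer mixes coarse and fine receptive fields. Details here are
    illustrative choices, not the paper's exact configuration.
    """

    def __init__(self, dim=64, num_heads=4, merge_ratios=(1, 2)):
        super().__init__()
        assert dim % num_heads == 0 and num_heads % len(merge_ratios) == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.scale = self.head_dim ** -0.5
        self.heads_per_group = num_heads // len(merge_ratios)
        self.merge_ratios = merge_ratios
        self.q = nn.Linear(dim, dim)
        # One token merger per scale: a strided conv collapses r x r
        # neighbouring tokens into one; r = 1 keeps full resolution.
        self.mergers = nn.ModuleList(
            nn.Conv2d(dim, dim, kernel_size=r, stride=r) if r > 1 else nn.Identity()
            for r in merge_ratios
        )
        self.kv = nn.ModuleList(nn.Linear(dim, 2 * dim) for _ in merge_ratios)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x, H, W):
        # x: (B, N, C) tokens from an H x W patch grid, N = H * W.
        B, N, C = x.shape
        q = self.q(x).reshape(B, N, self.num_heads, self.head_dim).transpose(1, 2)
        grid = x.transpose(1, 2).reshape(B, C, H, W)
        outs = []
        for i, r in enumerate(self.merge_ratios):
            # Merge tokens at ratio r, then project the merged tokens to K, V.
            merged = self.mergers[i](grid).flatten(2).transpose(1, 2)
            kv = self.kv[i](merged).reshape(B, -1, 2, self.num_heads, self.head_dim)
            k, v = kv.permute(2, 0, 3, 1, 4)  # each: (B, heads, N_r, head_dim)
            g = slice(i * self.heads_per_group, (i + 1) * self.heads_per_group)
            attn = (q[:, g] @ k[:, g].transpose(-2, -1)) * self.scale
            outs.append(attn.softmax(dim=-1) @ v[:, g])
        out = torch.cat(outs, dim=1).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)


x = torch.randn(2, 7 * 7, 64)              # 49 tokens per image, 7 x 7 grid
y = ShuntedAttentionSketch()(x, H=7, W=7)
print(y.shape)                             # torch.Size([2, 49, 64])
```

Splitting the heads across merge ratios is what lets a single layer attend at several scales at once, while the coarser key/value sets shrink the attention matrices and hence the computational cost, in line with the abstract.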
Keywords
Computation costs, Deep learning architecture and technique, Efficient learning, Efficient learning and inference, Image patches, Learning architectures, Learning techniques, Multi-scales, Receptive fields, Transformer modeling
Discipline
Databases and Information Systems | Information Security
Research Areas
Information Systems and Management
Publication
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, New Orleans, June 19-24, 2022
First Page
10843
Last Page
10852
ISBN
9781665469463
Identifier
10.1109/CVPR52688.2022.01058
Publisher
IEEE
City or Country
New Jersey
Citation
REN, Sucheng; ZHOU, Daquan; HE, Shengfeng; FENG, Jiashi; and WANG, Xinchao.
Shunted self-attention via multi-scale token aggregation. (2022). Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, New Orleans, June 19-24. 10843-10852.
Available at: https://ink.library.smu.edu.sg/sis_research/8530
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.