Publication Type
Journal Article
Version
acceptedVersion
Publication Date
11-2023
Abstract
MetaFormer, the abstracted architecture of Transformer, has been found to play a significant role in achieving competitive performance. In this paper, we further explore the capacity of MetaFormer by shifting our focus away from token mixer design: we introduce several baseline models under MetaFormer using the most basic or common mixers, and demonstrate their gratifying performance. We summarize our observations as follows: (1) MetaFormer ensures a solid lower bound of performance. By merely adopting identity mapping as the token mixer, the MetaFormer model, termed IdentityFormer, achieves >80% accuracy on ImageNet-1K. (2) MetaFormer works well with arbitrary token mixers. Even when the token mixer is specified as a random matrix that mixes tokens, the resulting model, RandFormer, yields an accuracy of >81%, outperforming IdentityFormer. MetaFormer's results can thus be counted on when new token mixers are adopted. (3) MetaFormer effortlessly offers state-of-the-art results. With just conventional token mixers dating back five years, the models instantiated from MetaFormer already beat the state of the art. (a) ConvFormer outperforms ConvNeXt. Taking the common depthwise separable convolutions as the token mixer, the model termed ConvFormer, which can be regarded as a pure CNN, outperforms the strong CNN model ConvNeXt. (b) CAFormer sets a new record on ImageNet-1K. By simply applying depthwise separable convolutions as the token mixer in the bottom stages and vanilla self-attention in the top stages, the resulting model, CAFormer, sets a new record on ImageNet-1K: it achieves an accuracy of 85.5% at 224 × 224 resolution under normal supervised training, without external data or distillation. In our expedition to probe MetaFormer, we also find that a new activation, StarReLU, reduces activation FLOPs by 71% compared with the commonly used GELU, yet achieves better performance. Specifically, StarReLU is a variant of Squared ReLU dedicated to alleviating distribution shift. We expect StarReLU to show great potential in MetaFormer-like models as well as other neural networks. Code and models are available at https://github.com/sail-sg/metaformer.
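To make the abstract's central ideas concrete, here is a minimal PyTorch sketch (illustrative only, not the exact code from https://github.com/sail-sg/metaformer) of a MetaFormer block with a pluggable token mixer, together with the StarReLU activation as defined above (s · ReLU(x)² + b with learnable scale and bias). The class and parameter names (MetaFormerBlock, RandomMixer, mlp_ratio) are assumptions chosen for clarity, not the repository's API.

```python
# Illustrative sketch of the MetaFormer abstraction and StarReLU,
# assuming PyTorch; names here are hypothetical, not the official API.
import torch
import torch.nn as nn


class StarReLU(nn.Module):
    """StarReLU(x) = s * relu(x)**2 + b, with learnable scale s and bias b.

    A variant of Squared ReLU; the affine terms are meant to counteract the
    distribution shift introduced by squaring. Per element it needs roughly
    4 FLOPs versus ~14 for the common tanh-based GELU approximation,
    consistent with the ~71% reduction claimed in the abstract.
    """

    def __init__(self, scale: float = 1.0, bias: float = 0.0):
        super().__init__()
        self.scale = nn.Parameter(torch.tensor(scale))
        self.bias = nn.Parameter(torch.tensor(bias))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.scale * torch.relu(x) ** 2 + self.bias


class RandomMixer(nn.Module):
    """Mixes tokens with a frozen random matrix (the RandFormer idea)."""

    def __init__(self, num_tokens: int):
        super().__init__()
        # Registered as a buffer: random, not trained.
        self.register_buffer("mix", torch.randn(num_tokens, num_tokens))

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, N, C)
        return torch.einsum("mn,bnc->bmc", self.mix, x)


class MetaFormerBlock(nn.Module):
    """Abstract MetaFormer block: norm -> token mixer -> residual,
    then norm -> channel MLP -> residual. The token mixer is a plug-in
    (identity, random matrix, depthwise convolution, self-attention, ...)."""

    def __init__(self, dim: int, token_mixer: nn.Module, mlp_ratio: int = 4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.token_mixer = token_mixer
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, mlp_ratio * dim),
            StarReLU(),
            nn.Linear(mlp_ratio * dim, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x + self.token_mixer(self.norm1(x))
        return x + self.mlp(self.norm2(x))


# IdentityFormer-style block: the token mixer is an identity mapping.
block = MetaFormerBlock(dim=64, token_mixer=nn.Identity())
# RandFormer-style block: a frozen random matrix mixes 196 tokens.
rand_block = MetaFormerBlock(dim=64, token_mixer=RandomMixer(num_tokens=196))
tokens = torch.randn(2, 196, 64)
print(block(tokens).shape, rand_block(tokens).shape)  # both (2, 196, 64)
```

The sketch only shows how swapping the token mixer instantiates the different baselines; the paper's full models additionally use staged downsampling and, for ConvFormer/CAFormer, depthwise separable convolution and attention mixers.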
Keywords
MetaFormer, Transformer, Neural Networks, Image Classification, Deep Learning
Discipline
Graphics and Human Computer Interfaces
Research Areas
Intelligent Systems and Optimization
Areas of Excellence
Digital transformation
Publication
IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume
46
Issue
2
First Page
896
Last Page
912
ISSN
0162-8828
Identifier
10.1109/TPAMI.2023.3329173
Publisher
Institute of Electrical and Electronics Engineers
Citation
YU, Weihao; SI, Chenyang; ZHOU, Pan; LUO, Mi; ZHOU, Yichen; FENG, Jiashi; YAN, Shuicheng; and WANG, Xinchao.
MetaFormer baselines for vision. (2023). IEEE Transactions on Pattern Analysis and Machine Intelligence. 46, (2), 896-912.
Available at: https://ink.library.smu.edu.sg/sis_research/9054
Copyright Owner and License
Authors
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Additional URL
https://doi.org/10.1109/TPAMI.2023.3329173