Publication Type
Conference Proceeding Article
Book Title/Conference/Journal
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, 2019 July 28 - August 2
Year
7-2019
Abstract
Many state-of-the-art neural models for NLP are heavily parameterized and thus memory inefficient. This paper proposes a series of lightweight and memory-efficient neural architectures for a potpourri of natural language processing (NLP) tasks. To this end, our models exploit computation using Quaternion algebra and hypercomplex spaces, enabling not only expressive inter-component interactions but also significantly (75%) reduced parameter size due to lesser degrees of freedom in the Hamilton product. We propose Quaternion variants of models, giving rise to new architectures such as the Quaternion Attention Model and the Quaternion Transformer. Extensive experiments on a battery of NLP tasks demonstrate the utility of the proposed Quaternion-inspired models, enabling up to 75% reduction in parameter size without significant loss in performance.
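The 75% figure follows from weight sharing in the Hamilton product: a quaternion linear map between n and m quaternion units needs one weight quaternion (4 real numbers) per input-output pair, i.e. 4nm real parameters, whereas a real-valued dense layer of the same width (4n to 4m) needs 16nm. The sketch below is illustrative only, not the authors' implementation; the function names and layer sizes are hypothetical.

    import numpy as np

    def hamilton_product(q, p):
        # Hamilton product of quaternions q = r1 + x1*i + y1*j + z1*k
        # and p = r2 + x2*i + y2*j + z2*k.
        r1, x1, y1, z1 = q
        r2, x2, y2, z2 = p
        return np.array([
            r1*r2 - x1*x2 - y1*y2 - z1*z2,   # real part
            r1*x2 + x1*r2 + y1*z2 - z1*y2,   # i component
            r1*y2 - x1*z2 + y1*r2 + z1*x2,   # j component
            r1*z2 + x1*y2 - y1*x2 + z1*r2,   # k component
        ])

    # Parameter count for a layer mapping n -> m quaternion units
    # (real-valued hidden sizes 4n -> 4m); sizes are hypothetical.
    n, m = 128, 128
    real_params = (4 * n) * (4 * m)   # dense real-valued layer
    quat_params = 4 * n * m           # one weight quaternion per (input, output) pair
    print(quat_params / real_params)  # 0.25, i.e. a 75% reduction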
Disciplines
OS and Networks | Programming Languages and Compilers
Subject(s)
Applied or Integration/Application Scholarship
Publisher
ACL
DOI
10.18653/v1/P19-1145
Version
publishedVersion
Language
eng
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Format
application/pdf
Citation
TAY, Yi; ZHANG, Aston; LUU, Anh Tuan; RAO, Jinfeng; ZHANG, Shuai; WANG, Shuohang; FU, Jie; and HUI, Siu Cheung.
Lightweight and efficient neural natural language processing with quaternion networks. (2019). Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, 2019 July 28 - August 2. 1494-1503.
Available at: https://ink.library.smu.edu.sg/scis_studentpub/2
Additional URL
https://aclanthology.org/P19-1145/