Publication Type
Conference Proceeding Article
Version
publishedVersion
Publication Date
8-2018
Abstract
In this paper, we study how we can improve a deep learning approach to textual entailment by incorporating lexical entailment relations from WordNet. Our idea is to embed the lexical entailment knowledge contained in WordNet in specially-learned word vectors, which we call "entailment vectors." We present a standard neural network model and a novel set-theoretic model to learn these entailment vectors from word pairs with known lexical entailment relations derived from WordNet. We further incorporate these entailment vectors into a decomposable attention model for textual entailment and evaluate the model on the SICK and SNLI datasets. We find that using these special entailment word vectors, we can significantly improve the performance of textual entailment compared with a baseline that uses only standard word2vec vectors. The final performance of our model is close to or above the state of the art, but our method does not rely on any manually crafted rules or extensive syntactic features.
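The abstract mentions training on word pairs with known lexical entailment relations derived from WordNet. As a hedged illustration only (the paper's exact extraction procedure is not given here), the following sketch shows one plausible way to harvest such pairs using NLTK's WordNet interface, treating a word as lexically entailing the lemmas of its direct hypernym synsets; the function name and parameters are hypothetical.

```python
# Illustrative sketch (not the authors' code): harvest (word, entailed_word)
# pairs from WordNet via direct hypernym links, e.g. "dog" entails "canine".
# Pairs like these are the kind of supervision the abstract describes for
# learning "entailment vectors".
import nltk
from nltk.corpus import wordnet as wn

nltk.download("wordnet", quiet=True)

def hypernym_pairs(max_pairs=100_000):
    """Yield unique (word, entailed_word) pairs from direct hypernym links."""
    seen = set()
    for synset in wn.all_synsets("n"):        # noun synsets only
        for hyper in synset.hypernyms():      # direct hypernyms
            for w in synset.lemma_names():
                for h in hyper.lemma_names():
                    pair = (w.lower(), h.lower())
                    if w != h and pair not in seen:
                        seen.add(pair)
                        yield pair
                        if len(seen) >= max_pairs:
                            return

if __name__ == "__main__":
    for word, entailed in hypernym_pairs(5):
        print(f"{word} => {entailed}")
```

Such pairs would serve as positive training examples for the entailment-vector models; negatives could be sampled from unrelated word pairs, though that choice is likewise an assumption here.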
Discipline
Databases and Information Systems | Theory and Algorithms
Research Areas
Data Science and Engineering
Publication
Proceedings of the 27th International Conference on Computational Linguistics: Santa Fe, New Mexico, August 20-26
First Page
270
Last Page
281
Publisher
ACL
City or Country
New Brunswick, NJ
Citation
LAN, Yunshi and JIANG, Jing.
Embedding WordNet knowledge for textual entailment. (2018). Proceedings of the 27th International Conference on Computational Linguistics: Santa Fe, New Mexico, August 20-26. 270-281.
Available at: https://ink.library.smu.edu.sg/sis_research/4280
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.