Conference Proceeding Article
Natural language inference (NLI) is a fundamentally important task in natural language processing that has many applications. The recently released Stanford Natural Language Inference (SNLI) corpus has made it possible to develop and evaluate learning-centered methods such as deep neural networks for NLI. In this paper, we propose a special long short-term memory (LSTM) architecture for NLI. Our model builds on top of a recently proposed neural attention model for NLI but is based on a significantly different idea. Instead of deriving sentence embeddings for the premise and the hypothesis to be used for classification, our solution uses a match-LSTM to perform word-by-word matching of the hypothesis with the premise. This LSTM is able to place more emphasis on important word-level matching results. In particular, we observe that this LSTM remembers important mismatches that are critical for predicting the contradiction or the neutral relationship label. On the SNLI corpus, our model achieves an accuracy of 86.1%, outperforming the state of the art.
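To make the word-by-word matching concrete, below is a minimal PyTorch sketch of the attention-then-match step the abstract describes: at each hypothesis word, attention weights over the premise produce an attended premise vector, which is concatenated with the hypothesis hidden state and fed into the match-LSTM. The layer names, hidden size, and class names are illustrative assumptions for this sketch, not the authors' released implementation.

```python
# Hedged sketch of a match-LSTM for NLI; sizes and names are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MatchLSTM(nn.Module):
    def __init__(self, embed_dim=300, hidden_dim=150, num_classes=3):
        super().__init__()
        self.premise_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.hypothesis_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # Attention parameters: score each premise position against the
        # current hypothesis word and the previous match-LSTM state.
        self.w_s = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.w_t = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.w_m = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.v = nn.Linear(hidden_dim, 1, bias=False)
        # The match-LSTM consumes the attended premise vector concatenated
        # with the current hypothesis hidden state.
        self.match_cell = nn.LSTMCell(2 * hidden_dim, hidden_dim)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, premise_emb, hypothesis_emb):
        # premise_emb: (batch, M, embed_dim); hypothesis_emb: (batch, N, embed_dim)
        h_s, _ = self.premise_lstm(premise_emb)        # (batch, M, hidden)
        h_t, _ = self.hypothesis_lstm(hypothesis_emb)  # (batch, N, hidden)
        batch, n_steps, hidden = h_t.shape
        h_m = h_t.new_zeros(batch, hidden)  # match-LSTM hidden state
        c_m = h_t.new_zeros(batch, hidden)  # match-LSTM cell state
        proj_s = self.w_s(h_s)              # precompute premise projections
        for k in range(n_steps):
            # Attention over premise words for hypothesis word k.
            scores = self.v(torch.tanh(
                proj_s + (self.w_t(h_t[:, k]) + self.w_m(h_m)).unsqueeze(1)
            )).squeeze(-1)                              # (batch, M)
            alpha = F.softmax(scores, dim=-1)
            a_k = torch.bmm(alpha.unsqueeze(1), h_s).squeeze(1)  # attended premise
            # Word-by-word matching: feed [attended premise; hypothesis word].
            h_m, c_m = self.match_cell(
                torch.cat([a_k, h_t[:, k]], dim=-1), (h_m, c_m))
        # Predict entailment / contradiction / neutral from the final match state,
        # which can accumulate important mismatches across the hypothesis.
        return self.classifier(h_m)
```

Because the match-LSTM state is threaded through every hypothesis position, a strong mismatch at one word can persist in the state and influence the final label, which is the behavior the abstract highlights for contradiction and neutral pairs.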
Databases and Information Systems | Software Engineering | Systems Architecture
Data Management and Analytics
NAACL HLT 2016: Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego, California, June 12-17, 2016
Association for Computational Linguistics (ACL)
WANG, Shuohang and JIANG, Jing.
Learning natural language inference with LSTM. (2016). NAACL HLT 2016: Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego, California, June 12-17, 2016. 1442-1451. Research Collection School of Information Systems.
Available at: http://ink.library.smu.edu.sg/sis_research/3434
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.