Publication Type

Conference Proceeding Article

Publication Date


Abstract

Natural language inference (NLI) is a fundamentally important task in natural language processing that has many applications. The recently released Stanford Natural Language Inference (SNLI) corpus has made it possible to develop and evaluate learning-centered methods such as deep neural networks for NLI. In this paper, we propose a special long short-term memory (LSTM) architecture for NLI. Our model builds on top of a recently proposed neural attention model for NLI but is based on a significantly different idea. Instead of deriving sentence embeddings for the premise and the hypothesis to be used for classification, our solution uses a match-LSTM to perform word-by-word matching of the hypothesis with the premise. This LSTM is able to place more emphasis on important word-level matching results. In particular, we observe that this LSTM remembers important mismatches that are critical for predicting the contradiction or the neutral relationship label. On the SNLI corpus, our model achieves an accuracy of 86.1%, outperforming the state of the art.
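The word-by-word matching idea described above can be illustrated with a minimal sketch. This is not the authors' implementation: it uses simple dot-product attention and randomly initialized toy parameters (the hidden size `d`, gate matrix `W`, and the premise/hypothesis vectors are all illustrative stand-ins), whereas the paper learns an alignment model and trained embeddings.

```python
import numpy as np

# Toy setup: all dimensions, parameters, and word vectors below are
# illustrative stand-ins, not the trained model from the paper.
rng = np.random.default_rng(0)
d = 4                                  # hidden size (illustrative)
premise = rng.normal(size=(5, d))      # 5 premise word representations
hypothesis = rng.normal(size=(3, d))   # 3 hypothesis word representations

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def lstm_step(x, h, c, W):
    """One step of a plain LSTM cell; W maps [x; h] to the 4 gates."""
    z = W @ np.concatenate([x, h])
    i, f, o, g = np.split(z, 4)
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    c = sig(f) * c + sig(i) * np.tanh(g)
    h = sig(o) * np.tanh(c)
    return h, c

# match-LSTM parameters: the input is [attended premise; hypothesis word]
# (size 2d) and the hidden state is size d, so the gate matrix maps 3d -> 4d.
W = 0.1 * rng.normal(size=(4 * d, 3 * d))

h = np.zeros(d)
c = np.zeros(d)
for h_k in hypothesis:
    # Simplified dot-product attention over premise words
    # (the paper uses a learned alignment instead).
    alpha = softmax(premise @ h_k)
    a_k = alpha @ premise              # attention-weighted premise summary
    m_k = np.concatenate([a_k, h_k])   # word-by-word matching input
    h, c = lstm_step(m_k, h, c, W)

# The final hidden state h would feed a 3-way classifier
# (entailment / contradiction / neutral).
print(h.shape)  # (4,)
```

Because the LSTM state is carried across hypothesis words, a strong mismatch at one position can persist in the state and influence the final classification, which is the intuition behind remembering "important mismatches" for the contradiction and neutral labels.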

Disciplines

Databases and Information Systems | Software Engineering | Systems Architecture

Research Areas

Data Management and Analytics


NAACL HLT 2016: Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: San Diego, California, 2016 June 12-17

First Page


Last Page




Publisher

Association for Computational Linguistics (ACL)

City or Country

Stroudsburg, USA

Creative Commons License

Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License
This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License.

Additional URL