Publication Type

Conference Proceeding Article

Version

publishedVersion

Publication Date

6-2016

Abstract

Natural language inference (NLI) is a fundamentally important task in natural language processing that has many applications. The recently released Stanford Natural Language Inference (SNLI) corpus has made it possible to develop and evaluate learning-centered methods such as deep neural networks for NLI. In this paper, we propose a special long short-term memory (LSTM) architecture for NLI. Our model builds on top of a recently proposed neural attention model for NLI but is based on a significantly different idea. Instead of deriving sentence embeddings for the premise and the hypothesis to be used for classification, our solution uses a match-LSTM to perform word-by-word matching of the hypothesis with the premise. This LSTM is able to place more emphasis on important word-level matching results. In particular, we observe that this LSTM remembers important mismatches that are critical for predicting the contradiction or the neutral relationship label. On the SNLI corpus, our model achieves an accuracy of 86.1%, outperforming the state of the art.
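The abstract describes the model only at a high level. As a reading aid, the following is a minimal PyTorch sketch of the word-by-word matching loop it refers to: each hypothesis word attends over the premise, and the attended premise vector concatenated with the hypothesis word is fed to an LSTM whose final state drives the three-way classification. The class name, the bilinear attention scoring, and all layer sizes are illustrative assumptions, not the authors' exact parameterization; the paper's attention is additive, with extra weight matrices and a dependence on the previous match-LSTM state.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MatchLSTM(nn.Module):
    """Minimal sketch of word-by-word matching with an LSTM, in the spirit
    of the paper's match-LSTM. Simplified: bilinear attention stands in for
    the paper's additive attention, and inputs are assumed to be contextual
    word vectors already produced by premise/hypothesis encoders."""

    def __init__(self, hidden_size: int, num_classes: int = 3):
        super().__init__()
        self.hidden_size = hidden_size
        # Bilinear attention scoring (an assumption; the paper scores with
        # an additive form using several weight matrices).
        self.w_score = nn.Linear(hidden_size, hidden_size, bias=False)
        # The match-LSTM consumes [attended premise; hypothesis word].
        self.cell = nn.LSTMCell(2 * hidden_size, hidden_size)
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, premise: torch.Tensor, hypothesis: torch.Tensor):
        # premise:    (prem_len, hidden_size) contextual premise vectors
        # hypothesis: (hyp_len, hidden_size)  contextual hypothesis vectors
        h = torch.zeros(1, self.hidden_size)
        c = torch.zeros(1, self.hidden_size)
        for k in range(hypothesis.size(0)):
            h_k = hypothesis[k]                       # current hypothesis word
            scores = premise @ self.w_score(h_k)      # (prem_len,)
            alpha = F.softmax(scores, dim=0)          # attention over premise
            a_k = alpha @ premise                     # attended premise vector
            m_k = torch.cat([a_k, h_k]).unsqueeze(0)  # word-level match input
            h, c = self.cell(m_k, (h, c))             # match-LSTM step
        # The final hidden state summarizes the matching; classify into
        # entailment / contradiction / neutral.
        return self.classifier(h)
```

For example, `MatchLSTM(hidden_size=300)(torch.randn(12, 300), torch.randn(9, 300))` returns logits over the three relationship labels for a 12-word premise and 9-word hypothesis (300 is an arbitrary size chosen for the sketch, not the paper's setting).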

Discipline

Databases and Information Systems | Systems Architecture

Research Areas

Data Science and Engineering

Publication

NAACL HLT 2016: Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego, California, June 12-17, 2016

First Page

1442

Last Page

1451

ISBN

9781941643914

Identifier

10.18653/v1/N16-1170

Publisher

Association for Computational Linguistics (ACL)

City or Country

Stroudsburg, PA

Additional URL

https://doi.org/10.18653/v1/N16-1170
