Publication Type
Conference Proceeding Article
Version
publishedVersion
Publication Date
5-2022
Abstract
Laws and their interpretations, legal arguments and agreements are typically expressed in writing, leading to the production of vast corpora of legal text. Their analysis, which is at the center of legal practice, becomes increasingly elaborate as these collections grow in size. Natural language understanding (NLU) technologies can be a valuable tool to support legal practitioners in these endeavors. Their usefulness, however, largely depends on whether current state-of-the-art models can generalize across various tasks in the legal domain. To answer this currently open question, we introduce the Legal General Language Understanding Evaluation (LexGLUE) benchmark, a collection of datasets for evaluating model performance across a diverse set of legal NLU tasks in a standardized way. We also provide an evaluation and analysis of several generic and legal-oriented models demonstrating that the latter consistently offer performance improvements across multiple tasks.
Discipline
Artificial Intelligence and Robotics | Science and Technology Law
Research Areas
Innovation, Technology and the Law
Publication
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland, 2022 May 22-27
Volume
1
First Page
4310
Last Page
4330
ISBN
9781955917216
Identifier
10.18653/v1/2022.acl-long.297
Publisher
ACL
City or Country
Dublin
Citation
CHALKIDIS, Ilias; JANA, Abhik; HARTUNG, Dirk; BOMMARITO, Michael; ANDROUTSOPOULOS, Ion; KATZ, Daniel; and ALETRAS, Nikolaos.
LexGLUE: A benchmark dataset for legal language understanding in English. (2022). Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland, 2022 May 22-27. 1, 4310-4330.
Available at: https://ink.library.smu.edu.sg/sol_research/4523
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Additional URL
https://aclanthology.org/2022.acl-long.297