Publication Type
Conference Proceeding Article
Version
publishedVersion
Publication Date
8-2024
Abstract
Large language models (LLMs) are increasingly used to meet user information needs, but their effectiveness in dealing with user queries that contain various types of ambiguity remains unknown, which ultimately risks user trust and satisfaction. To this end, we introduce CLAMBER, a benchmark for evaluating LLMs using a well-organized taxonomy. Building upon the taxonomy, we construct 12K high-quality data samples to assess the strengths, weaknesses, and potential risks of various off-the-shelf LLMs. Our findings indicate the limited practical utility of current LLMs in identifying and clarifying ambiguous user queries, even when enhanced by chain-of-thought (CoT) and few-shot prompting. These techniques may result in overconfidence in LLMs and yield only marginal enhancements in identifying ambiguity. Furthermore, current LLMs fall short in generating high-quality clarifying questions due to a lack of conflict resolution and inaccurate utilization of inherent knowledge. In this paper, CLAMBER provides guidance for and promotes further research on proactive and trustworthy LLMs.
Discipline
Databases and Information Systems | Programming Languages and Compilers
Research Areas
Data Science and Engineering
Publication
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics, Bangkok, Thailand, 2024 August 11-16
First Page
10746
Last Page
10766
Publisher
ACL
City or Country
USA
Citation
ZHANG, Tong; QIN, Peixin; DENG, Yang; HUANG, Chen; LEI, Wenqiang; LIU, Junhong; JIN, Dingnan; LIANG, Hongru; and CHUA, Tat-Seng.
CLAMBER: A benchmark of identifying and clarifying ambiguous information needs in large language models. (2024). Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics, Bangkok, Thailand, 2024 August 11-16. 10746-10766.
Available at: https://ink.library.smu.edu.sg/sis_research/9238
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-No Derivative Works 4.0 International License.
Additional URL
https://aclanthology.org/2024.acl-long.578