Publication Type

Conference Proceeding Article

Version

publishedVersion

Publication Date

10-2023

Abstract

Large language models (LLMs) have gained widespread adoption in various natural language processing tasks, including question answering and dialogue systems. However, a major drawback of LLMs is the issue of hallucination, where they generate unfaithful or inconsistent content that deviates from the input source, leading to severe consequences. In this paper, we propose a robust discriminator named RelD to effectively detect hallucination in LLMs' generated answers. RelD is trained on the constructed RelQA, a bilingual question-answering dialogue dataset with answers generated by LLMs and a comprehensive set of metrics. Our experimental results demonstrate that the proposed RelD successfully detects hallucination in the answers generated by diverse LLMs. Moreover, it performs well in distinguishing hallucination in LLMs' generated answers from both in-distribution and out-of-distribution datasets. Additionally, we conduct a thorough analysis of the types of hallucinations that occur and present valuable insights. This research significantly contributes to the detection of reliable answers generated by LLMs and holds noteworthy implications for mitigating hallucination in future work.
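To make the problem setting concrete, the sketch below frames hallucination detection as binary sequence-pair classification over (question, answer) pairs using a generic pretrained encoder from Hugging Face Transformers. The encoder name, label convention, and decision threshold are illustrative assumptions only; this is not the paper's RelD architecture, and the classification head would need to be fine-tuned on labeled data such as RelQA before its scores are meaningful.

```python
# Illustrative sketch, not the authors' RelD implementation: hallucination detection
# posed as binary classification over (question, answer) pairs. The model name,
# label mapping, and threshold are assumptions for demonstration purposes.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder multilingual encoder (RelQA is described as bilingual).
MODEL_NAME = "bert-base-multilingual-cased"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# The freshly initialized 2-way head must be fine-tuned on labeled QA data
# (e.g., reliable vs. hallucinated answers) before its predictions are useful.
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()


def is_hallucinated(question: str, answer: str, threshold: float = 0.5) -> bool:
    """Return True if the discriminator scores the answer as unfaithful to the question."""
    inputs = tokenizer(question, answer, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Assumed label convention: index 1 = hallucinated / unreliable answer.
    prob_hallucinated = torch.softmax(logits, dim=-1)[0, 1].item()
    return prob_hallucinated >= threshold


# Example usage (scores are meaningless until the head is fine-tuned):
print(is_hallucinated("Who wrote Hamlet?", "Hamlet was written by Charles Dickens."))
```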

Keywords

Hallucination Detection, Large Language Models, Reliable Answers

Discipline

Artificial Intelligence and Robotics | Numerical Analysis and Scientific Computing | Programming Languages and Compilers

Publication

CIKM '23: Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, Birmingham, United Kingdom, October 21-25, 2023

First Page

245

Last Page

255

ISBN

9798400701245

Identifier

10.1145/3583780.3614905

Publisher

ACM

City or Country

New York

Copyright Owner and License

Authors

Additional URL

https://doi.org/10.1145/3583780.3614905
