Publication Type

Conference Proceeding Article

Version

publishedVersion

Publication Date

7-2025

Abstract

Large Language Models (LLMs) have emerged as powerful tools for generating content and facilitating information seeking across diverse domains. While their integration into conversational systems opens new avenues for interactive information-seeking experiences, their effectiveness is constrained by their knowledge boundaries—the limits of what they know and their ability to provide reliable, truthful, and contextually appropriate information. Understanding these boundaries is essential for maximizing the utility of LLMs for real-time information seeking while ensuring their reliability and trustworthiness. In this tutorial, we will explore the taxonomy of knowledge boundaries in LLMs, addressing their handling of uncertainty, response calibration, and mitigation of unintended behaviors that can arise during interaction with users. We will also present advanced techniques for optimizing LLM behavior in generative information-seeking tasks, ensuring that models align with user expectations of accuracy and transparency. Attendees will gain insights into research trends and practical methods for enhancing the reliability and utility of LLMs for trustworthy information access.

Keywords

Trustworthy Information Access, Large Language Model, Knowledge Boundary, Retrieval-augmented Generation

Discipline

Artificial Intelligence and Robotics

Research Areas

Intelligent Systems and Optimization

Areas of Excellence

Digital transformation

Publication

SIGIR '25: Proceedings of the 48th International ACM SIGIR Conference on Research and Development in Information Retrieval, Padua, Italy, July 13-18

First Page

4086

Last Page

4089

Identifier

10.1145/3726302.3731684

Publisher

ACM

City or Country

New York

Additional URL

https://doi.org/10.1145/3726302.3731684
