Publication Type

Conference Proceeding Article

Version

publishedVersion

Publication Date

12-2023

Abstract

Conversational systems based on Large Language Models (LLMs), such as ChatGPT, show exceptional proficiency in context understanding and response generation. However, they still have limitations, such as failing to ask clarifying questions about ambiguous queries or to refuse users' unreasonable requests, both of which are considered key aspects of a conversational agent's proactivity. This raises the question of whether LLM-based conversational systems are equipped to handle proactive dialogue problems. In this work, we conduct a comprehensive analysis of LLM-based conversational systems, focusing on three key aspects of proactive dialogue: clarification, target-guided, and non-collaborative dialogues. To trigger the proactivity of LLMs, we propose the Proactive Chain-of-Thought prompting scheme, which augments LLMs with goal planning capability over descriptive reasoning chains. Empirical findings are discussed to promote future studies on LLM-based proactive dialogue systems.
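The sketch below illustrates, at a high level, the kind of prompting the abstract describes: the model is asked to reason descriptively about the conversation, plan a dialogue act (e.g., clarify or refuse), and only then generate a response. It is a minimal illustration written against the abstract's description only; the helper name build_procot_prompt and the exact prompt wording are assumptions, not the paper's released code.

```python
# Minimal sketch of a Proactive Chain-of-Thought (ProCoT) style prompt builder.
# Hypothetical helper; the paper's actual prompt format may differ.

def build_procot_prompt(dialogue_history, task_background, available_acts):
    """Build a prompt that asks the model to (1) reason about the user's goal,
    (2) plan the next dialogue act, and (3) generate the response."""
    history = "\n".join(f"{speaker}: {utterance}"
                        for speaker, utterance in dialogue_history)
    acts = ", ".join(available_acts)
    return (
        f"{task_background}\n\n"
        f"Conversation so far:\n{history}\n\n"
        "First, think step by step about the user's goal and whether the request "
        "is ambiguous, off-target, or unreasonable (write this as 'Thought:').\n"
        f"Then choose one dialogue act from [{acts}] (write this as 'Act:').\n"
        "Finally, write the system's next utterance (write this as 'Response:')."
    )


if __name__ == "__main__":
    prompt = build_procot_prompt(
        dialogue_history=[("User", "Can you send me that file?")],
        task_background="You are a proactive assistant in an information-seeking dialogue.",
        available_acts=["Ask a clarifying question", "Answer directly", "Refuse the request"],
    )
    print(prompt)  # The resulting prompt can be sent to any instruction-following LLM.
```

The act set and task background are placeholders; in the paper's three settings (clarification, target-guided, and non-collaborative dialogues) the available acts would differ per task.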

Keywords

Comprehensive analysis, Conversational agents, Conversational systems, Empirical findings, In contexts, Language model, Model-based OPC, Planning capability, Proactivity, Response generation

Discipline

Databases and Information Systems | Information Security

Research Areas

Data Science and Engineering; Information Systems and Management

Areas of Excellence

Digital transformation

Publication

Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10

First Page

10602

Last Page

10621

ISBN

9798891760615

Identifier

10.18653/v1/2023.findings-emnlp.711

Publisher

Association for Computational Linguistics

City or Country

USA

Copyright Owner and License

Authors

Additional URL

https://doi.org/10.18653/v1/2023.findings-emnlp.711
