Thoughts to target: Enhance planning for target-driven conversation

Publication Type

Conference Proceeding Article

Publication Date

11-2024

Abstract

In conversational AI, large-scale models excel at various tasks but struggle with target-driven conversation planning. Current methods, such as chain-of-thought reasoning and tree-search policy learning techniques, either neglect plan rationality or require extensive human simulation procedures. To address this, we propose a novel two-stage framework, named EnPL, to improve LLMs' capability to plan conversations towards designated targets, consisting of (1) distilling natural language plans from a target-driven conversation corpus and (2) generating new plans with demonstration-guided in-context learning. Specifically, we first propose a filtering approach to distill a high-quality plan dataset, ConvPlan. With the aid of the corresponding conversational data and support from relevant knowledge bases, we validate the quality and rationality of these plans. These plans are then leveraged to guide LLMs in planning for new targets. Empirical results demonstrate that our method significantly improves the planning ability of LLMs, especially in target-driven conversations. Furthermore, EnPL proves quite effective for collecting target-driven conversation datasets and enhancing response generation, paving the way for constructing extensive target-driven conversational models.

Keywords

Conversational AI, Conversation planning, Large language models, LLMs

Discipline

Artificial Intelligence and Robotics | Computer Sciences

Research Areas

Intelligent Systems and Optimization; Data Science and Engineering

Publication

Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP 2024): Miami, Florida, USA, November 12-16

First Page

21108

Last Page

21124

Publisher

Association for Computational Linguistics

City or Country

USA

