Publication Type

Conference Proceeding Article

Version

acceptedVersion

Publication Date

5-2026

Abstract

An effective healthcare agent must be able to recall and reason over a patient’s longitudinal medical history. However, the absence of datasets with realistic long-term dialogue timelines limits systematic evaluation. Real clinical text is constrained by privacy and ethics, while existing benchmarks focus on isolated interactions, failing to capture cross-session reasoning. We introduce a framework for synthesizing high-quality, long-term medical dialogues with LLMs. Our approach entails a knowledge-guided decomposition into three stages: constructing synthetic patient profiles with diverse disease and complication trajectories, generating multi-turn dialogues for each encounter, and integrating them into a coherent longitudinal history dataset, MediLongChat. We establish three benchmark tasks—In-dialogue Reasoning, Cross-dialogue Reasoning, and Synthesize Reasoning—to evaluate the memory capabilities of healthcare agents. To assess data quality, we introduce a multidimensional evaluation framework combining vector-based metrics with LLM-as-a-judge assessments. Specifically, we define automatic measures—Faithfulness, Coherence, and Diversity—together with two LLM-based evaluations: Correctness and Realism. Benchmark experiments show that even state-of-the-art LLMs struggle with MediLongChat. These findings highlight the benchmark’s applicability and underscore the need for tailored methods to advance healthcare agents.
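The abstract's vector-based measures could be realized along the following lines. The snippet below is a minimal sketch, assuming sentence-transformer embeddings and simple cosine-similarity definitions; the embedding model name and the exact metric formulas are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch (not the paper's implementation): embedding-based
# Faithfulness / Coherence / Diversity scores for generated dialogues.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model


def _cos(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity with a small epsilon for numerical safety.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))


def faithfulness(turns: list[str], profile_facts: list[str]) -> float:
    """Mean similarity of each dialogue turn to its closest patient-profile fact."""
    t_emb, f_emb = model.encode(turns), model.encode(profile_facts)
    return float(np.mean([max(_cos(t, f) for f in f_emb) for t in t_emb]))


def coherence(turns: list[str]) -> float:
    """Mean similarity between adjacent turns within one dialogue."""
    emb = model.encode(turns)
    return float(np.mean([_cos(emb[i], emb[i + 1]) for i in range(len(emb) - 1)]))


def diversity(dialogues: list[str]) -> float:
    """Mean pairwise cosine distance across whole dialogues (higher = more diverse)."""
    emb = model.encode(dialogues)
    dists = [1.0 - _cos(emb[i], emb[j])
             for i in range(len(emb)) for j in range(i + 1, len(emb))]
    return float(np.mean(dists)) if dists else 0.0
```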

Keywords

Healthcare agent, Synthetic Dataset, LLM, Medical Dialogue Dataset

Discipline

Artificial Intelligence and Robotics | Computer Sciences

Research Areas

Intelligent Systems and Optimization

Areas of Excellence

Digital transformation

Publication

Proceedings of the 25th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2026), Paphos, Cyprus, May 25-29, 2026

First Page

1

Last Page

9

City or Country

Cyprus
