ChatTracker: Enhancing visual tracking via LLM-driven iterative description refinement
Publication Type
Journal Article
Publication Date
3-2026
Abstract
Visual object tracking focuses on locating a target object within a video sequence based on an initial bounding box. Recently, Vision-Language (VL) trackers have been proposed to utilize additional natural language descriptions to enhance versatility in various applications. Despite this potential, VL trackers still underperform State-of-the-Art (SoTA) visual trackers in terms of tracking accuracy. We find that this inferiority is primarily due to their heavy reliance on manual textual annotations, which frequently include ambiguous language descriptions. In this paper, we identify, for the first time through manual evaluation, that over 10% of textual annotations in existing VL tracking datasets suffer from inaccuracies. To address this problem, we propose ChatTracker, which leverages the wealth of world knowledge in a Multimodal Large Language Model (MLLM) to generate high-quality language descriptions and enhance tracking performance. To this end, we propose a novel Reflection-based Language Description Refinement Module that iteratively refines ambiguous and inaccurate descriptions of the target using tracking feedback. To further utilize the semantic information produced by the MLLM, a simple yet effective VL tracking framework is proposed, which can be easily integrated as a plug-and-play module to boost the performance of both VL and visual trackers. Experimental results show that ChatTracker achieves comparable performance to existing SoTA tracking methods. In addition, language descriptions generated by ChatTracker enhance the performance of various VL trackers and exhibit better text-to-image alignment than annotations in the original dataset.
Moreover, by providing more accurate language descriptions, our proposed framework can improve performance on various visual tasks, including Referring Expression Comprehension (REC), Referring Expression Segmentation (RES), and Referring Video Object Segmentation (R-VOS), which demonstrates the universality of ChatTracker. We release the manual evaluation results and the generated textual descriptions, aiming to drive advancements in VL tracking.
Keywords
Multimodal learning, Single object tracking, Vision-Language trackers, Visual object tracking
Discipline
Artificial Intelligence and Robotics | Graphics and Human Computer Interfaces
Publication
IEEE Transactions on Pattern Analysis and Machine Intelligence
First Page
1
Last Page
18
ISSN
0162-8828
Identifier
10.1109/TPAMI.2026.3674357
Publisher
Institute of Electrical and Electronics Engineers
Citation
ZHANG, Yu; SUN, Yiming; ZHANG, Mi; YU, Fan; CHEN, Shaoxiang; LI, Yang; WANG, Changbo; ZHU, Jianke; and HOI, Steven C. H.
ChatTracker: Enhancing visual tracking via LLM-driven iterative description refinement. (2026). IEEE Transactions on Pattern Analysis and Machine Intelligence. 1-18.
Available at: https://ink.library.smu.edu.sg/sis_research/11084
Additional URL
https://doi.org/10.1109/TPAMI.2026.3674357