Publication Type
Conference Proceeding Article
Version
publishedVersion
Publication Date
1-2026
Abstract
Current large language models (LLMs) exhibit significant deficiencies in episodic memory tasks, including encoding, storing, and retrieving specific information from temporally dependent events over long periods of time. Recent approaches to handling memory tasks in LLMs, such as in-context learning, retrieval-augmented generation (RAG), and fine-tuning, may resolve long-term retention issues, but remain inadequate for tasks requiring chronological awareness of the stored information. We introduce Agentic Retrieval with Temporal-Episodic Memory (ARTEM), a hybrid LLM-based agent architecture integrating LLMs with a self-organizing neural network named Spatial-Temporal Episodic Memory (STEM), designed to handle episodic memory tasks. Beyond generating outputs or direct responses, our approach employs LLMs for event extraction from the inputs, representing temporal, spatial, entitative, and semantic information that may facilitate future retrieval. The extracted events are then encoded as vectors and stored rapidly and stably in the episodic memory through instance-based incremental learning in STEM. STEM supports precise episode retrieval and helps reduce the computational overhead of generating appropriate responses with LLMs. Evaluation on standardized episodic memory benchmarks across four tasks—partial cue retrieval, epistemic uncertainty detection, recent event identification, and chronological recall—demonstrates superior performance of ARTEM compared to in-context learning, RAG, and fine-tuning across various popular LLMs.
Keywords
Large Language Model, Episodic Memory
Discipline
Artificial Intelligence and Robotics | Databases and Information Systems
Research Areas
Data Science and Engineering
Publication
Proceedings of the 40th Annual AAAI Conference on Artificial Intelligence (AAAI-26), Singapore, January 20-27
Volume
40
First Page
25753
Last Page
25760
Identifier
10.1609/aaai.v40i30.39773
Publisher
AAAI
City or Country
Singapore
Citation
TAN, Cassandra Hui Ming; SUBAGDJA, Budhitama; and TAN, Ah-hwee.
ARTEM: Enhancing large language model agents with spatial-temporal episodic memory. (2026). Proceedings of the 40th Annual AAAI Conference on Artificial Intelligence (AAAI-26), Singapore, January 20-27. 40, 25753-25760.
Available at: https://ink.library.smu.edu.sg/sis_research/11089
Creative Commons License

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Additional URL
https://doi.org/10.1609/aaai.v40i30.39773