Publication Type

Conference Proceeding Article

Version

publishedVersion

Publication Date

6-2025

Abstract

AlayaDB is a cutting-edge vector database system natively architected for efficient and effective long-context inference for Large Language Models (LLMs) at AlayaDB AI. Specifically, it decouples the KV cache and attention computation from the LLM inference system and encapsulates them into a novel vector database system. For Model-as-a-Service (MaaS) providers, AlayaDB consumes fewer hardware resources and offers higher generation quality across workloads with different Service Level Objectives (SLOs), compared with existing alternative solutions (e.g., KV cache disaggregation, retrieval-based sparse attention). The crux of AlayaDB is that it abstracts the attention computation and cache management of LLM inference into a query processing procedure, and optimizes performance via a native query optimizer. In this work, we demonstrate the effectiveness of AlayaDB via (i) two use cases from our industry partners and (ii) extensive experimental results on LLM inference benchmarks.

Keywords

Vector database, Large language model, Machine learning systems

Discipline

Artificial Intelligence and Robotics | Databases and Information Systems

Research Areas

Data Science and Engineering

Publication

SIGMOD/PODS '25: Companion of the 2025 International Conference on Management of Data, Berlin, Germany, June 22-27, 2025

First Page

364

Last Page

377

Identifier

10.1145/3722212.3724428

Publisher

ACM

City or Country

New York

Additional URL

https://doi.org/10.1145/3722212.3724428
