Publication Type

Conference Proceeding Article

Version

acceptedVersion

Publication Date

6-2025

Abstract

Log Anomaly Detection (LAD) seeks to identify atypical patterns in log data that are crucial to assessing the security and condition of systems. Although Large Language Models (LLMs) have shown tremendous success in various fields, the use of LLMs for detecting log anomalies remains largely unexplored. This work aims to fill this gap. Due to the prohibitive costs involved in fully fine-tuning LLMs, we explore the use of parameter-efficient fine-tuning techniques (PEFTs) for adapting LLMs to LAD. To explore the potential of LLM-driven LAD in depth, we present a comprehensive investigation of leveraging two of the most popular PEFTs – Low-Rank Adaptation (LoRA) and Representation Fine-tuning (ReFT) – to tap into three prominent LLMs of varying size, namely RoBERTa, GPT-2, and Llama-3, for parameter-efficient LAD. Comprehensive experiments on four public log datasets are performed to reveal important insights into effective LLM-driven LAD from several key perspectives, including the efficacy of these PEFT-based LLM-driven LAD methods, their stability, sample efficiency, robustness w.r.t. unstable logs, and cross-dataset generalization. Code is available at https://github.com/mala-lab/LogADReft.
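
For illustration only, the sketch below shows how one of the studied PEFTs, LoRA, can adapt one of the studied LLMs, RoBERTa, to binary log anomaly detection using the Hugging Face transformers and peft libraries. This is not the paper's implementation (the authors' code is at the GitHub link above); the rank, scaling factor, target modules, and sample log line are assumptions chosen for the example.

# Minimal sketch: LoRA-based parameter-efficient fine-tuning of RoBERTa
# for binary log anomaly detection (normal vs. anomalous log sequences).
# Hyperparameters below are illustrative assumptions, not the paper's
# settings; see https://github.com/mala-lab/LogADReft for the real code.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import LoraConfig, get_peft_model, TaskType

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2  # 0 = normal, 1 = anomalous
)

# LoRA freezes the base model and trains small low-rank adapters injected
# into the attention projections, so only a tiny fraction of weights update.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,                        # rank of the low-rank update matrices (assumed)
    lora_alpha=16,              # scaling factor for the adapter output (assumed)
    lora_dropout=0.1,
    target_modules=["query", "value"],  # RoBERTa attention projections
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% trainable

# Toy forward/backward pass on a single (hypothetical) HDFS-style log line.
batch = tokenizer(
    "Receiving block blk_-162 src: /10.250.10.6 dest: /10.250.10.6",
    return_tensors="pt",
)
labels = torch.tensor([0])  # label this line as normal
loss = model(**batch, labels=labels).loss
loss.backward()  # gradients flow only through the LoRA adapters and the head

In practice, such adapters are trained over windows of parsed log events rather than single lines, and ReFT would instead intervene on hidden representations; both keep the frozen base LLM shared across tasks.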

Keywords

Fine-tuning; Large language models; Log anomaly detection

Discipline

Databases and Information Systems | Theory and Algorithms

Publication

Data Science: Foundations and Applications: 29th Pacific-Asia Conference on Knowledge Discovery and Data Mining, PAKDD 2025, Sydney, NSW, Australia, June 10-13, 2025, Proceedings

First Page

325

Last Page

337

ISBN

978-981-96-8297-3

Identifier

10.1007/978-981-96-8298-0_26

Publisher

Springer

City or Country

Cham

Additional URL

https://doi.org/10.1007/978-981-96-8298-0_26
