Publication Type

Journal Article

Version

publishedVersion

Publication Date

10-2025

Abstract

Developers use logging statements to create logs that document system behavior and aid in software maintenance. High-quality logging is therefore essential for effective maintenance; however, manual logging often leads to errors and inconsistency. Recent methods emphasize using large language models (LLMs) for automated logging statement generation, but these raise privacy and resource concerns that hinder their suitability for enterprise use. This paper presents the first large-scale empirical study evaluating small open-source language models (SOLMs) for automated logging statement generation. We evaluate four prominent SOLMs using various prompt strategies, including Retrieval-Augmented Generation (RAG), and parameter-efficient fine-tuning techniques such as Low-Rank Adaptation (LoRA). Our results show that SOLMs fine-tuned with LoRA and prompted with RAG, particularly Qwen2.5-Coder-14B, outperform existing tools and LLM baselines (e.g., Claude 3.7 Sonnet and GPT-4o) in predicting logging locations and generating high-quality statements, with robust generalization across diverse repositories. These findings highlight SOLMs as a privacy-preserving, efficient alternative for automated logging.
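
For readers unfamiliar with the setup the abstract describes, the following is a minimal sketch (not from the paper) of LoRA fine-tuning applied to an open-source code model, using the Hugging Face Transformers and PEFT libraries. The model name matches the family named in the abstract, but the rank, scaling, and target modules are illustrative assumptions, not the paper's settings.

```python
# Minimal LoRA fine-tuning setup sketch; hyperparameters are illustrative
# assumptions, not the paper's configuration.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "Qwen/Qwen2.5-Coder-14B"  # model family named in the abstract
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# LoRA freezes the base weights and injects small trainable low-rank
# adapter matrices into selected projection layers, so only a tiny
# fraction of parameters is updated during fine-tuning.
lora_config = LoraConfig(
    r=8,                                  # adapter rank (assumed)
    lora_alpha=16,                        # scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],  # attention projections (assumed)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # confirms only the adapters train
```

Because only the adapter weights are updated, this kind of fine-tuning fits on far more modest hardware than full fine-tuning of a 14B-parameter model, which is one reason SOLMs are attractive for the privacy-sensitive enterprise settings the abstract highlights.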

Keywords

Software Logging, Logging Statement, Logging Text, Logging Practice, Large Language Model

Discipline

Programming Languages and Compilers

Research Areas

Intelligent Systems and Optimization

Areas of Excellence

Digital transformation

Publication

ACM Transactions on Software Engineering and Methodology

First Page

1

Last Page

40

ISSN

1049-331X

Identifier

10.1145/3773287

Publisher

Association for Computing Machinery (ACM)

Additional URL

https://doi.org/10.1145/3773287
