The code review comprehension assessment for language models
Publication Type
Conference Proceeding Article
Publication Date
8-2025
Abstract
State-of-the-art large language models (LLMs) have demonstrated impressive code generation capabilities but struggle with real-world software engineering tasks, such as revising source code to address code reviews, which hinders their practical use. Code review comments are often implicit, ambiguous, and colloquial, requiring models to grasp both the code and the human intent behind the comment. This challenge calls for evaluating LLMs' ability to bridge technical and conversational contexts. While existing work employs the automated code refinement (ACR) task to resolve these comments, current evaluation methods fall short: they rely on text-matching metrics that offer limited insight into model failures and remain susceptible to training data contamination. To address these limitations, we introduce CodeReviewQA, a novel evaluation benchmark that enables fine-grained assessment of model capabilities while mitigating data contamination risks. CodeReviewQA decomposes the generative task of code refinement into three essential reasoning steps: change type recognition (CTR), change localisation (CL), and solution identification (SI). Each step is reformulated as multiple-choice questions of varied difficulty, allowing precise measurement of where models fail. Our comprehensive evaluation spans 72 recently released LLMs on 900 manually curated, high-quality examples across nine programming languages. Our results show that CodeReviewQA exposes specific model weaknesses in code review comprehension, disentangled from their generative ACR results.
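To make the three-step decomposition concrete, the sketch below shows one way a CodeReviewQA-style multiple-choice item might be represented and scored. The field names, task labels, and scoring loop are illustrative assumptions for this sketch, not the authors' released data format or evaluation harness.

```python
"""Minimal sketch of a CodeReviewQA-style multiple-choice evaluation.

All field names and the demo item are hypothetical; they illustrate the
CTR/CL/SI decomposition described in the abstract, not the official format.
"""
from dataclasses import dataclass


@dataclass
class MCQItem:
    task: str                # one of "CTR", "CL", "SI" (assumed labels)
    review_comment: str      # the reviewer's natural-language comment
    code_context: str        # the code snippet under review
    choices: list[str]       # candidate answers (e.g. change types or lines)
    answer_index: int        # index of the gold choice


def accuracy(items: list[MCQItem], predict) -> float:
    """Compare a model's predicted choice index against the gold answer."""
    correct = sum(predict(item) == item.answer_index for item in items)
    return correct / len(items)


if __name__ == "__main__":
    # Usage: a trivial baseline that always picks the first choice.
    demo = [
        MCQItem(
            task="CTR",
            review_comment="This helper is never used - can we remove it?",
            code_context="def unused_helper():\n    pass",
            choices=["refactor", "remove", "add", "rename"],
            answer_index=1,
        )
    ]
    print(f"baseline accuracy: {accuracy(demo, lambda item: 0):.2f}")
```

Because each step is a discrete choice rather than free-form generation, accuracy can be computed exactly per step, which is what lets the benchmark separate comprehension failures from generation failures.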
Discipline
Software Engineering
Research Areas
Software and Cyber-Physical Systems
Publication
Proceedings of ACL '25: Findings of the Association for Computational Linguistics, Vienna, July 27 - August 1
First Page
9138
Last Page
9166
Identifier
10.18653/v1/2025.findings-acl.476
Publisher
Association for Computational Linguistics
City or Country
Vienna, Austria
Citation
LIN, Hong Yi; LIU, Chunhua; GAO, Haoyu; THONGTANUNAM, Patanamon; and TREUDE, Christoph.
The code review comprehension assessment for language models. (2025). Proceedings of ACL '25: Findings of the Association for Computational Linguistics, Vienna, July 27 - August 1. 9138-9166.
Available at: https://ink.library.smu.edu.sg/sis_research/10530
Additional URL
https://doi.org/10.18653/v1/2025.findings-acl.476