Publication Type
Journal Article
Version
acceptedVersion
Publication Date
5-2024
Abstract
Code models have made significant advances in code intelligence by encoding knowledge about programming languages. While previous studies have explored the capabilities of these models in learning code syntax, there has been limited investigation into their ability to understand code semantics. Additionally, existing analyses assume that the number of edges between nodes in the abstract syntax tree (AST) corresponds to syntactic distance, and they often require transforming the high-dimensional representation space of deep learning models into a low-dimensional one, which may introduce inaccuracies. To study how code models represent code syntax and semantics, we conduct a comprehensive analysis of seven code models: four representative pre-trained code models (CodeBERT, GraphCodeBERT, CodeT5, and UnixCoder) and three large language models (StarCoder, CodeLlama, and CodeT5+). We design four probing tasks to assess the models’ capacities for learning both code syntax and semantics. These probing tasks reconstruct code syntax and semantics structures (the AST, CDG, DDG, and CFG) in the representation space; these structures are core concepts for code understanding. We also investigate the syntactic role encoded in each token representation and the long-range dependencies between code tokens. Additionally, we analyze the distribution of attention weights related to code semantic structures. Through extensive analysis, our findings highlight the strengths and limitations of different code models in learning code syntax and semantics. The results demonstrate that these models excel at learning code syntax, successfully capturing the syntactic relationships between tokens and the syntactic roles of individual tokens. However, their performance in encoding code semantics varies: CodeT5 and CodeBERT capture control and data dependencies well, while UnixCoder shows weaker performance in this respect. We do not observe the LLMs performing substantially better than the pre-trained models overall.
The shallow layers of LLMs perform better than their deep layers. Our investigation of attention weights reveals that different attention heads play distinct roles in encoding code semantics. These findings emphasize the need for further enhancements that help code models better learn code semantics. This study contributes to the understanding of code models’ abilities in syntax and semantic analysis, and our findings provide guidance for future improvements to code models, facilitating their effective application in various code-related tasks.
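A minimal sketch of the kind of structural probe the abstract describes: a small linear projection is trained so that distances between projected token representations match gold pairwise tree distances (e.g. AST distances), testing whether the structure is linearly recoverable from the representation space. Everything here is synthetic and hypothetical — the embeddings, the gold distances, and the dimensions are made up for illustration; the paper itself probes representations extracted from models such as CodeBERT.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tokens, dim, rank = 6, 16, 4

# Gold pairwise "tree" distances for one synthetic token sequence
# (a stand-in for AST distances between code tokens).
idx = np.arange(n_tokens)
gold = np.abs(idx[:, None] - idx[None, :]).astype(float)

# Synthetic token embeddings that encode position in one dimension plus noise,
# so a linear probe can in principle recover the distances.
H = 0.01 * rng.standard_normal((n_tokens, dim))
H[:, 0] += idx

B = 0.1 * rng.standard_normal((rank, dim))  # probe parameters: a low-rank projection


def probe_loss(B, H, gold):
    """Squared-distance probe: predicted d(i,j)^2 = ||B (h_i - h_j)||^2."""
    diff = H[:, None, :] - H[None, :, :]          # (n, n, dim) pairwise differences
    proj = np.einsum("rd,ijd->ijr", B, diff)      # project into probe space
    d2 = (proj ** 2).sum(-1)                      # predicted squared distances
    err = d2 - gold ** 2
    # Analytic gradient of mean(err^2) with respect to B.
    grad = np.einsum("ij,ijr,ijd->rd", err, proj, diff) * (4.0 / err.size)
    return (err ** 2).mean(), grad


loss0, _ = probe_loss(B, H, gold)
for _ in range(300):                              # plain gradient descent
    _, grad = probe_loss(B, H, gold)
    B -= 1e-3 * grad
loss_final, _ = probe_loss(B, H, gold)
print(f"probe loss: {loss0:.2f} -> {loss_final:.4f}")
```

If the loss drops to near zero, the tree distances are (approximately) linearly encoded in the representations; a probe that fails to fit would suggest the structure is absent or encoded non-linearly.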
Keywords
Code Model Analysis, Syntax and Semantic Encoding
Discipline
Programming Languages and Compilers | Software Engineering
Research Areas
Information Systems and Management
Areas of Excellence
Digital transformation
Publication
ACM Transactions on Software Engineering and Methodology
First Page
1
Last Page
28
ISSN
1049-331X
Identifier
10.1145/3664606
Publisher
Association for Computing Machinery (ACM)
Citation
MA, Wei; LIU, Shangqing; ZHAO, Mengjie; XIE, Xiaofei; WANG, Wenhan; HU, Qiang; ZHANG, Jie; and LIU, Yang.
Unveiling code pre-trained models: Investigating syntax and semantics capacities. (2024). ACM Transactions on Software Engineering and Methodology. 1-28.
Available at: https://ink.library.smu.edu.sg/sis_research/9092
Copyright Owner and License
Authors
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-No Derivative Works 4.0 International License.
Additional URL
https://doi.org/10.1145/3664606