Publication Type
Conference Proceeding Article
Version
acceptedVersion
Publication Date
3-2022
Abstract
APIs (Application Programming Interfaces) are reusable software libraries and are building blocks for modern rapid software development. Previous research shows that programmers frequently share and search for reviews of APIs on mainstream software question-and-answer (Q&A) platforms like Stack Overflow, which motivates researchers to design tasks and approaches for processing API reviews automatically. Among these tasks, classifying API reviews into different aspects (e.g., performance or security), known as aspect-based API review classification, is of great importance. The current state-of-the-art (SOTA) solution to this task is based on a traditional machine learning algorithm. Inspired by the great success achieved by pre-trained models on many software engineering tasks, this study fine-tunes six pre-trained models for the aspect-based API review classification task and compares them with the current SOTA solution on an API review benchmark collected by Uddin et al. The investigated models include four models (BERT, RoBERTa, ALBERT, and XLNet) that are pre-trained on natural language corpora, BERTOverflow, which is pre-trained on a text corpus extracted from Stack Overflow posts, and CosSensBERT, which is designed for handling imbalanced data. The results show that all six fine-tuned models outperform the traditional machine learning-based tool. More specifically, the improvement in F1-score ranges from 21.0% to 30.2%. We also find that BERTOverflow, the model pre-trained on the Stack Overflow corpus, does not show better performance than BERT. The results also suggest that CosSensBERT does not exhibit better performance than BERT in terms of F1-score, but it is still worth considering as it achieves better performance on MCC and AUC.
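The sketch below is a minimal illustration (not the authors' released code) of the kind of fine-tuning setup the abstract describes: a BERT encoder with a multi-label classification head trained on API review sentences labelled with aspects such as performance or security, using the HuggingFace Transformers library. The checkpoint name, aspect list, and hyperparameters are illustrative assumptions rather than the paper's exact configuration.

```python
# Minimal sketch: fine-tuning BERT for multi-label aspect classification of
# API reviews with HuggingFace Transformers. The aspect names, checkpoint,
# and hyperparameters below are assumptions, not the paper's exact setup.
import torch
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

ASPECTS = ["Performance", "Security", "Usability", "Documentation"]  # assumed subset

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=len(ASPECTS),
    problem_type="multi_label_classification",  # sigmoid per aspect + BCE loss
)

class ReviewDataset(torch.utils.data.Dataset):
    """Wraps (sentence, aspect-label-vector) pairs as model inputs."""
    def __init__(self, sentences, labels):
        self.enc = tokenizer(sentences, truncation=True, padding=True, max_length=128)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[idx], dtype=torch.float)
        return item

# Toy example; in the study the data come from Uddin et al.'s benchmark.
train_ds = ReviewDataset(
    ["This API is fast but poorly documented."],
    [[1.0, 0.0, 0.0, 1.0]],  # Performance and Documentation aspects present
)

args = TrainingArguments(output_dir="out", num_train_epochs=3,
                         per_device_train_batch_size=16, learning_rate=2e-5)
Trainer(model=model, args=args, train_dataset=train_ds).train()
```

Swapping the checkpoint string (e.g., to a RoBERTa, ALBERT, XLNet, or BERTOverflow checkpoint) would follow the same pattern, which is why the study can compare the six models under a uniform fine-tuning procedure.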
Keywords
Software mining, Natural language processing, Multi-label classification, Pre-trained models
Discipline
Databases and Information Systems | Software Engineering
Research Areas
Software and Cyber-Physical Systems
Publication
2022 IEEE International Conference on Software Analysis, Evolution and Reengineering: Honolulu, HI, March 15-18: Proceedings
First Page
1
Last Page
11
ISBN
9781665437868
Identifier
10.1109/SANER53432.2022.00054
Publisher
IEEE
City or Country
Piscataway, NJ
Citation
YANG, Chengran; XU, Bowen; KHAN, Junaed Younus; UDDIN, Gias; HAN, DongGyun; YANG, Zhou; and LO, David.
Aspect-based API review classification: How far can pre-trained transformer model go?. (2022). 2022 IEEE International Conference on Software Analysis, Evolution and Reengineering: Honolulu, HI, March 15-18: Proceedings. 1-11.
Available at: https://ink.library.smu.edu.sg/sis_research/7697
Copyright Owner and License
Authors
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Additional URL
https://doi.org/10.1109/SANER53432.2022.00054