Publication Type
Conference Proceeding Article
Version
publishedVersion
Publication Date
5-2022
Abstract
Developers frequently use APIs to implement certain functionalities, such as parsing Excel files or reading and writing text files line by line. Developers can greatly benefit from automatic API usage sequence generation based on natural language queries for building applications in a faster and cleaner manner. Existing approaches either use information retrieval models to search for matching API sequences given a query, or use an RNN-based encoder-decoder to generate API sequences. The first approach treats queries and API names as bags of words and lacks a deep comprehension of the semantics of the queries. The latter approach adapts a neural language model to encode a user query into a fixed-length context vector and generate API sequences from that context vector. We want to understand the effectiveness of recent Pre-trained Transformer-based Models (PTMs) for the API learning task. These PTMs are trained on large natural language corpora in an unsupervised manner to retain contextual knowledge about the language, and have found success in solving similar Natural Language Processing (NLP) problems. However, the applicability of PTMs has not yet been explored for the API sequence generation task. We empirically evaluate the PTMs on a dataset of 7 million annotations collected from GitHub; the same dataset was used to assess previous approaches. Based on our results, PTMs generate more accurate API sequences and outperform other related methods by ∼11%. We have also identified two different tokenization approaches that can contribute to a significant boost in PTMs' performance on the API sequence generation task.
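To make the setup concrete, the following is a minimal sketch, not the paper's exact pipeline, of how a pretrained encoder-decoder Transformer could map a natural-language query to an API sequence using the Hugging Face Transformers library. The model name t5-small, the example query, and the decoding settings are illustrative assumptions; in practice the PTM would first be fine-tuned on query-to-API-sequence pairs such as the GitHub annotations mentioned above.

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Illustrative stand-in PTM; the paper evaluates several pretrained models,
# and a checkpoint fine-tuned on query -> API-sequence pairs is assumed here.
model_name = "t5-small"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Encode the natural-language query. How API names in the training data
# (e.g., "BufferedReader.readLine") are split into subword tokens is the
# kind of tokenization choice the abstract reports as affecting performance.
query = "read a text file line by line"
inputs = tokenizer(query, return_tensors="pt")

# Decode a candidate API sequence with beam search (settings are illustrative).
output_ids = model.generate(**inputs, max_length=64, num_beams=4)
api_sequence = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(api_sequence)  # e.g. "FileReader.new BufferedReader.new BufferedReader.readLine"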
Keywords
API, Deep learning, Transformers, Code search, API sequence, API usage
Discipline
Software Engineering
Research Areas
Software and Cyber-Physical Systems
Publication
Proceedings of the 30th IEEE/ACM International Conference on Program Comprehension, Pittsburgh, USA, 2022 May 16-17
First Page
309
Last Page
320
ISBN
9781450392983
Identifier
10.1145/3524610.3527886
Publisher
ACM
City or Country
New York, NY
Citation
HADI, Mohammad Abdul; YUSUF, Imam Nur Bani; THUNG, Ferdian; LUONG, Gia Kien; JIANG, Lingxiao; FARD, Fatemeh H.; and LO, David.
On the effectiveness of pretrained models for API learning. (2022). Proceedings of the 30th IEEE/ACM International Conference on Program Comprehension, Pittsburgh, USA, 2022 May 16-17. 309-320.
Available at: https://ink.library.smu.edu.sg/sis_research/7642
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Additional URL
https://doi.org/10.1145/3524610.3527886