Publication Type

Conference Proceeding Article

Version

acceptedVersion

Publication Date

7-2021

Abstract

We propose Corder, a self-supervised contrastive learning framework for source code models. Corder is designed to alleviate the need for labeled data in code retrieval and code summarization tasks. The pre-trained Corder model can be used in two ways: (1) it can produce vector representations of code, which can be applied to code retrieval tasks that have no labeled data; (2) it can be fine-tuned for tasks that still require labeled data, such as code summarization. The key innovation is that we train the source code model to recognize similar and dissimilar code snippets through a contrastive learning objective. To do so, we use a set of semantic-preserving transformation operators to generate code snippets that are syntactically diverse but semantically equivalent. Through extensive experiments, we show that code models pre-trained by Corder substantially outperform other baselines on code-to-code retrieval, text-to-code retrieval, and code-to-text summarization tasks.
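To make the contrastive objective in the abstract concrete, below is a minimal sketch of an NT-Xent-style loss over paired embeddings, where each pair comes from two semantic-preserving transformations of the same snippet. The encoder, the specific loss variant, and the `temperature` parameter are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of a contrastive objective over transformed code pairs.
# Assumption: an NT-Xent-style loss; Corder's actual objective may differ.
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, temperature=0.5):
    """z1[i] and z2[i] embed two semantic-preserving transformations of the
    same code snippet (a positive pair); every other snippet in the batch
    is treated as a dissimilar (negative) example."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)            # (2N, d)
    sim = z @ z.t() / temperature             # pairwise cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float("-inf"))     # exclude self-similarity
    # The positive for row i is its transformed counterpart at i +/- n.
    targets = torch.cat([torch.arange(n, 2 * n),
                         torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

if __name__ == "__main__":
    # Stand-ins for embeddings produced by a code encoder (hypothetical):
    z1 = torch.randn(8, 128)  # original snippets
    z2 = torch.randn(8, 128)  # syntactically different, semantically equivalent variants
    print(contrastive_loss(z1, z2).item())
```

In this setup, minimizing the loss pulls embeddings of semantically equivalent snippets together and pushes embeddings of unrelated snippets apart, which is what makes the resulting vectors usable for retrieval without task-specific labels.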

Keywords

Software and its engineering, Software libraries and repositories, Information systems, Information retrieval

Discipline

Software Engineering

Research Areas

Software and Cyber-Physical Systems

Publication

Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '21), Virtual Event, July 11–15, 2021

First Page

1

Last Page

11

Identifier

10.1145/3404835.3462840

Publisher

ACM

City or Country

Virtual Event, Canada
