Publication Type

Conference Proceeding Article

Version

acceptedVersion

Publication Date

9-2014

Abstract

Code search techniques return relevant code fragments given a user query. They typically work in a passive mode: given a query, the engine returns a static list of code fragments sorted by relevance scores. The user goes through the returned list from top to bottom and, as he or she checks each code fragment, naturally forms an opinion about its true relevance. In an active mode, those opinions are taken as feedback to the search engine for refining the result list. In this work, we incorporate users' opinions on the results of a code search engine to refine result lists: as a user forms an opinion about one result, our technique takes this opinion as feedback and leverages it to re-order the results so that truly relevant results appear earlier in the list. The refinement results can also be cached to potentially improve future code search tasks. We have built our active refinement technique on top of a state-of-the-art code search engine, Portfolio. Our technique improves Portfolio in terms of Normalized Discounted Cumulative Gain (NDCG) by more than 11.3%, from 0.738 to 0.821.
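
As a rough illustration of the feedback-driven re-ranking described above (a minimal sketch, not the paper's actual algorithm; the similarity function and all names are hypothetical), the following Python fragment re-orders the not-yet-judged results after each user judgment and computes NDCG, the metric reported in the abstract:

    import math

    def dcg(relevances):
        # Discounted Cumulative Gain of a ranked list of graded relevances.
        return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances))

    def ndcg(ranked_relevances):
        # NDCG: DCG of the given ranking divided by DCG of the ideal ranking.
        ideal_dcg = dcg(sorted(ranked_relevances, reverse=True))
        return dcg(ranked_relevances) / ideal_dcg if ideal_dcg > 0 else 0.0

    def refine(results, feedback, similarity):
        # Re-order the unjudged results after each user opinion.
        #   results    -- code fragments in the engine's current order
        #   feedback   -- dict mapping judged fragments to True/False (relevant or not)
        #   similarity -- hypothetical function scoring how alike two fragments are
        def boost(fragment):
            return sum(similarity(fragment, judged) * (1.0 if relevant else -1.0)
                       for judged, relevant in feedback.items())
        unjudged = [r for r in results if r not in feedback]
        return sorted(unjudged, key=boost, reverse=True)

Under this sketch, fragments similar to results the user judged relevant move up and fragments similar to results judged irrelevant move down; the accumulated feedback could also be cached to seed future searches, as the abstract notes.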

Keywords

Code Search, User Feedback, Active Learning

Discipline

Software Engineering

Research Areas

Software and Cyber-Physical Systems

Publication

ASE '14: Proceedings of the 29th ACM/IEEE International Conference on Automated Software Engineering: September 15-19, 2014, Västerås, Sweden

First Page

677

Last Page

682

ISBN

9781450330138

Identifier

10.1145/2642937.2642947

Publisher

ACM

City or Country

New York

Additional URL

http://dx.doi.org/10.1145/2642937.2642947
