Publication Type
Conference Proceeding Article
Version
publishedVersion
Publication Date
5-2022
Abstract
Automatic static analysis tools (ASATs), such as FindBugs, have a high false alarm rate. The large number of false alarms poses a barrier to their adoption. Researchers have proposed the use of machine learning to prune false alarms and present only actionable warnings to developers. A state-of-the-art study identified a set of “Golden Features” based on metrics computed over the characteristics and history of the file, code, and warning. Recent studies report that machine learning using these features is extremely effective, achieving almost perfect performance. We perform a detailed analysis to better understand the strong performance of the “Golden Features”. We find that several studies used an experimental procedure that results in data leakage and data duplication, subtle issues with significant implications. First, the ground-truth labels leaked into features that measure the proportion of actionable warnings in a given context. Second, many warnings in the testing dataset also appear in the training dataset. Next, we demonstrate limitations of the warning oracle that determines the ground-truth labels: a heuristic that compares warnings in a given revision to those in a future reference revision. We show that the choice of reference revision influences the warning distribution. Moreover, the heuristic produces labels that do not agree with human oracles. Hence, the strong performance previously reported for these techniques overestimates their true performance if adopted in practice. Our results convey several lessons and provide guidelines for evaluating false alarm detectors.
Keywords
Static analysis, False alarms, Data leakage, Data duplication
Discipline
Databases and Information Systems
Research Areas
Data Science and Engineering; Information Systems and Management
Publication
Proceedings of the 44th International Conference on Software Engineering, Pittsburgh, PA, USA, 2022 May 21-29
First Page
698
Last Page
709
Identifier
10.1145/3510003.3510214
Publisher
Association for Computing Machinery
City or Country
New York
Citation
KANG, Hong Jin; AW, Khai Loong; and LO, David.
Detecting false alarms from automatic static analysis tools: how far are we?. (2022). Proceedings of the 44th International Conference on Software Engineering, Pittsburgh, PA, USA, 2022 May 21-29. 698-709.
Available at: https://ink.library.smu.edu.sg/sis_research/7686
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Additional URL
https://doi.org/10.1145/3510003.3510214