Publication Type
Journal Article
Version
acceptedVersion
Publication Date
1-2023
Abstract
Automatically generated static code warnings suffer from a large number of false alarms. Hence, developers only take action on a small percentage of those warnings. To better predict which static code warnings should not be ignored, we suggest that analysts need to look deeper into their algorithms to find choices that better improve the particulars of their specific problem. Specifically, we show here that effective predictors of such warnings can be created by methods that locally adjust the decision boundary (between actionable warnings and others). These methods yield a new high-water mark for recognizing actionable static code warnings. For eight open-source Java projects (cassandra, jmeter, commons, lucene-solr, maven, ant, tomcat, derby) we achieve perfect test results on 4/8 datasets and, overall, a median AUC (area under the true negatives, true positives curve) of 92%.
Keywords
Codes, Computer bugs, false alarms, Industries, locality, hyperparameter optimization, Measurement, software analytics, static analysis, Source coding, Training
Discipline
Software Engineering
Research Areas
Software and Cyber-Physical Systems
Publication
IEEE Transactions on Software Engineering
First Page
1
Last Page
17
ISSN
0098-5589
Identifier
10.1109/TSE.2023.3234206
Publisher
Institute of Electrical and Electronics Engineers
Citation
YEDIDA, Rahul; KANG, Hong Jin; TU, Huy; YANG, Xueqi; LO, David; and MENZIES, Tim.
How to find actionable static analysis warnings: A case study with FindBugs. (2023). IEEE Transactions on Software Engineering. 1-17.
Available at: https://ink.library.smu.edu.sg/sis_research/7768
Copyright Owner and License
Authors
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-No Derivative Works 4.0 International License.
Additional URL
https://doi.org/10.1109/TSE.2023.3234206