Publication Type

Conference Proceeding Article

Version

publishedVersion

Publication Date

7-2023

Abstract

Quality assurance (QA) tools are receiving increasing attention and are widely used by developers. Given the wide range of solutions for QA technology, how to evaluate QA tools remains an open question. Most existing studies are limited in the following ways: (i) they compare tools without analyzing their scanning rules; (ii) they disagree on the effectiveness of tools due to differences in study methodology and benchmark datasets; (iii) they do not separately analyze the role of the warnings; and (iv) there is no large-scale study of time performance. To address these problems, in this paper we systematically select 6 free or open-source tools from a list of 148 existing Java QA tools for a comprehensive study. To evaluate the tools along multiple dimensions, we first map their scanning rules to the CWE and analyze the coverage and granularity of those rules. We then conduct an experiment on 5 benchmarks, comprising 1,425 bugs, to investigate the effectiveness of these tools. Furthermore, we make a substantial effort to investigate the effectiveness of the warnings by comparing them against the real labeled bugs and examining their role in bug detection. Finally, we assess the tools’ time performance on 1,049 projects. The findings of our comprehensive study can help developers improve their tools and provide users with suggestions for selecting QA tools.

Keywords

Bug finding, CWE, Quality assurance tools, Scanning rules

Discipline

Information Security

Areas of Excellence

Digital transformation

Publication

ISSTA ’23: Proceedings of the 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis, July 17-21, 2023, Seattle, WA, USA

First Page

285

Last Page

297

ISBN

9798400702211

Identifier

10.1145/3597926.3598056

Publisher

ACM

City or Country

New York

Additional URL

https://doi.org/10.1145/3597926.3598056
