Testing and debugging are important activities during software development and maintenance. Testing is performed to check whether the code contains errors, whereas debugging is done to locate and fix those errors. Testing can be manual or automated and comes in different forms, such as unit, integration, system, and stress testing. Debugging can also be manual or automated. Both activities have drawn the attention of researchers in recent years. Past studies have proposed many testing techniques, such as automated test generation, test minimization, and test case selection. Studies on debugging have proposed new techniques to find bugs accurately and efficiently using various fault localization schemes, such as spectrum-based fault localization, IR-based fault localization, program slicing, and delta debugging. However, even after years of research, software continues to have bugs, which can have significant implications for organizations and the economy. Developers often mention that the number of bug reports their projects receive overwhelms the resources they have. This raises the question of analyzing the current state of testing and debugging to understand their advantages and shortcomings. In addition, many debugging techniques proposed in the past ignore bias in the data, which can lead to wrong results. Furthermore, it is equally important to understand the expectations of practitioners who currently use, or will use, these techniques. These analyses will help researchers understand the pain points and expectations of practitioners, which in turn will help them design better techniques. In this thesis, I take a step in this direction by conducting large-scale data analysis and by interviewing and surveying a large number of practitioners. By analyzing the quantitative and qualitative data, I aim to bring forward the gap between practitioners' expectations and research output.
My thesis sheds light on the current state of practice in testing in open-source projects, the tools currently used by developers, and the challenges they face during testing. For bug localization, I find that files that are already localized can bias the results, and this bias must be removed before running a bug localization algorithm. Furthermore, practitioners have high expectations when it comes to adopting a new bug localization tool. I also propose a technique to help developers find elements to test. In addition, through interviews and surveys, I provide suggestions for developers to create good test cases based on several characteristics, such as size and complexity, coverage, maintainability, and bug detection. In the future, I plan to perform a longitudinal study to understand the causal impact of testing on software quality. I also plan to perform an empirical validation of good test cases based on the suggestions received from practitioners.
Software Testing, Debugging, Bug Localization, Empirical Software Engineering, Mining Software Repositories
PhD in Information Systems
Programming Languages and Compilers | Software Engineering
Singapore Management University
KOCHHAR, Pavneet Singh.
Testing and debugging: A reality check. (2017). Singapore Management University. Dissertations and Theses Collection.
Available at: http://ink.library.smu.edu.sg/etd_coll_all/17
Copyright Owner and License
Singapore Management University
Creative Commons License
This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License.