Towards benchmarking the coverage of automated testing tools in Android against manual testing

Publication Type

Conference Proceeding Article

Publication Date

4-2024

Abstract

Android apps are widely used nowadays, as smartphones have become an irreplaceable part of modern life. To ensure that these apps work correctly, developers need to test them. Testing these apps is laborious, tedious, and often time-consuming. Thus, many automated testing tools for Android have been proposed. These tools generate test cases that aim to achieve as much code coverage as possible, employing a variety of testing methodologies such as model-based testing, search-based testing, random testing, fuzzing, concolic execution, and mutation. Despite much effort, it is not clear how well these testing tools cover user behaviours. To fill this gap, we measure the gap between the coverage of automated testing tools and that of manual testing. In this preliminary work, we selected a set of 11 Android apps and ran state-of-the-art automated testing tools on them. We also manually tested these apps, following a guideline on the actions to exhaust when exploring each app. Our work highlights that automated tools need to close some gaps before they can achieve coverage comparable to manual testing. We also present some limitations that future automated tools need to overcome to achieve such coverage.

Discipline

Software Engineering

Research Areas

Software and Cyber-Physical Systems

Publication

MOBILESoft '24: Proceedings of the IEEE/ACM 11th International Conference on Mobile Software Engineering and Systems, Lisbon, Portugal, April 14-15

First Page

74

Last Page

77

ISBN

9798400705946

Identifier

10.1145/3647632.3651394

Publisher

ACM

City or Country

New York

Additional URL

https://doi.org/10.1145/3647632.3651394
