Publication Type

Conference Proceeding Article

Version

publishedVersion

Publication Date

9-2024

Abstract

Container Runtime Systems (CRSs), which form the foundational infrastructure of container clouds, are critically important due to their impact on the quality of container cloud implementations. However, a comprehensive understanding of the quality issues present in CRS implementations remains lacking. To bridge this gap, we conduct the first comprehensive empirical study of CRS bugs. Specifically, we gather 429 bugs from 8,271 commits across dominant CRS projects, including runc, gVisor, containerd, and CRI-O. Through manual analysis, we develop taxonomies of CRS bug symptoms and root causes, comprising 16 and 13 categories, respectively. Furthermore, we evaluate the capability of popular testing approaches, including unit testing, integration testing, and fuzz testing, in detecting these bugs. The results show that 78.79% of the bugs cannot be detected due to the lack of test drivers, test oracles, and effective test cases. Based on the findings of our study, we present implications and future research directions for various stakeholders in the domain of CRSs. We hope that our work can lay the groundwork for future research on CRS bug detection.

Keywords

Container runtime, Empirical studies, Manual analysis, Quality issues, Root cause, Run-time systems, Runtimes, Software testing, Systems implementation, Unit testing

Discipline

Computer Engineering | Software Engineering

Research Areas

Data Science and Engineering; Information Systems and Management

Publication

Proceedings of the 33rd ACM SIGSOFT International Symposium on Software Testing and Analysis, Vienna, Austria, 2024 September 16-20

First Page

1364

Last Page

1376

ISBN

9798400706127

Identifier

10.1145/3650212.3680366

Publisher

ACM

City or Country

New York

Additional URL

https://doi.org/10.1145/3650212.3680366
