Publication Type
Conference Proceeding Article
Version
publishedVersion
Publication Date
7-2023
Abstract
Recent studies have proposed using text-to-speech (TTS) systems to automatically synthesise speech test cases at scale and uncover a large number of failures in automated speech recognition (ASR) systems. However, the failures uncovered by synthetic test cases may not reflect the actual performance of an ASR system when it transcribes human audio; we refer to such failures as false alarms. Given a failed test case synthesised by a TTS system, consisting of TTS-generated audio and the corresponding ground truth text, we feed human audio stating the same text to the ASR system. If the human audio is transcribed correctly, an instance of a false alarm is detected. In this study, we investigate false alarm occurrences in five popular ASR systems using synthetic audio generated by four TTS systems and human audio obtained from two commonly used datasets. Our results show that the fewest false alarms arise when testing Deepspeech and the most when testing Wav2vec2; on average, false alarm rates range from 21% to 34% across the five ASR systems. Among the four TTS systems, Google TTS produces the fewest false alarms (17%) and Espeak TTS the most (32%). Additionally, we build a false alarm estimator that flags potential false alarms and achieves promising results: a precision of 98.3%, a recall of 96.4%, an accuracy of 98.5%, and an F1 score of 97.3%. Our study provides insight into selecting TTS systems that generate high-quality speech for testing ASR systems. Moreover, the false alarm estimator offers a way to minimise the impact of false alarms and to help developers choose suitable test inputs when evaluating ASR systems. The source code used in this paper is publicly available on GitHub at https://github.com/julianyonghao/FAinASRtest.
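To make the detection procedure concrete, the following is a minimal Python sketch of the false-alarm check the abstract describes. The `transcribe` callable, the `normalise` helper, and the exact-match failure criterion are illustrative assumptions, not the paper's actual implementation; the authors' matching and normalisation rules may differ.

```python
# Minimal sketch of the false-alarm check, assuming a hypothetical
# `transcribe(audio) -> str` stand-in for any ASR system's API.
import re

def normalise(text: str) -> str:
    """Lowercase and strip punctuation so the comparison ignores formatting."""
    return re.sub(r"[^a-z0-9 ]", "", text.lower()).strip()

def is_failure(asr_transcript: str, ground_truth: str) -> bool:
    """A test case fails if the ASR output does not match the ground truth.
    (The paper may use a different criterion, e.g. a word error rate threshold.)"""
    return normalise(asr_transcript) != normalise(ground_truth)

def is_false_alarm(transcribe, tts_audio, human_audio, ground_truth: str) -> bool:
    """A failed synthetic test case is a false alarm when the same ASR system
    correctly transcribes human audio stating the same text."""
    failed_on_tts = is_failure(transcribe(tts_audio), ground_truth)
    correct_on_human = not is_failure(transcribe(human_audio), ground_truth)
    return failed_on_tts and correct_on_human
```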
Keywords
Automated speech recognition, Empirical studies, False alarms, Number of false alarms, Software testing, Speech tests, Synthetic tests, Test case, Text to speech, Text-to-speech system
Discipline
Databases and Information Systems | Software Engineering
Research Areas
Data Science and Engineering
Publication
Proceedings of the 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis, Seattle, USA, 2023 July 17-21
First Page
1169
Last Page
1181
ISBN
9798400702211
Identifier
10.1145/3597926.3598126
Publisher
ACM
City or Country
New York
Citation
LAU, Julia Kaiwen; KONG, Kelvin Kai Wen; YONG, Julian Hao; TAN, Per Hoong; YANG, Zhou; YONG, Zi Qian; LOW, Joshua Chern Wey; CHONG, Chun Yong; LIM, Mei Kuan; and LO, David.
Synthesizing speech test cases with text-to-speech? An empirical study on the false alarms in automated speech recognition testing. (2023). Proceedings of the 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis, Seattle, USA, 2023 July 17-21. 1169-1181.
Available at: https://ink.library.smu.edu.sg/sis_research/8566
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Additional URL
https://doi.org/10.1145/3597926.3598126