An approach for self-training audio event detectors using web data

Publication Type

Conference Proceeding Article

Publication Date

8-2017

Abstract

Audio Event Detection (AED) aims to recognize sounds within audio and video recordings. AED employs machine learning algorithms commonly trained and tested on annotated datasets. However, available datasets are limited in the number of samples, making it difficult to model acoustic diversity. Therefore, we propose combining labeled audio from a dataset with unlabeled audio from the web to improve the sound models. The audio event detectors are trained on the labeled audio and run on unlabeled audio downloaded from YouTube. Whenever the detectors recognize any of the known sounds with high confidence, the corresponding unlabeled audio is used to re-train the detectors. The performance of the re-trained detectors is compared to that of the original detectors on the annotated test set. Results show an improvement in AED performance and uncover challenges of using web audio from videos.
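The self-training loop described above — train on labeled audio, run the detectors on unlabeled web audio, keep only high-confidence detections as pseudo-labels, and retrain — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the classifier, synthetic feature arrays, and the 0.9 confidence threshold are all hypothetical stand-ins for the actual audio features and sound models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins for audio features: a small labeled set and a
# larger unlabeled "web audio" set (hypothetical, for illustration only).
X_labeled = rng.normal(0, 1, (100, 8))
y_labeled = (X_labeled[:, 0] > 0).astype(int)   # two sound classes
X_unlabeled = rng.normal(0, 1, (500, 8))

# 1) Train the initial detector on the labeled audio.
clf = LogisticRegression().fit(X_labeled, y_labeled)

# 2) Run the detector on the unlabeled audio and keep only clips
#    recognized with high confidence (threshold is an assumption).
proba = clf.predict_proba(X_unlabeled)
confidence = proba.max(axis=1)
mask = confidence >= 0.9
pseudo_labels = proba.argmax(axis=1)[mask]

# 3) Re-train on the union of labeled and pseudo-labeled audio.
X_combined = np.vstack([X_labeled, X_unlabeled[mask]])
y_combined = np.concatenate([y_labeled, pseudo_labels])
clf_retrained = LogisticRegression().fit(X_combined, y_combined)
```

The retrained detector can then be evaluated against the original one on the held-out annotated test set, as the abstract describes.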

Discipline

Artificial Intelligence and Robotics

Research Areas

Data Science and Engineering

Publication

2017 25th European Signal Processing Conference (EUSIPCO)

Identifier

10.23919/EUSIPCO.2017.8081532

Publisher

IEEE

City or Country

Kos, Greece

Additional URL

https://doi.org/10.23919/EUSIPCO.2017.8081532
