Publication Type

Journal Article

Version

acceptedVersion

Publication Date

3-2020

Abstract

Recent years have witnessed promising results from exploring deep convolutional neural networks for face detection. Despite remarkable progress, face detection in the wild remains challenging, especially when detecting faces at vastly different scales and with varied characteristics. In this paper, we propose a novel, simple yet effective framework of “Feature Agglomeration Networks” (FANet) to build a new single-stage face detector that not only achieves state-of-the-art performance but also runs efficiently. Inspired by Feature Pyramid Networks (FPN) (Lin et al., 2017), the key idea of our framework is to exploit the inherent multi-scale features of a single convolutional neural network by aggregating higher-level semantic feature maps of different scales as contextual cues to augment lower-level feature maps in a hierarchical agglomeration manner, at marginal extra computation cost. We further propose a Hierarchical Loss to effectively train the FANet model. We evaluate the proposed FANet detector on several public face detection benchmarks, including the PASCAL face, FDDB, and WIDER FACE datasets, and achieve state-of-the-art results. Our detector runs in real time on VGA-resolution images on a GPU.
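To make the agglomeration idea concrete, below is a minimal, FPN-style sketch of one agglomeration step in PyTorch. It is an illustrative assumption, not the authors' exact FANet operator: the class name, channel sizes, and the element-wise-sum fusion are hypothetical choices; only the general pattern (project a coarse, higher-level semantic map, upsample it to the finer, lower-level map's resolution, and fuse it in as context) follows the abstract.

```python
# Sketch of one hierarchical agglomeration step (FPN-style), assuming PyTorch.
# Names, channel counts, and the sum-based fusion are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AgglomerationBlock(nn.Module):
    """Fuses a higher-level (semantically richer, lower-resolution) feature map
    into a lower-level (higher-resolution) one as contextual augmentation."""
    def __init__(self, low_channels: int, high_channels: int, out_channels: int = 256):
        super().__init__()
        self.lateral = nn.Conv2d(low_channels, out_channels, kernel_size=1)  # project lower-level map
        self.reduce = nn.Conv2d(high_channels, out_channels, kernel_size=1)  # project higher-level map
        self.smooth = nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)

    def forward(self, low_feat: torch.Tensor, high_feat: torch.Tensor) -> torch.Tensor:
        high = self.reduce(high_feat)
        # Upsample the coarse, high-level map to the spatial size of the fine, low-level map.
        high = F.interpolate(high, size=low_feat.shape[-2:], mode="nearest")
        fused = self.lateral(low_feat) + high  # element-wise fusion (one possible choice)
        return self.smooth(fused)              # 3x3 conv to smooth upsampling artifacts

# Usage with hypothetical feature-map shapes (e.g., conv4- and conv5-level maps).
if __name__ == "__main__":
    low = torch.randn(1, 512, 40, 40)    # lower-level, higher-resolution features
    high = torch.randn(1, 1024, 20, 20)  # higher-level, lower-resolution features
    block = AgglomerationBlock(low_channels=512, high_channels=1024)
    print(block(low, high).shape)        # torch.Size([1, 256, 40, 40])
```

Applying such blocks repeatedly, from the deepest feature maps toward the shallower ones, yields the hierarchical agglomeration described in the abstract, with each fused map then supervised via the proposed Hierarchical Loss.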

Keywords

Hierarchical loss, Single-stage detectors, Context-aware, Feature agglomeration

Discipline

Databases and Information Systems | Data Storage Systems

Research Areas

Data Science and Engineering

Publication

Neurocomputing

Volume

380

First Page

180

Last Page

189

ISSN

0925-2312

Identifier

10.1016/j.neucom.2019.10.087

Publisher

Elsevier

Additional URL

https://doi.org/10.1016/j.neucom.2019.10.087
