Publication Type

Conference Paper

Version

publishedVersion

Publication Date

12-2019

Abstract

Using proof techniques involving L∞ covering numbers, we show generalisation error bounds for deep learning with two main improvements over the state of the art. First, our bounds have no explicit dependence on the number of classes except for logarithmic factors. This holds even when formulating the bounds in terms of the L2 norm of the weight matrices, whereas previous bounds exhibit at least a square-root dependence on the number of classes in this case. Second, we adapt the Rademacher analysis of DNNs to incorporate weight sharing, a task of fundamental theoretical importance which was previously attempted only under very restrictive assumptions. In our results, each convolutional filter contributes only once to the bound, regardless of how many times it is applied. Finally, we provide a few further technical improvements, including improving the dependence on the network width from the width before pooling to the width after pooling. We also examine our bound's behaviour on artificial data.
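
For orientation, a minimal schematic of the kind of margin bound the abstract describes (an illustrative sketch of the standard form, not the paper's exact theorem): writing \hat{R}_\gamma for the empirical margin loss at margin \gamma, n for the sample size, and C(\mathcal{F}) for a capacity term derived from the L∞ covering numbers of the network class \mathcal{F} (in the paper, built from norms of the weight matrices, with each convolutional filter counted once), bounds of this type take the shape

    % Schematic only: generic covering-number/Rademacher margin bound,
    % hedged as an assumed illustration rather than the paper's statement.
    P(\text{misclassification}) \;\le\; \hat{R}_\gamma \;+\; \tilde{O}\!\left( \frac{C(\mathcal{F})}{\gamma \sqrt{n}} \right)

The paper's contributions then amount to sharpening C(\mathcal{F}): removing the explicit class-count dependence up to logarithmic factors, and counting shared convolutional weights once.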

Keywords

Deep Learning, convolutions, multi-class, multi-label

Discipline

Databases and Information Systems | Graphics and Human Computer Interfaces

Research Areas

Intelligent Systems and Optimization

Publication

NeurIPS 2019 Workshop on Machine Learning with Guarantees, Vancouver, Canada, 14 December 2019

City or Country

Vancouver, Canada

Additional URL

https://sites.google.com/view/mlwithguarantees/accepted-papers
