Publication Type
Conference Paper
Version
publishedVersion
Publication Date
12-2019
Abstract
Using proof techniques involving L∞ covering numbers, we show generalisation error bounds for deep learning with two main improvements over the state of the art. First, our bounds have no explicit dependence on the number of classes except for logarithmic factors. This holds even when formulating the bounds in terms of the L2 norm of the weight matrices, whereas previous bounds exhibit at least a square-root dependence on the number of classes in this case. Second, we adapt the Rademacher analysis of DNNs to incorporate weight sharing, a task of fundamental theoretical importance which was previously attempted only under very restrictive assumptions. In our results, each convolutional filter contributes only once to the bound, regardless of how many times it is applied. Finally, we provide a few further technical improvements, including replacing the dependence on the network width before pooling with the width after pooling. We also examine our bound's behaviour on artificial data.
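As a schematic illustration only (not the paper's actual theorem; the symbols and the exact form of the complexity term below are assumptions used to convey the type of result described in the abstract), norm-based generalisation bounds obtained via covering-number and Rademacher-complexity arguments typically take a form such as
\[
\mathbb{E}\big[\ell(f_W(x), y)\big] \;\le\; \widehat{\mathbb{E}}_n\big[\ell(f_W(x), y)\big] \;+\; \widetilde{O}\!\left(\frac{\mathcal{R}(W)}{\sqrt{n}}\right),
\]
where n is the sample size, \(\widehat{\mathbb{E}}_n\) the empirical average over the training set, and \(\mathcal{R}(W)\) a capacity term built from norms of the layer weight matrices. The abstract's claims then correspond to properties of \(\mathcal{R}(W)\) and of the hidden logarithmic factors: the number of classes appears only inside those logarithmic factors, each convolutional filter is counted once in \(\mathcal{R}(W)\) no matter how many spatial positions it is applied to, and the width dependence is measured after pooling rather than before.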
Keywords
Deep Learning, convolutions, multi-class, multi-label
Discipline
Databases and Information Systems | Graphics and Human Computer Interfaces
Research Areas
Intelligent Systems and Optimization
Publication
NeurIPS 2019 Workshop on Machine Learning with Guarantees, Vancouver, Canada, 14 December
City or Country
Vancouver, Canada
Citation
LEDENT, Antoine; LEI, Yunwen; and KLOFT, Marius.
Improved generalisation bounds for deep learning through L∞ covering numbers. (2019). NeurIPS 2019 Workshop on Machine Learning with Guarantees, Vancouver, Canada, 14 December.
Available at: https://ink.library.smu.edu.sg/sis_research/7211
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Additional URL
https://sites.google.com/view/mlwithguarantees/accepted-papers
Included in
Databases and Information Systems Commons, Graphics and Human Computer Interfaces Commons