Publication Type

Conference Proceeding Article

Version

publishedVersion

Publication Date

3-2025

Abstract

Balancing predictive power and interpretability has long been a challenging research problem, particularly for powerful yet complex models such as neural networks, where nonlinearity obstructs direct interpretation. This paper introduces a novel approach to constructing an explainable neural network that harmonizes predictiveness and explainability. Our model is designed as a linear combination of a sparse set of jointly learned features, each derived from a different trainable function applied to a single 1-dimensional input feature. Leveraging the ability to learn arbitrarily complex relationships, our neural network architecture enables automatic selection of a sparse set of important features, with the final prediction being a sum of rescaled versions of these features. Through extensive experiments on synthetic and real-world datasets, we demonstrate the ability to select significant features while maintaining comparable predictive performance and direct interpretability. We also provide a theoretical analysis of the generalization bound of our framework, which scales linearly in the number of selected features and only logarithmically in the number of input features. Under very mild conditions, we further remove any dependence of the sample complexity on the number of parameters or on architectural details. Our work paves the way for further research on sparse and explainable neural networks with guarantees.
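As a rough illustration of the architecture the abstract describes, a linear combination of sparsely selected, independently learned 1-dimensional shape functions, here is a minimal PyTorch sketch. The class name, the per-feature MLP shape functions, and the L1 penalty on the mixing weights are illustrative assumptions, not the paper's actual selection mechanism or implementation.

```python
import torch
import torch.nn as nn

class SparseAdditiveNet(nn.Module):
    """Hypothetical sketch of a sparse additive neural network:
    one trainable 1-D function g_j per input feature x_j, combined
    as a linear (rescaled) sum. Sparsity is encouraged here via an
    L1 penalty on the mixing weights; the paper's actual mechanism
    and guarantees are not reproduced."""

    def __init__(self, n_features: int, hidden: int = 16):
        super().__init__()
        # One small MLP per input feature implements the shape function g_j.
        self.shape_fns = nn.ModuleList(
            nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(n_features)
        )
        # Mixing weights beta_j rescale each learned feature; driving most
        # of them to zero yields the sparse, directly interpretable sum.
        self.beta = nn.Parameter(torch.zeros(n_features))
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_features); apply g_j to column j in isolation.
        feats = torch.cat(
            [g(x[:, j : j + 1]) for j, g in enumerate(self.shape_fns)], dim=1
        )
        return feats @ self.beta + self.bias

    def l1_penalty(self) -> torch.Tensor:
        # Add lambda * l1_penalty() to the training loss to induce sparsity.
        return self.beta.abs().sum()
```

Training would minimize a task loss plus a penalty term such as lambda * model.l1_penalty(); features whose beta_j shrink to zero are effectively deselected, and each surviving g_j can be plotted directly against its single input for interpretation.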

Discipline

Artificial Intelligence and Robotics | OS and Networks

Research Areas

Intelligent Systems and Optimization

Areas of Excellence

Digital transformation

Publication

Proceedings of the 39th AAAI Conference on Artificial Intelligence, Philadelphia, Pennsylvania, 2025 February 25 - March 4

Volume

39

First Page

18044

Last Page

18052

Identifier

10.1609/aaai.v39i17.33985

Publisher

AAAI Press

City or Country

United States

Additional URL

https://doi.org/10.1609/aaai.v39i17.33985
