Publication Type

Conference Proceeding Article

Version

acceptedVersion

Publication Date

8-2021

Abstract

We propose a novel training methodology, Concept Group Learning (CGL), that encourages the training of interpretable CNN filters by partitioning the filters in each layer into concept groups, each of which is trained to learn a single visual concept. We achieve this through a novel regularization strategy that forces filters in the same group to be active in similar image regions for a given layer. We additionally use a regularizer to encourage a sparse weighting of the concept groups in each layer, so that a few concept groups can carry greater importance than others. We quantitatively evaluate CGL's model interpretability using standard interpretability evaluation techniques and find that our method increases interpretability scores in most cases. Qualitatively, we compare the image regions that are most active under filters learned with CGL against those learned without it, and find that CGL activation regions concentrate more strongly around semantically relevant features.
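
The abstract describes two regularizers only at a high level. The sketch below is a minimal, hypothetical illustration in plain PyTorch of what such penalties could look like: a group-consistency term that pushes filters in the same concept group to fire in similar spatial regions, and an L1 term that encourages sparse per-group importance weights. The function names (group_consistency_penalty, group_sparsity_penalty) and the assumption of an even split of channels into groups are our own; this is not the paper's exact loss, for which see the DOI below.

import torch
import torch.nn.functional as F

def group_consistency_penalty(feature_maps: torch.Tensor, num_groups: int) -> torch.Tensor:
    # feature_maps: activations of one conv layer, shape (B, C, H, W).
    # The C filters are assumed to be split evenly into `num_groups` concept groups.
    b, c, h, w = feature_maps.shape
    assert c % num_groups == 0, "channels must divide evenly into concept groups"
    group_size = c // num_groups

    # Flatten spatial dims and L2-normalize each filter's activation map so the
    # penalty compares *where* filters fire rather than how strongly.
    maps = feature_maps.reshape(b, num_groups, group_size, h * w)
    maps = F.normalize(maps, p=2, dim=-1)

    # Penalize each filter's deviation from its group's mean activation map:
    # a low value means filters in a group are active in similar image regions.
    group_mean = maps.mean(dim=2, keepdim=True)
    return ((maps - group_mean) ** 2).sum(dim=-1).mean()

def group_sparsity_penalty(group_weights: torch.Tensor) -> torch.Tensor:
    # L1 penalty on learned per-group importance weights, encouraging a few
    # concept groups to carry most of the weight in a layer.
    return group_weights.abs().sum()

In use, both terms would simply be added (with tuning coefficients) to the standard classification loss during training; the appropriate coefficients are not specified here and would need to be validated empirically.
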

Keywords

Convolutional Neural Networks, Interpretability, Computer Vision

Discipline

Artificial Intelligence and Robotics | Graphics and Human Computer Interfaces

Research Areas

Intelligent Systems and Optimization

Publication

Proceedings of the 30th International Joint Conference on Artificial Intelligence (IJCAI 2021), Montreal, August 19-27, 2021

First Page

1061

Last Page

1067

ISBN

9780999241196

Identifier

10.24963/ijcai.2021/147

Publisher

IJCAI

City or Country

Montreal, Canada

Copyright Owner and License

Authors

Additional URL

https://doi.org/10.24963/ijcai.2021/147
