INDUCING INTERPRETABILITY IN CNN

Authors

  • Saurabh Varshneya, Technische Universität Kaiserslautern

Abstract

Different filters of a Convolutional Neural Network (CNN) optimize to recognize specific visual concepts, such as objects,
patterns, and scenes. We develop a training method that learns a more interpretable representation by fusing the representations
obtained from groups of filters in different layers of a CNN. To further enhance the interpretability of these hidden
representations, we use additional regularizers that act as generic priors. These priors encourage filters to form groups and
learn a disentangled representation.
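The abstract does not specify the exact form of these group-encouraging regularizers. As a minimal sketch only, assuming a generic group-sparsity (group-lasso) prior over contiguous groups of filters in a convolutional layer, the idea could look like the following; all names, the group split, and the penalty form are illustrative assumptions, not the authors' method.

    import torch
    import torch.nn as nn

    def group_sparsity_penalty(conv_layer: nn.Conv2d, num_groups: int) -> torch.Tensor:
        """Illustrative group-lasso penalty over filter groups of one conv layer.

        The output filters are split into `num_groups` contiguous groups and the
        penalty sums the L2 norm of each group's weights, encouraging whole groups
        of filters to be used (or suppressed) together. This is a generic prior,
        not the paper's exact regularizer.
        """
        weight = conv_layer.weight                    # shape: (out_ch, in_ch, kH, kW)
        groups = weight.chunk(num_groups, dim=0)      # split along output filters
        return sum(g.norm(p=2) for g in groups)

    # Usage: add the penalty to the task loss during training.
    conv = nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3)
    x = torch.randn(8, 3, 64, 64)
    task_loss = conv(x).pow(2).mean()                 # stand-in for a real task loss
    loss = task_loss + 1e-3 * group_sparsity_penalty(conv, num_groups=4)
    loss.backward()

In this kind of setup, the regularization weight (here 1e-3) and the number of groups would be hyperparameters; the point is only to show how a group-level prior can be added to the standard training objective.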


Published

2022-04-14