Exploiting the Full Capacity of Deep Neural Networks while Avoiding Overfitting by Targeted Sparsity Regularization

Huesmann K, Klemm S, Linsen L, Risse B

Other scientific publication

Abstract

Overfitting is one of the most common problems when training deep neural networks on comparatively small datasets. Here, we demonstrate that neural network activation sparsity is a reliable indicator of overfitting, which we utilize to propose novel targeted sparsity visualization and regularization strategies. Based on these strategies, we are able to understand and counteract overfitting caused by activation sparsity and filter correlation in a targeted, layer-by-layer manner. Our results demonstrate that targeted sparsity regularization efficiently regularizes well-known architectures on standard datasets, yielding a significant increase in image classification performance and outperforming both dropout and batch normalization. Ultimately, our study reveals novel insights into the seemingly contradictory concepts of activation sparsity and network capacity by demonstrating that targeted sparsity regularization enables salient and discriminative feature learning while exploiting the full capacity of deep models without suffering from overfitting, even when trained excessively.
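
The following PyTorch sketch illustrates one way the idea described in the abstract could be realized: forward hooks estimate per-layer activation sparsity (e.g. for visualization), and a penalty term pushes each layer toward a target sparsity during training. This is only a minimal illustration under stated assumptions; the class SparsityMonitor and the parameters temperature, target_sparsity, and penalty_weight are hypothetical and do not reproduce the paper's actual formulation, for which the linked arXiv preprint should be consulted.

    # Hedged sketch (PyTorch), not the paper's exact formulation: forward hooks
    # record a per-layer activation sparsity estimate, and a penalty drives each
    # layer toward an assumed target sparsity. SparsityMonitor, temperature,
    # target_sparsity, and penalty_weight are illustrative assumptions.
    import torch
    import torch.nn as nn

    class SparsityMonitor:
        """Tracks per-layer activation sparsity of all ReLU layers in a model."""

        def __init__(self, model, temperature=0.01, threshold=1e-6):
            self.temperature = temperature
            self.threshold = threshold
            self.soft_sparsity = {}  # differentiable estimate, used for the penalty
            self.hard_sparsity = {}  # exact fraction of zeros, used for monitoring
            for name, module in model.named_modules():
                if isinstance(module, nn.ReLU):
                    module.register_forward_hook(self._make_hook(name))

        def _make_hook(self, name):
            def hook(module, inputs, output):
                # exp(-a^2 / T) is close to 1 for near-zero activations and close
                # to 0 for large ones, so its mean is a differentiable proxy for
                # the fraction of inactive units in this layer.
                self.soft_sparsity[name] = torch.exp(-output.pow(2) / self.temperature).mean()
                self.hard_sparsity[name] = (output.abs() <= self.threshold).float().mean().item()
            return hook

        def penalty(self, target_sparsity=0.5):
            """Mean squared deviation of each layer's sparsity from the target."""
            if not self.soft_sparsity:
                return torch.tensor(0.0)
            deviations = [(s - target_sparsity) ** 2 for s in self.soft_sparsity.values()]
            return torch.stack(deviations).mean()

    # Toy usage: one training step with the sparsity penalty added to the loss.
    model = nn.Sequential(
        nn.Linear(32, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, 10),
    )
    monitor = SparsityMonitor(model)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    penalty_weight = 0.1  # assumed regularization strength

    x = torch.randn(16, 32)
    y = torch.randint(0, 10, (16,))

    optimizer.zero_grad()
    logits = model(x)
    loss = criterion(logits, y) + penalty_weight * monitor.penalty(target_sparsity=0.5)
    loss.backward()
    optimizer.step()
    print(monitor.hard_sparsity)  # per-layer sparsity values, e.g. for visualization

The soft sparsity proxy is used here only because a hard zero count is not differentiable; any layer-wise sparsity measure with usable gradients could take its place.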

Publication details

Status: Published
Year of publication: 2020 (24.02.2020)
Language of publication: English
Link to full text: https://arxiv.org/abs/2002.09237
Keywords: Machine Learning; Computer Vision and Pattern Recognition

Authors from the University of Münster

Huesmann, Karim
Professorship of Practical Computer Science (Prof. Linsen)
Klemm, Sören
Professorship of Practical Computer Science (Prof. Jiang)
Linsen, Lars
Professorship of Practical Computer Science (Prof. Linsen)
Risse, Benjamin
Junior Professorship of Practical Computer Science (Prof. Risse)