Learned Random Label Predictions as a Neural Network Complexity Metric

Becker, Marlon; Risse, Benjamin

Research article in online collection (conference) | Peer reviewed

Abstract

We empirically investigate how learning randomly generated labels in parallel with class labels in supervised learning affects memorization, model complexity, and generalization in deep neural networks. To this end, we introduce a multi-head network architecture as an extension of standard CNN architectures. Inspired by methods used in fair AI, our approach allows for the unlearning of random labels, preventing the network from memorizing individual samples. Based on the concept of Rademacher complexity, we first use our proposed method as a complexity metric to analyze the effects of common regularization techniques and challenge the traditional understanding of feature extraction and classification in CNNs. Second, we propose a novel regularizer that effectively reduces sample memorization. However, contrary to the predictions of classical statistical learning theory, we do not observe improvements in generalization.
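The abstract describes a multi-head extension of a standard CNN in which a second head is trained on random labels, and "unlearning" of those labels is enforced on the shared features, a mechanism reminiscent of the gradient-reversal trick used in fair AI and domain-adversarial training. The paper's exact architecture and loss weighting are not given here; the following is a minimal, hypothetical PyTorch sketch of that idea (layer sizes, head dimensions, and the `GradReverse` name are illustrative assumptions):

```python
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates gradients in the backward pass,
    so that shared features are pushed to *unlearn* the random labels."""

    @staticmethod
    def forward(ctx, x):
        return x

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output


class MultiHeadCNN(nn.Module):
    """Toy two-head CNN: one head for class labels, one for random labels.

    The random-label head sees the shared features through GradReverse, so
    minimizing its loss drives the shared features away from memorizing
    sample-specific (random) information. Sizes are illustrative only.
    """

    def __init__(self, num_classes: int = 10, num_random_labels: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.class_head = nn.Linear(16, num_classes)
        self.random_head = nn.Linear(16, num_random_labels)

    def forward(self, x):
        z = self.features(x)
        class_logits = self.class_head(z)
        # Gradient reversal only on the path to the random-label head.
        random_logits = self.random_head(GradReverse.apply(z))
        return class_logits, random_logits
```

In training, one would sum a cross-entropy loss over each head; because of the reversal, the random-label loss acts as a regularizer on the shared features rather than an extra memorization objective.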

Publication details

Repository name: OpenReview
Status: Published
Publication year: 2024
Conference: Workshop on Scientific Methods for Understanding Deep Learning @NeurIPS, Vancouver, Canada
Full-text link: https://openreview.net/pdf?id=dPLmqmNXdw
Keywords: Deep Learning; Random Labels; Generalization; Overfitting

Authors from the University of Münster

Becker, Marlon
Professorship of Geoinformatics for Sustainable Development (Prof. Risse)
Risse, Benjamin
Professorship of Geoinformatics for Sustainable Development (Prof. Risse)