NWU Institutional Repository

Is network fragmentation a useful complexity measure?

Abstract

It has been observed that the input space of deep neural network classifiers can exhibit ‘fragmentation’, where the model function rapidly changes class as the input space is traversed. The severity of this fragmentation tends to follow the double descent curve, reaching a maximum in the interpolation regime. We study this phenomenon in the context of image classification and ask whether fragmentation could be predictive of generalization performance. Using a fragmentation-based complexity measure, we show this to be possible by achieving good performance on the PGDL (Predicting Generalization in Deep Learning) benchmark. In addition, we report new observations related to fragmentation: it is not limited to the input space but occurs in the hidden representations as well; it follows the trends in the validation error throughout training; and it is not a direct result of increased weight norms. Together, these findings indicate that fragmentation is a phenomenon worth investigating further when studying the generalization ability of deep neural networks.
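As a rough illustration of the idea described above (not the paper's actual measure), the sketch below counts how often a classifier's predicted class changes along straight-line paths between pairs of inputs, then averages over random pairs. The names `predict_fn`, `path_fragmentation`, and `fragmentation_score` are hypothetical and introduced here only for illustration; the paper's fragmentation measure and its PGDL evaluation may differ in important details.

```python
# Minimal sketch of a fragmentation-style score, assuming only the abstract's
# description: the model function "rapidly changes class as the input space is
# traversed". Illustrative proxy only, not the authors' method.
import numpy as np

def path_fragmentation(predict_fn, x_start, x_end, n_steps=100):
    """Count class changes along the segment from x_start to x_end.

    predict_fn: maps an array of inputs, shape (batch, d), to integer labels.
    """
    ts = np.linspace(0.0, 1.0, n_steps)[:, None]          # interpolation weights
    points = (1.0 - ts) * x_start[None, :] + ts * x_end[None, :]
    labels = predict_fn(points)
    return int(np.sum(labels[1:] != labels[:-1]))          # class flips along path

def fragmentation_score(predict_fn, X, n_pairs=200, n_steps=100, seed=0):
    """Average class-change count over random pairs of inputs (a hypothetical
    aggregate; higher values suggest a more fragmented decision landscape)."""
    rng = np.random.default_rng(seed)
    total = 0
    for _ in range(n_pairs):
        i, j = rng.choice(len(X), size=2, replace=False)
        total += path_fragmentation(predict_fn, X[i], X[j], n_steps)
    return total / n_pairs

if __name__ == "__main__":
    # Toy demo: a 2-D "classifier" whose label oscillates with input norm,
    # standing in for a trained network's argmax prediction.
    toy_predict = lambda pts: (np.floor(np.linalg.norm(pts, axis=1) * 4) % 3).astype(int)
    X = np.random.default_rng(1).normal(size=(50, 2))
    print(f"mean class changes per path: {fragmentation_score(toy_predict, X):.2f}")
```

Averaging over many random pairs trades precision for cost; the counts here only illustrate the kind of quantity a fragmentation-based complexity measure could aggregate when predicting generalization.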

Citation

Mouton, C. et al. Is network fragmentation a useful complexity measure? arXiv:2411.04695v1 [cs.LG], 7 Nov 2024.
