Pre-interpolation loss behavior in neural networks
Date
2020
Author
Venter, Arthur Edgar William
Theunissen, Marthinus Wilhelm
Davel, Marelie Hattingh
Abstract
When training neural networks as classifiers, it is common to observe an increase in average test
loss while still maintaining or improving the overall classification accuracy on the same dataset.
In spite of the ubiquity of this phenomenon, it has not been well studied and is often dismissively
attributed to an increase in borderline correct classifications. We present an empirical investigation
showing that this phenomenon is actually a result of the differential manner in which test samples
are processed. In essence, test loss does not increase overall, but only for a small minority of samples.
Large representational capacities allow losses to decrease for the vast majority of test samples at the
cost of extreme increases for others. This effect appears to be caused mainly by increased parameter
values associated with the correctly processed sample features. Our findings contribute to the practical
understanding of a common behaviour of deep neural networks. We also discuss the implications of
this work for network optimisation and generalisation.
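To make the central observation concrete, the following is a minimal numeric sketch (not taken from the paper; the per-sample probabilities are invented purely for illustration) of how average cross-entropy loss can rise between two training checkpoints even while accuracy improves: the loss of the vast majority of test samples decreases, but one sample's loss grows extremely.

```python
import numpy as np

# Invented correct-class probabilities for 10 test samples at two checkpoints.
# These numbers are illustrative only and are not taken from the paper.
p_early = np.array([0.60, 0.70, 0.55, 0.65, 0.60, 0.70, 0.60, 0.45, 0.30, 0.20])
p_late  = np.array([0.90, 0.95, 0.90, 0.92, 0.90, 0.95, 0.90, 0.60, 0.55, 0.001])

def summarise(p):
    loss = -np.log(p)        # per-sample cross-entropy on the true class
    acc = np.mean(p > 0.5)   # correct when the true class holds most of the mass
                             # (exact for binary classification, a proxy otherwise)
    return loss, loss.mean(), acc

loss_early, mean_early, acc_early = summarise(p_early)
loss_late,  mean_late,  acc_late  = summarise(p_late)

print(f"early: mean loss {mean_early:.3f}, accuracy {acc_early:.0%}")
print(f"late:  mean loss {mean_late:.3f}, accuracy {acc_late:.0%}")
print(f"samples whose loss decreased: {(loss_late < loss_early).sum()} of {len(p_early)}")
# Mean loss rises (~0.69 -> ~0.86) and accuracy improves (70% -> 90%),
# yet 9 of the 10 per-sample losses decrease; a single sample's loss explodes.
```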