NWU Institutional Repository

Activation gap generators in neural networks

dc.contributor.author: Davel, Marelie H.
dc.date.accessioned: 2020-01-27T13:31:20Z
dc.date.available: 2020-01-27T13:31:20Z
dc.date.issued: 2019-12
dc.description.abstract: No framework exists that can explain and predict the generalisation ability of DNNs in general circumstances. In fact, this question has not been addressed even for some of the least complicated neural network architectures: fully-connected feedforward networks with ReLU activations and a limited number of hidden layers. Building on recent work [2] that demonstrates the ability of individual nodes in a hidden layer to draw class-specific activation distributions apart, we show how a simplified network architecture can be analysed in terms of these activation distributions and, more specifically, the sample distances or activation gaps each node produces. We provide a theoretical perspective on the utility of viewing nodes as activation gap generators, and define the gap conditions that are guaranteed to result in perfect classification of a set of samples. We support these conclusions with empirical results.
dc.identifier.citation: Marelie H. Davel, "Activation gap generators in neural networks", in Proc. South African Forum for Artificial Intelligence Research (FAIR2019), pp. 64-76, Cape Town, South Africa, December 2019.
dc.identifier.issn: 1613-0073
dc.identifier.uri: http://hdl.handle.net/10394/33958
dc.language.iso: en
dc.publisher: Proc. South African Forum for Artificial Intelligence Research (FAIR2019)
dc.subject: Generalization
dc.subject: fully-connected feedforward networks
dc.subject: activation distributions
dc.subject: MLP
dc.title: Activation gap generators in neural networks
dc.type: Other
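
The abstract above describes analysing individual hidden nodes as activation gap generators: each node produces class-specific activation distributions, and the separation (gap) between those distributions can be measured per node. As a rough illustration only, a minimal NumPy sketch of this idea follows. The toy data, random weights and the simple gap measure (difference of class means) are assumptions made for the sketch; the paper's formal definitions of activation gaps and gap conditions are given in the cited proceedings.

```python
# Illustrative sketch only: per-node "activation gaps" in a small ReLU MLP.
# The data, weights, and gap measure below are assumptions for illustration,
# not the paper's formal definitions.
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data: 2-D samples, labels in {0, 1}.
X = np.vstack([rng.normal(-1.0, 0.5, size=(50, 2)),
               rng.normal(+1.0, 0.5, size=(50, 2))])
y = np.array([0] * 50 + [1] * 50)

# A small fully-connected ReLU layer; random weights stand in for a trained network.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)

def hidden_activations(x):
    """Post-ReLU activations of the hidden layer for a batch of samples."""
    return np.maximum(0.0, x @ W1 + b1)

H = hidden_activations(X)  # shape: (num_samples, num_hidden_nodes)

# For each hidden node, compare the class-conditional activation distributions
# and report a simple per-node gap: the difference between the two class means.
gaps = H[y == 1].mean(axis=0) - H[y == 0].mean(axis=0)
for node, gap in enumerate(gaps):
    print(f"node {node}: activation gap (class-1 mean - class-0 mean) = {gap:+.3f}")
```

Nodes with a large positive or negative gap are the ones drawing the two class-specific activation distributions apart; the paper's gap conditions characterise when such per-node separations guarantee perfect classification of the sample set.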

Files

License bundle

license.txt (1.61 KB) - Item-specific license agreed upon submission