NWU Institutional Repository

Using summary layers to probe neural network behaviour

dc.contributor.author: Davel, Marelie Hattingh
dc.date.accessioned: 2021-03-18T07:49:06Z
dc.date.available: 2021-03-18T07:49:06Z
dc.date.issued: 2020
dc.description.abstract: No framework exists that can explain and predict the generalisation ability of deep neural networks in general circumstances. In fact, this question has not been answered for some of the least complicated of neural network architectures: fully-connected feedforward networks with rectified linear activations and a limited number of hidden layers. For such an architecture, we show how adding a summary layer to the network makes it more amenable to analysis, and allows us to define the conditions that are required to guarantee that a set of samples will all be classified correctly. This process does not describe the generalisation behaviour of these networks, but produces a number of metrics that are useful for probing their learning and generalisation behaviour. We support the analytical conclusions with empirical results, both to confirm that the mathematical guarantees hold in practice, and to demonstrate the use of the analysis process.
dc.identifier.issn: 1015-7999
dc.identifier.issn: 2313-7835
dc.identifier.uri: http://hdl.handle.net/10394/36916
dc.language.iso: en
dc.publisher: South African Institute of Computer Scientists and Information Technologists
dc.subject: Deep Learning
dc.subject: Machine Learning
dc.subject: Learning Theory
dc.subject: Generalization
dc.title: Using summary layers to probe neural network behaviour
dc.type: Article

Files

Original bundle

Name: summary-nodes-davel.pdf
Size: 1.6 MB
Format: Adobe Portable Document Format
License bundle

Name: license.txt
Size: 1.61 KB
Format: Item-specific license agreed upon to submission