ReLU and sigmoidal activation functions
Date
2019-12
Authors
Pretorius, Arnold M.
Barnard, Etienne
Davel, Marelie H.
Abstract
The generalization capabilities of deep neural networks are not well understood, and in particular, the influence of activation functions on generalization has received little theoretical attention. Phenomena such as vanishing gradients, node saturation and network sparsity have been identified as possible factors when comparing different activation functions [1]. We investigate these factors using fully connected feedforward networks on two standard benchmark problems, and find that the most salient differences between networks with sigmoidal and ReLU activations relate to the way that class-distinctive information is propagated through a network.
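To make the experimental setup concrete, the following is a minimal sketch (assuming PyTorch, and illustrative layer sizes of 784 inputs, two hidden layers of 256 units, and 10 output classes; these are not necessarily the configurations used in the paper) of a fully connected feedforward network in which only the hidden-layer activation function is switched between ReLU and sigmoid.

import torch
import torch.nn as nn


def make_activation(name):
    # Return a fresh activation module per layer: "relu" or "sigmoid".
    return nn.ReLU() if name == "relu" else nn.Sigmoid()


class FeedForwardNet(nn.Module):
    # Fully connected feedforward classifier with a configurable activation.
    # Hypothetical illustration only; not the authors' released code.

    def __init__(self, in_dim, hidden_dims, num_classes, activation="relu"):
        super().__init__()
        layers, prev = [], in_dim
        for width in hidden_dims:
            layers.append(nn.Linear(prev, width))
            layers.append(make_activation(activation))
            prev = width
        # Linear output layer; softmax is applied inside the loss function.
        layers.append(nn.Linear(prev, num_classes))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        # Flatten image inputs to vectors before the first linear layer.
        return self.net(x.flatten(1))


# Two networks that differ only in the hidden-layer activation function,
# mirroring the sigmoid-vs-ReLU comparison described in the abstract.
relu_net = FeedForwardNet(784, [256, 256], 10, activation="relu")
sigmoid_net = FeedForwardNet(784, [256, 256], 10, activation="sigmoid")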