Show simple item record

dc.contributor.advisor: Du Toit, J.V.
dc.contributor.author: Goosen, Johannes Christiaan
dc.date.accessioned: 2012-02-17T08:21:56Z
dc.date.available: 2012-02-17T08:21:56Z
dc.date.issued: 2011
dc.identifier.uri: http://hdl.handle.net/10394/5552
dc.description: Thesis (M.Sc. (Computer Science))--North-West University, Potchefstroom Campus, 2011.
dc.description.abstract: In this dissertation, generalized additive neural networks (GANNs) and multilayer perceptrons (MLPs) are studied and compared as prediction techniques. MLPs are the most widely used type of artificial neural network (ANN), but are considered black boxes with regard to interpretability. There is currently no simple a priori method to determine the number of hidden neurons in each of the hidden layers of an ANN. Guidelines exist that are either heuristic or based on simulations derived from limited experiments. A modified version of the neural network construction with cross-validation samples (N2C2S) algorithm is therefore implemented and utilized to construct good MLP models. This algorithm enables the comparison with GANN models. GANNs are a relatively new type of ANN, based on the generalized additive model. The architecture of a GANN is less complex than that of an MLP, and results can be interpreted with a graphical method called the partial residual plot. A GANN consists of an input layer where each of the input nodes has its own MLP with one hidden layer. Originally, GANNs were constructed by interpreting partial residual plots. This method is time consuming and subjective, which may lead to the creation of suboptimal models. Consequently, an automated construction algorithm for GANNs was created and implemented in the SAS statistical language. This system was called AutoGANN and is used to create good GANN models. A number of experiments are conducted on five publicly available data sets to gain insight into the similarities and differences between GANN and MLP models. The data sets include regression and classification tasks. In-sample model selection with the SBC model selection criterion and out-of-sample model selection with the average validation error as model selection criterion are performed. The models created are compared in terms of predictive accuracy, model complexity, comprehensibility, ease of construction and utility. The results show that the choice of model is highly dependent on the problem, as neither model consistently outperforms the other in terms of predictive accuracy. GANNs may be suggested for problems where interpretability of the results is important. The time taken to construct good MLP models by the modified N2C2S algorithm may be shorter than the time to build good GANN models by the automated construction algorithm.
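The abstract describes the GANN structure (one single-hidden-layer MLP per input variable, with the outputs summed) and in-sample selection with the SBC criterion. A minimal sketch of both ideas in Python/NumPy is given below; the function names, parameter layout, and the Gaussian-regression form of the SBC are illustrative assumptions, not the AutoGANN implementation described in the dissertation.

```python
import numpy as np

def gann_forward(X, params, b0=0.0):
    """Forward pass of a GANN-style additive model (illustrative sketch):
    each input variable x_j is fed to its own single-hidden-layer MLP f_j,
    and the univariate outputs are summed with an overall bias:
        y_hat = b0 + sum_j f_j(x_j).
    `params` holds one (W1, b1, w2) triple per input variable, where
    W1 and b1 parameterize the hidden layer and w2 the output weights.
    """
    n, p = X.shape
    y_hat = np.full(n, b0, dtype=float)
    for j, (W1, b1, w2) in enumerate(params):
        hidden = np.tanh(np.outer(X[:, j], W1) + b1)  # shape (n, h)
        y_hat += hidden @ w2                          # f_j(x_j), shape (n,)
    return y_hat

def sbc(sse, n, k):
    """Schwarz Bayesian Criterion (SBC/BIC) for Gaussian regression,
    up to an additive constant: n*ln(SSE/n) + k*ln(n). Lower is better;
    the k*ln(n) term penalizes model complexity."""
    return n * np.log(sse / n) + k * np.log(n)
```

In-sample selection then amounts to fitting candidate GANN architectures and keeping the one with the smallest SBC, trading fit (the SSE term) against the number of free parameters k.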
dc.publisher: North-West University
dc.subject: ANN
dc.subject: Artificial neural network
dc.subject: AutoGANN
dc.subject: GANN
dc.subject: Generalized additive neural network
dc.subject: In-sample model selection
dc.subject: MLP
dc.subject: Multilayer perceptron
dc.subject: N2C2S algorithm
dc.subject: Out-of-sample model selection
dc.subject: Prediction
dc.subject: Predictive modelling
dc.subject: SBC
dc.subject: Schwarz information criterion
dc.subject: KNN
dc.subject: Kunsmatige neurale netwerk (Afrikaans: artificial neural network)
dc.subject: Veralgemeende additiewe neurale netwerk (Afrikaans: generalized additive neural network)
dc.subject: VANN
dc.subject: In-steekproefmodel-seleksie (Afrikaans: in-sample model selection)
dc.subject: Multilaag perseptron (Afrikaans: multilayer perceptron)
dc.subject: N2K2S-algoritme (Afrikaans: N2C2S algorithm)
dc.subject: Buite-steekproefmodel-seleksie (Afrikaans: out-of-sample model selection)
dc.subject: Voorspelling (Afrikaans: prediction)
dc.subject: Voorspellingsmodellering (Afrikaans: predictive modelling)
dc.subject: Schwarz-inligtingskriterium (Afrikaans: Schwarz information criterion)
dc.title: Comparing generalized additive neural networks with multilayer perceptrons
dc.type: Thesis
dc.description.thesistype: Masters
dc.contributor.researchID: 10789901 - Du Toit, Jan Valentine (Supervisor)

