
    Bias reduction studies in nonparametric regression with applications: an empirical approach

    File
    Krugell_M.pdf (2.014 MB)
    Date
    2014
    Author
    Krugell, Marike
    Abstract
    The purpose of this study is to determine the effect of three improvement methods on nonparametric kernel regression estimators. The improvement methods are applied to the Nadaraya-Watson estimator with cross-validation bandwidth selection, the Nadaraya-Watson estimator with plug-in bandwidth selection, the local linear estimator with plug-in bandwidth selection, and a bias-corrected nonparametric estimator proposed by Yao (2012). The resulting regression estimates are evaluated by minimising a global discrepancy measure, namely the mean integrated squared error (MISE).

    Various methods for improving the precision and accuracy of an estimator exist in the machine learning context. The first two improvement methods introduced in this study are bootstrap-based. Bagging is an acronym for bootstrap aggregating; it was introduced by Breiman (1996a) from a machine learning viewpoint and by Swanepoel (1988, 1990) in a functional context. Bagging is primarily a variance reduction tool: it is implemented to reduce the variance of an estimator and thereby improve the precision of the estimation process. Bagging is performed by drawing repeated bootstrap samples from the original sample and generating multiple versions of an estimator; these replicates are then combined into an aggregated estimator. Bragging stands for bootstrap robust aggregating: a robust estimator is obtained by taking the sample median over the B bootstrap estimates instead of the sample mean, as in bagging.

    The third improvement method aims to reduce the bias component of the estimator and is referred to as boosting. Boosting is a general method for improving the accuracy of any given learning algorithm. The method starts off with a sensible estimator and improves it iteratively, based on its performance on a training dataset. Results and conclusions verifying the existing literature are provided, as well as new results for the new methods.
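
    To make the bootstrap-based methods concrete, here is a minimal Python sketch of a Nadaraya-Watson estimator combined with bagging and bragging. It assumes a Gaussian kernel and a fixed, hand-picked bandwidth h (the thesis selects the bandwidth by cross-validation or plug-in rules, which is omitted here); all function and parameter names are illustrative, not taken from the thesis.

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_eval, h):
    """Nadaraya-Watson estimate at x_eval with a Gaussian kernel of bandwidth h."""
    u = (x_eval[:, None] - x_train[None, :]) / h
    w = np.exp(-0.5 * u**2)                      # kernel weights K((x - X_i) / h)
    return (w @ y_train) / w.sum(axis=1)         # locally weighted average of the y's

def bagged_nw(x_train, y_train, x_eval, h, B=200, aggregate=np.mean, seed=None):
    """Bagging (aggregate=np.mean) or bragging (aggregate=np.median) over B
    bootstrap replicates of the Nadaraya-Watson estimator."""
    rng = np.random.default_rng(seed)
    n = len(x_train)
    reps = np.empty((B, len(x_eval)))
    for b in range(B):
        idx = rng.integers(0, n, size=n)         # draw a bootstrap resample
        reps[b] = nadaraya_watson(x_train[idx], y_train[idx], x_eval, h)
    return aggregate(reps, axis=0)               # mean -> bagging, median -> bragging

# Toy example: a noisy sine curve.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 2.0 * np.pi, 200))
y = np.sin(x) + rng.normal(scale=0.3, size=x.size)
grid = np.linspace(0.0, 2.0 * np.pi, 100)

plain   = nadaraya_watson(x, y, grid, h=0.3)
bagged  = bagged_nw(x, y, grid, h=0.3, aggregate=np.mean, seed=1)    # variance reduction
bragged = bagged_nw(x, y, grid, h=0.3, aggregate=np.median, seed=1)  # robust aggregation
```

    Averaging the B replicates smooths out the sampling variability of any single fit, which is exactly the variance-reduction role the abstract describes; swapping the mean for the median gives the robust bragging variant at no extra cost.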
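
    Boosting, as sketched in the abstract, can be illustrated in the same setting. The block below shows one common variant, L2 boosting with the kernel smoother above as the base learner: start from a zero fit and repeatedly add a shrunken smooth of the current residuals, which chips away at the bias of the estimate. The step size nu and the iteration count are illustrative choices, and the thesis's exact boosting scheme may differ.

```python
def boosted_nw(x_train, y_train, x_eval, h, n_iter=50, nu=0.1):
    """L2 boosting: iteratively smooth the current residuals and add a
    shrunken copy of that smooth to the fit (a bias-reduction device)."""
    fit_train = np.zeros_like(y_train, dtype=float)  # running fit at the training points
    fit_eval = np.zeros(len(x_eval))                 # running fit at the evaluation points
    for _ in range(n_iter):
        resid = y_train - fit_train                  # what the current fit still misses
        fit_train += nu * nadaraya_watson(x_train, resid, x_train, h)
        fit_eval += nu * nadaraya_watson(x_train, resid, x_eval, h)
    return fit_eval

boosted = boosted_nw(x, y, grid, h=0.3)  # reuses the toy data from the previous block
```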
    URI
    http://hdl.handle.net/10394/15345
    Collections
    • Natural and Agricultural Sciences [2778]

    Copyright © North-West University
    Contact Us | Send Feedback
    Theme by Atmire NV