## A comparison of VaR estimates in risk management

##### Abstract

In risk management one is often concerned with estimating high quantiles of the underlying distribution of a given sample of data. This is known to be challenging, especially if the quantile of interest is extremely high. Estimating such an extreme quantile becomes even more challenging when the sample size is very small, as is the case when measuring operational risk within a bank. Even though the focus of this thesis is on operational risk quantification, the core principles illustrated here can be applied to various fields.
The most popular approach to the quantification of operational risk is the so-called advanced measurement approach (AMA), which allows banks to use internal models to measure their operational risk. A widely used model under the AMA is the loss distribution approach (LDA). Essentially, the LDA combines a frequency distribution (for the number of losses) and a severity distribution (for the sizes of individual losses) to obtain the aggregate loss distribution, of which the risk modeller is ultimately interested in various quantiles. From the literature it is clear that the frequency distribution has a minimal impact on the aggregate distribution, which is mainly influenced by the severity distribution (see e.g. BCBS, 2011). For this reason the majority of this thesis focuses on modelling the severities and, in particular, on the estimation of extreme quantiles of the severity distribution. There are two main approaches to modelling the severities under the AMA: the so-called spliced distribution approach and the full-parametric approach. The first objective of this thesis is to investigate which quantile estimation procedure performs uniformly best in an operational risk context under the spliced distribution approach. The second objective is to investigate the quantile estimation procedures under the full-parametric approach. Attending to both objectives leads to adequate estimates of the severity distribution. Given adequate estimates of both the frequency and severity distributions, it is unclear from the literature which procedure is optimal for convolving the two in order to approximate the aggregate distribution. To this end, the third objective of this thesis is to investigate the most widely used approximation techniques. The three objectives are discussed in the following three paragraphs respectively.
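To make the LDA concrete, the following is a minimal Monte Carlo sketch of how frequency and severity combine into the aggregate distribution. The Poisson frequency, LogNormal severity and all parameter values are assumptions chosen purely for illustration, not the models used in this thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative choices: Poisson(25) frequency, LogNormal(10, 2) severity.
lam, mu, sigma = 25.0, 10.0, 2.0
n_sims = 100_000

# For each simulated year, draw a loss count, then sum that many
# independent severity draws -- a compound (aggregate) loss.
counts = rng.poisson(lam, size=n_sims)
agg = np.array([rng.lognormal(mu, sigma, size=n).sum() for n in counts])

# A regulatory-style high quantile of the aggregate distribution.
var_999 = np.quantile(agg, 0.999)
```

The loop mirrors the definition of the compound sum directly; in practice one would vectorise the simulation for speed.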
The basic idea underlying the spliced distribution approach is to estimate extreme quantiles of the severity distribution by focusing on extreme observations only. A natural tool for this purpose is extreme value theory (EVT). A well-known quantile estimator derived under EVT is the so-called Weissman quantile estimator. Alternative quantile estimators are desirable since the Weissman quantile estimator relies heavily on its asymptotic properties and, as mentioned previously, the sample sizes encountered in operational risk are often relatively small. A new quantile estimator was therefore investigated and is discussed in the thesis. The idea behind the new estimator is to first estimate a lower quantile (a quantile at a lower probability level) and then extrapolate it to the desired quantile using a multiplying factor. This estimator is referred to as the multipliers throughout this thesis. The multipliers proved to perform uniformly best under this approach in an operational risk context. The full-parametric approach consists of estimating extreme quantiles by fitting one class of distributions to the entire sample of data. Popular choices of distribution classes are the Burr, LogNormal and normal inverse Gaussian (NIG) distributions. This approach models the entire severity distribution, with every observation (as opposed to only the extreme observations used in the spliced distribution approach) informing the fit. It is well known that data in an operational risk context are often heavy-tailed, and in some cases even too heavy-tailed to model using the distributions popular in operational risk. A natural alternative is to use the logged versions of these distributions; the LogNormal distribution, for example, is the logged version of the Normal distribution in this sense.
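Returning to the spliced approach above: the classical Weissman extrapolation (a Hill estimate of the tail index combined with a power-law extrapolation beyond a high order statistic) can be sketched in a few lines. This is a minimal illustration on simulated Pareto data with an arbitrary choice of k; it is not the implementation used in the thesis.

```python
import numpy as np

def weissman_quantile(x, k, p):
    """Weissman estimator of the (1 - p)-quantile from the top k order statistics."""
    x = np.sort(np.asarray(x))
    n = x.size
    threshold = x[n - k - 1]                                 # the (k+1)-th largest value
    gamma = np.mean(np.log(x[n - k:]) - np.log(threshold))   # Hill estimator of the tail index
    # Power-law extrapolation from the threshold to the extreme level p.
    return threshold * (k / (n * p)) ** gamma

# Illustration on simulated Pareto data with tail index 0.5 (alpha = 2).
rng = np.random.default_rng(1)
sample = rng.pareto(2.0, size=500) + 1.0
q_hat = weissman_quantile(sample, k=50, p=0.001)   # extrapolated 99.9% quantile
```

Note how the target probability p = 0.001 lies far beyond the empirical range that k = 50 exceedances out of n = 500 could estimate directly; the extrapolation factor (k/(np))^gamma bridges that gap.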
Since the NIG distribution is parameter-rich (its four parameters make it very flexible), it was decided to investigate its logged version, referred to as the log normal inverse Gaussian (LNIG) distribution. Although the LNIG distribution outperformed some of the widely used distributions (especially in reducing the bias of its quantile estimates), it did not prove to be the optimal distribution to use when following the full-parametric approach.
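The construction behind the LNIG distribution — a random variable whose logarithm follows a NIG distribution — can be sketched with SciPy's `norminvgauss`. The simulated losses and the maximum-likelihood fitting route below are illustrative assumptions, not the procedure followed in the thesis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Illustrative heavy-tailed "losses"; any positive heavy-tailed sample works here.
losses = rng.lognormal(mean=10.0, sigma=2.0, size=1000)

# Fit a NIG distribution to the log-losses ...
log_losses = np.log(losses)
a, b, loc, scale = stats.norminvgauss.fit(log_losses)

# ... so an extreme quantile of the implied LNIG severity distribution
# is the exponential of the corresponding NIG quantile.
q999 = float(np.exp(stats.norminvgauss.ppf(0.999, a, b, loc=loc, scale=scale)))
```

Exponentiating a NIG quantile in this way is what makes the LNIG tail heavier than that of the NIG itself, mirroring the LogNormal/Normal relationship described above.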
A number of techniques exist for approximating the quantiles of the aggregate distribution: Monte Carlo simulation, Panjer recursion, fast Fourier transforms, single loss approximations and perturbative approximations. Even though these techniques are discussed extensively in the literature, no study comprehensively compares them in an operational risk context. To this end, an extensive comparison (theoretical and numerical) of these approximation techniques was conducted in order to determine which procedure performs uniformly best. It was found that the second-order perturbative approximation performed best for approximating extreme quantiles of aggregate distributions typically used in operational risk.
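For intuition on two of these techniques, the sketch below compares the first-order single loss approximation (and a common mean-type correction of it) against a Monte Carlo benchmark. The Poisson-LogNormal model and all parameter values are assumptions chosen purely for illustration.

```python
import numpy as np
from scipy import stats

lam, mu, sigma, q = 25.0, 10.0, 2.0, 0.999       # assumed Poisson(lam) frequency, LogNormal severity
sev = stats.lognorm(s=sigma, scale=np.exp(mu))

# First-order single loss approximation for subexponential severities:
# VaR_q(S) ~ F^{-1}(1 - (1 - q) / lam), with F the severity distribution.
sla1 = sev.ppf(1.0 - (1.0 - q) / lam)
# A common mean-type correction adds the expected aggregate of typical losses.
sla2 = sla1 + lam * sev.mean()

# Monte Carlo benchmark: simulate compound sums and read off the quantile.
rng = np.random.default_rng(3)
n_sims = 200_000
counts = rng.poisson(lam, size=n_sims)
all_losses = rng.lognormal(mu, sigma, size=counts.sum())
year = np.repeat(np.arange(n_sims), counts)       # which simulated year each loss belongs to
agg = np.bincount(year, weights=all_losses, minlength=n_sims)
mc = np.quantile(agg, q)
```

The single loss approximation captures the idea that, for heavy-tailed severities, an extreme aggregate loss is typically driven by one extreme individual loss; the Monte Carlo quantile serves as the reference against which such closed-form approximations can be checked.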