Article

Interval Estimation of Value-at-Risk Based on Nonparametric Models

1 Department of Applied Mathematics, Faculty of Sciences, Lebanese University, Beirut 2038 1003, Lebanon
2 Department of Economics, Faculty of Economic Sciences & Business Administration, Lebanese University, Beirut 2038 1003, Lebanon
3 Department of Robotics, LIRMM, University of Montpellier II, 61 rue Ada, 34392 Montpellier CEDEX 5, France
* Author to whom correspondence should be addressed.
Submission received: 13 August 2018 / Revised: 25 November 2018 / Accepted: 6 December 2018 / Published: 10 December 2018

Abstract

Value-at-Risk (VaR) has become the most important benchmark for measuring risk in portfolios of different types of financial instruments. However, as reported by many authors, estimating VaR is subject to a high level of uncertainty. One of the sources of uncertainty stems from the dependence of the VaR estimate on the choice of the computation method. As we show in our experiment, the lower the number of samples, the higher this dependence. In this paper, we propose a new nonparametric approach called maxitive kernel estimation of the VaR. This estimation is based on a coherent extension of the kernel-based estimation of the cumulative distribution function to convex sets of kernels. We thus obtain a convex set of VaR estimates that gathers all the conventional estimates based on a kernel belonging to the considered convex set. We illustrate this method in an empirical application to daily stock returns. We compare the proposed approach to other parametric and nonparametric approaches. In our experiment, we show that the interval-valued estimate of the VaR we obtain is likely to lead to more careful decisions, i.e., decisions that cannot be biased by an arbitrary choice of the computation method. In fact, the imprecision of the obtained interval-valued estimate is likely to be representative of the uncertainty in the VaR estimate.

1. Introduction

Controlling financial risk is an important issue for financial institutions, and the first task of risk management is to measure risk. Value-at-Risk (VaR) is probably the most widely used risk measure in financial institutions, and it has made its way into the Basel II capital-adequacy framework. VaR is an estimate of the largest loss over a specified time horizon at a particular probability level. For example, if the daily VaR of an investment portfolio is $10 million with a 95% confidence level, this means that we can be 95% confident that the portfolio will not lose more than $10 million over the next day. More formally, the VaR of a portfolio at a probability level α can be defined, following McNeil et al. (2005, p. 38), as
$$\text{VaR}_{\alpha} = \inf\{x : F_{X}(x) = P(X \le x) \ge \alpha\} = F_{X}^{-1}(\alpha),$$
where X is a random variable representing the daily percent return of a total stock index, with Cumulative Distribution Function (CDF) F_X. As McNeil et al. (2005) point out, VaR is simply a quantile of the corresponding loss distribution, which makes its computation easy. Various methods have been proposed in the relevant literature to compute the VaR. The methods differ in their underlying hypotheses and in the way the models are implemented, and, more problematically, they lead to different results. Thus, the choice of computation method is likely to have a large impact on the way a financial institution manages its credit portfolio.
The most commonly used approach is the nonparametric Historical Simulation (HS) method described in Linsmeier and Pearson (1997). It is based on the empirical CDF of the historically simulated returns, attributing an equal probability weight to each day's return. The HS approach is easy to implement but suffers from two major drawbacks: First, the success of the approach depends on the ability to collect a large series of data; second, it is an unconditional model, so a number of extreme scenarios must be present in the historical record to provide informative estimates of the tail of the loss distribution (McNeil et al. 2005). Moreover, estimates of extreme quantiles are inefficient since extrapolation beyond past observations is impossible. To avoid these drawbacks, several parametric and nonparametric approaches have been developed. For example, Butler and Schachter (1998) propose kernel estimators in conjunction with the historical simulation method. Charpentier and Oulidi (2010) calculate the VaR by using several nonparametric estimators based on Beta kernels; their study shows that those estimators improve the efficiency of traditional ones, not only for light-tailed but also for heavy-tailed distributions.
Numerous recent VaR models are parametric. From the risk manager's point of view, estimating VaR under the assumption of normally distributed asset returns is inappropriate and leads to underestimating the left tail at low probability levels. For this reason, most recent research papers go beyond the normal model and attempt to capture the related phenomena of heavy and long tails and the asymmetric form of return series. EVT (Extreme Value Theory) and the so-called GHYP (Generalized HYPerbolic) distributions are among the most widely used. The main advantage of hypothesizing a GHYP distribution is its ability to account for the statistical properties of financial market data such as volatility clustering, asymmetry, and heavy-tail phenomena (see McNeil et al. (2005) for an introduction and Paolella and Polak (2015) for a recent application). Kuester et al. (2006) use an EVT-based approach and focus on the long tails of the return distribution. Braione and Scholtes (2016) study the performance of VaR forecasting under different parametric distributional assumptions and show the predominance and the predictive ability of skewed and heavy-tailed distributions in the univariate case. The main drawback of using a parametric model is the high dependence of the obtained estimate on the hypothesized distribution model. To lower this dependence, some authors have proposed a group of semiparametric methods based on extreme value theory, which have been successfully used by financial analysts in estimating VaR (Danielsson and De Vries 2000). However, all the above-mentioned attempts share a common weakness: a low robustness to the modelling choices, leading to variations in the VaR estimate. To enable these methods to be used in a prudent manner, it would be of instrumental interest to estimate the variation of the computed VaR with respect to the modelling. Some attempts have been proposed to achieve such a goal. For example, Butler and Schachter (1998) propose a method to measure the precision of a VaR estimate. Jorion (1996) suggests that VaR always be reported with confidence intervals and shows that it is possible to improve the efficiency of VaR estimates using their standard errors. Kupiec (1995) proposes a method for quantifying the uncertainty in the estimated VaR induced by the fact that the return distribution is unknown. Pritsker (1997) proposes to estimate a standard error by using a Monte Carlo VaR analysis.
In this paper, we propose a completely new approach to estimate the variations in the VaR induced by choosing a particular model in a kernel-based approach. The idea is that an estimate of the CDF of the daily percent return can be used to compute the VaR, as suggested by Equation (1). Such an estimate can be obtained by using a kernel-based approach (Silvermann 1986). In Kernel Density Estimation (KDE), the role of the kernel is to achieve an interpolation that lowers the impact of sampling on the obtained estimation; roughly speaking, it smoothes the empirical CDF. However, as in the above-mentioned methods, there is a systematic bias induced by the choice of the kernel used to estimate the CDF. Our proposal is to build on the new nonparametric approach developed by Loquin and Strauss (2008a) to estimate the CDF. This approach makes use of the ability of a new kind of interpolating kernel, called a maxitive kernel, to represent a convex set of conventional kernels—which we call summative kernels (Loquin and Strauss 2008b). Within this approach, the estimate of the CDF is interval-valued. Such an interval-valued estimate of the CDF, also called a p-box (Destercke et al. 2007), is the convex set of all the CDFs that would have been computed by using the convex set of conventional summative kernels represented by the maxitive kernel. We show that this approach can advantageously be used to compute, with accuracy, the corresponding convex set of kernel-based VaR estimates. This approach has advantages over Monte Carlo based approaches since its complexity is comparable to that of classical kernel estimation. Moreover, within this approach the bounds are exact, while Monte Carlo approaches provide only an inner estimation of those bounds.
This paper is structured as follows. Section 2 reviews some nonparametric and parametric approaches and the bootstrap methods used to compute confidence intervals for the VaR. In Section 3, we introduce the empirical distribution function and the kernel cumulative estimator based on summative kernels. In Section 4, we define the maxitive kernel, which forms the basis of our approach, and show how an interval-valued estimation of the VaR, based on a maxitive kernel, can be derived with relevant properties in this context. Section 5 presents and discusses our empirical findings. Firstly, we show how the choice of the kernel function affects the VaR estimates. Secondly, we investigate the performance of the interval-valued estimate proposed in this paper by comparing it to three very competitive approaches: the simple normal VaR, the HS VaR, and the GHYP VaR. Finally, Section 6 concludes the paper with some further remarks.
Throughout this paper, we consider that the observations belong to a convex and compact subset (universe) $\Omega$ of $\mathbb{R}$, called the reference set. $\mathcal{P}(\Omega)$ is the collection of all measurable subsets of $\Omega$; it naturally contains the empty set and is closed under complementation and countable unions. Then $(\Omega, \mathcal{P}(\Omega))$ is a measurable space. Let $\mathcal{L}(\Omega)$ be the set of functions defined on $\Omega$ with values in $\mathbb{R}$.

2. A Review of Some Statistical Approaches and Nonparametric Bootstrap Methods

2.1. Historical Simulation

The HS method calculates the Value-at-Risk using real historical data on asset returns and captures the non-normal distribution of the returns. HS is a nonparametric method because it does not make a specific assumption about the distribution of returns. However, the HS method assumes that the distribution of past returns is a good and complete representation of expected future returns.
The VaR at the α% probability level is calculated as the α% percentile of the sorted return values. For example, with a return series of 1000 observations, the 1% VaR estimate is simply the negative of the 10th sample order statistic. This HS VaR can be defined as follows:
$$\text{VaR}_{\alpha} = \text{percentile}\big(\{r_{t}\}_{t=1}^{T},\ \alpha\%\big),$$
where r t is the asset return at time t.
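For concreteness, the order-statistic rule described above can be sketched in a few lines of MATLAB. This is a minimal sketch with our own function and variable names, not code taken from the paper; with 1000 observations and alpha = 0.01, it returns the 10th order statistic, whose negative is reported as the HS VaR in the example above.

```matlab
% Historical Simulation VaR as the empirical alpha-quantile of the returns.
% Sketch only: names (hs_var, r, alpha) are ours.
function v = hs_var(r, alpha)
    % r     : vector of observed returns
    % alpha : probability level, e.g. 0.01 for the 1% VaR
    rs = sort(r(:));                       % ascending order statistics
    k  = max(1, ceil(alpha * numel(rs)));  % index of the alpha-quantile
    v  = rs(k);                            % k-th order statistic (e.g. the 10th when n = 1000, alpha = 1%)
end
```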

2.2. GHYP Parametric Distribution

The generalized hyperbolic distribution (GHYP) was introduced by Barndorff-Nielsen (1978) and has been widely used to fit financial returns. The GHYP is an asymmetric, heavy-tailed distribution that can account for extreme events and cater for the skewness embedded in the data. It has been applied in diverse disciplines such as physics, biology, and financial mathematics (see Eberlein and Keller (1995); Sørensen and Bibby (1997); Paolella (2007, chp. 9)).
The probability density function (pdf) of the univariate GHYP distribution with the parameterization of Eberlein et al. (1998) is given, for $x \in \mathbb{R}$, by:
$$f_{GHD}(x;\lambda,\chi,\psi,\mu,\sigma^{2},\gamma) = a\, K_{\lambda-\frac{1}{2}}\!\left(\sqrt{\Big(\chi+\tfrac{(x-\mu)^{2}}{\sigma^{2}}\Big)\Big(\psi+\tfrac{\gamma^{2}}{\sigma^{2}}\Big)}\right) \exp\!\left(\frac{\gamma(x-\mu)}{\sigma^{2}}\right) \left(\Big(\chi+\tfrac{(x-\mu)^{2}}{\sigma^{2}}\Big)\Big(\psi+\tfrac{\gamma^{2}}{\sigma^{2}}\Big)\right)^{\frac{\lambda}{2}-\frac{1}{4}},$$
with the normalizing constant
$$a = \frac{(\chi\psi)^{-\lambda/2}\,\psi^{\lambda}\,\big(\psi+\tfrac{\gamma^{2}}{\sigma^{2}}\big)^{\frac{1}{2}-\lambda}}{(2\pi)^{1/2}\,\sigma\, K_{\lambda}\big(\sqrt{\chi\psi}\big)},$$
where
(a) $K_{\lambda}$ denotes a modified Bessel function of the third kind with index $\lambda$;
(b) $\lambda$ defines the subclasses of GHYP and is related to the tail flatness;
(c) $\chi$ and $\psi$ determine the distribution shape; in general, the larger these parameters are, the closer the distribution is to the normal distribution;
(d) $\mu$ is the location parameter;
(e) $\sigma$ is the dispersion parameter (standard deviation);
(f) $\gamma$ is the skewness parameter (if $\gamma = 0$, the distribution reduces to the symmetric generalized hyperbolic distribution).
The GHYP family contains many special cases known under special names, listed as follows:
  • Hyperbolic Distribution (HYP)
    If λ = 1, we get the hyperbolic distribution. The HYP distribution is characterized by a hyperbolic log-density, whereas the log-density of the normal distribution is a parabola; one may therefore expect the HYP distribution to be a coherent alternative for heavy-tailed data.
  • Normal Inverse Gaussian (NIG) Distribution
    If $\lambda = -\frac{1}{2}$, then the distribution is known as the normal inverse Gaussian (NIG) distribution. The NIG distribution is also widely used in financial modeling.
  • Variance Gamma (VG) Distribution
    If λ > 0 and χ = 0, then we obtain the limiting case known as the variance gamma (VG) distribution.
  • The Skewed Student's t-Distribution (St)
    When λ < 0 and ψ = 0, we get another limiting case, called the generalized hyperbolic skew Student's t distribution because it generalizes the usual Student's t distribution; the latter is obtained from the skewed t by setting the skewness parameter γ = 0.
In order to estimate the unknown parameters (λ, χ, ψ, μ, σ², γ), one can use maximum likelihood (ML) combined with a numerical optimization method (an EM-based algorithm).

2.3. Nonparametric Bootstrap Approach

The bootstrap method builds a sampling distribution for a statistic by resampling from the data at hand and is used in a large number of statistical problems. The term bootstrap was coined by Efron (1979), and is an allusion to the expression pulling oneself up by one's bootstraps—in this case, using the sample data as a population from which repeated samples are drawn (Fox 2002). The nonparametric bootstrap allows us to empirically estimate the sampling distribution of a statistic θ—such as a mean, median, standard deviation, or quantile (VaR)—or an estimator $\hat{\theta}$ without making assumptions about the form of the population, and without deriving the sampling distribution explicitly. The basic idea of the nonparametric bootstrap is as follows: Assume a data set $S = (x_{1},\dots,x_{n})$ is available. We draw a sample of size n from among the elements of S, sampling with replacement. The result is called a bootstrap sample, $S_{1} = (x_{11},\dots,x_{1n})$. We repeat this procedure a large number of times, B, selecting many bootstrap samples; the b-th such bootstrap sample is denoted $S_{b} = \{x_{b1},\dots,x_{bn}\}$. Next, we compute the statistic for each of the bootstrap samples; that is, $\hat{\theta}^{*}_{b} = t(S_{b})$, where t denotes the function used to compute the estimate from the data. The distribution of the $\hat{\theta}^{*}_{b}$ around the original estimate $\hat{\theta}$ is then analogous to the sampling distribution of the estimator around the population parameter. The variance is estimated by the sample variance of the bootstrap replicates, and the bias is estimated by the difference between the average of the bootstrap replicates and the original $\hat{\theta}$. The bootstrap estimates of bias and variance are therefore given approximately by:
$$\widehat{\mathrm{Bias}}(\hat{\theta}) \approx \frac{1}{B}\sum_{b=1}^{B}\hat{\theta}^{*}_{b} - \hat{\theta} = \bar{\theta}^{*} - \hat{\theta} \qquad\text{and}\qquad \widehat{\mathrm{var}}(\hat{\theta}) \approx \frac{1}{B-1}\sum_{b=1}^{B}\big(\hat{\theta}^{*}_{b} - \bar{\theta}^{*}\big)^{2}.$$
There are various methods for constructing bootstrap confidence intervals. The normal-theory interval assumes that $\hat{\theta}$ is normally distributed (which is often approximately the case for statistics in sufficiently large samples), and uses the bootstrap estimate of sampling variance, and perhaps of bias, to construct a 100(1 − α)-percent confidence interval of the form:
$$\Big(\hat{\theta} - \widehat{\mathrm{Bias}}(\hat{\theta})\Big) - z_{1-\frac{\alpha}{2}}\,\widehat{se}(\hat{\theta}),\qquad \Big(\hat{\theta} - \widehat{\mathrm{Bias}}(\hat{\theta})\Big) + z_{1-\frac{\alpha}{2}}\,\widehat{se}(\hat{\theta}),$$
where $\widehat{se}(\hat{\theta}) = \sqrt{\widehat{\mathrm{var}}(\hat{\theta})}$ is the bootstrap estimate of the standard error of $\hat{\theta}$, and $z_{1-\frac{\alpha}{2}}$ is the $1-\frac{\alpha}{2}$ quantile of the standard normal distribution (e.g., 1.96 for a 95-percent confidence interval, where α = 0.05).
We can also estimate confidence intervals using percentiles of the bootstrap distribution: the upper and lower bounds of the confidence interval are given by percentile points (or quantiles) of the distribution of the bootstrap parameter estimates. This alternative approach, called the bootstrap percentile interval, uses the empirical quantiles of the $\hat{\theta}^{*}_{b}$ to form a confidence interval $[\theta^{*}_{(\mathrm{lo})}, \theta^{*}_{(\mathrm{up})}]$, where $\theta^{*}_{(1)},\dots,\theta^{*}_{(r)}$ are the ordered bootstrap replicates of the statistic; $\mathrm{lo} = [(r+1)\alpha/2]$; $\mathrm{up} = [(r+1)(1-\alpha/2)]$; and $[\,\cdot\,]$ is the nearest-integer function. This basic percentile approach is itself limited, particularly if the parameter estimator is biased (Dowd 2005). It is therefore often better to use the bias-corrected, accelerated (or BC$_a$) percentile intervals. To find the BC$_a$ interval, we first calculate $z = \Phi^{-1}(\rho)$, where $\Phi^{-1}(\cdot)$ is the standard-normal quantile function and $\rho = \frac{1}{r+1}\sum_{b=1}^{r}\mathbb{1}\big(\hat{\theta}^{*}_{b}\le\hat{\theta}\big)$ is the adjusted proportion of bootstrap replicates at or below the original-sample estimate $\hat{\theta}$. If the bootstrap sampling distribution is symmetric, and if $\hat{\theta}$ is unbiased, this proportion will be close to 0.5 and z will be close to 0. Now, let $\theta_{(i)}$ be the value of $\hat{\theta}$ produced when the i-th observation is deleted from the sample; there are n of these quantities. Let $\bar{\theta} = \frac{1}{n}\sum_{i=1}^{n}\theta_{(i)}$ be the average of the $\theta_{(i)}$. We calculate
$$\beta = \frac{\sum_{i=1}^{n}\big(\theta_{(i)} - \bar{\theta}\big)^{3}}{6\Big[\sum_{i=1}^{n}\big(\theta_{(i)} - \bar{\theta}\big)^{2}\Big]^{3/2}}.$$
We then calculate
$$\beta_{1} = \Phi\!\left(z + \frac{z - z_{1-\frac{\alpha}{2}}}{1 - \beta\big(z - z_{1-\frac{\alpha}{2}}\big)}\right) \qquad\text{and}\qquad \beta_{2} = \Phi\!\left(z + \frac{z + z_{1-\frac{\alpha}{2}}}{1 - \beta\big(z + z_{1-\frac{\alpha}{2}}\big)}\right),$$
where $\Phi(\cdot)$ is the standard-normal cumulative distribution function. The values $\beta_{1}$ and $\beta_{2}$ are used to locate the endpoints of the corrected percentile confidence interval $[\theta^{*}_{(\mathrm{lo})}, \theta^{*}_{(\mathrm{up})}]$, where $\mathrm{lo} = [r\beta_{1}]$ and $\mathrm{up} = [r\beta_{2}]$. When β and z are both 0, $\beta_{1} = \Phi(-z_{1-\frac{\alpha}{2}}) = \Phi(z_{\frac{\alpha}{2}}) = \frac{\alpha}{2}$ and $\beta_{2} = \Phi(z_{1-\frac{\alpha}{2}}) = 1 - \frac{\alpha}{2}$, which corresponds to the (uncorrected) percentile interval.
For more background on the bootstrap approach and a broader array of applications see Efron and Tibshirani (1993); Davison and Hinkley (1997).
The bootstrap approach can be used to estimate the Value-at-Risk (VaR) and a confidence interval for the VaR. A resampling method based on the bootstrap and a bias correction for improving the VaR forecasting ability of the normal-GARCH model has been developed in Hartz et al. (2006). As mentioned in Dowd (2005), if we have a data set of n observations, we create a new data set by taking n drawings, each taken from the whole of the original data set. Each new data set created in this way gives us a new VaR estimate. We then create a large number of such data sets and estimate the VaR of each. The resulting VaR distribution function enables us to obtain estimates of the confidence interval for our VaR. To estimate confidence intervals using a bootstrap approach, we produce a bootstrapped histogram of resample-based VaR estimates, and then read the confidence interval from the quantiles of this histogram. A very good discussion of this and other improvements can be found in Dowd (2005, chp. 4). In Section 5.3, we use the bias-corrected method to estimate the confidence interval of the VaR obtained from the HS, normal, and GHYP distributions. The purpose of this simulation exercise is to compare these methodologies with our maxitive kernel approach, in order to verify that our interval-valued estimation of VaR performs properly.
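To make the resampling scheme concrete, the following MATLAB sketch computes a bootstrap percentile interval for the HS VaR; it is an illustration under our own naming (bootstrap_var_ci, B, conf), not the bias-corrected procedure used in Section 5.3.

```matlab
% Nonparametric bootstrap percentile interval for the HS VaR, following the
% percentile rule lo = [(B+1)alpha/2], up = [(B+1)(1-alpha/2)] given above.
function ci = bootstrap_var_ci(r, p, B, conf)
    % r : returns, p : VaR level (e.g. 0.01), B : replicates, conf : e.g. 0.95
    n = numel(r);
    v = zeros(B, 1);
    for b = 1:B
        idx  = randi(n, n, 1);              % resample with replacement
        rs   = sort(r(idx));
        v(b) = rs(max(1, ceil(p * n)));     % HS VaR of the b-th bootstrap sample
    end
    vs = sort(v);
    a  = 1 - conf;
    lo = max(1, round((B + 1) * a / 2));
    up = min(B, round((B + 1) * (1 - a / 2)));
    ci = [vs(lo), vs(up)];                  % percentile interval from the ordered replicates
end
```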

3. Summative Kernels

3.1. Summative Kernels and Probability Distributions

Kernels are used in signal processing and nonparametric statistics to define a weighted neighborhood around a location u ∈ Ω. In kernel regression, a kernel is used to estimate the conditional expectation of a random variable (see, e.g., Silvermann 1986; Wand and Jones 1995).
Definition 1.
A summative kernel—or conventional kernel—is a positive function $\kappa : \Omega \to \mathbb{R}^{+}$ that verifies the summativity property
$$\int_{\Omega}\kappa(u)\,du = 1.$$
From a basic summative kernel κ, we can define a summative kernel $\kappa_{\Delta}^{x}$ translated in $x \in \Omega$ and dilated with a bandwidth $\Delta > 0$ by
$$\forall u \in \Omega,\qquad \kappa_{\Delta}^{x}(u) = \frac{1}{\Delta}\,\kappa\!\left(\frac{u-x}{\Delta}\right).$$
By convention, $\kappa(u) = \kappa_{1}^{0}(u)$.
The summative kernels most used in functional estimation are usually unimodal, symmetric, and centered (i.e., they define a weighted neighborhood around the origin). When κ(u) is a symmetric and unimodal function, the following conditions are fulfilled:
$$\int_{\Omega}u\,\kappa(u)\,du = 0 \qquad\text{and}\qquad \int_{\Omega}u^{2}\kappa(u)\,du = c > 0.$$
For example, the Epanechnikov kernel proposed by Epanechnikov (1969) illustrated in Figure 1 is defined by
$$\kappa(u) = \begin{cases}\dfrac{3}{4}\,(1-u^{2}) & \text{if } |u|\le 1,\\[4pt] 0 & \text{otherwise}.\end{cases}$$
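The kernel and its translated/dilated version can be written in MATLAB in two lines; this is an illustrative sketch with our own names (kappa, kappa_dx), and the numerical check of the summativity property is only approximate.

```matlab
% Epanechnikov summative kernel (Equation (2)) and its translated/dilated
% version kappa_Delta^x(u) = (1/Delta) * kappa((u - x)/Delta).
kappa    = @(u) 0.75 .* (1 - u.^2) .* (abs(u) <= 1);
kappa_dx = @(u, x, Delta) (1 ./ Delta) .* kappa((u - x) ./ Delta);

u = linspace(-1, 1, 2001);
trapz(u, kappa(u))            % approximately 1 (summativity)
trapz(u, u .* kappa(u))       % approximately 0 (symmetric, centered)
```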
In the following, $K(\Omega)$ represents the set of unimodal, symmetric, and centered kernels. A summative kernel κ can be seen as a probability distribution inducing a probability measure $P_{\kappa} : \mathcal{P}(\Omega)\to[0,1]$ on the measurable space $(\Omega,\mathcal{P}(\Omega))$. $P_{\kappa}$ is defined by
$$\forall A\subseteq\Omega,\qquad P_{\kappa}(A) = \int_{A}\kappa(u)\,du.$$
Considering the summative kernel κ , we can define what we call the summative expectation as follows:
Definition 2.
Let s be a function of L ( Ω ) and let κ be a summative kernel of K ( Ω ) . The summative expectation of s, in the neighborhood defined by κ, is the classical expectation of s w.r.t. P κ :
$$E_{\kappa}(s) = \int_{\Omega}s\,dP_{\kappa} = \int_{\Omega}s(u)\,\kappa(u)\,du,$$
since d P κ = κ ( u ) d u .

3.2. Empirical Distribution Function

Let $(x_{1},\dots,x_{n})$ be a finite set of observations of n i.i.d. random variables $(X_{1},\dots,X_{n})$ with unknown pdf f and CDF F. One can estimate F by the empirical CDF $F_{n}$ defined by
$$F_{n}(x) = \frac{1}{n}\sum_{i=1}^{n}\mathbb{1}(x_{i}\le x),$$
where $\mathbb{1}$ is the indicator function, namely $\mathbb{1}(x_{i}\le x) = 1$ if $x_{i}\le x$ and zero otherwise. Obviously, $\mathbb{1}(x_{i}\le x) = 1$ with probability $P(x_{i}\le x) = F(x)$, and $\mathbb{1}(x_{i}\le x) = 0$ with probability $P(x_{i}> x) = 1 - F(x)$. Then $\mathbb{1}(x_{i}\le x)$ is a Bernoulli random variable with success probability F(x). Since the $x_{i}$ are independent, so are the $\mathbb{1}(x_{i}\le x)$. Thus $Y_{n} = \sum_{i=1}^{n}\mathbb{1}(x_{i}\le x)$ (a sum of independent $Ber\{F(x)\}$ random variables) is a $Bin\{n, F(x)\}$ random variable, and
$$F_{n}(x) \sim \frac{1}{n}\,Bin\{n, F(x)\}.$$
Then, $\forall x\in\Omega$,
$$E\big(F_{n}(x)\big) = F(x)\qquad\text{and}\qquad \mathrm{Var}\big(F_{n}(x)\big) = \frac{F(x)\big(1-F(x)\big)}{n}.$$
This implies that the empirical CDF is convergent in probability to the true CDF:
$$\forall x\in\Omega,\qquad F_{n}(x)\ \xrightarrow{\ P\ }\ F(x).$$
Thus the empirical estimate of Value-at-Risk
$$\text{VaR}_{\alpha,n} = \inf\{x : F_{n}(x)\ge\alpha\},$$
converges towards the true VaR α .

3.3. Summative Kernel Cumulative Estimator

The empirical distribution function (Equation (4)) is not smooth, as it jumps by $\frac{1}{n}$ at each point $x_{i}$ ($i = 1,\dots,n$). A kernel estimator of the CDF, introduced by authors such as Nadaraya (1964) and Watson and Leadbetter (1964), is a smoothed version of the empirical distribution estimator. Such an estimator arises as an integral of the Parzen-Rosenblatt kernel density estimator (see Rosenblatt 1956; Parzen 1962).
Definition 3.
The summative-kernel cumulative estimator of the CDF (also called the Parzen-Rosenblatt kernel cumulative estimator), based on the summative kernel κ is given, in each point x Ω , by
$$\hat{F}_{\kappa}^{n}(x) = \frac{1}{n}\sum_{i=1}^{n}\Gamma(x - x_{i}),$$
where $\Gamma(x) = \int_{-\infty}^{x}\kappa(u)\,du$.
Let $\kappa\in K(\Omega)$ be a second-order summative kernel with support $[-1,1]$. The function $\Gamma(u)$ then verifies the following properties:
$$\Gamma(u) = \begin{cases}0 & \text{if } u\in\,]-\infty,-1],\\ 1 & \text{if } u\in\,]1,+\infty[,\end{cases}$$
$$\int_{-1}^{1}\Gamma^{2}(u)\,du \le \int_{-1}^{1}\Gamma(u)\,du = 1,\qquad \int_{-1}^{1}\Gamma(u)\,\kappa(u)\,du = \frac{1}{2},\qquad \int_{-1}^{1}u\,\Gamma(u)\,\kappa(u)\,du = \frac{1}{2}\left(1-\int_{-1}^{1}\Gamma^{2}(u)\,du\right).$$
Note that if κ is the pdf of a probability measure P κ , then Γ is the cumulative distribution function of P κ . For example, κ being the Epanechnikov kernel defined by (2), the function Γ , depicted in Figure 2, has the following expression:
$$\Gamma(u) = \begin{cases}0 & \text{if } u\le -1,\\[2pt] -\dfrac{1}{4}u^{3}+\dfrac{3}{4}u+\dfrac{1}{2} & \text{if } |u|\le 1,\\[4pt] 1 & \text{if } u\ge 1.\end{cases}$$
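Using this integrated Epanechnikov kernel, the summative-kernel CDF estimator of Definition 3 can be sketched as follows in MATLAB; the function name, the explicit bandwidth argument Delta, and the loop over evaluation points are our own illustrative choices.

```matlab
% Summative-kernel estimator of the CDF (Definition 3) with the integrated
% Epanechnikov kernel Gamma above: F_hat(x) = (1/n) sum_i Gamma((x - x_i)/Delta).
function F = kernel_cdf(x, data, Delta)
    Gamma = @(u) (u >= 1) + (abs(u) < 1) .* (-0.25*u.^3 + 0.75*u + 0.5);
    F = zeros(size(x));
    for j = 1:numel(x)
        F(j) = mean(Gamma((x(j) - data) ./ Delta));   % average of the smoothed indicators
    end
end
```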
Let $\kappa\in K(\Omega)$ be a summative kernel with support $[-1,1]$. For a fixed x, the bias and the variance of $\hat{F}_{\kappa}^{n}(x)$ are given by
$$E\big[\hat{F}_{\kappa}^{n}(x)\big] - F(x) = \frac{\Delta^{2}}{2}\,f'(x)\int_{\Omega}u^{2}\kappa(u)\,du + o(\Delta^{2}),$$
$$\mathrm{var}\big(\hat{F}_{\kappa}^{n}(x)\big) = \frac{1}{n}\left[F(x)\big(1-F(x)\big) + \Delta\, f(x)\left(\int_{-1}^{+1}\Gamma^{2}(u)\,du - 1\right) + o(\Delta)\right].$$
Some theoretical properties of the estimator $\hat{F}_{\kappa}^{n}$ have been investigated, among others, by Winter (1973); Winter (1979); Sarda (1993); Yamato (1973); and Jones (1990). Some properties have long been known, e.g., the uniform convergence of $\hat{F}_{\kappa}^{n}$ towards F when the pdf $f = F'$ is continuous (Nadaraya 1964; Yamato 1973), or without conditions on f (Singh et al. 1983). The asymptotic expression of the Mean Integrated Squared Error (MISE), i.e., $E\int_{\Omega}\big(\hat{F}_{\kappa}^{n}(x)-F(x)\big)^{2}dx$, has been studied in Swanepoel (1988). For a continuous pdf f, it has been proved that the best kernel is the uniform kernel of bandwidth $\Delta>0$ defined by $\kappa(u) = \frac{1}{2\Delta}\mathbb{1}_{[-\Delta,\Delta]}(u)$, and for a function f that is discontinuous at a finite number of points, the best kernel is the exponential kernel of bandwidth $\Delta>0$ defined by $\kappa(u) = \frac{\Delta}{2}e^{-\Delta|u|}$.
The method of choosing the optimal value of bandwidth in kernel estimation of the cumulative distribution function is of crucial interest. Many procedures—such as plug-in and cross validation—have been proposed in the relevant literature (see e.g., Sarda 1993; Polansky and Baker 2000; Altman and Leger 1995) to choose (estimate) the optimal bandwidth for estimating the CDF of the random process underlying a sample. As mentioned in Quintela Del Rio and Estevez-Perez (2013), the Polansky and Baker plug-in bandwidth is given by
$$\Delta_{\mathrm{PB}} = \left(\frac{\zeta(\kappa)}{n\,\xi_{2}^{2}(\kappa)\,\hat{V}_{2}(g_{2})}\right)^{1/3},$$
where
$$\xi_{2}(\kappa) = \int_{\Omega}u^{2}\kappa(u)\,du;\qquad \zeta(\kappa) = 2\int_{\Omega}u\,\kappa(u)\,\Gamma(u)\,du,$$
$$\hat{V}_{r}(g) = \frac{1}{n^{2}g^{\,r+1}}\sum_{i=1}^{n}\sum_{j=1}^{n}H^{(r)}\!\left(\frac{x_{i}-x_{j}}{g}\right)$$
estimates
$$V_{r} = \int_{\Omega}f^{(r)}(u)\,f(u)\,du,$$
where r 2 is an even integer and
$$g_{2} = \left(\frac{2H^{(2)}(0)}{n\,\xi_{2}^{2}(H)\,V_{4}}\right)^{1/5}.$$
The kernel function H is not necessarily equal to κ . An iterative method for calculating the plug-in bandwidth has been proposed by Polansky and Baker (2000). As mentioned in Quintela Del Rio and Estevez-Perez (2013), the plug-in bandwidth is calculated as follows: let b > 0 be an integer, firstly, we calculate V ^ 2 b + 2 using
$$\hat{V}_{r} = \frac{(-1)^{r/2}\,r!}{\big(2\hat{\sigma}(x_{i})\big)^{r+1}\,(r/2)!\,\pi^{1/2}},$$
where σ is the standard deviation of the data, which can be estimated by $\hat{\sigma}(x_{i}) = \min\{\hat{s}, \frac{Q_{3}-Q_{1}}{1.349}\}$, with $\hat{s}$ the sample standard deviation and $Q_{1}$, $Q_{3}$ the first and third quartiles, respectively. Secondly, going from $j = b$ down to $j = 1$, we calculate $\hat{V}_{2j}(\hat{g}_{2j})$, where
$$\hat{g}_{2j} = \left(\frac{2H^{(2j)}(0)}{n\,\xi_{2}(H)\,\hat{V}_{2j+2}}\right)^{1/(2j+3)},$$
with
$$\hat{V}_{2j+2} = \begin{cases}\hat{V}_{2b+2}, & \text{if } j = b,\\ \hat{V}_{2j+2}(\hat{g}_{2j+2}), & \text{if } j < b.\end{cases}$$
Thirdly, the plug-in bandwidth is
$$\hat{\Delta}_{\mathrm{PB}} = \left(\frac{\zeta(\kappa)}{n\,\xi_{2}^{2}(\kappa)\,\hat{V}_{2j}(\hat{g}_{2j})}\right)^{1/3}.$$
In practice, for most applications, we consider b = 2 (Quintela Del Rio and Estevez-Perez 2013).
Sarda (1993) introduced a cross-validation method to estimate the bandwidth minimizing the MISE, based on the criterion
$$CV(\Delta) = \sum_{i=1}^{n}\Big(F_{n}(x_{i}) - \hat{F}_{\kappa;-i}^{n}(x_{i})\Big)^{2},$$
where $F_{n}$ is the empirical distribution function (Equation (4)) and $\hat{F}_{\kappa;-i}^{n}$ is the kernel cumulative distribution estimator computed by leaving out $x_{i}$:
$$\hat{F}_{\kappa;-i}^{n}(u) = \frac{1}{n-1}\sum_{j\ne i}\Gamma(u - x_{j}).$$
Bowman et al. (1998) proposed a modified cross-validation method which minimizes the function
$$CV(\Delta) = \frac{1}{n}\sum_{i=1}^{n}\int_{\Omega}\Big(\mathbb{1}(u\ge x_{i}) - \hat{F}_{\kappa;-i}^{n}(u)\Big)^{2}du.$$
For more details about bandwidth selection of summative kernel CDF estimation, we refer our reader to Quintela Del Rio and Estevez-Perez (2013).
Now, if we suppose that all parameters have been chosen appropriately for F ^ κ n to be a consistent estimate of the CDF, then a kernel estimator of VaR, denoted VaR ^ α , κ , can be easily obtained by inverting the equation F ^ κ n ( x ) = α . In that case, VaR ^ α , κ satisfies:
$$\frac{1}{n}\sum_{i=1}^{n}\Gamma\big(\widehat{\text{VaR}}_{\alpha,\kappa} - x_{i}\big) = \alpha.$$
The kernel VaR estimator $\widehat{\text{VaR}}_{\alpha,\kappa}$ can be seen as a smoothed version of the empirical VaR estimator $\text{VaR}_{\alpha,n}$ defined by Equation (5). For further details about the properties of the kernel VaR estimator, see, e.g., Gourieroux et al. (2000); Chen and Tang (2005); Sheather and Marron (1990). However, the validity of such an estimate highly depends on the appropriateness of the chosen kernel and bandwidth. In the next section, we propose another approach that allows us to estimate the dependence of the obtained estimate on the parameters of the method, and therefore to increase the robustness of the decision process based on the data.
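A simple way to invert the smoothed CDF numerically is to evaluate it on a grid and interpolate; the MATLAB sketch below does this for the Epanechnikov-based estimator. The grid size, the use of interp1, and all names are our own implementation choices, not the paper's procedure.

```matlab
% Kernel VaR estimate: solve F_hat(VaR) = alpha by grid evaluation of the
% smoothed CDF and interpolation of its inverse.
function v = kernel_var(data, alpha, Delta)
    Gamma = @(u) (u >= 1) + (abs(u) < 1) .* (-0.25*u.^3 + 0.75*u + 0.5);
    y = linspace(min(data) - Delta, max(data) + Delta, 2000);   % evaluation grid
    F = arrayfun(@(yj) mean(Gamma((yj - data) ./ Delta)), y);   % smoothed CDF on the grid
    [Fu, iu] = unique(F);              % keep strictly increasing points for interp1
    v = interp1(Fu, y(iu), alpha);     % invert the smoothed CDF at level alpha
end
```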

4. Maxitive Kernels

4.1. Maxitive Kernels and Possibility Distributions

In many applications, summative kernels and their bandwidths are chosen in a very empirical way. As proposed in Loquin and Strauss (2008b), the empirical character of choosing a kernel can be taken into consideration by working with a family of kernels rather than a single kernel. This family of kernels can be represented by using a maxitive kernel.
Definition 4.
A maxitive kernel is a positive function $\pi : \Omega\to[0,1]$ that verifies the following maxitivity property
$$\sup_{u\in\Omega}\pi(u) = 1.$$
From a basic maxitive kernel π , we can define a maxitive kernel π Δ x translated in x Ω and dilated with a bandwidth Δ > 0 by
$$\forall u\in\Omega,\qquad \pi_{\Delta}^{x}(u) = \pi\!\left(\frac{u-x}{\Delta}\right).$$
By convention, $\pi(u) = \pi_{1}^{0}(u)$.
For example, the triangular maxitive kernel proposed in Loquin and Strauss (2008b) illustrated in Figure 3 is defined by
$$\pi(u) = \begin{cases}1-|u| & \text{if } |u|\le 1,\\ 0 & \text{otherwise}.\end{cases}$$
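In MATLAB, this kernel and its translated/dilated version are one-liners; the sketch below uses our own names, and the last line simply checks numerically that the integral of the dilated triangular kernel (its specificity, discussed below) equals the bandwidth Delta.

```matlab
% Triangular maxitive kernel (Equation (9)) and its translated/dilated form
% pi_Delta^x(u) = pi((u - x)/Delta); note there is no 1/Delta factor,
% since maxitivity (sup = 1) replaces summativity.
pi_tri = @(u) max(0, 1 - abs(u));
pi_dx  = @(u, x, Delta) pi_tri((u - x) ./ Delta);

u = linspace(-2, 2, 4001);
trapz(u, pi_dx(u, 0, 2))     % approximately 2, i.e. Delta (area of the triangle)
```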
A maxitive kernel can be seen as a possibility distribution (Loquin and Strauss 2008b) inducing two dual non-additive confidence measures, a possibility measure $\Pi_{\pi}$ and a necessity measure $N_{\pi}$ (Dubois and Prade 1988; De Cooman 1997), defined, $\forall A\subseteq\Omega$, by
$$\Pi_{\pi}(A) = \sup_{x\in A}\pi(x)\ \ (\text{possibility}),\qquad N_{\pi}(A) = 1 - \Pi_{\pi}(A^{c})\ \ (\text{necessity}),$$
with A c being the complementary set of A in Ω .
A maxitive kernel π is said to dominate a summative kernel κ (Loquin and Strauss 2008b) if the possibility measure $\Pi_{\pi}$ dominates the probability measure $P_{\kappa}$, i.e.,
$$\forall A\subseteq\Omega,\qquad P_{\kappa}(A)\le\Pi_{\pi}(A).$$
In that sense, a maxitive kernel defines the convex set of summative kernels it dominates. This set is denoted M ( π ) and defined by
$$\mathcal{M}(\pi) = \big\{\kappa\in K(\Omega)\ /\ \forall A\subseteq\Omega,\ N_{\pi}(A)\le P_{\kappa}(A)\le\Pi_{\pi}(A)\big\}.$$
The specificity of a maxitive kernel π, defined by its integral, i.e., $Sp(\pi) = \int_{\Omega}\pi(u)\,du$, is a measure of the information contained in the possibility measure associated with π (Loquin and Strauss 2008b). A maxitive kernel $\pi_{1}$ is at least as informative as another one $\pi_{2}$ if $\forall u\in\Omega$, $\pi_{1}(u)\le\pi_{2}(u)$ (Dubois 2006). In that case, $\pi_{1}$ is at least as specific as $\pi_{2}$. Specificity also characterizes the amount of summative kernels dominated by π in the sense that, if $\forall u\in\Omega$, $\pi_{1}(u)\le\pi_{2}(u)$, then $Sp(\pi_{1})\le Sp(\pi_{2})$ and $\mathcal{M}(\pi_{1})\subseteq\mathcal{M}(\pi_{2})$. Moreover, if $\exists u_{0}\in\Omega$ such that $\pi_{1}(u_{0}) < \pi_{2}(u_{0})$, then $Sp(\pi_{1}) < Sp(\pi_{2})$ and $\mathcal{M}(\pi_{1})\subsetneq\mathcal{M}(\pi_{2})$.
In this context, Dubois et al. (2004) proved that the triangular maxitive kernel, defined by Equation (9), with support $[-\Delta,+\Delta]$ is the most specific maxitive kernel that dominates all summative symmetric and unimodal kernels whose support belongs to $[-\Delta,+\Delta]$.

4.2. Choquet Integrals and Maxitive Expectation

We start with the definition of a capacity, or non-additive measure, which generalizes the notion of an additive measure, i.e., a probability. The notion of capacity was introduced by Gustave Choquet in 1953 and has played an important role in game theory, fuzzy set theory, Dempster-Shafer theory, and many other fields (see Denneberg 1994; Shafer 1976; Choquet 1953).
Definition 5.
Let $(\Omega,\mathcal{P}(\Omega))$ be the measurable space. A capacity ν on $(\Omega,\mathcal{P}(\Omega))$ is a set function $\nu : \mathcal{P}(\Omega)\to[0,1]$ satisfying:
  • ν is normalized (i.e., $\nu(\emptyset) = 0$ and $\nu(\Omega) = 1$);
  • ν is monotone (i.e., $\forall A, B\in\mathcal{P}(\Omega)$, $A\subseteq B \Rightarrow \nu(A)\le\nu(B)$).
One of the most important concepts closely related to additive measures is integration. It has a natural generalization to non-additive measure theory. Historically the first applied integral with respect to non-additive measures is the Choquet Integral formalized by Gustave Choquet.
Definition 6.
Let ( Ω , P ( Ω ) ) be the measurable space and ν : P ( Ω ) 0 , 1 a capacity. Let s be a bounded function of L ( Ω ) . The continuous Choquet integral of s w.r.t. ν is defined by
$$\mathcal{C}_{\nu}(s) = \mathcal{C}\!\!\int_{\Omega}s\,d\nu = \int_{-\infty}^{0}\Big[\nu\big(\{u\in\Omega : s(u)\ge z\}\big) - 1\Big]dz + \int_{0}^{+\infty}\nu\big(\{u\in\Omega : s(u)\ge z\}\big)\,dz,$$
where the integral on the right hand side is a Riemann integral.
Definition 7.
Let ν be a capacity on $(\Omega,\mathcal{P}(\Omega))$ and let $x = (x_{1},\dots,x_{n})\in\Omega^{n}$ be a discrete function $[1,\dots,n]\to\Omega$ of n samples. The discrete Choquet integral of x w.r.t. ν is defined by:
$$\mathcal{C}_{\nu}(x) = \mathcal{C}\!\!\int_{\Omega}x\,d\nu = \sum_{i=1}^{n}\big(x_{\tau(i)} - x_{\tau(i-1)}\big)\,\nu\big(A_{\tau(i)}\big),$$
where τ is the permutation on $[1,\dots,n]$ such that $x_{\tau(1)}\le\dots\le x_{\tau(n)}$, $A_{\tau(i)} := \{\tau(i),\dots,\tau(n)\}$, and $x_{\tau(0)} = 0$ by convention.
Note that, a probability P being a special case of (additive) capacity, the Choquet integral w.r.t. P coincides with the classical expected value w.r.t. P.
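The discrete Choquet integral of Definition 7 is easy to code; the sketch below is ours (names and the capacity interface are assumptions), and the comment at the end illustrates the remark above: with the additive uniform capacity, the Choquet integral reduces to the sample mean.

```matlab
% Discrete Choquet integral (Definition 7) of a nonnegative sample x with
% respect to a capacity nu, given as a function handle on index subsets.
function c = choquet(x, nu)
    n = numel(x);
    [xs, tau] = sort(x(:)');          % x_tau(1) <= ... <= x_tau(n)
    xs = [0, xs];                     % x_tau(0) = 0 by convention
    c = 0;
    for i = 1:n
        A = tau(i:n);                 % A_tau(i) = {tau(i), ..., tau(n)}
        c = c + (xs(i+1) - xs(i)) * nu(A);
    end
end

% Example: with the uniform probability capacity nu(A) = |A|/n, the Choquet
% integral coincides with the classical expectation:
%   choquet([1 2 3], @(A) numel(A)/3)   % returns 2, the sample mean
```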
Recall that, if X is a random variable defined on a probability space ( Ω , P ( Ω ) , P ), then the classical expected value of X is given by
$$E_{P}(X) = \int_{\Omega}X\,dP = \int_{\Omega}X(u)\,dP(u),$$
where P : P ( Ω ) [ 0 , 1 ] is a probability measure on the measurable space ( Ω , P ( Ω ) ) and the integral is a Lebesgue-Stieltjes integral.
The notion of expectation w.r.t. a summative kernel can be extended to a maxitive kernel, as defined in Loquin and Strauss (2008b).
Definition 8.
Let π be a maxitive kernel and let s be a bounded function of L ( Ω ) . The expectation of s w.r.t π is defined by:
$$\overline{\underline{E}}_{\pi}(s) = \big[\underline{E}_{\pi}(s),\ \overline{E}_{\pi}(s)\big] = \big[\mathcal{C}_{N_{\pi}}(s),\ \mathcal{C}_{\Pi_{\pi}}(s)\big],$$
where $\Pi_{\pi}$ (resp. $N_{\pi}$) is the possibility (resp. necessity) measure induced by the maxitive kernel π and $\mathcal{C}$ is the Choquet integral.
An important property, that will be used in the sequel, is that the maxitive interval-valued expectation of s w.r.t. π is the convex set of all the summative precise-valued expectations w.r.t. all the summative kernels dominated by π Loquin and Strauss (2008b).
Property 1.
Let π be a maxitive kernel and let M ( π ) be the set of the summative kernels it dominates (see Equation (10)). Let s be a bounded function of L ( Ω ) . We have
$$\forall g\in\overline{\underline{E}}_{\pi}(s),\ \exists\kappa\in\mathcal{M}(\pi)\ :\ E_{\kappa}(s) = g,$$
and
$$\forall\kappa\in\mathcal{M}(\pi),\qquad E_{\kappa}(s)\in\overline{\underline{E}}_{\pi}(s),$$
where $E_{\kappa}(s)$ (resp. $\overline{\underline{E}}_{\pi}(s)$) is the summative (resp. maxitive) expectation of s w.r.t. κ (resp. π).

4.3. Maxitive Kernel Cumulative and VaR Estimator

As mentioned in Loquin and Strauss (2008a) (Theorem 4), the summative kernel cumulative estimator F ^ κ n , defined by (Equation (3)), in each point x Ω , can be written as the summative expectation of the empirical CDF F n , defined by (Equation (4)), in a probabilistic neighborhood defined by the summative kernel κ translated in x
$$\hat{F}_{\kappa}^{n}(x) = E_{\kappa^{x}}(F_{n}).$$
Based on this reformulation, Loquin and Strauss (2008a) propose an extension of the summative kernel estimator of the CDF. This extension is called the maxitive kernel estimator of the CDF, or interval-valued estimation of the CDF.
Definition 9.
Let F n be the empirical CDF based on the sample ( x 1 , , x n ) . Let π be a maxitive kernel and let E ¯ ̲ π ( · ) be the maxitive expectation w.r.t. π (Expression (14)). The maxitive kernel cumulative estimator is defined, x Ω , by
$$\overline{\underline{F}}_{\pi}^{n}(x) = \big[\underline{F}_{\pi}^{n}(x),\ \overline{F}_{\pi}^{n}(x)\big] = \overline{\underline{E}}_{\pi^{x}}(F_{n}) = \big[\underline{E}_{\pi^{x}}(F_{n}),\ \overline{E}_{\pi^{x}}(F_{n})\big].$$
The computation of $\overline{\underline{F}}_{\pi}^{n}$ involves two Choquet integrals. This computation (Loquin and Strauss 2008a) is given, for all $x\in\Omega$, by
$$\underline{F}_{\pi}^{n}(x) = \underline{E}_{\pi^{x}}(F_{n}) = \mathcal{C}_{N_{\pi^{x}}}(F_{n}) = \frac{1}{n}\sum_{i=1}^{n}\big(1 - \pi^{x}(x_{i})\big)\,\mathbb{1}(x_{i}\le x),$$
$$\overline{F}_{\pi}^{n}(x) = \overline{E}_{\pi^{x}}(F_{n}) = \mathcal{C}_{\Pi_{\pi^{x}}}(F_{n}) = \frac{1}{n}\sum_{i=1}^{n}\Big(\pi^{x}(x_{i})\,\mathbb{1}(x_{i} > x) + \mathbb{1}(x_{i}\le x)\Big).$$
The estimation of the CDF is usually computed on p regularly spaced points of Ω. Let $\{y_{j}\}_{j\in\{1,\dots,p\}}$ be those p points. The computation of $\big(\overline{\underline{F}}_{\pi}^{n}(y_{j})\big)_{j\in\{1,\dots,p\}}$ at each point $y_{j}$ of Ω is given by Algorithm 1.
Algorithm 1: Computation of $\big[\underline{F}_{\pi}^{n}(y_{j}),\ \overline{F}_{\pi}^{n}(y_{j})\big]$, $j\in\{1,\dots,p\}$.
Econometrics 06 00047 i007
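For concreteness, the two closed-form bounds above can be sketched in a few lines of MATLAB with the triangular maxitive kernel of bandwidth Delta. This is a sketch under our own naming, not the authors' Appendix A code or Algorithm 1 itself.

```matlab
% Interval-valued (maxitive-kernel) estimate of the CDF, evaluated on the
% grid y, with the triangular maxitive kernel of bandwidth Delta.
function [Flow, Fup] = maxitive_cdf(y, data, Delta)
    n    = numel(data);
    Flow = zeros(size(y));
    Fup  = zeros(size(y));
    for j = 1:numel(y)
        piw     = max(0, 1 - abs(data - y(j)) ./ Delta);          % pi^x(x_i)
        Flow(j) = sum((1 - piw) .* (data <= y(j))) / n;           % necessity-based lower bound
        Fup(j)  = sum(piw .* (data > y(j)) + (data <= y(j))) / n; % possibility-based upper bound
    end
end
```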
The interval-valued estimate $\overline{\underline{F}}_{\pi}^{n}$ based on a maxitive kernel π (Expression (9)) is the most specific interval containing all the summative estimates $\hat{F}_{\kappa}^{n}$ (Expression (6)) based on a summative kernel κ dominated by π, i.e., such that $\kappa\in\mathcal{M}(\pi)$.
Property 2.
Let π be a maxitive kernel on Ω and M ( π ) the set of all the summative kernels dominated by π. We have
$$\forall x\in\Omega,\ \forall\kappa\in\mathcal{M}(\pi),\qquad \hat{F}_{\kappa}^{n}(x)\in\overline{\underline{F}}_{\pi}^{n}(x),$$
and
$$\forall g\in\overline{\underline{F}}_{\pi}^{n}(x),\ \exists\kappa\in\mathcal{M}(\pi),\qquad \hat{F}_{\kappa}^{n}(x) = g.$$
Notice that, due to Property 2, both $\overline{F}_{\pi}$ and $\underline{F}_{\pi}$ are also estimates of the sought-after CDF. Clearly, $\underline{F}_{\pi}^{n}$ and $\overline{F}_{\pi}^{n}$ are two strictly increasing continuous functions on Ω, and so they both have an inverse. We can therefore immediately infer a maxitive interval-valued estimation of the VaR, $\overline{\underline{\text{VaR}}}_{\alpha,\pi} = [\underline{\text{VaR}}_{\alpha,\pi}, \overline{\text{VaR}}_{\alpha,\pi}]$:
$$\underline{\text{VaR}}_{\alpha,\pi} = \inf\{x : \underline{F}_{\pi}(x)\ge\alpha\} = \underline{F}_{\pi}^{-1}(\alpha),\qquad \overline{\text{VaR}}_{\alpha,\pi} = \inf\{x : \overline{F}_{\pi}(x)\ge\alpha\} = \overline{F}_{\pi}^{-1}(\alpha),$$
which inherits the properties of the interval-valued CDF: $\overline{\underline{\text{VaR}}}_{\alpha,\pi} = \big\{\widehat{\text{VaR}}_{\alpha,\kappa}\ |\ \kappa\in\mathcal{M}(\pi)\big\}$.
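A condensed sketch of the inversion step is given below; it reuses the maxitive_cdf sketch above, inverts both CDF bounds on a grid of p points, and returns the two VaR bounds in increasing order. The grid, the grid size p, and all names are our own choices, not the procedure of Appendix A.

```matlab
% Interval-valued VaR: invert the lower and upper CDF bounds at level alpha.
function iv = maxitive_var(data, alpha, Delta, p)
    y = linspace(min(data) - Delta, max(data) + Delta, p);
    [Flow, Fup] = maxitive_cdf(y, data, Delta);
    v_low = y(find(Flow >= alpha, 1, 'first'));   % smallest y with F_lower(y) >= alpha
    v_up  = y(find(Fup  >= alpha, 1, 'first'));   % smallest y with F_upper(y) >= alpha
    iv    = sort([v_up, v_low]);                  % interval-valued VaR estimate
end
```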
The Matlab program computing the interval-valued estimation of VaR is outlined in Appendix A to the present paper.

5. Experiment: Empirical Results

5.1. Data and Experimental Process

The data used in the present study consist of daily closing prices collected between January 2010 and December 2016 for four stock indexes from developed countries: S&P500, DJI, Nikkei225, and CAC40. The daily closing prices were converted to daily log-returns by taking the first differences of the natural logarithm of the daily prices: for an observed price $P_{t}$, the corresponding one-day log-return on day t is defined by $r_{t} = \ln\frac{P_{t}}{P_{t-1}}$. Table 1 contains descriptive statistics for the sample returns of the four considered stock indexes. We observe that the returns are negatively skewed and characterized by heavy tails, since the kurtoses are significantly greater than 3: the series have distributions with tails significantly fatter than those of the normal distribution. The Jarque-Bera test also indicates that the hypothesis of normality can be rejected for all four series at a very low significance level.
Due to these properties, the normal distribution would be an inappropriate model for calculating the daily VaR—as mentioned in the introduction. We thus rather envisage a nonparametric asymmetric approach based on kernel estimation of the CDF. However, as previously mentioned, estimating the VaR using a kernel based approach can be highly biased, since the choice of the kernel has an important impact on assessing the VaR estimate at a given probability level.
In this section, we propose to gauge this impact by estimating the VaR at several probability levels with different types of kernels, whose bandwidths have been adapted to the available datasets. We confirm that no kernel can be considered optimal in this context and show that the new maxitive interval-valued nonparametric approach we propose leads to a more cautious behavior when applied to real data. In fact, the bounds of the interval-valued estimate of the VaR, for a given probability level, can be instrumental in applications such as reserve requirements for banks. We then show that this approach allows discarding some parametric approaches, since they are not adapted to heavy-tailed data like those we consider.

5.2. Nonparametric Estimate of the VaR: Comparing Summative and Maxitive Approaches

In this experiment, we have considered estimating the VaR for low probability levels—namely at levels lower than 0.1—for each of the four datasets. In the first experiment, we focus on the DJI and Nikkei225 indexes. The CDFs have been estimated by considering four of the most used summative kernels, whose bandwidths have been adapted using the rule-of-thumb method: Epanechnikov, normal, biweight, and triweight (linear)—see Silvermann (1986) for their analytical expressions. Figure 4 illustrates the impact of the kernel choice on the VaR estimation. This impact is more marked for high-volatility stocks (Nikkei225). For example, referring to Figure 4a, it can be seen that the VaR estimate based on the Epanechnikov kernel is greater in absolute value than the one based on the normal kernel at the 1% probability level, while the opposite occurs at the 2.5% probability level. Referring to each plot of Figure 4, it is obvious that no kernel always provides the upper or lower CDF value. Since the CDF curves intersect each other at different probability levels, the risk of bias in the VaR estimate is relatively high, especially when the number of observations in the data set is low—compare Figure 4b–d. In this context, no kernel function can be considered more or less conservative than the others. The other indexes would have produced similar results.
This first experiment shows that the high dependence of the CDF estimate on the chosen kernel shape can highly impact and bias the VaR estimate. The second experiment aims to show that none of the four considered kernels can be considered optimal in this context. This experiment needs a ground truth, which is not available, so a resampling methodology was carried out to estimate such a ground truth. This methodology consists of five steps:
  • Step 1: Fit a GARCH model to the four stock returns data sets. The fitted model considered for the returns $r_{t}$ is an AR(1)-GARCH(1,1) given by:
    $$r_{t} = \mu_{t} + \varepsilon_{t},\qquad \varepsilon_{t} = \sigma_{t}z_{t},\qquad t = 1,\dots,n,\qquad \sigma_{t}^{2} = \eta_{0} + \eta_{1}\varepsilon_{t-1}^{2} + \eta_{2}\sigma_{t-1}^{2},$$
    where $r_{t}$ is the return value at time t, $z_{t}$ is a standard normal random variable, and n is the sample size (n = 200 and 1500). The conditional mean $\mu_{t}$ is assumed to follow an AR(1) model given by $\mu_{t} = \varphi_{0} + \varphi_{1}r_{t-1}$. By definition, $\varepsilon_{t}$ is serially uncorrelated with mean zero but has a time-varying conditional variance equal to $\sigma_{t}^{2}$. The three positive parameters $\eta_{0}, \eta_{1}, \eta_{2}$ and the two parameters $\varphi_{0}, \varphi_{1}$ are, respectively, the parameters of the GARCH(1,1) and AR(1) models. The ML method provides a systematic way to adjust the parameters of the model to give the best fit. Table 2 lists the fitted models for each daily stock returns series.
  • Step 2: Generate 1000 simulated samples from each returns data set using the coefficients obtained from the above fitted models. The main reason behind proposing GARCH models to simulate our real data is the dependence properties and the volatility phenomena of stock returns; see Engle (1982). (A minimal sketch of this simulation step is given after this list.)
    In a first step, we generate an i.i.d. series, $z_{t}$, by the random generator in MATLAB (R2010a), together with the conditional standard deviation series $\sigma_{t}$. The random numbers sampled are all assumed to be normally distributed with expectation zero and unit variance. Then, the innovations for the GARCH series are obtained via the equation $\varepsilon_{t} = \sigma_{t}z_{t}$. Finally, the generation of $r_{t}$ from the AR(1) process is straightforward. We generate series of length 1500 and 200 for each of the 1000 simulated samples.
  • Step 3: Calculate the VaR estimates for each sample generated in Step 2 using the four kernel functions. These VaRs are calculated at five chosen probability levels (α = 1%, 2.5%, 5%, 7.5%, 10%) and two time horizons (n = 200 and n = 1500).
  • Step 4: Assess the performance of each kernel function by comparing their VaRs with the empirical VaRs. The performance criterion we examine is the mean square error (MSE):
    $$\text{MSE} = E\Big[\big(\hat{Q}^{(n)}(p) - Q(p)\big)^{2}\Big] \approx \frac{1}{m}\sum_{k=1}^{m}\big(\hat{Q}_{n}^{(k)}(p) - Q(p)\big)^{2},$$
    where $\hat{Q}^{(n)}(p)$ denotes the quantile estimates at level p obtained from the simulated samples.
  • Step 5: Tabulate the results that lead us to some important conclusions.
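The simulation of Step 2 can be sketched as follows in MATLAB. This is an illustrative sketch only: the function name is ours, the parameters passed in are placeholders rather than the fitted values of Table 2, and the recursion is initialized at the unconditional variance.

```matlab
% One simulated AR(1)-GARCH(1,1) path, as in Step 2 above.
function r = simulate_ar_garch(n, phi0, phi1, eta0, eta1, eta2)
    z    = randn(n, 1);                      % i.i.d. standard normal innovations z_t
    r    = zeros(n, 1);
    epsv = zeros(n, 1);
    sig2 = eta0 / (1 - eta1 - eta2);         % start at the unconditional variance (assumption)
    for t = 2:n
        sig2    = eta0 + eta1 * epsv(t-1)^2 + eta2 * sig2;   % sigma_t^2 recursion
        epsv(t) = sqrt(sig2) * z(t);                         % epsilon_t = sigma_t * z_t
        r(t)    = phi0 + phi1 * r(t-1) + epsv(t);            % r_t = mu_t + epsilon_t
    end
end
```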
The MSE values of the VaR estimates at each level α (%) are reported in Table 3 and Table 4. From Table 3 and Table 4, for the S&P500, the best kernel function for estimating the VaR at the 10%, 7.5%, 5%, and 2.5% levels is the Epanechnikov kernel, while the normal kernel is the best at the 1% probability level. For the Nikkei225, the normal kernel seems to perform better than the other kernel functions. For the DJI, the Epanechnikov kernel estimates the VaR more accurately, while for the CAC40 the normal kernel is the most accurate. From Table 4, when n = 200, it appears that the biweight and Epanechnikov kernels are more likely to estimate the VaR accurately at several probability levels. However, the performance of the kernel functions rapidly declines as n and α get smaller.
The simulation results show that, for large sample sizes (here n = 1500), all kernel functions are consistent estimators, i.e., the MSE values are close to 0. On the other hand, the MSE values are larger for the smaller sample size (here n = 200). This demonstrates that choosing a particular kernel, when the sample size is low, is risky. Therefore, a coherent interval-valued estimate of the VaR, as we propose, is likely to provide a more careful decision in this context.

5.3. Interval Estimation of Value-at-Risk and Some Numerical Comparisons

Here we apply the maxitive kernel estimator presented in Section 4.3 to obtain the lower and upper bounds of the VaR corresponding to each of the four stock indexes for three probability levels (α = 10%, 5%, 1%). To obtain the optimal bounds for the VaR, the bandwidth has been, once again, chosen using the most popular methods, namely the cross-validation and plug-in methods presented in Section 3.3. As shown in Table 5, the maximum available bandwidth is taken to ensure that all kernel estimators are inside the interval given by the maxitive kernel estimator. In order to examine the performance of the maxitive kernel estimation method, we divide the data into three samples: the first sample corresponds to a six-year time horizon, the second to a three-year time horizon, and the third to a one-year time horizon. In a first step, after choosing the best-fit distribution from the GHYP family by using the AIC criterion, we compute the VaR across the candidate distributions. Next, we compute the VaR confidence intervals based on these three distributions using the bias-corrected and accelerated (BCa) bootstrap method. Finally, we compare our proposed maxitive interval-valued estimate with those based on the bootstrap technique under different sample sizes. The bootstrap confidence intervals of the VaR for the three approaches—GHYP, normal, and HS—are presented together in Table 6, Table 7 and Table 8. With these point VaRs at hand for comparison purposes, we evaluate the performance of our approach. In this section, we have chosen to illustrate our results and to discuss the benefit of our approach in two ways. In Table 6, Table 7 and Table 8, we show the explicit results for the minimum and maximum values, as well as the width of the interval estimation of the VaR, for several probability levels between 1% and 10%. Figure 5 shows the PP-plots, i.e., the $\hat{F}(x_{i})$ of all models against $F(x_{i})$. Based on the numerical results we can formulate several conclusions. Evidently, the lower and upper bounds increase with the probability level α. However, it is important to note that the interval estimates of the VaR for the Nikkei225 and CAC40 indexes are wider than those of the DJI and S&P500 indexes. Note also that the influence of stock market volatility is much more important than the influence of the sample size. Moreover, the maxitive VaR intervals are tighter than those derived from the GHYP, HS, and normal distributions. For example, for the short one-year time horizon with the high-volatility stock (Nikkei225) and at the 1% probability level, the width of the interval-valued VaR is 1.073, while the widths of the BCa (99%) confidence intervals based on the HS, normal, and GHYP distributions are 4.236, 1.925, and 4.246, respectively (see Table 6). Furthermore, we can remark that the maxitive interval-valued estimation of the VaR lies to the left of the normal VaR, which shows the ability of our approach to model very dangerous financial risks, while the normal distribution is not consistent with tail thickness and tail risk. These results indicate that our approach is more accurate and informative, especially for the smaller sample sizes.
Next, in order to inspect the goodness-of-fit of the models used, a graphical tool (PP-plot) is constructed (Figure 5) to compare the empirical cumulative function to the fitted cumulative functions. This plot confirms that the Epanechnikov kernel function and the GHYP distribution give a good global fit for the four return series: the points of the PP-plot are close to the 45-degree line and also lie within the interval estimation given by the maxitive kernel.
In contrast, from the same figure, the PP-plot of the normal cumulative function against the empirical cumulative function shows that the left-end pattern is above the 45-degree line and the right end is below it. Thus, the normal distribution underestimates the VaR at low probability levels. This is due to the fact that the normal distribution ignores the presence of fat tails in the actual distribution. Based on these results, we can conclude that the GHYP and the maxitive kernel method provide a better fit than the normal distribution to market return data. Thus, the maxitive kernel method seems to be a good choice for estimating the VaR risk.

6. Conclusions

Using an estimate of the Value-at-Risk (VaR) based on a small-sized sample may pose a risk in financial applications, due to the high dependence of the VaR estimate on the computation method. In fact, a VaR estimate can be computed in many ways, including parametric and nonparametric approaches. We have shown, with experiments based on the daily closing prices of four stock indexes, that no method can be said to be optimal for obtaining this estimate. Moreover, the bias induced by choosing a particular method is particularly pronounced for estimates based on small samples. In this paper, we have presented a new method for computing the VaR which highly differs from its competitors in the sense that it is interval-valued. This interval-valued VaR estimate is the set of all estimates that could have been obtained, with the same data sample, by using a set of kernel-based estimation methods. In our experiments, we noted that the output of parametric methods, like the GHYP VaR estimate, always belongs to the interval-valued VaR we propose, while others, like the normal VaR estimate, do not. It appears that VaR estimates belonging to the interval-valued VaR estimate we propose are likely to be less risky than those that do not. Moreover, a wide interval-valued VaR estimate is a marker of high risk for a trader, since it reflects the thick tails, pronounced skewness, and excess kurtosis of financial asset price returns.
The interval-valued estimation based on a maxitive kernel that we propose in this paper is a convex envelope of the kernel VaR estimates. Although the VaR is probably one of the most popular tools in risk management, an alternative measure, which satisfies the conditions for a coherent risk measure, has been proposed (see, e.g., Rockafellar and Uryasev 2000; Artzner et al. 1999). This risk measure is called Expected Shortfall (ES; also known as conditional VaR or average VaR). Indeed, the Basel Committee published, in January 2016, revised standards for minimum capital requirements for market risk which include a shift from Value-at-Risk to expected shortfall as the preferred risk measure (Basel Committee on Banking Supervision 2016). Future research should therefore be conducted into finding an interval-valued estimation of the ES, based on a maxitive kernel, and comparing it with the interval-valued estimation of the VaR and with the ES for various distributions. In this regard, Broda and Paolella (2011) present easily computed expressions for evaluating the expected shortfall for numerous distributions which are now commonly used for modelling asset returns. Another interesting avenue of research would be to construct a CoVaR interval combining a GARCH model with a maxitive kernel.

Author Contributions

All authors contributed equally to the paper.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Matlab Code for Maxitive Interval-Valued Estimation of VaR

Econometrics 06 00047 i001Econometrics 06 00047 i002Econometrics 06 00047 i003

Integrated Summative Kernels

Econometrics 06 00047 i004Econometrics 06 00047 i005

Triangular Maxitive Kernel

Econometrics 06 00047 i006

References

  1. Bowman, Adrian, Peter Hall, and Tania Prvan. 1998. Bandwidth selection for the smoothing of distribution functions. Biometrika 85: 799–808. [Google Scholar]
  2. Altman, Naomi, and Christian Leger. 1995. Bandwidth selection for kernel distribution function estimation. Journal of Statistical Planning and Inference 46: 195–214. [Google Scholar] [CrossRef] [Green Version]
  3. Artzner, Philippe, Freddy Delbaen, Jean-Marc Eber, and David Heath. 1999. Coherent measures of risk. Mathematical Finance 9: 203–28. [Google Scholar] [CrossRef]
  4. Barndorff-Nielsen, Ole. 1978. Hyperbolic distributions and distributions on hyperbolae. Scandinavian Journal of Statistics 5: 151–57. [Google Scholar]
  5. Basel Committee on Banking Supervision. 2016. Minimum Capital Requirements for Market Risk. Technical Report. Available online: http://www.bis.org (accessed on 3 November 2018).
  6. Braione, Manuela, and Nicolas Scholtes. 2016. Forecasting value-at-risk under different distributional assumptions. Econometrics 4: 1–27. [Google Scholar] [CrossRef]
  7. Broda, Simon A., and Marc S. Paolella. 2011. Expected shortfall for distributions in finance. In Statistical Tools for Finance and Insurance. Edited by Pavel Čížek, Wolfgang Karl Härdle and Rafał Weron. Berlin: Springer, pp. 57–99. [Google Scholar]
  8. Butler, J. S., and Barry Schachter. 1998. Estimating value-at-risk with a precision measure by combining kernel estimation with historical simulation. Review of Derivatives Research 1: 371–90. [Google Scholar]
  9. Charpentier, Arthur, and Abder Oulidi. 2010. Beta kernel quantile estimators of heavy-tailed loss distributions. Statistics and Computing 20: 35–55. [Google Scholar] [CrossRef]
  10. Chen, Song Xi, and Cheng Yong Tang. 2005. Nonparametric inference of value-at-risk for dependent financial returns. Journal of Financial Econometrics 3: 227–55. [Google Scholar] [CrossRef]
  11. Choquet, Gustave. 1953. Théorie des capacités. Annales de l’Institut Fourier 5: 131–295. [Google Scholar] [CrossRef]
  12. Danielsson, Jon, and Casper De Vries. 2000. Value-at-risk and extreme returns. Annales d’économie et de Statistique 60: 236–69. [Google Scholar] [CrossRef]
  13. Davison, Anthony Christopher, and David Victor Hinkley. 1997. Boostrap Methods and Their Applications. New York: Cambridge University Press. [Google Scholar]
  14. De Cooman, Gert. 1997. Possibility theory, 1: The measure-and integral-theoretic groundwork. International Journal of General Systems 25: 291–323. [Google Scholar] [CrossRef]
  15. Denneberg, Dieter. 1994. Non Additive Measure and Integral. Dordrecht: Kluwer Academic Publishers. [Google Scholar]
  16. Destercke, Sebastien, Didier Dubois, and Eric Chojnacki. 2007. On the relationships between random sets, possibility distributions, p-boxes and clouds. Paper presented at 28th Linz Seminar on Fuzzy Set Theory, Linz, Austria, February 6–10. [Google Scholar]
  17. Dowd, Kevin. 2005. Measuring Market Risk, 2nd ed. New York: John Wiley & Sons Inc. [Google Scholar]
  18. Dubois, Didier. 2006. Possibility theory and statistical reasoning. Computational Statistics and Data Analysis 51: 47–69. [Google Scholar] [CrossRef] [Green Version]
  19. Dubois, Didier, and Henri Prade. 1988. Théorie des possibilités: Applications à la représentation des connaissances en informatique. Paris: Masson. [Google Scholar]
  20. Dubois, Didier, Henri Prade, Laurent Foulloy, and Gilles Mauris. 2004. Probability-possibility transformations, triangular fuzzy sets, and probabilistic inequalities. Reliable Computing 10: 273–97. [Google Scholar] [CrossRef]
  21. Eberlein, Ernst, and Ulrich Keller. 1995. Hyperbolic distributions in finance. Bernoulli 1: 281–99. [Google Scholar] [CrossRef]
  22. Eberlein, Ernst, Ulrich Keller, and Karsten Prause. 1998. New insights into smile, mispricing, and value at risk: The hyperbolic model. The Journal of Business 71: 371–405. [Google Scholar] [CrossRef]
  23. Efron, Bradley. 1979. Bootstrap methods: Another look at the jackknife. The Annals of Statistics 7: 1–26. [Google Scholar] [CrossRef]
  24. Efron, Bradley, and Robert J. Tibshirani. 1993. An Introduction to the Bootstrap. London: Chapman and Hall. [Google Scholar]
  25. Engle, F. Robert. 1982. Autoregressive conditional heteroscedasticity with estimates of the variance of united kingdom inflation. Econometrica 50: 987–1007. [Google Scholar] [CrossRef]
  26. Epanechnikov, V. A. 1969. Nonparametric estimation of a multidimensional probability density. Theory of Probability and its Applications 14: 153–58. [Google Scholar] [CrossRef]
  27. Fox, John. 2002. Bootstrapping Regression Models Appendix to an R and S-Plus Companion to Applied Regression. Available online: http://statweb.stanford.edu/~owen/courses/305a/FoxOnBootingRegInR.pdf (accessed on 10 December 2018).
  28. Gourieroux, Christian, Jean-Paul Laurent, and Olivier Scaillet. 2000. Sensitivity analysis of values at risk. Journal of Empirical Finance 7: 225–45. [Google Scholar] [CrossRef]
  29. Hartz, Christoph, Stefan Mittnik, and Marc Paolella. 2006. Accurate value-at-risk forecasting based on the normal-garch model. Computational Statistics and Data Analysis 51: 2295–312. [Google Scholar] [CrossRef]
  30. Jones, Michael Chris. 1990. The performance of kernel density functions in kernel distribution function estimation. Statistics & Probability Letters 37: 129–32. [Google Scholar]
  31. Jorion, Philippe. 1996. Risk2: Measuring the risk in value at risk. Financial Analysts Journal 52: 47–56. [Google Scholar] [CrossRef]
  32. Kuester, Keith, Stefan Mittnik, and Marc S. Paolella. 2006. Value–at–risk prediction: A comparison of alternative strategies. Journal of Financial Econometrics 4: 53–89. [Google Scholar] [CrossRef]
  33. Kupiec, Paul. 1995. Techniques for verifying the accuracy of risk measurement models. Journal of Derivatives 3: 73–84. [Google Scholar] [CrossRef]
  34. Linsmeier, Thomas J., and Neil D. Pearson. 1997. Quantitative disclosures of market risk in the sec release. Pearson Accounting Horizons 11: 107–35. [Google Scholar]
  35. Loquin, Kevin, and Olivier Strauss. 2008a. Imprecise functional estimation: The cumulative distribution case. In Soft Methods in Probability and Statistics. Edited by Didier Dubois, Maria Asuncion Lubiano, Henri Prade, Przemyslaw Grzegorzewski and Olgierd Hryniewicz. Advanced in Soft Computing. Heidelberg: Springer, vol. 48, pp. 175–82. [Google Scholar]
  36. Loquin, Kevin, and Olivier Strauss. 2008b. On the granularity of summative kernels. Fuzzy Sets and Systems 159: 1952–72. [Google Scholar] [CrossRef]
  37. McNeil, Alexander J., Rudiger Frey, and Paul Embrechts. 2005. Quantitative Risk Management. Princeton: Princeton University Press. [Google Scholar]
  38. Nadaraya, Elizbar. A. 1964. Some new estimates for distribution function. Theory of Probability and its Application 9: 497–500. [Google Scholar] [CrossRef]
  39. Paolella, Marc S. 2007. Intermediate Probability: A Computational Approach. Chichester: John Wiley & Sons. [Google Scholar]
  40. Paolella, Marc S., and Paweł Polak. 2015. Comfort: A common market factor non-gaussian returns model. Journal of Econometrics 187: 593–605. [Google Scholar] [CrossRef]
  41. Parzen, Emanuel. 1962. On estimation of a probability density function and mode. The Annals of Mathematical Statistics 33: 1065–76. [Google Scholar] [CrossRef]
  42. Polansky, Alan M., and Edsel R. Baker. 2000. Multistage plug-in bandwidth selection for kernel distribution function estimates. Journal of Statistical Computation and Simulation 65: 63–80. [Google Scholar] [CrossRef]
  43. Pritsker, M. 1997. Evaluating value at risk methodologies: Accuracy versus computational time. Journal of Financial Services Research 12: 201. [Google Scholar] [CrossRef]
  44. Quintela Del Rio, A., and G. Estevez-Perez. 2013. Nonparametric kernel distribution function estimation with kerdiest: An r package for bandwidth choice and applications. Journal of Statistical Software 50: 1–21. [Google Scholar]
  45. Rockafellar, R. Tyrrell, and Stanislav Uryasev. 2000. Optimization of conditional value-at-risk. Journal of Risk 2: 21–41. [Google Scholar] [CrossRef] [Green Version]
  46. Rosenblatt, Murray. 1956. Remarks on some nonparametric estimates of a density function. The Annals of Mathematical Statistics 27: 832–37. [Google Scholar] [CrossRef]
  47. Sarda, Pascal. 1993. Smoothing parameter selection for smooth distribution functions. Journal of Statistical Planning and Inference 35: 65–75. [Google Scholar] [CrossRef]
  48. Shafer, Glenn. 1976. A Mathematical Theory of Evidence. Princeton: Princeton University Press. [Google Scholar]
  49. Sheather, Simon J., and J. Stephen Marron. 1990. Kernel quantile estimators. Journal of the American Statistical Association 85: 410–16. [Google Scholar] [CrossRef]
  50. Silvermann, Bernard Walter. 1986. Density Estimation for Statistics and Data Analysis. London: Chapman and Hall. [Google Scholar]
  51. Singh, Radhey S., Theo Gasser, and Bhola Prasad. 1983. Nonparametric estimates of distributions functions. Communication in Statistics—Theory and Methods 12: 2095–108. [Google Scholar] [CrossRef]
  52. Sørensen, Michael, and Bo Martin Bibby. 1997. A hyperbolic diffusion model for stock prices. Finance and Stochastics 1: 25–41. [Google Scholar]
  53. Swanepoel, Jan W. H. 1988. Mean integrated squared error properties and optimal kernels when estimating a distribution function. Communications in Statistics—Theory and Methods 17: 3785–99. [Google Scholar] [CrossRef]
  54. Wand, M. P., and M. C. Jones. 1995. Kernel Smoothing. London: Chapman and Hall. [Google Scholar]
  55. Watson, Geoffrey S., and M. R. Leadbetter. 1964. Hazard analysis II. Sankhya Series A 26: 101–16. [Google Scholar]
  56. Winter, B. 1973. Strong uniform consistency of integrals of density estimators. Canadian Journal of Statistics 1: 247–53. [Google Scholar] [CrossRef]
  57. Winter, B. 1979. Convergence rate of perturbed empirical distribution functions. Journal of Applied Probability 16: 163–73. [Google Scholar] [CrossRef]
  58. Yamato, Hajime. 1973. Uniform convergence of an estimator of a distribution function. Bulletin on Mathematical Statistics 15: 69–78. [Google Scholar]
1. Autoregressive model of order 1.
2. Akaike Information Criterion: AIC = −2 ln L(GHYP) + 2ς, where ς is the number of estimated parameters and L(GHYP) is the likelihood of the GHYP model.
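For readers who want to reproduce the model-selection step behind note 2, the criterion can be evaluated directly from a fitted model's maximized log-likelihood. The snippet below is a minimal illustration; the function name and the numerical values are ours and purely illustrative, not taken from the paper.

```python
def aic(log_likelihood: float, n_params: int) -> float:
    """Akaike Information Criterion: AIC = -2 ln L + 2 * (number of parameters)."""
    return -2.0 * log_likelihood + 2.0 * n_params

# Hypothetical example: a GHYP fit with 5 parameters and a maximized
# log-likelihood of -2150.3 (illustrative numbers only).
print(aic(log_likelihood=-2150.3, n_params=5))  # 4310.6
```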
Figure 1. Epanechnikov kernel with a bandwidth Δ = 1 .
Figure 2. Integrated Epanechnikov κ kernel with a bandwidth Δ = 1 .
Figure 3. Triangular maxitive kernel with a bandwidth Δ = 1 .
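Figures 1–3 display the summative Epanechnikov kernel, its integrated (CDF-type) form, and the triangular maxitive kernel. Their closed-form expressions are standard and can be coded directly; the sketch below uses our own function names and assumes the usual normalizations (a unit-area summative kernel and a unit-height maxitive kernel).

```python
import numpy as np

def epanechnikov(u, bw=1.0):
    """Epanechnikov kernel of Figure 1: 0.75 * (1 - (u/bw)^2) / bw on [-bw, bw]."""
    s = np.asarray(u, dtype=float) / bw
    return np.where(np.abs(s) <= 1.0, 0.75 * (1.0 - s**2) / bw, 0.0)

def epanechnikov_integrated(u, bw=1.0):
    """Integrated Epanechnikov kernel of Figure 2: 0.5 + 0.75 s - 0.25 s^3 with s = u/bw clipped to [-1, 1]."""
    s = np.clip(np.asarray(u, dtype=float) / bw, -1.0, 1.0)
    return 0.5 + 0.75 * s - 0.25 * s**3

def triangular_maxitive(u, bw=1.0):
    """Triangular maxitive kernel (possibility distribution) of Figure 3: max(0, 1 - |u|/bw)."""
    s = np.asarray(u, dtype=float) / bw
    return np.maximum(0.0, 1.0 - np.abs(s))

u = np.linspace(-1.5, 1.5, 7)
print(epanechnikov(u), epanechnikov_integrated(u), triangular_maxitive(u), sep="\n")
```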
Figure 4. Kernel-smoothed cumulative distribution function (CDF) estimators for the DJI daily returns with sample sizes 200 (a) and 650 (b), and for the Nikkei225 daily returns with sample sizes 200 (c) and 650 (d). The black curve corresponds to the Epanechnikov kernel, the red curve to the Triweight kernel, the blue curve to the Biweight kernel, and the green curve to the Normal kernel.
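The curves in Figure 4 are kernel-smoothed estimates of the CDF, i.e., averages of an integrated kernel centred at each observation. A minimal sketch of this type of estimator is given below, using the integrated Epanechnikov kernel and simulated Student-t returns rather than the paper's DJI and Nikkei225 samples; the bandwidth value is arbitrary.

```python
import numpy as np

def integrated_epanechnikov(s):
    """CDF of the Epanechnikov kernel: 0.5 + 0.75 s - 0.25 s^3 on [-1, 1], clipped outside."""
    s = np.clip(s, -1.0, 1.0)
    return 0.5 + 0.75 * s - 0.25 * s**3

def smoothed_cdf(x, sample, bw):
    """Kernel-smoothed empirical CDF: F_hat(x) = (1/n) * sum_i IK((x - X_i)/bw)."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    z = (x[:, None] - np.asarray(sample, dtype=float)[None, :]) / bw
    return integrated_epanechnikov(z).mean(axis=1)

# Illustration on simulated heavy-tailed "returns" (not the paper's data).
rng = np.random.default_rng(0)
returns = rng.standard_t(df=5, size=200)
print(smoothed_cdf(np.linspace(-4, 4, 9), returns, bw=0.3))
```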
Figure 5. PP-plots of the theoretical CDFs, Epanechnikov kernel estimate (black), Generalized HYPerbolic (GHYP, green), and Normal (violet), against the empirical CDF for the four return data sets: (a) daily CAC40, (b) daily DJI, (c) daily Nikkei225, and (d) daily S&P500. In each of the four plots, the empirical CDF is on the horizontal axis and the theoretical CDF on the vertical axis. The highest (red) and lowest (blue) dotted lines correspond, respectively, to the upper and lower bounds of the interval-valued CDF obtained with the maxitive kernel.
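Figure 5 is a PP-plot: the empirical CDF evaluated at the order statistics is plotted against a fitted model's CDF at the same points. The sketch below builds those coordinates for a fitted Normal model on simulated data; reproducing the GHYP curve and the maxitive upper and lower CDF bounds of the figure requires the fits described in the main text.

```python
import numpy as np
from scipy import stats

def pp_plot_coordinates(sample, model_cdf):
    """Return (empirical CDF, model CDF) pairs at the order statistics,
    i.e. the points plotted in a PP-plot such as Figure 5."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = x.size
    empirical = np.arange(1, n + 1) / (n + 1)   # plotting positions
    return empirical, model_cdf(x)

# Illustration with a fitted Normal model on simulated returns (not the paper's data).
rng = np.random.default_rng(1)
returns = rng.standard_t(df=4, size=500)
mu, sigma = returns.mean(), returns.std(ddof=1)
emp, theo = pp_plot_coordinates(returns, lambda x: stats.norm.cdf(x, loc=mu, scale=sigma))
print(np.c_[emp[:5], theo[:5]])  # points near the origin of the PP-plot
```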
Table 1. Descriptive statistics of daily returns on the four stock indexes, observed between 1 January 2010 and 1 September 2016.

Index | S&P500 | Nikkei225 | DJI | CAC40
No. of observations | 1702 | 1673 | 1702 | 1581
Min | −6.895 | −11.15 | −5.7 | −5.634
Max | 4.63 | 10.66 | 4.15 | 9.22
Mean | 0.0379 | 0.027 | 0.032 | 0.0038
Standard deviation | 0.991 | 1.467 | 0.92 | 1.379
Skewness | −0.441 | −0.468 | −0.384 | −0.015
Kurtosis | 7.08 | 9.77 | 6.416 | 5.85
Jarque-Bera | 1239.5 | 3262.7 | 869.91 | 535.17
Jarque-Bera p-value | 0.000 | 0.000 | 0.000 | 0.000
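The summary statistics in Table 1 follow standard definitions; in particular, the Jarque-Bera statistic combines skewness S and (non-excess) kurtosis K as JB = n(S²/6 + (K − 3)²/24). The sketch below computes these quantities on simulated returns (the function name is ours); plugging the reported S&P500 values (n = 1702, S = −0.441, K = 7.08) into the formula gives roughly 1236, consistent with the value reported in the table.

```python
import numpy as np

def descriptive_stats(r):
    """Summary statistics as in Table 1, including the Jarque-Bera statistic
    JB = n * (S^2 / 6 + (K - 3)^2 / 24)."""
    r = np.asarray(r, dtype=float)
    n = r.size
    s = r.std(ddof=0)
    skew = np.mean((r - r.mean()) ** 3) / s**3
    kurt = np.mean((r - r.mean()) ** 4) / s**4
    jb = n * (skew**2 / 6.0 + (kurt - 3.0) ** 2 / 24.0)
    return {"n": n, "min": r.min(), "max": r.max(), "mean": r.mean(),
            "std": r.std(ddof=1), "skewness": skew, "kurtosis": kurt, "jarque_bera": jb}

# Sanity check on simulated returns (not the indexes of Table 1).
print(descriptive_stats(np.random.default_rng(2).standard_t(df=5, size=1700)))
```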
Table 2. Fitted GARCH models.

Sample Size | Index | Mean equation μ_t: AR(1) | Variance equation: GARCH(1,1)
200 | S&P500 | r_t = 0.032 − 0.132 r_{t−1} + ε_t | σ_t² = 0.08 + 0.23 ε²_{t−1} + 0.663 σ²_{t−1}
200 | Nikkei225 | r_t = 0.05 − 0.08 r_{t−1} + ε_t | σ_t² = 0.179 + 0.12 ε²_{t−1} + 0.827 σ²_{t−1}
200 | DJI | r_t = 0.033 − 0.116 r_{t−1} + ε_t | σ_t² = 0.08 + 0.25 ε²_{t−1} + 0.63 σ²_{t−1}
200 | CAC40 | r_t = 0.05 − 0.031 r_{t−1} + ε_t | σ_t² = 0.032 + 0.044 ε²_{t−1} + 0.93 σ²_{t−1}
1500 | S&P500 | r_t = 0.046 − 0.063 r_{t−1} + ε_t | σ_t² = 0.078 + 0.207 ε²_{t−1} + 0.69 σ²_{t−1}
1500 | Nikkei225 | r_t = 0.068 − 0.0043 r_{t−1} + ε_t | σ_t² = 0.088 + 0.176 ε²_{t−1} + 0.814 σ²_{t−1}
1500 | DJI | r_t = 0.039 − 0.069 r_{t−1} + ε_t | σ_t² = 0.07 + 0.22 ε²_{t−1} + 0.68 σ²_{t−1}
1500 | CAC40 | r_t = 0.025 − 0.027 r_{t−1} + ε_t | σ_t² = 0.03 + 0.0767 ε²_{t−1} + 0.906 σ²_{t−1}
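Each row of Table 2 couples an AR(1) mean equation with a GARCH(1,1) variance equation. The sketch below simulates a return path from one such specification, using the S&P500 parameters of the sample-size-200 row and assuming Gaussian innovations (the paper's estimation procedure and innovation distribution may differ); it is meant only to show how the two recursions interact.

```python
import numpy as np

def simulate_ar1_garch11(n, mu, phi, omega, alpha, beta, seed=0):
    """Simulate r_t = mu + phi * r_{t-1} + eps_t with
    eps_t = sigma_t * z_t and sigma_t^2 = omega + alpha * eps_{t-1}^2 + beta * sigma_{t-1}^2."""
    rng = np.random.default_rng(seed)
    r = np.zeros(n)
    # Start the variance recursion at the unconditional variance omega / (1 - alpha - beta).
    eps_prev, var_prev = 0.0, omega / max(1e-12, 1.0 - alpha - beta)
    for t in range(1, n):
        var_t = omega + alpha * eps_prev**2 + beta * var_prev
        eps_t = np.sqrt(var_t) * rng.standard_normal()
        r[t] = mu + phi * r[t - 1] + eps_t
        eps_prev, var_prev = eps_t, var_t
    return r

# S&P500 row of Table 2 (sample size 200); Gaussian innovations assumed for illustration.
path = simulate_ar1_garch11(200, mu=0.032, phi=-0.132, omega=0.08, alpha=0.23, beta=0.663)
print(path[:5], path.std())
```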
Table 3. Mean square error results for several types of kernel functions and n = 1500.

α | Index | Epanechnikov | Normal | Biweight | Triweight | Best
10% | S&P500 | 0.01285 | 0.01287 | 0.01286 | 0.01286 | Epanechnikov
10% | Nikkei225 | 0.226 | 0.223 | 0.227 | 0.2265 | Normal
10% | DJI | 0.00535 | 0.00533 | 0.00534 | 0.0535 | Normal
10% | CAC40 | 0.0256 | 0.0255 | 0.02551 | 0.02551 | Normal
7.5% | S&P500 | 0.03575 | 0.03580 | 0.035764 | 0.035763 | Epanechnikov
7.5% | Nikkei225 | 0.0316 | 0.0315 | 0.0318 | 0.0318 | Normal
7.5% | DJI | 0.01075 | 0.01075 | 0.01074 | 0.01075 | Biweight
7.5% | CAC40 | 0.0339 | 0.034 | 0.0341 | 0.0342 | Normal
5% | S&P500 | 0.07288 | 0.07295 | 0.07289 | 0.07289 | Epanechnikov
5% | Nikkei225 | 0.0435 | 0.042 | 0.0435 | 0.0436 | Normal
5% | DJI | 0.03877 | 0.03879 | 0.03878 | 0.03882 | Epanechnikov
5% | CAC40 | 0.078 | 0.0784 | 0.0782 | 0.0781 | Normal
2.5% | S&P500 | 0.01763 | 0.01764 | 0.017635 | 0.017637 | Epanechnikov
2.5% | Nikkei225 | 0.0858 | 0.0857 | 0.0859 | 0.08578 | Normal
2.5% | DJI | 0.141 | 0.1409 | 0.1408 | 0.1409 | Epanechnikov
2.5% | CAC40 | 0.132 | 0.131 | 0.133 | 0.134 | Normal
1% | S&P500 | 0.05804 | 0.05801 | 0.05804 | 0.058017 | Normal
1% | Nikkei225 | 0.246 | 0.2467 | 0.2481 | 0.2482 | Epanechnikov
1% | DJI | 0.2404 | 0.2405 | 0.2406 | 0.2407 | Epanechnikov
1% | CAC40 | 0.363 | 0.363 | 0.364 | 0.362 | Triweight
Table 4. Mean square error results for several types of kernel functions and n = 200.

α | Index | Epanechnikov | Normal | Biweight | Triweight | Best
10% | S&P500 | 0.0337 | 0.0335 | 0.0336 | 0.0338 | Normal
10% | Nikkei225 | 0.766 | 0.765 | 0.767 | 0.766 | Normal
10% | DJI | 0.0441 | 0.0443 | 0.0442 | 0.0442 | Epanechnikov
10% | CAC40 | 0.0347 | 0.0348 | 0.0346 | 0.0348 | Biweight
7.5% | S&P500 | 0.0632 | 0.0634 | 0.0633 | 0.064 | Epanechnikov
7.5% | Nikkei225 | 0.938 | 0.937 | 0.939 | 0.936 | Triweight
7.5% | DJI | 0.0614 | 0.0615 | 0.0612 | 0.0613 | Biweight
7.5% | CAC40 | 0.0588 | 0.0591 | 0.0592 | 0.0594 | Epanechnikov
5% | S&P500 | 0.1169 | 0.1173 | 0.1167 | 0.1183 | Biweight
5% | Nikkei225 | 1.131 | 1.129 | 1.131 | 1.127 | Triweight
5% | DJI | 0.0975 | 0.0977 | 0.0974 | 0.0977 | Biweight
5% | CAC40 | 0.148 | 0.149 | 0.147 | 0.0148 | Epanechnikov
2.5% | S&P500 | 0.28 | 0.284 | 0.281 | 0.283 | Biweight
2.5% | Nikkei225 | 1.453 | 1.451 | 1.453 | 1.448 | Triweight
2.5% | DJI | 0.252 | 0.255 | 0.253 | 0.254 | Epanechnikov
2.5% | CAC40 | 0.2905 | 0.2888 | 0.287 | 0.2889 | Biweight
1% | S&P500 | 0.869 | 0.821 | 0.87 | 0.881 | Epanechnikov
1% | Nikkei225 | 1.977 | 1.995 | 1.981 | 1.980 | Epanechnikov
1% | DJI | 0.702 | 0.691 | 0.696 | 0.693 | Normal
1% | CAC40 | 0.794 | 0.791 | 0.788 | 0.793 | Biweight
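Tables 3 and 4 compare the candidate kernels through a mean-square-error criterion at each probability level α; the exact benchmarking protocol is given in the main text. The core operation being compared, estimating VaR_α by inverting a kernel-smoothed CDF, can be sketched as follows (the function names, the bisection scheme, and the simulated data are ours).

```python
import numpy as np

def integrated_epanechnikov(s):
    s = np.clip(s, -1.0, 1.0)
    return 0.5 + 0.75 * s - 0.25 * s**3

def smoothed_cdf(x, sample, bw):
    z = (np.atleast_1d(x)[:, None] - np.asarray(sample, dtype=float)[None, :]) / bw
    return integrated_epanechnikov(z).mean(axis=1)

def kernel_var(sample, alpha, bw):
    """VaR_alpha as the alpha-quantile of the kernel-smoothed CDF, found by bisection."""
    lo, hi = np.min(sample) - 5 * bw, np.max(sample) + 5 * bw
    for _ in range(80):                       # bisection on the monotone CDF
        mid = 0.5 * (lo + hi)
        if smoothed_cdf(mid, sample, bw)[0] < alpha:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustration: 5% VaR of simulated returns under the Epanechnikov kernel.
rng = np.random.default_rng(3)
returns = rng.standard_t(df=5, size=1500)
print(kernel_var(returns, alpha=0.05, bw=0.3))
```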
Table 5. Optimal bandwidth estimation for the four stock return data sets. The maxitive kernel bandwidth is reported once per time horizon, on the plug-in row.

Return Data | Time Horizon | Bandwidth Selection Method | Epanechnikov | Normal | Biweight | Triweight | Maxitive Kernel Bandwidth
S&P500 | 6 years | Plug-in | 0.189 | 0.084 | 0.245 | 0.250 | 0.250
S&P500 | 6 years | Cross validation | 0.174 | 0.060 | 0.190 | 0.211
S&P500 | 3 years | Plug-in | 0.228 | 0.101 | 0.269 | 0.294 | 0.294
S&P500 | 3 years | Cross validation | 0.197 | 0.118 | 0.211 | 0.267
S&P500 | 1 year | Plug-in | 0.318 | 0.141 | 0.376 | 0.405 | 0.405
S&P500 | 1 year | Cross validation | 0.277 | 0.092 | 0.338 | 0.397
Nikkei225 | 6 years | Plug-in | 0.332 | 0.147 | 0.405 | 0.440 | 0.440
Nikkei225 | 6 years | Cross validation | 0.330 | 0.110 | 0.331 | 0.412
Nikkei225 | 3 years | Plug-in | 0.406 | 0.180 | 0.480 | 0.540 | 0.540
Nikkei225 | 3 years | Cross validation | 0.307 | 0.101 | 0.390 | 0.430
Nikkei225 | 1 year | Plug-in | 0.840 | 0.374 | 0.996 | 1.118 | 1.118
Nikkei225 | 1 year | Cross validation | 0.841 | 0.382 | 0.994 | 1.098
DJI | 6 years | Plug-in | 0.173 | 0.077 | 0.205 | 0.224 | 0.224
DJI | 6 years | Cross validation | 0.150 | 0.050 | 0.180 | 0.208
DJI | 3 years | Plug-in | 0.228 | 0.101 | 0.270 | 0.294 | 0.294
DJI | 3 years | Cross validation | 0.189 | 0.113 | 0.210 | 0.223
DJI | 1 year | Plug-in | 0.360 | 0.160 | 0.425 | 0.468 | 0.468
DJI | 1 year | Cross validation | 0.326 | 0.148 | 0.381 | 0.422
CAC40 | 6 years | Plug-in | 0.323 | 0.143 | 0.382 | 0.431 | 0.446
CAC40 | 6 years | Cross validation | 0.374 | 0.080 | 0.390 | 0.446
CAC40 | 3 years | Plug-in | 0.423 | 0.188 | 0.498 | 0.564 | 0.564
CAC40 | 3 years | Cross validation | 0.433 | 0.144 | 0.501 | 0.544
CAC40 | 1 year | Plug-in | 0.93 | 0.416 | 1.107 | 1.250 | 1.250
CAC40 | 1 year | Cross validation | 0.934 | 0.411 | 1.150 | 1.230
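Table 5 reports plug-in and cross-validation bandwidths for each summative kernel together with the bandwidth retained for the maxitive kernel, which in every horizon of the table coincides with the largest of the listed bandwidths. As one illustration of data-driven bandwidth choice, the sketch below evaluates a simple leave-one-out cross-validation score for CDF smoothing on simulated data; it is a simplified stand-in for the selectors used by the authors, with our own discretization and function names.

```python
import numpy as np

def integrated_epanechnikov(s):
    s = np.clip(s, -1.0, 1.0)
    return 0.5 + 0.75 * s - 0.25 * s**3

def loo_cv_score(sample, bw, grid_size=200):
    """Leave-one-out CV score for CDF smoothing, approximating
    (1/n) * sum_i integral [ 1{y >= X_i} - F_hat_{-i}(y) ]^2 dy on a grid."""
    x = np.asarray(sample, dtype=float)
    n = x.size
    grid = np.linspace(x.min() - 3 * bw, x.max() + 3 * bw, grid_size)
    contrib = integrated_epanechnikov((grid[:, None] - x[None, :]) / bw)  # grid x data
    total = contrib.sum(axis=1)
    score = 0.0
    for i in range(n):
        f_loo = (total - contrib[:, i]) / (n - 1)      # CDF estimate without X_i
        indicator = (grid >= x[i]).astype(float)
        score += np.trapz((indicator - f_loo) ** 2, grid)
    return score / n

# Coarse bandwidth search on simulated returns (illustrative only).
rng = np.random.default_rng(4)
returns = rng.standard_t(df=5, size=250)
bws = np.array([0.1, 0.2, 0.3, 0.4, 0.5])
scores = [loo_cv_score(returns, h) for h in bws]
print(dict(zip(bws.tolist(), np.round(scores, 4))))
```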
Table 6. A comparison between the bound estimates of the daily VaR based on the maxitive kernel estimation method and bootstrap confidence intervals of the VaR based on three conventional approaches: the historical simulation method (HS) and the Normal and Generalized HYPerbolic (GHYP) distributions. These VaR methods are applied to the four daily stock return series at several probability levels α% over a time horizon of 1 year.

Returns | α | Maxitive interval-valued VaR (L.B., U.B., Width) | pdf | Estimated VaR | BCa 90% (L.B., U.B., Width) | BCa 95% (L.B., U.B., Width) | BCa 99% (L.B., U.B., Width)
S&P500 | 10% | (−1.092, −0.751, 0.341) | HS | −0.933 | (−1.089, −0.773, 0.316) | (−1.126, −0.758, 0.368) | (−1.162, −0.665, 0.497)
S&P500 | 10% | (−1.092, −0.751, 0.341) | Normal | −1.117 | (−1.303, −0.933, 0.370) | (−1.336, −0.893, 0.443) | (−1.379, −0.825, 0.554)
S&P500 | 10% | (−1.092, −0.751, 0.341) | GHYP | −0.839 | (−0.993, −0.724, 0.269) | (−1.016, −0.694, 0.322) | (−1.094, −0.673, 0.421)
S&P500 | 5% | (−1.605, −1.152, 0.453) | HS | −1.325 | (−1.552, −1.146, 0.406) | (−1.581, −1.109, 0.472) | (−1.632, −1.070, 0.562)
S&P500 | 5% | (−1.605, −1.152, 0.453) | Normal | −1.440 | (−1.716, −1.256, 0.460) | (−1.757, −1.234, 0.523) | (−1.822, −1.131, 0.691)
S&P500 | 5% | (−1.605, −1.152, 0.453) | GHYP | −1.264 | (−1.486, −1.053, 0.433) | (−1.555, −1.015, 0.540) | (−1.595, −0.900, 0.695)
S&P500 | 1% | (−3.240, −2.117, 1.123) | HS | −2.196 | (−2.548, −1.938, 0.610) | (−2.631, −1.876, 0.755) | (−2.773, −1.736, 1.037)
S&P500 | 1% | (−3.240, −2.117, 1.123) | Normal | −2.052 | (−2.399, −1.818, 0.581) | (−2.456, −1.788, 0.668) | (−2.469, −1.678, 0.791)
S&P500 | 1% | (−3.240, −2.117, 1.123) | GHYP | −2.281 | (−2.589, −1.997, 0.592) | (−2.683, −1.963, 0.720) | (−2.701, −1.851, 0.850)
Nikkei225 | 10% | (−5.013, −3.236, 1.777) | HS | −4.331 | (−5.055, −3.713, 1.342) | (−5.176, −3.590, 1.586) | (−5.906, −3.419, 2.487)
Nikkei225 | 10% | (−5.013, −3.236, 1.777) | Normal | −2.401 | (−2.805, −2.100, 0.705) | (−2.831, −1.906, 0.925) | (−2.970, −1.768, 1.202)
Nikkei225 | 10% | (−5.013, −3.236, 1.777) | GHYP | −3.889 | (−4.588, −3.434, 1.154) | (−4.772, −3.376, 1.396) | (−4.875, −3.144, 1.731)
Nikkei225 | 5% | (−6.760, −5.010, 1.750) | HS | −5.797 | (−6.852, −5.196, 1.656) | (−7.054, −5.121, 1.933) | (−7.054, −4.789, 2.265)
Nikkei225 | 5% | (−6.760, −5.010, 1.750) | Normal | −3.061 | (−3.625, −2.677, 0.948) | (−3.687, −2.601, 1.086) | (−3.806, −2.431, 1.375)
Nikkei225 | 5% | (−6.760, −5.010, 1.750) | GHYP | −5.567 | (−6.458, −4.806, 1.652) | (−6.886, −4.743, 2.143) | (−7.009, −4.615, 2.394)
Nikkei225 | 1% | (−10.170, −9.097, 1.073) | HS | −9.259 | (−10.31, −7.788, 2.522) | (−10.710, −7.639, 3.071) | (−11.320, −7.084, 4.236)
Nikkei225 | 1% | (−10.170, −9.097, 1.073) | Normal | −4.301 | (−5.233, −3.774, 1.459) | (−5.409, −3.743, 1.666) | (−5.512, −3.587, 1.925)
Nikkei225 | 1% | (−10.170, −9.097, 1.073) | GHYP | −9.907 | (−11.53, −8.766, 2.764) | (−11.760, −8.549, 3.211) | (−11.970, −7.724, 4.246)
DJI | 10% | (−1.099, −0.766, 0.333) | HS | −0.897 | (−1.081, −0.752, 0.329) | (−1.157, −0.708, 0.449) | (−1.184, −0.648, 0.536)
DJI | 10% | (−1.099, −0.766, 0.333) | Normal | −1.081 | (−1.276, −0.938, 0.338) | (−1.362, −0.915, 0.447) | (−1.376, −0.858, 0.518)
DJI | 10% | (−1.099, −0.766, 0.333) | GHYP | −0.838 | (−1.016, −0.694, 0.322) | (−1.040, −0.673, 0.367) | (−1.061, −0.623, 0.438)
DJI | 5% | (−1.540, −1.094, 0.446) | HS | −1.262 | (−1.550, −1.097, 0.453) | (−1.576, −0.999, 0.577) | (−1.138, −0.669, 0.469)
DJI | 5% | (−1.540, −1.094, 0.446) | Normal | −1.394 | (−1.597, −1.181, 0.416) | (−1.642, −1.142, 0.500) | (−1.741, −1.078, 0.663)
DJI | 5% | (−1.540, −1.094, 0.446) | GHYP | −1.237 | (−1.459, −1.085, 0.374) | (−1.470, −1.039, 0.431) | (−1.496, −0.989, 0.507)
DJI | 1% | (−2.949, −1.837, 1.112) | HS | −2.006 | (−2.303, −1.727, 0.576) | (−2.349, −1.669, 0.680) | (−2.439, −1.568, 0.871)
DJI | 1% | (−2.949, −1.837, 1.112) | Normal | −1.982 | (−2.295, −1.760, 0.535) | (−2.315, −1.690, 0.625) | (−2.396, −1.588, 0.808)
DJI | 1% | (−2.949, −1.837, 1.112) | GHYP | −2.166 | (−2.458, −1.919, 0.539) | (−2.532, −1.863, 0.669) | (−2.646, −1.715, 0.931)
CAC40 | 10% | (−3.555, −2.154, 1.401) | HS | −2.670 | (−3.075, −2.246, 0.829) | (−3.160, −2.198, 0.962) | (−3.184, −2.074, 1.110)
CAC40 | 10% | (−3.555, −2.154, 1.401) | Normal | −1.920 | (−2.184, −1.616, 0.568) | (−2.305, −1.561, 0.744) | (−2.388, −1.515, 0.873)
CAC40 | 10% | (−3.555, −2.154, 1.401) | GHYP | −2.702 | (−3.258, −2.099, 1.159) | (−3.257, −2.234, 1.023) | (−3.258, −2.099, 1.159)
CAC40 | 5% | (−4.202, −3.184, 1.018) | HS | −3.605 | (−4.142, −3.102, 1.040) | (−4.156, −3.045, 1.111) | (−4.415, −2.940, 1.475)
CAC40 | 5% | (−4.202, −3.184, 1.018) | Normal | −2.446 | (−2.700, −1.991, 0.709) | (−2.818, −1.961, 0.857) | (−2.978, −1.961, 1.017)
CAC40 | 5% | (−4.202, −3.184, 1.018) | GHYP | −3.576 | (−4.084, −3.106, 0.978) | (−4.150, −2.936, 1.214) | (−4.433, −2.608, 1.825)
CAC40 | 1% | (−6.051, −4.771, 1.280) | HS | −5.100 | (−5.759, −4.618, 1.141) | (−6.129, −4.477, 1.652) | (−6.125, −4.477, 1.648)
CAC40 | 1% | (−6.051, −4.771, 1.280) | Normal | −3.433 | (−3.836, −2.979, 0.857) | (−3.903, −2.863, 1.040) | (−4.124, −2.757, 1.367)
CAC40 | 1% | (−6.051, −4.771, 1.280) | GHYP | −5.463 | (−6.391, −4.860, 1.531) | (−6.710, −4.821, 1.889) | (−6.717, −4.411, 2.306)
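Tables 6–8 set the maxitive interval-valued VaR against BCa bootstrap confidence intervals computed under three conventional approaches. The sketch below reproduces the flavour of the historical-simulation column with SciPy's BCa bootstrap on simulated returns; it is an illustration of the interval construction, not a re-run of the paper's computations, and the sample, resample count, and scaling are ours.

```python
import numpy as np
from scipy import stats

def hs_var(returns, alpha=0.05):
    """Historical-simulation VaR_alpha: the empirical alpha-quantile of the returns."""
    return np.quantile(returns, alpha)

# Illustration on simulated daily returns (not the paper's index data).
rng = np.random.default_rng(5)
returns = 0.9 * rng.standard_t(df=4, size=250)

point_estimate = hs_var(returns, alpha=0.05)
res = stats.bootstrap(
    (returns,),                          # data passed as a one-sample tuple
    lambda x: np.quantile(x, 0.05),      # the statistic being bootstrapped
    confidence_level=0.90,
    n_resamples=2000,
    method="BCa",
    vectorized=False,
)
ci = res.confidence_interval
print(point_estimate, (ci.low, ci.high), ci.high - ci.low)  # VaR, BCa 90% CI, width
```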
Table 7. A comparison between the bound estimates of the daily VaR based on the maxitive kernel estimation method and bootstrap confidence intervals of the VaR based on three conventional approaches: the historical simulation method (HS) and the Normal and GHYP distributions. These VaR methods are applied to the four daily stock return series at several probability levels α% over a time horizon of 3 years.

Returns | α | Maxitive interval-valued VaR (L.B., U.B., Width) | pdf | Estimated VaR | BCa 90% (L.B., U.B., Width) | BCa 95% (L.B., U.B., Width) | BCa 99% (L.B., U.B., Width)
S&P500 | 10% | (−0.996, −0.732, 0.264) | HS | −0.860 | (−0.941, −0.765, 0.176) | (−0.958, −0.753, 0.205) | (−1.001, −0.737, 0.264)
S&P500 | 10% | (−0.996, −0.732, 0.264) | Normal | −1.088 | (−1.199, −1.002, 0.197) | (−1.221, −0.986, 0.235) | (−1.250, −0.956, 0.294)
S&P500 | 10% | (−0.996, −0.732, 0.264) | GHYP | −0.834 | (−0.933, −0.763, 0.170) | (−0.946, −0.746, 0.200) | (−0.969, −0.709, 0.260)
S&P500 | 5% | (−1.394, −1.130, 0.264) | HS | −1.253 | (−1.382, −1.130, 0.252) | (−1.363, −1.145, 0.218) | (−1.429, −1.085, 0.344)
S&P500 | 5% | (−1.394, −1.130, 0.264) | Normal | −1.402 | (−1.527, −1.301, 0.226) | (−1.548, −1.286, 0.262) | (−1.598, −1.240, 0.358)
S&P500 | 5% | (−1.394, −1.130, 0.264) | GHYP | −1.207 | (−1.316, −1.112, 0.204) | (−1.335, −1.090, 0.245) | (−1.376, −1.040, 0.336)
S&P500 | 1% | (−2.230, −1.979, 0.251) | HS | −2.059 | (−2.208, −1.922, 0.286) | (−2.235, −1.897, 0.338) | (−2.308, −1.851, 0.457)
S&P500 | 1% | (−2.230, −1.979, 0.251) | Normal | −1.993 | (−2.163, −1.854, 0.309) | (−2.192, −1.829, 0.363) | (−2.268, −1.788, 0.480)
S&P500 | 1% | (−2.230, −1.979, 0.251) | GHYP | −2.067 | (−2.246, −1.936, 0.310) | (−2.275, −1.909, 0.366) | (−2.313, −1.832, 0.481)
Nikkei225 | 10% | (−3.024, −2.283, 0.741) | HS | −2.602 | (−2.937, −2.386, 0.551) | (−3.021, −2.341, 0.680) | (−3.119, −2.231, 0.888)
Nikkei225 | 10% | (−3.024, −2.283, 0.741) | Normal | −1.976 | (−2.176, −1.756, 0.420) | (−2.235, −1.721, 0.514) | (−2.350, −1.612, 0.738)
Nikkei225 | 10% | (−3.024, −2.283, 0.741) | GHYP | −2.517 | (−2.773, −2.254, 0.519) | (−2.816, −2.209, 0.607) | (−2.929, −2.090, 0.839)
Nikkei225 | 5% | (−4.324, −3.616, 0.708) | HS | −3.911 | (−4.359, −3.566, 0.793) | (−4.486, −3.503, 0.983) | (−4.647, −3.346, 1.301)
Nikkei225 | 5% | (−4.324, −3.616, 0.708) | Normal | −2.539 | (−2.851, −2.298, 0.553) | (−2.885, −2.246, 0.639) | (−3.028, −2.170, 0.858)
Nikkei225 | 5% | (−4.324, −3.616, 0.708) | GHYP | −3.832 | (−4.338, −3.498, 0.840) | (−4.384, −3.423, 0.961) | (−4.441, −3.292, 1.149)
Nikkei225 | 1% | (−6.602, −6.052, 0.550) | HS | −6.273 | (−6.980, −5.697, 1.283) | (−7.053, −5.599, 1.454) | (−7.308, −5.425, 1.883)
Nikkei225 | 1% | (−6.602, −6.052, 0.550) | Normal | −3.595 | (−4.012, −3.263, 0.749) | (−4.077, −3.182, 0.895) | (−4.279, −3.081, 1.198)
Nikkei225 | 1% | (−6.602, −6.052, 0.550) | GHYP | −7.421 | (−8.262, −6.738, 1.524) | (−8.432, −6.632, 1.800) | (−8.695, −6.273, 2.422)
DJI | 10% | (−0.976, −0.743, 0.233) | HS | −0.851 | (−0.927, −0.763, 0.164) | (−0.945, −0.755, 0.190) | (−0.986, −0.742, 0.244)
DJI | 10% | (−0.976, −0.743, 0.233) | Normal | −1.072 | (−1.168, −0.968, 0.200) | (−1.184, −0.953, 0.231) | (−1.202, −0.926, 0.276)
DJI | 10% | (−0.976, −0.743, 0.233) | GHYP | −0.830 | (−0.906, −0.758, 0.148) | (−0.930, −0.748, 0.182) | (−0.981, −0.722, 0.259)
DJI | 5% | (−1.373, −1.161, 0.212) | HS | −1.245 | (−1.349, −1.148, 0.201) | (−1.377, −1.136, 0.241) | (−1.410, −1.088, 0.322)
DJI | 5% | (−1.373, −1.161, 0.212) | Normal | −1.379 | (−1.489, −1.286, 0.203) | (−1.506, −1.263, 0.243) | (−1.564, −1.230, 0.334)
DJI | 5% | (−1.373, −1.161, 0.212) | GHYP | −1.193 | (−1.300, −1.102, 0.198) | (−1.319, −1.086, 0.233) | (−1.375, −1.062, 0.313)
DJI | 1% | (−1.975, −1.752, 0.223) | HS | −1.816 | (−1.959, −1.694, 0.265) | (−1.972, −1.664, 0.308) | (−2.021, −1.622, 0.399)
DJI | 1% | (−1.975, −1.752, 0.223) | Normal | −1.956 | (−2.081, −1.815, 0.266) | (−2.108, −1.790, 0.318) | (−2.173, −1.726, 0.447)
DJI | 1% | (−1.975, −1.752, 0.223) | GHYP | −2.025 | (−2.169, −1.891, 0.278) | (−2.197, −1.856, 0.341) | (−2.256, −1.830, 0.426)
CAC40 | 10% | (−2.311, −1.658, 0.653) | HS | −1.963 | (−2.152, −1.832, 0.320) | (−2.148, −1.771, 0.377) | (−2.230, −1.742, 0.488)
CAC40 | 10% | (−2.311, −1.658, 0.653) | Normal | −1.630 | (−1.781, −1.507, 0.274) | (−1.806, −1.482, 0.324) | (−1.836, −1.428, 0.408)
CAC40 | 10% | (−2.311, −1.658, 0.653) | GHYP | −1.968 | (−2.137, −1.837, 0.300) | (−2.161, −1.804, 0.357) | (−2.235, −1.738, 0.497)
CAC40 | 5% | (−3.095, −2.481, 0.614) | HS | −2.740 | (−2.952, −2.563, 0.389) | (−2.960, −2.537, 0.423) | (−3.062, −2.435, 0.627)
CAC40 | 5% | (−3.095, −2.481, 0.614) | Normal | −2.093 | (−2.255, −1.941, 0.314) | (−2.295, −1.910, 0.385) | (−2.359, −1.880, 0.479)
CAC40 | 5% | (−3.095, −2.481, 0.614) | GHYP | −2.756 | (−2.968, −2.588, 0.380) | (−2.968, −2.518, 0.450) | (−3.041, −2.448, 0.593)
CAC40 | 1% | (−4.855, −4.137, 0.718) | HS | −4.428 | (−4.770, −4.161, 0.609) | (−4.806, −4.112, 0.694) | (−4.903, −4.005, 0.898)
CAC40 | 1% | (−4.855, −4.137, 0.718) | Normal | −2.961 | (−3.157, −2.745, 0.412) | (−3.198, −2.237, 0.961) | (−3.348, −2.677, 0.671)
CAC40 | 1% | (−4.855, −4.137, 0.718) | GHYP | −4.519 | (−4.870, −4.268, 0.602) | (−4.874, −4.178, 0.696) | (−5.103, −4.034, 1.069)
Table 8. A comparison between the bound estimates of the daily VaR based on the maxitive kernel estimation method and bootstrap confidence intervals of the VaR based on three conventional approaches: the historical simulation method (HS) and the Normal and GHYP distributions. These VaR methods are applied to the four daily stock return series at several probability levels α% over a time horizon of 6 years.

Returns | α | Maxitive interval-valued VaR (L.B., U.B., Width) | pdf | Estimated VaR | BCa 90% (L.B., U.B., Width) | BCa 95% (L.B., U.B., Width) | BCa 99% (L.B., U.B., Width)
S&P500 | 10% | (−1.189, −0.941, 0.248) | HS | −1.070 | (−1.156, −1.007, 0.149) | (−1.168, −0.994, 0.174) | (−1.201, −0.968, 0.233)
S&P500 | 10% | (−1.189, −0.941, 0.248) | Normal | −1.233 | (−1.314, −1.155, 0.159) | (−1.331, −1.141, 0.190) | (−1.377, −1.111, 0.266)
S&P500 | 10% | (−1.189, −0.941, 0.248) | GHYP | −1.029 | (−1.103, −0.959, 0.144) | (−1.121, −0.946, 0.175) | (−1.152, −0.913, 0.239)
S&P500 | 5% | (−1.679, −1.443, 0.236) | HS | −1.574 | (−1.674, −1.481, 0.193) | (−1.693, −1.463, 0.230) | (−1.729, −1.432, 0.297)
S&P500 | 5% | (−1.679, −1.443, 0.236) | Normal | −1.593 | (−1.699, −1.502, 0.197) | (−1.720, −1.483, 0.237) | (−1.754, −1.449, 0.305)
S&P500 | 5% | (−1.679, −1.443, 0.236) | GHYP | −1.548 | (−1.655, −1.468, 0.187) | (−1.678, −1.450, 0.228) | (−1.699, −1.410, 0.289)
S&P500 | 1% | (−2.939, −2.772, 0.167) | HS | −2.824 | (−2.979, −2.679, 0.300) | (−3.007, −2.656, 0.351) | (−3.086, −2.609, 0.477)
S&P500 | 1% | (−2.939, −2.772, 0.167) | Normal | −2.269 | (−2.407, −2.153, 0.254) | (−2.430, −2.128, 0.302) | (−2.475, −2.083, 0.392)
S&P500 | 1% | (−2.939, −2.772, 0.167) | GHYP | −2.810 | (−2.974, −2.665, 0.309) | (−3.007, −2.637, 0.370) | (−3.074, −2.582, 0.492)
Nikkei225 | 10% | (−2.717, −2.037, 0.680) | HS | −2.360 | (−2.532, −2.207, 0.325) | (−2.561, −2.179, 0.382) | (−2.629, −2.135, 0.494)
Nikkei225 | 10% | (−2.717, −2.037, 0.680) | Normal | −1.853 | (−1.996, −1.728, 0.268) | (−2.021, −1.702, 0.319) | (−2.120, −1.658, 0.462)
Nikkei225 | 10% | (−2.717, −2.037, 0.680) | GHYP | −2.331 | (−2.497, −2.184, 0.313) | (−2.544, −2.159, 0.385) | (−2.607, −2.119, 0.488)
Nikkei225 | 5% | (−3.698, −2.991, 0.707) | HS | −3.258 | (−3.486, −3.053, 0.433) | (−3.535, −3.180, 0.355) | (−3.627, −2.958, 0.669)
Nikkei225 | 5% | (−3.698, −2.991, 0.707) | Normal | −2.386 | (−2.565, −2.240, 0.325) | (−2.596, −2.207, 0.389) | (−2.697, −2.154, 0.543)
Nikkei225 | 5% | (−3.698, −2.991, 0.707) | GHYP | −3.307 | (−3.533, −3.111, 0.422) | (−3.571, −3.073, 0.498) | (−3.653, −3.009, 0.644)
Nikkei225 | 1% | (−5.964, −5.284, 0.680) | HS | −5.589 | (−5.970, −5.252, 0.718) | (−6.057, −5.206, 0.851) | (−6.190, −5.055, 1.135)
Nikkei225 | 1% | (−5.964, −5.284, 0.680) | Normal | −3.386 | (−3.614, −3.171, 0.443) | (−3.659, −3.127, 0.532) | (−3.746, −3.035, 0.711)
Nikkei225 | 1% | (−5.964, −5.284, 0.680) | GHYP | −5.959 | (−6.389, −5.643, 0.746) | (−6.451, −5.593, 0.858) | (−6.631, −5.483, 1.148)
DJI | 10% | (−1.023, −0.815, 0.208) | HS | −0.917 | (−0.985, −0.856, 0.129) | (−0.995, −0.845, 0.150) | (−1.017, −0.823, 0.194)
DJI | 10% | (−1.023, −0.815, 0.208) | Normal | −1.147 | (−1.226, −1.080, 0.146) | (−1.236, −1.067, 0.169) | (−1.266, −1.038, 0.228)
DJI | 10% | (−1.023, −0.815, 0.208) | GHYP | −0.909 | (−0.972, −0.853, 0.119) | (−0.994, −0.842, 0.152) | (−1.022, −0.822, 0.200)
DJI | 5% | (−1.459, −1.252, 0.207) | HS | −1.341 | (−1.432, −1.273, 0.159) | (−1.451, −1.260, 0.191) | (−1.467, −1.227, 0.240)
DJI | 5% | (−1.459, −1.252, 0.207) | Normal | −1.482 | (−1.569, −1.400, 0.169) | (−1.587, −1.386, 0.201) | (−1.618, −1.357, 0.261)
DJI | 5% | (−1.459, −1.252, 0.207) | GHYP | −1.336 | (−1.422, −1.261, 0.161) | (−1.437, −1.249, 0.188) | (−1.468, −1.223, 0.245)
DJI | 1% | (−2.436, −2.192, 0.244) | HS | −2.292 | (−2.426, −2.183, 0.243) | (−2.451, −2.161, 0.290) | (−2.505, −2.124, 0.381)
DJI | 1% | (−2.436, −2.192, 0.244) | Normal | −2.109 | (−2.223, −2.005, 0.218) | (−2.253, −1.984, 0.269) | (−2.292, −1.938, 0.354)
DJI | 1% | (−2.436, −2.192, 0.244) | GHYP | −2.326 | (−2.450, −2.214, 0.236) | (−2.474, −2.186, 0.288) | (−2.533, −2.144, 0.389)
CAC40 | 10% | (−2.501, −1.888, 0.613) | HS | −2.197 | (−2.319, −2.090, 0.229) | (−2.349, −2.066, 0.283) | (−2.391, −2.024, 0.367)
CAC40 | 10% | (−2.501, −1.888, 0.613) | Normal | −1.764 | (−1.863, −1.665, 0.198) | (−1.880, −1.650, 0.230) | (−1.919, −1.623, 0.296)
CAC40 | 10% | (−2.501, −1.888, 0.613) | GHYP | −2.218 | (−2.351, −2.113, 0.238) | (−2.372, −2.089, 0.283) | (−2.416, −2.053, 0.363)
CAC40 | 5% | (−3.577, −2.976, 0.601) | HS | −3.235 | (−3.405, −3.091, 0.314) | (−3.438, −3.062, 0.376) | (−3.510, −3.005, 0.505)
CAC40 | 5% | (−3.577, −2.976, 0.601) | Normal | −2.265 | (−2.384, −2.152, 0.232) | (−2.410, −2.124, 0.286) | (−2.456, −2.091, 0.365)
CAC40 | 5% | (−3.577, −2.976, 0.601) | GHYP | −3.142 | (−3.309, −3.005, 0.304) | (−3.339, −2.978, 0.361) | (−3.406, −2.931, 0.475)
CAC40 | 1% | (−5.477, −5.023, 0.454) | HS | −5.110 | (−5.367, −4.880, 0.487) | (−5.412, −4.825, 0.587) | (−5.532, −4.729, 0.803)
CAC40 | 1% | (−5.477, −5.023, 0.454) | Normal | −3.205 | (−3.370, −3.054, 0.316) | (−3.407, −3.030, 0.377) | (−3.470, −2.949, 0.521)
CAC40 | 1% | (−5.477, −5.023, 0.454) | GHYP | −5.281 | (−5.542, −5.053, 0.489) | (−5.578, −5.013, 0.565) | (−5.677, −4.921, 0.756)
