Large Deviations for a Class of Multivariate Heavy-Tailed Risk Processes Used in Insurance and Finance

by Miriam Hägele † and Jaakko Lehtomaa *,†
Department of Mathematics and Statistics, University of Helsinki, P.O. Box 64 (Gustaf Hällströmin katu 2), FI-00014 Helsinki, Finland
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
J. Risk Financial Manag. 2021, 14(5), 202; https://0-doi-org.brum.beds.ac.uk/10.3390/jrfm14050202
Submission received: 22 March 2021 / Revised: 22 April 2021 / Accepted: 26 April 2021 / Published: 2 May 2021
(This article belongs to the Special Issue Nonparametric Analysis of Economic and Financial Time Series Data)

Abstract:
Modern risk modelling approaches deal with vectors of multiple components. The components could be, for example, returns of financial instruments or losses within an insurance portfolio concerning different lines of business. One of the main problems is to decide if there is any type of dependence between the components of the vector and, if so, what type of dependence structure should be used for accurate modelling. We study a class of heavy-tailed multivariate random vectors under a non-parametric shape constraint on the tail decay rate. This class contains, for instance, elliptical distributions whose tail is in the intermediate heavy-tailed regime, which includes Weibull and lognormal type tails. The study derives asymptotic approximations for tail events of random walks. Consequently, a full large deviations principle is obtained under, essentially, minimal assumptions. As an application, an optimisation method for a large class of Quota Share (QS) risk sharing schemes used in insurance and finance is obtained.

1. Introduction and Assumptions

1.1. Introduction

Applications in finance and insurance require multivariate models with heavy-tailed distributions to accurately describe multivariate risks. This includes understanding the possible dependence types of large observations. The case where such observations are restricted to a subset, say an orthant, of the d-dimensional space $\mathbb{R}^d$ is studied in the setting of multivariate regular variation in Lehtomaa and Resnick (2020). Many studies on multivariate heavy-tailed distributions are built on the assumption of extremely heavy tails, e.g., regular variation Hult and Lindskog (2006); Hult et al. (2005); Mikosch and Wintenberger (2016); Nyrhinen (2009). In this paper, we concentrate on the less studied situation where the large observations can appear in any direction and where the tails are not as heavy as regularly varying tails. Such situations arise naturally for financial returns of portfolios, since the tails are often observed to have a lognormal type distribution Hardy (2001); Jensen and Maheu (2018); Tegnér and Poulsen (2018) and the observations can be present in all orthants Lehtomaa and Resnick (2020).
We study asymptotic approximations of random walks, i.e., multivariate processes $(\mathbf{S}_n) := (\mathbf{S}_n)_{n=1}^{\infty}$ in $\mathbb{R}^d$, where
$$\mathbf{S}_n = \mathbf{X}_1 + \dots + \mathbf{X}_n$$
and the increments $\mathbf{X}, \mathbf{X}_1, \mathbf{X}_2, \dots$ are independent and identically distributed (i.i.d.) random vectors. The class of studied increments is closely related to the class of multivariate subexponential vectors. Our class concerns tails that are lighter than polynomial, so the variables have finite moments of all orders. There exist at least three different approaches in the literature to define multivariate subexponentiality. The definitions in Cline and Resnick (1992); Omey (2006) require, in addition to subexponentiality of the marginal distributions, a multivariate version of long-tailedness. The approach in Samorodnitsky and Sun (2016) uses an alternative definition via fixed ruin sets in order to define a one-dimensional distribution function with respect to each set. The distribution class considered in this paper is consistent with the definition of Samorodnitsky and Sun (2016). For the one-dimensional case, Denisov et al. (2008) provides an overview of large deviation results for subexponential distributions.
We write $\mathbf{X}$ in product form as
$$\mathbf{X} = R\,\mathbf{U}.$$
The one-dimensional radius variable R controls the heaviness of the increments, and $\mathbf{U}$ indicates which directions (defined by unit vectors) are possible. The variable R can have, for example, a Weibull or lognormal type distribution. This definition can be extended to include the class of elliptical distributions, which frequently appear in the literature in applications in finance, see, for instance, Hult and Lindskog (2002); Klüppelberg et al. (2007). Notably, the tail decay speed of R is not restricted to a narrowly defined parametric class.
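As a concrete illustration of the product form (a minimal simulation sketch; the Weibull radius, the dimension, and all parameter values are illustrative assumptions, not choices made in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_increments(n, d=2, beta=0.5):
    """Draw n increments X = R * U with a Weibull-type radius R,
    P(R > x) = exp(-x**beta), and U uniform on the unit sphere S^{d-1}."""
    R = rng.weibull(beta, size=n)                    # heavy-tailed radius
    U = rng.standard_normal((n, d))
    U /= np.linalg.norm(U, axis=1, keepdims=True)    # normalise to the sphere
    return R[:, None] * U

X = sample_increments(10_000)
print("empirical mean, should be close to E(RU) = 0:", X.mean(axis=0))
```

Any direction distribution satisfying Assumption (A2) below could replace the uniform one, as long as the centring $E(R\mathbf{U}) = \mathbf{0}$ is preserved.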
The proof methods are based on earlier results concerning one-dimensional random walks such as the ones presented in Lehtomaa (2017). A full large deviations principle with non-trivial rate function under, essentially, minimal assumptions on the distribution is also derived. This study complements Mikosch and Rodionov (2021), which considers lognormal distributions, and the result presented in Bazhba et al. (2020), which focuses on Weibull distributions in the one-dimensional setting. As an application, we get an optimisation method for Quota Share (QS) risk sharing schemes, which are widely used in the field of reinsurance. In a QS-contract, there are two participants called the ceding company and the reinsurance company. They agree to share a random risk Y so that one pays $qY$ and the other pays $(1-q)Y$. Our aim is to optimise the portions q when a company buys reinsurance for all lines of business, i.e., each component of $\mathbf{X}$ is shared with a reinsurance company. The optimisation is carried out from the viewpoint of both the ceding and the reinsurance company.

1.2. Notation

We denote vectors by bold symbols and their components by upper indices, e.g., for $\mathbf{x} \in \mathbb{R}^d$ we write $\mathbf{x} = (x^1, \dots, x^d)^T$. The inner product of the vectors $\mathbf{x}$ and $\mathbf{y}$ is denoted by $\langle \mathbf{x}, \mathbf{y} \rangle = \sum_{j=1}^d x^j y^j$ and $\|\mathbf{x}\|_2$ is the $L_2$-norm. Here, $\|\mathbf{x}\|_2$ is called the length of $\mathbf{x}$ and $\mathbf{x}/\|\mathbf{x}\|_2$ the direction of $\mathbf{x}$. $S^{d-1}$ is the d-dimensional unit sphere, the subset of $\mathbb{R}^d$ including all vectors with $L_2$-norm equal to one. The notation $B^{\circ}$ stands for the interior of the set B, $\overline{B}$ for its closure, and $B^c$ for its complement. For $r \in \mathbb{R}_+$ and $S \subset S^{d-1}$, we set
$$V_{r,S} := \left\{ \mathbf{x} \in \mathbb{R}^d : \|\mathbf{x}\|_2 > r,\ \frac{\mathbf{x}}{\|\mathbf{x}\|_2} \in S \right\}, \qquad (1)$$
where the expression $A := B$ means that A is defined by B. $B(\mathbf{x}, a)$ denotes the ball centred at $\mathbf{x}$ with radius a, and we denote the inverse function of f by $f^{-1}$.
The asymptotic relation $f(x) \sim g(x)$, as $x \to \infty$, means $\lim_{x\to\infty} f(x)/g(x) = 1$, and the little-o notation $g(x) = o(f(x))$ means $\lim_{x\to\infty} g(x)/f(x) = 0$. We take the limit $x \to \infty$ or $n \to \infty$, where x denotes real and n natural numbers. The symbol $\mathbb{1}(C)$ denotes the indicator function of the event C, $P(C)$ its probability, and $E(\mathbf{X})$ stands for the expectation of $\mathbf{X}$. By A, we denote a symmetric $d \times d$ matrix, and $\Omega \subset \mathbb{R}^d$ is the ellipsoid generated by the linear transformation $\Lambda : \mathbb{R}^d \to \mathbb{R}^d$, $\Lambda(\mathbf{x}) = A\mathbf{x}$, of the unit sphere, $\Omega = \Lambda(S^{d-1})$.

1.3. Model Assumptions

The aim is to derive a large deviations principle for elliptical multivariate distributions with moderately heavy tails. Therefore, we study the asymptotic behaviour of the random walk $(\mathbf{S}_n)$, where
$$\mathbf{S}_n = \mathbf{X}_1 + \dots + \mathbf{X}_n,$$
and $\mathbf{X}, \mathbf{X}_1, \mathbf{X}_2, \dots$ are i.i.d. increments. Here, $\mathbf{X}$ is the product of a heavy-tailed random variable R and a random vector $\mathbf{U}$ or $\boldsymbol{\Theta}$, similarly to the setting in Hägele (2020). We assume that $\mathbf{U}$ is distributed on the d-dimensional unit sphere $S^{d-1}$, and that $\boldsymbol{\Theta}$ is distributed on a d-dimensional ellipsoid $\Omega$.
We make the following, essentially minimal, assumptions on R, $\mathbf{U}$, and $\boldsymbol{\Theta}$. The assumptions are essentially minimal in the sense that omitting one or more of them typically causes the main results to fail in the given set of non-parametric distributions. However, we assume a certain level of smoothness of the tail of R. This assumption is made because it allows us to present the results in a less technical way.
(A1)
The tail function of the random variable R satisfies
$$-\log(P(R > x)) \sim h(x), \qquad (2)$$
as $x \to \infty$, where $h(x)$ is an increasing and concave function such that
(i) $h(x) = o(x)$ and
(ii) $\log(x) = o(h(x))$, as $x \to \infty$.
(A2)
The random vector $\mathbf{U} \in S^{d-1}$ has a distribution on the d-dimensional unit sphere $S^{d-1}$. Let $S \subset S^{d-1}$ be a subset with positive Lebesgue measure. We assume that $P(\mathbf{U} \in S) > 0$. In addition, $\mathbf{U}$ is assumed to be asymptotically independent of the random variable R in the sense that
$$\lim_{x\to\infty} P(\mathbf{U} \in S \mid R > x) = P(\mathbf{U} \in S),$$
and $E(R\mathbf{U}) = \mathbf{0}$.
Remark 1.
The random vector $\mathbf{U}$ does not have to be uniformly distributed on the unit sphere. It is enough that, for any non-empty open set $S \subset S^{d-1}$, it holds that
$$P(R > 0, \mathbf{U} \in S) > 0.$$
For instance, in this model, it is possible to study a multivariate distribution which assigns more probability mass to some directions than to others, provided that the centring condition $E(\mathbf{X}) = \mathbf{0}$ of Assumption (A2) is still fulfilled.
We can then define the random vector $\boldsymbol{\Theta}$ by a linear transformation from the unit sphere $S^{d-1}$ to the d-dimensional ellipsoid $\Omega$ centred at the origin. The linear transformation $\Lambda : \mathbb{R}^d \to \mathbb{R}^d$ with $\Lambda(\mathbf{x}) = A\mathbf{x}$, where A is a symmetric, positive definite matrix, generates the ellipsoid $\Omega = \Lambda(S^{d-1})$. The random vector $\boldsymbol{\Theta}$ can then be written as a transformed vector, $\boldsymbol{\Theta} = \Lambda(\mathbf{U})$. If A is a diagonal matrix, the ellipsoid is orientated along the axes.
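A short sketch of the elliptical construction $\boldsymbol{\Theta} = \Lambda(\mathbf{U})$ (the matrix below is an illustrative choice of a symmetric, positive definite A, not one used in the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

A = np.array([[2.0, 0.5],
              [0.5, 1.0]])                       # symmetric, positive definite

U = rng.standard_normal((5, 2))
U /= np.linalg.norm(U, axis=1, keepdims=True)    # points on the unit circle S^1
Theta = U @ A                                    # Lambda(U) = A U (A is symmetric)

# every Theta lies on the ellipse Omega: ||A^{-1} Theta||_2 = 1
print(np.linalg.norm(Theta @ np.linalg.inv(A), axis=1))
```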
Instead of defining Θ through the linear transformation Λ , we can write its definition in a similar way as for the random vector U .
(A2′)
The d-dimensional random vector $\boldsymbol{\Theta}$ is distributed on an ellipse or ellipsoid $\Omega$ centred at the origin with $E(R\boldsymbol{\Theta}) = \mathbf{0}$. It holds for every set $S \subset \Omega$ with positive Lebesgue measure that $P(\boldsymbol{\Theta} \in S) > 0$, and $\boldsymbol{\Theta}$ is asymptotically independent of the random variable R in the sense that $\lim_{x\to\infty} P(\boldsymbol{\Theta} \in S \mid R > x) = P(\boldsymbol{\Theta} \in S)$.
Remark 2.
Assumption (A1) implies that the random variable R is heavy-tailed in the sense that $E(e^{sR}) = \infty$ for all $s > 0$. The fact that $\log(x) = o(h(x))$ implies $E(R^s) < \infty$ for all $s > 0$, so the random variable R has finite moments of all orders. Furthermore, it follows from Assumptions (A2) and (A2′) that the support of the random vector $\mathbf{U}$ or $\boldsymbol{\Theta}$ is the entire set $S^{d-1}$ or Ω.
Remark 3.
There are visual methods and statistical tests to analyse whether the data fulfils the assumptions. For the heaviness of the tail, one can test the one-dimensional empirical tail distribution of the norm of the observations, for example, with the method introduced in Asmussen and Lehtomaa (2017). The case where probability mass is concentrated on a subset of possible directions is studied in, e.g., Lehtomaa and Resnick (2020).
Assumption (A1) is closely related to the class of subexponential distributions that is introduced, for instance, in Embrechts et al. (1997); Foss et al. (2011).
Lemma 1.
If Assumption (A1) holds with (2) as an equality for large enough arguments, the distribution of R belongs to the class of subexponential distributions.
Proof. 
The statement follows directly from Theorem 2 of Teugels (1975), which gives a sufficient condition for subexponentiality of a distribution based on tail functions. The condition has three requirements, two of which are immediately true by our definition. To check the remaining condition, we can define an auxiliary function $g(x) := (x/h(x))^{1/2}$. Then $g(x) \to \infty$ and $x - g(x) \to \infty$, as $x \to \infty$. Without loss of generality, we can assume $h(0) \geq 0$, see Remark 5. Due to the concavity, it holds that
$$\lim_{x\to\infty} \frac{P(R > x - g(x))}{P(R > x)} = \lim_{x\to\infty} \exp\left( -h\!\left( \left(1 - \frac{g(x)}{x}\right) x \right) + h(x) \right) \leq \lim_{x\to\infty} \exp\left( -\left(1 - \frac{g(x)}{x}\right) h(x) + h(x) \right) = \lim_{x\to\infty} \exp\left( \frac{g(x)}{x}\, h(x) \right) = \lim_{x\to\infty} \exp\left( g(x)^{-1} \right) = 1.$$
The corresponding bound in the other direction is immediate, since $P(R > x - g(x)) \geq P(R > x)$. □
Example 1.
Simple examples of distributions of R that fulfil Assumption (A1) include Weibull distributions with parameter $\beta \in (0,1)$ and lognormal type distributions, which are defined by the relation $P(R > x) = e^{-(\log(x))^p}$ for $x > x_0$, where $p > 1$.
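As a routine check (not spelled out in the paper), both families indeed satisfy (A1): for a standard Weibull distribution, $-\log P(R > x) = x^{\beta} =: h(x)$ with $\beta \in (0,1)$, which is increasing and concave with $h(x) = o(x)$ and $\log(x) = o(x^{\beta})$; for the lognormal type tail, $h(x) = (\log(x))^p$ with $p > 1$, which is increasing and concave for large x, satisfies $h(x) = o(x)$, and $\log(x) = o((\log(x))^p)$.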
Assumption (A1) can be used to obtain bounds even if it does not hold immediately for a given tail function $P(R > x)$. For example, if R can be stochastically bounded by, say, $\underline{R}$ and $\overline{R}$ in the sense that
$$P(\underline{R} > x) \leq P(R > x) \leq P(\overline{R} > x)$$
and the variables $\underline{R}$ and $\overline{R}$ satisfy (A1) (possibly with different concave functions), a result can be obtained if the asymptotic behaviour concerning the upper and lower bounds coincides in a suitable sense. For a concrete example of this, recall that a random variable R belongs to the class of stretched exponential distributions if, for large enough x, the inequalities
$$l_1(x)\, e^{-l(x) x^{\beta}} \leq P(R > x) \leq l_2(x)\, e^{-l(x) x^{\beta}}$$
hold, where $\beta \in (0,1)$ and $l, l_1, l_2$ are slowly varying functions. This class is studied in particular in Gantert (1998); Gantert et al. (2014). Here, Assumption (A1) is valid if $l(x) x^{\beta}$ is a concave function for large enough x. If it is not, we can still find, based on Theorem 1 of Lehtomaa (2017), a function $\underline{h}(x)$ which satisfies Assumption (A1) and the inequality $P(R > x) \leq e^{-\underline{h}(x)}$ for large enough x and
$$\liminf_{x\to\infty} \frac{-\log P(R > x)}{\underline{h}(x)} = 1.$$
This fact can be used in the proofs by replacing $P(R > x)$ by $e^{-\underline{h}(x)}$ in suitable places in order to obtain results also for the stretched exponential class.

2. Asymptotics of Spherical Distributions

2.1. Large Deviations Principle

Throughout this section, we study the random walk $(\mathbf{S}_n)$ generated by random vectors of the form $\mathbf{X} = R\mathbf{U}$, where the random variable R fulfils Assumption (A1), and the random vector $\mathbf{U}$ fulfils Assumption (A2). We examine the probability of the asymptotic event that the random walk exceeds a threshold in a selected norm in order to prove a large deviations theorem. In this study, we choose to use the $L_2$-norm because it is, in our view, a natural choice when dealing with ellipses.
We start by considering a spherical distribution. The result is later extended to the setting of asymptotically elliptical heavy-tailed distributions. The proofs of the theorems stated below can be found in Section 2.3. The first result concerns logarithmic asymptotics of the norm of the random walk.
Theorem 1.
Let $a > 0$ be a fixed number. Suppose the increment of the random walk $(\mathbf{S}_n)$ is of the form $\mathbf{X} = R\mathbf{U}$, where Assumptions (A1) and (A2) hold. Then,
$$\lim_{n\to\infty} \frac{\log(P(\|\mathbf{S}_n\|_2 > na))}{h(na)} = -1.$$
The asymptotic relation derived in Theorem 1 yields a full large deviations principle with non-trivial rate function for asymptotically spherical heavy-tailed distributions under an additional technical assumption.
Theorem 2.
Let $\mathbf{X} = R\mathbf{U}$, where R and $\mathbf{U}$ fulfil Assumptions (A1) and (A2). Additionally, assume that, for $a > 0$, the limit
$$\lim_{x\to\infty} \frac{h(ax)}{h(x)} \qquad (3)$$
exists. Then, the process $\{\mathbf{S}_n/n\}$ satisfies the large deviations principle with rate function
$$I(\mathbf{x}) = \begin{cases} \lim_{n\to\infty} \dfrac{h(n\|\mathbf{x}\|_2)}{h(n)}, & \text{if } \mathbf{x} \neq \mathbf{0}, \\[2pt] 0, & \text{if } \mathbf{x} = \mathbf{0}, \end{cases}$$
and scale h, i.e.,
$$-\inf_{\mathbf{y} \in B^{\circ}} I(\mathbf{y}) \leq \liminf_{n\to\infty} \frac{\log(P(\mathbf{S}_n/n \in B))}{h(n)} \leq \limsup_{n\to\infty} \frac{\log(P(\mathbf{S}_n/n \in B))}{h(n)} \leq -\inf_{\mathbf{y} \in \overline{B}} I(\mathbf{y})$$
for all Borel sets $B \subset \mathbb{R}^d$.
The large deviations principle in Theorem 2 is a multivariate equivalent of the large deviation principle in Lehtomaa (2017) for d-dimensional spherical random vectors.
Corollary 1.
If, in addition to the assumptions of Theorem 2, $I(\mathbf{x})$ is continuous for all $\mathbf{x} \in \mathbb{R}^d$, and the Borel set B fulfils $\overline{B^{\circ}} = \overline{B}$, it holds that
$$\lim_{n\to\infty} \frac{\log(P(\mathbf{S}_n \in nB))}{h(n)} = -\inf_{\mathbf{x} \in B} I(\mathbf{x}).$$
Proof. 
The claim follows directly from the assumed continuity of the rate function. □
Remark 4.
The existence of Limit (3) implies
$$\lim_{x\to\infty} \frac{h(ax)}{h(x)} = a^{\alpha}$$
for some $\alpha \geq 0$ due to Theorem 1.4.1 in Bingham et al. (1989), so $h(x)$ is, in fact, a regularly varying function.
The rate function is symmetric with respect to the origin. Furthermore, the one-dimensional rate function along any line segment with an endpoint in the origin is concave like the rate function in the one-dimensional case examined in Lehtomaa (2017).
The following example examines the rate function for typical distributions that fulfil Assumption (A1).
Example 2.
(i) 
Let R be Weibull distributed with parameter $\beta \in (0,1)$. Then, $h(x) = c x^{\beta}$ with some constant $c > 0$, so the index α from Remark 4 is equal to β. Furthermore,
$$I(\mathbf{x}) = \|\mathbf{x}\|_2^{\beta}$$
for all $\mathbf{x} \in \mathbb{R}^d$ and I is a good rate function. The rate function $I(\mathbf{x})$ is continuous, so Corollary 1 holds.
(ii) 
If R has a lognormal type distribution of the form $P(R > x) = e^{-(\log(x))^p}$ for $x > x_0$ with some parameter $p > 1$, the index α from Remark 4 is 0, and
$$I(\mathbf{x}) = 1$$
for all $\mathbf{x} \in \mathbb{R}^d \setminus \{\mathbf{0}\}$, so the rate function I jumps at the origin. The rate function is non-trivial, but not good, according to the terminology used in the context of large deviations.
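The ratio in Theorem 1 can be probed numerically in the Weibull case of part (i). The following crude Monte Carlo sketch uses illustrative parameter values (the dimension, β, n, a, and the sample size are all assumptions made here for demonstration); as discussed in the Conclusions, n or a must be fairly large before the estimate approaches −1.

```python
import numpy as np

rng = np.random.default_rng(3)

d, beta = 2, 0.5                  # dimension and Weibull shape, beta in (0, 1)
n, a, N = 50, 3.0, 200_000        # walk length, level, number of simulated walks
h = lambda x: x ** beta           # numpy's standard Weibull: P(R > x) = exp(-x**beta)

S = np.zeros((N, d))
for _ in range(n):                # build S_n increment by increment
    R = rng.weibull(beta, size=N)
    U = rng.standard_normal((N, d))
    U /= np.linalg.norm(U, axis=1, keepdims=True)
    S += R[:, None] * U

p_hat = np.mean(np.linalg.norm(S, axis=1) > n * a)
print("log P(||S_n||_2 > na) / h(na) ~", np.log(p_hat) / h(n * a))
# Theorem 1 predicts a limit of -1; for moderate n the estimate is still noticeably above -1.
```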

2.2. Auxiliary Results

In order to prove Theorems 1 and 2, we need some auxiliary lemmas. The auxiliary results study the projection of the random walk to a one-dimensional setting and its asymptotics.
The orthogonal projection $P_{\mathbf{v}}(\cdot)$ on the subspace spanned by the vector $\mathbf{v} \in S^{d-1}$ is defined as
$$P_{\mathbf{v}}(\mathbf{x}) := \langle \mathbf{v}, \mathbf{x} \rangle\, \mathbf{v},$$
where the inner product $p_{\mathbf{v}}(\mathbf{x}) := \langle \mathbf{v}, \mathbf{x} \rangle$ indicates the length of the projected vector in the subspace according to the $L_2$-norm.
The next result shows that the projection of the random vector X has the same asymptotic behaviour as X 2 .
Lemma 2.
Suppose $\mathbf{X} = R\mathbf{U}$, where Assumptions (A1) and (A2) hold. Let $a > 0$ and $\mathbf{v} \in S^{d-1}$. Then,
$$\lim_{n\to\infty} \frac{\log(P(p_{\mathbf{v}}(\mathbf{X}) > na))}{h(na)} = -1.$$
Proof. 
The asymptotic upper bound is due to
$$\log(P(p_{\mathbf{v}}(\mathbf{X}) > na)) \leq \log(P(R > na)) \sim -h(na),$$
since $\langle \mathbf{v}, \mathbf{U} \rangle \leq 1$.
To prove the asymptotic lower bound, we fix $\delta > 0$ and let $S(\mathbf{v}, \delta) \subset S^{d-1}$ be the δ-neighbourhood of the vector $\mathbf{v}$ in $S^{d-1}$. Defining
$$c_{\mathbf{v}}^{\delta} := \min_{\mathbf{y} \in S(\mathbf{v}, \delta)} \langle \mathbf{v}, \mathbf{y} \rangle,$$
which is positive when δ is chosen small enough, it holds that
$$\log(P(p_{\mathbf{v}}(\mathbf{X}) > na)) \geq \log P\left( \langle \mathbf{v}, \mathbf{U} \rangle R > na,\ \mathbf{U} \in S(\mathbf{v}, \delta) \right) \geq \log P\left( R > \frac{na}{c_{\mathbf{v}}^{\delta}},\ \mathbf{U} \in S(\mathbf{v}, \delta) \right) = \log P\left( R > \frac{na}{c_{\mathbf{v}}^{\delta}} \right) + \log P\left( \mathbf{U} \in S(\mathbf{v}, \delta) \,\Big|\, R > \frac{na}{c_{\mathbf{v}}^{\delta}} \right).$$
Hence,
$$\liminf_{n\to\infty} \frac{\log(P(p_{\mathbf{v}}(\mathbf{X}) > na))}{h(na)} \geq \liminf_{n\to\infty} \frac{\log P(\mathbf{U} \in S(\mathbf{v}, \delta)) - h\!\left( \frac{na}{c_{\mathbf{v}}^{\delta}} \right)}{h(na)} = \liminf_{n\to\infty} \left( -\frac{h\!\left( \frac{na}{c_{\mathbf{v}}^{\delta}} \right)}{h(na)} \right),$$
which holds for every $\delta > 0$ small enough and thus proves the claim, since $c_{\mathbf{v}}^{\delta} \to 1$, as $\delta \to 0$. □
In the proof of Lemma 4, we divide the probability into the term caused by a single big jump and its complement. The following lemma provides an upper bound for the remaining term not caused by a single big jump.
Lemma 3.
Let $Y, Y_1, Y_2, \dots$ be i.i.d. real-valued random variables with $E(Y) = 0$ and finite moments of all orders, and suppose $a > 0$. Furthermore, let
$$\liminf_{x\to\infty} \frac{-\log(P(Y > x))}{h(x)} \geq 1,$$
where $h(x)$ fulfils the assumptions on the function $h(x)$ stated in Assumption (A1) and, additionally, $h(0) \geq 0$. Then,
$$\limsup_{n\to\infty} \frac{\log P\left( \sum_{i=1}^n Y_i > na,\ \max_{i=1,\dots,n} Y_i \leq na \right)}{h(na)} \leq -1.$$
The proof uses similar ideas as the proof of Theorem 2 in Lehtomaa (2017).
Proof of Lemma 3.
Due to the inequality
$$E\left( e^{b_n \sum_{i=1}^n Y_i}\, \mathbb{1}\!\left( \sum_{i=1}^n Y_i > na,\ \max_{i=1,\dots,n} Y_i \leq na \right) \right) \geq e^{b_n na}\, P\left( \sum_{i=1}^n Y_i > na,\ \max_{i=1,\dots,n} Y_i \leq na \right)$$
and the independence of the random variables, it holds that
$$P\left( \sum_{i=1}^n Y_i > na,\ \max_{i=1,\dots,n} Y_i \leq na \right) \leq \exp(-b_n na) \left( E\left( e^{b_n Y}\, \mathbb{1}(Y \leq na) \right) \right)^n.$$
To bound the expectation from above, we split it into two parts,
$$E\left( e^{b_n Y} \mathbb{1}(Y \leq na) \right) = E\left( e^{b_n Y} \mathbb{1}(Y \leq c_n) \right) + E\left( e^{b_n Y} \mathbb{1}(c_n < Y \leq na) \right) =: E_1 + E_2.$$
Taking $\delta \in (0,1)$ and setting $b_n := (1-\delta) h(na)/(na) \to 0$, one can choose $\varepsilon(n) \to 0$ such that $c_n := b_n^{-1} \varepsilon(n) \to \infty$, as $n \to \infty$. Then, it holds that $b_n c_n \to 0$ and therefore one can apply a Taylor expansion to the first term $E_1$,
$$E_1 = E\left( (1 + b_n Y + o(b_n Y))\, \mathbb{1}(Y \leq c_n) \right) = P(Y \leq c_n) + b_n E(Y \mathbb{1}(Y \leq c_n))(1 + o(1)).$$
Integrating $E_2$ by parts and rewriting it in terms of the tail distribution of Y, one gets
$$E_2 = e^{b_n na} P(Y \leq na) - e^{b_n c_n} P(Y \leq c_n) - \int_{c_n}^{na} b_n e^{b_n x} P(Y \leq x)\, dx = e^{b_n c_n} P(Y > c_n) - e^{b_n na} P(Y > na) + b_n \int_{c_n}^{na} e^{b_n y} P(Y > y)\, dy.$$
For every $\delta > 0$, it holds that $P(Y > y) \leq \exp(-(1 - \delta/2) h(y))$ for all $y \geq c_n$, choosing n large enough. Applying additionally a Taylor expansion to the first term of the latter equation results in the upper bound
$$E_2 \leq (1 + b_n c_n + o(b_n c_n))\, P(Y > c_n) + b_n \int_{c_n}^{na} \exp\left( b_n y - \left(1 - \frac{\delta}{2}\right) h(y) \right) dy.$$
Due to the concavity of the function $h(x)$ and the fact that $y \leq na$, it holds that
$$h(y) \geq \frac{y}{na}\, h(na),$$
and therefore
$$b_n \int_{c_n}^{na} \exp\left( b_n y - \left(1 - \frac{\delta}{2}\right) h(y) \right) dy = b_n \int_{c_n}^{na} \exp\left( (1-\delta) \frac{y}{na} h(na) - (1-\delta) h(y) - \frac{\delta}{2} h(y) \right) dy \leq b_n \int_{c_n}^{na} \exp\left( -\frac{\delta}{2} h(y) \right) dy \leq b_n\, na \exp\left( -\frac{\delta}{2} h(c_n) \right).$$
Applying the inequality $\log(x) \leq x - 1$ to the term $n \log(E_1 + E_2)/h(na)$ yields
$$\frac{n}{h(na)} \log E\left( e^{b_n Y} \mathbb{1}(Y \leq na) \right) \leq \frac{n}{h(na)} \log\Big( P(Y \leq c_n) + b_n E(Y \mathbb{1}(Y \leq c_n))(1 + o(1)) + P(Y > c_n) + b_n c_n P(Y > c_n)(1 + o(1)) + b_n\, na \exp\left( -\tfrac{\delta}{2} h(c_n) \right) \Big) \leq \frac{1 - \delta + o(1)}{a} \Big( E(Y \mathbb{1}(Y \leq c_n)) + c_n P(Y > c_n) \Big) + (1 - \delta)\, n \exp\left( -\tfrac{\delta}{2} h(c_n) \right).$$
The first terms converge to zero, because $E(Y) = 0$ and all moments are finite. Choosing $\varepsilon(n)$ such that $\log(n) = o(h(c_n))$, also the last term converges to zero. Finally,
$$\limsup_{n\to\infty} \frac{\log P\left( \sum_{i=1}^n Y_i > na,\ \max_{i=1,\dots,n} Y_i \leq na \right)}{h(na)} \leq \limsup_{n\to\infty} \left( -\frac{b_n\, na}{h(na)} + \frac{n}{h(na)} \log E\left( e^{b_n Y} \mathbb{1}(Y \leq na) \right) \right) = -(1-\delta).$$
The last inequality holds for any δ > 0 , which implies the claim. □
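To make the choice of the auxiliary sequences concrete (an illustrative check added here, not part of the original proof), consider the Weibull case $h(x) = x^{\beta}$ with $\beta \in (0,1)$. One may take
$$b_n = (1-\delta)(na)^{\beta - 1}, \qquad \varepsilon(n) = \frac{1}{\log n}, \qquad c_n = \frac{\varepsilon(n)}{b_n} = \frac{(na)^{1-\beta}}{(1-\delta)\log n}.$$
Then $b_n c_n = \varepsilon(n) \to 0$, $c_n \to \infty$, and $h(c_n) = c_n^{\beta}$ grows like a positive power of n divided by $(\log n)^{\beta}$, so that indeed $\log(n) = o(h(c_n))$.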
Remark 5.
If h is an increasing concave function with $h(x) = o(x)$ and $\log(x) = o(h(x))$, one can always construct an increasing concave function $\tilde{h}$ with $\tilde{h}(0) \geq 0$ and $\tilde{h}(x) \sim h(x)$, as $x \to \infty$, by changing h to a suitable linear function near the origin. This means that $\tilde{h}$ is a sub-additive function.
We can now state the principle of a single big jump for projections of multivariate random walks.
Lemma 4.
Let $\mathbf{X} = R\mathbf{U}$, assume (A1) and (A2), and let $\mathbf{v} \in S^{d-1}$ and $a > 0$ be fixed. Then, it holds that
$$\lim_{n\to\infty} \frac{\log(P(p_{\mathbf{v}}(\mathbf{S}_n) > na))}{\log(P(p_{\mathbf{v}}(\mathbf{X}) > na))} = 1.$$
Proof. 
At first, we show the asymptotic lower bound
$$\liminf_{n\to\infty} \frac{-\log(P(p_{\mathbf{v}}(\mathbf{S}_n) > na))}{\log(P(p_{\mathbf{v}}(\mathbf{X}) > na))} \geq -1.$$
Using the principle of a single big jump and the weak law of large numbers, it follows that
$$\liminf_{n\to\infty} \frac{-\log(P(p_{\mathbf{v}}(\mathbf{S}_n) > na))}{\log(P(p_{\mathbf{v}}(\mathbf{X}) > na))} \geq \liminf_{n\to\infty} \frac{-\log(P(p_{\mathbf{v}}(\mathbf{S}_{n-1}) \geq -\varepsilon na,\ p_{\mathbf{v}}(\mathbf{X}_n) > (1+\varepsilon) na))}{\log(P(p_{\mathbf{v}}(\mathbf{X}) > na))} \geq \liminf_{n\to\infty} \left( \frac{-\log(P(p_{\mathbf{v}}(\mathbf{S}_{n-1}) \geq -\varepsilon na))}{\log(P(p_{\mathbf{v}}(\mathbf{X}) > na))} - \frac{\log(P(p_{\mathbf{v}}(\mathbf{X}) > (1+\varepsilon) na))}{\log(P(p_{\mathbf{v}}(\mathbf{X}) > na))} \right) = \liminf_{n\to\infty} \left( -\frac{h((1+\varepsilon) na)}{h(na)} \right).$$
The last step uses, in addition to the weak law of large numbers, Lemma 2. The fact that this holds for every $\varepsilon > 0$ implies the asymptotic lower bound.
It remains to show the asymptotic upper bound
$$\limsup_{n\to\infty} \frac{-\log(P(p_{\mathbf{v}}(\mathbf{S}_n) > na))}{\log(P(p_{\mathbf{v}}(\mathbf{X}) > na))} \leq -1.$$
Dividing the probability into the case where the projection of at least one random vector exceeds the threshold na and its complement implies
$$P(p_{\mathbf{v}}(\mathbf{S}_n) > na) \leq n P(p_{\mathbf{v}}(\mathbf{X}) > na) + P\left( p_{\mathbf{v}}(\mathbf{S}_n) > na,\ p_{\mathbf{v}}(\mathbf{X}_i) \leq na \ \text{ for all } \ i = 1, \dots, n \right).$$
Applying Lemma 1.2.15 in Dembo and Zeitouni (1993),
$$\limsup_{\varepsilon \to 0} \varepsilon \log(a_{\varepsilon}^1 + a_{\varepsilon}^2) = \max\left\{ \limsup_{\varepsilon \to 0} \varepsilon \log(a_{\varepsilon}^1),\ \limsup_{\varepsilon \to 0} \varepsilon \log(a_{\varepsilon}^2) \right\},$$
we can examine the terms separately, since $-1/\log(P(p_{\mathbf{v}}(\mathbf{X}) > na)) \to 0$, as $n \to \infty$. We get
$$\limsup_{n\to\infty} \frac{-\log(P(p_{\mathbf{v}}(\mathbf{S}_n) > na))}{\log(P(p_{\mathbf{v}}(\mathbf{X}) > na))} \leq \max\left\{ \limsup_{n\to\infty} \frac{-\log(n P(p_{\mathbf{v}}(\mathbf{X}) > na))}{\log(P(p_{\mathbf{v}}(\mathbf{X}) > na))},\ \limsup_{n\to\infty} \frac{-\log(P(p_{\mathbf{v}}(\mathbf{S}_n) > na,\ p_{\mathbf{v}}(\mathbf{X}_i) \leq na \text{ for all } i = 1, \dots, n))}{\log(P(p_{\mathbf{v}}(\mathbf{X}) > na))} \right\}.$$
Clearly, the first term is −1. For the second term, we set $Y = p_{\mathbf{v}}(\mathbf{X})$. Since
$$\log(P(p_{\mathbf{v}}(\mathbf{X}) > na)) = \log(P(\langle \mathbf{v}, \mathbf{U} \rangle R > na)) \leq \log(P(R > na)) \sim -h(na),$$
we can apply Lemma 3 with the help of Remark 5. Finally, applying Lemma 2 we get the upper bound −1 also for the second term, which completes the proof. □

2.3. Proof of Theorems 1 and 2

We can now state the proofs of the main results of Section 2.
Proof of Theorem 1.
First, we approximate the event $\{\|\mathbf{S}_n\|_2 > na\}$ by projections in different directions. The fact that the principle of a single big jump holds for any orthogonal projection yields the desired asymptotic behaviour. Since $P(\|\mathbf{S}_n\|_2 > na) \geq P(p_{\mathbf{v}}(\mathbf{S}_n) > na)$ for every $\mathbf{v} \in S^{d-1}$, the asymptotic lower bound
$$\liminf_{n\to\infty} \frac{\log(P(\|\mathbf{S}_n\|_2 > na))}{h(na)} \geq -1$$
is an immediate consequence of Lemmas 2 and 4.
To prove the corresponding upper bound, we cover the set $\{\mathbf{x} : \|\mathbf{x}\|_2 > na\}$ by a finite union of m sets defined by orthogonal projections and study the limit, as $m \to \infty$. To this end, let $m \geq 2d$. We aim to choose vectors $\mathbf{v}_k \in S^{d-1}$, $k = 1, \dots, m$, such that unions of the form $\bigcup_{k=1}^m \{\mathbf{x} : p_{\mathbf{v}_k}(\mathbf{x}) > \varepsilon\}$, where $\varepsilon > 0$, can be used to cover the whole space except some neighbourhood of the origin. For example, in $\mathbb{R}^2$, an easy way to define the vectors $\mathbf{v}_k \in S^1$ is to take $\mathbf{v}_k = \left( \cos(2k\pi/m), \sin(2k\pi/m) \right)^T$.
Choosing $\mathbf{v}_k$ appropriately, for instance, such that they are uniformly spaced on the unit sphere, there exists some $\varepsilon(m) > 0$ such that
$$D := \bigcup_{k=1}^m \{\mathbf{x} : p_{\mathbf{v}_k}(\mathbf{x}) > (1 - \varepsilon(m)) na\} \supset \{\mathbf{x} : \|\mathbf{x}\|_2 > na\}$$
and, more specifically, we can choose the numbers $\varepsilon(m)$ so that $\varepsilon(m) \to 0$, as $m \to \infty$. Hence,
$$P(\|\mathbf{S}_n\|_2 > na) \leq P(\mathbf{S}_n \in D) \leq \sum_{k=1}^m P(p_{\mathbf{v}_k}(\mathbf{S}_n) > (1 - \varepsilon(m)) na).$$
Lemma 1.2.15 in Dembo and Zeitouni (1993) yields
$$\limsup_{n\to\infty} \frac{\log(P(\|\mathbf{S}_n\|_2 > na))}{h(na)} \leq \limsup_{n\to\infty} \frac{\log\left( \sum_{k=1}^m P(p_{\mathbf{v}_k}(\mathbf{S}_n) > (1 - \varepsilon(m)) na) \right)}{h(na)} \leq \max_{k=1,\dots,m} \limsup_{n\to\infty} \frac{\log P(p_{\mathbf{v}_k}(\mathbf{S}_n) > (1 - \varepsilon(m)) na)}{h(na)}.$$
Applying Lemma 4 and Lemma 2, we get
$$\limsup_{n\to\infty} \frac{\log(P(\|\mathbf{S}_n\|_2 > na))}{h(na)} \leq \limsup_{n\to\infty} \left( -\frac{h((1 - \varepsilon(m)) na)}{h(na)} \right).$$
Letting $m \to \infty$, it holds that $\varepsilon(m) \to 0$ and the term above converges to −1. □
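In the planar case, the covering constant can be made explicit (an illustrative computation added here): for the uniformly spaced directions $\mathbf{v}_k$ above, any $\mathbf{x} \in \mathbb{R}^2$ forms an angle of at most $\pi/m$ with its nearest $\mathbf{v}_k$, so $p_{\mathbf{v}_k}(\mathbf{x}) = \langle \mathbf{v}_k, \mathbf{x} \rangle \geq \|\mathbf{x}\|_2 \cos(\pi/m)$, and one may take $\varepsilon(m) = 1 - \cos(\pi/m) \to 0$ as $m \to \infty$.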
Proof of Theorem 2.
The proof of the large deviations principle is based on the fact that
$$\lim_{n\to\infty} \frac{\log(P(\mathbf{S}_n \in V_{na,S}))}{h(na)} = -1, \qquad (4)$$
since open sets of the form $V_{a,S}$ defined in (1), where $a > 0$ and S is an open subset of $S^{d-1}$, generate $\mathbb{R}^d$.
The limit superior of (4) follows directly from Theorem 1 due to the inequality
$$P(\mathbf{S}_n \in V_{na,S}) \leq P(\|\mathbf{S}_n\|_2 > na).$$
Rewriting
$$P(\mathbf{S}_n \in V_{na,S}) = P(\|\mathbf{S}_n\|_2 > na)\, P\left( \frac{\mathbf{S}_n}{\|\mathbf{S}_n\|_2} \in S \,\Big|\, \|\mathbf{S}_n\|_2 > na \right)$$
yields the limit inferior of (4), since the last probability is positive.
The weak law of large numbers implies $\lim_{n\to\infty} P(\mathbf{S}_n/n \in B(\mathbf{0}, \varepsilon)) = 1$ for all $\varepsilon > 0$ and thus, if $\mathbf{0} \in B^{\circ} \subset \mathbb{R}^d$,
$$\lim_{n\to\infty} \frac{\log(P(\mathbf{S}_n/n \in B))}{h(n)} = 0.$$
Finally, by (4), it is easy to obtain the inequalities
$$\limsup_{n\to\infty} \frac{\log(P(\mathbf{S}_n/n \in F))}{h(n)} \leq -\inf_{\mathbf{y} \in F} I(\mathbf{y}) \ \text{ for all closed sets } F \subset \mathbb{R}^d, \qquad \liminf_{n\to\infty} \frac{\log(P(\mathbf{S}_n/n \in G))}{h(n)} \geq -\inf_{\mathbf{y} \in G} I(\mathbf{y}) \ \text{ for all open sets } G \subset \mathbb{R}^d,$$
which result in the full large deviations principle. □

3. Asymptotics in the Elliptical Case

3.1. Contraction Principle

To extend the result to asymptotically elliptically distributed random vectors, we apply the contraction principle, Theorem 4.2.1 in Dembo and Zeitouni (1993), to the large deviations result in Theorem 2.
In this section, we study the asymptotics of the random walk $(\mathbf{S}_n)$ with increments $\mathbf{X} = R\boldsymbol{\Theta}$, where $\boldsymbol{\Theta}$ is distributed on some d-dimensional ellipsoid $\Omega$ centred at the origin. For asymptotically elliptical distributions, a suitable linear map in the contraction principle is the bijective function $\Lambda : \mathbb{R}^d \to \mathbb{R}^d$ defined by $\Lambda(\mathbf{x}) = A\mathbf{x}$, mapping vectors from $S^{d-1}$ to $\Omega$. Here, A is a symmetric, positive definite and thus invertible $d \times d$ matrix, such that $\Lambda(S^{d-1}) = \Omega$. Since Λ is linear,
$$\Lambda(\mathbf{S}_n) = \sum_{i=1}^n \Lambda(\mathbf{X}_i)$$
holds.
Theorem 3.
Let $\mathbf{X} = R\boldsymbol{\Theta}$, where R and $\boldsymbol{\Theta}$ fulfil Assumptions (A1) and (A2′) and Limit (3) exists. Let A be a symmetric, positive definite $d \times d$ matrix and define $\Lambda : \mathbb{R}^d \to \mathbb{R}^d$ by $\Lambda(\mathbf{x}) = A\mathbf{x}$ such that Λ maps $S^{d-1}$ to Ω. Then, the process $\{\mathbf{S}_n/n\}$ satisfies the large deviations principle with scale h, so for all Borel sets $B \subset \mathbb{R}^d$,
$$-\inf_{\mathbf{y} \in \Lambda^{-1}(B^{\circ})} I(\mathbf{y}) \leq \liminf_{n\to\infty} \frac{\log(P(\mathbf{S}_n/n \in B))}{h(n)} \leq \limsup_{n\to\infty} \frac{\log(P(\mathbf{S}_n/n \in B))}{h(n)} \leq -\inf_{\mathbf{y} \in \Lambda^{-1}(\overline{B})} I(\mathbf{y}),$$
where
$$I(\mathbf{x}) = \begin{cases} \lim_{n\to\infty} \dfrac{h(n\|\mathbf{x}\|_2)}{h(n)}, & \text{if } \mathbf{x} \neq \mathbf{0}, \\[2pt] 0, & \text{if } \mathbf{x} = \mathbf{0}, \end{cases}$$
and $\Lambda^{-1} : \mathbb{R}^d \to \mathbb{R}^d$ with $\Lambda^{-1}(\mathbf{x}) = A^{-1}\mathbf{x}$.
Example 3.
Similarly to Example 2, if $\mathbf{X}$ fulfils the assumptions of Theorem 3, where R follows a Weibull distribution with parameter $\beta \in (0,1)$,
$$I(\mathbf{x}) = \|\mathbf{x}\|_2^{\beta}$$
is a good rate function. If R has a lognormal type distribution, the rate function is a constant everywhere except at the origin and hence not good.
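A small numerical sketch of the elliptical rate evaluation (the matrix A, β, and the grid check below are illustrative assumptions): the decay rate of $P(\|\mathbf{S}_n/n\|_2 > a)$ is governed by $\inf_{\|\mathbf{y}\|_2 = a} \|A^{-1}\mathbf{y}\|_2^{\beta}$, which for a symmetric, positive definite A equals $(a/\lambda_{\max}(A))^{\beta}$, i.e., exceedances are asymptotically cheapest along the longest axis of the ellipse.

```python
import numpy as np

beta = 0.5
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])                        # symmetric, positive definite
A_inv = np.linalg.inv(A)

a = 1.0
angles = np.linspace(0.0, 2 * np.pi, 10_000)
sphere = a * np.stack([np.cos(angles), np.sin(angles)], axis=1)

num_inf = np.min(np.linalg.norm(sphere @ A_inv, axis=1))
print(num_inf ** beta, (a / np.max(np.linalg.eigvalsh(A))) ** beta)  # should agree
```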

3.2. Proof of Theorem 3

The proof of the main result in this section relies on the linearity of the function Λ and the contraction principle for large deviation principles.
Proof of Theorem 3.
The linear transformation
$$\Lambda(\mathbf{x}) = A\mathbf{x},$$
where A is a symmetric, positive definite $d \times d$ matrix, is a bijective function $\Lambda : \mathbb{R}^d \to \mathbb{R}^d$. Its inverse function $\Lambda^{-1} : \mathbb{R}^d \to \mathbb{R}^d$ exists due to the invertibility of the matrix A and is $\Lambda^{-1}(\mathbf{x}) = A^{-1}\mathbf{x}$. The linear transformation Λ maps $S^{d-1}$ to Ω. By the linearity of Λ, it holds that $\Lambda(\mathbf{S}_n) = \sum_{i=1}^n \Lambda(\mathbf{X}_i)$, so the mapping of the random walk is equal to the sum of the mappings of the increments.
The claim then follows from the contraction principle, see, for instance, Theorem 4.2.1 in Dembo and Zeitouni (1993). Applying the contraction principle with the continuous function Λ, we get, for the resulting rate function J of the transformed walk,
$$\inf_{\mathbf{y} \in B} J(\mathbf{y}) = \inf_{\mathbf{y} \in B} \inf_{\mathbf{x} \in \mathbb{R}^d} \{ I(\mathbf{x}) : \Lambda(\mathbf{x}) = \mathbf{y} \} = \inf_{\mathbf{x} \in \mathbb{R}^d} \{ I(\mathbf{x}) : \mathbf{x} \in \Lambda^{-1}(B) \} = \inf_{\mathbf{x} \in \Lambda^{-1}(B)} I(\mathbf{x}),$$
which completes the proof. □

4. Applications to Reinsurance

4.1. Introduction and Setting

An insurance company with d lines of business might optimise the asymptotic behaviour of its ruin probability by sharing the risks of some lines of business using quota share reinsurance contracts. Quota share reinsurance is a form of proportional reinsurance Embrechts et al. (1997), where the insurer (the ceding company) pays only a fixed ratio of each claim, while the reinsurance company pays the rest. In general contracts, the insurance company shares both the losses and the profits with the reinsurer.
A diagonal $d \times d$ matrix Q can be used to represent a quota share reinsurance strategy for an insurance company with d lines of business. The element $q_{k,k}$ then refers to the quota share ratio of the kth line of business, i.e., the ceding company pays $q_{k,k} X^k$ of the kth line of business, and the reinsurance company pays the remaining part $(1 - q_{k,k}) X^k$. It is natural to assume that $q_{k,k} \in (0,1]$ for all indices $k = 1, \dots, d$, since typically the insurance company keeps some share of every line of business. Under this assumption, the matrix Q is invertible.
For the ceding company, the aim could be to find a quota share reinsurance strategy defined by a matrix Q such that
$$\lim_{n\to\infty} \frac{\log(P(\|\mathbf{S}_n/n\|_2 > a))}{h(n)} > \lim_{n\to\infty} \frac{\log(P(\|Q\mathbf{S}_n/n\|_2 > a))}{h(n)}, \qquad (5)$$
because the retained risk process $Q\mathbf{S}_n$ then has an asymptotically smaller ruin probability than the original process $\mathbf{S}_n$.
We look at the quota share reinsurance from two different perspectives. In Section 4.2, we model the optimal quota share reinsurance strategy from the point of view of the ceding company, which gets reinsurance only for the lines of business with highest risks. Section 4.3 compares different quota share risk sharing strategies from the viewpoint of a reinsurance company that wants to offer quota share reinsurance while minimising their own risks.
We model the risk process of an insurance company with d lines of business as a d-dimensional random walk $(\mathbf{S}_n)$. Each component of the increment $\mathbf{X}$ represents the difference between the claim size and the associated net premium in the corresponding line of business. We assume that $\mathbf{X}$ is asymptotically elliptically distributed: let $\mathbf{X} = R\boldsymbol{\Theta}$, where R and $\boldsymbol{\Theta}$ fulfil Assumptions (A1) and (A2′), and Ω is an ellipse or ellipsoid defined by the bijection $\Lambda : \mathbb{R}^d \to \mathbb{R}^d$ with $\Lambda(\mathbf{x}) = A\mathbf{x}$, where A is a symmetric, positive definite $d \times d$ matrix.
If there are several reinsurance strategies that yield the same right-hand side of Inequality (5), the insurance company chooses the one with the smallest premium. In this setting, we define the premium of the reinsurance strategy Q as
$$p(Q) = \mathbf{1}^T (I - Q)\, \mathbf{p} = \sum_{j=1}^d (1 - q_{j,j})\, p^j, \qquad (6)$$
where $\mathbf{p}$ is a positive premium vector, $\mathbf{1}$ denotes the d-dimensional vector of ones and I the $d \times d$ identity matrix. The positive premium vector $\mathbf{p}$ contains the premium rates of the reinsurances for each line of business when the entire component is reinsured. For example, $p^j$ could be connected to the expected loss of the reinsurance company together with a safety loading. In (6), the constants $p^j$ are multiplied by the factors $(1 - q_{j,j})$, where the q-coefficients can be selected by the ceding company. The premium vector is considered as given, and the ceding company cannot affect the values of this vector. Therefore, (6) is the premium for the entire reinsurance strategy. If the insurance company does not take reinsurance for the jth line of business, it selects $q_{j,j} = 1$. Hence, the premium for the reinsurance of the jth line of business is zero in this case.
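A tiny numerical instance of the premium formula (6) (hypothetical premium vector and strategy, for illustration only):

```python
import numpy as np

p = np.array([1.0, 2.0, 3.0])               # premium rates per line of business (assumed)
Q = np.diag([1.0, 0.5, 1.0 / 3.0])          # retained shares q_{j,j} (assumed)
premium = np.ones(3) @ (np.eye(3) - Q) @ p  # p(Q) = 1^T (I - Q) p
print(premium)                              # 0*1 + 0.5*2 + (2/3)*3 = 3.0
```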

4.2. Quota Share Reinsurance Strategy of the Ceding Company

The aim of the insurance company is to identify the riskiest lines of business and to choose a quota share reinsurance strategy that reduces these risks. The ceding company typically only wants to reinsure the highest risks. This is why it is natural to assume $q_{k,k} = 1$ for at least one index k, which represents the line of business with the lowest asymptotic risk.
Since quota share reinsurance is defined component-by-component, one can find an optimal reinsurance strategy for distributions on ellipsoids orientated along the axes. The general case can be mathematically reduced into this setting by rotating the original data. However, if the data is transformed using a rotation, suitable QS contracts might not be immediately available on the market because the new axes would not correspond to the lines of business.
Remark 6.
If Ω is a d-dimensional ellipsoid orientated along the axes, there exists a positive, diagonal $d \times d$ matrix A such that the mapping $\Lambda : \mathbb{R}^d \to \mathbb{R}^d$, $\Lambda(\mathbf{x}) = A\mathbf{x}$, of the unit sphere $S^{d-1}$ generates Ω. Furthermore, if A is a positive, diagonal $d \times d$ matrix, the ellipsoid generated by $\Lambda(S^{d-1})$ is orientated along the axes.
Theorem 4.
Let $\mathbf{X} = R\boldsymbol{\Theta}$, where $\boldsymbol{\Theta}$ fulfils Assumption (A2′) and $\Omega = \Lambda(S^{d-1})$ is defined by the linear transformation $\Lambda(\mathbf{x}) = A\mathbf{x}$, where A is a diagonal $d \times d$ matrix with $a_{k,k} > 0$ for all $k = 1, \dots, d$. Furthermore, we assume that R follows a Weibull distribution with parameter $\beta \in (0,1)$. Then, the quota share reinsurance strategy defined by the matrix $Q = \min_{j=1,\dots,d} a_{j,j} \cdot A^{-1}$ yields the inequality
$$\lim_{n\to\infty} \frac{\log(P(\|\mathbf{S}_n/n\|_2 > a))}{h(n)} \geq \lim_{n\to\infty} \frac{\log(P(\|Q\mathbf{S}_n/n\|_2 > a))}{h(n)} \qquad (7)$$
for any $a > 0$, and Q minimises the right-hand side of Inequality (7) over all strategies. The minimum is unique under the additional condition that $p(Q)$ is also minimised.
Proof. 
The assumptions on R imply the large deviations principle
$$\lim_{n\to\infty} \frac{\log(P(\|Q\mathbf{S}_n/n\|_2 > a))}{h(n)} = -\inf_{\mathbf{y} \in B(\mathbf{0},a)^c} \|A^{-1} Q^{-1} \mathbf{y}\|_2^{\beta}.$$
Therefore, it is sufficient to show that Q is the matrix that attains the maximum in
$$\max_{\tilde{Q} \in \mathcal{Q}}\ \inf_{\mathbf{x} \in B(\mathbf{0},a)^c} \|A^{-1} \tilde{Q}^{-1} \mathbf{x}\|_2, \qquad (8)$$
where $\mathcal{Q}$ is the set of $d \times d$ diagonal matrices with $\tilde{q}_{j,j} \in (0,1]$ for all $j = 1, \dots, d$ and $\tilde{q}_{k,k} = 1$ for at least one index $k \in \{1, \dots, d\}$. Without loss of generality, we assume $\min_{j=1,\dots,d} a_{j,j} = a_{1,1}$. By the property $\|c\mathbf{x}\|_2 = |c| \|\mathbf{x}\|_2$, the infimum is always attained at the boundary, so
$$\inf_{\mathbf{x} \in B(\mathbf{0},a)^c} \|A^{-1} \tilde{Q}^{-1} \mathbf{x}\|_2 = \inf_{\|\mathbf{x}\|_2 = a} \|A^{-1} \tilde{Q}^{-1} \mathbf{x}\|_2.$$
The fact that $\inf_{\|\mathbf{x}\|_2 = a} \|A^{-1} Q^{-1} \mathbf{x}\|_2 = a/a_{1,1}$ follows directly from the definition of Q, since
$$A^{-1} Q^{-1} \mathbf{x} = A^{-1} \left( \min_{j=1,\dots,d} a_{j,j} \cdot A^{-1} \right)^{-1} \mathbf{x} = \frac{1}{a_{1,1}} A^{-1} A \mathbf{x} = \frac{1}{a_{1,1}} \mathbf{x}.$$
It remains to show that a matrix $\tilde{Q}$ that maximises (8) is of the form $\tilde{q}_{k,k} \leq a_{1,1}/a_{k,k}$ for all $k = 1, \dots, d$. In order for $\tilde{Q}$ to maximise (8), it has to hold that
$$\|A^{-1} \tilde{Q}^{-1} \mathbf{x}\|_2 = \left( \sum_{i=1}^d \frac{(x^i)^2}{a_{i,i}^2\, \tilde{q}_{i,i}^2} \right)^{1/2} \geq \frac{a}{a_{1,1}}$$
for all $\mathbf{x}$ with $\|\mathbf{x}\|_2 = a$. Checking the inequality for a times the unit vectors, we get the condition $\tilde{q}_{k,k} \leq a_{1,1}/a_{k,k}$ for all $k = 1, \dots, d$. Due to the additional condition that $\tilde{q}_{k,k} = 1$ for at least one index, we need to set $\tilde{q}_{1,1} = 1$. Now, taking $\tilde{q}_{1,1} = 1$ and $\tilde{q}_{k,k} < a_{1,1}/a_{k,k}$ for all $k = 2, \dots, d$ yields
$$\inf_{\|\mathbf{x}\|_2 = a} \|A^{-1} \tilde{Q}^{-1} \mathbf{x}\|_2 = \inf_{\|\mathbf{x}\|_2 = a} \left( \sum_{i=1}^d \frac{(x^i)^2}{a_{i,i}^2\, \tilde{q}_{i,i}^2} \right)^{1/2} = \left( \frac{a^2}{a_{1,1}^2\, \tilde{q}_{1,1}^2} \right)^{1/2} = \frac{a}{a_{1,1}},$$
so
$$\inf_{\|\mathbf{x}\|_2 = a} \|A^{-1} \tilde{Q}^{-1} \mathbf{x}\|_2 = \inf_{\|\mathbf{x}\|_2 = a} \|A^{-1} Q^{-1} \mathbf{x}\|_2.$$
Comparing the premiums of the reinsurance strategies Q and $\tilde{Q}$, it is easy to obtain $p(Q) < p(\tilde{Q})$ due to the positivity of the vector $\mathbf{p}$. Thus, Q is the quota share reinsurance strategy that maximises (8) with the lowest premium. □
Remark 7.
Theorem 4 can be extended to distributions for which Assumption (A1) holds and, in addition, $\lim_{n\to\infty} h(n\|\mathbf{x}\|_2)/h(n) = \|\mathbf{x}\|_2^{\alpha}$ for some $\alpha > 0$. If $\lim_{n\to\infty} h(n\|\mathbf{x}\|_2)/h(n)$ is a constant, a quota share reinsurance does not improve the asymptotic behaviour of the logarithmic ruin probability, and hence Theorem 4 does not hold for lognormal type distributions.
The optimal matrix Q transforms the set Ω to a d-dimensional ball. Hence, the probability that the reinsured risk process exceeds a threshold in the selected norm has the same asymptotics in all directions. Taking reinsurance defined by the matrix Q reduces the risks of the riskier lines of business to the same level as that of the least risky line of business.
Remark 8.
If $a_{k,k} = a_{1,1}$ for all $k \in \{1, \dots, d\}$, Ω describes a d-dimensional ball, and quota share reinsurance does not improve the asymptotic behaviour of the logarithmic ruin probability.
If the ellipsoid is not orientated along the axes, it is not possible to find an optimal quota share reinsurance that is defined component-by-component and results in equal asymptotic behaviour in all directions. Depending on the orientation of the ellipsoid, it might still be possible to find a quota share reinsurance strategy defined component-by-component that reduces the risks in the most risky directions and thus reduces the ruin probability.
Example 4.
Consider an insurance company with 3 lines of business that models its risks, the difference between the claim size and the associated net premium, by a random vector $\mathbf{X}$. Assume $\mathbf{X}$ fulfils the assumptions in Theorem 4 with
$$A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{pmatrix}.$$
The optimal QS strategy for the ceding company is then given by the matrix
$$Q = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \tfrac{1}{2} & 0 \\ 0 & 0 & \tfrac{1}{3} \end{pmatrix}.$$
The fact that $\min_{j=1,2,3} a_{j,j} = a_{1,1}$ indicates that the first line of business is the least risky one, and thus the insurance company decides to take reinsurance only for the second and third lines of business. To minimise its own risks according to (7), the insurance company needs to cede at least $1/2$ of the second line of business and at least $2/3$ of the third line of business, i.e., to retain at most the shares $q_{2,2} = 1/2$ and $q_{3,3} = 1/3$. Taking more than the needed share of reinsurance does not reduce the asymptotic risk further, but only increases the premium of the reinsurance strategy.
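The strategy of Example 4 can be reproduced in a few lines (a sketch; the Weibull shape β below is an illustrative assumption used only to evaluate the decay rates):

```python
import numpy as np

beta = 0.5
A = np.diag([1.0, 2.0, 3.0])
Q = np.min(np.diag(A)) * np.linalg.inv(A)   # optimal ceding strategy: diag(1, 1/2, 1/3)
print(np.diag(Q))

# LDP decay rate of P(||M S_n / n||_2 > a) for a diagonal strategy M:
# -inf_{||y||_2 = a} ||(M A)^{-1} y||_2**beta = -(a / sigma_max(M A))**beta
def decay_rate(M, a=1.0):
    return -(a / np.max(np.linalg.svd(M @ A, compute_uv=False))) ** beta

print(decay_rate(np.eye(3)))   # original process:  -(a/3)**beta
print(decay_rate(Q))           # retained process:  -a**beta, a faster decay of the ruin probability
```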

4.3. Quota Share Risk Sharing of the Reinsurance Company

A reinsurance company is interested in optimising the risk sharing portfolio such that
$$\lim_{n\to\infty} \frac{\log(P(\|\mathbf{S}_n/n\|_2 > a))}{h(n)} > \lim_{n\to\infty} \frac{\log(P(\|(I - Q)\mathbf{S}_n/n\|_2 > a))}{h(n)},$$
where Q is a positive, diagonal $d \times d$ matrix with $0 < q_{j,j} < 1$ for all $j = 1, \dots, d$. The condition $q_{j,j} < 1$ for all $j = 1, \dots, d$ is due to the natural assumption that the reinsurer offers reinsurance for all lines of business. Compared with the quota share reinsurance strategy of the ceding company, the same situation would lead to a square matrix of smaller dimension, since the insurance company wants to get an offer for reinsurance only for the most risky lines of business and covers the risks of the line of business with the lowest risk itself. The reinsurance company collects the premium
$$p(Q) = \mathbf{1}^T (I - Q)\, \mathbf{p} = \sum_{j=1}^d (1 - q_{j,j})\, p^j$$
and covers the amount $(1 - q_{j,j}) X^j$ of the jth line of business. The following theorem states the optimising quota share strategy.
Theorem 5.
Let $\mathbf{X} = R\boldsymbol{\Theta}$, where $\boldsymbol{\Theta}$ fulfils Assumption (A2′) and $\Omega = \Lambda(S^{d-1})$ is defined by the linear transformation $\Lambda(\mathbf{x}) = A\mathbf{x}$, where A is a diagonal $d \times d$ matrix with $a_{k,k} > 0$ for all $k = 1, \dots, d$. Additionally, we assume that R has a Weibull distribution with parameter $\beta \in (0,1)$. Then, the quota share reinsurance defined by the matrix $Q = I - A^{-1}/c$ for some $c > \max_{j=1,\dots,d} 1/a_{j,j}$ yields the inequality
$$\lim_{n\to\infty} \frac{\log(P(\|\mathbf{S}_n/n\|_2 > a))}{h(n)} > \lim_{n\to\infty} \frac{\log(P(\|(I - Q)\mathbf{S}_n/n\|_2 > a))}{h(n)}. \qquad (9)$$
Proof. 
The large deviations principle implies
$$\lim_{n\to\infty} \frac{\log(P(\|\mathbf{S}_n/n\|_2 > a))}{h(n)} = -\inf_{\mathbf{y} \in B(\mathbf{0},a)^c} \|A^{-1}\mathbf{y}\|_2^{\beta} = -\inf_{\|\mathbf{y}\|_2 = a} \|A^{-1}\mathbf{y}\|_2^{\beta}$$
and, equivalently,
$$\lim_{n\to\infty} \frac{\log(P(\|(I - Q)\mathbf{S}_n/n\|_2 > a))}{h(n)} = -\inf_{\|\mathbf{y}\|_2 = a} \|A^{-1}(I - Q)^{-1}\mathbf{y}\|_2^{\beta}.$$
For the jth unit vector $\mathbf{e}_j$, it holds that $\|A^{-1}(a\mathbf{e}_j)\|_2 = a/a_{j,j}$, which results in
$$\inf_{\|\mathbf{y}\|_2 = a} \|A^{-1}\mathbf{y}\|_2 \leq \min\left\{ \frac{a}{a_{1,1}}, \dots, \frac{a}{a_{d,d}} \right\} \leq \max\left\{ \frac{a}{a_{1,1}}, \dots, \frac{a}{a_{d,d}} \right\} < ca = \inf_{\|\mathbf{y}\|_2 = a} \|A^{-1}(I - Q)^{-1}\mathbf{y}\|_2.$$
This proves Inequality (9). □
As in Theorem 4, A is a diagonal matrix, which implies that the ellipsoid that defines the distribution of $\mathbf{X}$ is orientated along the axes. The constant c defines the risk share of the reinsurance company and hence the amount that it reinsures. The quota share ratio of the reinsurer for the jth line of business is $1 - q_{j,j} = 1/(a_{j,j} c)$, so increasing c reduces the risks for the reinsurer. As in Section 4.2, the optimal risk sharing strategy of the reinsurance company also assigns a bigger ratio to the lines of business with smaller risks.
The ceding company as well as the reinsurance company optimise their risks by taking or offering reinsurance that maps their share of the initial ellipsoid to a ball. The optimising strategy of the ceding company yields
$$AQ = \min_{i=1,\dots,d} a_{i,i} \cdot I,$$
whereas the optimising strategy of the reinsurance company results in
$$A(I - \tilde{Q}) = \frac{1}{c}\, I,$$
where $c > \max_{i=1,\dots,d} 1/a_{i,i}$, or equivalently $1/c < \min_{i=1,\dots,d} a_{i,i}$. The radius of the ball defined by AQ is $1/\min_{i=1,\dots,d} a_{i,i}$ and therefore smaller than the radius of the ball defined by $A(I - \tilde{Q})$, which is c. Increasing c increases the radius of the ball generated by $A(I - \tilde{Q})$. In general, a bigger radius implies smaller risks for the insurance or reinsurance company. Increasing c reduces the share of the reinsurance company, and therefore also the risks of the reinsurance company. The minimum $\min_{i=1,\dots,d} a_{i,i}$ indicates the line of business with the lowest risk.
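For completeness, the reinsurer's side of Theorem 5 with the same matrix A as in Example 4 (c = 2 is an illustrative choice satisfying $c > \max_j 1/a_{j,j} = 1$):

```python
import numpy as np

A = np.diag([1.0, 2.0, 3.0])
c = 2.0
Q = np.eye(3) - np.linalg.inv(A) / c     # strategy offered by the reinsurer

print(np.diag(np.eye(3) - Q))            # reinsurer's shares 1/(c*a_jj): 0.5, 0.25, 1/6
print(A @ (np.eye(3) - Q))               # A(I - Q) = (1/c) I: the reinsured share of the
                                         # ellipsoid is mapped to a ball of radius 1/c
```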

5. Conclusions

The assumptions of the main result require that the support of the random vectors is, asymptotically, the whole space. In particular, the components of increments are asymptotically dependent. However, the studied model admits more flexibility than many typical models in the sense that it does not require uniformly distributed random vectors on ellipses. This makes it possible to derive asymptotics for a wide class of zero-mean random walks. The general case with the non-zero expectation can be studied by centring the increments.
In economics and finance, quantification of tail risk plays a central role, because rare events can have a large impact on the operation of a financial entity. The derived large deviations principle quantifies, asymptotically, the probabilities of rare events of random walks. This makes it possible to derive further results in applications such as the one presented in the QS optimisation method. For financial data that fulfils the assumptions, the large deviations principle can be used to establish optimal bounds on the probabilities of rare events where the decay rate to zero of probabilities is not exponential. The derived results give theoretical estimates for probabilities that might be difficult to analyse numerically.
Further research could investigate the convergence rate of the tail probabilities in order to give a suitable threshold after which the large deviations principle gives a sufficiently accurate estimate. Direct approximation of probabilities by Monte Carlo simulation suggests that, to calculate the ratio in Theorem 1, at least one of n or a must be quite large. With a finite sample, the challenge is that the interesting events are tail events, of which there are only few observations. It seems that a smaller n can be used for the heavier distributions in the studied class than for the ones with lighter tails.
In addition, it would be natural to ask how the set Ω can be deduced from observed data and if the large deviations principle holds even if Ω is, say, not the boundary of a convex set.

Author Contributions

Both authors had equal contribution in the writing of the paper. Investigation, M.H. and J.L.; Writing—original draft, M.H.; Writing—review and editing, J.L. All authors have read and agreed to the published version of the manuscript.

Funding

The research of Miriam Hägele was funded by Doctoral Programme in Mathematics and Statistics (DOMAST) and by a grant awarded by Vakuutustiedon kehittämissäätiö.

Acknowledgments

The authors acknowledge the use of the services and facilities of the University of Helsinki during the writing of the paper. Open access funding provided by University of Helsinki. Suggestions made by the reviewers and the editor helped to improve the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Asmussen, Søren, and Jaakko Lehtomaa. 2017. Distinguishing log-concavity from heavy tails. Risks 5: 10.
2. Bazhba, Mihail, Jose Blanchet, Chang-Han Rhee, and Bert Zwart. 2020. Sample path large deviations for Lévy processes and random walks with Weibull increments. Annals of Applied Probability 30: 2695–739.
3. Bingham, Nicholas, Charles Goldie, and Jozef Teugels. 1989. Regular Variation. Cambridge: Cambridge University Press.
4. Cline, Daren, and Sidney Resnick. 1992. Multivariate subexponential distributions. Stochastic Processes and Their Applications 42: 49–72.
5. Dembo, Amir, and Ofer Zeitouni. 1993. Large Deviations Techniques and Applications. Boston: Jones and Bartlett Publishers.
6. Denisov, Denis, Antonius Dieker, and Vsevolod Shneer. 2008. Large deviations for random walks under subexponentiality: The big-jump domain. The Annals of Probability 36: 1946–91.
7. Embrechts, Paul, Claudia Klüppelberg, and Thomas Mikosch. 1997. Modelling Extremal Events. Volume 33 of Applications of Mathematics (New York). Berlin: Springer.
8. Foss, Sergey, Dmitry Korshunov, and Stan Zachary. 2011. An Introduction to Heavy-Tailed and Subexponential Distributions. Springer Series in Operations Research and Financial Engineering. New York: Springer.
9. Gantert, Nina. 1998. Functional Erdős-Rényi laws for semiexponential random variables. Annals of Probability 26: 1356–69.
10. Gantert, Nina, Kavita Ramanan, and Franz Rembart. 2014. Large deviations for weighted sums of stretched exponential random variables. Electronic Communications in Probability 19: 1–14.
11. Hägele, Miriam. 2020. Precise asymptotics of ruin probabilities for a class of multivariate heavy-tailed distributions. Statistics & Probability Letters 166: 108871.
12. Hardy, Mary. 2001. A regime-switching model of long-term stock returns. North American Actuarial Journal 5: 41–53.
13. Hult, Henrik, and Filip Lindskog. 2002. Multivariate extremes, aggregation and dependence in elliptical distributions. Advances in Applied Probability 34: 587–608.
14. Hult, Henrik, and Filip Lindskog. 2006. On regular variation for infinitely divisible random vectors and additive processes. Advances in Applied Probability 38: 134–48.
15. Hult, Henrik, Filip Lindskog, Thomas Mikosch, and Gennady Samorodnitsky. 2005. Functional large deviations for multivariate regularly varying random walks. The Annals of Applied Probability 15: 2651–80.
16. Jensen, Mark, and John Maheu. 2018. Risk, return and volatility feedback: A Bayesian nonparametric analysis. Journal of Risk and Financial Management 11: 52.
17. Klüppelberg, Claudia, Gabriel Kuhn, and Liang Peng. 2007. Estimating the tail dependence function of an elliptical distribution. Bernoulli 13: 229–51.
18. Lehtomaa, Jaakko. 2017. Large deviations of means of heavy-tailed random variables with finite moments of all orders. Journal of Applied Probability 54: 66–81.
19. Lehtomaa, Jaakko, and Sidney Resnick. 2020. Asymptotic independence and support detection techniques for heavy-tailed multivariate data. Insurance: Mathematics and Economics 93: 262–77.
20. Mikosch, Thomas, and Igor Rodionov. 2021. Precise large deviations for dependent subexponential variables. Bernoulli 27: 1319–47.
21. Mikosch, Thomas, and Olivier Wintenberger. 2016. A large deviations approach to limit theory for heavy-tailed time series. Probability Theory and Related Fields 166: 233–69.
22. Nyrhinen, Harri. 2009. On large deviations of multivariate heavy-tailed random walks. Journal of Theoretical Probability 22: 1–17.
23. Omey, Edward. 2006. Subexponential distribution functions in R^d. Journal of Mathematical Sciences 138: 5434–49.
24. Samorodnitsky, Gennady, and Julian Sun. 2016. Multivariate subexponential distributions and their applications. Extremes 19: 171–96.
25. Tegnér, Martin, and Rolf Poulsen. 2018. Volatility is log-normal—But not for the reason you think. Risks 6: 46.
26. Teugels, Jozef. 1975. The class of subexponential distributions. The Annals of Probability 3: 1000–11.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
