Article

A New Goodness of Fit Test for Multivariate Normality and Comparative Simulation Study

by Jurgita Arnastauskaitė 1,2,*, Tomas Ruzgas 2 and Mindaugas Bražėnas 3

1 Department of Applied Mathematics, Kaunas University of Technology, 51368 Kaunas, Lithuania
2 Department of Computer Sciences, Kaunas University of Technology, 51368 Kaunas, Lithuania
3 Department of Mathematical Modelling, Kaunas University of Technology, 51368 Kaunas, Lithuania
* Author to whom correspondence should be addressed.
Submission received: 12 October 2021 / Revised: 18 November 2021 / Accepted: 19 November 2021 / Published: 23 November 2021
(This article belongs to the Special Issue Multivariate Statistics: Theory and Its Applications)

Abstract: The testing of multivariate normality remains a significant scientific problem. Although it is being extensively researched, it is still unclear how to choose the best test for a given sample size, dimension and covariance structure. In order to contribute to this field, a new goodness of fit test for multivariate normality is introduced. This test is based on the mean absolute deviation of the empirical distribution density from the theoretical distribution density. The new test was compared with the most popular tests in terms of empirical power. The power of the tests was estimated for the selected alternative distributions by the Monte Carlo method for the chosen sample sizes and dimensions. Based on the simulation results, it can be concluded that the new test is one of the most powerful tests for checking multivariate normality, especially for smaller samples. In addition, the assumption of normality of two real data sets was checked.

1. Introduction

Vast amounts of multivariate data are collected by monitoring natural and social processes. IBM estimates that, collectively, we generate 175 zettabytes of data every day. Moreover, data are being collected at a rapidly increasing rate: an estimated 90% of all existing data has been generated in the last two years. The need to extract useful information from continuously generated data sets drives the demand for data specialists and for the development of robust analysis methods.
Data analytics is inconceivable without testing goodness of fit hypotheses. The primary task of a data analyst is to become familiar with the data sets received. This usually starts with identifying the distribution of the data, after which the assumption that the data follow a normal distribution is typically tested. Since 1990, many tests have been developed for this assumption, mostly for univariate data.
It is important to use powerful goodness of fit tests to check the assumption of normality, because the alternative distribution is generally unknown. Based on the outcome of normality verification, one can choose suitable analysis methods (parametric or non-parametric) for further investigation. From the end of the 20th century to the present day, multivariate goodness of fit tests have been developed by a number of authors [1,2,3,4,5,6,7,8,9,10,11,12,13,14]. Some of the most popular and commonly used multivariate tests are the Chi-Square [8], Cramer-von Mises [2], Anderson-Darling [2], and Royston [3] tests.
Checking the assumption of normality for multivariate data is more complex than in the univariate case. Additional data processing (e.g., standardization) is required, and the development of multivariate tests is more involved because the properties of invariance and consistency must be verified, whereas for univariate tests the invariance property is always satisfied. These properties are presented in Section 2 and discussed in more detail in [2,12,15].
The study aims to perform a power analysis of multivariate goodness of fit tests for the assumption of normality, to compare the performance of the proposed test with other well-known tests, and to apply the multivariate tests to real data. The power estimation procedure is discussed in [16].
Scientific novelty. The power analysis of multivariate goodness of fit hypothesis testing for different data sets was performed. The goodness of fit tests were selected as representatives of popular techniques, which had been analyzed by other researchers experimentally. In addition, we proposed a new multivariate test based on the mean absolute deviation of the empirical distribution density from the theoretical distribution density. In this test, the density estimate is derived by using an inversion formula which is presented in Section 3.
The rest of the paper is organized as follows. Section 2 defines the tests for the comparative multivariate test power study. Section 3 presents details of our proposed test. Section 4 presents the data distributions used for experimental test power evaluation. Section 5 presents and discusses the results of simulation modeling. Section 6 discusses the application of multivariate goodness of fit hypothesis tests to real data. Finally, the conclusions and recommendations are given in Section 7.

2. Multivariate Tests for Normality

We denote the $p$-variate normal distribution by $N_p(\mu, \Sigma)$, where $\mu = (\mu_1, \ldots, \mu_p)^T$ is the mean vector and $\Sigma$ is the nonsingular covariance matrix. $\mathcal{N}_p$ indicates the set of all possible $p$-variate normal distributions. Let $X_1, X_2, \ldots, X_n$, where $X_k = (X_{k1}, X_{k2}, \ldots, X_{kp})^T$ and $k = 1, 2, \ldots, n$, be a finite sample generated by a random $p$-variate (column) vector $X$ with distribution function $F_X$. The mean vector is $\bar{X} = \frac{1}{n} \sum_{j=1}^{n} X_j$, where $n$ is the sample size, and the sample covariance matrix is
$$S = \frac{1}{n} \sum_{j=1}^{n} (X_j - \bar{X})(X_j - \bar{X})^T.$$
To assess the multivariate normality of $X$ (based on the observed sample $X_1, X_2, \ldots, X_n$), many statistical tests have been developed. Before reviewing the specific tests selected for this study, let us consider two essential properties. The set $\mathcal{N}_p$ is closed with respect to affine transformations, i.e.,
$$F_{AX + b} \in \mathcal{N}_p \iff F_X \in \mathcal{N}_p,$$
for any translation vector $b \in \mathbb{R}^p$ and any nonsingular matrix $A \in \mathbb{R}^{p \times p}$. Thus, a reasonable statistic $T_n$ for checking the null hypothesis ($H_0$) of multivariate normality should have the same value for a sample and its affine transforms, that is,
$$T_n(AX_1 + b, \ldots, AX_n + b) = T_n(X_1, \ldots, X_n). \tag{1}$$
An invariant test has a statistic which satisfies condition (1). It might seem that a test based on the standardized sample
$$Y_j = S^{-1/2}(X_j - \bar{X}),$$
is invariant; however, Henze and Zirkler [2] note that this is not always the case. In practice, for a given sample $X_1, X_2, \ldots, X_n$, the alternative distribution is not known. In such a case, it is important to use a test for which the probability of correctly rejecting $H_0$ tends to one as $n \to \infty$. Such a test is said to be consistent. For a more elaborate discussion of these properties, we refer the reader to [2]. Other important notation is given in Appendix A.
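As a minimal illustration of this standardization step, the following sketch (NumPy assumed; the helper name `standardize` is ours, not from the paper) computes $Y_j = S^{-1/2}(X_j - \bar{X})$ using a symmetric inverse square root of $S$:

```python
import numpy as np

def standardize(X):
    """Compute Y_j = S^(-1/2) (X_j - Xbar) for each row of X (n x p).
    S is the sample covariance with 1/n normalization, as in the text."""
    n = X.shape[0]
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n
    # symmetric inverse square root of S via an eigendecomposition
    vals, vecs = np.linalg.eigh(S)
    S_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    return Xc @ S_inv_sqrt

rng = np.random.default_rng(0)
# correlated test data: standard normal rows pushed through a fixed matrix
X = rng.normal(size=(200, 3)) @ np.array([[2.0, 0.0, 0.0],
                                          [0.5, 1.0, 0.0],
                                          [0.0, 0.0, 3.0]])
Y = standardize(X)
```

By construction, the standardized sample has zero mean and identity covariance (with the $1/n$ normalization used above), which is what the tests below rely on in practice.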

2.1. Tests Based on Squared Radii

This section reviews the properties of several statistics based on squared radii concerning their use for assessing multivariate normality. The squared radii are defined as
$$D_{n,j} = (X_j - \bar{X})^T S^{-1} (X_j - \bar{X}), \quad j = 1, 2, \ldots, n.$$
Under normality, $D_{n,j}$ has a distribution which is $(n-1)^2/n$ times a $\mathrm{Beta}\left(\frac{p}{2}, \frac{n-p-1}{2}\right)$ distribution [9]. Under $H_0$, the distribution of $D_{n,j}$ is approximately $\chi^2_p$ for large $n$.
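The squared radii are straightforward to compute; a sketch (NumPy assumed, function name ours):

```python
import numpy as np

def squared_radii(X):
    """Squared radii D_{n,j} = (X_j - Xbar)^T S^{-1} (X_j - Xbar),
    with S the 1/n-normalized sample covariance as in the text."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n
    Sinv = np.linalg.inv(S)
    # row-wise quadratic forms
    return np.einsum('ij,jk,ik->i', Xc, Sinv, Xc)

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))
D = squared_radii(X)
```

Since $\sum_j (X_j - \bar{X})(X_j - \bar{X})^T = nS$, the radii always satisfy $\sum_j D_{n,j} = np$, which is a convenient sanity check for an implementation.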

2.1.1. Chi-Squared (CHI2)

In 1981, Moore and Stubblebine presented a multivariate Chi-Squared goodness of fit test based on order statistics [8]. The test statistic is defined as
$$M_{n,k} = \frac{k}{n} \sum_{l=1}^{k} \left( N_{n,l} - \frac{n}{k} \right)^2,$$
where $N_{n,l} = \sum_{j=1}^{n} 1\{ a_{l-1} < D_{n,j} \le a_l \}$, $l = 1, 2, \ldots, k$; $a_0 = 0$, $a_k = +\infty$. The statistic $M_{n,k}$ takes the equivalent form [8]:
$$M_{n,k} = k n \sum_{l=1}^{k} \left( \hat{G}_n(a_l) - \hat{G}_n(a_{l-1}) - \frac{1}{k} \right)^2,$$
where $\hat{G}_n(\cdot)$ is the empirical distribution function of the $D_{n,j}$, and the cell boundaries are chosen so that $G_p(a_l) - G_p(a_{l-1}) = k^{-1}$, $l = 1, 2, \ldots, k$, with $G_p(+\infty) = 1$, where $G_p(\cdot)$ is the probability distribution function of $\chi^2_p$.
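A sketch of $M_{n,k}$ with equiprobable $\chi^2_p$ cells (SciPy assumed; the function name and the default $k = 8$ are our choices, not prescribed by the paper):

```python
import numpy as np
from scipy.stats import chi2

def chi2_gof_statistic(D, p, k=8):
    """M_{n,k} computed over k equiprobable chi^2_p cells: a_0 = 0, a_k = +inf,
    and the inner breakpoints chosen so that G_p(a_l) - G_p(a_{l-1}) = 1/k."""
    n = len(D)
    a = chi2.ppf(np.arange(1, k) / k, df=p)      # a_1, ..., a_{k-1}
    edges = np.concatenate(([0.0], a, [np.inf]))
    N = np.histogram(D, bins=edges)[0]           # cell counts N_{n,l}
    return k / n * np.sum((N - n / k) ** 2)
```

If the radii fall into the cells in exactly equal numbers, the statistic is zero; any imbalance makes it strictly positive.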

2.1.2. Cramer-Von Mises (CVM)

In 1982, Koziol proposed a Cramer-von Mises-type multivariate goodness of fit test based on order statistics [2]. The test statistic is defined as
$$CM = \frac{1}{12n} + \sum_{j=1}^{n} \left( G_p(D_{(j)}) - \frac{2j - 1}{2n} \right)^2,$$
where $D_{(j)}$, $j = 1, 2, \ldots, n$, are the order statistics of the squared radii.

2.1.3. Anderson-Darling (AD)

In 1987, Paulson, Roohan and Sullo proposed an Anderson-Darling-type multivariate goodness of fit test based on order statistics [2]. The test statistic is defined as
$$AD = -n - \frac{1}{n} \sum_{j=1}^{n} (2j - 1) \left[ \log G_p(D_{(j)}) + \log\left( 1 - G_p(D_{(n+1-j)}) \right) \right].$$
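Both order-statistics-based statistics can be computed together from the sorted squared radii; a sketch (SciPy assumed, function name ours):

```python
import numpy as np
from scipy.stats import chi2

def cvm_ad_statistics(D, p):
    """CM and AD statistics from the ordered squared radii D_(j),
    with G_p the chi^2_p distribution function."""
    n = len(D)
    G = chi2.cdf(np.sort(D), df=p)          # G_p(D_(1)), ..., G_p(D_(n))
    j = np.arange(1, n + 1)
    cm = 1 / (12 * n) + np.sum((G - (2 * j - 1) / (2 * n)) ** 2)
    # G[::-1] supplies G_p(D_(n+1-j)) for j = 1, ..., n
    ad = -n - np.sum((2 * j - 1) * (np.log(G) + np.log(1 - G[::-1]))) / n
    return cm, ad
```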

2.2. Tests Based on Skewness and Kurtosis

This section reviews the properties of several measures of multivariate skewness and kurtosis regarding their use as statistics for assessing multivariate normality [2]. The (univariate) skewness and kurtosis are defined as
$$s = \frac{m_3}{\sqrt{m_2^3}}, \quad k = \frac{m_4}{m_2^2},$$
where $m_i = \frac{1}{n} \sum_{j=1}^{n} (x_j - \bar{x})^i$ and $\bar{x} = \frac{1}{n} \sum_{j=1}^{n} x_j$.

2.2.1. Doornik-Hansen (DH)

In 2008, Doornik and Hansen proposed a multivariate goodness of fit test based on the skewness and kurtosis of multivariate data transformed to ensure independence [6]. The Doornik-Hansen test statistic is defined as the sum of squared transformations of the skewness and kurtosis. Approximately, the test statistic follows a $\chi^2$ distribution:
$$DH = Z_1^T Z_1 + Z_2^T Z_2 \sim \chi^2(2p),$$
where $Z_1^T = (z_{11}, \ldots, z_{1p})$ and $Z_2^T = (z_{21}, \ldots, z_{2p})$ are defined by the transformations
$$Z_1 = \delta \log\left( y + \sqrt{y^2 + 1} \right) \quad \text{and} \quad Z_2 = \sqrt{9\alpha} \left[ \left( \frac{\chi}{2\alpha} \right)^{1/3} - 1 + \frac{1}{9\alpha} \right],$$
where
$$\delta = \frac{1}{\sqrt{\log \sqrt{\omega^2}}}, \quad \omega^2 = -1 + \sqrt{2(\beta - 1)}, \quad \beta = \frac{3(n^2 + 27n - 70)(n + 1)(n + 3)}{(n - 2)(n + 5)(n + 7)(n + 9)}, \quad y = s \sqrt{\frac{(\omega^2 - 1)(n + 1)(n + 3)}{12(n - 2)}},$$
$$\alpha = a + c \cdot s^2, \quad a = \frac{(n - 2)(n + 5)(n + 7)(n^2 + 27n - 70)}{6\tilde{\delta}}, \quad c = \frac{(n - 7)(n + 5)(n + 7)(n^2 + 2n - 5)}{6\tilde{\delta}},$$
$$\tilde{\delta} = (n - 3)(n + 1)(n^2 + 15n - 4), \quad \chi = 2l(k - 1 - s^2), \quad l = \frac{(n + 5)(n + 7)(n^3 + 37n^2 + 11n - 313)}{12\tilde{\delta}}$$
(here $\tilde{\delta}$ is written with a tilde to distinguish it from the transformation parameter $\delta$).

2.2.2. Royston (Roy)

In 1982, Royston proposed a test that uses the Shapiro-Wilk/Shapiro-Francia statistic to test multivariate normality. If the kurtosis of the sample is greater than 3, it uses the Shapiro-Francia test for leptokurtic distributions; otherwise, it uses the Shapiro-Wilk test for platykurtic distributions [3,5]. Let $W_j$ be the Shapiro-Wilk/Shapiro-Francia test statistic for the $j$th variable ($j = 1, 2, \ldots, d$) and $Z_j$ be the values obtained from the normality transformation [3,5]:
$$\text{if } 4 \le n \le 11: \quad x = n \quad \text{and} \quad W_j' = -\log\left( \gamma - \log(1 - W_j) \right);$$
$$\text{if } 12 \le n \le 2000: \quad x = \log n \quad \text{and} \quad W_j' = \log(1 - W_j).$$
Thus, both $x$ and $W_j'$ change with the sample size. The transformed values of each random variable are obtained by [3,5]
$$Z_j = \frac{W_j' - l}{\sigma},$$
where $\gamma$, $l$ and $\sigma$ are derived from polynomial approximations. The polynomial coefficients are provided for different sample size ranges [3,5]:
$$\gamma = a_{0\gamma} + a_{1\gamma} x + a_{2\gamma} x^2 + \cdots + a_{d\gamma} x^d,$$
$$l = a_{0l} + a_{1l} x + a_{2l} x^2 + \cdots + a_{dl} x^d,$$
$$\log \sigma = a_{0\sigma} + a_{1\sigma} x + a_{2\sigma} x^2 + \cdots + a_{d\sigma} x^d.$$
Royston's test statistic for multivariate normality is defined as
$$H = \frac{e \sum_{j=1}^{p} \psi_j}{p} \sim \chi^2_e,$$
where $e$ is the equivalent degrees of freedom and $\Phi(\cdot)$ is the cumulative distribution function of the standard normal distribution, such that
$$e = \frac{p}{1 + (p - 1)\bar{c}},$$
$$\psi_j = \left\{ \Phi^{-1}\left[ \tfrac{1}{2} \Phi(-Z_j) \right] \right\}^2, \quad j = 1, 2, \ldots, d.$$
Let $R$ be the correlation matrix and $r_{ij}$ the correlation between the $i$th and $j$th observations. Then the extra term $\bar{c}$ is found by
$$\bar{c} = \frac{\sum_{i} \sum_{j \ne i} c_{ij}}{p(p - 1)},$$
where
$$c_{ij} = \begin{cases} g(r_{ij}, n) & \text{for } i \ne j, \\ 1 & \text{for } i = j. \end{cases}$$
With the boundary conditions $g(0, n) = 0$ and $g(1, n) = 1$, $g(\cdot)$ can be defined as
$$g(r, n) = r^{\varrho} \left[ 1 - \frac{l}{v} (1 - r)^{l} \right],$$
where $l$, $\varrho$ and $v$ are unknown parameters, which were estimated by Ross simulation [4]. It was found that $l = 0.715$ and $\varrho = 5$ for sample sizes $10 \le n \le 2000$, and $v$ is a cubic function
$$v(n) = 0.21364 + 0.015124 (\log n)^2 - 0.0018034 (\log n)^3.$$

2.2.3. Mardia (Mar1 and Mar2)

In 1970, K. V. Mardia proposed a multivariate goodness of fit test based on multivariate skewness and kurtosis. The test statistics are defined as [17]
$$M_S = \frac{n \cdot s}{6} \sim \chi^2_{p(p+1)(p+2)/6}, \qquad M_K = \frac{n \left( k - p(p + 2) \right)^2}{8 p (p + 2)} \xrightarrow{d} \chi^2_1,$$
where $s$ and $k$ denote the multivariate skewness and kurtosis.
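A sketch of the two Mardia statistics following the standard definitions (NumPy assumed; names ours). Note that for a sample that is exactly symmetric about its mean, the multivariate skewness vanishes:

```python
import numpy as np

def mardia_statistics(X):
    """Mardia's skewness statistic M_S and kurtosis statistic M_K,
    with S the 1/n-normalized sample covariance."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n
    G = Xc @ np.linalg.inv(S) @ Xc.T     # g_ij = (X_i - Xbar)^T S^{-1} (X_j - Xbar)
    b1 = (G ** 3).sum() / n ** 2         # multivariate skewness
    b2 = (np.diag(G) ** 2).sum() / n     # multivariate kurtosis
    M_S = n * b1 / 6                     # approx. chi^2 with p(p+1)(p+2)/6 df
    M_K = n * (b2 - p * (p + 2)) ** 2 / (8 * p * (p + 2))   # approx. chi^2_1
    return M_S, M_K

rng = np.random.default_rng(2)
Z = rng.normal(size=(150, 3))
X = np.vstack([Z, -Z])                   # exactly symmetric sample
M_S, M_K = mardia_statistics(X)
```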

2.3. Other Tests

This section reviews tests based on a non-negative functional distance, on a covariance matrix, and on the Energy distance, concerning their use as statistics for assessing multivariate normality. A non-negative functional distance that measures the distance between two functions is defined as
$$D_h(P, Q) = \int \left| \hat{P}(t) - \hat{Q}(t) \right|^2 \varphi_h(t) \, dt,$$
where $\hat{P}(t)$ is the characteristic function of the multivariate standard normal distribution, $\hat{Q}(t)$ is the empirical characteristic function of the standardized observations, and $\varphi_h(t)$ is a kernel (weighting) function
$$\varphi_h(t) = \left( 2\pi h^2 \right)^{-p/2} e^{-\frac{t^T t}{2h^2}},$$
where $t \in \mathbb{R}^p$ and $h \in \mathbb{R}$ is a smoothing parameter that needs to be selected [10].

2.3.1. Energy (Energy)

In 2013, G. Szekely and M. Rizzo introduced a multivariate goodness of fit test based on the Energy distance between multivariate distributions. The test statistic is defined as [18]
$$E_n = n \left( \frac{2}{n} \sum_{j=1}^{n} E\|\tilde{Y}_{n,j} - N_1\| - E\|N_1 - N_2\| - \frac{1}{n^2} \sum_{j,k=1}^{n} \|\tilde{Y}_{n,j} - \tilde{Y}_{n,k}\| \right),$$
where $\tilde{Y}_{n,j} = \sqrt{n/(n-1)}\, Y_{n,j}$ and $Y_{n,j} = S_n^{-1/2}(X_j - \bar{X}_n)$, $j = 1, \ldots, n$, are the scaled residuals. $N_1$ and $N_2$ are independent random vectors with the standard normal distribution, and $E\|N_1 - N_2\| = 2\Gamma\left(\frac{p+1}{2}\right) / \Gamma\left(\frac{p}{2}\right)$, where $\Gamma(\cdot)$ is the Gamma function. The null hypothesis is rejected when $E_n$ takes large values.

2.3.2. Lobato-Velasco (LV)

In 2004, I. Lobato and C. Velasco improved the Jarque-Bera test and applied it to stationary processes. The test statistic is defined as [19]
$$G = \frac{n \hat{\mu}_3^2}{6 \hat{F}_3} + \frac{n \left( \hat{\mu}_4 - 3\hat{\mu}_2^2 \right)^2}{24 \hat{F}_4},$$
where $\hat{F}_k = \sum_{t=1-n}^{n-1} \hat{\psi}_t \left( \hat{\psi}_t + \hat{\psi}_{n - |t|} \right)^{k-1}$ and $\hat{\psi}_t$ is the sample autocovariance function.

2.3.3. Henze-Zirkler (HZ)

In 1990, Henze and Zirkler introduced the HZ test [1]. The test statistic is defined as
$$HZ = \frac{1}{n} \sum_{i=1}^{n} \sum_{j=1}^{n} e^{-\frac{h^2}{2} D_{ij}} - 2 \left( 1 + h^2 \right)^{-p/2} \sum_{i=1}^{n} e^{-\frac{h^2}{2(1 + h^2)} D_i} + n \left( 1 + 2h^2 \right)^{-p/2},$$
where $D_{ij} = (X_i - X_j)^T S^{-1} (X_i - X_j)$ and $D_i = (X_i - \bar{X})^T S^{-1} (X_i - \bar{X})$.
$D_i$ gives the squared Mahalanobis distance of the $i$th observation to the centroid and $D_{ij}$ gives the Mahalanobis distance between the $i$th and $j$th observations. If the sample follows a multivariate normal distribution, the test statistic is approximately log-normally distributed with mean [1]
$$1 - a^{-p/2} \left( 1 + \frac{p h^2}{a} + \frac{p(p + 2) h^4}{2a^2} \right),$$
and variance [1]
$$2 \left( 1 + 4h^2 \right)^{-p/2} + 2 a^{-p} \left( 1 + \frac{2 p h^4}{a^2} + \frac{3 p (p + 2) h^8}{4 a^4} \right) - 4 w_h^{-p/2} \left( 1 + \frac{3 p h^4}{2 w_h} + \frac{p (p + 2) h^8}{2 w_h^2} \right),$$
where $a = 1 + 2h^2$ and $w_h = (1 + h^2)(1 + 3h^2)$. Henze and Zirkler also proposed an optimal choice of the parameter $h$ for the $p$-variate case [1]:
$$h^* = \frac{1}{\sqrt{2}} \left( \frac{n (2p + 1)}{4} \right)^{\frac{1}{p + 4}}.$$
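A sketch of the HZ statistic with the optimal $h^*$ (NumPy assumed, function name ours). Being an empirical weighted $L^2$ distance between characteristic functions, the statistic is nonnegative:

```python
import numpy as np

def henze_zirkler(X):
    """HZ statistic with the optimal parameter h* from the formulas above
    (S uses 1/n normalization)."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    G = Xc @ np.linalg.inv(Xc.T @ Xc / n) @ Xc.T
    d = np.diag(G)                             # D_i
    Dij = d[:, None] + d[None, :] - 2 * G      # D_ij
    h2 = ((n * (2 * p + 1) / 4) ** (2 / (p + 4))) / 2       # (h*)^2
    t1 = np.exp(-h2 / 2 * Dij).sum() / n
    t2 = 2 * (1 + h2) ** (-p / 2) * np.exp(-h2 * d / (2 * (1 + h2))).sum()
    t3 = n * (1 + 2 * h2) ** (-p / 2)
    return t1 - t2 + t3

rng = np.random.default_rng(3)
hz = henze_zirkler(rng.normal(size=(100, 2)))
```

Under $H_0$, the resulting value is compared against the approximate log-normal distribution with the mean and variance given above.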
A drawback of the Henze-Zirkler test is that, when $H_0$ is rejected, the nature of the violation of normality is generally not straightforward to identify. Thus, many biomedical researchers would prefer a more informative and equally or more powerful test than the Henze-Zirkler test [5].

2.3.4. Nikulin-Rao-Robson (NRR) and Dzhaparidze-Nikulin (DN)

In 1981, Moore and Stubblebine suggested a multivariate Nikulin-Rao-Robson (NRR) goodness of fit test [7,8]. For a covariance matrix of any dimension, the test statistic can be written (in the notation below) as
$$Y_n^2 = V_n^T(\hat{\theta}_n) V_n(\hat{\theta}_n) + V_n^T(\hat{\theta}_n) B \left( J - B^T B \right)^{-1} B^T V_n(\hat{\theta}_n),$$
where $V_n(\hat{\theta}_n)$ is the vector of standardized cell frequencies with components
$$V_i = V_i^{(n)}(\hat{\theta}_n) = \frac{N_i^{(n)} - n/r}{\sqrt{n/r}}, \quad i = 1, \ldots, r,$$
where $N_i^{(n)}$ is the number of random vectors $X_1, \ldots, X_n$ falling into the cell $E_i^{(n)}(\hat{\theta}_n)$, $i = 1, \ldots, r$. The limiting covariance matrix of the standardized frequencies is $V(\hat{\theta}_n) = I - q q^T - B J^{-1} B^T$, where $B$ is the $r \times m$ matrix with elements
$$B_{ij} = \frac{1}{\sqrt{p_i(\theta)}} \frac{\partial p_i(\theta)}{\partial \theta_j}, \quad i = 1, \ldots, r, \quad j = 1, \ldots, m,$$
where $q$ is an $r$-vector with entries $1/\sqrt{r}$, $m = p + p(p + 1)/2$ is the number of unknown parameters, and $J = J(\theta)$ is the Fisher information matrix of size $m \times m$ for one observation, evaluated as
$$J(\theta) = \begin{pmatrix} \Sigma^{-1} & 0 \\ 0 & Q^{-1} \end{pmatrix},$$
where $Q$ is the $p(p+1)/2 \times p(p+1)/2$ covariance matrix of $w$ (a vector of the entries of $\sqrt{n} S$ arranged column-wise by taking the upper triangular elements) [7]:
$$w = \left( s_{11}, s_{12}, s_{22}, s_{13}, s_{23}, s_{33}, \ldots, s_{pp} \right)^T.$$
The second term of $Y_n^2$ recovers information lost due to data grouping. Another useful decomposition of $Y_n^2$ is
$$Y_n^2 = U_n^2 + S_n^2,$$
where $U_n^2$ is the multivariate statistic defined by Dzhaparidze and Nikulin (1974) [7]:
$$U_n^2 = V_n^T(\hat{\theta}_n) \left[ I - B_n \left( B_n^T B_n \right)^{-1} B_n^T \right] V_n(\hat{\theta}_n),$$
and in 1985, McCulloch presented the multivariate test statistic [7]:
$$S_n^2 = Y_n^2 - U_n^2 = V_n^T(\hat{\theta}_n) B_n \left[ \left( J_n - B_n^T B_n \right)^{-1} + \left( B_n^T B_n \right)^{-1} \right] B_n^T V_n(\hat{\theta}_n).$$
If $\operatorname{rank}(B) = s$, then $U_n^2$ and $S_n^2$ are asymptotically independent and distributed in the limit as $\chi^2_{r-s-1}$ and $\chi^2_s$, respectively.

3. The New Test

Our test is based on distribution distance and has been derived using an inversion formula. The estimation of a sample distribution density is based on application of the characteristic function and inversion formula. This method is known for its good properties (i.e., low sensitivity) and has been introduced in [20]. Marron and Wand [21] carried out an extensive comparison of density estimation methods (including the adapted kernel method) and concluded that density estimation based on application of characteristic function and inversion is more accurate for non-Gaussian data sets.
A random $p$-variate vector $X \in \mathbb{R}^p$ which follows a mixture model has the density function
$$f(X) = f(X, \theta) = \sum_{k=1}^{q} p_k f_k(X, \theta_k), \tag{13}$$
where $q$ is the number of clusters (i.e., components, classes) of the mixture, and $p_k$, $k = 1, \ldots, q$, are the a priori probabilities, which satisfy
$$p_k > 0, \quad \sum_{k=1}^{q} p_k = 1. \tag{14}$$
Here $f_k(X, \theta_k)$ is the distribution density of the $k$th class and $\theta = (p_1, \ldots, p_q, \theta_1, \ldots, \theta_q)$ is the set of parameters. The observed data are a $p$-variate sample of independent and identically distributed realizations of $X$.
When examining parametric approximations, it should be emphasized that as the data dimension increases, the number of model parameters increases rapidly, making it more difficult to find accurate parameter estimates. It is much easier to estimate the densities of the univariate data projections
$$x_\tau = \tau^T x, \tag{15}$$
than the multivariate data density $f$, because of the one-to-one correspondence
$$f \longleftrightarrow \left\{ f_\tau, \; \tau \in \mathbb{R}^p \right\}. \tag{16}$$
It is quite natural to try to find the multivariate density $f$ using the density estimates $\hat{f}_\tau$ of univariate observational projections [20]. In the case of a Gaussian mixture model, the projection (15) of the observations is also distributed according to a Gaussian mixture model:
$$f_\tau(x) = f_\tau(x, \theta_\tau) = \sum_{k=1}^{q} p_{k,\tau} \varphi_{k,\tau}(x), \tag{17}$$
where $\varphi_{k,\tau}(x) = \varphi(x; m_{k,\tau}, \sigma_{k,\tau}^2)$ is a univariate Gaussian density. The parameter set $\theta$ of the multivariate mixture and the distribution parameters $\theta_\tau = \left\{ p_{k,\tau}, m_{k,\tau}, \sigma_{k,\tau}^2, \; k = 1, \ldots, q \right\}$ of the data projections are related by the equations
$$p_{j,\tau} = p_j, \quad m_{j,\tau} = \tau^T M_j, \quad \sigma_{j,\tau}^2 = \tau^T R_j \tau, \tag{18}$$
where $M_j$ and $R_j$ denote the mean vector and covariance matrix of the $j$th component.
The inversion formula is used:
$$f(x) = \frac{1}{(2\pi)^p} \int_{\mathbb{R}^p} e^{-i t^T x} \psi(t) \, dt, \tag{19}$$
where
$$\psi(t) = E e^{i t^T X}, \tag{20}$$
where $\psi(t)$ denotes the characteristic function of the random vector $X$. Setting $u = \|t\|$, $\tau = t/\|t\|$ and changing the variables to a spherical coordinate system, we obtain
$$f(x) = \frac{1}{(2\pi)^p} \int_{\tau : \|\tau\| = 1} ds \int_0^\infty e^{-i u \tau^T x} \psi(u\tau) \, u^{p-1} \, du, \tag{21}$$
where the first integral is the surface integral over the unit sphere. The characteristic function of the projection of the observed random variable is
$$\psi_\tau(u) = E e^{i u \tau^T X}, \tag{22}$$
and has the property
$$\psi(u\tau) = \psi_\tau(u). \tag{23}$$
By selecting the set $T$ of uniformly distributed directions on the sphere and replacing the characteristic function with its estimate, a density estimate is obtained [20,22]:
$$\hat{f}(x) = \frac{A_p}{\# T} \sum_{\tau \in T} \int_0^\infty e^{-i u \tau^T x} \hat{\psi}_\tau(u) \, u^{p-1} e^{-h u^2} \, du, \tag{24}$$
where $\# T$ denotes the size of the set $T$. Using the $p$-variate ball volume formula
$$V_p(R) = \frac{\pi^{p/2} R^p}{\Gamma\left( \frac{p}{2} + 1 \right)} = \begin{cases} \dfrac{\pi^{p/2} R^p}{(p/2)!}, & \text{when } p \bmod 2 = 0, \\[2mm] \dfrac{2^{\frac{p+1}{2}} \pi^{\frac{p-1}{2}} R^p}{p!!}, & \text{when } p \bmod 2 = 1, \end{cases} \tag{25}$$
the constant $A_p$ is defined as
$$A_p = \frac{p \, V_p(1)}{(2\pi)^p} = \frac{p}{2^p \pi^{p/2} \, \Gamma\left( \frac{p}{2} + 1 \right)}. \tag{26}$$
Computer simulation studies have shown that the density estimates obtained using the inversion formula are not smooth. Therefore, in Formula (24), an additional multiplier $e^{-h u^2}$ is used. This multiplier smooths the estimate $\hat{f}(x)$ with the Gaussian kernel function. Moreover, this form of the multiplier allows the integral value to be calculated analytically. Monte Carlo studies have shown that its use significantly reduces the error of the estimates. To apply Formula (24), the characteristic function of the projected data must be estimated. Let us consider two approaches. The first one is based on the density approximation by a Gaussian distribution mixture model. In this case, the parametric estimate of the characteristic function is used:
$$\hat{\psi}_\tau(u) = \sum_{k=1}^{\hat{q}_\tau} \hat{p}_{k,\tau} \, e^{i u \hat{m}_{k,\tau} - u^2 \hat{\sigma}_{k,\tau}^2 / 2}. \tag{27}$$
By substituting $\hat{\psi}_\tau(u)$ in (24) by (27), we get
$$\hat{f}(x) = \frac{A_p}{\# T} \sum_{\tau \in T} \sum_{k=1}^{\hat{q}_\tau} \hat{p}_{k,\tau} \int_0^\infty e^{i u \left( \hat{m}_{k,\tau} - \tau^T x \right) - u^2 \left( h + \hat{\sigma}_{k,\tau}^2 / 2 \right)} u^{p-1} \, du = \frac{A_p}{\# T} \sum_{\tau \in T} \sum_{k=1}^{\hat{q}_\tau} \hat{p}_{k,\tau} \, \frac{I_{p-1}\left( \frac{\hat{m}_{k,\tau} - \tau^T x}{\sqrt{\hat{\sigma}_{k,\tau}^2 + 2h}} \right)}{\left( \sqrt{\hat{\sigma}_{k,\tau}^2 + 2h} \right)^p}, \tag{28}$$
where
$$I_j(y) = \operatorname{Re} \int_0^\infty e^{i y t - t^2/2} \, t^j \, dt. \tag{29}$$
We note that only the real part of the expression is considered here (the sum of the imaginary parts must be equal to zero); in other words, the density estimate $\hat{f}(x)$ can take only real values. The chosen form of the smoothing multiplier $e^{-h u^2}$ allows relating the smoothing parameter $h$ with the variances of the projection clusters, i.e., in the calculations the variances are simply increased by $2h$. Next, the expression (29) is evaluated.
Let
$$C_j(y) = \int_0^\infty \cos(y t) \, e^{-t^2/2} \, t^j \, dt, \tag{30}$$
$$S_j(y) = \int_0^\infty \sin(y t) \, e^{-t^2/2} \, t^j \, dt, \tag{31}$$
then (29) can be written as
$$\int_0^\infty e^{i y t - t^2/2} \, t^j \, dt = C_j(y) + i S_j(y). \tag{32}$$
By integrating by parts, we get
$$C_j(y) = \left[ -e^{-t^2/2} t^{j-1} \cos(y t) \right]_0^\infty + \int_0^\infty e^{-t^2/2} \left( (j - 1) t^{j-2} \cos(y t) - y t^{j-1} \sin(y t) \right) dt = 1\{j = 1\} + (j - 1) C_{j-2}(y) - y S_{j-1}(y), \quad j \ge 1. \tag{33}$$
$S_j(y)$ is expressed analogously. With respect to the limitations of the $j$ index, the following recursive equations are obtained:
$$C_j(y) = (j - 1) C_{j-2}(y) - y S_{j-1}(y), \quad j \ge 2, \tag{34}$$
$$C_1(y) = 1 - y S_0(y), \tag{35}$$
$$S_j(y) = (j - 1) S_{j-2}(y) + y C_{j-1}(y), \quad j \ge 2, \tag{36}$$
$$S_1(y) = y C_0(y). \tag{37}$$
The initial function $S_0(y)$ is found by starting with the relation
$$\frac{\partial S_0(y)}{\partial y} = \int_0^\infty t \cos(y t) \, e^{-t^2/2} \, dt = C_1(y). \tag{38}$$
From (35) and (38) it follows that $S_0$ satisfies the differential equation
$$S_0'(y) = 1 - y S_0(y), \quad S_0(0) = 0, \tag{39}$$
which is solved by writing down $S_0$ as the power series $S_0(y) = \sum_{l \ge 0} c_l y^l$; substituting it into (39) gives
$$\sum_{l \ge 0} (l + 1) \, c_{l+1} \, y^l = 1 - \sum_{l \ge 1} c_{l-1} \, y^l. \tag{40}$$
By equating the coefficients of the same powers, their values are obtained:
$$c_0 = 0, \quad c_1 = 1, \quad c_l = -c_{l-2}/l, \quad l \ge 2, \tag{41}$$
which gives us
$$S_0(y) = \sum_{l=0}^{\infty} \frac{(-1)^l y^{2l+1}}{(2l + 1)!!} = y - \frac{y^3}{3!!} + \frac{y^5}{5!!} - \frac{y^7}{7!!} + \cdots. \tag{42}$$
$C_0$ is found from expression (30):
$$C_0(y) = \int_0^\infty \cos(y t) \, e^{-t^2/2} \, dt = \frac{1}{2} \int_{-\infty}^{\infty} \cos(y t) \, e^{-t^2/2} \, dt = \frac{1}{2} \int_{-\infty}^{\infty} \left( \cos(y t) - i \sin(y t) \right) e^{-t^2/2} \, dt = \sqrt{\frac{\pi}{2}} \, e^{-y^2/2}. \tag{43}$$
The value of the integral (29) then is
$$I_j(y) = C_j(y). \tag{44}$$
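The evaluation of $I_j(y)$ reduces to the recursions (34)–(37) seeded by the series (42) and the closed form (43); a Python sketch (function names ours):

```python
import math

def S0(y, terms=80):
    """S_0(y) = sum_l (-1)^l y^(2l+1) / (2l+1)!!  (series (42));
    converges quickly for moderate |y|."""
    total, term = 0.0, float(y)           # l = 0 term is y / 1!!
    for l in range(terms):
        total += term
        term *= -y * y / (2 * l + 3)      # ratio of consecutive series terms
    return total

def C0(y):
    """C_0(y) = sqrt(pi/2) * exp(-y^2 / 2)  (closed form (43))."""
    return math.sqrt(math.pi / 2) * math.exp(-y * y / 2)

def CS(j, y):
    """(C_j(y), S_j(y)) via the recursions (34)-(37)."""
    c2, s2 = C0(y), S0(y)                 # C_0, S_0
    if j == 0:
        return c2, s2
    c1, s1 = 1.0 - y * s2, y * c2         # C_1 = 1 - y S_0,  S_1 = y C_0
    for m in range(2, j + 1):
        # C_m = (m-1) C_{m-2} - y S_{m-1},  S_m = (m-1) S_{m-2} + y C_{m-1}
        c1, s1, c2, s2 = (m - 1) * c2 - y * s1, (m - 1) * s2 + y * c1, c1, s1
    return c1, s1

def I_j(j, y):
    """I_j(y) = C_j(y), per (44)."""
    return CS(j, y)[0]
```

The recursion values can be checked directly against numerical quadrature of the defining integrals (30) and (31).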
One of the disadvantages of the inversion formula method (defined by (24)) is that the Gaussian distribution mixture model (13) described by this estimate (for $f_k = \varphi_k$) does not represent the density accurately, except around the observations. When approximating the density under study with a mixture of Gaussian distributions, the estimation of the density using the inversion formula often becomes complicated due to a large number of components. Thus, we merge the components with small a priori probabilities into one noise cluster.
We have developed and examined a modification of the algorithm which is based on the use of a multivariate Gaussian distribution mixture model. The parametric estimate of the characteristic function of the uniform distribution density,
$$\hat{\psi}(u) = \frac{2}{(b - a) u} \sin\left( \frac{(b - a) u}{2} \right) \cdot e^{i u \frac{a + b}{2}}, \tag{45}$$
is used in the inversion Formula (19). In the density estimate calculation Formula (24), the estimate of the characteristic function is constructed as a combination of the characteristic functions of a mixture of Gaussian distributions and a uniform distribution with corresponding a priori probabilities:
$$\hat{\psi}_\tau(u) = \sum_{k=1}^{\hat{q}_\tau} \hat{p}_{k,\tau} \, e^{i u \hat{m}_{k,\tau} - u^2 \hat{\sigma}_{k,\tau}^2 / 2} + \hat{p}_{0,\tau} \, \frac{2}{(b - a) u} \sin\left( \frac{(b - a) u}{2} \right) \cdot e^{i u \frac{a + b}{2}}, \tag{46}$$
where the second member describes the uniformly distributed noise cluster, $\hat{p}_{0,\tau}$ is the noise cluster weight, and $a = a(\tau)$, $b = b(\tau)$. Based on the established estimates of the parameters of the uniform distribution and the data projections, the range can be defined as
$$a = \tau^T x_{\min} - \frac{\tau^T x_{\max} - \tau^T x_{\min}}{2(n - 1)}, \tag{47}$$
$$b = \tau^T x_{\max} + \frac{\tau^T x_{\max} - \tau^T x_{\min}}{2(n - 1)}. \tag{48}$$
By inserting (46) into (24), we obtain
$$\hat{f}(x) = \frac{A_p}{\# T} \sum_{\tau \in T} \left[ \sum_{k=1}^{\hat{q}_\tau} \hat{p}_{k,\tau} \int_0^\infty e^{i u \left( \hat{m}_{k,\tau} - \tau^T x \right) - u^2 \left( h + \hat{\sigma}_{k,\tau}^2 / 2 \right)} u^{p-1} \, du + \frac{2 \hat{p}_{0,\tau}}{b - a} \int_0^\infty e^{i u \left( \frac{a + b}{2} - \tau^T x \right) - u^2 h} \sin\left( \frac{(b - a) u}{2} \right) u^{p-2} \, du \right]. \tag{49}$$
Using notations such as in (28), we define the density estimate as
$$\hat{f}(x) = \frac{A_p}{\# T} \sum_{\tau \in T} \left[ \sum_{k=1}^{\hat{q}_\tau} \hat{p}_{k,\tau} \, \frac{I_{p-1}\left( \frac{\hat{m}_{k,\tau} - \tau^T x}{\sqrt{\hat{\sigma}_{k,\tau}^2 + 2h}} \right)}{\left( \sqrt{\hat{\sigma}_{k,\tau}^2 + 2h} \right)^p} + \frac{2 \hat{p}_{0,\tau}}{b - a} \, J_{p-2}\left( \frac{\frac{a + b}{2} - \tau^T x}{\sqrt{2h}}, \frac{b - a}{2\sqrt{2h}} \right) \left( \sqrt{2h} \right)^{-(p - 1)} \right], \tag{50}$$
where $I_j(y)$ is given in (29), which is evaluated by (44), and
$$J_j(y, t) = \operatorname{Re} \int_0^\infty e^{i y u - u^2/2} \cdot \sin(t u) \cdot u^j \, du. \tag{51}$$
By integrating, we get
$$\int_0^\infty e^{i y u - u^2/2} \sin(t u) \, u^j \, du = \int_0^\infty \left( \cos(y u) + i \sin(y u) \right) \sin(t u) \, e^{-u^2/2} u^j \, du = \int_0^\infty \left( \frac{\sin((y + t) u) + \sin((t - y) u)}{2} + i \, \frac{\cos((y - t) u) - \cos((y + t) u)}{2} \right) e^{-u^2/2} u^j \, du = \frac{1}{2} S_j(y + t) + \frac{1}{2} S_j(t - y) + \frac{i}{2} C_j(y - t) - \frac{i}{2} C_j(y + t), \tag{52}$$
where $S_j(y)$ and $C_j(y)$ are defined in (31) and (30). Then the integral (51) evaluates to
$$J_j(y, t) = \frac{1}{2} S_j(y + t) + \frac{1}{2} S_j(t - y). \tag{53}$$
The above procedure is called the modified inversion formula density estimate. Our proposed normality test is based on the distance function
$$T = \int_{\mathbb{R}^p} \left| f(z) - \hat{f}(z) \right| \, dG(z), \tag{54}$$
where $z$ is a standardized value and $\hat{f}(z)$ is the estimate of the density function.
The choice of $G(z)$ in (54) is influenced by three aspects [23]:
  • $G(z)$ assigns high weight where $|f(z) - \hat{f}(z)|$ is large, $f(z)$ pertaining to the alternative hypothesis.
  • $G(z)$ assigns high weight where $\hat{f}(z)$ is a relatively precise estimator of $f(z)$.
  • $G(z)$ is such that the integral (54) has a closed form.
For the distribution-free method, the first two aspects are fulfilled by adequately selecting the smoothness parameter $h$; in addition, this yields the closed form of the integral (54):
$$T = n^{-1} \sum_{t=1}^{n} \left| f(z_t) - \hat{f}(z_t) \right|. \tag{55}$$
$T$ does not depend on the sample size for moderate samples ($n \ge 32$) but depends on the data dimension. It is convenient to use the test statistic $T^* = \log T$, which had the lowest sensitivity based on the exploratory study. Under the null hypothesis, the statistic $T^*$ approximately follows the Johnson SU distribution, which is specified by the shape ($\delta > 0$, $\gamma$), scale ($\lambda > 0$) and location ($\xi$) parameters and has the density function
$$f(X) = \frac{\delta}{\lambda \sqrt{2\pi}} \, g'\!\left( \frac{X - \xi}{\lambda} \right) \exp\left( -0.5 \left[ \gamma + \delta \, g\!\left( \frac{X - \xi}{\lambda} \right) \right]^2 \right), \quad X \in (-\infty, +\infty), \tag{56}$$
where $g(y) = \ln\left( y + \sqrt{y^2 + 1} \right)$ and $g'(y) = \frac{1}{\sqrt{y^2 + 1}}$.
In the middle of the twentieth century, N. L. Johnson [24] proposed certain systems of curves derived by the method of translation which retain most of the advantages and eliminate some of the drawbacks of the systems first based on this method. Johnson introduced the log-normal (SL), bounded (SB), and unbounded (SU) systems. The range of variation of the bounded system covers the area between the bounding line $\beta_2 - \beta_1 - 1 = 0$ and the Pearson Type III distribution, where the $(\beta_1, \beta_2)$ points are obtained from the distribution moments defined by Wicksell [25]:
$$\mu_r'(y) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{r (z - \gamma)/\delta} \, e^{-\frac{1}{2} z^2} \, dz = e^{\frac{r^2}{2\delta^2} - \frac{r\gamma}{\delta}}. \tag{57}$$
It follows that
$$\beta_1 = \left( e^{\delta^{-2}} - 1 \right) \left( e^{\delta^{-2}} + 2 \right)^2, \quad \beta_1 > 0, \tag{58}$$
$$\beta_2 = e^{4\delta^{-2}} + 2 e^{3\delta^{-2}} + 3 e^{2\delta^{-2}} - 3. \tag{59}$$
The SU system is bounded at one end only (Pearson Type V). The SL system lies between the SB and SU systems. These regions are indicated in Figure 1. The SU system is presented in detail in [24].
Estimates of T * statistic Johnson SU distribution parameters for different dimensions are given in Table 1.
For the statistic $T^*$, the invariance and consistency properties were checked. The invariance property is confirmed because standardized data were used. The consistency property is confirmed experimentally (see Section 5).

4. Statistical Distributions

The overviewed normality tests are assessed by the simulation study of 11 statistical distributions grouped into four groups: symmetric, asymmetric, mixed and normal mixture distributions [5]. A description of these distribution groups is given in the following subsections.

4.1. A Group of Symmetric Distributions

Symmetric multivariate distributions are taken from the research [5]:
  • Three cases of the Beta(a,b) distribution − Beta(1,1),Beta(1,2) and Beta(2,2), where a and b are the shape parameters.
  • One case of the Cauchy(t,s) distribution − Cauchy(0,1), where t and s are the location and scale parameters.
  • One case of the Laplace(t,s) distribution − Laplace(0,1), where t and s are the location and scale parameters.
  • One case of the Logistic(t,s) distribution − Logistic(0,1), where t and s are the location and scale parameters.
  • Two cases of the t-Student(ν) distribution − t(2) and t(5), where ν is the number of degrees of freedom.
  • One case of the standard normal N(0,1) distribution.

4.2. A Group of Asymmetric Distributions

Asymmetric multivariate distributions are taken from the research [5]:
  • Five cases of the Chi-squared(ν) distribution − χ2 (1), χ2 (2), χ2 (5), χ2 (10) and χ2 (15), where ν is the number of degrees of freedom.
  • Two cases of the Gamma(a,b) distribution − Gamma(0.5,1) and Gamma(5,1), where a and b are the shape and scale parameters.
  • One case of the Gumbel(t,s) distribution − Gumbel(1,2), where t and s are the location and scale parameters.
  • Two cases of the Lognormal(t,s) distribution − LN(0,1) and LN(0,0.25) where t and s are the location and scale parameters.
  • Three cases of the Weibull(β) distribution − Weibull(0.8), Weibull(1) and Weibull(1.5), where β is the shape parameter.

4.3. A Group of Mixed Distributions

The generated mixed data distribution
X k = X k 1 , X k 2 , , X k m , , X k p T ,   k = 1 , 2 , , n
is such that the first m variates (i.e., Xk1, Xk2, …, Xkm) follow the standard normal distribution and distribution of the remaining variates is one of the non-normal distributions (Laplace(0,1), χ2(5), t(5), Beta(1,1), Beta(1,2), Beta(2,2)). The experimental research covers the cases for m = p − 1, m = p/2 and m = 1.

4.4. A Group of Normal Mixture Distributions

Nine cases of the multivariate normal mixture distribution MVNMIX(a,b,c,d) are considered in this research [5]: MVNMIX(0.5,2,0,0), MVNMIX(0.5,4,0,0), MVNMIX(0.5,2,0.9,0), MVNMIX(0.5,0.5,0.9,0), MVNMIX(0.5,0.5,0.9,0.1), MVNMIX(0.5,0.5,0.9,0.9), MVNMIX(0.7,2,0.9,0.3), MVNMIX(0.3,1,0.9,0.1), MVNMIX(0.3,1,0.9,0.9). The multivariate normal mixture distribution has density
$$a N(0, \Sigma_1) + (1 - a) N(b \mathbf{1}, \Sigma_2),$$
where $\mathbf{1}$ is the column vector with all elements equal to 1, and
$$\Sigma_1 = (1 - c) I + c \mathbf{1}\mathbf{1}^T \quad \text{and} \quad \Sigma_2 = (1 - d) I + d \mathbf{1}\mathbf{1}^T.$$
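Sampling from this mixture is straightforward; a sketch (NumPy assumed, function name ours):

```python
import numpy as np

def sample_mvnmix(a, b, c, d, p, n, rng):
    """Draw n points from a N(0, Sigma1) + (1 - a) N(b*1, Sigma2), with
    Sigma1 = (1-c)I + c 11^T and Sigma2 = (1-d)I + d 11^T."""
    one = np.ones(p)
    S1 = (1 - c) * np.eye(p) + c * np.outer(one, one)
    S2 = (1 - d) * np.eye(p) + d * np.outer(one, one)
    comp = rng.random(n) < a                     # True -> first component
    out = np.empty((n, p))
    out[comp] = rng.multivariate_normal(np.zeros(p), S1, size=int(comp.sum()))
    out[~comp] = rng.multivariate_normal(b * one, S2, size=int((~comp).sum()))
    return out

rng = np.random.default_rng(4)
X = sample_mvnmix(a=0.5, b=4, c=0.0, d=0.0, p=2, n=5000, rng=rng)
```

With $c = d = 0$ the components are spherical; increasing $c$ or $d$ adds equicorrelation, and $b$ separates the component means along the diagonal.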

5. Simulation Study and Discussion

This section provides a simulation study that evaluates the power of the selected multivariate normality tests. We used the Monte Carlo method to compare our proposed test with the multivariate tests described above for dimensions $p = 2, 5, 10$, with sample sizes $n = 32, 64, 128, 256, 512, 1024$ at significance level $\alpha = 0.05$. Power was estimated by applying the tests to 1,000,000 samples randomly drawn from the alternative distributions (Beta, Cauchy, Laplace, Logistic, Student, standard normal, Chi-Square, Gamma, Gumbel, Lognormal, Weibull, mixed, normal mixture).
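The power estimation loop itself is generic; the following sketch (SciPy assumed) shows its structure. We substitute the univariate Shapiro-Wilk test and a Laplace alternative purely for illustration, and use far fewer replications than the 1,000,000 used in the paper:

```python
import numpy as np
from scipy import stats

def estimate_power(pvalue_test, sampler, n, reps=500, alpha=0.05, seed=0):
    """Monte Carlo power: fraction of samples from the alternative `sampler`
    for which `pvalue_test` rejects at level alpha."""
    rng = np.random.default_rng(seed)
    rejections = sum(pvalue_test(sampler(rng, n)) < alpha for _ in range(reps))
    return rejections / reps

# Illustrative univariate stand-in: Shapiro-Wilk against a Laplace alternative.
power = estimate_power(
    pvalue_test=lambda x: stats.shapiro(x).pvalue,
    sampler=lambda rng, n: rng.laplace(size=n),
    n=64,
)
```

Replacing the p-value function and the sampler with a multivariate test and one of the alternatives above reproduces the structure of the study; only the number of replications changes the Monte Carlo error.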
The values of the test smoothness parameter ( h ) were selected experimentally: from 0.1 to 5 with a step of 0.1. The value of the test h parameter was determined for each dimension considered. It was found that the best results are obtained (i.e., maximum statistical value) for p = 2 with h = 1.05 , for p = 5 with h = 0.1 , and for p = 10 with h = 2.4 . These smoothness parameter h values were used to carry out the numerical experiments.
The power of the 13 multivariate goodness of fit tests (including our proposed test) was estimated for different sample sizes, distributions and mixtures. The mean power values for the groups of distributions (given in Section 4) were computed for each test and sample size and are presented in Table 2, Table 3, Table 4 and Table 5. The results show that the new test is the most powerful one for the groups of symmetric and mixed distributions. In the group of asymmetric distributions, the new test (for p = 2) and the Roy test (for p = 5 and 10) are the most powerful. The new test (for p = 2 and 5) and the Roy test (for p = 10 with sample sizes n = 256, 512, 1024) are also the most powerful in the group of normal mixture distributions. Comparing the Mardia tests (Mar1 and Mar2), which are based on the skewness and kurtosis coefficients, it was found that Mar1 is the most powerful only for the group of asymmetric distributions; for the group of symmetric distributions, its power is the lowest compared to the other tests.
To supplement and emphasize the results presented in Table 2, Table 3, Table 4 and Table 5, generalized line diagrams were drawn using the Trellis display [26] multivariate data visualization method. The resulting graph, shown in Figure 2, indicates that the new test is significantly more powerful than the other tests, while the power of the Mar1 test is the lowest. Figure 2 also shows that the power of the tests increases with the sample size. As the dimension increases, the power of eight tests (AD, CHI2, CVM, Energy, HZ, New, Mar1 and NRR) decreases, while the power of the other tests (DH, DN, LV, Mar2 and Roy) increases slightly. For small sample sizes, the most powerful tests are New, Roy and DH; for large sample sizes, the most powerful tests are New, Energy, HZ and LV.

6. Examples

6.1. Survival Data

The first practical application is illustrated with a data set collected in 2001–2020 by the Head of the Department of Urology Clinic of the Lithuanian University of Health Sciences [27]. This data set consists of records of 2423 patients with two continuous attributes (patient age and prostate-specific antigen (PSA)). The assumption of normality was verified after filtering patients' age and PSA by year of death (i.e., deaths during the first 1, 2, 3, 4, 5, 6, 7, 10, and 15 years). The filtered data were standardized, and the power and p-value were calculated for the multivariate tests at the significance level α = 0.05. Based on the obtained results, all the applied multivariate tests rejected the null hypothesis H0 of normality. The power of the CHI2, DH, Energy, HZ, LV, New, Mar, NRR and Roy tests was 0.999 with a p-value < 0.0001; the exception was the DN test, whose power was 0.576 with a p-value of 0.026.
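The standardization step mentioned here is the usual column-wise one; a minimal sketch (with synthetic stand-in data, not the clinical data set):

```python
import numpy as np

def standardize(x):
    """Column-wise standardization applied to the filtered data before
    testing: subtract each column's sample mean and divide by its sample
    standard deviation (a routine preprocessing sketch, not the paper's code)."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean(axis=0)) / x.std(axis=0, ddof=1)

# synthetic two-column example: an age-like and a PSA-like variable
data = np.column_stack([np.random.default_rng(1).normal(70, 9, 100),
                        np.random.default_rng(2).lognormal(1, 1, 100)])
z = standardize(data)
print(np.allclose(z.mean(axis=0), 0), np.allclose(z.std(axis=0, ddof=1), 1))  # True True
```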

6.2. IQOS Data

The second practical application uses the data set of a 2017 pollution study of IQOS and traditional cigarettes [28], carried out at the Department of Environmental Technology, Faculty of Chemical Technology, Kaunas University of Technology. This data set consists of 33 experiments (with different conditions) in which the numerical (Pn10) and mass (Pm2.5, Pm10) concentrations of particles were measured. The assumption of normality was checked after filtering Pn10, Pm2.5 and Pm10 by the number of the experiment in the smoking phase. The filtered data were standardized, and the power and p-values of the multivariate tests were calculated at the significance level α = 0.05. Based on the obtained results, all the applied multivariate tests rejected the assumption of normality. Most of the tests (CHI2, DH, Energy, HZ, LV, New, Mar, NRR and Roy) had a power of 0.999 with a p-value < 0.0001; the power of the remaining tests was also close to 0.99 with p-values of about 0.0001.

7. Conclusions

In this study, a comprehensive comparison of the power of 13 multivariate goodness of fit tests was performed for groups of symmetric, asymmetric, mixed, and normal mixture distributions. Two-, five-, and ten-dimensional data sets were generated to estimate the test power empirically.
A new multivariate goodness of fit test based on the inversion formula was proposed. Based on the obtained modeling results, it was determined that the most powerful tests for the groups of symmetric, asymmetric, mixed and normal mixture distributions are the proposed test and the Roy multivariate test. From the two real data examples, it was concluded that our proposed test remains stable when applied to real data sets.

Author Contributions

Data curation, J.A., T.R.; Formal analysis, J.A., T.R.; Investigation, J.A., T.R.; Methodology, J.A., T.R.; Software, J.A., T.R.; Supervision, T.R.; Writing—original draft, J.A., M.B.; Writing—review and editing, J.A., M.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Generated data sets were used in the study (see Section 4).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

ℝ^p is the p-variate set of real numbers,
Xk = (Xk1, Xk2, …, Xkp)^T ∈ ℝ^p, k = 1, 2, …, n is a p-variate vector,
#T denotes the size of set T,
p is the dimension,
h is the smoothness parameter,
D(j), j = 1, 2, …, n are the order statistics,
G_p(·) is the probability distribution function of χ²(p),
s is the skewness,
k is the kurtosis,
n is the sample size,
x̄ is the sample mean,
σ² is the sample variance,
z is a standardized value,
d is the number of variables,
e is the equivalent degrees of freedom,
Φ(·) is the cumulative distribution function of the standard normal distribution,
R is the correlation matrix,
r_ij is the correlation between the i-th and j-th observations,
V_i is a vector of standardized cell frequencies,
N_i(n) is the number of random vectors,
J = J(θ) is the Fisher information matrix,
Q is the p(p + 1)/2 × p(p + 1)/2 covariance matrix of w,
P̂(t) is the characteristic function of the multivariate standard normal distribution,
Q̂(t) is the empirical characteristic function of the standardized observations,
φ_β(t) is a kernel (weighting) function,
Γ(·) is the Gamma function,
F̂_k is an auto-covariance function,
D_ij is the Mahalanobis distance between the i-th and j-th observations,
W_j is the normality transformation,
f_k(X, θ_k) is the distribution of the k-th class,
θ = (p_1, …, p_q, θ_1, …, θ_q) is the set of parameters,
ψ(t) is the characteristic function of the random variable X,
p_k, k = 1, …, q is the a priori probability,
R is the radius of the ball (bounded sphere),
q_ς is a quantile of the standardized normal distribution,
δ and γ are the shape parameters,
λ is the scale parameter,
ξ is the location parameter.

References

1. Henze, N.; Zirkler, B. A class of invariant consistent tests for multivariate normality. Commun. Stat. Theory Methods 1990, 19, 3595–3617.
2. Henze, N. Invariant tests for multivariate normality: A critical review. Stat. Pap. 2002, 43, 467–506.
3. Royston, J.P. An extension of Shapiro and Wilk’s W test for normality to large samples. Appl. Stat. 1982, 31, 115–124.
4. Ross, G.J.S.; Hawkins, D. MLP: Maximum Likelihood Program; Rothamsted Experimental Station: Harpenden, UK, 1980.
5. Korkmaz, S.; Goksuluk, D.; Zararsiz, G. MVN: An R Package for Assessing Multivariate Normality. R J. 2014, 6, 151–162.
6. Doornik, J.A.; Hansen, H. An Omnibus Test for Univariate and Multivariate Normality. Oxf. Bull. Econ. Stat. 2008, 70, 927–939.
7. Voinov, V.; Pya, N.; Makarov, R.; Voinov, Y. New invariant and consistent chi-squared type goodness-of-fit tests for multivariate normality and a related comparative simulation study. Commun. Stat. Theory Methods 2016, 45, 3249–3263.
8. Moore, D.S.; Stubblebine, J.B. Chi-square tests for multivariate normality with application to common stock prices. Commun. Stat. Theory Methods 1981, A10, 713–738.
9. Gnanadesikan, R.; Kettenring, J.R. Robust estimates, residuals, and outlier detection with multiresponse data. Biometrics 1972, 28, 81–124.
10. Górecki, T.; Horváth, L.; Kokoszka, P. Tests of Normality of Functional Data. Int. Stat. Rev. 2020, 88, 677–697.
11. Pinto, L.P.; Mingoti, S.A. On hypothesis tests for covariance matrices under multivariate normality. Pesqui. Oper. 2015, 35, 123–142.
12. Dörr, P.; Ebner, B.; Henze, N. A new test of multivariate normality by a double estimation in a characterizing PDE. Metrika 2021, 84, 401–427.
13. Zhou, M.; Shao, Y. A Powerful Test for Multivariate Normality. J. Appl. Stat. 2014, 41, 351–363.
14. Kolkiewicz, A.; Rice, G.; Xie, Y. Projection pursuit based tests of normality with functional data. J. Stat. Plan. Inference 2021, 211, 326–339.
15. Ebner, B.; Henze, N. Tests for multivariate normality: A critical review with emphasis on weighted L2-statistics. TEST 2020, 29, 845–892.
16. Arnastauskaitė, J.; Ruzgas, T.; Bražėnas, M. An Exhaustive Power Comparison of Normality Tests. Mathematics 2021, 9, 788.
17. Mardia, K. Measures of Multivariate Skewness and Kurtosis with Applications. Biometrika 1970, 57, 519–530.
18. Székely, G.J.; Rizzo, M.L. Energy statistics: A class of statistics based on distances. J. Stat. Plan. Inference 2013, 143, 1249–1272.
19. Lobato, I.; Velasco, C. A Simple Test of Normality for Time Series. Econom. Theory 2004, 20, 671–689.
20. Ruzgas, T. The Nonparametric Estimation of Multivariate Distribution Density Applying Clustering Procedures. Ph.D. Thesis, Institute of Mathematics and Informatics, Vilnius, Lithuania, 2007; p. 161.
21. Marron, J.S.; Wand, M.P. Exact Mean Integrated Squared Error. Ann. Stat. 1992, 20, 712–736.
22. Kavaliauskas, M.; Rudzkis, R.; Ruzgas, T. The Projection-based Multivariate Distribution Density Estimation. Acta Comment. Univ. Tartu. Math. 2004, 8, 135–141.
23. Epps, T.W.; Pulley, L.B. A test for normality based on the empirical characteristic function. Biometrika 1983, 70, 723–726.
24. Johnson, N.L. Systems of Frequency Curves Generated by Methods of Translation. Biometrika 1949, 36, 149–176.
25. Wicksell, S.D. The construction of the curves of equal frequency in case of type A correlation. Ark. Mat. Astr. Fys. 1917, 12, 1–19.
26. Theus, M. High Dimensional Data Visualization. In Handbook of Data Visualization; Springer: Berlin/Heidelberg, Germany, 2008; pp. 5–7.
27. Milonas, D.; Ruzgas, T.; Venclovas, Z.; Jievaltas, M.; Joniau, S. The significance of prostate specific antigen persistence in prostate cancer risk groups on long-term oncological outcomes. Cancers 2021, 13, 2453.
28. Martuzevicius, D.; Prasauskas, T.; Setyan, A.; O’Connell, G.; Cahours, X.; Julien, R.; Colard, S. Characterization of the Spatial and Temporal Dispersion Differences Between Exhaled E-Cigarette Mist and Cigarette Smoke. Nicotine Tob. Res. 2019, 21, 1371–1377.
Figure 1. Regions of Johnson’s systems.
Figure 2. The summary of average empirical power of all examined distribution groups by sample size and dimensionality.
Table 1. Statistic T* Johnson SU distribution parameter estimates.

Parameter | Symbol | Estimate
p = 2
Location | ξ̂ | 4.342807
Scale | λ̂ | 0.585038
Shape | δ̂ | 1.498293
Shape | γ̂ | 0.764906
p = 5
Location | ξ̂ | 7.025845
Scale | λ̂ | 0.088023
Shape | δ̂ | 0.895003
Shape | γ̂ | 0.400035
p = 10
Location | ξ̂ | 5.195174
Scale | λ̂ | 1.578613
Shape | δ̂ | 2.24856
Shape | γ̂ | −1.83037
Table 2. An average empirical power for a group of symmetric distributions.

n | AD | CHI2 | CVM | DH | DN | Energy | HZ | LV | New | Mar1 | Mar2 | NRR | Roy
p = 2
32 | 0.651 | 0.57 | 0.652 | 0.677 | 0.565 | 0.65 | 0.644 | 0.696 | 0.999 | 0.532 | 0.605 | 0.608 | 0.703
64 | 0.778 | 0.692 | 0.779 | 0.809 | 0.671 | 0.77 | 0.765 | 0.815 | 0.999 | 0.617 | 0.751 | 0.736 | 0.819
128 | 0.867 | 0.798 | 0.868 | 0.892 | 0.768 | 0.86 | 0.853 | 0.893 | 0.999 | 0.681 | 0.857 | 0.842 | 0.891
256 | 0.92 | 0.873 | 0.92 | 0.932 | 0.847 | 0.914 | 0.906 | 0.932 | 0.999 | 0.721 | 0.917 | 0.91 | 0.929
512 | 0.939 | 0.912 | 0.94 | 0.945 | 0.903 | 0.941 | 0.936 | 0.945 | 0.999 | 0.743 | 0.942 | 0.941 | 0.944
1024 | 0.945 | 0.932 | 0.945 | 0.949 | 0.937 | 0.948 | 0.947 | 0.949 | 0.999 | 0.758 | 0.95 | 0.949 | 0.95
p = 5
32 | 0.644 | 0.531 | 0.624 | 0.735 | 0.585 | 0.632 | 0.622 | 0.763 | 0.985 | 0.523 | 0.637 | 0.602 | 0.784
64 | 0.791 | 0.656 | 0.775 | 0.864 | 0.7 | 0.758 | 0.755 | 0.871 | 0.989 | 0.621 | 0.792 | 0.739 | 0.875
128 | 0.883 | 0.773 | 0.876 | 0.924 | 0.806 | 0.863 | 0.856 | 0.925 | 0.988 | 0.696 | 0.89 | 0.851 | 0.924
256 | 0.929 | 0.864 | 0.926 | 0.941 | 0.886 | 0.921 | 0.91 | 0.941 | 0.987 | 0.735 | 0.934 | 0.916 | 0.941
512 | 0.942 | 0.916 | 0.942 | 0.946 | 0.932 | 0.945 | 0.94 | 0.946 | 0.981 | 0.752 | 0.948 | 0.944 | 0.947
1024 | 0.946 | 0.941 | 0.946 | 0.949 | 0.949 | 0.949 | 0.949 | 0.949 | 0.985 | 0.764 | 0.95 | 0.95 | 0.95
p = 10
32 | 0.557 | 0.473 | 0.534 | 0.754 | 0.599 | 0.598 | 0.604 | 0.791 | 0.997 | 0.458 | 0.65 | 0.599 | 0.834
64 | 0.754 | 0.604 | 0.728 | 0.884 | 0.704 | 0.709 | 0.71 | 0.893 | 0.998 | 0.592 | 0.802 | 0.719 | 0.905
128 | 0.878 | 0.726 | 0.865 | 0.934 | 0.817 | 0.821 | 0.831 | 0.935 | 0.998 | 0.676 | 0.899 | 0.844 | 0.934
256 | 0.928 | 0.824 | 0.922 | 0.941 | 0.896 | 0.906 | 0.901 | 0.941 | 0.998 | 0.733 | 0.94 | 0.913 | 0.943
512 | 0.942 | 0.891 | 0.941 | 0.945 | 0.936 | 0.943 | 0.937 | 0.945 | 0.991 | 0.747 | 0.951 | 0.942 | 0.946
1024 | 0.945 | 0.928 | 0.945 | 0.948 | 0.949 | 0.948 | 0.949 | 0.948 | 0.991 | 0.756 | 0.95 | 0.949 | 0.95
Table 3. An average empirical power for a group of asymmetric distributions.

n | AD | CHI2 | CVM | DH | DN | Energy | HZ | LV | New | Mar1 | Mar2 | NRR | Roy
p = 2
32 | 0.634 | 0.631 | 0.639 | 0.852 | 0.55 | 0.832 | 0.811 | 0.87 | 0.999 | 0.813 | 0.63 | 0.639 | 0.877
64 | 0.744 | 0.767 | 0.744 | 0.956 | 0.657 | 0.93 | 0.906 | 0.961 | 0.999 | 0.941 | 0.776 | 0.759 | 0.962
128 | 0.827 | 0.861 | 0.822 | 0.995 | 0.724 | 0.985 | 0.968 | 0.995 | 0.999 | 0.992 | 0.876 | 0.841 | 0.995
256 | 0.897 | 0.931 | 0.892 | 0.999 | 0.774 | 0.999 | 0.995 | 0.999 | 0.999 | 0.999 | 0.947 | 0.915 | 0.999
512 | 0.954 | 0.977 | 0.949 | 0.999 | 0.816 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.988 | 0.968 | 0.999
1024 | 0.985 | 0.996 | 0.982 | 0.999 | 0.864 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.993 | 0.999
p = 5
32 | 0.614 | 0.6 | 0.608 | 0.915 | 0.551 | 0.854 | 0.798 | 0.932 | 0.982 | 0.803 | 0.623 | 0.61 | 0.945
64 | 0.763 | 0.779 | 0.761 | 0.99 | 0.675 | 0.958 | 0.907 | 0.992 | 0.989 | 0.954 | 0.791 | 0.763 | 0.993
128 | 0.869 | 0.892 | 0.869 | 0.999 | 0.748 | 0.996 | 0.974 | 0.999 | 0.997 | 0.997 | 0.908 | 0.869 | 0.999
256 | 0.946 | 0.965 | 0.947 | 0.999 | 0.812 | 0.999 | 0.998 | 0.999 | 0.997 | 0.999 | 0.978 | 0.95 | 0.999
512 | 0.984 | 0.994 | 0.985 | 0.999 | 0.872 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.997 | 0.989 | 0.999
1024 | 0.995 | 0.999 | 0.995 | 0.999 | 0.926 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999
p = 10
32 | 0.483 | 0.443 | 0.459 | 0.944 | 0.532 | 0.829 | 0.744 | 0.96 | 0.922 | 0.693 | 0.573 | 0.532 | 0.98
64 | 0.707 | 0.712 | 0.7 | 0.998 | 0.679 | 0.956 | 0.861 | 0.998 | 0.947 | 0.931 | 0.746 | 0.722 | 0.999
128 | 0.863 | 0.87 | 0.859 | 0.999 | 0.776 | 0.997 | 0.954 | 0.999 | 0.98 | 0.997 | 0.898 | 0.86 | 0.999
256 | 0.955 | 0.96 | 0.953 | 0.999 | 0.858 | 0.999 | 0.994 | 0.999 | 0.995 | 0.999 | 0.978 | 0.952 | 0.999
512 | 0.99 | 0.994 | 0.989 | 0.999 | 0.93 | 0.999 | 0.999 | 0.999 | 0.996 | 0.999 | 0.998 | 0.992 | 0.999
1024 | 0.996 | 0.999 | 0.996 | 0.999 | 0.975 | 0.999 | 0.999 | 0.999 | 0.996 | 0.999 | 0.999 | 0.999 | 0.999
Table 4. An average empirical power for a group of mixed distributions.

n | AD | CHI2 | CVM | DH | DN | Energy | HZ | LV | New | Mar1 | Mar2 | NRR | Roy
p = 2
32 | 0.469 | 0.408 | 0.463 | 0.436 | 0.439 | 0.582 | 0.572 | 0.453 | 0.999 | 0.476 | 0.412 | 0.444 | 0.451
64 | 0.572 | 0.476 | 0.567 | 0.51 | 0.511 | 0.703 | 0.697 | 0.547 | 0.999 | 0.577 | 0.527 | 0.533 | 0.513
128 | 0.683 | 0.571 | 0.679 | 0.591 | 0.59 | 0.809 | 0.807 | 0.667 | 0.999 | 0.659 | 0.651 | 0.641 | 0.572
256 | 0.78 | 0.667 | 0.778 | 0.66 | 0.674 | 0.872 | 0.871 | 0.749 | 0.999 | 0.717 | 0.762 | 0.741 | 0.643
512 | 0.848 | 0.763 | 0.847 | 0.746 | 0.763 | 0.895 | 0.894 | 0.808 | 0.999 | 0.76 | 0.843 | 0.827 | 0.72
1024 | 0.883 | 0.842 | 0.883 | 0.826 | 0.835 | 0.902 | 0.901 | 0.857 | 0.999 | 0.78 | 0.884 | 0.878 | 0.764
p = 5
32 | 0.626 | 0.466 | 0.585 | 0.545 | 0.538 | 0.703 | 0.706 | 0.553 | 0.982 | 0.584 | 0.584 | 0.551 | 0.47
64 | 0.749 | 0.582 | 0.726 | 0.631 | 0.684 | 0.788 | 0.815 | 0.628 | 0.989 | 0.675 | 0.723 | 0.694 | 0.524
128 | 0.805 | 0.67 | 0.791 | 0.695 | 0.771 | 0.845 | 0.864 | 0.692 | 0.995 | 0.722 | 0.791 | 0.769 | 0.589
256 | 0.852 | 0.729 | 0.841 | 0.747 | 0.825 | 0.88 | 0.885 | 0.751 | 0.998 | 0.75 | 0.838 | 0.822 | 0.669
512 | 0.88 | 0.763 | 0.875 | 0.777 | 0.865 | 0.894 | 0.895 | 0.81 | 0.999 | 0.766 | 0.864 | 0.863 | 0.73
1024 | 0.894 | 0.789 | 0.893 | 0.795 | 0.891 | 0.9 | 0.899 | 0.883 | 0.999 | 0.778 | 0.889 | 0.889 | 0.764
p = 10
32 | 0.688 | 0.477 | 0.642 | 0.58 | 0.669 | 0.719 | 0.745 | 0.592 | 0.916 | 0.614 | 0.731 | 0.679 | 0.475
64 | 0.753 | 0.579 | 0.744 | 0.69 | 0.753 | 0.744 | 0.78 | 0.69 | 0.942 | 0.68 | 0.796 | 0.753 | 0.529
128 | 0.775 | 0.651 | 0.771 | 0.736 | 0.776 | 0.777 | 0.821 | 0.735 | 0.94 | 0.722 | 0.795 | 0.774 | 0.602
256 | 0.802 | 0.709 | 0.795 | 0.761 | 0.793 | 0.823 | 0.87 | 0.76 | 0.968 | 0.745 | 0.811 | 0.79 | 0.689
512 | 0.833 | 0.746 | 0.821 | 0.778 | 0.818 | 0.875 | 0.892 | 0.776 | 0.995 | 0.764 | 0.84 | 0.814 | 0.745
1024 | 0.866 | 0.763 | 0.853 | 0.791 | 0.842 | 0.897 | 0.899 | 0.79 | 0.997 | 0.779 | 0.861 | 0.837 | 0.769
Table 5. An average empirical power for a group of normal mixture distributions.

n | AD | CHI2 | CVM | DH | DN | Energy | HZ | LV | New | Mar1 | Mar2 | NRR | Roy
p = 2
32 | 0.465 | 0.422 | 0.468 | 0.56 | 0.428 | 0.537 | 0.529 | 0.588 | 0.999 | 0.433 | 0.437 | 0.442 | 0.607
64 | 0.576 | 0.508 | 0.581 | 0.74 | 0.503 | 0.682 | 0.672 | 0.752 | 0.999 | 0.544 | 0.563 | 0.544 | 0.778
128 | 0.71 | 0.618 | 0.715 | 0.893 | 0.582 | 0.836 | 0.823 | 0.895 | 0.999 | 0.633 | 0.707 | 0.664 | 0.908
256 | 0.844 | 0.738 | 0.848 | 0.974 | 0.685 | 0.938 | 0.926 | 0.974 | 0.999 | 0.701 | 0.84 | 0.805 | 0.978
512 | 0.943 | 0.845 | 0.945 | 0.998 | 0.791 | 0.986 | 0.977 | 0.998 | 0.999 | 0.733 | 0.931 | 0.924 | 0.998
1024 | 0.987 | 0.917 | 0.988 | 0.999 | 0.882 | 0.999 | 0.998 | 0.999 | 0.999 | 0.757 | 0.977 | 0.985 | 0.999
p = 5
32 | 0.45 | 0.399 | 0.441 | 0.594 | 0.443 | 0.503 | 0.485 | 0.632 | 0.98 | 0.384 | 0.46 | 0.442 | 0.672
64 | 0.574 | 0.491 | 0.563 | 0.782 | 0.51 | 0.64 | 0.621 | 0.795 | 0.994 | 0.516 | 0.59 | 0.539 | 0.828
128 | 0.699 | 0.598 | 0.689 | 0.916 | 0.594 | 0.781 | 0.761 | 0.92 | 0.997 | 0.619 | 0.728 | 0.655 | 0.934
256 | 0.806 | 0.702 | 0.798 | 0.979 | 0.691 | 0.894 | 0.877 | 0.979 | 0.999 | 0.694 | 0.832 | 0.766 | 0.984
512 | 0.889 | 0.787 | 0.883 | 0.998 | 0.782 | 0.963 | 0.95 | 0.998 | 0.999 | 0.736 | 0.905 | 0.857 | 0.998
1024 | 0.946 | 0.857 | 0.942 | 0.999 | 0.859 | 0.992 | 0.985 | 0.999 | 0.999 | 0.758 | 0.954 | 0.925 | 0.999
p = 10
32 | 0.402 | 0.392 | 0.396 | 0.62 | 0.495 | 0.476 | 0.487 | 0.667 | 0.989 | 0.287 | 0.547 | 0.487 | 0.735
64 | 0.556 | 0.47 | 0.537 | 0.8 | 0.536 | 0.599 | 0.581 | 0.815 | 0.984 | 0.472 | 0.634 | 0.55 | 0.855
128 | 0.709 | 0.587 | 0.692 | 0.915 | 0.629 | 0.723 | 0.708 | 0.919 | 0.977 | 0.582 | 0.758 | 0.674 | 0.939
256 | 0.801 | 0.688 | 0.79 | 0.973 | 0.723 | 0.834 | 0.815 | 0.974 | 0.971 | 0.669 | 0.834 | 0.771 | 0.98
512 | 0.853 | 0.76 | 0.846 | 0.995 | 0.8 | 0.912 | 0.887 | 0.995 | 0.971 | 0.711 | 0.885 | 0.833 | 0.997
1024 | 0.893 | 0.818 | 0.886 | 0.999 | 0.85 | 0.96 | 0.939 | 0.999 | 0.973 | 0.739 | 0.924 | 0.875 | 0.999
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Arnastauskaitė, J.; Ruzgas, T.; Bražėnas, M. A New Goodness of Fit Test for Multivariate Normality and Comparative Simulation Study. Mathematics 2021, 9, 3003. https://0-doi-org.brum.beds.ac.uk/10.3390/math9233003