Article

Revisiting the Autocorrelation of Long Memory Time Series Models

Shelton Peiris *,† and Richard Hunt †
School of Mathematics and Statistics, University of Sydney, Camperdown, NSW 2006, Australia
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Submission received: 23 December 2022 / Revised: 31 January 2023 / Accepted: 1 February 2023 / Published: 6 February 2023
(This article belongs to the Special Issue Time Series Analysis and Econometrics with Applications)

Abstract

In this article, we first revisit some earlier work on fractionally differenced white noise and correct some issues with previously published formulae. We then look at vector processes and derive a formula for the autocorrelation function, which is extended in this work to a larger range of parameter values than considered elsewhere, and compare this with previously published work.

1. Introduction

Long memory is a special characteristic that we observe when analyzing time series data. A time series process is considered to have long memory if its serial dependence, or autocorrelation function (ACF), decays more slowly than an exponential decay (a time series with an exponentially decaying ACF is said to have short memory). This indicates that in long memory time series the ACF decays hyperbolically, and a significant dependence exists between two points even when they are far apart. This hyperbolic behavior of the ACF forces an unbounded spectrum at the origin and, as a result, the standard theory for short memory time series models, such as auto-regressive moving average (ARMA) models, is not applicable. One of the earliest researchers to identify the need for long memory models was Hurst [1,2].
In order to model such long memory time series, Granger and Joyeux [3] and Hosking [4] proposed the family of auto-regressive fractionally integrated moving average (ARFIMA) models, and these have proved to be very useful in many time series applications, especially in the areas of geophysics (Haslett and Raftery [5], Lustig et al. [6]), economics (Gil-Alana et al. [7]), and finance (Barkoulas et al. [8], Reschenhofer et al. [9]). Using a similar approach, Peiris [10] defined a family of generalized auto-regressive (GAR) models to investigate some hidden characteristics of time series. The ARFIMA model of a process $X_t$ is defined by
$$\phi(B)\,(1-B)^d\, X_t = \theta(B)\,\epsilon_t, \tag{1}$$
where $\phi(z) = 1 - \sum_{j=1}^{p} \phi_j z^j$, $\theta(z) = 1 - \sum_{j=1}^{q} \theta_j z^j$, $\epsilon_t$ represents a zero-mean uncorrelated process with variance $\sigma^2$, $d$ is a real number which, for the process to be stationary, should satisfy $d < \frac{1}{2}$, $p$ and $q$ are non-negative integers, and $B$ is the backshift operator, defined by $B X_t = X_{t-1}$.
The interested reader may compare (1) to a standard Box–Jenkins ARIMA model (Box and Jenkins [11]), where $d$ is a non-negative integer. When $d$ is allowed to be fractional, (1) may be rearranged to show a factor $(1-B)^{-d}$, which can be written as a Taylor series expansion $\sum_{j=0}^{\infty} \psi_j B^j$ with
$$\psi_j = \frac{\Gamma(j+d)}{\Gamma(j+1)\,\Gamma(d)}.$$
When $p = q = 0$, (1) is often referred to as fractionally differenced white noise, or FDWN.
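For readers who wish to compute the coefficients, note that $\psi_0 = 1$ and $\psi_j = \psi_{j-1}\,(j-1+d)/j$, which follows from $\Gamma(j+d) = (j-1+d)\,\Gamma(j-1+d)$. The following minimal Python sketch (ours, not from the original paper) computes $\psi_j$ both ways; the direct gamma-function form assumes $d > 0$, while the recursion works for any real $d$:

```python
import math

def psi_gamma(j, d):
    """psi_j = Gamma(j + d) / (Gamma(j + 1) * Gamma(d)), via log-gamma; assumes d > 0."""
    return math.exp(math.lgamma(j + d) - math.lgamma(j + 1) - math.lgamma(d))

def psi_recursive(n, d):
    """First n + 1 coefficients of (1 - B)^(-d), using psi_j = psi_{j-1} * (j - 1 + d) / j."""
    psi = [1.0]
    for j in range(1, n + 1):
        psi.append(psi[-1] * (j - 1 + d) / j)
    return psi

d = 0.3
print([round(psi_gamma(j, d), 6) for j in range(5)])  # direct gamma-function form
print([round(p, 6) for p in psi_recursive(4, d)])     # recursion gives the same values
```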
Section 2 is devoted to highlighting the important properties of the GAR(1) model and its relationship to ARFIMA. Recent advancements related to the GAR(1) model can be found in Hunt et al. [12], and further extensive results on long memory time series are available in Hassler [13].
In this paper, we explore some issues with the formulae supplied in Granger and Joyeux [3] for the spectral density function of ARFIMA processes, and then move on to extend the current results for the ACF of multivariate ARFIMA$(0,d,0)$ processes.
Section 2 briefly examines the GAR model of Peiris [10], which provides a general formula for the ACF of these processes.
Section 3 examines and discusses some issues with Granger and Joyeux [3], and Section 4 looks at a vector ARFIMA$(0,d,0)$ process, extending existing results to a wider range of the fractional differencing exponent. Section 5 concludes the paper.

2. Generalized Auto-Regressive Model of Order 1 (GAR(1))

In his paper, Peiris [10] considered a time series $X_t$ generated by a GAR(1) model given by
$$(1 - \alpha B)^{\delta} X_t = Z_t, \qquad |\alpha| < 1 \text{ and } \delta > 0, \tag{2}$$
where $B$ is the backshift operator and $\{Z_t\} \sim WN(0, \sigma^2)$ is a white noise process.
The restriction $\delta > 0$ in (2) can be removed since $|\alpha| < 1$. The stationary solution to (2) is
$$X_t = \sum_{j=0}^{\infty} \psi_j\, \alpha^j\, Z_{t-j}, \tag{3}$$
and the corresponding spectrum $f_X(\omega)$ is
$$f_X(\omega) = \frac{\sigma^2}{2\pi}\left[1 - 2\alpha\cos\omega + \alpha^2\right]^{-\delta}, \qquad -\pi < \omega \le \pi, \tag{4}$$
where $\psi_j = \frac{\Gamma(j+\delta)}{\Gamma(j+1)\,\Gamma(\delta)}$.
It has been shown by Peiris [10] (p. 163, Theorem 3.2) that the autocovariance at lag $k$, $\gamma_k$, is given by
$$\gamma_k = \sigma^2\, \frac{\Gamma(k+\delta)}{\Gamma(\delta)\,\Gamma(k+1)}\; F(\delta,\, k+\delta;\, k+1;\, \alpha^2), \qquad k \ge 0. \tag{5}$$
Setting $\alpha = 1$ (anticipating the fractionally differenced white noise case of Section 3) and using Gauss's summation formula $F(\alpha, \beta; \gamma; 1) = \frac{\Gamma(\gamma)\,\Gamma(\gamma-\alpha-\beta)}{\Gamma(\gamma-\alpha)\,\Gamma(\gamma-\beta)}$ for $\gamma > \alpha + \beta$, (5) reduces to
$$\gamma_k = \sigma^2\, \frac{\Gamma(k+\delta)\,\Gamma(1-2\delta)}{\Gamma(k+1-\delta)\,\Gamma(\delta)\,\Gamma(1-\delta)}. \tag{6}$$
Furthermore, we can use Euler's reflection formula, $\Gamma(z)\,\Gamma(1-z) = \frac{\pi}{\sin \pi z}$, to give
$$\gamma_k = \frac{\sigma^2}{\pi}\, \sin(\pi\delta)\; \frac{\Gamma(k+\delta)}{\Gamma(k+1-\delta)}\; \Gamma(1-2\delta), \tag{7}$$
where
$$F(\theta_1, \theta_2; \theta_3; \theta) = \frac{\Gamma(\theta_3)}{\Gamma(\theta_1)\,\Gamma(\theta_2)} \sum_{j=0}^{\infty} \frac{\Gamma(\theta_1+j)\,\Gamma(\theta_2+j)}{\Gamma(\theta_3+j)\,\Gamma(j+1)}\; \theta^j$$
is the hypergeometric function.
These general results in (3) and (4) can be used in ARFIMA modeling. The interested reader is advised to refer to Bondon and Palma [14] or Hassler [13] for further details.
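As a quick numerical sanity check on these results (a sketch of ours using the third-party mpmath library, not code from the paper), the variance $\gamma_0 = \int_{-\pi}^{\pi} f_X(\omega)\, d\omega$ implied by the spectrum (4) can be compared with (5) at $k = 0$, where the formula simplifies to $\sigma^2 F(\delta, \delta; 1; \alpha^2)$:

```python
import mpmath as mp

sigma2, alpha, delta = 1.0, 0.6, 0.3

# GAR(1) spectrum (4)
f = lambda w: sigma2 / (2 * mp.pi) * (1 - 2 * alpha * mp.cos(w) + alpha**2) ** (-delta)

# Variance as the integral of the spectrum over (-pi, pi]
gamma0_integral = mp.quad(f, [-mp.pi, mp.pi])

# Variance from the hypergeometric formula (5) at k = 0
gamma0_hypergeom = sigma2 * mp.hyp2f1(delta, delta, 1, alpha**2)

print(gamma0_integral, gamma0_hypergeom)  # the two values agree
```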
Next, consider the model ARFIMA$(0, \delta, 0)$, also known as fractionally differenced white noise.

3. Fractionally Differenced White Noise—A Discussion

Formally, we define an FDWN process as (1) with $p = q = 0$, although it can also be defined as (2) with $\alpha = 1$. In this section, to emphasize the fractional nature of the exponent, we will use the notation $\delta$ rather than $d$.
In this case, Peiris [10] provides a formula for the auto-covariance as
$$\gamma_k = \sigma^2\, \frac{\Gamma(k+\delta)}{\Gamma(\delta)\,\Gamma(k+1)}\, F(\delta,\, k+\delta;\, k+1;\, 1) = \sigma^2\, \frac{\Gamma(k+\delta)\,\Gamma(1-2\delta)}{\Gamma(k+1-\delta)\,\Gamma(\delta)\,\Gamma(1-\delta)}.$$
We can use Euler's reflection formula to give
$$\gamma_k = \frac{\sigma^2}{\pi}\, \sin(\pi\delta)\; \frac{\Gamma(k+\delta)}{\Gamma(k+1-\delta)}\; \Gamma(1-2\delta). \tag{8}$$
Other authors have also provided results.
In Hosking [4], Theorem 1 considers FDWN with $\sigma^2 = 1$ and $\delta \in \left(-\frac{1}{2}, \frac{1}{2}\right)$ and provides the formula
$$\gamma_k = \frac{(-1)^k\, \Gamma(1-2\delta)}{\Gamma(k-\delta+1)\,\Gamma(1-k-\delta)}. \tag{9}$$
Equation (9) can be shown to be identical to (7) using
$$(-1)^k\, \Gamma(1-k-\delta) = \frac{\Gamma(1-\delta)}{(k+\delta-1)\cdots(1+\delta)\,\delta} = \frac{\Gamma(1-\delta)\,\Gamma(\delta)}{\Gamma(k+\delta)}. \tag{10}$$
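The equivalence is easy to confirm numerically; a small sketch of ours, taking $\sigma^2 = 1$ and $\delta = 0.3$:

```python
import math

def gamma_hosking(k, delta):
    """Hosking's form (9), with sigma^2 = 1."""
    return ((-1) ** k * math.gamma(1 - 2 * delta)
            / (math.gamma(k - delta + 1) * math.gamma(1 - k - delta)))

def gamma_reflection(k, delta):
    """The reduced form (7), with sigma^2 = 1."""
    return (math.sin(math.pi * delta) / math.pi
            * math.gamma(k + delta) / math.gamma(k + 1 - delta)
            * math.gamma(1 - 2 * delta))

for k in range(6):
    print(k, gamma_hosking(k, 0.3), gamma_reflection(k, 0.3))  # the two columns agree
```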
Palma [15] also provides a similar formula for a general $\sigma^2 > 0$ (Equation 3.21); the implication from his Section 3.2.1 (though it is not explicitly stated for the ACF) is that this holds for $\delta \in \left(-1, \frac{1}{2}\right)$. As above, a similar result was also reported by Bondon and Palma [14]. In Hassler [13], Proposition 6.4 formally provides this result for $\delta \in \left(-1, \frac{1}{2}\right)$.
However, the result due to Granger and Joyeux [3] (p. 17) for $\mu_\tau$ (the notation used by Granger and Joyeux [3] for the auto-covariance at lag $\tau$) does not reduce to $\gamma_k$. We now proceed to explore why this is the case.
In Granger and Joyeux [3] Section 2, the spectrum of the process being studied is given as
$$f(\omega) = \alpha\,(1 - \cos\omega)^{-d}. \tag{11}$$
The assumption behind this is that $\alpha$ may absorb a range of non-long-memory parameters. For instance, for a fractional white noise process, one would expect $\alpha = \alpha_1 \equiv \frac{1}{2\pi}$.
However, this is at best misleading.
Suppose $f(\omega) = \frac{1}{2\pi}\,\left|1 - e^{-i\omega}\right|^{-2d}$ (Brockwell and Davis [16], 13.2.18).
This can be rewritten as
$$f(\omega) = \frac{1}{2\pi}\, 2^{-d}\,(1 - \cos\omega)^{-d}. \tag{12}$$
This can clearly be written in the form (11) by setting $\alpha = \alpha_2 \equiv \alpha_2(d) \equiv \frac{1}{\pi}\, 2^{-(1+d)}$; however, $\alpha$ is then no longer independent of the long memory parameter $d$. We believe the intention was that $\alpha$ should be a constant independent of $d$.
We feel it would be best to write the spectral density as (12) rather than (11), and use $\alpha = \alpha_1$. In the more general form used by Granger and Joyeux [3], the spectral density is
$$f(\omega) = \alpha\, 2^{-d}\,(1 - \cos\omega)^{-d}. \tag{13}$$
This changes the formula for the auto-covariance function. To avoid confusion, we denote the auto-covariance function of (13) by $\tilde{\mu}_\tau$, to distinguish it from the version documented in Granger and Joyeux [3], labeled $\mu_\tau$ and written as
$$\mu_\tau = \alpha\, 2^{1+d}\, \sin(\pi d)\; \frac{\Gamma(\tau+d)}{\Gamma(\tau+1-d)}\; \Gamma(1-2d). \tag{14}$$
Lemma 1. 
$\mu_\tau \neq \gamma_\tau$, but $\tilde{\mu}_\tau = \gamma_\tau$.
Proof. 
We proceed by evaluating a formula for $\tilde{\mu}_\tau$ similar to that which Granger and Joyeux [3] obtained for $\mu_\tau$.
We can write
$$\tilde{\mu}_\tau = \int_0^{2\pi} \cos(\tau\omega)\, f(\omega)\, d\omega = \int_0^{2\pi} \cos(\tau\omega)\, \alpha\, 2^{-d}\,(1-\cos\omega)^{-d}\, d\omega = \alpha\, 2^{-2d} \int_0^{2\pi} \cos(\tau\omega)\, \sin(\omega/2)^{-2d}\, d\omega, \tag{15}$$
where we have used the identity $1 - \cos\omega = 2\sin^2(\omega/2)$.
Note that, at this point in Granger and Joyeux [3], there appears to be a typographic error: the limits of integration are mistakenly set between $0$ and $\pi$, rather than between $0$ and $2\pi$.
Using Gradshteyn and Ryzhik [17], Equation 3.631.8, with $\nu = 1 - 2d > 0$ (which holds since $d < \frac{1}{2}$), $a = 2\tau$ and $x = \omega/2$, we have
$$\tilde{\mu}_\tau = \alpha\, 2^{-2d} \int_0^{2\pi} \sin(\omega/2)^{-2d} \cos(2\tau \cdot \omega/2)\, d\omega = \alpha\, 2^{-2d} \cdot 2 \int_0^{\pi} \sin(x)^{-2d} \cos(2\tau x)\, dx = \alpha\, 2^{1-2d}\; \frac{\pi \cos(\tau\pi)}{2^{-2d}\,(1-2d)\, B(1-d+\tau,\, 1-d-\tau)} = \frac{2\,\alpha\,\pi \cos(\tau\pi)}{(1-2d)\, B(1-d+\tau,\, 1-d-\tau)}. \tag{16}$$
The beta function can be represented as $B(x,y) = \frac{\Gamma(x)\,\Gamma(y)}{\Gamma(x+y)}$, so that
$$\tilde{\mu}_\tau = \frac{2\,\alpha\,\pi \cos(\tau\pi)\,\Gamma(2-2d)}{1-2d}\; \frac{1}{\Gamma(1-d+\tau)\,\Gamma(1-d-\tau)}.$$
Now $\Gamma(x+1) = x\,\Gamma(x)$, so $\Gamma(2-2d) = (1-2d)\,\Gamma(1-2d)$, and hence
$$\tilde{\mu}_\tau = 2\,\alpha\,\pi \cos(\tau\pi)\; \frac{1}{\Gamma(\tau+1-d)\,\Gamma(1-d-\tau)}\; \Gamma(1-2d).$$
Euler's reflection formula is $\Gamma(z)\,\Gamma(1-z) = \frac{\pi}{\sin \pi z}$, so $\Gamma(1-d-\tau) = \frac{\pi}{\sin(\pi(\tau+d))}\, \frac{1}{\Gamma(\tau+d)}$, so that
$$\tilde{\mu}_\tau = 2\,\alpha \cos(\tau\pi)\, \sin\big(\pi(\tau+d)\big)\; \frac{\Gamma(\tau+d)}{\Gamma(\tau+1-d)}\; \Gamma(1-2d).$$
Now $2 \sin x \cos y = \sin(x-y) + \sin(x+y)$, so, for integer $\tau$,
$$2 \cos(\tau\pi)\, \sin\big(\pi(\tau+d)\big) = \sin(2\pi\tau + \pi d) + \sin(\pi d) = 2\sin(\pi d),$$
so that
$$\tilde{\mu}_\tau = 2\,\alpha\, \sin(\pi d)\; \frac{\Gamma(\tau+d)}{\Gamma(\tau+1-d)}\; \Gamma(1-2d). \tag{17}$$
To complete the proof, compare (17) with (8) (recalling that $\sigma^2 = 1$ here): the two are equal when $\alpha = \alpha_1$. □
Further, compare (17) with the original version given by Granger and Joyeux [3] in (14).
The difference is a factor of $2^d$. As a practical comparison, consider Table 1 below.

Table 1. Specific values of the process variance for parameter values $\sigma_\epsilon = 1$ and $d = 0.4$.

Formula                              Var(x)
Original, Uncorrected                2.731511
Corrected (& Hosking [4])            2.070098
Brockwell and Davis [16] 13.2.8      2.070098
Peiris [10] Thm 3.1                  2.070098
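The entries in Table 1 are straightforward to reproduce. The sketch below (ours, using the third-party mpmath library) computes the variance three ways: by numerically integrating the spectral density (12), from the corrected form ((17) at $\tau = 0$ with $\alpha = \alpha_1$, which simplifies to $\Gamma(1-2d)/\Gamma^2(1-d)$), and from the uncorrected (14), which is larger by the factor $2^d$:

```python
import mpmath as mp

d = mp.mpf("0.4")  # sigma_epsilon = 1 throughout

# Numerical integration of the spectral density (12) over (0, 2*pi);
# the integrable endpoint singularities are handled by mpmath's quadrature
f = lambda w: 1 / (2 * mp.pi) * 2 ** (-d) * (1 - mp.cos(w)) ** (-d)
var_numeric = mp.quad(f, [0, 2 * mp.pi])

# Corrected closed form: gamma_0 = Gamma(1 - 2d) / Gamma(1 - d)^2
var_corrected = mp.gamma(1 - 2 * d) / mp.gamma(1 - d) ** 2

# The original Granger-Joyeux value (14) at tau = 0 is larger by a factor of 2^d
var_uncorrected = var_corrected * 2 ** d

print(var_numeric, var_corrected)  # both 2.070098...
print(var_uncorrected)             # 2.731511...
```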

4. Vector FDWN and Related Results

This section considers an extension of the above results to the vector case. We note that some results have already been published for a particular case in Kechagias and Pipiras [18] (Proposition 5.1), but we consider a special case and show an alternative derivation.
Suppose that $X_t = (X_{1t}, X_{2t}, \ldots, X_{mt})'$ is an $m$-dimensional vector of time series at time $t$. Assume that the time series $X_t$ follows the long memory model
$$D(B)\, X_t = \eta_t, \tag{18}$$
where
  • $D(B) = \operatorname{diag}\big((1-B)^{d_1}, \ldots, (1-B)^{d_m}\big)$ with backshift operator $B$ and $-1 < d_i < \frac{1}{2}$ ($i = 1, 2, \ldots, m$);
  • $\eta_t = (\eta_{1t}, \eta_{2t}, \ldots, \eta_{mt})'$ is an $m$-dimensional zero-mean covariance stationary vector with variance-covariance matrix $\Omega = (\omega_{i_1 i_2})$; that is, $\omega_{i_1 i_2} = E(\eta_{i_1 t}\, \eta_{i_2 t})$ for all $i_1, i_2 = 1, 2, \ldots, m$.
Let $(1-B)^{-d_i} = \sum_{j=0}^{\infty} \psi_{ji} B^j$, where $\psi_{ji} = \frac{\Gamma(j+d_i)}{\Gamma(j+1)\,\Gamma(d_i)}$, $j = 0, 1, \ldots$, for each $i = 1, 2, \ldots, m$.
Theorem 1.

(a) $X_t = [D(B)]^{-1} \eta_t$ and $X_{it} = \sum_{j=0}^{\infty} \psi_{ji}\, \eta_{i,t-j}$, $i = 1, 2, \ldots, m$, which converges for $-1 < d_i < \frac{1}{2}$ using arguments from Bondon and Palma [14] and Hassler [13] (Definition 3.1 and Proposition 6.2).
(b) Let $V = E(X_t X_t')$. Then we have:
$$V = \begin{pmatrix} \omega_{11} \sum_{j=0}^{\infty} \psi_{j1}^2 & \omega_{12} \sum_{j=0}^{\infty} \psi_{j1}\psi_{j2} & \cdots & \omega_{1m} \sum_{j=0}^{\infty} \psi_{j1}\psi_{jm} \\ \omega_{21} \sum_{j=0}^{\infty} \psi_{j2}\psi_{j1} & \omega_{22} \sum_{j=0}^{\infty} \psi_{j2}^2 & \cdots & \omega_{2m} \sum_{j=0}^{\infty} \psi_{j2}\psi_{jm} \\ \vdots & \vdots & \ddots & \vdots \\ \omega_{m1} \sum_{j=0}^{\infty} \psi_{jm}\psi_{j1} & \omega_{m2} \sum_{j=0}^{\infty} \psi_{jm}\psi_{j2} & \cdots & \omega_{mm} \sum_{j=0}^{\infty} \psi_{jm}^2 \end{pmatrix}_{m \times m},$$
where $\sum_{j=0}^{\infty} \psi_{ji}^2 = \frac{\Gamma(1-2d_i)}{\Gamma^2(1-d_i)}$ and $\sum_{j=0}^{\infty} \psi_{j i_1}\, \psi_{j i_2} = \frac{\Gamma(1-d_{i_1}-d_{i_2})}{\Gamma(1-d_{i_1})\,\Gamma(1-d_{i_2})}$ for all $i_1, i_2 = 1, 2, \ldots, m$.
(c) Let $\gamma(k) = E(X_t X_{t+k}')$ be the $m \times m$ auto-covariance matrix at lag $k$ of $X_t$. Then we have:
$$\gamma(k) = \begin{pmatrix} \gamma_{11}(k) & \gamma_{12}(k) & \cdots & \gamma_{1m}(k) \\ \gamma_{21}(k) & \gamma_{22}(k) & \cdots & \gamma_{2m}(k) \\ \vdots & \vdots & \ddots & \vdots \\ \gamma_{m1}(k) & \gamma_{m2}(k) & \cdots & \gamma_{mm}(k) \end{pmatrix}_{m \times m},$$
where $\gamma_{i_1 i_2}(k) = \omega_{i_1 i_2} \sum_{j=0}^{\infty} \psi_{j i_1}\, \psi_{j+k,\, i_2}$ and $\sum_{j=0}^{\infty} \psi_{j i_1}\, \psi_{j+k,\, i_2} = \frac{\Gamma(k+d_{i_2})\,\Gamma(1-d_{i_1}-d_{i_2})}{\Gamma(d_{i_2})\,\Gamma(k+1-d_{i_1})\,\Gamma(1-d_{i_2})}$ for all $i_1, i_2 = 1, 2, \ldots, m$.
Proof. 
Let $\gamma_{i_1 i_2}(k) = E(X_{i_1 t}\, X_{i_2, t+k})$. Now
$$\gamma_{i_1 i_2}(k) = E\!\left[\left(\sum_{j=0}^{\infty} \psi_{j i_1}\, \eta_{i_1, t-j}\right)\left(\sum_{j=0}^{\infty} \psi_{j i_2}\, \eta_{i_2, t+k-j}\right)\right] = \omega_{i_1 i_2} \sum_{j=0}^{\infty} \psi_{j i_1}\, \psi_{j+k,\, i_2} = \omega_{i_1 i_2} \sum_{j=0}^{\infty} \frac{\Gamma(j+d_{i_1})}{\Gamma(j+1)\,\Gamma(d_{i_1})}\; \frac{\Gamma(j+k+d_{i_2})}{\Gamma(j+k+1)\,\Gamma(d_{i_2})} = \omega_{i_1 i_2}\, \frac{\Gamma(k+d_{i_2})}{\Gamma(k+1)\,\Gamma(d_{i_2})}\; F(d_{i_1},\, k+d_{i_2};\, k+1;\, 1) = \omega_{i_1 i_2}\, \frac{\Gamma(k+d_{i_2})\,\Gamma(1-d_{i_1}-d_{i_2})}{\Gamma(d_{i_2})\,\Gamma(k+1-d_{i_1})\,\Gamma(1-d_{i_2})}. \tag{19}$$
When $i_1 = i_2 = i$, (19) reduces to $\omega_{ii}\, \frac{\Gamma(k+d_i)\,\Gamma(1-2d_i)}{\Gamma(d_i)\,\Gamma(k+1-d_i)\,\Gamma(1-d_i)}$, and when $k = 0$ these expressions reduce to (b) in the theorem.
Equation (19) can be rewritten using Euler's reflection formula as
$$\gamma_{i_1 i_2}(k) = \omega_{i_1 i_2}\, \frac{\Gamma(1-d_{i_1}-d_{i_2})}{\Gamma(d_{i_2})\,\Gamma(1-d_{i_2})}\; \frac{\Gamma(k+d_{i_2})}{\Gamma(k+1-d_{i_1})} = \omega_{i_1 i_2}\, \frac{1}{\Gamma(d_{i_2})\,\Gamma(1-d_{i_2})}\; \Gamma(1-d_{i_1}-d_{i_2})\; \frac{\Gamma(k+d_{i_2})}{\Gamma(k+1-d_{i_1})} = \omega_{i_1 i_2}\, \frac{\sin(\pi d_{i_2})}{\pi}\; \Gamma(1-d_{i_1}-d_{i_2})\; \frac{\Gamma(k+d_{i_2})}{\Gamma(k+1-d_{i_1})}. \tag{20}$$
We can again apply Euler's reflection formula to give
$$\gamma_{i_1 i_2}(k) = \omega_{i_1 i_2}\, \frac{\sin(\pi d_{i_2})}{\sin\big(\pi(d_{i_1}+d_{i_2})\big)}\; \frac{\Gamma(k+d_{i_2})}{\Gamma(d_{i_1}+d_{i_2})\,\Gamma(k+1-d_{i_1})}. \tag{21}$$
When $i_1 = i_2 = i$, $\sum_{j=0}^{\infty} \psi_{ji}\, \psi_{j+k, i} = \frac{\Gamma(k+d_i)\,\Gamma(1-2d_i)}{\Gamma(d_i)\,\Gamma(k+1-d_i)\,\Gamma(1-d_i)}$ can be further reduced to
$$\frac{\sin(\pi d_i)}{\sin(2\pi d_i)}\; \frac{\Gamma(k+d_i)}{\Gamma(2d_i)\,\Gamma(k+1-d_i)} = \frac{\sin(\pi d_i)}{2\sin(\pi d_i)\cos(\pi d_i)}\; \frac{\Gamma(k+d_i)}{\Gamma(2d_i)\,\Gamma(k+1-d_i)} = \frac{1}{2\cos(\pi d_i)}\; \frac{\Gamma(k+d_i)}{\Gamma(2d_i)\,\Gamma(k+1-d_i)}, \tag{22}$$
and when $k = 0$,
$$\frac{1}{2\cos(\pi d_i)}\; \frac{\Gamma(d_i)}{\Gamma(2d_i)\,\Gamma(1-d_i)} = \frac{1}{2\pi}\; \frac{\sin(\pi d_i)}{\cos(\pi d_i)}\; \frac{\Gamma^2(d_i)}{\Gamma(2d_i)}. \tag{23}$$
□
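As a quick spot check of this last identity (a sketch of ours, not from the paper), both forms of the lag-zero coefficient sum can be evaluated numerically:

```python
import math

d = 0.2

# Theorem 1(b): sum of psi_j^2 = Gamma(1 - 2d) / Gamma(1 - d)^2
direct = math.gamma(1 - 2 * d) / math.gamma(1 - d) ** 2

# The equivalent trigonometric form (23)
trig = (math.sin(math.pi * d) / (2 * math.pi * math.cos(math.pi * d))
        * math.gamma(d) ** 2 / math.gamma(2 * d))

print(direct, trig)  # both approx 1.09869
```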
Remark 1.
  • It is straightforward to show that these formulae are a special case of those provided by Kechagias and Pipiras [18] (Proposition 5.1) when $d_i > 0$ and $\omega_{ii} = 1$. Using their notation, we choose $Q^+ = I$ and $Q^- = 0$ (the matrix of zeros).
    Then Kechagias and Pipiras [18] Equation (69) is (20). With these values for $Q^+$ and $Q^-$, Kechagias and Pipiras [18] Proposition 5.1 defines
    $$\gamma_{i_1 i_2}(k) = \frac{1}{2\pi}\left[ b^{1}_{i_1 i_2}\, \gamma_{1, i_1 i_2}(k) + b^{2}_{i_1 i_2}\, \gamma_{2, i_1 i_2}(k) + b^{3}_{i_1 i_2}\, \gamma_{3, i_1 i_2}(k) + b^{4}_{i_1 i_2}\, \gamma_{4, i_1 i_2}(k) \right], \tag{24}$$
    where
    $$b^{1}_{i_1 i_2} = \sum_{t=1}^{m} q^{-}_{i_1, t}\, q^{-}_{i_2, t} = 0, \qquad b^{2}_{i_1 i_2} = \sum_{t=1}^{m} q^{-}_{i_1, t}\, q^{+}_{i_2, t} = 0, \qquad b^{3}_{i_1 i_2} = \sum_{t=1}^{m} q^{+}_{i_1, t}\, q^{+}_{i_2, t} = 1, \qquad b^{4}_{i_1 i_2} = \sum_{t=1}^{m} q^{+}_{i_1, t}\, q^{-}_{i_2, t} = 0, \tag{25}$$
    and
    $$\gamma_{3, i_1 i_2}(k) = 2\,\Gamma(1 - d_{i_1} - d_{i_2})\, \sin(\pi d_{i_2})\, \frac{\Gamma(k + d_{i_2})}{\Gamma(k + 1 - d_{i_1})}, \tag{26}$$
    and so (24) is the same as (20).
  • When $m = 1$, this readily reverts to the univariate case since, as noted above (writing $d_i = d$),
    $$\gamma(k) = \omega_{ii}\, \frac{\Gamma(k+d)\,\Gamma(1-2d)}{\Gamma(d)\,\Gamma(k+1-d)\,\Gamma(1-d)},$$
    which is the same as (7) (with $\omega_{ii} = \sigma^2$).
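To close this section, Theorem 1(c) can be checked numerically; the sketch below (ours, not from the paper) compares the closed form (19) with a truncation of the defining series $\omega_{i_1 i_2} \sum_j \psi_{j i_1}\, \psi_{j+k, i_2}$ for a bivariate example with $d_1 = 0.1$ and $d_2 = 0.2$:

```python
import math

def psi(j, d):
    """psi_j = Gamma(j + d) / (Gamma(j + 1) * Gamma(d)); assumes d > 0."""
    return math.exp(math.lgamma(j + d) - math.lgamma(j + 1) - math.lgamma(d))

def cross_acv_series(k, d1, d2, omega12, n_terms=100_000):
    """Truncation of gamma_{i1 i2}(k) = omega12 * sum_j psi_{j,i1} * psi_{j+k,i2}."""
    return omega12 * sum(psi(j, d1) * psi(j + k, d2) for j in range(n_terms))

def cross_acv_closed(k, d1, d2, omega12):
    """Closed form (19)."""
    return omega12 * (math.gamma(k + d2) * math.gamma(1 - d1 - d2)
                      / (math.gamma(d2) * math.gamma(k + 1 - d1) * math.gamma(1 - d2)))

d1, d2, omega12 = 0.1, 0.2, 0.5
for k in range(4):
    # the two columns agree up to the truncation error of the slowly decaying series
    print(k, cross_acv_series(k, d1, d2, omega12), cross_acv_closed(k, d1, d2, omega12))
```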

5. Conclusions

Long memory processes exhibit relatively high correlations between observations even when they occur far apart in time. Such processes can be modeled using ARFIMA processes.
Vector processes can also exhibit long memory, and this can happen to different degrees in different component series.
In this paper we have explored some issues with a previous formula for the ACF and spectral density of a univariate model, and we have also extended the applicability of the result for the ACF of a vector ARFIMA$(0,d,0)$ process. Later work may consider extending this to the more general ARFIMA$(p,d,q)$ model.

Author Contributions

Conceptualization, S.P.; methodology, S.P. and R.H.; software, R.H.; validation, S.P. and R.H.; writing—original draft preparation, R.H.; writing—review and editing, S.P. and R.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

This article did not use any research data.

Acknowledgments

This work was initiated while Shelton Peiris was visiting the Tor Vergata University of Rome in September 2022. He acknowledges the hospitality of the Faculty of Economics and Tommaso Proietti. The authors gratefully acknowledge the comments and suggestions from the referees and the editorial board, which have improved the quality of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hurst, H. Long-term storage capacity of reservoirs. Trans. Am. Soc. Civ. Eng. 1951, 116, 770. [Google Scholar] [CrossRef]
  2. Hurst, H. The problem of long-term storage in reservoirs. Int. Assoc. Sci. Hydrol. 1956, 1, 13–27. [Google Scholar] [CrossRef]
  3. Granger, C.; Joyeux, R. An Introduction to Long Memory Time Series models and Fractional Differencing. J. Time Ser. Anal. 1980, 1, 15–29. [Google Scholar] [CrossRef]
  4. Hosking, J. Fractional differencing. Biometrika 1981, 68, 165–176. [Google Scholar] [CrossRef]
  5. Haslett, J.; Raftery, A. Space-time Modelling with Long-memory Dependence: Assessing Ireland’s Wind Power Resource. J. R. Stat. Soc. Ser. C 1989, 38, 1–50. [Google Scholar] [CrossRef]
  6. Lustig, A.; Charlot, P.; Marimoutou, V. The memory of ENSO revisited by a 2-factor Gegenbauer process. Int. J. Climatol. 2017, 37, 2295–2303. [Google Scholar] [CrossRef]
  7. Gil-Alana, L.; Ozdemir, Z.; Tansel, A. Long Memory in Turkish Unemployment Rates. Emerg. Mark. Financ. Trade 2019, 55, 201–217. [Google Scholar] [CrossRef]
  8. Barkoulas, J.; Labys, W.; Onochie, J. Fractional Dynamics in International Commodity Prices. J. Futur. Mark. 1997, 17, 161–189. [Google Scholar] [CrossRef]
  9. Reschenhofer, E.; Mangat, M.; Zwatz, C.; Guzmics, S. Evaluation of current research on stock return predictability. J. Forecast. 2020, 39, 334–351. [Google Scholar] [CrossRef]
  10. Peiris, S. Improving the quality of forecasting using generalized AR models: An application to statistical quality control. Stat. Methods 2003, 5, 156–171. [Google Scholar]
  11. Box, G.; Jenkins, G. Time Series Analysis: Forecasting and Control; Holden-Day: San Francisco, CA, USA, 1976. [Google Scholar]
  12. Hunt, R.; Peiris, S.; Weber, N. Seasonal Generalized AR models. Commun. Stat. Theory Methods 2022. [Google Scholar] [CrossRef]
  13. Hassler, U. Time Series Analysis with Long Memory in View; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2019. [Google Scholar] [CrossRef]
  14. Bondon, P.; Palma, W. A class of antipersistent processes. J. Time Ser. Anal. 2007, 28, 261–273. [Google Scholar] [CrossRef]
  15. Palma, W. Long-Memory Time Series Theory and Methods; John Wiley & Sons: Hoboken, NJ, USA, 2006. [Google Scholar]
  16. Brockwell, P.; Davis, R. Time Series: Theory and Methods, 2nd ed.; Springer Science and Business Media: New York, NY, USA, 1991. [Google Scholar]
  17. Gradshteyn, I.; Ryzhik, I. Table of Integrals, Series, and Products, 8th ed.; Elsevier Inc.: Amsterdam, The Netherlands, 2014. [Google Scholar]
  18. Kechagias, S.; Pipiras, V. Definitions and representations of multivariate long-range dependent time series. J. Time Ser. Anal. 2015, 36, 1–25. [Google Scholar] [CrossRef]
