
Cointegration and Error Correction Mechanisms for Singular Stochastic Vectors

1 Università di Bologna, Department of Economics, 40126 Bologna, Italy
2 Einaudi Institute for Economics and Finance, 00187 Roma, Italy
3 Federal Reserve Board of Governors, Washington, DC 20551, USA
* Author to whom correspondence should be addressed.
Econometrics 2020, 8(1), 3; https://doi.org/10.3390/econometrics8010003
Submission received: 28 March 2018 / Revised: 30 December 2019 / Accepted: 7 January 2020 / Published: 4 February 2020
(This article belongs to the Special Issue Celebrated Econometricians: Katarina Juselius and Søren Johansen)

Abstract

Large-dimensional dynamic factor models and dynamic stochastic general equilibrium models, both widely used in empirical macroeconomics, deal with singular stochastic vectors, i.e., vectors of dimension $r$ which are driven by a $q$-dimensional white noise, with $q < r$. The present paper studies cointegration and error correction representations for an $I(1)$ singular stochastic vector $y_t$. It is easily seen that $y_t$ is necessarily cointegrated with cointegrating rank $c \ge r - q$. Our contributions are: (i) we generalize Johansen's proof of the Granger representation theorem to $I(1)$ singular vectors, under the assumption that $y_t$ has rational spectral density; (ii) using recent results on singular vectors by Anderson and Deistler, we prove that for generic values of the parameters the autoregressive representation of $y_t$ has a finite-degree matrix polynomial. The relationship between the cointegration of the factors and the cointegration of the observable variables in a large-dimensional factor model is also discussed.

1. Introduction

An $r$-dimensional stochastic vector $y_t$ such that $y_t = A_0u_t + A_1u_{t-1} + \cdots$, where the matrices $A_j$ are $r\times q$ and $u_t$ is a $q$-dimensional white noise, with $q < r$, is said to be singular. Singular stochastic vectors have been systematically analyzed in a number of papers starting with (Anderson and Deistler 2008a, 2008b). A motivation for studying the consequences of singularity, as argued by these authors, is that the vector of factors in large-dimensional dynamic factor models (DFM), such as those introduced in Forni et al. (2000); Forni and Lippi (2001); (Stock and Watson 2002a, 2002b), is typically singular. Singularity is also an important feature of dynamic stochastic general equilibrium models (DSGE), see e.g., Sargent (1989), Canova (2007), pp. 230–32. Singularity as it arises in DFMs is presented in some detail below.
DFMs are based on the idea that all the observed variables in an economic system are driven by a few common (macroeconomic) shocks and by idiosyncratic components, which may result from measurement errors and sectoral or regional shocks. Formally, each variable in the $n$-dimensional dataset $x_{it}$, $i = 1, 2, \ldots, n$, $t = 1, 2, \ldots, T$, is decomposed into the sum of a common component $\chi_{it}$ and an idiosyncratic component $\epsilon_{it}$: $x_{it} = \chi_{it} + \epsilon_{it}$, where $\chi_{it}$ and $\epsilon_{js}$ are orthogonal for all $i, j, t, s$. In the standard version of the DFM the common components are linear combinations of an $r$-dimensional vector of common factors $F_t = (F_{1t}\ F_{2t}\ \cdots\ F_{rt})'$:
$\chi_{it} = \lambda_{i1}F_{1t} + \lambda_{i2}F_{2t} + \cdots + \lambda_{ir}F_{rt} = \lambda_i'F_t. \quad (1)$
Now suppose that the observable variables $x_{it}$ and the common factors $F_t$ are $I(1)$ and that
$(1-L)F_t = C(L)u_t, \quad (2)$
where $u_t$ is a nonsingular $q$-dimensional white-noise vector, the common shocks. A number of papers analyzing macroeconomic databases find strong empirical support for the assumption that the vector $F_t$ is singular, i.e., that $q < r$. See, for US datasets, Giannone et al. (2005); Amengual and Watson (2007); Forni and Gambetti (2010); Luciani (2015). For a Euro-area dataset, see Barigozzi et al. (2014).
Such results can be easily understood by observing that usually the static Equation (1) is just a convenient representation derived from a "primitive" set of dynamic equations linking the common components $\chi_{it}$ to the common shocks $u_t$. As a simple example, suppose that the variables $x_{it}$ are driven by a common one-dimensional cyclical process $f_t$, such that $(1-\alpha L)f_t = u_t$, where $u_t$ is a scalar white noise, and that the variables $x_{it}$ load $f_t$ dynamically:
$x_{it} = a_{i0}f_t + a_{i1}f_{t-1} + \epsilon_{it}.$
In this case we can set $F_{1t} = f_t$, $F_{2t} = f_{t-1} = F_{1,t-1}$, $\lambda_{i1} = a_{i0}$, $\lambda_{i2} = a_{i1}$, so that Equations (1) and (2) take the form
$x_{it} = \lambda_{i1}F_{1t} + \lambda_{i2}F_{2t} + \epsilon_{it} \quad\text{and}\quad \begin{pmatrix}F_{1t}\\ F_{2t}\end{pmatrix} = \begin{pmatrix}(1-\alpha L)^{-1}\\ L(1-\alpha L)^{-1}\end{pmatrix}u_t, \quad (3)$
respectively. Here $r = 2$ and $q = 1$, so that $F_t$ is singular. For a general analysis of the relationship between representation (1) and "deeper" dynamic representations like (3), see e.g., Forni et al. (2009); Stock and Watson (2016).
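A minimal simulation sketch of this example (ours; the value of $\alpha$, the sample size and all implementation details are illustrative assumptions, not taken from the paper) confirms that $F_t = (f_t, f_{t-1})'$ is driven by a single shock, i.e., that its innovation covariance has rank $q = 1 < r = 2$:

```python
# Simulation sketch (ours; alpha and T are illustrative): F_t = (f_t, f_{t-1})'
# follows a VAR(1) driven by one white noise, hence F_t is a singular vector.
import numpy as np

rng = np.random.default_rng(5)
T, alpha = 1000, 0.7
u = rng.normal(size=T)
f = np.zeros(T)
for t in range(1, T):
    f[t] = alpha * f[t - 1] + u[t]            # (1 - alpha L) f_t = u_t

F = np.column_stack([f[1:], f[:-1]])          # F_1t = f_t, F_2t = f_{t-1}
A = np.array([[alpha, 0.0], [1.0, 0.0]])      # F_t = A F_{t-1} + (1, 0)' u_t
resid = F[1:] - F[:-1] @ A.T                  # innovations of F_t
print(np.linalg.matrix_rank(np.cov(resid.T), tol=1e-8))   # -> 1, i.e., q = 1 < r = 2
```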
Now suppose that the factors $F_t$ have been estimated. Obtaining $u_t$ and the impulse-response functions of the variables $x_{it}$ with respect to $u_t$ (or to structural shocks obtained by a linear transformation of $u_t$) requires the estimation of a VAR for the singular $I(1)$ vector $F_t$. On the other hand, the latter is necessarily cointegrated, with cointegration rank $c$ at least equal to $r - q$ (the rank of the spectral density of $(1-L)F_t$ does not exceed $q$ at all frequencies and, therefore, at frequency zero).
Singular vectors of factors in an $I(1)$ DFM and $I(1)$ singular vectors in DSGE models provide strong motivation for studying singular $I(1)$ vectors in a general time-series context. The main contributions of the paper are:
(I)
A generalization of Johansen's proof of the Granger Representation Theorem (from MA to AR); this is Proposition 2. Consider an $I(1)$ singular vector $y_t$, with dimension $r$, rank $q < r$, and cointegrating rank $c \ge r - q$. Assuming that $(1-L)y_t$ has an ARMA structure, $S(L)(1-L)y_t = B(L)u_t$, and that some simple additional conditions hold, $y_t$ has a representation as a vector error correction mechanism (VECM) with $c$ error correction terms:
$A(L)y_t = A^*(L)(1-L)y_t + \alpha(\beta'y_{t-1} - w) = B(0)u_t, \quad (4)$
where $\alpha$ and $\beta$ are both $r\times c$ and full rank, $\beta'y_t - w$ is $I(0)$, and $A(L)$ and $A^*(L)$ are $r\times r$ rational matrices in $L$. Under the additional assumption that unity is the only zero of $B(L)$, i.e., that $B(z)$ is full rank for $z \neq 1$, $A(L)$ and $A^*(L)$ are finite-degree matrix polynomials.
(II)
Assuming that the parameters of $S(L)$ and $B(L)$ may vary in an open subset of $\mathbb{R}^\lambda$ (see Section 3.2 for the definition of $\lambda$), in Proposition 3 we show that all the assumptions used to obtain (4), and also the assumption that unity is the only possible zero of $B(L)$, hold for generic values of the parameters. This implies that the matrices $A(L)$ and $A^*(L)$ are generically of finite degree, which is obviously not the case for nonsingular vectors.
The paper is organized as follows. Section 2 is preliminary. We firstly recall recent results on stationary singular stochastic vectors with rational spectral density, see (Anderson and Deistler 2008a, 2008b). Secondly, we discuss cointegration and the cointegrating rank for $I(1)$ singular stochastic vectors.
In Section 3 we prove our main results. We also obtain the permanent-transitory shock representation in the singular case: $y_t$ is driven by $r - c$ permanent shocks, i.e., $r$ minus the cointegrating rank, which is the usual result. However, the number of transitory shocks is $c - (r - q)$, not $c$ as in the nonsingular case.
Section 3 also contains an exercise carried out with simulated singular $I(1)$ vectors. We compare the results obtained by estimating an unrestricted VAR in the levels and a VECM. Though limited to a simple example, the results confirm what has been found for nonsingular vectors: under cointegration, the long-run features of impulse-response functions are better estimated using a VECM rather than an unrestricted VAR in the levels (Phillips 1998).
In Section 4 we analyse cointegration of the observable variables $x_{it}$ in a DFM. Our results on cointegration of the singular vector $F_t$ imply that $p$-dimensional subvectors of the $n$-dimensional common-component vector $\chi_t$, with $p > r - c$, are cointegrated. As a consequence, stationarity of the idiosyncratic components would imply that all $p$-dimensional subvectors of the $n$-dimensional dataset $x_t$ are cointegrated if $p > r - c$. For example, if $q = 3$ and $d = 1$, then all 3-dimensional subvectors in the dataset are cointegrated, a kind of regularity that we do not observe in actual large macroeconomic datasets. This suggests that an estimation strategy robust to the assumption that the idiosyncratic components can be $I(1)$ is to be preferred; for this aspect we refer to Barigozzi et al. (2019). Section 5 concludes. Some proofs, a discussion of some non-uniqueness problems arising with singularity, and details on the simulations are collected in the Appendix.

2. Stationary and $I(1)$ Singular Vectors

2.1. Stationary Singular Vectors

As in this paper we only consider representation issues, it is convenient to assume that all stochastic processes are defined for $t\in\mathbb{Z}$. Accordingly, the lag operator $L$ is defined as $Ly_t = y_{t-1}$ for $t\in\mathbb{Z}$ (Bauer and Wagner (2012) also study $I(1)$ and cointegrated processes for $t\in\mathbb{Z}$).
We start by introducing results on singular vectors with an ARMA structure from (Anderson and Deistler 2008a, 2008b). Some preliminary definitions are needed.
Definition 1.
(Zeros and Poles)
(A) When considering matrices $V(z)$ whose entries are rational functions of $z\in\mathbb{C}$, we always assume that the numerator and denominator of each entry have no common roots. If $V(z)$ is an $r\times q$ matrix of rational functions, we say that $z^*$ is a pole of $V(z)$ if it is a pole of some entry of $V(z)$.
(B) Suppose that $V(z)$ is an $r\times q$ matrix whose entries are polynomial functions of $z\in\mathbb{C}$, with $q \le r$. We say that $z^*\in\mathbb{C}$ is a zero of $V(z)$ if $\operatorname{rank}(V(z^*)) < q$, and that $V(z)$ is zeroless if it has no zeros, i.e., if $\operatorname{rank}(V(z)) = q$ for all $z\in\mathbb{C}$.
With a minor abuse of language, we may speak of zeros and poles of the corresponding matrix $V(L)$. When an $r\times r$ polynomial matrix $S(L)$ has all its zeros outside the unit circle we say that $S(L)$ is stable.
All the stationary vector processes considered have an ARMA structure. Precisely, the $r$-dimensional process $y_t$ has an ARMA structure with rank $q$, $q \le r$, if there exist
(i)
a nonsingular $q$-dimensional white-noise process $u_t$,
(ii)
an $r\times r$ stable polynomial matrix $S(z)$, with $S(0) = I_r$,
(iii)
an $r\times q$ matrix $B(z)$ whose rank is $q$ for all $z$ with the exception of a finite subset of $\mathbb{C}$, such that
$y_t = V(L)u_t, \quad (5)$
where $V(L) = S(L)^{-1}B(L)$.
Suppose that $y_t$ also has the representation $y_t = \tilde S(L)^{-1}\tilde B(L)\tilde u_t$, where $\tilde u_t$ is a $\tilde q$-dimensional nonsingular white noise. Denoting by $\Sigma_y(\theta)$ the spectral density of $y_t$,
$\Sigma_y(\theta) = (2\pi)^{-1}V(e^{-i\theta})\,\Sigma_u\,V(e^{-i\theta})^*,$
so that the rank of $\Sigma_y(\theta)$ is $q$ for all $\theta$, with the exception of a finite subset of $[-\pi,\pi]$. As the spectral density is independent of the ARMA representation, $q = \tilde q$ and $\tilde B(z)$ has rank $q$ except for a finite subset of $\mathbb{C}$.
Remark 1.
Let us recall that the equation
$S(L)\zeta_t = B(L)u_t,$
in the unknown vector process $\zeta_t$, where $S(L)$ is stable, has only one stationary solution, and this is $y_t = S(L)^{-1}B(L)u_t$. Thus the ARMA process $y_t$ can also be defined as the stationary solution of $S(L)\zeta_t = B(L)u_t$.
Definition 2.
(Genericity) Suppose that a statement $Q$ depends on $\mathbf p\in\mathcal A$, where $\mathcal A$ is an open subset of $\mathbb{R}^\lambda$. We say that $Q$ holds generically in $\mathcal A$, or that $Q$ holds for generic values of $\mathbf p\in\mathcal A$, if the subset $N$ of $\mathcal A$ where it does not hold is nowhere dense in $\mathcal A$, i.e., the closure of $N$ in $\mathcal A$ has no interior points.
For example, assuming that $\mathcal A = \mathbb{R}$, the statement "The roots of the polynomial $x^2 + px + 1$ are distinct" holds generically in $\mathcal A$.
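As a check on this example, the exceptional set can be written down explicitly (our computation):

$x^2 + px + 1 \text{ has a repeated root} \iff p^2 - 4 = 0 \iff p \in N = \{-2, 2\},$

and $N$ is a closed subset of $\mathbb{R}$ with empty interior, hence nowhere dense: the statement fails only on $N$.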
Definition 3.
(Rational reduced-rank family of filters) Assume that $r > q$ and let $G$ be a set of ordered couples $(S(L), B(L))$, where:
(i)
$B(L)$ is an $r\times q$ polynomial matrix of degree $s_1 \ge 0$.
(ii)
$S(L)$ is an $r\times r$ polynomial matrix of degree $s_2 \ge 0$, with $S(0) = I_r$.
(iii)
Denoting by $\mathbf p$ the vector containing the $\lambda = rq(s_1+1) + r^2s_2$ coefficients of the entries of $B(L)$ and $S(L)$, we assume that $\mathbf p\in\Pi$, where $\Pi$ is an open subset of $\mathbb{R}^\lambda$ such that for $\mathbf p\in\Pi$: (1) $S(z)$ is stable; (2) $\operatorname{rank}(B(z)) = q$ with the exception of a finite subset of $\mathbb{C}$.
We say that $G$ is a rational reduced-rank family of filters with parameter set $\Pi$.
The notation $S_{\mathbf p}(L)$, $B_{\mathbf p}(L)$, though more rigorous, would be heavy and not really necessary; we use it only in Appendix A.1.
Proposition 1.
Assume that $r > q$.
(I)
Suppose that $V(L)$ is an $r\times q$ matrix polynomial in $L$. If $V(z)$ is zeroless, then $V(L)$ has an $r\times r$ finite-degree stable left inverse, i.e., there exists a finite-degree polynomial $r\times r$ matrix $W(L)$ such that: (a) $W(0) = I_r$; (b) $\det(W(z)) = 0$ implies $|z| > 1$; (c) $W(L)V(L) = V(0)$. Let $y_t$ be the stationary solution of $S(L)\zeta_t = B(L)u_t$ and suppose that $B(L)$ is zeroless. Then $y_t$ has a finite vector autoregressive (VAR) representation $A(L)y_t = B(0)u_t$, where $A(L) = N(L)S(L)$ and $N(L)$ is a finite-degree left inverse of $B(L)$.
(II)
Assume that $y_t$ is the stationary solution of $S(L)\zeta_t = B(L)u_t$, where $(S(L), B(L))$ belongs to a rational reduced-rank family of filters with parameter set $\Pi$. For generic values of the parameters in $\Pi$, $B(L)$ is zeroless, so that $y_t$ has a finite VAR representation.
For statement (I) see Anderson and Deistler (2008a), Theorem 3. Statement (II) is a modified version of their Theorem 2; for a proof see Forni et al. (2009), p. 1327.
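Zerolessness is easy to check in small examples: $B(z)$ is zeroless if and only if its $q\times q$ minors have no common root. A sketch of such a check (ours, with a made-up matrix; not code from the paper):

```python
# Sketch (ours): check zerolessness of a polynomial matrix B(z) by testing
# whether its q x q minors share a common root, i.e., whether their gcd is 1.
import sympy as sp
from itertools import combinations

z = sp.symbols('z')
B = sp.Matrix([[1, 0], [z, 1], [0, z]])    # an r x q = 3 x 2 polynomial matrix
r, q = B.shape

minors = [B.extract(list(rows), list(range(q))).det()
          for rows in combinations(range(r), q)]
print(minors)                               # [1, z, z**2]
print(sp.gcd_list(minors))                  # 1  =>  B(z) is zeroless
```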

2.2. Fundamentalness

Assume that the $r$-dimensional vector $y_t$ has an ARMA structure, rank $q$ and the moving average representation (5). If $\operatorname{rank}(B(z)) = q$ for $|z| < 1$, then $u_t$ belongs to the space spanned by $y_{t-k}$, with $k \ge 0$, and representation (5), as well as $u_t$, is called fundamental (for these definitions and results see e.g., Rozanov (1967), pp. 43–47). Note that if (5) is fundamental, then $\operatorname{rank}(B(0)) = q$. Note also that when $q = r$, the condition that $\operatorname{rank}(B(z)) = q$ for $|z| < 1$ becomes $\det(B(z)) \neq 0$ for $|z| < 1$.
Remark 2.
Note that in Proposition 1, part (II), we do not assume that $u_t$ is fundamental for $y_t$. However, Proposition 1, part (II), states that for generic values of $\mathbf p\in\Pi$ the matrix $B(L)$ is zeroless, and therefore $u_t$ is fundamental for $y_t$.

2.3. $I(1)$ Singular Vectors

To analyze cointegration and the autoregressive representations of singular non-stationary vectors, let us first recall the definitions of $I(0)$, $I(1)$ and cointegrated vectors. This requires some preliminary definitions and results.
We denote by $L^2(\Omega,\mathcal F,P)$ the space of the square-integrable functions on the probability space $(\Omega,\mathcal F,P)$. Let $z_t = (z_{1t}\ z_{2t}\ \cdots\ z_{rt})'$, with $z_{ht}\in L^2(\Omega,\mathcal F,P)$, be an $r$-dimensional stochastic process and consider the difference equation
$(1-L)\zeta_t = z_t, \quad (6)$
in the unknown $r$-dimensional process $\zeta_t$. A solution of (6) is
$\tilde\psi_t = \begin{cases} z_1 + z_2 + \cdots + z_t, & \text{for } t > 0,\\ 0, & \text{for } t = 0,\\ -(z_0 + z_{-1} + \cdots + z_{t+1}), & \text{for } t < 0,\end{cases}$
see e.g., Gregoir (1999), p. 439, Franchi and Paruolo (2019). All the solutions of (6) are $\psi_t = \tilde\psi_t + \phi_t$, where $\phi_t = (\phi_{1t}\ \phi_{2t}\ \cdots\ \phi_{rt})'$, $\phi_{ht}\in L^2(\Omega,\mathcal F,P)$, is a solution of the homogeneous equation $(1-L)\zeta_t = 0$, so that $\phi_t = K$, for some $r$-dimensional stochastic vector $K$, for all $t\in\mathbb{Z}$. We say that the process $\phi_t = K$ is a constant stochastic process. Obviously a constant stochastic process $\phi_t = K$ is weakly stationary. Its spectral measure has the jump $\Sigma_K$ at frequency zero. Thus $\phi_t$ has a spectral density (has an absolutely continuous spectral measure) if and only if $\Sigma_K = 0$, i.e., if and only if $\phi_t(\omega) = k$, where $k\in\mathbb{R}^r$, for $\omega$ almost everywhere in $\Omega$.
Definition 4.
(I(0), I(1) and cointegrated vectors)
I(0). An $r$-dimensional ARMA process $y_t$ with spectral density $\Sigma_y(\theta)$ is $I(0)$ if $\Sigma_y(0) \neq 0$.
I(1). The $r$-dimensional vector stochastic process $y_t$ is $I(1)$ if it is a solution of $(1-L)\zeta_t = z_t$, where $z_t$ is an $r$-dimensional $I(0)$ process. The rank of $y_t$ is defined as the rank of $z_t$.
Cointegration.
Assume that the $r$-dimensional stochastic vector $y_t$ is $I(1)$ and denote by $\Sigma_{\Delta y}(\theta)$ the spectral density of $(1-L)y_t$. The vector $y_t$ is cointegrated with cointegrating rank $c$, with $0 < c < r$, if $\operatorname{rank}(\Sigma_{\Delta y}(0)) = r - c$.
If $q$ is the rank of $y_t$ and $r \ge q$, then $c = r - q + d$, with $0 \le d < q$ (when $r = q$, cointegration requires $d > 0$). Thus in the singular case, $r > q$, the vector $y_t$ is necessarily cointegrated, with cointegrating rank at least equal to $r - q$.
If $y_t$ is $I(1)$ and cointegrated with cointegrating rank $c$, there exist $c$ linearly independent $r\times 1$ vectors $\mathbf c_j$, $j = 1,\ldots,c$, such that the spectral density of $\mathbf c_j'(1-L)y_t$ vanishes at frequency zero. The vectors $\mathbf c_j$ are called cointegrating vectors, and the set $\mathbf c_j$, $j = 1,\ldots,c$, a complete set of cointegrating vectors. Of course a complete set of cointegrating vectors $\mathbf c_j$, $j = 1,\ldots,c$, can be replaced by the set $\mathbf d_j$, $j = 1,\ldots,c$, where the vectors $\mathbf d_j$ are $c$ independent linear combinations of the vectors $\mathbf c_j$.
Lemma 1.
(I) Assume that $y_t$ has an ARMA structure and the rational representation (5): $y_t = V(L)u_t$. Then $y_t$ is $I(0)$ if and only if $V(1) \neq 0$.
(II) Assume that $(1-L)y_t$ has an ARMA structure and the rational representation
$(1-L)y_t = V(L)u_t. \quad (7)$
The process $y_t$ is $I(1)$ if and only if $V(1) \neq 0$.
(III) If $y_t$ is $I(1)$, cointegrated and has representation (7), the cointegrating rank of $y_t$ is $c$ if and only if the rank of $V(1)$ is $r - c$. Moreover, $\mathbf c$ is a cointegrating vector for $y_t$ if and only if $\mathbf c'V(1) = 0$.
(IV) Assume that $y_t$ is $I(1)$. Then $\mathbf c$ is a cointegrating vector for $y_t$ if and only if a scalar stochastic variable $w\in L^2(\Omega,\mathcal F,P)$ can be determined such that $\mathbf c'y_t - w$ is stationary with an ARMA structure.
Proof. 
(I) is an immediate consequence of $\Sigma_y(0) = (2\pi)^{-1}V(1)\Gamma_uV(1)'$, where $\Gamma_u$ is the nonsingular covariance matrix of $u_t$. (II) and (III) are obtained in the same way from $\Sigma_{\Delta y}(0) = (2\pi)^{-1}V(1)\Gamma_uV(1)'$.
(IV) The process $y_t$ solves (6) with $z_t = V(L)u_t$, so that, defining
$\mu_t = \begin{cases} u_1 + u_2 + \cdots + u_t, & \text{for } t > 0,\\ 0, & \text{for } t = 0,\\ -(u_0 + u_{-1} + \cdots + u_{t+1}), & \text{for } t < 0,\end{cases} \quad (8)$
we have
$y_t = V(L)\mu_t + K = \left[V(1) + (1-L)\frac{V(L)-V(1)}{1-L}\right]\mu_t + K = V(1)\mu_t + V^*(L)u_t + K, \quad (9)$
where (i) the entries of $V^*(L) = (V(L)-V(1))/(1-L)$ are rational functions of $L$ with no poles of modulus less than or equal to unity, and (ii) $K$ is a constant $r$-dimensional stochastic process. We have
$\mathbf c'y_t = \mathbf c'V(1)\mu_t + \mathbf c'V^*(L)u_t + \mathbf c'K.$
If $\mathbf c$ is a cointegrating vector of $y_t$ we have $\mathbf c'V(1) = 0$, so that
$\mathbf c'y_t = \mathbf c'V^*(L)u_t + \mathbf c'K.$
Setting $w = \mathbf c'K$, the process $\mathbf c'y_t - w = \mathbf c'V^*(L)u_t$ has the desired properties. Note that $w$ has the equivalent definition $w = \mathbf c'y_0 - \mathbf c'V^*(L)u_0$. Conversely, suppose that $w$ is such that $\mathbf c'y_t - w$ has an ARMA structure. By (9),
$\mathbf c'y_t - w = \mathbf c'V(1)\mu_t + \mathbf c'V^*(L)u_t + \mathbf c'K - w,$
so that, by the elementary inequality $(a+b+c)^2 \le 3(a^2+b^2+c^2)$,
$E(\mathbf c'y_t - w)^2 + E(\mathbf c'V^*(L)u_t)^2 + E(\mathbf c'K - w)^2 \ \ge\ \tfrac{1}{3}\,\mathbf c'V(1)\,\Sigma_{\mu_t}\,V(1)'\mathbf c.$
The three terms on the left-hand side are finite and independent of $t$. As $\Sigma_{\mu_t} = |t|\Sigma_u$ and $\Sigma_u$ is positive definite, the right-hand side diverges as $|t|\to\infty$ unless $\mathbf c'V(1) = 0$. □
Lemma 1 shows that our definitions of $I(0)$ and $I(1)$ processes are equivalent to Definitions 3.2 and 3.3 in Johansen (1995), p. 35, with two minor differences: (i) our assumption of rational spectral density; (ii) the time span of the stochastic processes, which is $t = 0, 1, \ldots$ in Johansen's book and $t\in\mathbb{Z}$ in the present paper. Also, under the assumption that $(1-L)y_t$ has an ARMA structure, our definition of cointegration is equivalent to that in Johansen (1995), p. 37.

3. Representation Theory for Singular $I(1)$ Vectors

In Section 3.1 we prove our generalization to singular vectors of the Granger representation theorem (from MA to AR). We closely follow the proof in Johansen (1995), Theorem 4.5, pp. 55–57. In Section 3.2 we show that, under a suitable parameterization, the matrix of the autoregressive representation is generically of finite degree.

3.1. The Granger Representation Theorem (MA to AR)

Suppose that $r \ge q$, $c > 0$ and $r > c \ge r - q$. Let $B(L)$ be an $r\times q$ polynomial matrix of degree $s_1 \ge 0$ and $S(L)$ an $r\times r$ polynomial matrix of degree $s_2 \ge 0$ with $S(0) = I_r$.
Assumption 1.
$S(L)$ is stable.
Assumption 2.
If $z^*$ is a zero of $B(z)$ (i.e., $\operatorname{rank}(B(z^*)) < q$), then either $z^* = 1$ or $|z^*| > 1$.
Assumption 2 implies that the rank of $B(0)$ is $q$. The next is a stronger version of Assumption 2:
Assumption 3.
If $z^*$ is a zero of $B(z)$, then $z^* = 1$.
Assumption 4.
$\operatorname{rank}(B(1)) = r - c$.
Under Assumption 1, let $y_t$ be a solution of the equation
$(1-L)\zeta_t = S(L)^{-1}B(L)u_t. \quad (10)$
We have
$y_t = S(L)^{-1}B(L)\mu_t + K, \quad (11)$
where $\mu_t$ is defined in (8) and $K$ is a constant stochastic process. By Assumption 4, $S(1)^{-1}B(1) \neq 0$, so that $y_t$ is $I(1)$ with cointegrating rank $c$, see Lemma 1, (II) and (III).
Consider the finite Taylor expansion of $B(z)$ around $z = 1$:
$B(z) = B(1) - (1-z)B^{(1)}(1) + (1-z)^2\,\tfrac{1}{2!}B^{(2)}(1) - \cdots.$
Assumption 4 implies that
$B(1) = \xi\eta',$
where $\xi$ is $r\times(r-c)$ of rank $r-c$ and $\eta$ is $q\times(r-c)$ of rank $r-c$, see Lancaster and Tismenetsky (1985), p. 97, Proposition 3. The Taylor expansion above can be rewritten as
$B(z) = \xi\eta' + (1-z)B^* + (1-z)^2E(z), \quad (12)$
where $B^* = -B^{(1)}(1)$ and $E(z)$ is a polynomial matrix.
Let $\xi_\perp$ be an $r\times c$ matrix whose columns are orthogonal to all the columns of $\xi$. Then: (i) the columns of $\xi_\perp$ are a complete set of cointegrating vectors for the $I(1)$ solutions of $(1-L)\zeta_t = B(L)u_t$; (ii) the columns of the matrix $S(1)'\xi_\perp$ are a complete set of cointegrating vectors for $y_t$. Regarding (i), using (11) and (12), we have
$\xi_\perp'S(L)y_t = \xi_\perp'B(L)\mu_t + \xi_\perp'S(1)K = \big[\xi_\perp'B^* + (1-L)\xi_\perp'E(L)\big]u_t + \xi_\perp'S(1)K, \quad (13)$
so that $\xi_\perp'S(L)y_t - \xi_\perp'S(1)K$ has an ARMA structure. Regarding (ii), see the proof of Proposition 2.
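The rank factorization in the display above and the orthogonal complement $\xi_\perp$ are easy to compute numerically; a small sketch (ours, on random numbers; the SVD-based construction is our own choice, and the factorization is not unique):

```python
# Sketch (ours): given B1 = B(1) of rank r - c, recover a rank factorization
# B1 = xi @ eta.T and an orthogonal complement xi_perp via the SVD.
import numpy as np

rng = np.random.default_rng(2)
r, q, c = 4, 3, 3                      # so rank(B(1)) = r - c = 1
xi0 = rng.normal(size=(r, r - c))
eta0 = rng.normal(size=(q, r - c))
B1 = xi0 @ eta0.T                      # the matrix to be factorized

U, s, Vt = np.linalg.svd(B1)
k = int(np.sum(s > 1e-10))             # numerical rank, here r - c
xi = U[:, :k] * s[:k]                  # one admissible choice of xi (not unique)
eta = Vt[:k].T
xi_perp = U[:, k:]                     # columns orthogonal to the columns of xi

assert np.allclose(xi @ eta.T, B1)
assert np.allclose(xi_perp.T @ xi, 0)  # xi_perp' xi = 0, as required
```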
Assumption 5.
$\operatorname{rank}\begin{pmatrix}\xi_\perp'B^*\\ \eta'\end{pmatrix} = \operatorname{rank}\begin{pmatrix}\xi_\perp'B^*\\ \xi'\xi\,\eta'\end{pmatrix} = q.$
Define $S^*(L) = \big(S(L) - S(1)\big)(1-L)^{-1}$.
Assumption 6.
$\xi_\perp'\big(B^* - S^*(1)S(1)^{-1}\xi\eta'\big) \neq 0.$
Remark 3.
Let $y_t$ be a solution of (10), so that $(1-L)y_t$ is stationary and $S(L)\big[(1-L)y_t\big] = B(L)u_t$. Assumption 2, and therefore Assumption 3, implies that $u_t$ is fundamental for $(1-L)y_t$, see Section 2.2.
We are now ready for our main representation result.
Proposition 2.
(I) Weak form. Suppose that Assumptions 1, 2, 4, 5 and 6 hold and let $y_t$ be a solution of the difference Equation (10), so that $y_t = S(L)^{-1}B(L)\mu_t + K$, with $\mu_t$ defined in (8) and $K$ a constant stochastic process. Set $\beta = S(1)'\xi_\perp$. Then a $c$-dimensional stochastic vector $w$ can be determined such that (i) $\beta'y_t - w$ is $I(0)$, and (ii) $y_t$ has the error correction representation
$A(L)y_t = A^*(L)(1-L)y_t + \alpha(\beta'y_{t-1} - w) = B(0)u_t, \quad (14)$
where $A(L)$ is a rational $r\times r$ matrix with no poles in or on the unit circle, $A(0) = I_r$, $A^*(L) = (A(L) - A(1)L)(1-L)^{-1}$, $\alpha$ is $r\times c$ and full rank, and $\alpha\beta' = A(1)$.
(II) Strong form. Under Assumptions 1, 3, 4, 5 and 6, statement (I) holds with an $r\times r$ stable, finite-degree matrix polynomial $A(L)$.
Proof. 
Multiply both sides of $(1-L)S(L)y_t = B(L)u_t$ by the $r\times r$ invertible matrix $\Xi = (\xi_\perp\ \ \xi)'$. We obtain
$(1-L)\,\Xi S(L)y_t = \Xi B(L)u_t = \left[\begin{pmatrix}0_{c\times q}\\ \xi'\xi\eta'\end{pmatrix} + (1-L)\begin{pmatrix}\xi_\perp'B^*\\ \xi'B^*\end{pmatrix} + (1-L)^2\begin{pmatrix}\xi_\perp'E(L)\\ \xi'E(L)\end{pmatrix}\right]u_t = \begin{pmatrix}(1-L)I_c & 0\\ 0 & I_{r-c}\end{pmatrix}\left[\begin{pmatrix}\xi_\perp'B^*\\ \xi'\xi\eta'\end{pmatrix} + (1-L)\begin{pmatrix}\xi_\perp'E(L)\\ \xi'B^*\end{pmatrix} + (1-L)^2\begin{pmatrix}0_{c\times q}\\ \xi'E(L)\end{pmatrix}\right]u_t. \quad (15)$
Taking the first $c$ rows in (15),
$(1-L)\,\xi_\perp'S(L)y_t = (1-L)\big[\xi_\perp'B^* + (1-L)\xi_\perp'E(L)\big]u_t. \quad (16)$
This implies that
$\xi_\perp'S(L)y_t = \big[\xi_\perp'B^* + (1-L)\xi_\perp'E(L)\big]u_t + w, \quad (17)$
where $w$ is a $c$-dimensional constant stochastic vector. Comparing with (13), $w = \xi_\perp'S(1)K$. On the other hand,
$\xi_\perp'S(1)y_t - w = (\xi_\perp'S(L)y_t - w) - (\xi_\perp'S(L)y_t - \xi_\perp'S(1)y_t) = (\xi_\perp'S(L)y_t - w) - \xi_\perp'S^*(L)(1-L)y_t = (\xi_\perp'S(L)y_t - w) - \xi_\perp'S^*(L)S(L)^{-1}B(L)u_t = \big[\xi_\perp'\big(B^* - S^*(1)S(1)^{-1}\xi\eta'\big) + (1-L)H(L)\big]u_t,$
where the last equality has been obtained using (17) and $H(L)$ is a suitable matrix with no poles in or on the unit circle. Thus $\beta'y_t - w = \xi_\perp'S(1)y_t - w$ has an ARMA structure. Moreover, by Assumption 6, $\beta'y_t - w$ is $I(0)$.
Joining (16) with the last $r-c$ rows of (15),
$\begin{pmatrix}I_c & 0\\ 0 & (1-L)I_{r-c}\end{pmatrix}\Xi S(L)y_t - \begin{pmatrix}I_c\\ 0_{(r-c)\times c}\end{pmatrix}w = M(L)u_t, \quad (18)$
where
$M(L) = \begin{pmatrix}\xi_\perp'B^*\\ \xi'\xi\eta'\end{pmatrix} + (1-L)\begin{pmatrix}\xi_\perp'E(L)\\ \xi'B^*\end{pmatrix} + (1-L)^2\begin{pmatrix}0_{c\times q}\\ \xi'E(L)\end{pmatrix}. \quad (19)$
By (15) and (19),
$B(L) = \Xi^{-1}\begin{pmatrix}(1-L)I_c & 0\\ 0 & I_{r-c}\end{pmatrix}M(L). \quad (20)$
By Assumption 5, $M(z)$ has no zero at $z = 1$, see (19). On the other hand, (i) if $z^*$ is a zero of $M(z)$, then $z^*$ is a zero of $B(z)$; (ii) if $z^*$ is a zero of $B(z)$ and $z^* \neq 1$, then $z^*$ is a zero of $M(z)$. Therefore, Assumption 3 implies that $M(z)$ is zeroless, and vice versa. Under Assumption 2, the zeros of $M(z)$ lie outside the unit circle. In order to conclude the proof we need to invert $M(L)$ in (18).
(I) Under Assumption 3, Proposition 1, part (I), states that there exists an $r\times r$ stable, finite-degree polynomial matrix $N(L) = I_r + N_1L + \cdots + N_pL^p$, for some $p$, such that: (i) $N(0) = I_r$; (ii) $N(L)M(L) = M(0)$.
(II) Under Assumption 2, by a standard procedure we remove all the zeros of $M(z)$, which lie outside the unit circle; we then use Proposition 1, part (I), to left-invert the residual zeroless polynomial, thus obtaining an $r\times r$ rational matrix $N(L)$ such that: (i) $N(L)$ has no poles in or on the unit circle (the possible poles of $N(L)$ are the zeros of $M(L)$, which lie outside the unit circle); (ii) $N(0) = I_r$; (iii) $N(L)M(L) = M(0)$. See also Deistler et al. (2010).
Defining
$A(L) = \Xi^{-1}N(L)\begin{pmatrix}I_c & 0\\ 0 & (1-L)I_{r-c}\end{pmatrix}\Xi S(L) = \Xi^{-1}N(L)\begin{pmatrix}\xi_\perp'\\ (1-L)\xi'\end{pmatrix}S(L)$
and using $M(0) = \Xi B(0)$, we have
$A(L)y_t - \Xi^{-1}N(1)\begin{pmatrix}I_c\\ 0_{(r-c)\times c}\end{pmatrix}w = B(0)u_t,$
with $A(0) = I_r$. Defining $A^*(L) = (A(L) - A(1)L)(1-L)^{-1}$,
$A^*(L)(1-L)y_t + A(1)y_{t-1} - \Xi^{-1}N(1)\begin{pmatrix}I_c\\ 0_{(r-c)\times c}\end{pmatrix}w = B(0)u_t.$
Defining
$\alpha = \Xi^{-1}N(1)\begin{pmatrix}I_c\\ 0_{(r-c)\times c}\end{pmatrix},$
we see that $A(1) = \alpha\beta'$ and
$A^*(L)(1-L)y_t + \alpha(\beta'y_{t-1} - w) = B(0)u_t.$
 □
Some remarks are in order.
Remark 4.
(I) Under our assumption of an ARMA structure, Assumption 1 corresponds to Definition 3.1 in Johansen's book, see p. 34. Assumption 2 is Johansen's Assumption 1 (see p. 14), adapted for singularity. Assumption 3 has no counterpart in Johansen's nonsingular framework. In Section 3.2 we show that under the parameterization adopted in Definition 5, Assumption 3 holds generically.
(II) Simplifying the model by taking $S(L) = I_r$, Assumption 5 generalizes to the singular case Johansen's assumption that $\xi_\perp'C^*\eta_\perp$ is full rank (see Theorem 4.5, p. 55; $C^*$ corresponds to our $B^*$). For, assuming that $r = q$ and multiplying the matrix in Assumption 5 on the right by the nonsingular matrix $(\eta\ \ \eta_\perp)$, we obtain that Assumption 5 holds if and only if $\xi_\perp'B^*\eta_\perp$ is full rank. Assumption 5 is used in the proof of Proposition 2 to invert the matrix $M(L)$, which remains on the right-hand side after the removal of the unit roots, see Equation (18); this is the same rôle played by Johansen's assumption in his proof.
(III) Under $S(L) = I_r$, Assumption 6 simplifies to $\xi_\perp'B^* \neq 0$. If $d > 0$, Assumption 6 is a consequence of Assumption 5. For, if $d > 0$ then $r - c = q - d < q$. On the other hand, $r - c$ is the number of rows of $\eta'$, so that Assumption 5 holds only if Assumption 6 holds. In particular, if $r = q$ and $c = d > 0$, Assumption 6 is redundant. However, if $r > q$ and $d = 0$, so that the rank of $\eta'$ is $q$, then Assumption 5 holds even if $\xi_\perp'B^* = 0$. Assumption 6 is necessary in Proposition 2 to prove that the error correction term is $I(0)$, not only stationary.
Remark 5.
Uniqueness issues arise with autoregressive representations of singular vectors. For example, suppose that $c = r - q > 0$, so that $d = 0$. Representation (14) has an $(r-q)$-dimensional error correction term $\beta'y_t - w$. On the other hand, in this case $B(1)$ has full rank $q$, so that Proposition 1(I) applies and, in spite of cointegration, $y_t$ has an autoregressive representation in differences:
$D(L)S(L)(1-L)y_t = B(0)u_t.$
In Appendix B.1 we sketch a proof of the statement that in general $y_t$ has VECM representations with a number of error correction terms ranging from $d$ to $c$. However, as we show in Appendix B.2, different autoregressive representations of $y_t$ produce the same impulse-response functions. Both in this paper and in the companion paper Barigozzi et al. (2019), the number of error correction terms in the error correction representation for reduced-rank $I(1)$ vectors is always the maximum, $c$. It is worth reporting that, in our experiments with simulated data, the best results in the estimation of singular VECMs are obtained using $c$ as the number of error correction terms.
Remark 6.
Assume for simplicity that $S(L) = I_r$. From Equation (17):
$e_t = \beta'y_t - w = \xi_\perp'y_t - w = \big[\xi_\perp'B^* + (1-L)H(L)\big]u_t.$
If $r = q$, Assumption 5 implies that $\xi_\perp'B^*$ has rank $c$, so that no $c$-dimensional vector $\mathbf d \neq 0$ can be determined such that $\mathbf d'e_t$ is stationary but not $I(0)$. Thus, according to the definition introduced in Franchi and Paruolo (2019), p. 1181, the error term $e_t$ is a "non-cointegrated $I(0)$ process". When $r > q$ and $c \le q$, i.e., $r \le 2q - d$, elementary examples can be produced in which $e_t$ is an $I(0)$ but not a non-cointegrated $I(0)$ process (one is given in Appendix A.2). Thus Assumption 6 only implies that $e_t$ is $I(0)$. Of course, under $c \le q$, the assumption that $\xi_\perp'\big(B^* - S^*(1)S(1)^{-1}\xi\eta'\big)$ has rank $c$, an enhancement of Assumption 6, implies that $e_t$ is a non-cointegrated $I(0)$ process. On the other hand, if $c > q$, i.e., $r > 2q - d$, $e_t$ cannot be a non-cointegrated $I(0)$ process.

3.2. Generically, $A(L)$ Is a Finite-Degree Polynomial

Suppose that the couple $(S(L), B(L))$ is parameterized as in Definition 3. It is easy to see that $B(1)$ generically has rank $q$, so that generically the cointegrating rank of $y_t$ is $r - q$. In particular, if $r = q$, cointegration is non-generic.
It is quite easy to see that this paradoxical result only depends on the choice of a parameter set that is unfit to study cointegration. Our starting point here is that a specific value of $c$ between $r - q$ and $r - 1$ has a motivation in economic theory or in statistical inference, and must therefore be built into the parameter set. Thus in Definition 5 below the family of filters is redefined so that generically the cointegrating rank is equal to a given $c$ between $r - q$ and $r - 1$.
Definition 5.
(Rational reduced-rank family of filters with cointegrating rank c) Assume that $r > q$, $c > 0$ and $r > c \ge r - q$. Let $G$ be a set of couples $(S(L), B(L))$, where:
(i)
The matrix $B(L)$ has the parameterization
$B(L) = \xi\eta' + (1-L)B^* + (1-L)^2E(L),$
where $\xi$ and $\eta$ are $r\times(r-c)$ and $q\times(r-c)$ respectively, $B^*$ is an $r\times q$ matrix and $E(L)$ is an $r\times q$ matrix polynomial of degree $s_1 \ge 0$.
(ii)
$S(L)$ is an $r\times r$ polynomial matrix of degree $s_2 \ge 0$, with $S(0) = I_r$.
(iii)
Denoting by $\mathbf p$ the vector containing the $\lambda = (r-c)(r+q) + rq(s_1+2) + r^2s_2$ coefficients of the matrices $S(L)$, $\xi$, $\eta$, $B^*$ and $E(L)$, we assume that $\mathbf p\in\Pi$, where $\Pi$ is an open subset of $\mathbb{R}^\lambda$ such that for $\mathbf p\in\Pi$: (1) $S(z)$ is stable; (2) $\operatorname{rank}(B(z)) = q$ with the exception of a finite subset of $\mathbb{C}$; (3) $\operatorname{rank}(B(1)) = \operatorname{rank}(\xi\eta') = r - c$.
We say that $G$ is a rational reduced-rank family of filters with cointegrating rank $c$.
Proposition 3.
Assume that $r > q$. Let $y_t$ be an $I(1)$ solution of Equation (10), where $(S(L), B(L))$ belongs to a rational reduced-rank family of filters with cointegrating rank $c$. For generic values of the parameters in $\Pi$, Assumptions 1, 3, 4, 5 and 6 hold. Thus the strong form of Proposition 2 holds and $y_t$ has the error correction representation
$A(L)y_t = A^*(L)(1-L)y_t + \alpha(\beta'y_{t-1} - w) = B(0)u_t,$
where $A(L)$ is a finite-degree polynomial matrix.
Proof. 
Part (iii) of Definition 5 implies that Assumptions 1 and 4 hold for all $\mathbf p\in\Pi$. The sets where Assumptions 5 and 6 do not hold are the intersections of the open set $\Pi$ with the algebraic varieties defined by
(a) $\operatorname{rank}\begin{pmatrix}\xi_\perp'B^*\\ \eta'\end{pmatrix} < q$, (b) $\xi_\perp'\big(B^* - S^*(1)S(1)^{-1}\xi\eta'\big) = 0$
(the variety described by (a) is obtained by equating to zero the determinants of all the $q\times q$ submatrices of the $r\times q$ matrix in brackets). It is easy to see that the varieties (a) and (b) are not trivial, i.e., that their dimension is lower than $\lambda$. Thus Assumptions 5 and 6 hold generically. The same result holds for Assumption 3: the points of $\Pi$ where it is not fulfilled belong to a lower-dimensional algebraic variety. This is proved in Appendix A.1, see in particular Lemma A4. □
Remark 7.
It is easy to see that, assuming that $c \le q$, $\operatorname{rank}\big(\xi_\perp'(B^* - S^*(1)S(1)^{-1}\xi\eta')\big) = c$ holds generically in $\Pi$. Thus, in that case, the error term $\beta'y_t - w$ is generically a non-cointegrated $I(0)$ process, see Remark 6.
Remark 8.
A general comment on genericity results is in order. Theorems like Proposition 3 or Proposition 1, part (II), show that the subset where some statement does not hold belongs to an algebraic variety of lower dimension (see the proof of Proposition 3 in particular), and is therefore negligible from a topological point of view. This suggests the working hypothesis that such a subset is negligible from an economic or statistical point of view as well. If, for example, economic theory produces a singular vector $y_t$ with cointegrating rank $c$, we may find it reasonable to conclude that $y_t$ has representation (14) with a finite autoregressive polynomial. However, a greater degree of certainty is obtained by checking that the parameters of $(S(L), B(L))$ that are implicit in the theory do not lie in one of the three algebraic varieties described in the proof of Proposition 3.
Definition 5 does not assume that $B(L)$ has no zeros inside the unit circle. Thus we have not assumed that $u_t$ is fundamental for $(1-L)y_t$, see Section 2.2. However, Proposition 3 shows that for generic values of the parameters in $\Pi$ the assumptions of Proposition 2, strong form, hold, Assumption 3 in particular, so that $B(L)$ has no zeros other than unity and therefore none inside the unit circle. Thus:
Proposition 4.
Assume that $r > q$. Let $y_t$ be a solution of Equation (10), where $(S(L), B(L))$ belongs to a rational reduced-rank family of filters with cointegrating rank $c$. For generic values of the parameters in $\Pi$, $u_t$ is fundamental for $(1-L)y_t$.
Remark 9.
Note that Propositions 3 and 4 do not hold in the nonsingular case, where no genericity argument can be used to rule out non-unit zeros of $B(L)$, either inside or outside the unit circle. In particular, fundamentalness of $u_t$ for $(1-L)y_t$ is not generic if $r = q$.

3.3. Permanent and Transitory Shocks

Let $\eta_\perp$ be a $q\times d$ matrix whose columns are independent and orthogonal to the columns of $\eta$, and let
$\bar\eta = \eta(\eta'\eta)^{-1}, \qquad \bar\eta_\perp = \eta_\perp(\eta_\perp'\eta_\perp)^{-1}.$
Defining $v_{1t} = \eta_\perp'u_t$ and $v_{2t} = \eta'u_t$, we have
$u_t = \bar\eta_\perp v_{1t} + \bar\eta v_{2t} = (\bar\eta_\perp\ \ \bar\eta)\begin{pmatrix}v_{1t}\\ v_{2t}\end{pmatrix}.$
We have
$B(L)u_t = B(L)(\bar\eta_\perp\ \ \bar\eta)\begin{pmatrix}v_{1t}\\ v_{2t}\end{pmatrix} = (1-L)G_1(L)v_{1t} + \big[\xi + (1-L)G_2(L)\big]v_{2t},$
where $G_1(L) = \big[B^* + (1-L)E(L)\big]\bar\eta_\perp$ and $G_2(L) = \big[B^* + (1-L)E(L)\big]\bar\eta$. All the solutions of the difference equation $(1-L)y_t = S(L)^{-1}B(L)u_t$ are
$y_t = S(L)^{-1}\big[G_1(L)v_{1t} + G_2(L)v_{2t} + T_t\big] + K, \quad (21)$
where $K$ is a constant stochastic process and
$T_t = \begin{cases}\xi(v_{21} + v_{22} + \cdots + v_{2t}), & \text{for } t > 0,\\ 0, & \text{for } t = 0,\\ -\xi(v_{20} + v_{2,-1} + \cdots + v_{2,t+1}), & \text{for } t < 0.\end{cases}$
As $\xi$ is full rank, we see that $y_t$ is driven by the $q - d = r - c$ permanent shocks $v_{2t}$ and by the $d$ transitory shocks $v_{1t}$. In representation (21), the component $T_t$ is the common trend of Stock and Watson (1988). Note that the number of permanent shocks is obtained as $r$ minus the cointegrating rank, as usual. However, the number of transitory shocks is only $d = c - (r - q)$, as though $r - q$ transitory shocks had a zero coefficient.
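A small numerical sketch of this construction (ours; all dimensions and numbers are illustrative assumptions) showing the split of $u_t$ into permanent shocks $v_{2t} = \eta'u_t$ and transitory shocks $v_{1t} = \eta_\perp'u_t$:

```python
# Sketch (ours): build eta_perp, the normalizations eta_bar and eta_perp_bar,
# and split u_t into transitory shocks v1 = eta_perp' u and permanent v2 = eta' u.
import numpy as np

rng = np.random.default_rng(3)
q, d = 3, 1                                   # q - d = 2 permanent shocks
eta = rng.normal(size=(q, q - d))

eta_perp = np.linalg.svd(eta)[0][:, q - d:]   # q x d orthogonal complement of eta
eta_bar = eta @ np.linalg.inv(eta.T @ eta)
eta_perp_bar = eta_perp @ np.linalg.inv(eta_perp.T @ eta_perp)

u = rng.normal(size=(500, q))
v1, v2 = u @ eta_perp, u @ eta                # transitory (d) and permanent (q - d)
u_rebuilt = v1 @ eta_perp_bar.T + v2 @ eta_bar.T
assert np.allclose(u, u_rebuilt)              # u_t = eta_perp_bar v1t + eta_bar v2t

trend = np.cumsum(v2, axis=0)                 # the common-trend component (up to xi)
```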

3.4. VECMs and Unrestricted VARs in the Levels

Several papers have addressed the issue of whether and when an error correction model or an unrestricted VAR in the levels should be used for estimation in the case of nonsingular cointegrated vectors: Sims et al. (1990) have shown that the parameters of a cointegrated VAR are consistently estimated using an unrestricted VAR in the levels; on the other hand, Phillips (1998) shows that if the variables are cointegrated, the long-run features of the impulse-response functions are consistently estimated only if the unit roots are explicitly taken into account, that is within a VECM specification. The simulation exercise described below provides evidence in favour of the VECM specification in the singular case.
(I)
We generate $y_t$ using a specification of (14) with $r = 4$, $q = 3$, $d = 2$, so that $c = r - q + d = 3$. The $4\times 4$ matrix $A(L)$ is of degree 2. The impulse-response functions are identified by assuming that the upper $3\times 3$ submatrix of $B(0)$ is lower triangular (see Appendix C for details). We replicate the generation of $y_t$ 1000 times for $T = 100, 500, 1000, 5000$.
(II)
For each replication, we estimate a (misspecified) VAR in differences (DVAR), a VAR in the levels (LVAR) and a VECM, as in Johansen (1988, 1991), assuming known $c$, the degree of $A(L)$ and that of $A^*(L)$. For the VAR in differences, the impulse-response functions for $(1-L)y_t$ are cumulated to obtain the impulse-response functions for $y_t$. The root mean square error (RMSE) between estimated and actual impulse-response functions is computed for each replication using all 12 impulse-responses and averaged over all replications.
The results are shown in Table 1. We see that the RMSE of both the VECM and the LVAR decreases as $T$ increases. However, for all values of $T$, the RMSE of the VECM stabilizes as the lag increases, whereas it deteriorates for the LVAR, in line with the claim that the long-run responses of the variables are better estimated with the VECM. The performance of the misspecified DVAR is uniformly poor, with the exception of lag zero.
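A minimal sketch of this kind of exercise (ours, not the authors' exact Appendix C design; it uses statsmodels' VECM and VAR classes, the lag orders and all numerical values are illustrative assumptions, and the level-VAR coefficients implied by the VECM are rebuilt by hand):

```python
# Simulation sketch (ours; all numbers illustrative): generate a singular I(1)
# vector with r = 4, q = 3, c = 3 from Delta y_t = B(L) u_t, with
# B(L) = xi eta' + (1 - L) B*, then compare long-horizon impulse responses
# implied by an unrestricted VAR in levels and by a VECM with known rank c.
import numpy as np
from statsmodels.tsa.api import VAR
from statsmodels.tsa.vector_ar.vecm import VECM

rng = np.random.default_rng(0)
r, q, c, T, H = 4, 3, 3, 1000, 50
xi = rng.normal(size=(r, r - c))              # B(1) = xi @ eta.T has rank r - c = 1
eta = rng.normal(size=(q, r - c))
Bstar = rng.normal(size=(r, q))

u = rng.normal(size=(T + 1, q))
dy = u[1:] @ (xi @ eta.T).T + (u[1:] - u[:-1]) @ Bstar.T
y = np.cumsum(dy, axis=0)

# unrestricted VAR in the levels (the lag order 2 is only an approximation here)
irf_lvar = VAR(y).fit(2).irf(H).irfs          # reduced-form responses, (H+1, r, r)

# VECM with the true cointegrating rank; alpha, beta, gamma are the ML estimates
res = VECM(y, k_ar_diff=1, coint_rank=c).fit()
A1 = np.eye(r) + res.alpha @ res.beta.T + res.gamma   # implied level-VAR(2)
A2 = -res.gamma
irf_vecm = np.zeros((H + 1, r, r))
irf_vecm[0], irf_vecm[1] = np.eye(r), A1
for h in range(2, H + 1):
    irf_vecm[h] = A1 @ irf_vecm[h - 1] + A2 @ irf_vecm[h - 2]

print("horizon-50 response norm, LVAR:", np.linalg.norm(irf_lvar[-1]))
print("horizon-50 response norm, VECM:", np.linalg.norm(irf_vecm[-1]))
```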

4. Cointegration of the Observable Variables in a DFM

Consider again the factor model $x_{it} = \chi_{it} + \epsilon_{it}$, rewritten here in vector form as
$x_t = \chi_t + \epsilon_t, \qquad \chi_t = \Lambda F_t, \quad (22)$
where $\Lambda$ is $n\times r$, with $n > r$. The relationship between cointegration of the factors $F_t$ and cointegration of the variables $x_{it}$ is now considered.
Let us recall that the common factors $F_{jt}$ are assumed to be orthogonal to the idiosyncratic components $\epsilon_{ks}$ for all $j, k, t, s$, i.e., $E(\chi_t\epsilon_s') = 0_{n\times n}$ for all $t, s$; see the Introduction. The other assumptions on model (22) are asymptotic, see e.g., Forni et al. (2000); Forni and Lippi (2001); (Stock and Watson 2002a, 2002b), and put no restriction on the matrix $\Lambda$ and the vector $\epsilon_t$ for a given finite $n$. In particular, the first $r$ eigenvalues of the matrix $\Lambda'\Lambda$ must diverge as $n\to\infty$, but this has no implications for the rank of the matrix $\Lambda$ corresponding to, say, $n = 10$. However, as we see in Proposition 5 (iii), if the idiosyncratic components are $I(0)$, then, independently of $\Lambda$, all $p$-dimensional subvectors of $x_t$ are cointegrated for $p > q - d$, which is at odds with what is observed in the macroeconomic datasets analyzed in the empirical Dynamic Factor Model literature. This motivates assuming that $\epsilon_t$ is $I(1)$. In that case, see Proposition 5 (i), cointegration of $x_t$ requires that both the common and the idiosyncratic components are cointegrated. Some results are collected in the statement below.
Proposition 5.
Let $x_t^{(p)} = \chi_t^{(p)} + \epsilon_t^{(p)} = \Lambda^{(p)}F_t + \epsilon_t^{(p)}$ be a $p$-dimensional subvector of $x_t$, $p \le n$. Denote by $c_\chi^p$ and $c_\epsilon^p$ the cointegrating ranks of $\chi_t^{(p)}$ and $\epsilon_t^{(p)}$, respectively; both range from $p$ (stationarity) to $0$ (no cointegration).
(i)
$x_t^{(p)}$ is cointegrated only if $\chi_t^{(p)}$ and $\epsilon_t^{(p)}$ are both cointegrated.
(ii)
If $p > q - d$, then $\chi_t^{(p)}$ is cointegrated. If $p \le q - d$ and $\operatorname{rank}(\Lambda^{(p)}) < p$, then $\chi_t^{(p)}$ is cointegrated.
(iii)
Let $V_\chi\subseteq\mathbb{R}^p$ and $V_\epsilon\subseteq\mathbb{R}^p$ be the cointegrating spaces of $\chi_t^{(p)}$ and $\epsilon_t^{(p)}$, respectively. The vector $x_t^{(p)}$ is cointegrated if and only if the intersection of $V_\chi$ and $V_\epsilon$ contains non-zero vectors. In particular, (a) if $p > q - d$ and $c_\epsilon^p > q - d$, then $x_t^{(p)}$ is cointegrated; (b) if $p > q - d$ and $\epsilon_t^{(p)}$ is stationary, then $x_t^{(p)}$ is cointegrated.
Proof. 
Because $\chi_{it}$ and $\epsilon_{js}$ are orthogonal for all $i, j, t, s$, the spectral densities of $(1-L)x_t^{(p)}$, $(1-L)\chi_t^{(p)}$ and $(1-L)\epsilon_t^{(p)}$ fulfill
$\Sigma_{\Delta x^{(p)}}(\theta) = \Sigma_{\Delta\chi^{(p)}}(\theta) + \Sigma_{\Delta\epsilon^{(p)}}(\theta), \qquad \theta\in[-\pi,\pi]. \quad (23)$
Now, (23) implies that
$\lambda_p\big(\Sigma_{\Delta x^{(p)}}(0)\big) \ \ge\ \lambda_p\big(\Sigma_{\Delta\chi^{(p)}}(0)\big) + \lambda_p\big(\Sigma_{\Delta\epsilon^{(p)}}(0)\big), \quad (24)$
where $\lambda_p(A)$ denotes the smallest eigenvalue of the hermitian matrix $A$; this is one of Weyl's inequalities, see Franklin (2000), p. 157, Theorem 1. Because the spectral density matrices are non-negative definite, the left-hand side in (24) vanishes only if both terms on the right-hand side vanish, i.e., the spectral density of $\Delta x_t^{(p)}$ is singular at zero only if the spectral densities of $\Delta\chi_t^{(p)}$ and $\Delta\epsilon_t^{(p)}$ are singular at zero. By Definition 4, (i) is proved.
Without loss of generality we can assume that $S(L) = I_r$. By substituting (21) into (22), we obtain
$x_t = \Lambda\big[G_1(L)v_{1t} + G_2(L)v_{2t} + T_t + K\big] + \epsilon_t, \quad (25)$
where on the right-hand side the only non-stationary terms are $T_t$ and possibly $\epsilon_t$. Recalling that $T_t = \xi\sum_{s=1}^t v_{2s}$, where $\xi$ has dimension $r\times(q-d)$ and rank $q-d$, and defining $G_t = \Lambda\big[G_1(L)v_{1t} + G_2(L)v_{2t} + K\big]$ and $\mathcal T_t = \sum_{s=1}^t v_{2s}$, we can rewrite (25) as
$x_t = \Lambda\xi\,\mathcal T_t + G_t + \epsilon_t.$
For $x_t^{(p)}$:
$x_t^{(p)} = \chi_t^{(p)} + \epsilon_t^{(p)} = \Lambda^{(p)}\xi\,\mathcal T_t + G_t^{(p)} + \epsilon_t^{(p)},$
where $\Lambda^{(p)}$ and $G_t^{(p)}$ have an obvious definition. Of course cointegration of the common components $\chi_t^{(p)}$ is equivalent to cointegration of $\Lambda^{(p)}\xi\,\mathcal T_t$, which in turn is equivalent to $\operatorname{rank}(\Lambda^{(p)}\xi) < p$. Statement (ii) follows from
$\operatorname{rank}\big(\Lambda^{(p)}\xi\big) \le \min\big(\operatorname{rank}(\Lambda^{(p)}),\ \operatorname{rank}(\xi)\big).$
The first part of (iii) is obvious. Assume now that $p > q - d$. If $c_\chi^p + c_\epsilon^p = \dim(V_\chi) + \dim(V_\epsilon) = p - (q-d) + c_\epsilon^p > p$, i.e., if $c_\epsilon^p > q - d$, then the intersection of $V_\chi$ and $V_\epsilon$ is non-trivial, so that $x_t^{(p)}$ is cointegrated. □
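The implication in Proposition 5 (iii)(b) can be illustrated on simulated data; a quick sketch (ours; all numbers made up) using the Johansen trace test from statsmodels on $p$-dimensional subvectors:

```python
# Sketch (ours): with stationary idiosyncratic components, any p = q - d + 1
# variables loading q - d common trends should be cointegrated; the Johansen
# trace test on random subvectors picks this up.
import numpy as np
from itertools import combinations
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(1)
n, T, q, d = 8, 1000, 3, 1
tau = np.cumsum(rng.normal(size=(T, q - d)), axis=0)   # q - d = 2 common trends
load = rng.normal(size=(n, q - d))                     # plays the role of Lambda @ xi
x = tau @ load.T + rng.normal(size=(T, n))             # stationary idiosyncratic part

p = q - d + 1                                          # p > q - d forces cointegration
for sub in list(combinations(range(n), p))[:5]:
    res = coint_johansen(x[:, sub], det_order=-1, k_ar_diff=1)
    # rejecting rank 0 (trace stat above the 95% critical value) signals cointegration
    print(sub, round(res.lr1[0], 1), ">", res.cvt[0, 1])
```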

5. Summary and Conclusions

The paper studies representation theory for singular $I(1)$ stochastic vectors, the factors of an $I(1)$ Dynamic Factor Model in particular. Singular $I(1)$ vectors are cointegrated, with a cointegrating rank $c$ equal to $r - q$, the dimension of $y_t$ minus its rank, plus $d$, with $0 \le d < q$.
If $(1-L)y_t$ has a rational spectral density, under assumptions that generalize to the singular case those in Johansen (1995), we show that $y_t$ has an error correction representation with $c$ error terms, thus generalizing the Granger representation theorem (from MA to AR) to the singular case. Important consequences of singularity are that, generically: (i) the autoregressive matrix polynomial of the error correction representation is of finite degree; (ii) the white noise vector driving $(1-L)y_t$ is fundamental.
We find that $y_t$ is driven by $r - c$ permanent shocks and $d = c - (r - q)$ transitory shocks, not $c$ as in the nonsingular case.
An exercise with simulated data generated by a simple singular VECM confirms previous results, obtained for nonsingular vectors, showing that under cointegration the long-run features of impulse-response functions are better estimated using a VECM rather than a VAR in the levels.
In Section 4 we argue that stationarity of the idiosyncratic components in a DFM produces an amount of cointegration among the observable variables $x_{it}$ that is not observed in the datasets that are standard in the applied Dynamic Factor Model literature. Thus the idiosyncratic vector in those datasets is likely to be $I(1)$, so that an estimation strategy robust to the assumption that some of the idiosyncratic components $\epsilon_{it}$ are $I(1)$ should be preferred.
The results in this paper are the basis for estimation of I ( 1 ) Dynamic Factor Models with cointegrated factors, which is developed in the companion paper (Barigozzi et al. 2019).

Author Contributions

All authors contributed equally to the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

Dietmar Bauer, Manfred Deistler, Massimo Franchi, Martin Wagner, three anonymous referees and the Editors of this Special Issue gave important suggestions for improvements. We also thank the participants to the Workshop on Estimation and Inference Theory for Cointegrated Processes in the State Space Representation, Technische Universität Dortmund, January 2016. Part of this paper was written while Matteo Luciani was chargé de recherches F.R.S.- F.N.R.S., and he gratefully acknowledges their financial support. Of course we are responsible for any remaining errors.

Disclaimer

The views expressed in this paper are those of the authors and do not necessarily reflect those of the Board of Governors or the Federal Reserve System.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proofs

Appendix A.1. Assumption 3 Holds Generically

Proving that Assumption 3 holds generically is equivalent to proving that $M(z)$ is generically zeroless; see the argument below Equation (20).
We need some preliminary results. Lemma A1, though quite easy, is not completely standard and is therefore carefully stated and proved below. Regarding notation, to avoid possible misunderstandings, let us recall that vectors and matrices are always denoted by boldface symbols, while light symbols denote scalars, see Lemmas A1 and A2 in particular.
Lemma A1.
Let $A_j$, $j = 1, \ldots, s$, be scalar polynomials defined on $\mathbb{R}^\lambda$, let $\mathbf p\in\mathbb{R}^\lambda$ and let $Q(\mathbf p)$ be the statement
$A_j(\mathbf p) = 0, \quad\text{for } j = 1, \ldots, s,$
for example the statement that all the $q\times q$ minors of $M(1)$ vanish, i.e., that $\operatorname{rank}(M(1)) < q$. Let $\Pi$ be an open subset of $\mathbb{R}^\lambda$. If $Q$ is false for one point $\mathbf p^*\in\mathbb{R}^\lambda$, then $Q$ is generically false in $\Pi$.
Proof. 
Let $N$ be the closure in $\Pi$ (in the topology of $\Pi$) of the subset of $\Pi$ where $Q$ is true. Suppose that $Q$ is not generically false in $\Pi$. Then the interior of $N$ in $\Pi$, call it $N^\circ$, is not empty. As $\Pi$ is open, $N^\circ$ is open both in the topology of $\Pi$ and in that of $\mathbb{R}^\lambda$. On the other hand, a polynomial function defined on $\mathbb{R}^\lambda$ vanishes on an open set if and only if it vanishes on the whole of $\mathbb{R}^\lambda$, which contradicts the existence of a point in $\mathbb{R}^\lambda$ where $Q$ is false. □
Lemma A2.
Consider the scalar polynomials
$A(z) = a_0z^n + a_1z^{n-1} + \cdots + a_n, \qquad B(z) = b_0z^m + b_1z^{m-1} + \cdots + b_m,$
with $a_0 \neq 0$ and $b_0 \neq 0$, and let $\alpha_i$, $i = 1, \ldots, n$, and $\beta_j$, $j = 1, \ldots, m$, be the roots of $A$ and $B$, respectively. Then: (i)
$a_0^m b_0^n \prod_{i,j}(\alpha_i - \beta_j) = R(a_0, a_1, \ldots, a_n;\ b_0, b_1, \ldots, b_m),$
where $R$ is a polynomial function which is called the resultant of $A$ and $B$. (ii) The resultant vanishes if and only if $A$ and $B$ have a common root. (iii) Suppose that the coefficients $a_i$ and $b_j$ are polynomial functions of $\mathbf p\in\Pi$, where $\Pi$ is an open subset of $\mathbb{R}^\lambda$. If there exists a point $\mathbf p^*\in\mathbb{R}^\lambda$ such that $a_0(\mathbf p^*) \neq 0$, $b_0(\mathbf p^*) \neq 0$ and $R(\mathbf p^*) \neq 0$, then generically in $\Pi$ the polynomials $A$ and $B$ have no common roots.
Proof. 
For (i) and (ii) see van der Waerden (1953), pp. 83–88. Statement (iii) is an obvious consequence of (ii) and Lemma A1. □
Lemma A3.
Recall that a zero of $M(z)$ is a complex number $z^*$ such that $\operatorname{rank}(M(z^*)) < q$. If $M(z)$ has two $q\times q$ submatrices whose determinants have no common roots, then $M(z)$ is zeroless.
Proof. 
If $z^*$ is a zero of $M(z)$, then $z^*$ is a zero of all the $q\times q$ submatrices of $M(z)$. □
For the statement and proof of our last result it is convenient to make explicit the dependence of the matrix $M(z)$ and its submatrices on the vector $\mathbf p$; thus we write $M_{\mathbf p}(z)$, etc. The parameters of the matrix $S(L)$ play no role here. Hence, with no loss of generality, we assume $s_2 = 0$, so that $\lambda = (r-c)(r+q) + rq(s_1+2)$. Lemmas A2–A4 below imply that Assumption 3 holds generically in $\Pi$.
Lemma A4.
Let $M_{1\mathbf p}(z), M_{2\mathbf p}(z), \ldots$ be all the $q\times q$ submatrices of $M_{\mathbf p}(z)$, let $L_i^{\mathbf p}$ be the leading coefficient of $\det(M_{i\mathbf p}(z))$ and let $R_{ij}^{\mathbf p}$ be the resultant of $\det(M_{i\mathbf p}(z))$ and $\det(M_{j\mathbf p}(z))$. There exist $i$, $j$ and $\mathbf p^*\in\mathbb{R}^\lambda$ such that
$L_i^{\mathbf p^*}L_j^{\mathbf p^*} \neq 0$
and
$R_{ij}^{\mathbf p^*} \neq 0.$
Proof. 
Assume first that $r = q + 1$. To each $\mathbf p\in\Pi$ there corresponds the matrix
$M_{\mathbf p}(z) = \begin{pmatrix}\xi_\perp'B^*\\ \xi'\xi\eta'\end{pmatrix} + (1-z)\begin{pmatrix}\xi_\perp'E(z)\\ \xi'B^*\end{pmatrix} + (1-z)^2\begin{pmatrix}0_{c\times q}\\ \xi'E(z)\end{pmatrix}.$
Of course, the definition of $M_{\mathbf p}(z)$ makes sense for all $\mathbf p\in\mathbb{R}^\lambda$, see Equation (19). Let $M_{1\mathbf p}(z)$ and $M_{2\mathbf p}(z)$ be the matrices obtained from $M_{\mathbf p}(z)$ by removing the first and the last row, respectively. We have
$\operatorname{degree}\big[\det(M_{1\mathbf p}(z))\big] \le (q-d)(s_1+2) + d(s_1+1) = d_1, \qquad \operatorname{degree}\big[\det(M_{2\mathbf p}(z))\big] \le (q-d-1)(s_1+2) + (d+1)(s_1+1) = d_2.$
We will construct a point $\mathbf p^*\in\mathbb{R}^\lambda$ such that: (A) the coefficient of $z^{d_1}$ in $\det(M_{1\mathbf p^*}(z))$ and the coefficient of $z^{d_2}$ in $\det(M_{2\mathbf p^*}(z))$ (the leading coefficients) do not vanish; (B) the resultant of $\det(M_{1\mathbf p^*}(z))$ and $\det(M_{2\mathbf p^*}(z))$ does not vanish.
Let us firstly define a family of matrices, denoted by $\underline M(z)$, obtained by specifying $\eta$, $\xi$, $\xi_\perp$, $B^*$ and $E(z)$ in the following way:
$\underline\eta = \begin{pmatrix}0_{d\times(q-d)}\\ I_{q-d}\end{pmatrix}, \quad \underline\xi = \begin{pmatrix}I_{q-d}\\ 0_{c\times(q-d)}\end{pmatrix}, \quad \underline\xi_\perp = (K\ \ H), \quad \underline B^* = (H\ \ 0_{(q+1)\times(q-d)}), \quad \underline E(z) = \begin{pmatrix}E_1(z)\\ E_2(z)\\ E_3(z)\end{pmatrix},$
where
$K = \begin{pmatrix}0_{(q-d)\times 1}\\ 1\\ 0_{d\times 1}\end{pmatrix}, \qquad H = \begin{pmatrix}0_{(q+1-d)\times d}\\ I_d\end{pmatrix}, \qquad E_2(z) = \big(e(z)\ \ 0_{1\times(q-1)}\big),$
$E_1(z) = \big(0_{(q-d)\times d}\ \ T_1(z)\big)$, with $T_1(z)$ the $(q-d)\times(q-d)$ bidiagonal matrix carrying $k_1(z), \ldots, k_{q-d}(z)$ on the main diagonal and $h_1(z), \ldots, h_{q-d-1}(z)$ on the superdiagonal, and $E_3(z) = \big(T_3(z)\ \ 0_{d\times(q-d-1)}\big)$, with $T_3(z)$ the $d\times(d+1)$ bidiagonal matrix carrying $f_i(z)$ in positions $(i,i)$ and $g_i(z)$ in positions $(i,i+1)$, the entries $e$, $k_i$, $h_i$, $f_i$ and $g_i$ being scalar polynomials of degree $s_1$.
We denote by $\mathbf q_1$ the vector including the coefficients of the polynomials $f_i$, $i = 1, \ldots, d$, and $k_i$, $i = 1, \ldots, q-d$, a total of $q(s_1+1)$ coefficients; by $\mathbf q_2$ the vector including the coefficients of the polynomials $e$, $g_i$, $i = 1, \ldots, d$, and $h_i$, $i = 1, \ldots, q-d-1$, a total of $q(s_1+1)$ coefficients; and by $\mathbf q_0$ the vector including the zeros and the ones in the definition of $\underline\xi$, $\underline\eta$, $\underline B^*$, $\underline E$. Define $\mathbf q = (\mathbf q_0'\ \mathbf q_1'\ \mathbf q_2')'$, which is a $\lambda$-dimensional parameter vector. We put no restriction on $\mathbf q_1$ and $\mathbf q_2$, so that both can take any value in $\mathbb{R}^\nu$, with $\nu = q(s_1+1)$. Note that $\mathbf q$ does not necessarily belong to $\Pi$.
We have
$\underline M_{\mathbf q}(z) = \begin{pmatrix}0_{1\times d} & 0_{1\times(q-d)}\\ I_d & 0_{d\times(q-d)}\\ 0_{(q-d)\times d} & I_{q-d}\end{pmatrix} + (1-z)\begin{pmatrix}E_2(z)\\ E_3(z)\\ 0_{(q-d)\times q}\end{pmatrix} + (1-z)^2\begin{pmatrix}0_{1\times q}\\ 0_{d\times q}\\ E_1(z)\end{pmatrix}. \quad (A1)$
The matrix $\underline M_{\mathbf q}(z)$ has zero entries except on the diagonal joining the positions $(1,1)$ and $(q,q)$ and on the diagonal joining $(2,1)$ and $(q+1,q)$. The matrices $\underline M_{1\mathbf q}(z)$ and $\underline M_{2\mathbf q}(z)$ are upper- and lower-triangular, respectively, and
$\det(\underline M_{1\mathbf q}(z)) = \big[1+(1-z)f_1(z)\big]\cdots\big[1+(1-z)f_d(z)\big]\,\big[1+(1-z)^2k_1(z)\big]\cdots\big[1+(1-z)^2k_{q-d}(z)\big] = L^{\mathbf q}_{1,d_1}z^{d_1} + \cdots + L^{\mathbf q}_{1,0},$
$\det(\underline M_{2\mathbf q}(z)) = (1-z)^{2q-d-1}\,e(z)\,\big[g_1(z)\cdots g_d(z)\big]\,\big[h_1(z)\cdots h_{q-d-1}(z)\big] = L^{\mathbf q}_{2,d_2}z^{d_2} + \cdots + L^{\mathbf q}_{2,0}.$
Note that $\det(\underline M_{1\mathbf q}(z))$ does not depend on $\mathbf q_2$, while $\det(\underline M_{2\mathbf q}(z))$ does not depend on $\mathbf q_1$. Thus we use the notation $\delta_1^{\mathbf q_1}(z) = \det(\underline M_{1\mathbf q}(z))$, $\delta_2^{\mathbf q_2}(z) = \det(\underline M_{2\mathbf q}(z))$, $M^{\mathbf q_1}_{1,d_1} = L^{\mathbf q}_{1,d_1}$, $M^{\mathbf q_2}_{2,d_2} = L^{\mathbf q}_{2,d_2}$. Now:
(i)
Let $\mathbf q_2^*\in\mathbb{R}^\nu$ be such that none of the leading coefficients of the polynomials $e$, $g_i$ and $h_i$ vanishes. Of course $M^{\mathbf q_2^*}_{2,d_2} \neq 0$.
(ii)
Let $\check z$ be a root of $\delta_2^{\mathbf q_2^*}(z)$. If $\check z = 1$, then $\check z$ is not a root of $\delta_1^{\mathbf q_1}(z)$ for any $\mathbf q_1\in\mathbb{R}^\nu$. Suppose then that $\check z$ is a root of $e(z)$, of some $g_j(z)$ or of some $h_j(z)$. As the parameters of the polynomials $f_i$ and $k_i$ are free to vary in $\mathbb{R}^\nu$, then, generically in $\mathbb{R}^\nu$, $\delta_1^{\mathbf q_1}(\check z) \neq 0$. Iterating over all the roots of $\delta_2^{\mathbf q_2^*}(z)$, generically in $\mathbb{R}^\nu$ the polynomials $\delta_1^{\mathbf q_1}(z)$ and $\delta_2^{\mathbf q_2^*}(z)$ have no roots in common. Moreover, generically in $\mathbb{R}^\nu$, $M^{\mathbf q_1}_{1,d_1} \neq 0$. Thus there exists $\mathbf q_1^*$ such that: (a) $M^{\mathbf q_1^*}_{1,d_1} \neq 0$; (b) $\delta_1^{\mathbf q_1^*}(z)$ and $\delta_2^{\mathbf q_2^*}(z)$ have no roots in common.
(iii)
Now let $\mathbf p^* = (\mathbf q_0'\ \mathbf q_1^{*\prime}\ \mathbf q_2^{*\prime})'$, so that
$\det(M_{1\mathbf p^*}(z)) = \delta_1^{\mathbf q_1^*}(z), \qquad \det(M_{2\mathbf p^*}(z)) = \delta_2^{\mathbf q_2^*}(z).$
Using (i) and (ii): (A) the leading coefficients of $\det(M_{1\mathbf p^*}(z))$ and $\det(M_{2\mathbf p^*}(z))$ do not vanish; (B) $\det(M_{1\mathbf p^*}(z))$ and $\det(M_{2\mathbf p^*}(z))$ have no root in common, so that their resultant does not vanish. This proves the proposition for $r = q + 1$.
Generalizing this result to $r > q + 1$ is easy. Let us define the family $\underline N(z)$ in the following way: (a) specify $\eta$, $\xi$, $E_1(z)$ and $E_3(z)$ as in the definition of $\underline M(z)$; (b) then let
$K = \begin{pmatrix}0_{(q-d)\times(r-q)}\\ I_{r-q}\\ 0_{d\times(r-q)}\end{pmatrix}, \quad H = \begin{pmatrix}0_{(r-d)\times d}\\ I_d\end{pmatrix}, \quad \underline\xi_\perp = (K\ \ H), \quad \underline B^* = (H\ \ 0_{r\times(q-d)}), \quad E_2(z) = \begin{pmatrix}0_{(r-q-1)\times q}\\ e(z)\ \ 0_{1\times(q-1)}\end{pmatrix}.$
We have
$\underline N(z) = \begin{pmatrix}0_{(r-q)\times d} & 0_{(r-q)\times(q-d)}\\ I_d & 0_{d\times(q-d)}\\ 0_{(q-d)\times d} & I_{q-d}\end{pmatrix} + (1-z)\begin{pmatrix}E_2(z)\\ E_3(z)\\ 0_{(q-d)\times q}\end{pmatrix} + (1-z)^2\begin{pmatrix}0_{(r-q)\times q}\\ 0_{d\times q}\\ E_1(z)\end{pmatrix}.$
It is easy to see that the $(q+1)\times q$ lower submatrix of $\underline N(z)$ is identical to the matrix $\underline M_{\mathbf q}(z)$ in (A1). □

Appendix A.2. If $r > q$ and $c \le q$, Assumptions 5 and 6 Do Not Imply That $e_t$ Is a Non-Cointegrated $I(0)$ Process

Let $r = 3$, $q = 2$, $S(L) = I_3$,
$\xi = \begin{pmatrix}1\\0\\0\end{pmatrix}, \qquad \eta = \begin{pmatrix}0\\1\end{pmatrix}, \qquad \xi_\perp = \begin{pmatrix}0 & 0\\ 1 & 0\\ 0 & 1\end{pmatrix}, \qquad B^* = \begin{pmatrix}a & b\\ 1 & 0\\ 1 & 0\end{pmatrix}.$
In this case $c = 2$ and $d = 1$, so that $c = q$ (see Remark 6). We have
$\begin{pmatrix}\xi_\perp'B^*\\ \eta'\end{pmatrix} = \begin{pmatrix}1 & 0\\ 1 & 0\\ 0 & 1\end{pmatrix}.$
We see that Assumptions 5 and 6 hold. However, $\operatorname{rank}(\xi_\perp'B^*) = 1$, so that $e_t$, though being $I(0)$, is not a non-cointegrated $I(0)$ process. On the other hand, if the $(3,2)$ entry of $B^*$ is 1 instead of 0, $e_t$ is non-cointegrated.
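The example is easy to verify numerically; a quick check (ours; the values of $a$ and $b$ are arbitrary):

```python
# Quick numerical check (ours) of the Appendix A.2 example: Assumptions 5 and 6
# hold, yet xi_perp' B* has rank 1 < c, so e_t is I(0) but cointegrated.
import numpy as np

a, b = 0.5, -0.3                                   # arbitrary values for the free entries
xi_perp = np.array([[0., 0.], [1., 0.], [0., 1.]])
eta = np.array([[0.], [1.]])
Bstar = np.array([[a, b], [1., 0.], [1., 0.]])

M5 = np.vstack([xi_perp.T @ Bstar, eta.T])         # the matrix in Assumption 5
print(np.linalg.matrix_rank(M5))                   # 2 = q: Assumption 5 holds
print(np.linalg.matrix_rank(xi_perp.T @ Bstar))    # 1 < c = 2: e_t is cointegrated

Bstar[2, 1] = 1.                                   # change the (3,2) entry to 1
print(np.linalg.matrix_rank(xi_perp.T @ Bstar))    # 2: now e_t is non-cointegrated I(0)
```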

Appendix B. Non Uniqueness

In Proposition 3 we prove that a singular $I(1)$ vector with cointegrating rank $c$ has a finite error correction representation with $c$ error terms. On the other hand, as we have seen in Remark 5, when $c = r - q$ the singular vector $y_t$ also has an autoregressive representation in the differences, i.e., a representation with zero error terms. In Appendix B.1 we give an example hinting that $y_t$ has error correction representations with any number of error terms between $d$ and $c$. However, in Appendix B.2 we show that all such representations produce the same impulse-response functions.

Appendix B.1. Alternative Representations with Different Numbers of Error Terms

Let $S(L) = I_r$ and consider the following example, with $r = 3$, $q = 2$, $c = 2$, so that $d = 1$:
$\xi = \begin{pmatrix}1\\1\\1\end{pmatrix}, \qquad \eta = \begin{pmatrix}1\\2\end{pmatrix}, \qquad \xi_\perp = \begin{pmatrix}1 & 0\\ -1 & 1\\ 0 & -1\end{pmatrix}.$
We have
$(1-L)\begin{pmatrix}\xi_\perp'\\ \xi'\end{pmatrix}y_t = \begin{pmatrix}1-L & 0 & 0\\ 0 & 1-L & 0\\ 0 & 0 & 1\end{pmatrix}\left[\begin{pmatrix}b^*_{11}-b^*_{21} & b^*_{12}-b^*_{22}\\ b^*_{21}-b^*_{31} & b^*_{22}-b^*_{32}\\ 3 & 6\end{pmatrix} + (1-L)\hat E(L)\right]u_t,$
where $(1-L)\hat E(L)$ gathers the second and third terms in $M(L)$. If the assumptions of Proposition 2 hold, we obtain an error correction representation with error terms
$\xi_\perp'y_t = \begin{pmatrix}y_{1t} - y_{2t}\\ y_{2t} - y_{3t}\end{pmatrix}.$
However, we also have
$(1-L)\begin{pmatrix}\xi_\perp'\\ \xi'\end{pmatrix}y_t = \begin{pmatrix}1-L & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\end{pmatrix}\left[\begin{pmatrix}b^*_{11}-b^*_{21} & b^*_{12}-b^*_{22}\\ (1-L)(b^*_{21}-b^*_{31}) & (1-L)(b^*_{22}-b^*_{32})\\ 3 & 6\end{pmatrix} + (1-L)\check E(L)\right]u_t = \begin{pmatrix}1-L & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\end{pmatrix}\check M(L)u_t.$
Under suitable assumptions on the coefficients $b^*_{ij}$ and $\check E(L)$, assuming in particular that the matrix
$\begin{pmatrix}b^*_{11}-b^*_{21} & b^*_{12}-b^*_{22}\\ 3 & 6\end{pmatrix}$
is nonsingular, the matrix $\check M(L)$ is zeroless and therefore has a finite-degree left inverse. Proceeding as in Proposition 2, we obtain an alternative error correction representation with just one error term, namely $y_{1t} - y_{2t}$.
This example should be sufficient to convey the idea that $y_t$ admits error correction representations with any number of error terms between a minimum of $d$ and a maximum of $c = r - q + d$.
The problem of error correction representations, with different numbers of error terms, has been recently addressed in Deistler and Wagner (2017). An implication of their main result (see Theorem 1, p. 41) is that if y t has the error correction representation
$$\tilde A(L) y_t = \tilde A^*(L)(1-L) y_t + \tilde A(1) y_{t-1} = \tilde B \tilde u_t,$$
and $\mathrm{rank}(\tilde A(1)) < c$ (the number of error terms is not the maximum), then $\tilde A(L)$ and $\tilde B$ are not left coprime.
The consequences of Deistler and Wagner's result have yet to be fully developed; in Propositions 2 and 3 we have only considered representations with $c$ error terms. On the non-uniqueness of autoregressive representations for singular vectors with rational spectral density, see also Chen et al. (2011); Anderson et al. (2012); Forni et al. (2015).

Appendix B.2. Uniqueness of Impulse-Response Functions

Suppose that the assumptions of Proposition 2, weak form, hold. Let y t be a solution of Equation (10), so that
$$(1-L) y_t = S(L)^{-1} B(L) u_t, \tag{A2}$$
and suppose that y t has the autoregressive representation
$$\tilde A(L) y_t = \tilde B \tilde u_t, \tag{A3}$$
where $\tilde A(L)$ is a rational matrix with poles outside the unit circle, $\tilde A(0) = I_r$, $\tilde u_t$ is a nonsingular $q$-dimensional white noise, and $\tilde B$ is a full-rank $r \times q$ matrix (see footnote 5). We have
$$\tilde A(L)\left[(1-L) y_t\right] = (1-L) \tilde B \tilde u_t. \tag{A4}$$
The assumption that $\tilde B$ is full rank, together with the argument used, e.g., in Brockwell and Davis (1991), p. 111, Problem 3.8, implies that $\tilde u_t$ is fundamental for $(1-L)y_t$. Thus $\tilde u_t = Q u_t$, where $Q$ is a nonsingular $q \times q$ matrix (see Rozanov (1967), p. 57), and $\tilde B \tilde u_t = [\tilde B Q] u_t$.
On the other hand, from (A2) and (A4):
$$\tilde A(L) S(L)^{-1} B(L) u_t = (1-L)[\tilde B Q] u_t. \tag{A5}$$
As $u_t$ is nonsingular, $\tilde A(L) S(L)^{-1} B(L) = (1-L)[\tilde B Q]$. Setting $L = 0$ we have $\tilde B Q = B(0)$, so that (A3) becomes
$$\tilde A(L) y_t = B(0) u_t, \tag{A6}$$
while (A5) becomes
$$\tilde A(L) S(L)^{-1} B(L) u_t = (1-L) B(0) u_t. \tag{A7}$$
The impulse-response function of $y_t$ to $u_t$ resulting from (A6) is $H(L)B(0)$, where $H(L)\tilde A(L) = I_r$. Multiplying both sides of (A7) by $H(L)$ we obtain
$$S(L)^{-1} B(L) = (1-L) H(L) B(0),$$
so that $H(L)B(0)$ is obtained by cumulating $S(L)^{-1}B(L)$ and is therefore independent of $\tilde A(L)$.
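In practical terms, the last equation says that the impulse responses of $y_t$ in levels are the partial sums of the impulse responses of $(1-L)y_t$. A minimal numpy illustration, with $S(L) = I_r$ and a hypothetical finite-order $B(L)$ ($r = 3$, $q = 2$); the coefficients are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)     # placeholder coefficients
B = rng.normal(size=(3, 3, 2))     # B[k] is the r x q coefficient of L^k

# responses of (1-L)y_t to u_t are the B[k] themselves; the responses of
# y_t in levels, i.e., H(L)B(0), are their partial sums over lags
irf_levels = np.cumsum(B, axis=0)
```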

Appendix C. Data Generating Process for the Simulations

The simulation results of Section 3.4 are obtained using the following specification of (14):
$$A(L) y_t = A^*(L)(1-L) y_t + \alpha \beta' y_{t-1} = C(0) u_t = G H u_t,$$
where $r = 4$, $q = 3$, $c = 3$, and the degree of $A(L)$ is 2, so that the degree of $A^*(L)$ is 1. $A(L)$ is generated using the factorization
$$A(L) = U(L) M(L) V(L),$$
where U ( L ) and V ( L ) are r × r matrix polynomials with all their roots outside the unit circle, and
$$M(L) = \begin{pmatrix} (1-L) I_{r-c} & 0 \\ 0 & I_c \end{pmatrix}$$
(see Watson 1994). To get a VAR(2) we set $U(L) = I_r - U_1 L$ and $V(L) = I_r$; then, writing $M(L) = I_r - M_1 L$, we get $A_1 = M_1 + U_1$ and $A_2 = -U_1 M_1$.
Regarding the generation of the data, the diagonal entries of the matrix $U_1$ are drawn from a uniform distribution between 0.5 and 0.8, while the off-diagonal entries are drawn from a uniform distribution between 0 and 0.3; $U_1$ is then multiplied by a scalar so that its largest eigenvalue is 0.6. The matrix $G$ is generated as in Bai and Ng (2007): (1) $\tilde G$ is an $r \times r$ diagonal matrix of rank $q$ whose nonzero entries $\tilde g_{ii}$ are drawn from the uniform distribution between 0.8 and 1.2; (2) $\check G$ is obtained by orthogonalizing an $r \times r$ uniform random matrix; (3) $G$ is equal to the first $q$ columns of the matrix $\check G \tilde G^{1/2}$. Lastly, the orthogonal matrix $H$ is such that the upper $3 \times 3$ submatrix of $GH$ is lower triangular. The results are based on 1000 replications. The matrices $U_1$, $G$ and $H$ are generated only once (the numerical values are available on request), so that the set of impulse responses to be estimated is the same across replications, whereas the vector $u_t$ is redrawn from $N(0, I_3)$ at each replication.
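A compact numpy sketch of this data generating process follows. It is a sketch under stated assumptions: the seed is arbitrary, and the LQ-type construction of $H$ is our own implementation choice (the paper draws $U_1$, $G$ and $H$ once, with values available on request):

```python
import numpy as np

rng = np.random.default_rng(0)            # arbitrary seed
r, q, c, T = 4, 3, 3, 1000

# U1: diagonal entries uniform on [0.5, 0.8], off-diagonal on [0, 0.3],
# rescaled so that the largest eigenvalue (in modulus) is 0.6
U1 = rng.uniform(0.0, 0.3, size=(r, r))
U1[np.diag_indices(r)] = rng.uniform(0.5, 0.8, size=r)
U1 *= 0.6 / np.abs(np.linalg.eigvals(U1)).max()

# M(L) = I - M1 L, with M1 = diag(I_{r-c}, 0): r - c = 1 unit root
M1 = np.diag(np.r_[np.ones(r - c), np.zeros(c)])
A1 = M1 + U1
A2 = -U1 @ M1                             # from A(L) = U(L)M(L), V(L) = I_r

# G as in Bai and Ng (2007)
G_tilde = np.diag(np.r_[rng.uniform(0.8, 1.2, size=q), np.zeros(r - q)])
G_check, _ = np.linalg.qr(rng.uniform(size=(r, r)))
G = (G_check @ np.sqrt(G_tilde))[:, :q]

# H orthogonal such that the upper q x q block of GH is lower triangular:
# from G[:q, :].T = Q R we get G[:q, :] @ Q = R', which is lower triangular
Q_, _ = np.linalg.qr(G[:q, :].T)
H = Q_
GH = G @ H

# simulate the VAR(2): y_t = A1 y_{t-1} + A2 y_{t-2} + GH u_t
u = rng.standard_normal((T + 2, q))
y = np.zeros((T + 2, r))
for t in range(2, T + 2):
    y[t] = A1 @ y[t - 1] + A2 @ y[t - 2] + GH @ u[t]
```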

References

1. Amengual, Dante, and Mark W. Watson. 2007. Consistent estimation of the number of dynamic factors in a large N and T panel. Journal of Business and Economic Statistics 25: 91–96.
2. Anderson, Brian D. O., and Manfred Deistler. 2008a. Generalized linear dynamic factor models: A structure theory. Paper presented at the IEEE Conference on Decision and Control, Cancun, Mexico, December 9–11.
3. Anderson, Brian D. O., and Manfred Deistler. 2008b. Properties of zero-free transfer function matrices. SICE Journal of Control, Measurement and System Integration 1: 284–92.
4. Anderson, Brian D. O., Manfred Deistler, Weitian Chen, and Alexander Filler. 2012. Autoregressive models of singular spectral matrices. Automatica 48: 2843–49.
5. Bai, Jushan, and Serena Ng. 2007. Determining the number of primitive shocks in factor models. Journal of Business and Economic Statistics 25: 52–60.
6. Banerjee, Anindya, Massimiliano Marcellino, and Igor Masten. 2014. Forecasting with factor-augmented error correction models. International Journal of Forecasting 30: 589–612.
7. Banerjee, Anindya, Massimiliano Marcellino, and Igor Masten. 2017. Structural FECM: Cointegration in large-scale structural FAVAR models. Journal of Applied Econometrics 32: 1069–86.
8. Barigozzi, Matteo, Antonio M. Conti, and Matteo Luciani. 2014. Do euro area countries respond asymmetrically to the common monetary policy? Oxford Bulletin of Economics and Statistics 76: 693–714.
9. Barigozzi, Matteo, Marco Lippi, and Matteo Luciani. 2019. Large-dimensional dynamic factor models: Estimation of impulse-response functions with I(1) cointegrated factors. arXiv:1602.02398.
10. Bauer, Dietmar, and Martin Wagner. 2012. A state space canonical form for unit root processes. Econometric Theory 28: 1313–49.
11. Brockwell, Peter J., and Richard A. Davis. 1991. Time Series: Theory and Methods, 2nd ed. New York: Springer.
12. Canova, Fabio. 2007. Methods for Applied Macroeconomic Research. Princeton: Princeton University Press.
13. Chen, Weitian, Brian D. O. Anderson, Manfred Deistler, and Alexander Filler. 2011. Solutions of Yule–Walker equations for singular AR processes. Journal of Time Series Analysis 32: 531–38.
14. Deistler, Manfred, Brian D. O. Anderson, Alexander Filler, Ch. Zinner, and Weitian Chen. 2010. Generalized linear dynamic factor models: An approach via singular autoregressions. European Journal of Control 16: 211–24.
15. Deistler, Manfred, and Martin Wagner. 2017. Cointegration in singular ARMA models. Economics Letters 155: 39–42.
16. Forni, Mario, and Luca Gambetti. 2010. The dynamic effects of monetary policy: A structural factor model approach. Journal of Monetary Economics 57: 203–16.
17. Forni, Mario, Domenico Giannone, Marco Lippi, and Lucrezia Reichlin. 2009. Opening the black box: Structural factor models versus structural VARs. Econometric Theory 25: 1319–47.
18. Forni, Mario, Marc Hallin, Marco Lippi, and Lucrezia Reichlin. 2000. The generalized dynamic factor model: Identification and estimation. The Review of Economics and Statistics 82: 540–54.
19. Forni, Mario, Marc Hallin, Marco Lippi, and Paolo Zaffaroni. 2015. Dynamic factor models with infinite-dimensional factor spaces: One-sided representations. Journal of Econometrics 185: 359–71.
20. Forni, Mario, and Marco Lippi. 2001. The generalized dynamic factor model: Representation theory. Econometric Theory 17: 1113–41.
21. Franchi, Massimo, and Paolo Paruolo. 2019. A general inversion theorem for cointegration. Econometric Reviews 38: 1176–201.
22. Franklin, J. N. 2000. Matrix Theory, 2nd ed. New York: Dover Publications.
23. Giannone, Domenico, Lucrezia Reichlin, and Luca Sala. 2005. Monetary policy in real time. In NBER Macroeconomics Annual 2004. Edited by Mark Gertler and Kenneth Rogoff. Cambridge: MIT Press, chp. 3, pp. 161–224.
24. Gregoir, Stéphane. 1999. Multivariate time series with various hidden unit roots, Part I. Econometric Theory 15: 435–68.
25. Johansen, Søren. 1988. Statistical analysis of cointegration vectors. Journal of Economic Dynamics and Control 12: 231–54.
26. Johansen, Søren. 1991. Estimation and hypothesis testing of cointegration vectors in Gaussian vector autoregressive models. Econometrica 59: 1551–80.
27. Johansen, Søren. 1995. Likelihood-Based Inference in Cointegrated Vector Autoregressive Models, 1st ed. Oxford: Oxford University Press.
28. Lancaster, Peter, and Miron Tismenetsky. 1985. The Theory of Matrices, 2nd ed. New York: Academic Press.
29. Luciani, Matteo. 2015. Monetary policy and the housing market: A structural factor analysis. Journal of Applied Econometrics 30: 199–218.
30. Phillips, Peter C. B. 1998. Impulse response and forecast error variance asymptotics in nonstationary VARs. Journal of Econometrics 83: 21–56.
31. Rozanov, Yu. A. 1967. Stationary Random Processes. San Francisco: Holden-Day.
32. Sargent, Thomas J. 1989. Two models of measurements and the investment accelerator. Journal of Political Economy 97: 251–87.
33. Sims, Christopher A., James H. Stock, and Mark W. Watson. 1990. Inference in linear time series models with some unit roots. Econometrica 58: 113–44.
34. Stock, James H., and Mark W. Watson. 1988. Testing for common trends. Journal of the American Statistical Association 83: 1097–107.
35. Stock, James H., and Mark W. Watson. 2002a. Forecasting using principal components from a large number of predictors. Journal of the American Statistical Association 97: 1167–79.
36. Stock, James H., and Mark W. Watson. 2002b. Macroeconomic forecasting using diffusion indexes. Journal of Business and Economic Statistics 20: 147–62.
37. Stock, James H., and Mark W. Watson. 2016. Dynamic factor models, factor-augmented vector autoregressions, and structural vector autoregressions in macroeconomics. In Handbook of Macroeconomics. Edited by John B. Taylor and Harald Uhlig. Amsterdam: North Holland, Elsevier, vol. 2A, chp. 8, pp. 415–525.
38. Van der Waerden, Bartel Leendert. 1953. Modern Algebra, 2nd ed. New York: Frederick Ungar, vol. I.
39. Watson, Mark W. 1994. Vector autoregressions and cointegration. In Handbook of Econometrics. Edited by Robert F. Engle and Daniel L. McFadden. Amsterdam: North Holland, Elsevier, vol. 4, chp. 47, pp. 2843–915.
Footnotes
1. Usually orthonormality is assumed. This is convenient but not necessary in the present paper.
2. To our knowledge, the present paper is the first to study cointegration and error correction representations for I(1) singular vectors, the factors of I(1) dynamic factor models in particular. An error correction model in the DFM framework is studied in (Banerjee et al. 2014, 2017). However, their focus is on the relationship between the observable variables and the factors. Their error correction term is a linear combination of the variables $x_{it}$ and the factors $F_t$, which is stationary if the idiosyncratic components are stationary (so that the x's and the factors are cointegrated). Because of this and other differences, their results are not directly comparable to those in the present paper.
3. In the square case, $r = q$, Assumption 3 holds if and only if $M(z)$ is unimodular.
4. If $z^*$ is a zero of $M(z)$, multiply $M(z)$ by an invertible $r \times r$ matrix $Q_{z^*}$ such that $z^*$ is a zero of, say, the first row of $Q_{z^*}M(z)$. Then multiply by the $r \times r$ diagonal matrix with $(z - z^*)^{-1}$ in position (1,1) and unity elsewhere on the main diagonal. Iterating, all the zeros of $M(z)$ are removed.
5. Multiplying both sides of (A3) by $(1-L)$ and using (A2), we obtain $\tilde A(L) S(L)^{-1} B(L) u_t = (1-L)\tilde B \tilde u_t$. Comparing the spectral densities of the right- and left-hand sides, it is easy to prove that $\tilde u_t$ must be a $q$-dimensional, nonsingular white noise and that the rank of $\tilde B$ must be $q$.
Table 1. Monte Carlo Simulations. VECM: r = 4, q = 3, c = 3.

            T = 100                          T = 500
Lags    DVAR   LVAR   VECM        Lags    DVAR   LVAR   VECM
0       0.06   0.05   0.05        0       0.02   0.02   0.02
4       0.26   0.18   0.17        4       0.23   0.07   0.07
20      0.30   0.37   0.22        20      0.25   0.14   0.09
40      0.30   0.45   0.22        40      0.25   0.21   0.09
80      0.30   0.57   0.22        80      0.25   0.32   0.09

            T = 1000                         T = 5000
Lags    DVAR   LVAR   VECM        Lags    DVAR   LVAR   VECM
0       0.02   0.02   0.02        0       0.01   0.01   0.01
4       0.23   0.05   0.05        4       0.22   0.02   0.02
20      0.25   0.09   0.07        20      0.25   0.03   0.03
40      0.25   0.13   0.07        40      0.25   0.04   0.03
80      0.25   0.22   0.07        80      0.25   0.06   0.03
Root mean squared errors (RMSEs) at different lags when estimating the impulse-response functions of the simulated variables $y_t$ to the shocks $u_t$. Estimation is carried out using three different autoregressive representations: a VAR for $(1-L)y_t$ (DVAR), a VAR for $y_t$ in levels (LVAR), and a VECM with $c = r - q + d$ error terms (VECM). The results are based on 1000 replications. For the data generating process see Appendix C. The RMSEs are obtained by averaging over all replications and all $4 \times 3$ responses.
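On one natural reading of this averaging, the RMSE at lag k is the square root of the squared estimation errors averaged over replications and over the 4 × 3 responses. A sketch with hypothetical array names:

```python
import numpy as np

def rmse_at_lag(irf_hat, irf_true, k):
    """RMSE at lag k, averaged over replications and all r x q responses.

    irf_hat:  estimated responses, shape (n_repl, n_lags + 1, r, q)
    irf_true: true responses,      shape (n_lags + 1, r, q)
    """
    err = irf_hat[:, k] - irf_true[k]
    return np.sqrt(np.mean(err ** 2))
```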
