Article

Selecting a Model for Forecasting

by Jennifer L. Castle 1,*, Jurgen A. Doornik 2 and David F. Hendry 2

1 Magdalen College and Climate Econometrics, University of Oxford, High Street, Oxford OX1 4AU, UK
2 Nuffield College, Climate Econometrics and Institute for New Economic Thinking at the Oxford Martin School, University of Oxford, Nuffield College, New Road, Oxford OX1 1NF, UK
* Author to whom correspondence should be addressed.
Submission received: 9 November 2018 / Revised: 16 June 2021 / Accepted: 17 June 2021 / Published: 25 June 2021
(This article belongs to the Special Issue Celebrated Econometricians: David Hendry)

Abstract: We investigate forecasting in models that condition on variables for which future values are unknown. We consider the role of the significance level because it guides the binary decisions whether to include or exclude variables. The analysis is extended by allowing for a structural break, either in the first forecast period or just before. Theoretical results are derived for a three-variable static model, but generalized to include dynamics and many more variables in the simulation experiment. The results show that the trade-off for selecting variables in forecasting models in a stationary world, namely that variables should be retained if their noncentralities exceed unity, still applies in settings with structural breaks. This provides support for model selection at looser than conventional settings, albeit with many additional features explaining the forecast performance, and with the caveat that retaining irrelevant variables that are subject to location shifts can worsen forecast performance.

1. Introduction

There are many approaches to formulating models when the sole objective is forecasting, from the very parsimonious through to large systems. However, there is little agreement on which performs best on a forecasting criterion: see Makridakis and Hibon (2000) and Fildes and Ord (2002) for evidence from forecast competitions. Clements and Hendry (2001) suggest that this lack of agreement is the result of intermittent distributional shifts that affect alternative formulations in different ways. We address this puzzle by analysing the selection of models in the pursuit of optimal mean square forecast error (MSFE) in settings with structural breaks.
We focus on regression models that are linear in the parameters, and consider model selection controlled by the nominal significance level of the statistical tests. Loose significance levels have been shown to be optimal for selecting regression models for stationary processes when evaluating on a one-step-ahead MSFE. Shibata (1980) showed that the Akaike information criterion (AIC, see Akaike 1973) is an asymptotically efficient selection method when the data generating process (DGP) is an infinite-order process; also see Ing and Wei (2003). Many other criteria have been proposed that aim to have optimal properties in certain settings, but information criteria alone are not a sufficient principle for selecting models, as they do not ensure congruence, so a misspecified model could be selected: see Bontemps and Mizon (2003). We explore general-to-specific (Gets) model selection in the simulation exercise to narrow the class of forecasting models down to undominated models. This yields well-specified encompassing models in sample, although nonstationarities may preclude those benefits continuing over the forecast horizon.
The theoretical analysis commences with a bivariate conditional model that is part of a three-variable system, in which the selection decision is whether to retain or exclude one of the regressors. This is empirically relevant, as demonstrated by UK inflation, where autoregressive (AR) forecasting models are augmented with the unemployment rate. The bivariate model is analysed first in a stationary setting. This is extended to a nonstationary setting where location shifts occur at or near the forecast origin. The static setting still requires forecasts of the conditioning variables, and alternative forecasting devices are considered, including the two extremes of the class of robust forecasting devices proposed by Castle et al. (2015): the sample mean and the random walk. The results confirm that regressors should be retained for forecasting if their noncentralities exceed unity, regardless of whether or not there is a structural break, or of the forecasting device used. These analytic results map to a selection significance level of 16% in the bivariate case, much looser than the conventional significance levels used. The results closely match those of AIC, which can be interpreted as a likelihood ratio $\chi^2$ test for a pair of nested models with one degree of freedom and a penalty of two, and also gives a significance level of approximately 16%: see Pötscher (1991) and Leeb and Pötscher (2009).
A key source of forecast failure is an induced shift in the equilibrium mean of the variable being forecast, irrespective of whether those conditioning variables are included in the forecasting model; see the taxonomy in Hendry and Mizon (2012). Consequently, the simulation exercise evaluates a wide range of settings including larger models, break types and magnitudes at or near the forecast origin, and the method of forecasting. We consider a range of significance levels from the very tight (0.001), eliminating almost all potentially irrelevant variables, to the very loose (0.50), enabling retention of relevant variables even if they are only marginally significant. The results enable evaluation of the costs when either omitting relevant variables, or from incorrectly retaining irrelevant variables. Overall, the results support looser than conventional significance levels for selecting forecasting models, with a 10% target significance level often producing superior forecasts.
This paper is structured as follows. Section 2 outlines the aims of this paper, then Section 3 formulates the model framework that is analysed. Section 4 considers the choice of selection significance level for forecasting in a stationary DGP. Section 5 analyses selection in a nonstationary DGP where a location shift occurs out of sample in one of the regressors, and investigates the consequences of that variable’s inclusion or exclusion in the forecasting model. Section 6 considers the impacts on selection of in-sample shifts using different forecasting devices. The analytic results are summarized in Section 7. Section 8 and Section 9 present simulation design and evidence on the performance of the various approaches, examining the preferred significance level to minimize MSFE across experimental designs. Section 10 concludes this paper. Appendix A provides analytical calculations and Supplementary Tables are given in Appendix B.

2. Empirical Motivation

An empirical example of inflation forecasting motivates our interest in structural breaks and their roles in forecast accuracy and the selection of regressors. Two popular approaches within this large literature are single-equation forecasting models based on past inflation and so-called ‘Phillips curve forecasts’. The former usually consist of univariate models such as autoregressive integrated moving average (ARIMA) models. In the latter, the univariate model is augmented with an activity variable such as the unemployment rate or the output gap; see Stock and Watson (2009).
The framework considered below, although static, can be applied to these two models when the econometrician wishes to determine whether to augment a univariate forecasting model with the contemporaneous unemployment rate. This ‘exogenous’ variable is subject to breaks in the form of location shifts, which may occur at or near the forecast horizon. Figure 1 records the quarterly observations on the annual percentage inflation in the UK consumer price index, $\pi_t$, and the UK unemployment rate as a percentage, $U_t$, along with a broken mean obtained by step indicator saturation (SIS, see Castle et al. 2015) at a nominal significance level $\alpha = 0.1\%$.
The analytics derived below correspond to a Phillips curve formulation (model $M_1$), a univariate AR model ($M_2$), and selection applied to the unemployment rate using a significance level of 0.16 ($M_3$). Using model-specific coefficients $\mu$, $\beta_i$, $\gamma_i$ and error term $\nu_t$, the three models are:

$$M_1: \Delta\pi_t = \mu + \sum_{i=1}^{4}\beta_i\,\Delta\pi_{t-i} + \sum_{i=0}^{4}\gamma_i\,U_{t-i} + \nu_t,$$
$$M_2: \Delta\pi_t = \mu + \sum_{i=1}^{4}\beta_i\,\Delta\pi_{t-i} + \nu_t,$$
$$M_3: \Delta\pi_t = \mu + \sum_{i=1}^{4}\beta_i\,\Delta\pi_{t-i} + \sum_{i=0}^{4}\gamma_i^{*}\,U_{t-i} + \nu_t,$$

where $\Delta\pi_t = \pi_t - \pi_{t-1}$. Selection using Autometrics at $\alpha = 0.16$ is denoted by $*$; e.g., $\gamma_0^{*} = 0$ implies that the contemporaneous unemployment rate is not selected. Dynamics are included to account for any autocorrelation. The forecasting models are estimated over the period 2000Q1–2013Q4, producing one-quarter-ahead inflation forecasts for the period 2014Q1–2017Q4, evaluated on MSFE. Selection at 16% results in $U_{t-1}$ being retained, with a p-value of 0.149, so it would not have been retained at the commonly used 5% significance level. Longer lags of the unemployment rate were not retained.
Table 1 reports the square root of the MSFEs (RMSFE) for one-step-ahead forecasts over the holdout sample. Three cases are considered, corresponding to the analytics below: (a) known $U_t$, (b) $\hat{U}_t$ forecast by the in-sample mean, and (c) $\hat{U}_t$ forecast by $U_{t-1}$, the random walk forecast. Method (a) is infeasible in practice. When $U_t$ is known, model $M_3$ outperforms $M_1$ and $M_2$, although the differences are not statistically significant. Notably, the feasible random walk forecast combined with selection matches the RMSFE obtained with known $U_t$, showing that selection can be beneficial. The next four sections formalize the framework needed to establish the optimal significance level for selection.
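As a concrete illustration of cases (a)–(c), the following minimal sketch computes RMSFEs for the three regressor-forecast schemes on simulated data (not the UK series; the series `u`, the sample sizes, and the `rmsfe` helper are our own illustrative assumptions):

```python
import math
import random

def rmsfe(actual, forecast):
    """Root mean square forecast error over a holdout sample."""
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual))

random.seed(7)
T, H = 56, 16                             # estimation and holdout lengths (16 one-step forecasts)
u = [5.0 + random.gauss(0, 0.3) for _ in range(T + H)]   # stand-in for a stationary regressor

actual = u[T:]
known_f = u[T:]                           # (a) infeasible: future values known
mean_f = [sum(u[:T]) / T] * H             # (b) in-sample mean forecast
rw_f = u[T - 1:T + H - 1]                 # (c) random-walk forecast, u_{t-1}
```

With no break in `u`, both feasible devices incur a positive RMSFE while the infeasible scheme (a) has none, mirroring the comparisons reported in Table 1.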

3. The Analytic Design

In this section, we specify the analytic design, consisting of a three-variable DGP and two different models for that DGP. In later sections, we introduce a third model that involves selection. Together, these mimic the models $M_1$, $M_2$, and $M_3$ introduced above.
The DGP is a static vector autoregression (VAR) for the variables $y$, $x_1$, $x_2$, with coefficients $\beta_i$, $\mu_i$ and error terms $\epsilon$, $\eta_1$, $\eta_2$, structured as:

$$\begin{pmatrix} 1 & -\beta_1 & -\beta_2 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} y_t \\ x_{1,t} \\ x_{2,t} \end{pmatrix} = \begin{pmatrix} \beta_0 \\ \mu_1 \\ \mu_2 \end{pmatrix} + \begin{pmatrix} \epsilon_t \\ \eta_{1,t} \\ \eta_{2,t} \end{pmatrix}.$$ (1)
Using $\mathbf{y}_t = (y_t : x_{1,t} : x_{2,t})'$ and $\boldsymbol{\mu} = (\mu_y : \mu_1 : \mu_2)'$, assuming normality, we can write (1) as:

$$\mathbf{y}_t \sim \mathrm{IN}_3[\boldsymbol{\mu}, \boldsymbol{\Sigma}].$$ (2)

$\mathrm{IN}_3$ denotes a three-dimensional independent normal distribution, here with mean $\boldsymbol{\mu}$ and variance $\boldsymbol{\Sigma}$. Without loss of generality, we set the variances of $x_1$ and $x_2$ to one, $V[x_{i,t}] = \sigma_{ii}^2 = 1$, and the correlation between $x_1$ and $x_2$ to $\rho$:

$$\boldsymbol{\Sigma} = \begin{pmatrix} \sigma_\epsilon^2 & 0 & 0 \\ 0 & 1 & \rho \\ 0 & \rho & 1 \end{pmatrix}.$$ (3)
Unless otherwise noted, Figures 2–8 use the following parameter values in the calculations: $\beta_0 = 5$, $\beta_1 = 1$, $\sigma_\epsilon^2 = 1$, $\mu_1 = \mu_2 = 2$, $\rho = 0.5$, $M = 10^5$, $T = 50$ and (when there is a break in $\mu_2$) $\delta = 4$.
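A minimal sketch of one draw from this DGP, under the parameter values above (the function name, seed, and the Cholesky-style construction of the correlated errors are our own):

```python
import math
import random

def simulate_dgp(T=50, beta0=5.0, beta1=1.0, beta2=0.5,
                 mu1=2.0, mu2=2.0, rho=0.5, sigma_eps=1.0, seed=1):
    """One draw from the static three-variable DGP: x1 and x2 are
    unit-variance normals with correlation rho, and
    y_t = beta0 + beta1*x1_t + beta2*x2_t + eps_t."""
    rng = random.Random(seed)
    x1, x2, y = [], [], []
    for _ in range(T):
        e1 = rng.gauss(0, 1)
        # construct x2's error so that corr(x1, x2) = rho
        e2 = rho * e1 + math.sqrt(1 - rho ** 2) * rng.gauss(0, 1)
        x1.append(mu1 + e1)
        x2.append(mu2 + e2)
        y.append(beta0 + beta1 * x1[-1] + beta2 * x2[-1] + rng.gauss(0, sigma_eps))
    return y, x1, x2
```

A large draw recovers the assumed means and correlation, which is how the Monte Carlo designs below generate the regressors.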
Although a static DGP may seem restrictive, the main role of adding dynamics to this three-variable VAR would be to slow adjustments to location shifts. Such dynamics are considered in the simulation exercise in Section 9. The analytic design ensures that the assumptions required for valid application of a single t-test are satisfied. In practice, selection from a carefully designed general model including long lags and saturation estimators should deliver approximately martingale-difference normal residuals. While it may be more intuitive to lag the exogenous regressors in the DGP for forecasting purposes, none of the results would change. The current setup naturally leads to analyses of the forecasting models for the contemporaneous exogenous regressors, allowing a comparison of alternative devices and an assessment of open models; see Hendry and Mizon (2012).
Throughout, we assume that the sampling variation of estimates of $\mu_i$ can be neglected, and use the population values to focus on the impacts of location shifts. Then (1) implies $E[y_t] = \mu_y = \beta_0 + \beta_1\mu_1 + \beta_2\mu_2$ with:

$$y_t = \mu_y + \beta_1(x_{1,t} - \mu_1) + \beta_2(x_{2,t} - \mu_2) + \epsilon_t.$$ (4)
Considering the conditional model (4), we compare $M_1$, which includes both weakly exogenous regressors, and $M_2$, which excludes $x_2$:

$$M_1: \; y_t = \beta_0 + \beta_1 x_{1,t} + \beta_2 x_{2,t} + \epsilon_t, \qquad M_2: \; y_t = \phi_0 + \gamma_1 x_{1,t} + \nu_t,$$ (5)

where Appendix A.1 summarises $\phi_0$, $\gamma_1$, $\nu_t$ and $\sigma_\nu^2$.
The choice between $M_1$ and $M_2$ will depend on a test of significance of $x_{2,t}$. The usual Student's t-statistic for $\beta_2$ is:

$$\mathrm{t}_\beta = \frac{\hat\beta_2}{\mathrm{s.e.}[\hat\beta_2]} \sim \mathrm{t}_{T-k}[\psi_\beta],$$ (6)

where $\mathrm{t}_{T-k}[\psi_\beta]$ indicates a singly noncentral Student's t-distribution with noncentrality $\psi_\beta$, nonzero under the alternative hypothesis. Here, $T-k$ is the degrees of freedom, and

$$\psi_\beta^2 = \frac{T\,\beta_2^2\,(1-\rho^2)}{\sigma_\epsilon^2}$$ (7)

is the squared noncentrality parameter under the alternative.
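In code, the noncentrality formula (7) is a one-liner (the function name is ours):

```python
def noncentrality_sq(T, beta2, rho, sigma_eps2=1.0):
    """Squared noncentrality of the t-test on beta2, Equation (7):
    psi_beta^2 = T * beta2^2 * (1 - rho^2) / sigma_eps^2."""
    return T * beta2 ** 2 * (1 - rho ** 2) / sigma_eps2
```

For example, $T = 50$, $\beta_2 = 0.2$, $\rho = 0.5$ and $\sigma_\epsilon^2 = 1$ give $\psi_\beta^2 = 1.5$, just above the retention boundary of unity discussed below.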

4. Selection in a Stationary DGP

We start by analysing the forecast errors of the two models introduced above, $M_1$ and $M_2$, assuming there are no breaks in the DGP. The analysis is then augmented by introducing selection of regressors in $M_3$ (Section 4.2), and by examining the influence of the significance level on the selection decision (Section 4.3).

4.1. Known Future Values of Regressors

The one-step-ahead forecast errors from $M_1$ are denoted $\hat\epsilon$ and those from $M_2$ $\tilde\epsilon$. The mean square forecast errors are written as $\mathrm{MSFE}_1$ and $\mathrm{MSFE}_2$ respectively. We look at the conditions for $\mathrm{MSFE}_2 \le \mathrm{MSFE}_1$. An estimated intercept is always retained, which maintains comparability between $M_1$ and $M_2$.
When there are no breaks, the parameter estimates for $M_1$ are unbiased, $E[\hat\epsilon_{T+1|T}] = 0$, so:

$$\mathrm{MSFE}_1 = E[\hat\epsilon_{T+1|T}^2] = \sigma_\epsilon^2\left(1 + \frac{3}{T}\right),$$ (8)

which is the unconditional MSFE formula for the impact of estimating 3 parameters under the assumption of correct model specification. For $M_2$, despite the misspecification when $\beta_2 \ne 0$, $E[\tilde\epsilon_{T+1|T}] = 0$ and the mean square forecast error is:

$$\mathrm{MSFE}_2 = E[\tilde\epsilon_{T+1|T}^2] = \sigma_\nu^2\left(1 + \frac{2}{T}\right),$$ (9)

where $\sigma_\nu^2 = \sigma_\epsilon^2(1 + T^{-1}\psi_\beta^2) \ge \sigma_\epsilon^2$. There is one less parameter to estimate, traded off against a larger equation variance (see Appendix A.2 for derivations).
If the objective is to minimize the MSFE, $M_2$ should be used to forecast when $\mathrm{MSFE}_2 \le \mathrm{MSFE}_1$, which requires:

$$\sigma_\nu^2\left(1 + \frac{2}{T}\right) \le \sigma_\epsilon^2\left(1 + \frac{3}{T}\right).$$ (10)

From (7), this occurs when $\psi_\beta^2 \le T/(T+2)$.
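The trade-off between (8) and (9) can be checked numerically (the helper names are ours):

```python
def msfe1(T, sigma_eps2=1.0):
    """Equation (8): correctly specified M1 with three estimated parameters."""
    return sigma_eps2 * (1 + 3 / T)

def msfe2(T, psi2, sigma_eps2=1.0):
    """Equation (9): omitting x2 inflates the error variance via psi_beta^2."""
    sigma_nu2 = sigma_eps2 * (1 + psi2 / T)
    return sigma_nu2 * (1 + 2 / T)

T = 50
boundary = T / (T + 2)   # psi_beta^2 at which the two MSFEs cross
```

At the boundary $\psi_\beta^2 = T/(T+2) \approx 0.96$ the two MSFEs coincide; below it the parsimonious $M_2$ wins, above it $M_1$ wins.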
Figure 2 records the one-step-ahead values of $\mathrm{MSFE}_1$ and $\mathrm{MSFE}_2$ for known $x_{i,T+1}$, $i = 1, 2$, for the DGP given by (1) and (2). We let $\beta_2$ vary along the horizontal axis to obtain a range of noncentralities $\psi_\beta \in [0, 4]$ using (7).
The results confirm that $x_2$ should be retained if its noncentrality exceeds approximately 1. The threshold converges to 1 as $T \to \infty$, because the information content of the regressor then outweighs the parameter-estimation cost for one-step forecasts, regardless of the correlation between $x_1$ and $x_2$.

4.2. Selecting Regressors

Although $M_1$ and $M_2$ provide the extremes of always/never retaining $x_2$, in practice selection will be applied. From (5), $x_{2,t}$ will be omitted if $\mathrm{t}_{\beta_2=0}^2 < c_\alpha^2$. Using the approximation:

$$\mathrm{t}_{\beta_2=0} = \frac{\hat\beta_2}{\mathrm{s.e.}[\hat\beta_2]} \approx \sqrt{T(1-\rho^2)}\,\frac{\hat\beta_2}{\sigma_\epsilon},$$

this implies:

$$\hat\beta_2^2 < \frac{c_\alpha^2\,\sigma_\epsilon^2}{T(1-\rho^2)}.$$ (11)

Thus, retention of $x_{2,t}$ will depend on $\alpha$ and $\psi_\beta^2$ for a given draw.
Forecasts in repeated sampling will be based on a mixture of $M_1$ and $M_2$, depending on whether $x_{2,t}$ is retained in each draw. The MSFE of the selected model, called $M_3$, will be a weighted average of the MSFEs of $M_1$ and $M_2$, with the weights given by the probability that $x_{2,t}$ is retained:

$$\mathrm{MSFE}_3 = p_\alpha(\psi_\beta)\,\mathrm{MSFE}_1 + \left(1 - p_\alpha(\psi_\beta)\right)\mathrm{MSFE}_2$$ (12)
$$= \mathrm{MSFE}_1 + \left(1 - p_\alpha(\psi_\beta)\right)\left(\mathrm{MSFE}_2 - \mathrm{MSFE}_1\right)$$
$$\approx \mathrm{MSFE}_1 + \sigma_\epsilon^2\,T^{-1}\left(1 - p_\alpha(\psi_\beta)\right)\left(\psi_\beta^2 - 1\right),$$ (13)

where $\psi_\beta^2$ is given by (7), with:

$$p_\alpha(\psi_\beta) = \Pr\left(\mathrm{t}_{\beta_2=0}^2 \ge c_\alpha^2\right).$$
From the last term in (13), it is clear that $\mathrm{MSFE}_3 \le \mathrm{MSFE}_1$ whenever $\psi_\beta^2 \le 1$. Moreover, $p_\alpha(\psi_\beta)$ will be low when $\psi_\beta^2 \le 1$, so $M_2$ will usually be selected. Note that $p_\alpha(\psi_\beta) = \alpha$ when $\beta_2 = 0$. However, $\mathrm{MSFE}_3$ is a highly nonlinear function of $\psi_\beta^2$, which enters both directly and indirectly, as well as of $\alpha$, which also influences $p_\alpha(\psi_\beta)$ nonlinearly.
Figure 3 records the ratio of $\mathrm{MSFE}_3$ to $\mathrm{MSFE}_1$ for a range of $\psi_\beta^2$, which from (13) is given by:

$$\frac{\mathrm{MSFE}_3}{\mathrm{MSFE}_1} \approx 1 + (T+3)^{-1}\left(1 - p_\alpha(\psi_\beta)\right)\left(\psi_\beta^2 - 1\right).$$ (14)
Selection delivers a 1.8% improvement in MSFE relative to $M_1$ under the null ($\psi_\beta^2 = 0$) when $\alpha = 0.05$ or tighter, but for looser $\alpha$, e.g., 0.5, $p_\alpha(\psi_\beta) = 0.5$ when $x_{2,t}$ is irrelevant, so the benefits of selection are halved. Selection is most costly at intermediate noncentralities under the alternative: e.g., the largest increase in MSFE relative to $M_1$ is 3% at $\alpha = 0.05$ for $T = 50$, but over 9% for $\alpha = 0.001$ at its peak. The hump shape reflects the nonlinear trade-off as the noncentrality of $x_{2,t}$ increases: the cost of omitting $x_{2,t}$ rises as its signal strengthens, but the probability of retaining $x_{2,t}$ also increases. While the magnitude of the maximal loss may seem small for intermediate values of $\alpha$, this example considers the selection of just one regressor. In practice, selection is applied when there are multiple potential regressors, and the loss associated with selection at a given significance level cumulates across all potential regressors, as seen in the simulation results below.
The selection rule that $x_{2,t}$ should be retained if $\psi_\beta^2 > 1$ holds for all $\alpha$, but unfortunately the forecaster does not know $\psi_\beta^2$. If it were known, the optimal $\alpha$ would be 0 for $\psi_\beta^2 < 1$ and 1 for $\psi_\beta^2 > 1$. We next look at the choice of $\alpha$ that minimizes the cost, in terms of MSFE, when $\psi_\beta^2$ is unknown.

4.3. The Choice of Significance Level

Equation (11) must hold for $x_2$ to be excluded at the chosen significance level. On average, that inequality requires:

$$E[\hat\beta_2^2] = V[\hat\beta_2] + \beta_2^2 = \beta_2^2 + \frac{\sigma_\epsilon^2}{T(1-\rho^2)} < \frac{c_\alpha^2\,\sigma_\epsilon^2}{T(1-\rho^2)},$$

assuming unbiasedness. Equating that inequality for $\beta_2^2$ with $\psi_\beta^2 < 1$ from (10) gives the boundary for the critical value $c_\alpha$ at which selection results in a smaller MSFE due to the omission–estimation trade-off:

$$\beta_2^2 = \frac{\sigma_\epsilon^2\,(c_\alpha^2 - 1)}{T(1-\rho^2)} = \frac{\sigma_\epsilon^2}{T(1-\rho^2)}.$$

This implies that $c_\alpha^2 = 2$ at the boundary, or an approximate significance level of $\alpha = 0.16$.
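The mapping from $c_\alpha^2 = 2$ to $\alpha \approx 0.16$ is the two-sided standard normal tail probability, which can be verified directly:

```python
import math

c_alpha = math.sqrt(2)
# two-sided tail: Pr(|N(0,1)| >= c_alpha) = 2*(1 - Phi(c_alpha)) = erfc(c_alpha/sqrt(2))
alpha = math.erfc(c_alpha / math.sqrt(2))
```

Here `alpha` evaluates to about 0.157, the loose significance level referred to as 16% in the text.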
The theoretical probability of retaining $x_2$ for $\beta_2 > 0$ at $\alpha = 0.16$, using $E[\mathrm{t}_{\hat\beta_2}] = \psi_\beta$, is:

$$\Pr\left(|\mathrm{t}_{\hat\beta_2}| \ge c_\alpha\right) = \Pr\left(|\mathrm{t}_{\hat\beta_2} - \psi_\beta| \ge c_\alpha - \psi_\beta\right).$$

This gives the retention probabilities recorded in Table 2.
These results are close to the implied significance level for the AIC in Campos et al. (2003). The effect is cumulative, as shown in Figure 4, which records values of the term $(1 - p_\alpha(\psi_\beta))$ when there are five independent regressors, all with the same $\psi_\beta^2$. The probability of retaining all five variables is low even at loose significance levels unless the noncentralities are large. At $\psi_\beta^2 = 9$, the gap between $\alpha = 0.05$ and $\alpha = 0.16$ is 29%, demonstrating large benefits from a looser significance level for the retention of relevant regressors. The trade-off is that more irrelevant variables will be retained, and this can be costly if those variables are subject to breaks, which we explore next.
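The cumulative retention effect can be reproduced with a normal approximation to the noncentral t-distribution (the function names are ours); at $\psi_\beta^2 = 9$ the gap between $\alpha = 0.05$ and $\alpha = 0.16$ for retaining all five regressors comes out close to the 29% quoted:

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def p_retain(c_alpha, psi):
    """Normal approximation to Pr(|t| >= c_alpha) when E[t] = psi."""
    return (1 - phi(c_alpha - psi)) + phi(-c_alpha - psi)

psi = 3.0                                         # noncentrality psi_beta^2 = 9
retain_all_05 = p_retain(1.96, psi) ** 5          # all five regressors at alpha = 0.05
retain_all_16 = p_retain(math.sqrt(2), psi) ** 5  # at alpha = 0.16 (c_alpha^2 = 2)
```

The independence of the five regressors makes the joint retention probability the fifth power of the single-regressor probability, which is what drives the large gap.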

5. An Out-of-Sample Shift in the Regressors

The analysis of the previous section is augmented by the introduction of a break in Section 5.1. This break is immediately after the estimation sample, while in Section 6 it is applied to the last in-sample observation. We distinguish between whether the future values of the regressors are known (Section 5.2) or unknown (Section 5.4). The role of selection is studied again (Section 5.3), and we look at the random walk as a device to forecast future values of the regressors in Section 5.5. Forecasting devices based on full in-sample information and a random walk are the extremes of the class in Castle et al. (2015), but there is no information in sample regarding the break to help either device.

5.1. Specification of the Out-of-Sample Shift

Consider a mean shift of size $\delta$ in $x_2$ at $T+1$ with the forecast origin at $T$, so the shift coincides with the one-step-ahead forecast. The DGP has the same structure as (1)–(3) with the parameters $(\beta_1\ \beta_2)$ of the conditional model constant:

$$x_{1,t} = \mu_1 + \eta_{1,t},\quad t = 1, \ldots, T+1, \qquad x_{2,t} = \begin{cases} \mu_2 + \eta_{2,t}, & t = 1, \ldots, T, \\ \mu_2 + \delta + \eta_{2,t}, & t = T+1. \end{cases}$$ (15)
Since (15) entails:

$$y_{T+1} = \beta_0 + \beta_1 x_{1,T+1} + \beta_2 x_{2,T+1} + \epsilon_{T+1} = \mu_y + \beta_2\delta + \beta_1(x_{1,T+1} - \mu_1) + \beta_2(x_{2,T+1} - \mu_2 - \delta) + \epsilon_{T+1},$$ (16)

then $\beta_2\delta \ne 0$ induces a location shift in the relationship between $y_{T+1}$ and its in-sample determinants unless the future $x_{2,T+1}$ is known at time $T$. As shown in all forecast-error taxonomies (see, e.g., Clements and Hendry 1998), shifts in the equilibrium mean are the most pernicious source of forecast failure, whereas changes in the parameters of mean-zero variables have only a variance impact. Omitting $x_{2,T+1}$ from (16) as in $M_2$ will create the same location shift. Thus, there is little loss of generality in only considering shifts in the regressors.
We first evaluate the trade-off from omitting $x_{2,t}$ when the future exogenous regressors are known, emulating the earlier results, since the break occurring in the forecast period is then captured in the known $x_{2,T+1}$.

5.2. Known Future Values of Regressors

The one-step-ahead forecasts from $M_1$ given (15), in which the values of $\mathbf{x}_{T+1}$ are assumed known at $T$, are unbiased when the parameter estimates are unbiased. The mean square forecast error of $M_1$ (see Appendix A.3 for derivations) is:

$$\mathrm{MSFE}_1 = E[\bar{\hat\epsilon}_{T+1|T+1}^2] = \sigma_\epsilon^2\left(1 + \frac{\delta^2 + 2}{T(1-\rho^2)}\right),$$ (17)

which does not depend on $\psi_\beta^2$. Comparison with (8) highlights the effects of the location shift: $\delta^2$ enters the MSFE despite the shift being 'known' given $x_{2,T+1}$, and $\mathrm{MSFE}_1$ is no longer independent of $\rho$. Equation (17) also reveals the additional cost of including an irrelevant regressor that shifts out of sample, as $\delta^2$ enters even when $\beta_2 = 0$, although it is scaled by $T(1-\rho^2)$, so larger samples mitigate its effect.
For $M_2$ (which omits the regressor $x_{2,t}$), the expectation of the forecast error is $E[\bar{\tilde\epsilon}_{T+1|T+1}] = \beta_2\delta$, so the forecasts are biased by the shift in the omitted variable. The one-step-ahead MSFE for $M_2$ is:

$$\mathrm{MSFE}_2 = E[\bar{\tilde\epsilon}_{T+1|T+1}^2] = \sigma_\epsilon^2 + \beta_2^2\left(1 - \rho^2 + \delta^2\right) + 2T^{-1}\sigma_\epsilon^2\left(1 + T^{-1}\psi_\beta^2\right),$$ (18)

where $\beta_2^2$ enters directly, so the MSFE is a function of $\psi_\beta^2$, unlike for $M_1$. Comparison with (9) reveals the roles that $\rho$ and $\delta^2$ play. When $\beta_2 = 0$, so that $M_2$ is the correct model, (18) collapses to (9).
Assuming a criterion of minimizing the one-step-ahead MSFE, using (10), $\mathrm{MSFE}_2 \le \mathrm{MSFE}_1$ requires:

$$\delta^2\left(\psi_\beta^2 - 1\right) + \psi_\beta^2\left(1 - \rho^2\right)\left(1 + 2T^{-1}\right) - 2\rho^2 < 0,$$ (19)

which depends on estimation uncertainty and therefore does not simplify neatly. However, the solution is close to 1 for reasonable values of $\rho$. For example, when $\rho = 0.5$, $T = 50$ and $\delta = 4$, then $\psi_\beta^2 < 0.983$, or $|\psi_\beta| < 0.991$, results in a smaller $\mathrm{MSFE}_2$ compared to $\mathrm{MSFE}_1$.
Figure 5 demonstrates the close approximation to a trade-off at $\psi_\beta = 1$, which holds regardless of the break. Thus, even knowing there is a shift in $x_2$ does not affect the choice of forecasting model between including or omitting $x_2$: always (never) include it for $\psi_\beta^2 \ge 1$ ($\psi_\beta^2 < 1$).
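Solving the trade-off condition above for $\psi_\beta^2$ gives the switch point in closed form; a quick check (with our reconstruction of the condition, and our own function name) reproduces the 0.983 quoted for $\rho = 0.5$, $T = 50$, $\delta = 4$:

```python
def psi2_switch(delta, rho, T):
    """psi_beta^2 at which MSFE2 = MSFE1 under an out-of-sample shift of size delta,
    solving delta^2*(psi^2 - 1) + psi^2*(1 - rho^2)*(1 + 2/T) - 2*rho^2 = 0
    (our reconstruction of the condition in the text) for psi^2."""
    return (delta ** 2 + 2 * rho ** 2) / (delta ** 2 + (1 - rho ** 2) * (1 + 2 / T))
```

The switch point stays close to, but just below, unity for sizeable breaks, consistent with the discussion of Figure 5.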

5.3. Selecting Regressors

Following Section 4.2, a t-test for statistical significance will be conducted on $x_{2,t}$ in sample, and a decision to retain or exclude $x_{2,t}$ will be made at $c_\alpha$ for a given draw. Hence, $\mathrm{MSFE}_3$ will be a weighted average of $\mathrm{MSFE}_1$ and $\mathrm{MSFE}_2$, using (12):

$$\mathrm{MSFE}_3 = \mathrm{MSFE}_1 + \left(1 - p_\alpha(\psi_\beta)\right)\sigma_\epsilon^2\,T^{-1}\left[\psi_\beta^2\left(1 + \frac{\delta^2}{1-\rho^2}\right) - \frac{\delta^2 + 2\rho^2}{1-\rho^2}\right].$$ (20)
The term in square brackets is scaled by $T^{-1}$: as before, the difference between $\mathrm{MSFE}_1$ and $\mathrm{MSFE}_3$ diminishes as the sample size increases. When $\psi_\beta^2 = 0$, the first term in square brackets in (20) drops out, and the benefits of selection relative to $\mathrm{MSFE}_1$ are evident, as the second term must be negative. The magnitude of $\delta^2$ affects both $\mathrm{MSFE}_1$ and $\mathrm{MSFE}_2$ but, from (20), the first $\delta^2$ term is multiplied by $\psi_\beta^2$ whereas the second, offsetting term is not, so the effect of the location shift is exacerbated if $\psi_\beta^2 > 1$.
Figure 5 compares the MSFEs of $M_1$ from (17), $M_2$ from (18), and $M_3$ from (20) at three illustrative values of $\alpha$. The profiles of the MSFEs mirror the analytical results for the no-break case. Selection outperforms the estimated DGP for $\psi_\beta^2 < 1$ despite a break, and remains close to $\mathrm{MSFE}_1$ at $\alpha = 0.16$ for $\psi_\beta^2 > 1$.

5.4. Unknown Future Values of Regressors

Now consider the case when the future values of the regressors are unknown. We use two devices to obtain forecasts of $x_{i,T+1}$, $i = 1, 2$: the in-sample mean or a random walk. The random walk is biased for unanticipated location shifts but does not produce systematically biased forecasts after a location shift, whereas the in-sample mean is persistently biased following a location shift unless updated. The two devices comprise the two extremes of using either the full in-sample data or only the last observation to produce the forecasts of the weakly exogenous regressors.
Although the link between $y$ and the $x_i$ stays constant, forecasts when the $x_{i,T+1}$ are unknown will fail if the shift at $T+1$ is not anticipated, as it induces a shift in $y_{T+1}$: the in-sample mean $\mu_y$ shifts to $(\mu_y + \beta_2\delta)$ at $T+1$ but would be forecast to be $\mu_y$, leading to forecast failure.
The forecasts based on in-sample estimates from (15) when $\mu_1$ and $\mu_2$ are not zero are given by:

$$\bar{x}_{1,T+1|T} = \hat\mu_1 = \frac{1}{T}\sum_{t=1}^{T} x_{1,t} = \mu_1 + \bar\eta_1,$$ (21)
$$\bar{x}_{2,T+1|T} = \hat\mu_2 = \frac{1}{T}\sum_{t=1}^{T} x_{2,t} = \mu_2 + \bar\eta_2,$$ (22)

so they will miss the unknown break. When the break occurs in $x_2$, the MSFEs will worsen for $\beta_2 \ne 0$. As before, we consider the sampling variation in estimating the means to be small compared to the impact of shifts, so we approximate by taking $T$ sufficiently large that $\hat\mu_i \approx \mu_i$.
Replacing the unknown $x_{i,T+1}$ by $\mu_i$ leads to forecasting $y_{T+1}$ by the in-sample mean for both $M_1$ and $M_2$; see Appendix A.4. Both face the same forecast bias, $E[\hat{\hat\epsilon}_{T+1|T}] = E[\tilde{\tilde\epsilon}_{T+1|T}] = \beta_2\delta$, which is the same bias as $M_2$ with known regressors. Parameter estimation adds terms of $O_p(T^{-1})$. Hence, ignoring $O_p(T^{-1})$ terms, $\mathrm{MSFE}_1 = \mathrm{MSFE}_2$:

$$E[\hat{\hat\epsilon}_{T+1|T}^2] = E[\tilde{\tilde\epsilon}_{T+1|T}^2] = \beta_2^2\delta^2 + \sigma_\epsilon^2 + \beta_1^2 + \beta_2^2 + 2\rho\beta_1\beta_2.$$ (23)

When $\beta_2 = 0$, the MSFE is $\sigma_\epsilon^2 + \beta_1^2$, so it is inflated relative to the known-regressors case because $x_{1,T+1}$ must also be forecast. However, the in-sample mean is the best forecasting device for $x_{1,T+1}$ in this setting (in terms of minimum MSFE), as $x_{1,T+1}$ is stationary and not subject to a location shift. Selection will have little or no noticeable impact when $\mathrm{MSFE}_2 \approx \mathrm{MSFE}_1$, as this will also result in $\mathrm{MSFE}_3 \approx \mathrm{MSFE}_1$.
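A Monte Carlo check of the MSFE expression (23) is straightforward (the simulation design, function names, and parameter values are our own illustrative choices; the $\mu_i$ are treated as known, as in the text):

```python
import math
import random

def mc_msfe_mean_forecast(beta1=1.0, beta2=0.5, rho=0.5, delta=4.0,
                          sigma_eps=1.0, reps=100_000, seed=3):
    """Forecast y_{T+1} by its in-sample mean mu_y when x2 shifts by delta
    at T+1; the forecast error is beta1*eta1 + beta2*(delta + eta2) + eps."""
    rng = random.Random(seed)
    sse = 0.0
    for _ in range(reps):
        e1 = rng.gauss(0, 1)
        e2 = rho * e1 + math.sqrt(1 - rho ** 2) * rng.gauss(0, 1)
        err = beta1 * e1 + beta2 * (delta + e2) + rng.gauss(0, sigma_eps)
        sse += err ** 2
    return sse / reps

def msfe_formula(beta1=1.0, beta2=0.5, rho=0.5, delta=4.0, sigma_eps2=1.0):
    """Equation (23): beta2^2*delta^2 + sigma^2 + beta1^2 + beta2^2 + 2*rho*beta1*beta2."""
    return (beta2 ** 2 * delta ** 2 + sigma_eps2 + beta1 ** 2 + beta2 ** 2
            + 2 * rho * beta1 * beta2)
```

With the default parameters, both the formula and the simulation give a value close to 6.75, dominated by the squared bias term $\beta_2^2\delta^2 = 4$.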
Figure 6 records the MSFEs for $M_1$ and $M_2$ when there is a break in $x_2$ at $T+1$, comparing known and unknown regressors, using the in-sample mean to forecast $x_{i,T+1}$, $i = 1, 2$, in the unknown-regressor case; i.e., the figure records (17), (18) and (23) (solid/dashed/dotted lines). Simulation outcomes were checked to capture $O_p(T^{-1})$ effects, but these are negligible so are not recorded. Figure 6 also includes the random walk forecasts, and the $M_1$ and $M_2$ results for the known-regressor case are repeated from Figure 5 to facilitate comparison.
The simulation outcomes where parameters are estimated closely match the analytic results. For known regressors, the break in $\mu_2$ does not affect $\mathrm{MSFE}_1$, as it is captured in $x_{2,T+1}$: even at $\delta = 4$, for $T = 100$, $\mathrm{MSFE}_1 = 1.23$ for the parameters given in the figure, only slightly greater than $\sigma_\epsilon^2$. However, when $\mathbf{x}_{T+1}$ is unknown, both $M_1$ and $M_2$ are affected by the break in $x_{2,T+1}$. Simulation outcomes again closely match the theory for the unknown-break case, and show that the choice of whether to retain or exclude $x_{2,t}$ is not important in a forecasting context: the unanticipated break dominates any forecast error resulting from model misspecification. Increasing the sample size does mitigate the MSFE costs, but the MSFE premium relative to known regressors is maintained for all $\psi_\beta^2$. Increasing the number of relevant exogenous regressors that shift will increase the MSFE at $\psi_\beta^2 = 0$, shifting the MSFE trajectories up.
These results show that in this static setting of location shifts, if the break occurs in the forecast period and is unknown and unpredictable, then the retention of $x_2$ is irrelevant (other than through parameter-estimation uncertainty), as neither $M_1$ nor $M_2$ captures the shift, which dominates the MSFE. Parsimony, or the lack thereof, neither helps nor hinders much in this setting. Moreover, selection does not substantively affect the outcome, as $\mathrm{MSFE}_3 \approx \mathrm{MSFE}_1$.

5.5. Forecasting Regressors with a Random Walk

We now consider using a random walk to forecast the exogenous variables:

$$\bar{\bar{x}}_{1,T+1|T} = x_{1,T},$$ (24)
$$\bar{\bar{x}}_{2,T+1|T} = x_{2,T}.$$ (25)

Such a device is not robust in this setting, as the forecasts are made before the shift; robustness refers to forecasting properties that are insensitive to a feature of the DGP, such as after a location shift.

Although the last in-sample observation is an imprecise measure of the out-of-sample mean, it is unbiased when there are no location shifts (as there are no dynamics in the DGP), so $E[x_{1,T}] = \mu_1$ and $E[x_{2,T}] = \mu_2$, and hence $E[\Delta x_{1,T+1}] = 0$ and $E[\Delta x_{2,T+1}] = \delta$.
The forecasts from $M_1$ will be biased by the bias in the random walk forecast of $x_{2,T+1}$, so (see Appendix A.5 for derivations), neglecting the small impact of $\eta_{i,T}$ on $\beta_i - \hat\beta_i$:

$$E[\bar{\bar\epsilon}_{T+1|T}] = \beta_2\delta,$$

and the resulting mean square forecast error is:

$$\mathrm{MSFE}_1 = E[\bar{\bar\epsilon}_{T+1|T}^2] = \beta_2^2\delta^2 + 2\left(\beta_1^2 + \beta_2^2\right) + 4\rho\beta_1\beta_2 + \sigma_\epsilon^2\left(1 + 2T^{-1}\right).$$ (26)
Comparison with (23) highlights the additional cost of using the random walk relative to the in-sample mean when neither forecasting device can predict the break, since:

$$E[\bar{\bar\epsilon}_{T+1|T}^2] - E[\hat{\hat\epsilon}_{T+1|T}^2] = \beta_1^2 + \beta_2^2 + 2\rho\beta_1\beta_2 + 2\sigma_\epsilon^2 T^{-1}.$$ (27)

The in-sample mean of $x_1$ is the optimal forecast of $x_{1,T+1}$ given its in-sample stationarity, so irrespective of the value of $\beta_2$, the in-sample mean forecasts dominate when the shift occurs during the forecast period. When $\beta_2 = 0$, (26) collapses to $\sigma_\epsilon^2 + 2\beta_1^2$, ignoring $O_p(T^{-1})$ terms, compared to $\sigma_\epsilon^2 + \beta_1^2$ for the in-sample mean forecasts. A random walk doubles the error variance, so it can be costly if there are no breaks or if the break occurs after the forecast origin. As in the in-sample mean case, the MSFE of $M_1$ is a function of the break.
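The variance-doubling cost of the random walk when there is no break is easy to verify by simulation (the numbers here are illustrative):

```python
import random

random.seed(11)
N = 100_000
mu = 2.0
sse_mean, sse_rw = 0.0, 0.0
for _ in range(N):
    x_T = mu + random.gauss(0, 1)       # last in-sample observation
    x_T1 = mu + random.gauss(0, 1)      # value to be forecast (no shift)
    sse_mean += (x_T1 - mu) ** 2        # in-sample mean forecast error
    sse_rw += (x_T1 - x_T) ** 2         # random-walk forecast error
ratio = sse_rw / sse_mean               # approaches 2 in the absence of a break
```

The random-walk error is the difference of two independent draws, so its variance is twice that of the deviation from the mean.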
The forecast bias for $M_2$ is the same as that for $M_1$ by the same argument, although $\mathrm{MSFE}_2$ (reported in Appendix A.5) deviates from that of $M_1$ as $\psi_\beta^2$ increases. This is due to the correlation parameter $\rho$, which picks up part of the omitted variable $x_{2,T+1}$ in $M_2$ and has more effect as $\psi_\beta^2$ increases. When $\beta_2 = 0$, $\mathrm{MSFE}_2 \approx \sigma_\epsilon^2 + 2\beta_1^2$, the same as for $M_1$. Despite small but increasing deviations as $\psi_\beta^2$ increases, $\mathrm{MSFE}_2$ follows a similar trajectory to $\mathrm{MSFE}_1$. The misspecification matters less for the random walk forecasts of the marginal processes than the effect of the break, similar to the results for the in-sample mean forecasts.

5.6. Selecting Forecasted Regressors

In practice, selection will be applied to determine whether to include x 2 , t or not. Then, from (12), we can obtain the MSFE 3 as:
$\mathsf{MSFE}_{3}=\mathsf{MSFE}_{1}+\big(1-p_{\alpha}(\psi_{\beta})\big)\,\sigma_{\epsilon}^{2}T^{-1}\left(\psi_{\beta}^{2}\,\frac{1+\rho^{2}}{1-\rho^{2}}-T^{-1}-1\right).$
The trade-off between parameter estimation uncertainty and including $x_2$ is essentially the same as in the known variable case: if $x_2$ has a noncentrality of zero, so $\beta_2=\psi_{\beta_2}=0$, then the one-step MSFE is minimized by excluding $x_2$ from the forecasting model. It should be included if $\psi_{\beta_2}^{2}>1$. However, depending on the values of $\rho$ and $T$, the switch point can be smaller than $\psi_{\beta_2}^{2}=1$, although the impact is likely to be small given the scale factor $\sigma_{\epsilon}^{2}T^{-1}$. Even though the random walk forecast is highly uncertain, being based on just one observation, if the variable that breaks is sufficiently significant then it pays to include that variable when using the random walk forecast.
Figure 6 also records the MSFEs for the random walk forecasts using the same parameter values. The increase in MSFE over the in-sample mean forecasts is evident. Both MSFE 1 and MSFE 2 follow similar trajectories, although they do start to diverge for large ψ β 2 , with MSFE 3 at α = 0.16 close to MSFE 1 .

6. An In-Sample Shift in the Regressors

In contrast to the previous section, the break is assumed to occur at T, which is the last observation available for estimation. Now there is information available regarding the break when the forecasts are made. Such a framework would also be relevant in sequential forecasting. We consider forecasting using in-sample means. In common with the previous section, we study selection (Section 6.3 and Section 6.5), the random walk device to forecast the regressors (Section 6.4), and finally using the random walk to forecast y (Section 6.6).

6.1. Specification of the In-Sample Shift

The DGP is adapted from (15) but the shift in μ 2 occurs at T, rather than T + 1 :
$x_{1,t}=\mu_{1}+\eta_{1,t},\quad t=1,\dots,T+1;\qquad x_{2,t}=\begin{cases}\mu_{2}+\eta_{2,t}, & t=1,\dots,T-1,\\ \mu_{2}+\delta+\eta_{2,t}, & t=T,T+1.\end{cases}$

6.2. Forecasting Regressors Using In-Sample Means

The relationship of interest, i.e., the conditional equation for $y_{T+1}$, remains constant. However, the in-sample mean $\mu_y$ shifts to $(\mu_y+\beta_2\delta)$ at $T$. Although the only DGP parameter to shift is $\mu_2$, to $\mu_2+\delta$, sample calculations are altered because now $\mathsf{E}[\bar{x}_2]=\mu_2+T^{-1}\delta$ (see Appendix A.6 for derivations).
The break will have only a small impact on the estimated in-sample mean of $x_{2,t}$ unless $\delta$ is very large, so when the in-sample means are used as forecasts of the future unknown values, the forecasted mean of $y_{T+1}$ for $M_1$ will still be close to $\mu_y$, and the resulting forecast error bias is:
$\mathsf{E}\big[\hat{\hat{\epsilon}}_{T+1|T}\big]\approx\beta_{2}\delta\big(1-T^{-1}\big).$
This is unbiased when β 2 = 0 , but could be badly biased if β 2 δ is large. The MSFE for M 1 is:
$\mathsf{MSFE}_{1}=\mathsf{E}\big[\hat{\hat{\epsilon}}_{T+1|T}^{2}\big]=\beta_{2}^{2}\delta^{2}\big(1-T^{-1}\big)^{2}+\beta_{1}^{2}+\beta_{2}^{2}+\sigma_{\epsilon}^{2}.$
This is very similar to the MSFE 1 in (23) for an out-of-sample break using the in-sample means to forecast the exogenous regressors, and hence MSFE 2 and MSFE 3 as well, although the correlation between the two regressors does not enter.
When $\beta_2=0$, both (23) and (28) collapse to $\sigma_{\epsilon}^{2}+\beta_{1}^{2}$. The dampening of the squared location shift by $(1-T^{-1})^{2}$ slightly improves the MSFE for the in-sample shift relative to an out-of-sample shift at larger $\psi_{\beta_2}$, as shown in Figure 7.
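The forecast-error bias of roughly $\beta_2\delta(1-T^{-1})$ can be checked by simulation. The sketch below is our own simplified setup (not the paper's code), taking $\rho=0$, zero means, and $\sigma_\epsilon=1$: the shift hits observations $T$ and $T+1$, the conditional model is estimated by OLS on $t=1,\dots,T$, and the regressors are forecast by their in-sample means.

```python
import numpy as np

rng = np.random.default_rng(1)
T, R = 50, 4000
b1, b2, delta = 1.0, 1.0, 5.0
err = np.empty(R)
for r in range(R):
    x1 = rng.standard_normal(T + 1)
    x2 = rng.standard_normal(T + 1)
    x2[T - 1:] += delta                       # shift in observations T and T+1
    y = b1 * x1 + b2 * x2 + rng.standard_normal(T + 1)
    X = np.column_stack([x1[:T], x2[:T]])     # estimate on t = 1, ..., T
    bhat, *_ = np.linalg.lstsq(X, y[:T], rcond=None)
    # forecast x_{1,T+1} and x_{2,T+1} by their in-sample means
    err[r] = y[T] - (bhat[0] * x1[:T].mean() + bhat[1] * x2[:T].mean())

avg_bias = err.mean()   # close to b2 * delta * (1 - 1/T) = 4.9
```

OLS remains conditionally unbiased despite the contaminated final observation, so the bias comes almost entirely from the in-sample mean of $x_2$ picking up only $T^{-1}\delta$ of the shift.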
For a break out of sample, we found the analytic results for $M_2$ to be identical to those for $M_1$ (see Section 5.4). For the in-sample break, the forecast error and MSFE for $M_2$ do differ from those of $M_1$ (see Appendix A.6 for analytic results). This is because the in-sample location shift affects $\rho$, which introduces a term similar to the squared location shift scaled by $T$ in (28). Therefore, MSFE$_1\neq$ MSFE$_2$ unless $\beta_2=0$, with $M_2$ incurring a larger MSFE cost as $\psi_{\beta_2}$ increases due to misspecification, although the divergence is small even for small $T$, and disappears asymptotically.

6.3. Selecting Regressors

Selection follows from (12) and hence:
$\mathsf{MSFE}_{3}\approx\mathsf{MSFE}_{1}+\big(1-p_{\alpha}(\psi_{\beta})\big)\,T^{-1}\big(\sigma_{\epsilon}^{2}-\beta_{1}^{2}\rho^{2}-\beta_{2}^{2}+2\sigma_{\nu}^{2}+\beta_{2}^{2}\delta^{2}\big).$
The cost of omitting $x_2$ rises with $\beta_{2}^{2}\delta^{2}$, although increases in $\beta_2$ will raise $\psi_{\beta_2}$ and hence raise the probability of retaining $x_2$, albeit unconnected with the magnitude of $\delta^{2}$. As the location shift is scaled by $T$, MSFE$_3\to$ MSFE$_1$ as $T\to\infty$.

6.4. Forecasting Regressors Using a Random Walk

From the previous analysis in Section 6.2, knowledge of the break at T brought little benefit when using in-sample means as forecasts. However, the random walk should do better when the break occurs at T as opposed to T + 1 . As before:
$\tilde{\tilde{x}}_{1,T+1|T}=x_{1,T}\quad\text{and}\quad\tilde{\tilde{x}}_{2,T+1|T}=x_{2,T},$
but now $\mathsf{E}[x_{1,T}]=\mu_{1}$ and $\mathsf{E}[x_{2,T}]=\mu_{2}+\delta$, and hence $\mathsf{E}[\Delta x_{1,T+1}]=0$ and $\mathsf{E}[\Delta x_{2,T+1}]=0$ as well.
Given the unbiased forecasts of the exogenous regressors, it follows that the forecasts for M 1 are unbiased (see Appendix A.7) when the parameter estimates are unbiased. The MSFE for M 1 is:
$\mathsf{MSFE}_{1}=\mathsf{E}\big[\hat{\bar{\epsilon}}_{T+1|T}^{2}\big]=2\big(\beta_{1}^{2}+\beta_{2}^{2}\big)+4\rho\beta_{1}\beta_{2}+\sigma_{\epsilon}^{2}\big(1+2T^{-1}\big)+\delta^{2}T^{-1}\rho^{2}.$
When β 2 = 0 , the MSFE is similar to that of the out-of-sample break case, where the random walk is costly as forecasts of both x 1 , T + 1 and x 2 , T + 1 are inefficient. However, (29) does depend on the magnitude of the shift independently of β 2 , unlike (26). MSFE 1 is a function of ψ β 2 , increasing as ψ β 2 increases, unlike in the known regressor case. But it does so more slowly than for breaks out of sample, or breaks in sample using the in-sample mean. As ψ β 2 increases, the break at T in μ 2 has a larger effect on the dependent variable, and hence the benefits of using a random walk forecast of x 2 , T + 1 are larger.
M 2 will suffer when β 2 0 as the forecasts will be biased. The MSFE for M 2 is:
$\mathsf{MSFE}_{2}=\mathsf{E}\big[\bar{\tilde{\epsilon}}_{T+1|T}^{2}\big]=\beta_{2}^{2}\big(\delta^{2}+\rho^{2}+1\big)+2\beta_{1}^{2}+4\rho\beta_{1}\beta_{2}+\sigma_{\epsilon}^{2}\big(1+T^{-1}+T^{-2}\psi_{\beta}^{2}\big),$
so no robustness in the sense of reducing bias is achieved unless $\beta_2=0$. When $\beta_2=0$, MSFE$_2<$ MSFE$_1$, but the bias from not using a random walk (and hence unbiased) forecast of $x_{2,T+1}$ quickly outweighs parameter estimation costs as $\psi_{\beta_2}$ increases.
Solving for MSFE 2 < MSFE 1 results in:
$\psi_{\beta}^{2}<\frac{1-\rho^{2}+\delta^{2}}{1-\rho^{2}+\delta^{2}-T^{-1}\big(1+\delta^{2}\big)}.$
The break term dominates and offsets on the numerator and denominator, leading to a trade-off at ≈1 with deviations scaled by T 1 . For ρ = 0.5 , T = 100 and δ = 4 , MSFE 2 dominates when ψ β = 1.05 . Interestingly, the cut-off is slightly above 1 for this case, compared to slightly below 1 for the known breaks out-of-sample case, but the results still imply that a selection significance level of approximately 16% would be optimal to trade-off the cost of estimating an additional parameter.
Figure 8 records the MSFEs from $M_1$ (29), $M_2$ (30) and three values of $M_3$ (A4) for the analytic results. There is a clear trade-off at $\psi_{\beta}^{2}\approx1$, just as in the known breaks case.

6.5. Selecting Forecasted Regressors

The final step is to compute the MSFE for $M_3$ for the random walk forecast, reported in Appendix A.7. Just as selection is routinely applied to regression models, it can equally be applied when forecasting devices designed to minimize systematic bias supply the unknown regressor values. As with Figure 5, selection between $M_1$ and $M_2$ can be advantageous even for these forecasting devices, as seen in Figure 8. Selection outperforms $M_1$ for $\psi_{\beta_2}<1$, and remains close to MSFE$_1$ at $\alpha=0.05$ and $\alpha=0.16$, again in all cases matching or outperforming always using $M_2$.
A comparison with the MSFE for the in-sample mean forecasts, also recorded in Figure 8, suggests a possible forecast improvement. If the regressor that breaks at T is known, combining the in-sample mean forecast for M 1 with the random walk forecast for M 2 will improve forecast performance (shifting the MSFE curves for the random walk forecast down by approximately 1). As the number of regressors increases, the forecasting method for each contemporaneous regressor will have a cumulative impact. However, as the break occurs in sample, methods to detect breaks at the forecast origin such as impulse indicator saturation (IIS) could be used to guide the forecaster to the most appropriate forecasting device.4 Selection between forecasting devices that minimize systematic bias versus those that trade-off bias and variance requires pre-testing and would only help for in-sample shifts; see, e.g., Chu et al. (1996).
Thus, selection can be valuable for forecasting to the extent that it retains relevant regressors that shift (here, x 2 ), and also if it eliminates irrelevant regressors that shift, as considered in Section 9.

6.6. Forecasting the Dependent Variable Using a Random Walk

If a break is suspected, an alternative to the approaches considered so far is to use a knowingly misspecified model of the conditional DGP. One possibility is to use a random walk forecast for y, with the advantage that y T is known and avoids the need to forecast x 1 , T + 1 and x 2 , T + 1 . Hendry and Mizon (2012) derive a forecast-error taxonomy for open models that demonstrates the numerous additional forecast errors that arise from forecasting regressors offline in open models. They show that, in some cases, it can pay to use a misspecified model rather than to forecast the regressors offline. The forecast device is:
$\tilde{\tilde{y}}_{T+1|T}=y_{T}.$
Then
$y_{T}=\mu_{y}+\beta_{2}\delta+\beta_{1}\eta_{1,T}+\beta_{2}\eta_{2,T}+\epsilon_{T}$
is a noisy one-observation estimator of μ y + β 2 δ . The outturn at T + 1 is:
$y_{T+1}=\mu_{y}+\beta_{2}\delta+\beta_{1}\Delta\eta_{1,T+1}+\beta_{2}\Delta\eta_{2,T+1}+\epsilon_{T+1}+\beta_{1}\eta_{1,T}+\beta_{2}\eta_{2,T}.$
The forecast error is given by:
$\tilde{\tilde{\epsilon}}_{T+1|T}=y_{T+1}-\tilde{\tilde{y}}_{T+1|T}=\beta_{1}\Delta\eta_{1,T+1}+\beta_{2}\Delta\eta_{2,T+1}+\Delta\epsilon_{T+1},$
which is unbiased and has a MSFE of:
$\mathsf{MSFE}_{4}=\mathsf{E}\big[\tilde{\tilde{\epsilon}}_{T+1|T}^{2}\big]=2\big(\beta_{1}^{2}+\beta_{2}^{2}\big)+4\rho\beta_{1}\beta_{2}+2\sigma_{\epsilon}^{2}.$
This is independent of $\delta$, so it should perform relatively the best when $\delta^{2}$ is large, although it performs worse than random walk forecasts for $x_{1,T+1}$ and $x_{2,T+1}$ when $\psi_{\beta_2}$ is small; see Figure 8. The forecasts are invariant to omitting $x_2$ since this random walk forecast is independent of the regressors, which is a major advantage and negates the role of selection. However, there is a cost when the model is correctly specified. The results in the simulation below suggest that such an approach should be viewed as complementary, with forecast pooling across selected conditional models and misspecified robust devices designed to mitigate bias frequently outperforming individual methods.
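The invariance of MSFE$_4$ to $\delta$ is easily confirmed by Monte Carlo. In the sketch below (our own, with illustrative parameter values), both $y_T$ and $y_{T+1}$ contain $\mu_y+\beta_2\delta$, so the random-walk difference removes the shift and the MSFE settles at $2(\beta_1^2+\beta_2^2)+4\rho\beta_1\beta_2+2\sigma_\epsilon^2$.

```python
import numpy as np

rng = np.random.default_rng(2)
R = 200_000
b1, b2, rho, delta, mu_y = 1.0, 0.5, 0.5, 4.0, 3.0
cov = [[1.0, rho], [rho, 1.0]]
# correlated regressor disturbances (eta1, eta2) at T and T+1
eta = rng.multivariate_normal([0.0, 0.0], cov, size=(R, 2))
eps = rng.standard_normal((R, 2))
# y_T and y_{T+1} both include the shift b2*delta, which differences out
yT = mu_y + b2 * delta + b1 * eta[:, 0, 0] + b2 * eta[:, 0, 1] + eps[:, 0]
yT1 = mu_y + b2 * delta + b1 * eta[:, 1, 0] + b2 * eta[:, 1, 1] + eps[:, 1]

msfe4 = np.mean((yT1 - yT) ** 2)
target = 2 * (b1**2 + b2**2) + 4 * rho * b1 * b2 + 2.0   # = 5.5, free of delta
```

Changing `delta` leaves `msfe4` unchanged up to simulation noise, which is the robustness property exploited by this device.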

7. Summary of Analytic Results and the Impact of Selection

The theoretical analysis has established four results.
  • Regressors should be retained if ψ β 1 . This is established for DGPs that are stationary or with a break out of sample for known regressors and a break in sample for random walk forecasts.
  • For the two-regressor case, ψ β = 1 maps to α 0.16 . Selection delivers improvements to the one-step-ahead MSFE for ψ β < 1 and can be close to the correct model specification for ψ β > 1 , with the largest deviations occurring at intermediate values of ψ β .
  • If there are breaks out of sample and contemporaneous regressors need to be forecast, the break dominates the MSFE and selection plays almost no role. Similar results are found even if the break occurs at the end of the sample, but the in-sample mean is used to forecast the regressors.
  • Random walk forecasts are costly if there are no breaks (forecasting x 1 , T + 1 ) or if the breaks are unpredictable (a break at T + 1 and forecasting T + 1 | T ). However, they improve MSFE when the break is predictable (break at T and forecasting T + 1 | T ).
Table 3 summarises the results for specific parameters using T = 50 ( T = 100 is in Table A1 in Appendix B). For each scenario, the ratio of MSFE j / MSFE 1 for j = 2 , 3 is reported. MSFE 2 has no selection, and is therefore listed as α = 0 , while three values of α are used for MSFE 3 . The squared noncentralities ψ β 2 = 0 , 1 , 4 , 9 , 16 capture the full hump shape seen in the figures above.
M 2 is the correct model in the column labelled ψ β 2 = 0 , so the ratio of MSFE 2 / MSFE 1 measures the cost of over-specification. The gains can be substantial in some cases, almost 30% for a break out of sample with known regressors, but in other cases including x 2 , t is not at all costly despite its irrelevance. Tighter selection for M 3 is close to M 2 as x 2 , t will be omitted more frequently, but even at α = 0.16 the ratio for M 3 is close to the ratio for M 2 , suggesting that selection is not costly.
Moving to the next column highlights the ψ β = 1 trade-off, with all cases almost exactly equal to one. A cut-off slightly lower than one was found in (19), which is reflected in the ratio marginally greater than one. Conversely, (31) found a cut-off slightly larger than one, resulting in a ratio slightly below one, but the differences are small.
Next, consider the columns labelled ψ β 2 = 4 , 9 , and 16. M 1 is the correct model so the objective is to minimize the ratio. In some cases M 2 performs poorly, but M 3 at α = 0.16 is frequently very close to 1, i.e., MSFE 1 . Selection forecast performance tends to be worse at ψ β 2 = 4 , but as the signal for x 2 increases, the probability of retaining x 2 increases so the selected model is closer to M 1 . The benefits of selection vary by case. For example, for a break at T using in-sample means, selection at α = 0.16 delivers a 2.4% improvement relative to M 2 for ψ β = 4 , compared to a halving of the ratio for the random walk. In almost every setting, MSFE 3 is close to MSFE 1 so the costs of selection are usually small, irrespective of the noncentrality. In that sense, model selection acts to reduce the risk relative to the worst model. Conversely, the costs of unmodeled shifts are very large, up to almost 8-fold greater than the baseline stationary MSFE 1 .
These results show that even facing breaks, the well-known trade-off for selecting variables in forecasting models, namely that variables should be retained if their noncentralities exceed 1, still applies, resulting in much looser significance levels than typically used. The problem with such an approach is that when many β 2 , i = 0 but are subject to location shifts, M 1 , which erroneously includes x 2 , t in the model, will perform worse. Loose significance levels increase the chance that irrelevant variables with ψ β = 0 are retained by being adventitiously significant for that draw. To evaluate this effect, the next section undertakes a simulation study of selection in models with ten irrelevant and five relevant exogenous regressor variables confronting a variety of shifts.

8. Simulation Design

We generalize the above analysis using Monte Carlo analysis, formalizing the DGP and models that are estimated. We consider larger models with dynamics, evaluating for a range of strategies to forecast future values of the regressors, different significance levels, and different configurations of out-of-sample breaks. The next section then evaluates the simulation results.

8.1. Data Generation Process

The DGP is for a scalar dependent variable y t , and N regressors x t = ( x 1 , t , , x N , t ) . There are n regressors that are relevant, i.e., have a nonzero coefficient in the DGP for y t , and N n that are irrelevant with coefficient zero.
We wish to introduce breaks either in relevant, or irrelevant, or both types of regressors. For convenience we assume that the regressors are ordered by increasing significance (i.e., squared noncentrality ψ β i 2 ). The DGP for y is an AR(1) with regressors:
$y_{t}^{*}=\beta_{0}+\beta_{y}y_{t-1}^{*}+\sum_{j=1}^{N}\beta_{j}x_{j,t}+\epsilon_{t},\quad\epsilon_{t}\sim\mathsf{IN}\big[0,\sigma_{\epsilon}^{2}\big],\quad t=-Q+1,\dots,0,1,\dots,T+H.$
The regressors are independent of each other and (in sample) have a common autoregressive coefficient $\lambda$ and mean $\delta/(1-\lambda)$. We allow for a break in observations $T+1$ and $T+2$, using subscript $I$ if the break applies to $x$s that are irrelevant in (32) (i.e., have a coefficient of zero) and $R$ for those that are relevant:
$x_{j,t}=\delta+\lambda x_{j,t-1}+\eta_{j,t},\quad j=1,\dots,N,\quad t=-Q+1,\dots,T,\;T+3,\dots,$
$x_{j,t}=\delta_{I}+\lambda_{I}x_{j,t-1}+\eta_{j,t},\quad j=1,\dots,N-n,\quad t=T+1,T+2,$
$x_{j,t}=\delta_{R}+\lambda_{R}x_{j,t-1}+\eta_{j,t},\quad j=N-n+1,\dots,N,\quad t=T+1,T+2,$
with $\eta_{j,t}\sim\mathsf{IN}[0,1]$ throughout.
Throughout, we set σ ϵ 2 = 1 , β 0 = 5 , β y = 0.5 , δ = 2 , N = 15 . Fifty initial observations are discarded ( Q = 50 ). We set observation zero equal to twenty in each replication, giving the generated data as:
$y_{t}=y_{t}^{*}+20-y_{0}^{*}.$
The remaining coefficients in (32) are specified through their noncentralities. We run three alternative experiments:
$\psi^{(1)}:\ \psi_{\beta}=(0,0,0,0,0,1.2,1.2,1.2,1.2,1.2,1.2,1.2,1.2,1.2,1.2),$
$\psi^{(2)}:\ \psi_{\beta}=(0,0,0,0,0,0,0,0,0,0.5,1,1.5,2,3,4),$
$\psi^{(4)}:\ \psi_{\beta}=(0,0,0,0,0,0,0,0,0,0,0,0,4,4,4).$
Then $\beta_{j}=\psi_{\beta_{j}}\big\{(T-N-2)\,\mathsf{V}[x_{j,t}]\big\}^{-1/2}$, using the in-sample variances computed over $t=1,\dots,T$. This ensures that the t-values in the estimates of (32) will be equal to $\psi_{\beta_{j}}$ on average. Note that the noncentralities in each specification sum to twelve, and have $n=10,6,3$ respectively.
With common coefficients $\delta$ and $\lambda$, the regressors are exchangeable in analytical calculations. The unconditional process of each $x_j$, in the absence of any break, has mean $\bar{x}=\delta/(1-\lambda)$ and variance $(1-\lambda^{2})^{-1}$. When $\delta=2$ and $\lambda=0.75$, the steady state for $y_{t}^{*}$ is then $\bar{y}=10+2\times8\times12\times\{83/(1-0.75^{2})\}^{-1/2}\approx10+16\times0.87=23.9$, using a total noncentrality of 12, $\bar{x}=8$, $T=100$, $N=15$. The degrees-of-freedom adjustment counts $N$, the intercept, and the lagged dependent variable.
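The steady-state calculation can be reproduced directly. This sketch (ours) uses the population variance $(1-\lambda^2)^{-1}$ in place of the in-sample variances that the text specifies for the actual experiments:

```python
import numpy as np

T, N, lam, delta = 100, 15, 0.75, 2.0
psi = np.array(5 * [0.0] + 10 * [1.2])          # psi(1); the psis sum to 12
Vx = 1.0 / (1.0 - lam**2)                       # unconditional variance of each x_j
beta = psi / np.sqrt((T - N - 2) * Vx)          # beta_j from the noncentralities
xbar = delta / (1.0 - lam)                      # unconditional mean of x_j: 8
ybar = (5.0 + xbar * beta.sum()) / (1.0 - 0.5)  # steady state of y*, about 23.9
```

Here `beta.sum()` is the 0.87 factor quoted in the text, and `T - N - 2` is the degrees-of-freedom count (83).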
Breaks in the process for the target variable $y$ are introduced through breaks in the regressors. During the break, $\delta_{R}=-0.3=\delta_{\Delta}$, so $\delta$ drops by 2.3. Keeping $\lambda$ unchanged, the equilibrium changes from $\bar{x}=8$ to $\bar{x}_{\Delta}=-1.2$, which is a shock of six unconditional standard errors. The impact on $y_t$ depends on the coefficients $\beta_j$. To quantify this, it is convenient to assume that the processes are at their unconditional means, after which we follow the shocks through the dynamic system, ignoring the disturbances. The impact on $x$ when the coefficients change from $(\delta,\lambda)=(2,0.75)$ to $(\delta_{\Delta},\lambda_{\Delta})$ is given in Table 4.
The process reverts to the original coefficients at $T+3$, aiming to capture, qualitatively, aspects of a sustained but temporary structural break, such as the Great Financial Crisis or the COVID-19 pandemic. The impact of the break on $y_{T+1|T}$ is 0.87 times the new $x$. For $(-0.3,0.95)$ this is a change of $-0.6$, well below $y$'s conditional standard error of unity.
Table 5 lists the break settings we consider. The upward break in slope (a) pushes the process towards a unit root, while the downward break in slope (b) makes it almost white noise. Figure 9 plots the second half of y t for one replication of the DGP and for each of the five specifications of the break. This is for T = 100 and after discarding the initial observations. The break lasts for two observations in the forecast period, after which the DGP reverts to the settings without break. Figure 9 illustrates the low impact of the break in mean and slope when ( δ Δ , λ Δ ) = ( 0.3 , 0.95 ) .
The design (33) allows for breaks in relevant variables, in irrelevant variables, or in both. In the last case: δ R = δ I = δ Δ and λ R = λ I = λ Δ . Breaks in irrelevant variables do not affect y, but can have an impact on forecasts if the irrelevant variables are used in the forecasts’ construction. However, when forecasting for T + 1 | T , such breaks have no impact at all, because the future x T + 1 s are not yet known.
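The DGP (32)–(34) can be sketched compactly. The function below is our own Python rendering (the paper's experiments use Ox), shown for the case where the break hits the relevant regressors only; the other configurations in (33) follow by changing which coefficients switch during $T+1$, $T+2$.

```python
import numpy as np

def simulate_dgp(beta, T=100, H=4, Q=50, N=15, n=10,
                 lam=0.75, delta=2.0, d_brk=-0.3, l_brk=0.05, seed=0):
    """Sketch of DGP (32)-(34): AR(1) in y with N regressors, a two-period
    break (T+1, T+2) in the n relevant regressors, and Q burn-in observations.
    The relevant regressors are the last n, as they are ordered by significance."""
    rng = np.random.default_rng(seed)
    nobs = Q + T + H
    x = np.zeros((nobs + 1, N))
    for t in range(1, nobs + 1):
        d = np.full(N, delta)
        l = np.full(N, lam)
        if t - Q in (T + 1, T + 2):          # break in the relevant regressors
            d[N - n:] = d_brk
            l[N - n:] = l_brk
        x[t] = d + l * x[t - 1] + rng.standard_normal(N)
    ystar = np.zeros(nobs + 1)
    for t in range(1, nobs + 1):
        ystar[t] = 5.0 + 0.5 * ystar[t - 1] + x[t] @ beta + rng.standard_normal()
    y = ystar + 20.0 - ystar[Q]              # set observation zero to twenty, cf. (34)
    return y[Q:], x[Q:]                      # observations 0, 1, ..., T+H
```

For example, `simulate_dgp(np.zeros(15))` returns series of length $T+H+1$ with $y_0=20$; the `beta` vector would normally be built from the noncentralities as in Section 8.1.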

8.2. Models and Forecast Devices

We generate Q + T + H observations from DGP (32)–(34), discarding the initial Q. The starting point for modeling is the general unrestricted model (GUM):
$y_{t}=\beta_{0}+\beta_{y}y_{t-1}+\sum_{j=1}^{N}\beta_{j}^{*}x_{j,t}+\sum_{j=1}^{N}\gamma_{j}^{*}x_{j,t-1}+\epsilon_{t},\quad t=1,\dots,T.$
An asterisk indicates that model selection is used, so the intercept and lagged y are not selected over but are always retained. Model selection is only performed once for each replication, but the selected model is re-estimated by ordinary least squares (OLS) each time that we forecast given data up to T + h 1 :
$y_{t}=\beta_{0}+\beta_{y}y_{t-1}+\sum_{\hat{\beta}_{j}^{*}\neq0}\beta_{j}x_{j,t}+\sum_{\hat{\gamma}_{j}^{*}\neq0}\gamma_{j}x_{j,t-1}+\varepsilon_{t},\quad t=h,\dots,T+h-1.$
Only one-step-ahead forecasts are generated and evaluated:
$\hat{y}_{T+h|T+h-1}=\hat{\beta}_{0}+\hat{\beta}_{y}y_{T+h-1}+\sum_{\hat{\beta}_{j}^{*}\neq0}\hat{\beta}_{j}\tilde{x}_{j,T+h}+\sum_{\hat{\gamma}_{j}^{*}\neq0}\hat{\gamma}_{j}x_{j,T+h-1},\quad h=1,\dots,H.$
The out-of-sample values x ˜ j , T + h of the regressors in (38) are unknown when forming the forecasts. We consider a range of forecast devices that can supply these missing values:
inf: 
future outcomes: $\tilde{x}_{j,T+h}=x_{j,T+h}$;
avg: 
the in-sample average: $\tilde{x}_{j,T+h}=T^{-1}\sum_{t=h}^{T+h-1}x_{j,t}$;
arx: 
an AR(1) for each regressor: $\tilde{x}_{j,T+h}=\hat{\mu}_{j}+\hat{\rho}_{j}x_{j,T+h-1}$, estimated by OLS for each horizon from:
$x_{j,t}=\mu_{j}+\rho_{j}x_{j,t-1}+u_{j,t},\quad t=h,\dots,T+h-1;$
rwx: 
the random walk forecast: $\tilde{x}_{j,T+h}=x_{j,T+h-1}$;
rdx: 
a random walk with differencing (Hendry 2006), using differenced estimates from (39):
$\tilde{x}_{j,T+h}=x_{j,T+h-1}+\hat{\rho}_{j}\Delta x_{j,T+h-1}.$
cax: 
Cardt forecast of $\tilde{x}_{j,T+h}$.
In addition, several alternatives that ignore the regressors are considered:
rwy: 
a random walk forecast: $\hat{y}_{T+h}=y_{T+h-1}$;
ary: 
an AR(1) forecast: $\hat{y}_{T+h}=\hat{\gamma}_{0}+\hat{\gamma}_{1}y_{T+h-1}$, estimated by OLS for each horizon;
cay: 
Cardt forecasts of $\hat{y}_{T+h}$.
Model selection is performed using Autometrics (Doornik 2009) for a range of target significance levels α = 0.001 , 0.01 , 0.05 , 0.1 , 0.16 , 0.32 . Forecasting from a re-estimated GUM (37) without selection is also considered (i.e., α = 1 ). Dropping all regressors (i.e., α = 0 ) leaves the AR(1) model for y t .
The devices that forecast the regressors supply plug-in values to allow forecasting with the GUM (36), as well as the reductions (37) of the GUM, at a range of nominal significance levels. Device inf uses future outcomes, making it infeasible for stochastic variables. Note that all devices using regressors benefit from some knowledge that is not available in practice, namely that the DGP is nested in the GUM, and the GUM is not misspecified. The fact that the regressors are exchangeable and break at the same time in the same way may also help: finding just one that matters could already improve the forecasts.
Cardt is a slightly improved version of Card (calibrated average of rho and delta methods), see Doornik et al. (2020a), which performed very well in the M4 forecast competition of Makridakis et al. (2020). Cardt averages forecasts from a differenced, autoregressive, and a moving average model. These are then treated as future observations in a calibration model with richer autoregressive structure. The full procedure is documented in Castle et al. (2021). Cardt pays particular attention to seasonality, which is irrelevant here. We use Cardt to make four forecasts, then use the first of these. The method will take logarithms by default. Switching that off makes little difference in these experiments. Cardt is used in daily COVID-19 forecasts of Doornik et al. (2020b).
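The feasible single-regressor devices can be sketched in a few lines. This is our own illustration of the definitions above (the infeasible `inf` and the composite Cardt `cax` are omitted), with the AR(1) fitted by OLS:

```python
import numpy as np

def forecast_x(xj, device):
    """One-step forecast of x_{j,T+1} from the in-sample data xj, for the
    devices avg, arx, rwx, and rdx defined in Section 8.2 (a sketch)."""
    T = len(xj)
    if device == "avg":                  # in-sample average
        return xj.mean()
    if device == "rwx":                  # random walk
        return xj[-1]
    # AR(1) by OLS: x_t = mu + rho * x_{t-1} + u_t
    X = np.column_stack([np.ones(T - 1), xj[:-1]])
    mu, rho = np.linalg.lstsq(X, xj[1:], rcond=None)[0]
    if device == "arx":                  # AR(1) forecast
        return mu + rho * xj[-1]
    if device == "rdx":                  # random walk with differencing (Hendry 2006)
        return xj[-1] + rho * (xj[-1] - xj[-2])
    raise ValueError(device)
```

In the rolling experiments, each device would be re-applied to the window $t=h,\dots,T+h-1$ at every forecast origin, as in (39).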

8.3. Selecting Regressors

The noncentrality ψ β in the DGP affects the probabilities of retaining a variable in the model selection procedure. Table 6 shows the probability of retaining one or all relevant regressors assuming independent t-tests. While the probability of retaining one variable may be quite large, the joint probability of retaining all can be extremely low. Thus, even using a significance level of 16%, many relevant variables will be omitted if their noncentralities are small. However, their contribution to explaining the dependent variable is also small and breaks in such variables will have a smaller effect.
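The retention probabilities behind Table 6 can be approximated with a normal approximation to the power of a two-sided t-test; the sketch below is our own calculation, not the paper's exact noncentral-t computation. Under independence, the joint probability of retaining all relevant regressors is the product of the individual powers.

```python
from statistics import NormalDist

nd = NormalDist()

def p_retain(psi, alpha):
    """Normal approximation to the power of a two-sided t-test with
    noncentrality psi at significance level alpha."""
    c = nd.inv_cdf(1 - alpha / 2)
    return (1 - nd.cdf(c - psi)) + nd.cdf(-c - psi)

one = p_retain(1.2, 0.16)   # a single psi = 1.2 regressor: retained well under half the time
joint = one ** 10           # all ten such regressors jointly: extremely small
```

At $\psi=0$ the function returns $\alpha$ itself, which is the null retention rate that the gauge tracks.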
The fraction of relevant variables that is retained in the Monte Carlo experiment is denoted the potency, and the fraction of irrelevant variables that is retained is denoted the gauge. We always retain the intercept and lagged y, so the GUM (36) has 2 N possible variables to select over, of which n are relevant. For m = 1 , , M replications we define the indicator function 1 { · } and:
$\text{gauge}_{m}=\frac{1}{2N-n}\left(\sum_{j=1}^{N-n}1\{\hat{\beta}_{j,m}\neq0\}+\sum_{j=1}^{N}1\{\hat{\gamma}_{j,m}\neq0\}\right),\qquad\text{potency}_{m}=\frac{1}{n}\sum_{j=N-n+1}^{N}1\{\hat{\beta}_{j,m}\neq0\}.$
This is then averaged over all replications.
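A per-replication implementation of these two statistics is straightforward; the sketch below (ours) takes the estimated coefficient vectors from one replication, with the last $n$ betas relevant and everything else, including all gammas, irrelevant:

```python
import numpy as np

def gauge_potency(beta_hat, gamma_hat, n):
    """Gauge and potency for one replication: beta_hat and gamma_hat are the
    selected-model coefficient estimates (zero when a variable was dropped)."""
    N = len(beta_hat)
    b = np.asarray(beta_hat) != 0
    g = np.asarray(gamma_hat) != 0
    gauge = (b[:N - n].sum() + g.sum()) / (2 * N - n)   # retained irrelevant share
    potency = b[N - n:].mean()                          # retained relevant share
    return gauge, potency
```

For example, with $N=3$, $n=1$, retained betas `[0, 1, 1]` and gammas `[0, 0, 1]`, the gauge is $2/5=0.4$ and the potency is 1; averaging over the $M$ replications gives the reported figures.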
Table 7 shows that the empirical gauge matches the theoretical probabilities in Table 6 when using Autometrics for selection: the gauge is higher than α but not by much. Potencies are close to the powers of one-off t-tests with the same noncentralities, up to α = 0.1 , beyond that they fall behind. Consequently, it is appropriate to use Autometrics to investigate the theoretical results by simulating a more general setting, without concern that the selection algorithm will influence the results relative to the single t-test approach analyzed above.

9. Simulation Evidence

Simulation evidence is presented using the design of Section 8.1 and forecast devices of Section 8.2. All experiments use M = 10,000 and are implemented in Ox 9 (Doornik 2018) and PcGive (Hendry and Doornik 2018). We start with out-of-sample forecasts in Section 9.1, when the break is unanticipated. Then Section 9.2 compares breaks in relevant and irrelevant variables, Section 9.3 looks at forecasts after the break, Section 9.4 considers selection, Section 9.5 introduces pooled forecasts, and Section 9.6 summarizes.

9.1. Forecasting before the Break

The top half of Table 8 is for the case without breaks, when forecasting T + 1 | T is similar to forecasting T + 2 | T + 1 , etc. The table reports the ratio of the MSFE for devices inf, avg, arx, rwx respectively to the MSFE of ary for a range of significance levels α . Selection at α = 0 implies dropping all the regressors, leaving an AR(1) in y, denoted ary. The bottom row of each half gives the MSFE of ary. Not selecting at all ( α = 1 ) coincides with the GUM.
Without a break, knowing the future value of the regressors (device inf) is only useful when they are significant. Using the sample mean avg never improves one-step forecasting relative to ary. This also holds when there is a break, and is even more pronounced for $T+2|T+1$ and $T+3|T+2$ (not shown). We see that MSFE$_{ary}$ increases when there are more highly significant variables. There is an improvement over ary from forecasting the regressors with arx at strict significance levels for $\psi^{(4)}$. In this stationary DGP without breaks, arx dominates rwx: it is better to model the regressors by an autoregression (the true model) than to take the last known value.
The bottom half of Table 8 is for the cases with an out-of-sample break in the relevant variables only. The ratios for the five break settings (in mean, in slope, and in mean and slope, for (a) and (b)) are averaged. Now it really would help to know the future. There is only a small penalty for including irrelevant regressors, as their influence is swamped by the break. Except for the sample means, both feasible methods perform on a par with ary. The infeasible device is best with loose selection, as was found theoretically.

9.2. Selection and Location of the Break

The design of the experiments allows for three locations of the break. Table 9 gives the mean square forecast errors for a break in mean and slope (b), listing three cases.
Break in relevant regressors 
($\delta_{R}=-0.3$, $\lambda_{R}=0.05$, $\delta_{I}=\delta$, $\lambda_{I}=\lambda$)
The break shows up in y through the relevant variables. Inclusion of irrelevant variables in the forecasting model is not costly relative to the impact of the break. Loose selection is preferred, because it includes more relevant variables. For T + 1 | T selection has no impact because the break is not observed (except for known regressors). Including regressors in arx and rwx gives a substantial improvement over ary.
Break in irrelevant regressors 
($\delta_{I}=-0.3$, $\lambda_{I}=0.05$, $\delta_{R}=\delta$, $\lambda_{R}=\lambda$)
There is no break in y, so any inclusion of irrelevant variables is costly, as their break offsets the small estimated coefficients. The more irrelevant variables included, the stronger this effect. The autoregression in y is almost always preferred.
Break in all regressors 
($\delta_{R}=\delta_{I}=-0.3$, $\lambda_{R}=\lambda_{I}=0.05$)
The y variable is identical to that of a break in relevant variables only. Selection is now a trade-off between including variables that matter and help with forecasting, and irrelevant variables that make forecasts worse. Including regressors in arx and rwx gives a substantial improvement over ary.

9.3. Forecasting after the Break

We now dispense with inf, given its infeasibility, and with avg, because it has the highest MSFE in all experiments. Table 10 reports the ratio of the MSFE for all other devices to that of ary. For the devices that forecast regressor values, results are reported after selection at 10%.
When there is no break, only arx is able to gain on ary, and then only for the design with significant regressors (but stricter selection would help; see Table 8). Otherwise, and always for the break in irrelevant variables only, the AR(1) in y has the smallest mean square forecast error. This matches an oft-found outcome. This model is misspecified, ignoring all information from the exogenous regressors, but misspecification need not entail forecast failure. Indeed, the costs of forecasting the exogenous regressors can outweigh the benefits of their inclusion. However, the DGP design is also an AR(1) in y, so this forecasting device has the advantage of correctly specifying the dynamics. It may not perform so well if the DGP contains more complex dynamics.
The AR(1) in y performs poorly when relevant regressors break. Now we see substantial gains in Table 10 from modeling the regressors, even shortly after the break has finished (the break is active for T + 1 and T + 2 ).
Device rdx improves on rwx when the process shifts towards a unit root, but not otherwise. Cardt behaves quite similarly to the random walk forecasts in this DGP: cax is close to rwx in most cases. Cardt on y usually gives a small improvement on rwy in the cases with a break.
The AR(1) for x always improves on ary in the cases with break. In the first period with an observed break, T + 2 , it is the worst of the methods that forecast regressors, while in subsequent periods it is the best of these. But note that at T + 3 the naive random walk forecast of y and Cardt are better still.

9.4. Is Selection Costly When Forecasting?

Comparing selection to using the GUM to forecast regressors, we find that selection is always advantageous. Table 11 gives the average MSFE ratio relative to ary, where the average is taken over the three noncentrality settings, and different break cases. The top panel of the table combines cases where there is no change in y, either because nothing breaks, or for the break in mean and slope for irrelevant variables only. In that case ary tends to dominate, so tight selection is advantageous. The exception is highly significant regressors in a stationary setting.
The bottom panel of Table 11 averages over the five cases where all variables break. There we often see a U-shaped effect of selection, with a loose selection best. This is particularly so at T + 2 | T + 1 , as was found in the theoretical results.
The bottom row in each panel of Table 11 gives the result when the specification of the DGP is known but its parameters need to be estimated. The entries under inf have the most information: the DGP as well as the future values of the regressors. Moving to the other columns shows the cost of not knowing the latter.

9.5. Forecast Combinations

Many investigations of forecasting have shown that combined forecasts can outperform the individual forecasts. The main candidates here are arx in combination with a random walk style forecast of y. Although there are many other possibilities, we restrict ourselves to:
apool: (arx + rwy)/2;
cpool: (arx + cay)/2.
In both cases arx is used in the model that is selected from the GUM at 10 % .
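As a minimal sketch of these equal-weight combinations (the helper names are hypothetical; the arx, rwy and cay forecasts are assumed to be available as arrays of one-step forecasts of y):

```python
import numpy as np

def pooled_forecasts(f_arx, f_rwy, f_cay):
    """Equal-weight forecast combinations from the text:
    apool = (arx + rwy)/2 and cpool = (arx + cay)/2."""
    f_arx, f_rwy, f_cay = map(np.asarray, (f_arx, f_rwy, f_cay))
    apool = 0.5 * (f_arx + f_rwy)
    cpool = 0.5 * (f_arx + f_cay)
    return apool, cpool

def msfe(y, f):
    """Mean squared forecast error of forecasts f for outcomes y."""
    y, f = np.asarray(y), np.asarray(f)
    return np.mean((y - f) ** 2)
```

The ratios reported in Table 12 would then be msfe(y, apool) / msfe(y, f_ary) and so on, for each experimental design.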
To summarize the results, we consider again the MSFE relative to ary, with a three-way average across noncentralities, break types and horizons T + 2 , T + 3 , T + 4 . Table 12 illustrates that in this setting pooling can be advantageous as well. It is even competitive with the infeasible device.

9.6. Summary of the Simulation Results

We can infer some general results from the experiments. First, using the in-sample mean to forecast the exogenous regressors is always dominated by other approaches.
Next, when the break occurs out of sample, so forecasts are computed for T + 1 , all methods struggle, and incorporating regressors is worse than simply using the AR(1) for y. Moving to the case when the break occurs in sample, so forecasts are computed for T + 2 with the break at T + 1 , the random walk forecast of the regressors is preferred when the break occurs in the relevant or in all regressors. Looser significance levels tend to do well here. If the breaks occur in the irrelevant regressors, including even one can already be poisonous, and the AR(1) in y performs best.
There are substantial differences in the forecast performance of the two robust devices rwx and rdx. The former is the random walk for the regressor, and works best, except if the break drives the process towards a unit root. In that case, the differenced AR(1) for x gives a higher weight to the previous value. However, when the type of break is unknown, represented by the average performance here, the simple random walk dominates.
Table 12, rather arbitrarily, averages over all experiments and horizons. It shows that pooling provides some protection against different states of nature, just inching ahead of the autoregression in y. After that come the methods that ignore regressors, followed by using an AR(1), random walk, or Cardt, to forecast the regressors. However, if we know that a break has happened in the regressors, we should switch to modeling them, at least until the break is out of the system again.
The variation in MSFEs across α is very small for intermediate values of α relative to the variation in MSFEs across break types and DGP designs. For moderate α the selection significance level does not have a large impact on forecast performance. This is an encouraging finding showing that forecast performance is relatively unaffected by the precise choice of significance level for selection when using Autometrics, despite a range of noncentralities and numbers of relevant and irrelevant exogenous variables.

10. Conclusions

This paper investigates the choice of significance level and its associated critical value when selecting forecasting models, both analytically in a static bivariate setting where there are location shifts at the forecast origin, and in more general simulation experiments. The theory suggests that variables should be retained if their noncentralities exceed 1, which translates to c α 2 = 2 at the boundary. This result holds regardless of whether location shifts affect the variable about which a retention decision is made. Undertaking selection at such loose significance levels implies that fewer relevant variables will be excluded when they contribute to forecast accuracy, but that more variables will be retained by chance because they happen to be in a draw that results in statistical significance at the proposed critical value. Although retaining irrelevant variables that are subject to location shifts usually worsens forecast performance, their coefficient estimates will be driven towards zero when updating estimates as the horizon moves forward.
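The boundary value can be checked directly: under a normal approximation to the distribution of the t-statistic, a squared critical value of 2 corresponds to a two-sided significance level of about 16%. A small sketch using only the standard library:

```python
from math import erf, sqrt

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Two-sided tail probability of |t| > c under the null, normal approximation.
c = sqrt(2.0)                    # critical value with c_alpha^2 = 2
alpha = 2.0 * (1.0 - norm_cdf(c))
print(round(alpha, 3))           # roughly 0.157, i.e. close to the 16% level
```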
Although the static design is simple, it produces several generic analytical results. Those results hold regardless of whether the regressors are contemporaneous or lagged, although the timing of location shifts is fundamental. Dynamics will slow adjustment to new equilibria, but this would not change the essence of the results. The inflation forecasts illustrated the analytic results, with a loose selection significance level of 16% being preferred both for the known regressors and for the random walk forecasts of the unknown regressors.
The simulation evidence examines a wide range of experimental designs and despite the disparate outcomes, they provide some guidance for forecasting. The ideal scenario is obviously to have complete knowledge of the DGP, such that the empirical modeller knows the number and magnitude of both relevant and irrelevant regressors, and their future values, and hence whether and where breaks are likely to occur. In practice, no-one has the benefit of omniscience, and once the future values of regressors need to be forecast, selecting from a GUM that nests the DGP may cost little, relative to knowing the precise specification of the DGP.
The simulation results suggest that if the model is being used primarily for one-step-ahead forecasting with the aim of minimizing MSFE, selection at looser than standard selection significance levels may well help, and doing so will rarely hinder forecast performance. The results provide some support for selecting models at around 10% when there are approximately 15 regressors, many of which are irrelevant. This is close to the 16% derived theoretically in this paper when the number of irrelevant regressors is small. The simulation results also highlight the degree of complexity in pinning down the optimal selection rule for forecasting, with results depending on all aspects of the experimental design. A take-away for the forecaster is that pooling works well across many settings, suggesting the combination of a robust device, which minimizes systematic bias, with a model-based forecast built on univariate methods as a good insurance policy. Moreover, methods that did not nest the DGP, such as the direct AR(1) forecast of the dependent variable and Cardt, also performed well, both matching commonly found empirical outcomes. However, if we know that a break has happened, one-step forecasts are improved by incorporating forecasts of the regressors.

Author Contributions

Conceptualization, J.L.C., J.A.D. and D.F.H.; Methodology, J.L.C., J.A.D. and D.F.H.; Software, J.A.D.; Formal Analysis, J.L.C., J.A.D. and D.F.H.; Writing-Original Draft Preparation, J.L.C., J.A.D. and D.F.H.; Writing-Review and Editing, J.L.C., J.A.D. and D.F.H. All authors have read and agreed to the published version of the manuscript.

Funding

Financial support from the Robertson Foundation (award 9907422), the Institute for New Economic Thinking (grant 20029822), and the ERC (grant 694262, DisCont) is gratefully acknowledged.

Data Availability Statement

Data available from stated sources.

Acknowledgments

We thank participants at the 2018 International Symposium of Forecasting, the 7th Rhenish Multivariate Time Series Econometrics Meeting in Koblenz, the 20th OxMetrics Users Conference, and the 2nd Forecasting at Central Banks Conference at the Bank of England for helpful comments, as well as members of the Economics Department Econometrics Lunch group at Oxford University, Michael P. Clements, Andrew B. Martinez, Felix Pretis, and Sophocles Mavroeidis. We thank Michael McCracken for suggesting comparisons with bagging which we will investigate in future research. We are especially grateful to Neil Ericsson and two anonymous referees for their careful reading and many helpful comments.

Conflicts of Interest

Doornik and Hendry have developed Autometrics, which is included in the OxMetrics software package, and have a share in the returns.

Appendix A. Analytic Calculations

Appendix A.1.

Derivations for the equations reported in Section 3.
The DGP given in (1)–(3) results in
\[
\sqrt{T}\left(\begin{array}{c}\hat{\beta}_1-\beta_1\\ \hat{\beta}_2-\beta_2\end{array}\right)\sim\mathsf{N}_2\left[\left(\begin{array}{c}0\\0\end{array}\right),\;\frac{\sigma_\epsilon^2}{\sigma_{11}^2\sigma_{22}^2\left(1-\rho^2\right)}\left(\begin{array}{cc}\sigma_{22}^2 & -\rho\sigma_{11}\sigma_{22}\\ -\rho\sigma_{11}\sigma_{22} & \sigma_{11}^2\end{array}\right)\right],
\]
with:
\[
\sqrt{T}\left(\mu_y-\hat{\mu}_y\right)\sim\mathsf{N}\left[0,\sigma_\epsilon^2\right],
\]
where we subsequently set $\sigma_{11}=\sigma_{22}=1$ without loss of generality.
$\mathsf{M}_2$ in (6) partials out $x_{2,t}$. From (2) we can write, in deviations from means, for $t=1,\ldots,T$:
\[
x_{2,t}-\mu_2=\rho\left(x_{1,t}-\mu_1\right)+e_t,
\]
such that $e_t=\eta_{2,t}-\rho\eta_{1,t}$, so $\gamma_1=\beta_1+\beta_2\rho$ and $\phi_0=\mu_y-\gamma_1\mu_1$. Hence $\mathsf{M}_2$ is:
\[
y_t=\mu_y+\left(\beta_1+\beta_2\rho\right)\left(x_{1,t}-\mu_1\right)+\beta_2e_t+\epsilon_t=\gamma_0+\gamma_1\left(x_{1,t}-\mu_1\right)+\nu_t,
\]
with $\gamma_0=\mu_y$. The error for $\mathsf{M}_2$ is given by:
\[
\nu_t=\beta_2\left(\eta_{2,t}-\rho\eta_{1,t}\right)+\epsilon_t,
\]
where
\[
\sigma_\nu^2=\sigma_\epsilon^2+\beta_2^2\left(1-\rho^2\right)=\sigma_\epsilon^2\left(1+T^{-1}\psi_{\beta_2}^2\right). \tag{A1}
\]
Also
\[
\sqrt{T}\left(\begin{array}{c}\tilde{\gamma}_0-\gamma_0\\ \tilde{\gamma}_1-\gamma_1\end{array}\right)\sim\mathsf{N}_2\left[\left(\begin{array}{c}0\\0\end{array}\right),\;\sigma_\nu^2\left(\begin{array}{cc}1&0\\0&1\end{array}\right)\right].
\]

Appendix A.2.

Derivations for the equations reported in Section 4.
The one-step-ahead forecast error from $\mathsf{M}_1$ is:
\[
\hat{\epsilon}_{T+1|T}=y_{T+1}-\hat{y}_{T+1|T}=\left(\mu_y-\hat{\mu}_y\right)+\left(\beta_1-\hat{\beta}_1\right)\left(x_{1,T+1}-\mu_1\right)+\left(\beta_2-\hat{\beta}_2\right)\left(x_{2,T+1}-\mu_2\right)+\epsilon_{T+1}.
\]
When there are no breaks, the parameter estimates are unbiased, $\mathsf{E}[\hat{\epsilon}_{T+1|T}]=0$, so the MSFE of $\mathsf{M}_1$ is:
\[
\mathsf{E}\left[\hat{\epsilon}_{T+1|T}^2\right]=\sigma_\epsilon^2\left(1+\frac{1}{T}+\frac{2}{T\left(1-\rho^2\right)}-\frac{2\rho^2}{T\left(1-\rho^2\right)}\right)=\sigma_\epsilon^2\left(1+\frac{3}{T}\right).
\]
The one-step-ahead forecast error from $\mathsf{M}_2$, in which $x_{2,t}$ is omitted, is:
\[
\tilde{\epsilon}_{T+1|T}=y_{T+1}-\tilde{y}_{T+1|T}=\beta_2\eta_{2,T+1}+\epsilon_{T+1}+\left(\gamma_0-\tilde{\gamma}_0\right)+\left(\beta_1-\tilde{\gamma}_1\right)\eta_{1,T+1}.
\]
Therefore, despite the misspecification, $\mathsf{E}[\tilde{\epsilon}_{T+1|T}]=0$ and the MSFE is:
\[
\mathsf{E}\left[\tilde{\epsilon}_{T+1|T}^2\right]=\mathsf{E}\left[\left(\beta_2\eta_{2,T+1}+\epsilon_{T+1}+\left(\gamma_0-\tilde{\gamma}_0\right)+\left(\beta_1-\tilde{\gamma}_1\right)\eta_{1,T+1}\right)^2\right]=\sigma_\nu^2\left(1+\frac{2}{T}\right).
\]
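These no-break formulas can be checked by simulation. The sketch below (illustrative parameter values, not the paper's experimental design; numpy assumed) draws the static DGP repeatedly, estimates both models by OLS, and compares the simulated one-step MSFEs with $\sigma_\epsilon^2(1+3/T)$ and $\sigma_\nu^2(1+2/T)$:

```python
import numpy as np

rng = np.random.default_rng(12345)
T, R = 50, 20000                 # sample size and Monte Carlo replications
beta1, beta2, rho, mu_y = 1.0, 0.5, 0.5, 0.0
s_eps = 1.0                      # sigma_epsilon

def draw_x(n):
    # x1, x2 standard normal with correlation rho (mu1 = mu2 = 0).
    z = rng.standard_normal((n, 2))
    return z[:, 0], rho * z[:, 0] + np.sqrt(1 - rho**2) * z[:, 1]

se1 = se2 = 0.0
for _ in range(R):
    x1, x2 = draw_x(T + 1)
    y = mu_y + beta1 * x1 + beta2 * x2 + s_eps * rng.standard_normal(T + 1)
    # M1: regress y on (1, x1, x2); M2 omits x2.
    X1 = np.column_stack([np.ones(T), x1[:T], x2[:T]])
    X2 = np.column_stack([np.ones(T), x1[:T]])
    b1 = np.linalg.lstsq(X1, y[:T], rcond=None)[0]
    b2 = np.linalg.lstsq(X2, y[:T], rcond=None)[0]
    se1 += (y[T] - (b1[0] + b1[1] * x1[T] + b1[2] * x2[T])) ** 2
    se2 += (y[T] - (b2[0] + b2[1] * x1[T])) ** 2

msfe1, msfe2 = se1 / R, se2 / R
s_nu2 = s_eps**2 + beta2**2 * (1 - rho**2)
print(msfe1, s_eps**2 * (1 + 3 / T))   # simulation vs sigma_eps^2 (1 + 3/T)
print(msfe2, s_nu2 * (1 + 2 / T))      # simulation vs sigma_nu^2 (1 + 2/T)
```

The finite-sample MSFEs differ from the O(T⁻¹) approximations only by smaller-order terms, so the two printed pairs agree closely at T = 50.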

Appendix A.3.

Derivations for the equations reported in Section 5.2.
The regression equation itself stays constant, so:
\[
y_{T+1}=\mu_y+\beta_2\delta+\beta_1\left(x_{1,T+1}-\mu_1\right)+\beta_2\left(x_{2,T+1}-\mu_2-\delta\right)+\epsilon_{T+1}. \tag{A2}
\]
Consequently, using $\hat{\beta}_0=\mu_y-\hat{\beta}_1\mu_1-\hat{\beta}_2\mu_2$ to match the formulation of $\mathsf{M}_2$, the forecast for $\mathsf{M}_1$ is:
\[
\bar{\hat{y}}_{T+1|T+1}=\mu_y+\hat{\beta}_2\delta+\hat{\beta}_1\left(x_{1,T+1}-\mu_1\right)+\hat{\beta}_2\left(x_{2,T+1}-\mu_2-\delta\right),
\]
and the one-step-ahead forecast error for $\mathsf{M}_1$ is:
\[
\bar{\hat{\epsilon}}_{T+1|T+1}=y_{T+1}-\bar{\hat{y}}_{T+1|T+1}=\left(\beta_2-\hat{\beta}_2\right)\delta+\left(\beta_1-\hat{\beta}_1\right)\eta_{1,T+1}+\left(\beta_2-\hat{\beta}_2\right)\eta_{2,T+1}+\epsilon_{T+1},
\]
and a one-step-ahead MSFE of:
\[
\mathsf{E}\left[\bar{\hat{\epsilon}}_{T+1|T+1}^2\right]=\sigma_\epsilon^2\left(1+\frac{\delta^2+2\left(1-\rho^2\right)}{T\left(1-\rho^2\right)}\right).
\]
Next consider the one-step-ahead forecast for $\mathsf{M}_2$, given $\gamma_0=\mu_y$ and $\gamma_1=\beta_1+\beta_2\rho$:
\[
\bar{\tilde{y}}_{T+1|T+1}=\tilde{\gamma}_0+\tilde{\gamma}_1\left(x_{1,T+1}-\mu_1\right).
\]
The one-step-ahead forecast error is given by:
\[
\bar{\tilde{\epsilon}}_{T+1|T+1}=y_{T+1}-\bar{\tilde{y}}_{T+1|T+1}=\beta_2\delta+\left(\gamma_0-\tilde{\gamma}_0\right)+\left(\gamma_1-\tilde{\gamma}_1\right)\eta_{1,T+1}-\beta_2\rho\eta_{1,T+1}+\beta_2\eta_{2,T+1}+\epsilon_{T+1},
\]
and the one-step-ahead MSFE for $\mathsf{M}_2$ is:
\[
\mathsf{E}\left[\bar{\tilde{\epsilon}}_{T+1|T+1}^2\right]=\sigma_\epsilon^2+\beta_2^2\left(1-\rho^2+\delta^2\right)+2T^{-1}\sigma_\nu^2.
\]

Appendix A.4.

Derivations for the equations reported in Section 5.4.
For $\hat{\beta}_0=\mu_y-\hat{\beta}_1\mu_1-\hat{\beta}_2\mu_2$, replacing the unknown $x_{i,T+1}$ by $\mu_i$ leads to forecasting $y_{T+1}$ by the in-sample mean:
\[
\hat{\hat{y}}_{T+1|T}=\mu_y,
\]
so the forecast error for $\mathsf{M}_1$ is:
\[
\hat{\hat{\epsilon}}_{T+1|T}=y_{T+1}-\hat{\hat{y}}_{T+1|T}=\beta_2\delta+\beta_1\eta_{1,T+1}+\beta_2\eta_{2,T+1}+\epsilon_{T+1},
\]
and the forecast error bias is:
\[
\mathsf{E}\left[\hat{\hat{\epsilon}}_{T+1|T}\right]=\beta_2\delta.
\]
The $\mathrm{MSFE}_1$ is:
\[
\mathsf{E}\left[\hat{\hat{\epsilon}}_{T+1|T}^2\right]=\beta_1^2+\beta_2^2\left(1+\delta^2\right)+2\rho\beta_1\beta_2+\sigma_\epsilon^2.
\]
Parameter estimation adds terms of $O_p\left(T^{-1}\right)$.
Similarly, for $\mathsf{M}_2$, from (6) forecasting $x_{1,T+1}$ by $\mu_1$ leads to:
\[
\tilde{\tilde{y}}_{T+1|T}=\mu_y,
\]
and hence for 'known' $\mu_y$ the forecast error is:
\[
\tilde{\tilde{\epsilon}}_{T+1|T}=\beta_2\delta+\beta_1\eta_{1,T+1}+\beta_2\eta_{2,T+1}+\epsilon_{T+1}=\hat{\hat{\epsilon}}_{T+1|T},
\]
with
\[
\mathsf{E}\left[\tilde{\tilde{\epsilon}}_{T+1|T}\right]=\beta_2\delta,
\]
and $\mathrm{MSFE}_2$ is given by (23). Hence, ignoring $O_p\left(T^{-1}\right)$ terms, $\mathrm{MSFE}_2=\mathrm{MSFE}_1$.
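A quick numerical check of the bias and MSFE₁ formulas, under illustrative parameter values (a sketch, not the paper's design; the x₂ mean shifts by δ at T + 1 and the forecast is the known in-sample mean μ_y):

```python
import numpy as np

rng = np.random.default_rng(7)
R = 200000
beta1, beta2, rho, delta, mu_y = 1.0, 0.5, 0.5, 2.0, 0.0

# Forecast-period shocks: eta1, eta2 standard normal with correlation rho.
z = rng.standard_normal((R, 2))
eta1 = z[:, 0]
eta2 = rho * z[:, 0] + np.sqrt(1 - rho**2) * z[:, 1]
eps = rng.standard_normal(R)

# y at T+1 when x2 has shifted by delta; forecast is the in-sample mean mu_y.
y_next = mu_y + beta1 * eta1 + beta2 * (delta + eta2) + eps
err = y_next - mu_y
print(err.mean())        # close to beta2 * delta = 1.0
print((err**2).mean())   # close to beta1^2 + beta2^2 (1 + delta^2) + 2 rho beta1 beta2 + 1 = 3.75
```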

Appendix A.5.

Derivations for the equations reported in Section 5.5.
From (A2) the regression equation for $y_{T+1}$ can also be written as:
\[
y_{T+1}=\mu_y+\beta_2\delta+\beta_1\Delta x_{1,T+1}+\beta_2\left(\Delta x_{2,T+1}-\delta\right)+\epsilon_{T+1}+\beta_1\eta_{1,T}+\beta_2\eta_{2,T}.
\]
Furthermore, the forecast for $\mathsf{M}_1$ using (24) and (25) is:
\[
\bar{\bar{y}}_{T+1|T}=\mu_y+\hat{\beta}_1\left(x_{1,T}-\mu_1\right)+\hat{\beta}_2\left(x_{2,T}-\mu_2\right),
\]
so the forecast error for $\mathsf{M}_1$ is:
\[
\bar{\bar{\epsilon}}_{T+1|T}=y_{T+1}-\bar{\bar{y}}_{T+1|T}=\beta_2\delta+\beta_1\Delta x_{1,T+1}+\beta_2\left(\Delta x_{2,T+1}-\delta\right)+\left(\beta_1-\hat{\beta}_1\right)\eta_{1,T}+\left(\beta_2-\hat{\beta}_2\right)\eta_{2,T}+\epsilon_{T+1}.
\]
Consequently, neglecting the small impact of $\eta_{i,T}$ on $\beta_i-\hat{\beta}_i$:
\[
\mathsf{E}\left[\bar{\bar{\epsilon}}_{T+1|T}\right]=\beta_2\delta,
\]
and hence $\mathrm{MSFE}_1$ is:
\[
\mathsf{E}\left[\bar{\bar{\epsilon}}_{T+1|T}^2\right]=2\beta_1^2+\beta_2^2\left(2+\delta^2\right)+4\rho\beta_1\beta_2+\sigma_\epsilon^2\left(1+2T^{-1}\right).
\]
Next, we compute the equivalent bias and MSFE for $\mathsf{M}_2$, noting $\gamma_1=\beta_1+\beta_2\rho$, so that the forecast is given by:
\[
\tilde{\bar{y}}_{T+1|T}=\tilde{\gamma}_0+\tilde{\gamma}_1\left(x_{1,T}-\mu_1\right).
\]
As $\tilde{\gamma}_0=\gamma_0=\mu_y$, the forecast error for $\mathsf{M}_2$ using the random walk is:
\[
\tilde{\bar{\epsilon}}_{T+1|T}=y_{T+1}-\tilde{\bar{y}}_{T+1|T}=\beta_2\delta+\beta_1\Delta\eta_{1,T+1}+\beta_2\Delta\eta_{2,T+1}+\epsilon_{T+1}+\left(\beta_1-\tilde{\gamma}_1\right)\eta_{1,T}+\beta_2\eta_{2,T},
\]
where, as before:
\[
\mathsf{E}\left[\tilde{\bar{\epsilon}}_{T+1|T}\right]=\beta_2\delta.
\]
Neglecting the small impact of $\eta_{1,T}$ on $\tilde{\gamma}_1$, the MSFE for $\mathsf{M}_2$ is:
\[
\mathsf{E}\left[\tilde{\bar{\epsilon}}_{T+1|T}^2\right]=2\beta_1^2+\beta_2^2\left(3+\rho^2+\delta^2\right)+4\rho\beta_1\beta_2+\sigma_\epsilon^2\left(1+T^{-1}+T^{-2}\psi_{\beta_2}^2\right).
\]

Appendix A.6.

Derivations for the equations reported in Section 6.2.
The conditional DGP for the forecast observation is:
\[
y_{T+1}=\beta_0+\beta_1x_{1,T+1}+\beta_2x_{2,T+1}+\epsilon_{T+1}=\mu_y+\beta_2\delta+\beta_1\left(x_{1,T+1}-\mu_1\right)+\beta_2\left(x_{2,T+1}-\mu_2-\delta\right)+\epsilon_{T+1},
\]
where the in-sample mean $\mu_y$ is shifted to $\left(\mu_y+\beta_2\delta\right)$ at $T$. Sample calculations will be altered, as now $\mathsf{E}\left[\bar{x}_2\right]=\mu_2+T^{-1}\delta$ from:
\[
\bar{x}_2=\frac{1}{T}\sum_{t=1}^{T}x_{2,t}=\mu_2+T^{-1}\delta+\bar{\eta}_2,
\]
and, neglecting terms of $T^{-2}$ or smaller:
\[
\sigma_{22}^{*2}\approx\sigma_{22}^2+T^{-1}\delta^2,
\]
with $\sigma_{12}^{*}=\sigma_{12}$, implying that:
\[
\rho^{*}=\frac{\sigma_{12}}{\sigma_{11}\sigma_{22}^{*}}.
\]
The intercept is again included, with $\hat{\beta}_0=\mu_y-\hat{\beta}_1\mu_1-\hat{\beta}_2\mu_2$ to match the formulation of $\mathsf{M}_2$:
\[
\hat{\hat{y}}_{T+1|T+1}\approx\hat{\beta}_0+\hat{\beta}_1\mu_1+\hat{\beta}_2\left(\mu_2+T^{-1}\delta\right)=\mu_y+\hat{\beta}_2T^{-1}\delta,
\]
and hence, neglecting terms of $T^{-2}$ or smaller, the forecast error for $\mathsf{M}_1$ is:
\[
\hat{\hat{\epsilon}}_{T+1|T+1}=y_{T+1}-\hat{\hat{y}}_{T+1|T+1}\approx\beta_2\delta\left(1-T^{-1}\right)+\beta_1\eta_{1,T+1}+\beta_2\eta_{2,T+1}+\epsilon_{T+1},
\]
so the forecast error bias is given by:
\[
\mathsf{E}\left[\hat{\hat{\epsilon}}_{T+1|T+1}\right]\approx\beta_2\delta\left(1-T^{-1}\right).
\]
The MSFE for $\mathsf{M}_1$ is:
\[
\mathsf{E}\left[\hat{\hat{\epsilon}}_{T+1|T+1}^2\right]\approx\beta_2^2\delta^2\left(1-T^{-1}\right)^2+\beta_1^2+\beta_2^2+\sigma_\epsilon^2.
\]
Omitting $x_2$ from the forecasting equation leads to a forecast error of:
\[
\hat{\bar{\epsilon}}_{T+1|T+1}=y_{T+1}-\hat{\bar{y}}_{T+1|T+1}\approx\beta_2\delta+\left(\gamma_0-\tilde{\gamma}_0\right)+\left(\gamma_1-\tilde{\gamma}_1\right)\eta_{1,T+1}+\nu_{T+1},
\]
with an MSFE for $\mathsf{M}_2$ given by:
\[
\mathsf{E}\left[\hat{\bar{\epsilon}}_{T+1|T+1}^2\right]\approx\beta_2^2\delta^2+\sigma_\epsilon^2+\sigma_\nu^2\left(1+\frac{2}{T}\right),
\]
where $\sigma_\nu^2$ is given in (A1).

Appendix A.7.

Derivations for the equations reported in Section 6.4 and Section 6.5.
Following a similar strategy to the previous analysis, and including the intercept for comparability, where $\hat{\beta}_0=\mu_y-\hat{\beta}_1\mu_1-\hat{\beta}_2\mu_2$, the forecast for $\mathsf{M}_1$ is:
\[
\bar{\hat{y}}_{T+1|T+1}=\hat{\beta}_0+\hat{\beta}_1\tilde{\tilde{x}}_{1,T+1|T}+\hat{\beta}_2\tilde{\tilde{x}}_{2,T+1|T}=\mu_y+\hat{\beta}_2\delta+\hat{\beta}_1\eta_{1,T}+\hat{\beta}_2\eta_{2,T},
\]
so that the forecast error for $\mathsf{M}_1$ is:
\[
\bar{\hat{\epsilon}}_{T+1|T}=y_{T+1}-\bar{\hat{y}}_{T+1|T}=\left(\beta_2-\hat{\beta}_2\right)\delta+\beta_1\Delta\eta_{1,T+1}+\beta_2\Delta\eta_{2,T+1}+\epsilon_{T+1}+\left(\beta_1-\hat{\beta}_1\right)\eta_{1,T}+\left(\beta_2-\hat{\beta}_2\right)\eta_{2,T},
\]
with $\mathsf{E}[\bar{\hat{\epsilon}}_{T+1|T}]=0$ when the parameter estimates are unbiased. The MSFE for $\mathsf{M}_1$ is:
\[
\mathsf{E}\left[\bar{\hat{\epsilon}}_{T+1|T}^2\right]=2\left(\beta_1^2+\beta_2^2+2\rho\beta_1\beta_2\right)+\sigma_\epsilon^2\left(1+T^{-1}\left(2+\delta^2\left(1-\rho^2\right)\right)\right).
\]
Next, we compute the random walk forecast for $\mathsf{M}_2$, so $\gamma_1=\beta_1+\beta_2\rho$ and $\gamma_0=\mu_y$, leading to the forecast given by:
\[
\tilde{\bar{y}}_{T+1|T}=\tilde{\gamma}_0+\tilde{\gamma}_1\left(x_{1,T}-\mu_1\right),
\]
and the forecast error for $\mathsf{M}_2$ is:
\[
\tilde{\bar{\epsilon}}_{T+1|T}=y_{T+1}-\tilde{\bar{y}}_{T+1|T}=\beta_2\delta+\beta_1\Delta\eta_{1,T+1}+\beta_2\Delta\eta_{2,T+1}+\epsilon_{T+1}+\left(\beta_1-\tilde{\gamma}_1\right)\eta_{1,T}+\beta_2\eta_{2,T},
\]
which is now biased for $\beta_2\delta\neq0$. The MSFE for $\mathsf{M}_2$ is:
\[
\mathsf{E}\left[\tilde{\bar{\epsilon}}_{T+1|T}^2\right]=2\beta_1^2+\beta_2^2\left(\delta^2+1+\rho^2\right)+4\rho\beta_1\beta_2+\sigma_\epsilon^2\left(1+T^{-1}+T^{-2}\psi_{\beta_2}^2\right).
\]
From (12):
\[
\mathrm{MSFE}_3=\mathrm{MSFE}_1+\left(1-p_{\alpha}\left(\psi_{\beta}\right)\right)\left[\beta_2^2\left(\delta^2+\rho^2-1\right)+\sigma_\epsilon^2\left(-\delta^2T^{-1}\left(1-\rho^2\right)-T^{-1}+T^{-2}\psi_{\beta_2}^2\right)\right].
\]

Appendix B

Table A1. Ratio of MSFE to that of MSFE₁. T = 100, otherwise as Table 3.

MSFE relative to MSFE₁
Model | ψβ₂² = 0 | ψβ₂² = 1 | ψβ₂² = 4 | ψβ₂² = 9 | ψβ₂² = 16
Section 4.1 and Section 4.2: No shift with known future regressors
α = 0 (M2) | 0.990 | 1.000 | 1.030 | 1.079 | 1.149
α = 0.001 | 0.990 | 1.000 | 1.026 | 1.048 | 1.035
α = 0.05 | 0.991 | 1.000 | 1.014 | 1.012 | 1.003
α = 0.16 | 0.992 | 1.000 | 1.008 | 1.004 | 1.001
Section 5.2 and Section 5.3: Out-of-sample shift with known future regressors
α = 0 (M2) | 0.827 | 1.008 | 1.551 | 2.457 | 3.724
α = 0.001 | 0.827 | 1.008 | 1.497 | 1.895 | 1.651
α = 0.05 | 0.836 | 1.007 | 1.267 | 1.217 | 1.056
α = 0.16 | 0.855 | 1.005 | 1.152 | 1.081 | 1.013
Section 5.4: Out-of-sample shift with mean forecast of future regressors
α = 0 (M2) | 1.000 | 1.000 | 1.000 | 1.000 | 1.000
α = 0.001 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000
α = 0.05 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000
α = 0.16 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000
Section 5.5: Out-of-sample shift with random walk forecast of future regressors
α = 0 (M2) | 0.997 | 1.002 | 1.013 | 1.024 | 1.033
α = 0.001 | 0.997 | 1.002 | 1.012 | 1.015 | 1.008
α = 0.05 | 0.997 | 1.002 | 1.006 | 1.004 | 1.001
α = 0.16 | 0.997 | 1.001 | 1.004 | 1.001 | 1.000
Section 6.2 and Section 6.3: In-sample shift with mean forecast of future regressors
α = 0 (M2) | 1.010 | 1.009 | 1.008 | 1.007 | 1.007
α = 0.001 | 1.010 | 1.009 | 1.008 | 1.005 | 1.002
α = 0.05 | 1.010 | 1.008 | 1.004 | 1.001 | 1.000
α = 0.16 | 1.008 | 1.006 | 1.002 | 1.000 | 1.000
Section 6.4 and Section 6.5: In-sample shift with random walk forecast of future regressors
α = 0 (M2) | 0.931 | 0.994 | 1.155 | 1.386 | 1.661
α = 0.001 | 0.931 | 0.994 | 1.140 | 1.237 | 1.158
α = 0.05 | 0.934 | 0.995 | 1.075 | 1.058 | 1.014
α = 0.16 | 0.942 | 0.996 | 1.043 | 1.021 | 1.003

Note

1. Clements and Hendry (1993) argue that the generalized forecast error second moment should be used to evaluate forecast performance instead of MSFE. In this case the results would be equivalent, because we focus on one-step-ahead forecasts.
2. The UK quarterly consumer price index (CPI) is given by ONS series D7BT, which is the quarterly average of the monthly index. The annual inflation percentage is defined as π_t = 100 Δ₄ log D7BT_t. UK unemployment is the quarterly average of ONS series MGUK, the LFS ILO unemployment rate (UK, all, aged 16 and over, %, NSA).
3. Intermediate alternatives such as sub-sample estimation, recursive or rolling estimation could also be used.
4. Castle et al. (2012) demonstrate the ability of IIS to detect breaks in the form of location shifts at any point in the sample.

References

1. Akaike, Hirotogu. 1973. Information theory and an extension of the maximum likelihood principle. In Second International Symposium of Information Theory. Edited by Boris N. Petrov and Frigyes Csaki. Budapest: Akademiai Kiado, pp. 267–81.
2. Bontemps, Christophe, and Grayham E. Mizon. 2003. Congruence and encompassing. In Econometrics and the Philosophy of Economics. Edited by Bernt P. Stigum. Princeton: Princeton University Press, pp. 354–78.
3. Campos, Julia, David F. Hendry, and Hans-Martin Krolzig. 2003. Consistent model selection by an automatic Gets approach. Oxford Bulletin of Economics and Statistics 65: 803–19.
4. Castle, Jennifer L., Jurgen A. Doornik, and David F. Hendry. 2012. Model selection when there are multiple breaks. Journal of Econometrics 169: 239–46.
5. Castle, Jennifer L., Jurgen A. Doornik, and David F. Hendry. 2021. Forecasting principles from experience with forecasting competitions. Forecasting 3: 138–65.
6. Castle, Jennifer L., Jurgen A. Doornik, David F. Hendry, and Felix Pretis. 2015. Detecting location shifts during model selection by step-indicator saturation. Econometrics 3: 240–64.
7. Castle, Jennifer L., Michael P. Clements, and David F. Hendry. 2015. Robust approaches to forecasting. International Journal of Forecasting 31: 99–112.
8. Chu, Chia-Shang, Maxwell Stinchcombe, and Halbert White. 1996. Monitoring structural change. Econometrica 64: 1045–65.
9. Clements, Michael P., and David F. Hendry. 1993. On the limitations of comparing mean squared forecast errors (with discussion). Journal of Forecasting 12: 617–37. Reprinted in Mills, Terence C., ed. 1999. Economic Forecasting. Cheltenham: Edward Elgar Publishing.
10. Clements, Michael P., and David F. Hendry. 1998. Forecasting Economic Time Series. Cambridge: Cambridge University Press.
11. Clements, Michael P., and David F. Hendry. 2001. Explaining the results of the M3 forecasting competition. International Journal of Forecasting 17: 550–54.
12. Doornik, Jurgen A. 2009. Autometrics. In The Methodology and Practice of Econometrics: A Festschrift in Honour of David F. Hendry. Edited by Jennifer L. Castle and Neil Shephard. Oxford: Oxford University Press, pp. 88–121.
13. Doornik, Jurgen A. 2018. Object-Oriented Matrix Programming Using Ox, 8th ed. London: Timberlake Consultants Press.
14. Doornik, Jurgen A., Jennifer L. Castle, and David F. Hendry. 2020a. Card forecasts for M4. International Journal of Forecasting 36: 129–34.
15. Doornik, Jurgen A., Jennifer L. Castle, and David F. Hendry. 2020b. Short-term forecasting of the coronavirus pandemic. International Journal of Forecasting, in press.
16. Fildes, Robert, and Keith Ord. 2002. Forecasting competitions—Their role in improving forecasting practice and research. In A Companion to Economic Forecasting. Edited by Michael P. Clements and David F. Hendry. Oxford: Blackwells, pp. 322–53.
17. Hendry, David F. 2006. Robustifying forecasts from equilibrium-correction models. Journal of Econometrics 135: 399–426.
18. Hendry, David F., and Grayham E. Mizon. 2012. Open-model forecast-error taxonomies. In Recent Advances and Future Directions in Causality, Prediction, and Specification Analysis. Edited by Xiaohong Chen and Norman R. Swanson. New York: Springer, pp. 219–40.
19. Hendry, David F., and Jurgen A. Doornik. 2018. Empirical Econometric Modelling—PcGive 15 Volume I. London: Timberlake Consultants Press.
20. Ing, Ching-Kang, and Ching-Zong Wei. 2003. On same-realization prediction in an infinite-order autoregressive process. Journal of Multivariate Analysis 85: 130–55.
21. Leeb, Hannes, and Benedikt M. Pötscher. 2009. Model selection. In Handbook of Financial Time Series. Edited by Torben Andersen, Richard A. Davis, Jens-Peter Kreiss and Thomas Mikosch. Berlin: Springer, pp. 889–926.
22. Makridakis, Spyros, and Michele Hibon. 2000. The M3-competition: Results, conclusions and implications. International Journal of Forecasting 16: 451–76.
23. Makridakis, Spyros, Evangelos Spiliotis, and Vassilios Assimakopoulos. 2020. The M4 competition: 100,000 time series and 61 forecasting methods. International Journal of Forecasting 36: 54–74.
24. Pötscher, Benedikt M. 1991. Effects of model selection on inference. Econometric Theory 7: 163–85.
25. Shibata, Ritei. 1980. Asymptotically efficient selection of the order of the model for estimating parameters of a linear process. Annals of Statistics 8: 147–64.
26. Stock, James, and Mark W. Watson. 2009. Phillips curve inflation forecasts. In Understanding Inflation and the Implications for Monetary Policy. Edited by Jeff Fuhrer, Yolanda Kodrzycki, Jane Sneddon Little and Giovanni Olivei. Cambridge: MIT Press, pp. 99–202.
Figure 1. (a) Quarterly average of CPI 12 month inflation rates for the UK (percent per annum); (b) quarterly UK unemployment rate in percent, with SIS detected mean shifts at α = 0.1 % .
Figure 2. MSFE 1 (solid lines computed from (8), circles by simulation) and MSFE 2 (dashed line computed from (9), squares by simulation).
Figure 3. The costs/benefits of selection measured by MSFE 3 MSFE 1 in (14).
Figure 4. Values of ( 1 p α ψ β ) for five independent regressors with the same noncentrality for a range of α and ψ β 2 .
Figure 5. MSFE comparisons of M 1 , M 2 and M 3 at 3 illustrative values of α for known future exogenous regressors where the break occurs in the mean of x 2 at T + 1 .
Figure 6. MSFE comparisons between M 1 , M 2 and M 3 for known and unknown future exogenous regressors including in-sample mean and random walk forecasts, where the break occurs in the mean of x 2 at T + 1 .
Figure 7. MSFE 1 , MSFE 2 , and MSFE 3 for unknown future exogenous regressors where the break occurs in the mean of x 2 at T and the in-sample mean is used as the forecast for the regressors. Included are the results when the break occurs at T + 1 .
Figure 8. MSFE comparisons between M 1 , M 2 and M 3 at α = 0.16 for unknown future exogenous regressors where the break occurs in the mean of x 2 at T and the last in-sample observation is used as the forecast for the conditioning regressors. Also recorded is the MSFE for M 1 and M 2 using in-sample means and a misspecified random walk for y T + 1 directly.
Figure 9. One replication of the DGP without break (solid line) and breaks as in Table 5, T = 100 , H = 5 .
Table 1. Root mean square error of one-step forecasts for Δπ_t over the period 2014Q1–2017Q4.

Conditioning on | M1 | M2 | M3
Known U_t | 0.535 | 0.530 | 0.515
Mean forecast for U_t | 0.519 | 0.530 | 0.542
Random walk forecast for U_t | 0.549 | 0.530 | 0.515
Table 2. Retention probabilities for individual t-tests given E[t_β̂₂] = ψβ.

ψβ | 1 | 2 | 3 | 4
P₀.₁₆ | 0.34 | 0.72 | 0.94 | 0.995
P₀.₀₅ | 0.16 | 0.51 | 0.85 | 0.98
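The retention probabilities in Table 2 can be approximated by treating the t-statistic as N(ψβ, 1), so that P_α = P(|t| > c_α) = 1 − Φ(c_α − ψβ) + Φ(−c_α − ψβ). A dependency-free sketch (the values differ slightly from the table, which may be based on a Student-t rather than a normal distribution):

```python
from math import erf, sqrt

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def norm_ppf(p, lo=-10.0, hi=10.0):
    # Inverse normal CDF by bisection (keeps the sketch dependency-free).
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def retention_prob(psi, alpha):
    """P(|t| > c_alpha) when t is approximately N(psi, 1): the chance
    that a regressor with noncentrality psi survives selection at alpha."""
    c = norm_ppf(1.0 - alpha / 2.0)
    return (1.0 - norm_cdf(c - psi)) + norm_cdf(-c - psi)

# Normal approximation to Table 2's P_0.16 row for psi = 1..4.
print([round(retention_prob(p, 0.16), 2) for p in (1, 2, 3, 4)])
```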
Table 3. Ratio of MSFE to that of MSFE₁, T = 50. M2 has no selection (α = 0); selection in M3 at α.

MSFE relative to MSFE₁
Model | ψβ₂² = 0 | ψβ₂² = 1 | ψβ₂² = 4 | ψβ₂² = 9 | ψβ₂² = 16
Section 4.1 and Section 4.2: No shift with known future regressors
α = 0 (M2) | 0.981 | 1.001 | 1.060 | 1.158 | 1.295
α = 0.001 | 0.981 | 1.000 | 1.051 | 1.093 | 1.068
α = 0.05 | 0.982 | 1.000 | 1.027 | 1.023 | 1.006
α = 0.16 | 0.984 | 1.000 | 1.016 | 1.008 | 1.001
Section 5.2 and Section 5.3: Out-of-sample shift with known future regressors
α = 0 (M2) | 0.709 | 1.014 | 1.927 | 3.450 | 5.582
α = 0.001 | 0.709 | 1.013 | 1.836 | 2.505 | 2.095
α = 0.05 | 0.724 | 1.011 | 1.449 | 1.366 | 1.095
α = 0.16 | 0.756 | 1.009 | 1.256 | 1.136 | 1.022
Section 5.4: Out-of-sample shift with mean forecast of future regressors
α = 0 (M2) | 1.000 | 1.000 | 1.000 | 1.000 | 1.000
α = 0.001 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000
α = 0.05 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000
α = 0.16 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000
Section 5.5: Out-of-sample shift with random walk forecast of future regressors
α = 0 (M2) | 0.993 | 1.004 | 1.020 | 1.034 | 1.043
α = 0.001 | 0.993 | 1.004 | 1.018 | 1.021 | 1.010
α = 0.05 | 0.994 | 1.003 | 1.010 | 1.005 | 1.001
α = 0.16 | 0.994 | 1.002 | 1.006 | 1.002 | 1.000
Section 6.2 and Section 6.3: In-sample shift with mean forecast of future regressors
α = 0 (M2) | 1.020 | 1.021 | 1.022 | 1.023 | 1.024
α = 0.001 | 1.020 | 1.021 | 1.020 | 1.014 | 1.006
α = 0.05 | 1.019 | 1.017 | 1.011 | 1.004 | 1.000
α = 0.16 | 1.017 | 1.014 | 1.006 | 1.001 | 1.000
Section 6.4 and Section 6.5: In-sample shift with random walk forecast of future regressors
α = 0 (M2) | 0.871 | 0.990 | 1.273 | 1.653 | 2.078
α = 0.001 | 0.871 | 0.990 | 1.246 | 1.401 | 1.258
α = 0.05 | 0.878 | 0.991 | 1.132 | 1.097 | 1.022
α = 0.16 | 0.892 | 0.993 | 1.075 | 1.036 | 1.005
Table 4. Impact on x when coefficients change from (δ, λ) = (2, 0.75) to (δΔ, λΔ).

(δΔ, λΔ) = | (2, 0.75) | (−0.3, 0.75) | (−0.3, 0.95) | (−0.3, 0.05) | (2, 0.05) | (2, 0.95)
x_{j,T+1|T} | 8 | 5.7 | 7.3 | 0.1 | 2.4 | 9.6
x_{j,T+2|T+1} | 8 | 4.0 | 6.6 | −0.3 | 2.1 | 11.1
x_{j,T+3|T+2} | 8 | 5.0 | 7.0 | 1.8 | 3.6 | 10.3
Table 5. Configurations of breaks in the simulations.

 | δΔ | λΔ
No break | 2 | 0.75
Break in mean | −0.3 | 0.75
Break in slope (a) | 2 | 0.95
Break in slope (b) | 2 | 0.05
Break in mean and slope (a) | −0.3 | 0.95
Break in mean and slope (b) | −0.3 | 0.05
Table 6. Probability of retaining one or all variables when the coefficients have the specified noncentrality, assuming independence, at nominal significance α and a Student-t(83) distribution.

ψβ = | 1.2 | 1.2 | 0.5 | 1 | 1.5 | 2 | 3 | 4 | Joint | Average | 4 | 4
α \ n = | 1 | 10 | 1 | 1 | 1 | 1 | 1 | 1 | 6 | 6 | 1 | 3
0.001 | 0.015 | 0.000 | 0.002 | 0.009 | 0.030 | 0.081 | 0.341 | 0.721 | 0.000 | 0.197 | 0.721 | 0.375
0.01 | 0.077 | 0.000 | 0.018 | 0.053 | 0.130 | 0.263 | 0.641 | 0.912 | 0.000 | 0.336 | 0.912 | 0.758
0.05 | 0.216 | 0.000 | 0.070 | 0.163 | 0.313 | 0.504 | 0.843 | 0.976 | 0.001 | 0.478 | 0.976 | 0.930
0.1 | 0.322 | 0.000 | 0.124 | 0.254 | 0.435 | 0.631 | 0.907 | 0.989 | 0.008 | 0.557 | 0.989 | 0.968
0.16 | 0.414 | 0.000 | 0.181 | 0.339 | 0.533 | 0.719 | 0.941 | 0.994 | 0.022 | 0.618 | 0.994 | 0.983
0.32 | 0.579 | 0.004 | 0.309 | 0.500 | 0.691 | 0.840 | 0.976 | 0.998 | 0.087 | 0.719 | 0.998 | 0.995
Table 7. Gauge and potency for three noncentrality designs, M = 10,000 replications.

 | Gauge | | | Potency | |
α | ψ(1) | ψ(2) | ψ(4) | ψ(1) | ψ(2) | ψ(4)
0.001 | 0.005 | 0.006 | 0.006 | 0.034 | 0.205 | 0.712
0.01 | 0.025 | 0.024 | 0.020 | 0.113 | 0.345 | 0.884
0.05 | 0.079 | 0.075 | 0.069 | 0.231 | 0.458 | 0.919
0.1 | 0.126 | 0.124 | 0.121 | 0.297 | 0.507 | 0.919
0.16 | 0.181 | 0.180 | 0.178 | 0.355 | 0.545 | 0.923
0.32 | 0.328 | 0.328 | 0.327 | 0.479 | 0.634 | 0.941
Table 8. No break and out-of-sample break. Ratio of MSFE to MSFEary, forecasting T+1|T. Columns per noncentrality (ψ(1), ψ(2), ψ(4)): inf, avg, arx, rwx.

Ratio: No break

| | inf | avg | arx | rwx | inf | avg | arx | rwx | inf | avg | arx | rwx |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| α = 0 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |
| α = 0.001 | 1.03 | 1.01 | 1.02 | 1.03 | 0.95 | 1.06 | 0.99 | 1.02 | 0.83 | 1.13 | 0.94 | 0.99 |
| α = 0.01 | 1.08 | 1.06 | 1.05 | 1.08 | 0.93 | 1.11 | 0.98 | 1.02 | 0.79 | 1.17 | 0.93 | 0.97 |
| α = 0.05 | 1.13 | 1.13 | 1.08 | 1.12 | 0.95 | 1.19 | 0.99 | 1.03 | 0.83 | 1.23 | 0.95 | 0.99 |
| α = 0.1 | 1.16 | 1.18 | 1.09 | 1.13 | 0.99 | 1.23 | 1.01 | 1.06 | 0.87 | 1.27 | 0.97 | 1.02 |
| α = 0.16 | 1.19 | 1.23 | 1.11 | 1.15 | 1.01 | 1.28 | 1.04 | 1.08 | 0.91 | 1.31 | 1.00 | 1.04 |
| α = 0.32 | 1.25 | 1.36 | 1.15 | 1.19 | 1.09 | 1.38 | 1.09 | 1.13 | 0.99 | 1.41 | 1.05 | 1.09 |
| GUM | 1.34 | 1.51 | 1.20 | 1.23 | 1.18 | 1.50 | 1.13 | 1.17 | 1.08 | 1.52 | 1.10 | 1.14 |

MSFEary (level): ψ(1) = 1.15, ψ(2) = 1.31, ψ(4) = 1.43.

Ratio: Average over five break types in relevant regressors

| | inf | avg | arx | rwx | inf | avg | arx | rwx | inf | avg | arx | rwx |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| α = 0 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |
| α = 0.001 | 0.90 | 1.00 | 1.00 | 1.01 | 0.58 | 1.02 | 0.99 | 1.00 | 0.37 | 1.05 | 0.97 | 0.98 |
| α = 0.01 | 0.74 | 1.01 | 1.01 | 1.02 | 0.42 | 1.04 | 0.99 | 0.99 | 0.28 | 1.06 | 0.97 | 0.97 |
| α = 0.05 | 0.57 | 1.03 | 1.01 | 1.02 | 0.37 | 1.06 | 0.99 | 0.99 | 0.28 | 1.08 | 0.97 | 0.98 |
| α = 0.1 | 0.52 | 1.05 | 1.02 | 1.03 | 0.37 | 1.07 | 0.99 | 1.00 | 0.29 | 1.09 | 0.98 | 0.98 |
| α = 0.16 | 0.50 | 1.06 | 1.02 | 1.03 | 0.37 | 1.08 | 1.00 | 1.01 | 0.30 | 1.10 | 0.99 | 0.99 |
| α = 0.32 | 0.48 | 1.10 | 1.03 | 1.04 | 0.38 | 1.11 | 1.01 | 1.02 | 0.32 | 1.13 | 1.00 | 1.01 |
| GUM | 0.49 | 1.13 | 1.05 | 1.05 | 0.41 | 1.14 | 1.03 | 1.03 | 0.35 | 1.16 | 1.01 | 1.02 |

MSFEary (level): ψ(1) = 18.58, ψ(2) = 18.80, ψ(4) = 18.98.
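Tables 8 through 12 report each forecasting device's MSFE relative to the MSFEary benchmark, so entries below one favour the device. A minimal sketch of that calculation (the forecast and outcome values here are hypothetical, and `msfe` is an assumed helper name, not the authors' code):

```python
def msfe(forecasts, outcomes):
    """Mean squared forecast error over a set of forecasts."""
    errors = [y - f for f, y in zip(forecasts, outcomes)]
    return sum(e ** 2 for e in errors) / len(errors)

# Hypothetical 1-step forecasts from a selected model and a benchmark:
y_true = [1.0, 2.0, 1.5, 0.5]
f_sel = [0.8, 2.1, 1.4, 0.9]
f_bench = [0.5, 2.5, 1.0, 1.0]
ratio = msfe(f_sel, y_true) / msfe(f_bench, y_true)  # < 1 favours selection
```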
Table 9. Break in mean and slope (b). MSFE for different locations of the break. Columns per forecast (T+1|T, T+2|T+1, T+3|T+2): inf, avg, arx, rwx.

| Selection | ψ | Break in | inf | avg | arx | rwx | inf | avg | arx | rwx | inf | avg | arx | rwx |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| α = 0 (ary) | ψ(1) | Relevant | 54.42 | | | | 50.41 | | | | 6.75 | | | |
| α = 0.1 | ψ(1) | Relevant | 16.44 | 54.49 | 54.50 | 54.60 | 10.67 | 60.24 | 18.33 | 11.24 | 3.51 | 33.30 | 3.03 | 3.48 |
| GUM | ψ(1) | Relevant | 11.43 | 54.78 | 54.63 | 54.77 | 10.15 | 60.60 | 13.56 | 9.76 | 2.89 | 41.42 | 2.43 | 4.15 |
| α = 0 (ary) | ψ(1) | All | 54.42 | | | | 50.41 | | | | 6.75 | | | |
| α = 0.1 | ψ(1) | All | 18.32 | 54.49 | 54.50 | 54.60 | 11.19 | 61.32 | 18.71 | 11.70 | 3.32 | 33.28 | 2.89 | 3.61 |
| GUM | ψ(1) | All | 16.42 | 54.78 | 54.63 | 54.77 | 14.12 | 64.35 | 17.41 | 13.64 | 3.05 | 42.12 | 2.55 | 4.21 |
| α = 0 (ary) | ψ(1) | Irrel. | 1.15 | | | | 1.19 | | | | 1.18 | | | |
| α = 0.1 | ψ(1) | Irrel. | 3.19 | 1.36 | 1.26 | 1.31 | 2.86 | 2.59 | 2.55 | 2.80 | 1.82 | 2.04 | 1.80 | 1.89 |
| GUM | ψ(1) | Irrel. | 6.71 | 1.75 | 1.39 | 1.42 | 5.74 | 6.16 | 5.38 | 5.52 | 2.18 | 3.39 | 2.02 | 2.00 |
| α = 0 (ary) | ψ(2) | Relevant | 54.71 | | | | 43.20 | | | | 6.02 | | | |
| α = 0.1 | ψ(2) | Relevant | 7.90 | 54.86 | 54.82 | 54.98 | 4.84 | 61.25 | 12.40 | 5.43 | 2.64 | 37.40 | 2.51 | 3.87 |
| GUM | ψ(2) | Relevant | 7.60 | 55.05 | 54.94 | 55.17 | 6.73 | 58.20 | 10.67 | 6.58 | 2.68 | 40.60 | 2.47 | 4.31 |
| α = 0 (ary) | ψ(2) | All | 54.71 | | | | 43.20 | | | | 6.02 | | | |
| α = 0.1 | ψ(2) | All | 11.05 | 54.86 | 54.82 | 54.98 | 6.76 | 62.80 | 13.90 | 7.23 | 2.65 | 37.15 | 2.59 | 4.27 |
| GUM | ψ(2) | All | 16.45 | 55.05 | 54.94 | 55.17 | 13.99 | 64.88 | 17.65 | 13.72 | 3.02 | 42.09 | 2.71 | 4.41 |
| α = 0 (ary) | ψ(2) | Irrel. | 1.31 | | | | 1.39 | | | | 1.35 | | | |
| α = 0.1 | ψ(2) | Irrel. | 4.44 | 1.61 | 1.33 | 1.38 | 3.71 | 3.70 | 3.43 | 3.72 | 1.93 | 2.66 | 2.04 | 2.14 |
| GUM | ψ(2) | Irrel. | 10.46 | 1.97 | 1.48 | 1.53 | 8.90 | 9.47 | 8.59 | 8.78 | 2.36 | 4.11 | 2.28 | 2.26 |
| α = 0 (ary) | ψ(4) | Relevant | 54.98 | | | | 39.74 | | | | 5.66 | | | |
| α = 0.1 | ψ(4) | Relevant | 4.38 | 54.98 | 55.03 | 55.31 | 2.64 | 61.84 | 9.54 | 3.19 | 2.00 | 38.64 | 2.21 | 4.54 |
| GUM | ψ(4) | Relevant | 4.56 | 55.27 | 55.21 | 55.51 | 4.20 | 56.23 | 8.50 | 4.27 | 2.42 | 39.53 | 2.45 | 4.47 |
| α = 0 (ary) | ψ(4) | All | 54.98 | | | | 39.74 | | | | 5.66 | | | |
| α = 0.1 | ψ(4) | All | 8.45 | 54.98 | 55.03 | 55.31 | 5.27 | 63.59 | 11.82 | 5.71 | 2.31 | 38.55 | 2.55 | 5.00 |
| GUM | ψ(4) | All | 16.47 | 55.27 | 55.21 | 55.51 | 13.89 | 65.37 | 17.89 | 13.79 | 3.00 | 42.11 | 2.85 | 4.59 |
| α = 0 (ary) | ψ(4) | Irrel. | 1.43 | | | | 1.52 | | | | 1.49 | | | |
| α = 0.1 | ψ(4) | Irrel. | 5.20 | 1.82 | 1.39 | 1.45 | 4.09 | 4.37 | 3.92 | 4.23 | 1.92 | 3.09 | 2.16 | 2.25 |
| GUM | ψ(4) | Irrel. | 13.46 | 2.17 | 1.57 | 1.63 | 11.33 | 12.03 | 11.05 | 11.29 | 2.47 | 4.62 | 2.44 | 2.43 |
Table 10. Ratio of MSFE to that of MSFEary. Selection at α = 0.1 for arx, rwx, rdx, and cax. Columns per forecast (T+2|T+1, T+3|T+2, T+4|T+3): arx, rwx, rdx, cax, rwy, cay.

No break:

| ψ | arx | rwx | rdx | cax | rwy | cay | arx | rwx | rdx | cax | rwy | cay | arx | rwx | rdx | cax | rwy | cay |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ψ(1) | 1.10 | 1.15 | 1.31 | 1.15 | 1.21 | 1.29 | 1.11 | 1.16 | 1.31 | 1.16 | 1.21 | 1.29 | 1.10 | 1.14 | 1.30 | 1.15 | 1.21 | 1.28 |
| ψ(2) | 1.00 | 1.04 | 1.24 | 1.05 | 1.14 | 1.19 | 1.02 | 1.07 | 1.29 | 1.08 | 1.15 | 1.22 | 1.01 | 1.06 | 1.25 | 1.06 | 1.15 | 1.21 |
| ψ(4) | 0.96 | 1.00 | 1.23 | 1.01 | 1.12 | 1.16 | 0.97 | 1.02 | 1.26 | 1.03 | 1.12 | 1.17 | 0.96 | 1.00 | 1.23 | 1.00 | 1.12 | 1.17 |

Break in mean and slope (b) of irrelevant regressors:

| ψ | arx | rwx | rdx | cax | rwy | cay | arx | rwx | rdx | cax | rwy | cay | arx | rwx | rdx | cax | rwy | cay |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ψ(1) | 2.14 | 2.35 | 3.30 | 2.33 | 1.21 | 1.29 | 1.53 | 1.61 | 1.75 | 1.64 | 1.21 | 1.29 | 1.20 | 1.25 | 1.35 | 1.25 | 1.21 | 1.28 |
| ψ(2) | 2.47 | 2.68 | 3.80 | 2.66 | 1.14 | 1.19 | 1.51 | 1.59 | 1.78 | 1.62 | 1.15 | 1.22 | 1.13 | 1.17 | 1.31 | 1.17 | 1.15 | 1.21 |
| ψ(4) | 2.58 | 2.79 | 3.90 | 2.76 | 1.12 | 1.16 | 1.45 | 1.51 | 1.73 | 1.53 | 1.12 | 1.17 | 1.07 | 1.11 | 1.29 | 1.11 | 1.12 | 1.17 |

Break in mean of all regressors:

| ψ | arx | rwx | rdx | cax | rwy | cay | arx | rwx | rdx | cax | rwy | cay | arx | rwx | rdx | cax | rwy | cay |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ψ(1) | 0.62 | 0.50 | 0.34 | 0.50 | 0.63 | 0.57 | 0.48 | 0.47 | 0.75 | 0.48 | 0.28 | 0.26 | 0.72 | 0.69 | 0.82 | 0.69 | 0.67 | 0.67 |
| ψ(2) | 0.57 | 0.42 | 0.25 | 0.42 | 0.69 | 0.62 | 0.50 | 0.58 | 1.19 | 0.60 | 0.37 | 0.34 | 0.79 | 0.84 | 0.92 | 0.85 | 0.85 | 0.85 |
| ψ(4) | 0.54 | 0.37 | 0.22 | 0.37 | 0.72 | 0.65 | 0.51 | 0.69 | 1.61 | 0.72 | 0.43 | 0.40 | 0.81 | 0.96 | 0.98 | 0.96 | 0.94 | 0.94 |

Break in slope (a) of all regressors:

| ψ | arx | rwx | rdx | cax | rwy | cay | arx | rwx | rdx | cax | rwy | cay | arx | rwx | rdx | cax | rwy | cay |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ψ(1) | 0.69 | 0.59 | 0.43 | 0.58 | 0.69 | 0.58 | 0.57 | 0.57 | 0.85 | 0.57 | 0.37 | 0.36 | 0.77 | 0.75 | 0.87 | 0.75 | 0.71 | 0.76 |
| ψ(2) | 0.64 | 0.51 | 0.34 | 0.50 | 0.73 | 0.63 | 0.58 | 0.66 | 1.29 | 0.68 | 0.48 | 0.46 | 0.82 | 0.87 | 0.98 | 0.86 | 0.87 | 0.92 |
| ψ(4) | 0.61 | 0.46 | 0.30 | 0.46 | 0.76 | 0.65 | 0.60 | 0.77 | 1.69 | 0.80 | 0.55 | 0.53 | 0.83 | 0.95 | 1.03 | 0.94 | 0.95 | 1.00 |

Break in slope (b) of all regressors:

| ψ | arx | rwx | rdx | cax | rwy | cay | arx | rwx | rdx | cax | rwy | cay | arx | rwx | rdx | cax | rwy | cay |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ψ(1) | 0.41 | 0.28 | 0.42 | 0.29 | 0.38 | 0.33 | 0.42 | 0.41 | 0.49 | 0.42 | 0.21 | 0.21 | 0.85 | 0.86 | 0.99 | 0.84 | 1.16 | 1.03 |
| ψ(2) | 0.36 | 0.21 | 0.59 | 0.22 | 0.44 | 0.38 | 0.43 | 0.54 | 0.70 | 0.57 | 0.28 | 0.28 | 0.87 | 1.03 | 1.04 | 0.98 | 1.29 | 1.17 |
| ψ(4) | 0.32 | 0.19 | 0.78 | 0.19 | 0.49 | 0.41 | 0.45 | 0.69 | 0.91 | 0.74 | 0.33 | 0.34 | 0.87 | 1.18 | 1.05 | 1.11 | 1.35 | 1.24 |

Break in mean and slope (a) of all regressors:

| ψ | arx | rwx | rdx | cax | rwy | cay | arx | rwx | rdx | cax | rwy | cay | arx | rwx | rdx | cax | rwy | cay |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ψ(1) | 0.83 | 0.78 | 0.75 | 0.78 | 0.86 | 0.87 | 0.88 | 0.91 | 1.11 | 0.92 | 0.79 | 0.82 | 0.99 | 1.01 | 1.14 | 1.01 | 1.00 | 1.05 |
| ψ(2) | 0.76 | 0.69 | 0.67 | 0.69 | 0.86 | 0.87 | 0.87 | 0.94 | 1.31 | 0.95 | 0.86 | 0.88 | 0.97 | 1.01 | 1.17 | 1.01 | 1.06 | 1.10 |
| ψ(4) | 0.73 | 0.65 | 0.63 | 0.64 | 0.87 | 0.87 | 0.85 | 0.95 | 1.42 | 0.96 | 0.88 | 0.91 | 0.93 | 1.00 | 1.18 | 1.00 | 1.08 | 1.10 |

Break in mean and slope (b) of all regressors:

| ψ | arx | rwx | rdx | cax | rwy | cay | arx | rwx | rdx | cax | rwy | cay | arx | rwx | rdx | cax | rwy | cay |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ψ(1) | 0.37 | 0.23 | 0.39 | 0.25 | 0.35 | 0.32 | 0.43 | 0.53 | 0.67 | 0.60 | 0.21 | 0.22 | 0.86 | 1.09 | 1.07 | 1.03 | 1.64 | 1.44 |
| ψ(2) | 0.32 | 0.17 | 0.55 | 0.18 | 0.42 | 0.37 | 0.43 | 0.71 | 0.94 | 0.82 | 0.26 | 0.27 | 0.83 | 1.25 | 1.04 | 1.17 | 1.66 | 1.51 |
| ψ(4) | 0.30 | 0.14 | 0.71 | 0.16 | 0.46 | 0.40 | 0.45 | 0.88 | 1.19 | 1.04 | 0.30 | 0.31 | 0.82 | 1.41 | 1.02 | 1.30 | 1.67 | 1.55 |

Average over all breaks in all regressors:

| ψ | arx | rwx | rdx | cax | rwy | cay | arx | rwx | rdx | cax | rwy | cay | arx | rwx | rdx | cax | rwy | cay |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ψ(1) | 0.58 | 0.48 | 0.47 | 0.48 | 0.58 | 0.54 | 0.56 | 0.58 | 0.77 | 0.60 | 0.37 | 0.37 | 0.84 | 0.88 | 0.98 | 0.87 | 1.04 | 0.99 |
| ψ(2) | 0.53 | 0.40 | 0.48 | 0.40 | 0.63 | 0.58 | 0.56 | 0.69 | 1.08 | 0.72 | 0.45 | 0.45 | 0.86 | 1.00 | 1.03 | 0.97 | 1.15 | 1.11 |
| ψ(4) | 0.50 | 0.36 | 0.53 | 0.36 | 0.66 | 0.60 | 0.57 | 0.80 | 1.37 | 0.85 | 0.50 | 0.50 | 0.85 | 1.10 | 1.05 | 1.06 | 1.20 | 1.17 |
Table 11. Ratio of MSFE to that of MSFEary. Average over noncentralities. Columns per forecast (T+2|T+1, T+3|T+2, T+4|T+3): inf, arx, rwx, rdx, cax.

No break in y: no break and break in irrelevant variables:

| | inf | arx | rwx | rdx | cax | inf | arx | rwx | rdx | cax | inf | arx | rwx | rdx | cax |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| α = 0.01 | 1.13 | 1.15 | 1.22 | 1.55 | 1.22 | 0.99 | 1.07 | 1.13 | 1.26 | 1.13 | 0.93 | 1.01 | 1.04 | 1.16 | 1.04 |
| α = 0.05 | 1.52 | 1.47 | 1.59 | 2.14 | 1.57 | 1.14 | 1.21 | 1.27 | 1.45 | 1.28 | 0.99 | 1.05 | 1.10 | 1.25 | 1.10 |
| α = 0.1 | 1.78 | 1.71 | 1.84 | 2.46 | 1.81 | 1.21 | 1.26 | 1.33 | 1.52 | 1.33 | 1.03 | 1.08 | 1.12 | 1.29 | 1.12 |
| GUM | 3.69 | 3.55 | 3.64 | 4.17 | 3.61 | 1.46 | 1.41 | 1.42 | 1.60 | 1.41 | 1.21 | 1.17 | 1.19 | 1.41 | 1.19 |
| DGP | 0.82 | 0.92 | 0.96 | 1.12 | 0.96 | 0.83 | 0.93 | 0.97 | 1.14 | 0.97 | 0.82 | 0.93 | 0.96 | 1.13 | 0.96 |

Break in y: break in all variables:

| | inf | arx | rwx | rdx | cax | inf | arx | rwx | rdx | cax | inf | arx | rwx | rdx | cax |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| α = 0.01 | 0.40 | 0.63 | 0.52 | 0.52 | 0.52 | 0.59 | 0.62 | 0.68 | 0.94 | 0.70 | 0.82 | 0.87 | 0.99 | 1.00 | 0.97 |
| α = 0.05 | 0.31 | 0.56 | 0.43 | 0.48 | 0.44 | 0.54 | 0.56 | 0.67 | 1.02 | 0.70 | 0.82 | 0.85 | 0.99 | 1.01 | 0.96 |
| α = 0.1 | 0.30 | 0.54 | 0.41 | 0.49 | 0.42 | 0.55 | 0.56 | 0.69 | 1.07 | 0.72 | 0.83 | 0.85 | 0.99 | 1.02 | 0.97 |
| GUM | 0.38 | 0.54 | 0.44 | 0.64 | 0.43 | 0.67 | 0.65 | 0.79 | 1.26 | 0.83 | 0.96 | 0.91 | 1.00 | 1.09 | 0.98 |
| DGP | 0.17 | 0.44 | 0.29 | 0.41 | 0.30 | 0.36 | 0.40 | 0.61 | 1.11 | 0.66 | 0.64 | 0.72 | 1.02 | 0.86 | 0.97 |
Table 12. Ratio of MSFE to that of MSFEary. Selection at α = 0.1. Average over noncentralities and horizons T+2, …, T+4. Lowest two in bold (excluding inf).

| | inf | avg | arx | rwx | rdx | cax | rwy | cay | apool | cpool | ary |
|---|---|---|---|---|---|---|---|---|---|---|---|
| No break | 0.99 | 1.22 | 1.03 | 1.07 | 1.27 | 1.07 | 1.16 | 1.22 | **0.96** | **1.00** | 1.00 |
| Break irrelevant | 1.69 | 1.99 | 1.68 | 1.78 | 2.25 | 1.77 | 1.16 | 1.22 | **1.13** | 1.20 | **1.00** |
| All breaks | **0.56** | 2.93 | 0.65 | 0.70 | 0.86 | 0.70 | 0.73 | 0.70 | 0.73 | **0.58** | 1.00 |
| Sum | 3.24 | 6.14 | 3.36 | 3.55 | 4.38 | 3.54 | 3.05 | 3.14 | **2.82** | **2.78** | 3.00 |
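Table 12 also reports pooled devices (apool, cpool), whose precise definitions are given earlier in the paper. As a generic illustration only (not necessarily the paper's exact apool or cpool construction; the `pool` function name is an assumption), a forecast pool combines the forecasts of several devices, here with equal weights:

```python
def pool(forecasts):
    """Equal-weight combination of several devices' forecasts,
    element-wise over the forecast horizon."""
    n = len(forecasts)
    return [sum(step) / n for step in zip(*forecasts)]

# Combine two hypothetical devices over a two-period horizon:
combined = pool([[1.0, 2.0], [3.0, 0.0]])
```

Such combinations can hedge against breaks: no single device dominates across all break scenarios, which is consistent with the pooled columns achieving the lowest sums in Table 12.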
Castle, J.L.; Doornik, J.A.; Hendry, D.F. Selecting a Model for Forecasting. Econometrics 2021, 9, 26. https://0-doi-org.brum.beds.ac.uk/10.3390/econometrics9030026