Article

The Optimal Confidence Intervals for Agricultural Products’ Price Forecasts Based on Hierarchical Historical Errors

1 School of Mathematics and Quantitative Economics, Shandong University of Finance and Economics, Jinan 250014, China
2 School of MBA, Shandong University of Finance and Economics, Jinan 250014, China
3 College of Management and Economics, Tianjin University, Tianjin 300072, China
* Authors to whom correspondence should be addressed.
Submission received: 14 October 2016 / Revised: 28 November 2016 / Accepted: 29 November 2016 / Published: 8 December 2016

Abstract:
Given levels of confidence and system complexity, interval forecasts and entropy analysis can deliver more information than point forecasts. In this paper, we take receivers' demands as our starting point, use the trade-off model between accuracy and informativeness as the criterion to construct the optimal confidence interval, derive the theoretical formula of the optimal confidence interval and propose a practical and efficient algorithm based on entropy theory and complexity theory. In order to improve the estimation precision of the error distribution, the point prediction errors are stratified according to prices and the complexity of the system, the corresponding prediction error samples are obtained from the price stratification, and the error distributions are estimated by the kernel function method and the stability of the system. In a stable and orderly environment for price forecasting, we obtain point prediction error samples by the weighted local region and RBF (radial basis function) neural network methods, forecast the intervals of the soybean meal and non-GMO (genetically modified organism) soybean continuous futures closing prices and implement unconditional coverage, independence and conditional coverage tests for the simulation results. The empirical results are compared across various interval evaluation indicators, different levels of noise, several target confidence levels and different point prediction methods. The analysis shows that the optimal interval construction method is better than the equal probability method and the shortest interval method and has good anti-noise ability as the system entropy decreases; the hierarchical error estimation method can attain higher accuracy and better interval estimates than the non-hierarchical method in a stable system.

1. Introduction

According to price forecasts, producers and managers adjust current production and operations, and governments formulate proper macro-economic policies to stabilize prices. A great number of studies show that price forecasts of agricultural products are meaningful [1,2]. Nowadays, the price forecasts of agricultural products are mainly point forecasts. However, interval forecasts can deliver more information than point forecasts: point forecasting is widely used and provides a single future value of the variable, but it cannot provide any information about the uncertainty of that value. Uncertainty information is particularly important for decision makers with different risk preferences. The result of an interval forecast is an interval with a confidence level, which is more convenient for decision makers in formulating risk management strategies. Due to the importance of interval forecasts, in the monthly World Agricultural Supply and Demand Estimates (WASDE) of the United States Department of Agriculture (USDA), price forecasts are published in the form of intervals.
The more stable and simple the price system of agricultural products is, the more favorable it is for price forecasting. Entropy is a measure of the complexity of a system: greater entropy means that the system is more complex, and the price forecast is then prone to distortion. Su et al. [3] verified that there is chaos in the price system by calculating the Kolmogorov entropy; their price system has positive but small entropy, so the system exhibits weak chaos. How do we predict prices in such a system? When the entropy is not large, can we obtain better forecasting results? In this paper, we illustrate that this price system can be forecasted, and we verify that interval price forecasting is feasible in a system with positive Kolmogorov entropy. In recent years, many scholars have carried out research on entropy in the economic field. Bekiros et al. [4] studied the dynamic causality between stock and commodity futures markets in the United States by using complex network theory; they utilized an extended matrix and a time-varying network topology to reveal the correlation and the temporal dimension of the entropy relationship. Selvakumar [5] proposed an enhanced cross-entropy (ECE) method to solve the dynamic economic dispatch (DED) problem with valve-point effects. Fan et al. [6] used multi-scale entropy analysis to investigate the complexity of the carbon market and the average return trend of daily price returns. Billio et al. [7] analyzed the temporal evolution of systemic risk in Europe by using different entropy measures and constructed a new banking crisis early warning indicator. Ma and Si [8] studied a continuous duopoly game model with a two-stage delay and investigated the influence of delay parameters on the stability of the system.
In agricultural economics, Teigen and Bell [9] established the confidence interval of the corn price by the approximate variance of the forecast. Prescott and Stengos [10] applied the bootstrap method to construct the confidence interval of the dynamic metering model and forecasted the pork supply. Bessler and Kling [11] affirmed the role of probability prediction and defined what is a “good” prediction. Sanders and Manfredo [12], Isengildina-Massa et al. [13] compared four methods, including the histogram method, the kernel density method, the parameter distribution estimation method and the quantile regression method. They evaluated the confidence intervals generated by these methods. The results showed that the kernel function method and the quantile regression method can get the best interval forecasts.
There are two main methods for interval forecasting. One is the prediction of the interval type data [14]. The interval type data are composed of the minimum and maximum sequences. This method can be used in a case with comprehensive information. The disadvantage is that it cannot provide the confidence level of the interval. The other is constructing the confidence interval by the estimation of the errors of point forecasts. The advantage is that one can obtain confidence levels. In this paper, we will construct a forecast interval with some target confidence level based on the entropy theory and system complexity theory.
In practice, the prediction interval of the same target confidence is not unique, so which is the best interval? Decision makers often choose those results that meet their own needs, so we can directly build the “optimal” forecast intervals under their standard. In this paper, we will construct the model of the optimal forecast interval and transform this problem to an optimization problem. Since it is difficult to solve the analytic solution for a nonlinear optimization problem, we establish an algorithm to solve the numerical solution.
Nowadays, the optimality criterion for an interval mainly lies in the accumulation of the accuracies of point forecasts. The M index defined by Bratu [15] is an average of the point forecast errors in the prediction interval. Demetrescu [16] used the cumulative accuracy of the point forecasts, which can yield longer intervals and high reliability. However, for economic data, this kind of forecast loses significance. A forecast interval delivers not only accuracy, but also information. How does one evaluate an interval from both accuracy and informativeness, which seem to be contradictory aspects? Yaniv and Foster [17] provided a formal model of the binary loss function, i.e., the trade-off model between accuracy and informativeness. They compared their model with many common models, and the results showed that their model is more suitable for reflecting individual preferences. In this paper, the optimal forecast interval model is established by using this trade-off model.
To obtain the confidence level, it is critical to correctly estimate the error distribution of the point forecasts. In general, the error distribution is assumed to be a normal distribution, a χ² distribution, etc. However, this approach is subjective, and it is possible that the error distribution does not obey the assumed distribution. Gardner [18] found that prediction intervals generated by the Chebyshev inequality are more accurate than those generated under the normality assumption, which was opposed by Bowerman and Koehler [19], Makridakis and Winkler [20] and Allen and Morzuch [21], who thought the intervals generated by the Chebyshev inequality are too wide. Stoto [22] and Cohen [23] found that the forecast errors of population growth asymptotically obey the normal distribution. Shlyakhter et al. [24] recommended the exponential distribution for population and energy data. Williams and Goodman [25] first used the empirical method to estimate the distribution of historical errors of point forecasts, without restrictions on the point forecast method. Chatfield [26] pointed out that the empirical method is a good choice when the error distribution is uncertain. Taylor and Bunn [27] first applied quantile regression to interval estimation. Hansen [28] used semi-parametric estimation and the quantile method to construct asymptotic forecast intervals; this method has strict requirements on the time series. Demetrescu [16] pointed out that quantile regressions are not so useful, since one does not know in advance which quantile is needed, and an iterative procedure would add obvious complexity. Jorgensen and Sjoberg [29] used the nonparametric histogram method to find the points of the software development workload distribution. Yan et al. [30] argued that the errors of point forecasts have a great influence on the accuracy of uncertainty analysis. Ma et al. [31] investigated the existence and the local stable region of the Nash equilibrium point. Ma and Xie [32] studied a financial and economic system under the change of three parameters. Zhang and Ma [33] and Pu and Ma [34] investigated a class of nonlinear system modeling problems, with good results. Martínez-Ballesteros et al. [35] forecasted by means of association rules. Ren and Ma deepened and completed a kind of macroeconomic IS-LM model with fractional-order calculus theory, which well reflects the memory characteristics of economic variables.
The novelty of this paper lies in providing two methods: one is the stratified estimation of historical errors, and the other is the optimal confidence interval model.
To improve the estimation accuracy, we stratify the historical error data according to the price and estimate the error distribution of each layer. In the estimation of the historical error distribution, all errors are often treated as obeying the same distribution; considering the heteroscedasticity of prediction errors at different prices, this is too coarse. The frequencies of different prices in history are different. Some extreme prices in history appeared only a few times, accompanying sharp fluctuations; the forecast errors of these prices are generally large, and the sample of such errors is small. On the contrary, some prices appear very frequently with small fluctuations; the forecast errors of these prices are generally small, and the sample of such errors is large. Therefore, we stratify the historical error data according to different prices and estimate the error distribution of each layer.
In this paper, we derive the model of the optimal confidence interval from the accuracy-informativeness trade-off model, provide a practical and efficient algorithm for the optimal confidence interval model based on the complexity of the forecasting system and estimate the error distributions according to the stratified prices. The kernel function method is used to estimate the error distribution. For different target confidence levels, simulation predictions are carried out for the continuous futures daily closing prices of soybean meal and non-GMO soybean. Unconditional coverage, independence and conditional coverage tests are used to evaluate the interval forecasts. The empirical analysis is divided into two subsections. In Section 5.1, we apply the equal probability method, the shortest interval method and the optimal interval method to construct the prediction intervals, compare their loss functions and test whether the intervals generated by the optimal interval method are optimal. We add noise with various SNRs (signal-to-noise ratios) to the historical error data and the predicted prices and test the robustness of the algorithm. In Section 5.2, the prediction errors are divided into one to 20 layers according to the prices, the error distributions are estimated in the different layers, and the confidence intervals are constructed and evaluated to determine whether the error stratification method can improve the prediction accuracy. The evaluation indices include the loss function, interval endpoints, interval midpoint, interval length, coverage, unconditional coverage test statistic, independence test statistic and conditional coverage test statistic. The error data, including point forecast errors generated by the weighted local region method and the RBF neural network method, are used to investigate whether the hierarchical error estimation method can improve the prediction accuracy for different point forecasts.

2. The Model and Algorithm of the Optimal Confidence Intervals

Denote by $Y_t$ the process to be forecast, and assume it has a continuous and strictly increasing cumulative distribution function. Suppose $f_t = f_{Y_t \mid \Psi_{t-1}}$ is the conditional density of $Y_t$ given its past $\Psi_{t-1} = \{Y_{t-1}, Y_{t-2}, \ldots\}$, and $F_t = F_{Y_t \mid \Psi_{t-1}}$ is the conditional cumulative distribution function of $Y_t$. Clearly, the confidence interval $[L_t, U_t]$ ($L_t < U_t$) of $Y_t$ with confidence level $\alpha_0$ ($0 < \alpha_0 < 1$) satisfies:
$$P(L_t \le Y_t \le U_t \mid \Psi_{t-1}) = \int_{L_t}^{U_t} f_t(y)\,dy = F_t(U_t) - F_t(L_t) = \alpha_0.$$
Yaniv and Foster [17] established the accuracy-informativeness trade-off model:
$$L = f\!\left[\left|\frac{y - m}{g}\right|,\ \ln(g)\right],$$
where the first argument evaluates accuracy and the second evaluates informativeness; $y$ is the true value, $m$ is the midpoint of the prediction interval and $g$ denotes the width of the interval. The accuracy-informativeness trade-off model is in fact a kind of loss function: Yaniv and Foster [17] argued that, for a good interval, the lower the $L$ score, the better. They gave a concrete expression of $L$:
$$L = \left|\frac{y - m}{g}\right| + \gamma \ln(g),$$
where the coefficient $\gamma \ge 0$ is a trade-off parameter that reflects the weights placed on the accuracy and informativeness of the estimates. Yaniv and Foster [17] suggested that $\gamma$ takes values from 0.6 to 1.2, close to one.
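As a brief worked illustration (our own numbers, not from the paper), take $\gamma = 1$, an interval midpoint $m = 2700$ and a realized price $y = 2780$, and compare three interval widths $g$:
$$
\begin{aligned}
g = 50:&\quad |2780 - 2700|/50 + \ln 50 \approx 1.60 + 3.91 = 5.51,\\
g = 100:&\quad |2780 - 2700|/100 + \ln 100 \approx 0.80 + 4.61 = 5.41,\\
g = 200:&\quad |2780 - 2700|/200 + \ln 200 \approx 0.40 + 5.30 = 5.70.
\end{aligned}
$$
The intermediate width scores best: an interval that is too narrow is penalized through the normalized miss $|y - m|/g$, while one that is too wide is penalized through $\ln(g)$.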
For a given confidence level $\alpha_0$, we take the minimum $L$ as the objective in solving for the optimal confidence interval, which can be transformed into finding the solution $(L_t^*, U_t^*)$ of the following nonlinear optimization problem under the condition $\Psi_{t-1}$:
$$(L_t^*, U_t^*) := \arg\min_{L_t, U_t \in D_t} E(L_t \mid L_t \le Y_t \le U_t).$$
Denote by $D_t$ the set of all possible values of $Y_t$; the constraint conditions are:
$$\text{(i)}\ F_t(U_t) - F_t(L_t) = \int_{L_t}^{U_t} f_t(y)\,dy = \alpha_0; \qquad \text{(ii)}\ L_t < U_t.$$
Then, we can obtain the following simplified objective function.
Proposition 1.
$$E(L_t \mid L_t \le Y_t \le U_t) = \int_{L_t}^{U_t} \left[\left|\frac{y - \frac{L_t + U_t}{2}}{U_t - L_t}\right| + \gamma \ln(U_t - L_t)\right] f_t(y)\,dy. \qquad (2)$$
Proof. See Appendix A. □
Thus, we only need to find the solution $(L_t^*, U_t^*)$ that minimizes $E(L_t \mid L_t \le Y_t \le U_t)$ in (2) and satisfies $F_t(U_t) - F_t(L_t) = \alpha_0$ with $L_t < U_t$. It is difficult to obtain analytic solutions; however, for a strictly increasing and numerically available $F_t$, we can establish an algorithm to obtain numerical solutions. The steps are as follows.
Step 1. Take all $L_t, U_t \in D_t$ with $L_t < U_t$, and find all pairs $(L_t, U_t)$ satisfying $F_t(U_t) - F_t(L_t) = \alpha_0$, i.e., $(L_t^1, U_t^1), (L_t^2, U_t^2), \ldots$. Since the value $F_t(U_t) - F_t(L_t)$ increases as $U_t$ increases for a fixed $L_t$, the value of $U_t$ can be solved uniquely. Therefore, it is not necessary to take all of the values of $D_t$.
Step 2. For each pair $(L_t^1, U_t^1), (L_t^2, U_t^2), \ldots$ obtained in the first step, calculate the midpoint $M_t^i = \frac{L_t^i + U_t^i}{2}$.
Step 3. For every $(L_t^i, M_t^i, U_t^i)$, compute:
$$L_t^i = \left(\frac{1}{2} + \gamma \ln(U_t^i - L_t^i)\right)\alpha_0 + \frac{1}{U_t^i - L_t^i}\left(\int_{L_t^i}^{\frac{L_t^i + U_t^i}{2}} F_t(y)\,dy - \int_{\frac{L_t^i + U_t^i}{2}}^{U_t^i} F_t(y)\,dy\right).$$
Step 4. Sort all of the $L_t^1, L_t^2, \ldots$; find the smallest $L_t^*$; and record the corresponding $(L_t^*, M_t^*, U_t^*)$.
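The following is a minimal numerical sketch of Steps 1–4 (our illustration, not the authors' code), assuming the conditional CDF $F_t$ has already been evaluated on a grid of candidate values of $Y_t$; the function name and grid resolution are ours.

```python
import numpy as np

def optimal_interval(grid, F, alpha0, gamma=1.0):
    """Numerical search for the optimal confidence interval (Steps 1-4).

    grid   : sorted 1-D array of candidate values of Y_t (the set D_t)
    F      : conditional CDF of Y_t evaluated on `grid` (non-decreasing)
    alpha0 : target confidence level, e.g. 0.90
    gamma  : accuracy/informativeness trade-off parameter
    """
    def integral_of_F(a, b, n=200):
        # trapezoidal approximation of the integral of F over [a, b]
        ys = np.linspace(a, b, n)
        Fs = np.interp(ys, grid, F)
        return (Fs[:-1] + Fs[1:]).sum() * (ys[1] - ys[0]) / 2.0

    best = None
    for i, L in enumerate(grid):
        target = F[i] + alpha0
        if target > F[-1]:                     # no U in D_t reaches the target coverage
            break
        U = float(np.interp(target, F, grid))  # Step 1: F(U) - F(L) = alpha0
        M = 0.5 * (L + U)                      # Step 2: interval midpoint
        # Step 3: loss value from Proposition 1
        loss = ((0.5 + gamma * np.log(U - L)) * alpha0
                + (integral_of_F(L, M) - integral_of_F(M, U)) / (U - L))
        # Step 4: keep the pair with the smallest loss
        if best is None or loss < best[0]:
            best = (loss, L, M, U)
    return best                                # (loss, lower limit, midpoint, upper limit)
```

Because $U_t$ is recovered by inverting $F_t$ at $F_t(L_t) + \alpha_0$, each candidate lower limit yields at most one admissible pair, which keeps the search linear in the grid size.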

3. Estimate the Conditional Probability Distribution of Error

Denote by $\hat{Y}_t$ the forecast of $Y_t$ and by $e_t = Y_t - \hat{Y}_t$ the error, i.e.,
$$Y_t = \hat{Y}_t + e_t. \qquad (3)$$
If we take $\hat{Y}_t$ as the optimal point forecast [27], we can estimate $e_t$ and obtain the distribution of $Y_t$. In this paper, we apply the empirical method, which means that we treat all obtained point forecast errors as samples of the same probability distribution; the probability distribution is estimated by the kernel function method detailed below. However, it is too coarse to treat all obtained errors as obeying one distribution. For a single forecasting value, if we could collect all corresponding errors, those errors could reasonably be considered to obey one distribution; in general, however, the error sample size for a single forecasting value is very small. In order to collect as many samples as possible, we can pool the errors of one forecasting-value interval. Therefore, how to choose reasonable forecasting-value intervals is very important.
We stratify the prediction error samples evenly according to the forecasting values. First, we divide the $N$ historical forecasting values $\{\hat{Y}_k, k = t_1, t_2, \ldots, t_N\}$ into $M$ layers, i.e., $M$ intervals, and record the upper and lower limits of every layer. The size of every layer is about $N/M$; the sizes need not be exactly equal, and a 10% difference is admissible. Second, we put the errors of every layer's forecasts into the error sample set of that layer. For example, when $N = 1000$ and $M = 8$, the division of the forecasts is shown in Figure 1.
When $N$ is fixed, the bigger $M$ is, the smaller $N/M$ is; the smaller $M$ is, the greater $N/M$ is. When $N/M$ is small, the sample size is small, and the estimation accuracy declines. When $N/M$ is big, the sample size is big, which means the width between two adjacent red lines in Figure 1 will be larger; in this case, the forecasting values within the same layer differ considerably, and treating the errors in this layer as obeying the same probability distribution is not reasonable. In short, $M$ cannot be taken too big or too small. When $N$ is fixed, there is an optimal $M$ that yields the optimal estimated error distribution, which will be verified in Section 5.
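A sketch of this even stratification, under the assumption that the layers are formed by sorting the historical forecasts and splitting them into nearly equal groups (function and field names are illustrative):

```python
import numpy as np

def stratify_errors(forecasts, errors, M):
    """Split N historical (forecast, error) pairs into M nearly equal layers
    ordered by forecast value; each layer keeps its price bounds and errors."""
    forecasts = np.asarray(forecasts, dtype=float)
    errors = np.asarray(errors, dtype=float)
    order = np.argsort(forecasts)
    layers = []
    for idx in np.array_split(order, M):        # layer sizes differ by at most one
        f_layer = forecasts[idx]
        layers.append({
            "lower": f_layer.min(),             # lower price limit of the layer
            "upper": f_layer.max(),             # upper price limit of the layer
            "errors": errors[idx],              # error sample of this layer
        })
    return layers
```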
For fixed $N$ and $M$ ($N > M$), we apply the kernel function method to estimate the error distribution. Assume that the size of each layer is $K_M \approx N/M$ and that the error sequence of the $i$-th ($i = 1, 2, \ldots, M$) layer is $\{e_{ik}, k = 1, 2, \ldots, K_M\}$. Then, the density estimate of the sequence at point $x$ is $\hat{f}_i(x) = \frac{1}{K_M}\sum_{k=1}^{K_M} \phi(x - e_{ik}; h_i)$, where $\phi$ is the normal kernel function:
$$\phi(u; h) = \frac{1}{\sqrt{2\pi}\,h}\exp\left\{-\frac{u^2}{2h^2}\right\}$$
and $h_i$ is the bandwidth or smoothing parameter. In this paper, we apply the optimal bandwidth [30] $h_i = \left(\frac{4}{3K_M}\right)^{1/5}\tilde{\sigma}_i$, where $\tilde{\sigma}_i = \mathrm{median}\{|e_{ik} - \tilde{\mu}_i|\}/0.6745$ and $\tilde{\mu}_i$ is the sample median.
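A sketch of the kernel density estimate for one layer with the robust bandwidth stated above (illustrative code, not the authors' implementation):

```python
import numpy as np

def layer_density(errors):
    """Gaussian-kernel density estimate for one layer of errors, using the
    robust bandwidth h = (4 / (3 K))**(1/5) * sigma_tilde described above."""
    e = np.asarray(errors, dtype=float)
    K = e.size
    mu_tilde = np.median(e)                                  # sample median
    sigma_tilde = np.median(np.abs(e - mu_tilde)) / 0.6745   # robust scale
    h = (4.0 / (3.0 * K)) ** 0.2 * sigma_tilde               # bandwidth

    def pdf(x):
        x = np.atleast_1d(np.asarray(x, dtype=float))[:, None]
        kernels = np.exp(-(x - e) ** 2 / (2.0 * h ** 2)) / (np.sqrt(2.0 * np.pi) * h)
        return kernels.mean(axis=1)

    return pdf
```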

4. Evaluation of the Prediction Interval

The accuracy of forecast intervals is traditionally examined in terms of coverage. However, the coverage reflects the true confidence level only when the number of test values is large enough. Bowman [36] describes the use of smoothing techniques in statistics, including both density estimation and nonparametric regression. Christoffersen [37] developed approaches to test coverage and independence in terms of hypothesis tests. Since his methods do not make any assumption about the true distribution, they can be applied to all empirical confidence intervals. They include unconditional coverage, independence and conditional coverage tests.
Suppose that $\alpha_0$ is the confidence level and the test sample sequence is $\{e_t, t = 1, \ldots, N_2\}$. First, denote by $I_t$ the indicator:
$$I_t = \begin{cases} 1, & \text{if } e_t \in [L_{t|t-1}(\alpha_0), U_{t|t-1}(\alpha_0)] \\ 0, & \text{if } e_t \notin [L_{t|t-1}(\alpha_0), U_{t|t-1}(\alpha_0)] \end{cases}$$
where $[L_{t|t-1}(\alpha_0), U_{t|t-1}(\alpha_0)]$ is an out-of-sample prediction interval, namely the prediction interval of $e_t$ constructed from the errors available at time $t-1$; $L_{t|t-1}(\alpha_0)$ and $U_{t|t-1}(\alpha_0)$ are the lower and upper limits, respectively. Christoffersen [37] proved that $\sum_{t=1}^{N_2} I_t$ obeys the binomial distribution $B(N_2, \alpha_0)$. When the capacity of the test sample is finite, Christoffersen [37] constructed a standard likelihood ratio test with the null hypothesis $H_0: E(I_t \mid \Omega_{t-1}) = \alpha_0$ and the alternative hypothesis $H_1: E(I_t \mid \Omega_{t-1}) \ne \alpha_0$. The purpose is to examine whether, conditional on $\Omega_{t-1}$, the mean of $I_t$ equals $\alpha_0$ significantly. If $H_0$ is accepted, then the coverage of the test sample equals the target confidence level. Christoffersen [37] established the following test statistic:
$$LR_{uc}(\alpha) = -2\ln\frac{L(\alpha; I_1, I_2, \ldots, I_{N_2})}{L(\hat{p}; I_1, I_2, \ldots, I_{N_2})}.$$
When the null hypothesis holds, $LR_{uc}(\alpha) \overset{asy}{\sim} \chi^2(1)$, where $\chi^2(1)$ denotes the chi-squared distribution with one degree of freedom, $L(\alpha; I_1, I_2, \ldots, I_{N_2}) = (1-\alpha)^{n_0}\alpha^{n_1}$, $\hat{p} = \frac{n_1}{n_0 + n_1}$ is the maximum likelihood estimate of $\alpha_0$, and $n_0$ and $n_1$ denote the numbers of times that $\{I_t\}$ "hits" zero and one, respectively.
Christoffersen [37] argued that the unconditional test is insufficient when dynamics are present in the higher-order moments. In order to test independence, he introduced a binary first-order Markov chain with transition probability matrix:
$$\Pi_1 = \begin{bmatrix} 1 - \pi_{01} & \pi_{01} \\ 1 - \pi_{11} & \pi_{11} \end{bmatrix}, \qquad (4)$$
where $\pi_{ij} = P(I_t = j \mid I_{t-1} = i)$. If independence holds true, then $\pi_{ij} = \pi_j$, $i, j = 0, 1$, where $\pi_j = P(I_t = j)$. Therefore, under the null hypothesis of independence, (4) turns into:
$$\Pi_2 = \begin{bmatrix} 1 - \pi_1 & \pi_1 \\ 1 - \pi_1 & \pi_1 \end{bmatrix}.$$
We can estimate $\pi_{ij}$ and $\pi_j$ from the test sample frequencies, where $n_{ij}$ denotes the number of observations with value $i$ followed by value $j$, i.e.,
$$\hat{\Pi}_1 = \begin{bmatrix} \frac{n_{00}}{n_{00} + n_{01}} & \frac{n_{01}}{n_{00} + n_{01}} \\ \frac{n_{10}}{n_{10} + n_{11}} & \frac{n_{11}}{n_{10} + n_{11}} \end{bmatrix}$$
and $\hat{\pi}_1 = \frac{n_{01} + n_{11}}{n_{00} + n_{10} + n_{01} + n_{11}}$. The test statistic under the null hypothesis is:
$$LR_{ind} = -2\ln\frac{L(\hat{\Pi}_2; I_1, I_2, \ldots, I_{N_2})}{L(\hat{\Pi}_1; I_1, I_2, \ldots, I_{N_2})} \overset{asy}{\sim} \chi^2(1),$$
where $L(\hat{\Pi}_1) = (1 - \hat{\pi}_{01})^{n_{00}}\hat{\pi}_{01}^{n_{01}}(1 - \hat{\pi}_{11})^{n_{10}}\hat{\pi}_{11}^{n_{11}}$ and $L(\hat{\Pi}_2) = (1 - \hat{\pi}_1)^{n_{00} + n_{10}}\hat{\pi}_1^{n_{01} + n_{11}}$.
The tests for unconditional coverage and independence are now combined to form a complete test of conditional coverage:
$$LR_{cc} = -2\ln\frac{L(\alpha; I_1, I_2, \ldots, I_{N_2})}{L(\hat{\Pi}_1; I_1, I_2, \ldots, I_{N_2})} \overset{asy}{\sim} \chi^2(2).$$
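The three likelihood-ratio statistics can be computed directly from the hit sequence; the sketch below follows the formulas above (illustrative code; it assumes the estimated probabilities lie strictly between 0 and 1 so that the log-likelihoods are finite):

```python
import numpy as np
from scipy.stats import chi2

def christoffersen_tests(hits, alpha0):
    """Likelihood-ratio tests of unconditional coverage, independence and
    conditional coverage for a 0/1 hit sequence I_t and target coverage alpha0."""
    I = [int(x) for x in hits]
    n1 = sum(I)
    n0 = len(I) - n1
    p_hat = n1 / (n0 + n1)

    def ll(p, k0, k1):                     # log of (1 - p)**k0 * p**k1
        return k0 * np.log(1.0 - p) + k1 * np.log(p)

    # unconditional coverage: LR_uc = -2 ln[L(alpha0) / L(p_hat)]
    lr_uc = -2.0 * (ll(alpha0, n0, n1) - ll(p_hat, n0, n1))

    # transition counts of the binary first-order Markov chain
    pairs = list(zip(I[:-1], I[1:]))
    n00, n01 = pairs.count((0, 0)), pairs.count((0, 1))
    n10, n11 = pairs.count((1, 0)), pairs.count((1, 1))
    pi01 = n01 / (n00 + n01)
    pi11 = n11 / (n10 + n11)
    pi1 = (n01 + n11) / (n00 + n01 + n10 + n11)

    ll_markov = ll(pi01, n00, n01) + ll(pi11, n10, n11)   # log L(Pi_hat_1)
    ll_iid = ll(pi1, n00 + n10, n01 + n11)                # log L(Pi_hat_2)
    lr_ind = -2.0 * (ll_iid - ll_markov)                  # independence
    lr_cc = -2.0 * (ll(alpha0, n0, n1) - ll_markov)       # conditional coverage

    p_values = (chi2.sf(lr_uc, 1), chi2.sf(lr_ind, 1), chi2.sf(lr_cc, 2))
    return (lr_uc, lr_ind, lr_cc), p_values
```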

5. Empirical Analysis

In this paper, the continuous futures daily closing prices of soybean meal and non-GMO soybean from 4 January 2005 to 25 September 2015 are used for the interval forecasts. All of the data are from the Dalian Commodity Exchange in China, and the data capacity is 2612. We use a rolling window approach for the point forecasts with a fixed bandwidth of 1558; thus, a total of 1053 forecast values and 1053 errors are obtained. The first 1000 of the 1053 are used as the training set to construct the prediction interval, and the last 53 are used as the test set. The reasons for this design are as follows: (1) if the training set is too small, the accuracy of the error distribution estimation is reduced; (2) if the training set is too big, the amount of data used in each one-step prediction is reduced, which also reduces the forecast accuracy; (3) in general, different amounts of data used to predict the price induce different forecasts, and the prediction error is very likely to be negatively related to the amount of data, so we apply the fixed bandwidth method in order to avoid such systematic deviations.
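A sketch of the fixed-bandwidth rolling window used to generate the one-step point forecasts and their errors (the forecaster argument stands for any point forecast method; names are illustrative):

```python
import numpy as np

def rolling_one_step_forecasts(prices, forecaster, window=1558):
    """Fixed-bandwidth rolling window: every one-step forecast uses exactly
    `window` past observations; `forecaster` is any point forecast function."""
    prices = np.asarray(prices, dtype=float)
    forecasts, errors = [], []
    for t in range(window, len(prices)):
        y_hat = forecaster(prices[t - window:t])   # one-step point forecast
        forecasts.append(y_hat)
        errors.append(prices[t] - y_hat)           # e_t = Y_t - Y_hat_t
    return np.array(forecasts), np.array(errors)

# In the paper, the first 1000 errors form the training set for the error
# distribution and the last 53 form the test set.
```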
We detected chaos by the method of [32]. The results show that the daily closing price data form a chaotic time series. Although the Kolmogorov entropy is positive, it is not large, which means the price system can be described and price forecasts are feasible. Therefore, we use the classical weighted local region and RBF neural network methods for the one-step point forecasts. The classical weighted local region method looks for the trajectory points closest to the central point as correlation points and fits the reconstructed function; the RBF neural network method uses radial basis functions to forecast. The prediction mechanisms of the two methods are different: the former is representative of local region methods, and the latter is a typical three-layer feedforward neural network. Therefore, the two point forecast methods used in this paper are representative.

5.1. Evaluation and Robustness Analysis of the Method of Confidence Intervals

5.1.1. Comparison of Different Methods Constructing the Confidence Interval

Table 1 and Table 2 show the mean values of the closing price interval forecasts over the last 53 days. In the following tables, "lower limit", "upper limit", "interval midpoint", $L$ and "interval width" denote the mean values of the 53 lower limits, 53 upper limits, 53 interval midpoints, 53 loss function values and 53 interval widths, respectively. Table 1 shows the results for soybean meal, and Table 2 shows those for non-GMO soybean. OI (optimal interval) denotes the intervals constructed by our method; EI (equal probability interval) denotes the intervals generated by the equal probability method, i.e., each tail carries half of the excluded probability $1 - \alpha_0$; and SI (shortest interval) denotes the intervals constructed by the shortest interval method, i.e., the shortest interval among all intervals with the target confidence. From Table 1 and Table 2, whether the confidence level is 80%, 90% or 95%, and whether the point forecast method is the weighted local region method or the RBF neural network method, the forecast intervals constructed by our method have the smallest loss function value. The loss function of OI is 20% lower than that of EI and 19% lower than that of SI.
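For reference, the two baselines can be written against the same gridded conditional CDF used earlier (illustrative sketch; EI places half of the excluded probability in each tail, SI scans the same candidate pairs as the optimal method but minimizes width instead of the loss):

```python
import numpy as np

def equal_probability_interval(grid, F, alpha0):
    """EI: place probability (1 - alpha0) / 2 in each tail of the conditional CDF."""
    lower = np.interp((1.0 - alpha0) / 2.0, F, grid)
    upper = np.interp((1.0 + alpha0) / 2.0, F, grid)
    return lower, upper

def shortest_interval(grid, F, alpha0):
    """SI: among all intervals [L, U] with F(U) - F(L) = alpha0, pick the shortest."""
    best = None
    for i, L in enumerate(grid):
        target = F[i] + alpha0
        if target > F[-1]:
            break
        U = float(np.interp(target, F, grid))
        if best is None or (U - L) < (best[1] - best[0]):
            best = (L, U)
    return best
```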

5.1.2. The Robustness Analysis of the Optimal Confidence Interval Algorithm

In Equation (3), the sequences $\hat{Y}_t$ and $e_t$ may contain noise. In this section, Gaussian white noise with different SNRs is added to the historical error data and the forecast prices; the prediction intervals and loss functions are re-calculated; the absolute relative error percentages are obtained with the no-noise results as benchmarks; and the robustness of the algorithm is analyzed.
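A sketch of the noise injection used in the robustness check, assuming SNR is the ratio of signal power to noise power in linear (not dB) units, which the paper does not state explicitly:

```python
import numpy as np

def add_gaussian_noise(x, snr, rng=None):
    """Add zero-mean Gaussian white noise to a series at a given SNR, where
    SNR is taken as the ratio of signal power to noise power (linear scale)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    noise_power = np.mean(x ** 2) / snr          # target noise variance
    return x + rng.normal(0.0, np.sqrt(noise_power), size=x.shape)
```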
Table 3 and Table 4 list the noise test results for the soybean meal price and the non-GMO soybean price, whose point forecast methods are the weighted local region method and the RBF neural network method, respectively. The symbol H means that noise is added to the historical error data; P means that noise is added to the forecast $\hat{Y}_t$; and H,P means that noise is added to both the historical error data and the forecast. Theoretically, SNR < 10 is strong noise and SNR > 1000 is weak noise, where SNR denotes the signal to noise ratio. From the tables, we can see that, when SNR = 1, the forecast results deviate considerably, while for SNR = 100 and 1000 the deviations are not over 0.06%, which can be ignored. Taken together, as the noise intensity increases, its effect on the results also increases; the noise added to the historical error data has relatively little effect on the results, and especially for SNR = 10, 100 and 1000 the effects are below 3%. In contrast, the noise added to $\hat{Y}_t$ has a relatively large effect, especially when SNR = 1 and 10. Therefore, the algorithm to calculate the optimal prediction interval in this paper is robust to noise with SNR ≥ 100.

5.2. Optimal Hierarchical Analysis

In this section, we verify by empirical analysis that: (1) the forecasting values and their corresponding errors are correlated, so the error distribution should be estimated according to different forecast values; (2) stratified error estimation is much better than non-stratified error estimation, since the former yields better interval forecasts; (3) for stratified error estimation, there is an optimal number of layers that attains the best interval forecasts.

5.2.1. Comparison of the Error Distribution under Different Hierarchies

First, a correlation analysis is performed. We take the first 1000 historical errors obtained by the weighted local region point forecasts and show the scatter plot of the relative error percentages in Figure 2. To determine whether the prediction prices and errors are correlated, the Pearson correlation coefficient is calculated as 0.1813, which indicates that the prices and the errors are significantly correlated at the 0.01 significance level (bilateral).
Second, using the method in Section 3, we divide the 1000 historical price forecasts into $M$ ($M = 1, 2, \ldots, 20$) layers, where $M = 1$ means no hierarchy. Denote by $A_i$ ($i = 1, 2, \ldots, M$) the $i$-th layer. We put the errors corresponding to the prices into each layer and obtain the error sample of each layer. Third, with the error sample of layer $A_i$, we estimate the probability density of layer $A_i$ by the kernel function method; we thereby obtain $M$ probability density functions. Fourth, we take the last 53 price data as the test set. Assume the number of layers is $M$. For each price in the test set, we choose the layer to which it belongs, namely layer $A$; the chosen layer contains at least 50 errors. We pick out the error probability density function of layer $A$ and construct the optimal interval by the method of Section 2. For convenience, we take $\gamma = 1$. Thus, for every number of layers $M$, we collect 53 optimal intervals. Finally, we evaluate all of the intervals according to Section 4.
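Putting the pieces together, one pass of this procedure for a fixed number of layers M can be sketched as follows, reusing the illustrative helpers stratify_errors, layer_density and optimal_interval defined earlier (the grid padding and the fallback layer choice are our own assumptions):

```python
import numpy as np

def hierarchical_intervals(train_fc, train_err, test_fc, M, alpha0, gamma=1.0):
    """One pass of the Section 5.2.1 procedure for a fixed number of layers M."""
    layers = stratify_errors(train_fc, train_err, M)
    intervals = []
    for y_hat in test_fc:
        # choose the layer whose price range contains the forecast value,
        # falling back to the nearest layer if it falls outside all ranges
        layer = next((l for l in layers if l["lower"] <= y_hat <= l["upper"]),
                     min(layers, key=lambda l: min(abs(y_hat - l["lower"]),
                                                   abs(y_hat - l["upper"]))))
        pdf = layer_density(layer["errors"])
        # numerical CDF of the layer's error distribution on a padded grid
        e = layer["errors"]
        pad = 3.0 * e.std()
        e_grid = np.linspace(e.min() - pad, e.max() + pad, 800)
        cdf = np.cumsum(pdf(e_grid))
        cdf /= cdf[-1]
        # optimal interval for Y_t = Y_hat_t + e_t at the target confidence
        _, lower, _, upper = optimal_interval(y_hat + e_grid, cdf, alpha0, gamma)
        intervals.append((lower, upper))
    return intervals
```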
Table 5, Table B1 and Table B2 (see Appendix B) list the results and the evaluation of the optimal prediction intervals with target confidence levels of 90%, 95% and 80%, respectively. In the following tables, $n_1$ denotes the number of times $I_t$ hits one, and the coverage equals $n_1/n$. According to all of the values of $LR_{uc}$, $LR_{ind}$ and $LR_{cc}$ (see Section 4), the null hypothesis that the coverage of the test samples equals the target confidence is accepted at the 0.05 significance (bilateral) level, and the confidence intervals satisfy independence. For every element in the test set, we compute the value of the loss function, and $L$ denotes the mean of the 53 loss function values.
From the above tables, the intervals constructed by our method satisfy the target confidence level and independence. Since a smaller loss function value indicates a better interval, the results in these tables show that the intervals with stratified error estimation are much better than those without stratification. Therefore, it is efficient to construct prediction intervals by stratifying errors. From the tables, as the number of layers increases, the value of $L$ first decreases and then increases. The reason is that the greater the number of layers, the fewer errors in each layer and the lower the accuracy of the estimated error density function. Table 5, Table B1 and Table B2 show that the optimal number of layers for the 90% and 95% intervals is 13, the optimal number of layers for the 80% interval is 14, and the loss function values with stratified error distribution estimation are 18%, 17% and 52% lower than those without stratification, respectively. Figure 3, Figure 4 and Figure 5 plot the prediction intervals for $M = 1, 2, \ldots, 20$. The red lines indicate the upper and lower limits of the intervals with the optimal number of layers, i.e., $M = 13$ or 14, the black lines indicate the intervals with $M = 1$ and the green lines indicate the intervals with $M$ taking the remaining values.

5.2.2. The Effect of Point Forecast Methods on the Error Hierarchy

We use the error sample of the RBF neural network point forecasts to repeat the numerical experiment of Section 5.2.1. The results are shown in Table B3, Table B4 and Table B5 (see Appendix B). In terms of $L$, the intervals with stratified error estimation are better than those without stratification; for point forecasts obtained by different methods, the error hierarchy is thus helpful for obtaining better interval forecasts. However, although all of the intervals pass the unconditional coverage test, the intervals at the 90% confidence level and some intervals at the 80% and 95% confidence levels fail the independence and conditional coverage tests, which shows that the intervals obtained from the RBF neural network errors have poor independence.

6. Conclusions

In this paper, we deduce the theoretical model of the optimal confidence interval, establish the algorithm to solve for the optimal interval, stratify the historical error data according to the predicted prices, estimate the error distribution using the nonparametric method, construct the optimal confidence interval of the future price, use the point forecast errors obtained by the weighted local region and RBF neural network methods as samples and simulate the optimal interval forecasts of the prices of soybean meal and non-GMO soybean futures. Numerical experiments show that: (1) the forecast intervals constructed by our method have the smallest loss function value; the loss function is 20% lower than that of the intervals constructed by the equal probability method and 19% lower than that of the intervals constructed by the shortest interval method; (2) the algorithm to calculate the optimal prediction interval in this paper is robust to noise with SNR ≥ 100; (3) for error data obtained by different point forecast methods and for different target confidence levels, the intervals with stratified error estimation are much better than those without stratification, and the loss function value can be reduced by up to 52%. The interval forecast method provided in this paper can deliver results more in line with individual requirements, improve the accuracy of agricultural price forecasting and provide a reference for forecasting other economic data.

Acknowledgments

This work is partially supported by the National Natural Science Foundation of China (Grant Nos. 11601270, 11526120 and 61273230), the National Social Science Fund Project (15AGL014) and the Key Research and Development Projects in Shandong Province (2016GSF120013).

Author Contributions

Yi Wang, Xin Su and Shubing Guo contributed to this study. Yi Wang generated the idea, constructed the models, collected the data and wrote this manuscript. Xin Su and Shubing Guo reviewed and edited the manuscript. All authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A.

Proof of Proposition 1.
In order to simplify the objective function, from the constraint condition (ii), we have:
$$
\begin{aligned}
E(L_t \mid L_t \le Y_t \le U_t) &= \int_{L_t}^{U_t}\left[\left|\frac{y - \frac{L_t + U_t}{2}}{U_t - L_t}\right| + \gamma\ln(U_t - L_t)\right]f_t(y)\,dy \\
&= \frac{1}{U_t - L_t}\int_{L_t}^{U_t}\left|y - \frac{L_t + U_t}{2}\right| f_t(y)\,dy + \gamma\ln(U_t - L_t)\int_{L_t}^{U_t} f_t(y)\,dy \\
&= \frac{1}{U_t - L_t}\int_{L_t}^{\frac{L_t + U_t}{2}}\left(\frac{L_t + U_t}{2} - y\right)f_t(y)\,dy + \frac{1}{U_t - L_t}\int_{\frac{L_t + U_t}{2}}^{U_t}\left(y - \frac{L_t + U_t}{2}\right)f_t(y)\,dy \\
&\quad + \gamma\ln(U_t - L_t)\int_{L_t}^{U_t} f_t(y)\,dy \\
&= -\frac{1}{U_t - L_t}\int_{L_t}^{\frac{L_t + U_t}{2}} y f_t(y)\,dy + \frac{L_t + U_t}{2(U_t - L_t)}\int_{L_t}^{\frac{L_t + U_t}{2}} f_t(y)\,dy - \frac{L_t + U_t}{2(U_t - L_t)}\int_{\frac{L_t + U_t}{2}}^{U_t} f_t(y)\,dy \\
&\quad + \frac{1}{U_t - L_t}\int_{\frac{L_t + U_t}{2}}^{U_t} y f_t(y)\,dy + \gamma\ln(U_t - L_t)\int_{L_t}^{U_t} f_t(y)\,dy.
\end{aligned}
$$
Since $\int y f_t(y)\,dy = y F_t(y) - \int F_t(y)\,dy$, from (2) and the constraint condition (i),
$$
\begin{aligned}
E(L_t \mid L_t \le Y_t \le U_t) &= \frac{L_t - U_t}{2(U_t - L_t)}F_t(L_t) + \frac{U_t - L_t}{2(U_t - L_t)}F_t(U_t) \\
&\quad + \frac{1}{U_t - L_t}\left(\int_{L_t}^{\frac{L_t + U_t}{2}} F_t(y)\,dy - \int_{\frac{L_t + U_t}{2}}^{U_t} F_t(y)\,dy\right) + \gamma\ln(U_t - L_t)\left(F_t(U_t) - F_t(L_t)\right) \\
&= \left(\frac{1}{2} + \gamma\ln(U_t - L_t)\right)\alpha_0 + \frac{1}{U_t - L_t}\left(\int_{L_t}^{\frac{L_t + U_t}{2}} F_t(y)\,dy - \int_{\frac{L_t + U_t}{2}}^{U_t} F_t(y)\,dy\right).
\end{aligned}
$$
Therefore, Equation (2) holds true. □

Appendix B.

Table B1. The weighted local region: the optimal interval, confidence level of 95%.

Layer Number | Lower Limit | Upper Limit | Interval Width | Interval Midpoint | L | Coverage | n1 | LRuc | LRind | LRcc
1 | 2593 | 2826 | 232.86 | 2710 | 5.10 | 0.98 | 52 | 1.40 | 0.04 | 1.48
2 | 2605 | 2823 | 217.99 | 2714 | 4.81 | 0.98 | 52 | 1.40 | 0.04 | 1.48
3 | 2598 | 2828 | 230.02 | 2713 | 4.85 | 0.98 | 52 | 1.40 | 0.04 | 1.48
4 | 2597 | 2826 | 228.81 | 2711 | 4.60 | 0.98 | 52 | 1.40 | 0.04 | 1.48
5 | 2601 | 2825 | 224.02 | 2713 | 4.58 | 0.98 | 52 | 1.40 | 0.04 | 1.48
6 | 2597 | 2825 | 227.99 | 2711 | 4.61 | 0.98 | 50 | 1.28 | 0.04 | 1.36
7 | 2583 | 2834 | 250.66 | 2708 | 4.68 | 0.98 | 48 | 1.15 | 0.04 | 1.24
8 | 2575 | 2855 | 279.75 | 2715 | 4.73 | 0.98 | 47 | 1.09 | 0.04 | 1.18
9 | 2582 | 2849 | 266.91 | 2715 | 4.56 | 0.98 | 46 | 1.03 | 0.04 | 1.12
10 | 2593 | 2841 | 247.81 | 2717 | 4.36 | 0.98 | 41 | 0.75 | 0.05 | 0.84
11 | 2585 | 2854 | 269.17 | 2720 | 4.41 | 0.98 | 39 | 0.64 | 0.05 | 0.74
12 | 2597 | 2844 | 247.11 | 2720 | 4.25 | 0.98 | 39 | 0.64 | 0.05 | 0.74
13 | 2593 | 2855 | 262.37 | 2724 | 4.22 | 0.97 | 37 | 0.54 | 0.06 | 0.65
14 | 2590 | 2865 | 275.27 | 2727 | 4.26 | 0.97 | 34 | 0.40 | 0.06 | 0.52
15 | 2589 | 2868 | 278.80 | 2729 | 4.31 | 0.97 | 33 | 0.35 | 0.06 | 0.48
16 | 2585 | 2873 | 287.37 | 2729 | 4.33 | 0.97 | 33 | 0.35 | 0.06 | 0.48
17 | 2585 | 2875 | 290.02 | 2730 | 4.38 | 0.97 | 31 | 0.27 | 0.07 | 0.40
18 | 2582 | 2878 | 295.39 | 2730 | 4.40 | 0.96 | 26 | 0.10 | 0.08 | 0.26
19 | 2583 | 2882 | 298.74 | 2732 | 4.46 | 0.96 | 26 | 0.10 | 0.08 | 0.26
20 | 2583 | 2884 | 301.23 | 2733 | 4.50 | 0.95 | 21 | 0.01 | 0.10 | 0.21
Table B2. The weighted local region: the optimal interval, confidence level of 80%.

Layer Number | Lower Limit | Upper Limit | Interval Width | Interval Midpoint | L | Coverage | n1 | LRuc | LRind | LRcc
1 | 2642 | 2769 | 126.34 | 2705 | 3.30 | 0.85 | 45 | 0.85 | 2.53 | 7.28
2 | 2646 | 2774 | 128.01 | 2710 | 2.88 | 0.83 | 44 | 0.31 | 0.16 | 4.12
3 | 2642 | 2771 | 129.47 | 2706 | 2.89 | 0.85 | 45 | 0.85 | 2.53 | 7.28
4 | 2650 | 2772 | 121.69 | 2711 | 2.45 | 0.81 | 43 | 0.04 | 0.06 | 3.52
5 | 2652 | 2772 | 120.35 | 2712 | 2.43 | 0.81 | 43 | 0.04 | 0.06 | 3.52
6 | 2655 | 2777 | 121.65 | 2716 | 2.47 | 0.82 | 42 | 0.18 | 0.29 | 4.04
7 | 2656 | 2780 | 124.53 | 2718 | 2.49 | 0.84 | 41 | 0.43 | 0.74 | 4.91
8 | 2658 | 2785 | 127.64 | 2721 | 2.45 | 0.79 | 38 | 0.02 | 0.54 | 1.04
9 | 2659 | 2786 | 126.79 | 2722 | 2.20 | 0.81 | 38 | 0.02 | 1.22 | 1.67
10 | 2663 | 2789 | 126.53 | 2726 | 1.94 | 0.76 | 32 | 0.36 | 1.63 | 2.55
11 | 2661 | 2791 | 129.23 | 2726 | 1.94 | 0.80 | 32 | 0.00 | 1.60 | 2.06
12 | 2662 | 2788 | 126.14 | 2725 | 1.73 | 0.80 | 32 | 0.00 | 1.60 | 2.06
13 | 2666 | 2789 | 123.31 | 2727 | 1.62 | 0.76 | 29 | 0.31 | 2.41 | 3.27
14 | 2662 | 2790 | 127.80 | 2726 | 1.59 | 0.77 | 27 | 0.17 | 3.69 | 4.39
15 | 2662 | 2796 | 133.58 | 2729 | 1.67 | 0.85 | 29 | 0.64 | 2.27 | 3.24
16 | 2658 | 2796 | 137.58 | 2727 | 1.65 | 0.85 | 29 | 0.64 | 2.27 | 3.24
17 | 2658 | 2800 | 142.38 | 2729 | 1.71 | 0.88 | 28 | 1.26 | 0.51 | 2.04
18 | 2655 | 2802 | 146.77 | 2728 | 1.73 | 0.85 | 23 | 0.49 | 0.30 | 1.12
19 | 2654 | 2812 | 157.88 | 2733 | 1.79 | 0.89 | 24 | 1.53 | 1.19 | 2.96
20 | 2652 | 2815 | 163.19 | 2733 | 1.86 | 0.91 | 20 | 1.96 | 2.60 | 4.76
Table B3. The RBF network: the optimal interval, confidence level of 90%.

Layer Number | Lower Limit | Upper Limit | Interval Width | Interval Midpoint | L | Coverage | n1 | LRuc | LRind | LRcc
1 | 2469 | 2893 | 424.76 | 2681 | 5.33 | 0.94 | 50 | 1.29 | ** 9.36 | ** 10.77
2 | 2475 | 2883 | 407.52 | 2679 | 5.26 | 0.94 | 50 | 1.29 | ** 9.36 | ** 10.77
3 | 2463 | 2848 | 385.75 | 2655 | 5.13 | 0.92 | 49 | 0.38 | ** 18.45 | ** 24.25
4 | 2469 | 2827 | 357.77 | 2648 | 5.05 | 0.91 | 48 | 0.02 | ** 11.80 | ** 16.73
5 | 2476 | 2836 | 360.57 | 2656 | 5.06 | 0.92 | 49 | 0.38 | ** 18.44 | ** 24.25
6 | 2485 | 2836 | 351.40 | 2660 | 5.03 | 0.92 | 49 | 0.38 | ** 18.44 | ** 24.25
7 | 2485 | 2837 | 351.63 | 2661 | 4.93 | 0.92 | 49 | 0.38 | ** 18.44 | ** 24.25
8 | 2480 | 2848 | 368.28 | 2664 | 4.96 | 0.92 | 49 | 0.38 | ** 18.44 | ** 24.25
9 | 2480 | 2864 | 384.58 | 2672 | 4.99 | 0.92 | 49 | 0.38 | ** 18.44 | ** 24.25
10 | 2481 | 2871 | 389.39 | 2676 | 4.98 | 0.92 | 49 | 0.38 | ** 18.44 | ** 24.25
11 | 2483 | 2873 | 390.17 | 2678 | 4.99 | 0.94 | 50 | 1.29 | ** 9.36 | ** 10.77
12 | 2480 | 2881 | 401.04 | 2680 | 5.01 | 0.94 | 50 | 1.29 | ** 9.36 | ** 10.77
13 | 2487 | 2886 | 399.33 | 2686 | 5.03 | 0.94 | 48 | 1.11 | ** 9.20 | * 10.43
14 | 2493 | 2885 | 392.07 | 2689 | 5.02 | 0.94 | 48 | 1.11 | ** 9.20 | * 10.43
15 | 2495 | 2882 | 387.09 | 2688 | 5.02 | 0.94 | 47 | 1.02 | ** 9.18 | * 10.26
16 | 2498 | 2873 | 374.94 | 2686 | 4.89 | 0.94 | 47 | 1.02 | ** 9.18 | * 10.26
17 | 2498 | 2871 | 373.63 | 2684 | 4.88 | 0.92 | 44 | 0.16 | ** 17.82 | ** 23.19
18 | 2496 | 2879 | 383.92 | 2687 | 4.95 | 0.94 | 44 | 0.77 | ** 8.86 | * 9.77
19 | 2490 | 2879 | 389.07 | 2685 | 4.94 | 0.94 | 44 | 0.77 | ** 8.86 | * 9.77
20 | 2487 | 2884 | 397.40 | 2685 | 4.97 | 0.93 | 43 | 0.70 | ** 8.77 | * 9.61
* Denotes that the null hypothesis is rejected at the 0.05 significance (bilateral) level; ** denotes that the null hypothesis is rejected at the 0.01 significance (bilateral) level.
Table B4. The RBF network: the optimal interval, confidence level of 95%.

Layer Number | Lower Limit | Upper Limit | Interval Width | Interval Midpoint | L | Coverage | n1 | LRuc | LRind | LRcc
1 | 2427 | 2946 | 519.10 | 2686 | 5.46 | 0.98 | 52 | 1.40 | 0.04 | 1.48
2 | 2434 | 2936 | 501.72 | 2685 | 5.37 | 0.98 | 52 | 1.40 | 0.04 | 1.48
3 | 2421 | 2918 | 496.98 | 2669 | 5.20 | 0.98 | 52 | 1.40 | 0.04 | 1.48
4 | 2430 | 2879 | 448.59 | 2654 | 5.09 | 0.94 | 50 | 0.05 | ** 9.36 | * 9.52
5 | 2438 | 2879 | 441.37 | 2658 | 5.09 | 0.94 | 50 | 0.05 | ** 9.36 | * 9.52
6 | 2442 | 2880 | 437.20 | 2661 | 5.07 | 0.94 | 50 | 0.05 | ** 9.36 | * 9.52
7 | 2445 | 2883 | 437.66 | 2664 | 4.91 | 0.94 | 50 | 0.05 | ** 9.36 | * 9.52
8 | 2443 | 2893 | 450.22 | 2668 | 4.93 | 0.94 | 50 | 0.05 | ** 9.36 | * 9.52
9 | 2441 | 2903 | 461.81 | 2672 | 4.96 | 0.94 | 50 | 0.05 | ** 9.36 | * 9.52
10 | 2442 | 2907 | 465.45 | 2674 | 4.95 | 0.94 | 50 | 0.05 | ** 9.36 | * 9.52
11 | 2444 | 2913 | 469.27 | 2678 | 4.96 | 0.96 | 51 | 0.18 | 0.16 | 0.42
12 | 2441 | 2918 | 476.68 | 2679 | 4.98 | 0.98 | 52 | 1.40 | 0.04 | 1.48
13 | 2443 | 2923 | 479.50 | 2683 | 5.01 | 0.98 | 50 | 1.28 | 0.04 | 1.36
14 | 2449 | 2924 | 474.97 | 2686 | 5.01 | 0.98 | 50 | 1.28 | 0.04 | 1.36
15 | 2453 | 2921 | 468.73 | 2687 | 5.01 | 0.98 | 49 | 1.21 | 0.04 | 1.30
16 | 2461 | 2913 | 452.48 | 2687 | 4.82 | 0.96 | 48 | 0.11 | 0.17 | 0.37
17 | 2461 | 2915 | 453.87 | 2688 | 4.81 | 0.96 | 46 | 0.07 | 0.18 | 0.34
18 | 2459 | 2920 | 461.47 | 2690 | 4.90 | 0.98 | 46 | 1.03 | 0.04 | 1.12
19 | 2452 | 2921 | 468.44 | 2686 | 4.87 | 0.98 | 46 | 1.03 | 0.04 | 1.12
20 | 2448 | 2924 | 476.20 | 2686 | 4.92 | 0.98 | 45 | 0.97 | 0.05 | 1.06
* Denotes that the null hypothesis is rejected at the 0.05 significance (bilateral) level; ** denotes that the null hypothesis is rejected at the 0.01 significance (bilateral) level.
Table B5. The RBF network: the optimal interval, confidence level of 80%.

Layer Number | Lower Limit | Upper Limit | Interval Width | Interval Midpoint | L | Coverage | n1 | LRuc | LRind | LRcc
1 | 2536 | 2843 | 307.42 | 2689 | 4.52 | 0.89 | 47 | 2.85 | ** 8.15 | ** 15.52
2 | 2533 | 2835 | 302.28 | 2684 | 4.48 | 0.89 | 47 | 2.85 | ** 8.15 | ** 15.52
3 | 2520 | 2804 | 283.87 | 2662 | 4.36 | 0.89 | 47 | 2.85 | ** 8.15 | ** 15.52
4 | 2524 | 2794 | 270.41 | 2659 | 4.30 | 0.81 | 43 | 0.04 | 1.25 | 4.71
5 | 2525 | 2796 | 271.39 | 2661 | 4.31 | 0.81 | 43 | 0.04 | 1.25 | 4.71
6 | 2530 | 2793 | 262.45 | 2661 | 4.28 | 0.81 | 43 | 0.04 | 1.25 | 4.71
7 | 2533 | 2787 | 253.94 | 2660 | 4.18 | 0.79 | 42 | 0.02 | 2.36 | 5.59
8 | 2533 | 2794 | 260.87 | 2663 | 4.20 | 0.81 | 43 | 0.04 | 1.25 | 4.71
9 | 2535 | 2803 | 267.71 | 2669 | 4.21 | 0.85 | 45 | 0.85 | 3.69 | * 8.44
10 | 2537 | 2806 | 269.56 | 2671 | 4.20 | 0.85 | 45 | 0.85 | 3.69 | * 8.44
11 | 2537 | 2807 | 269.62 | 2672 | 4.20 | 0.85 | 45 | 0.85 | 3.69 | * 8.44
12 | 2534 | 2814 | 279.67 | 2674 | 4.23 | 0.85 | 45 | 0.85 | 3.69 | * 8.44
13 | 2537 | 2818 | 281.31 | 2677 | 4.25 | 0.86 | 44 | 1.37 | * 5.37 | ** 10.85
14 | 2536 | 2815 | 279.25 | 2675 | 4.24 | 0.86 | 44 | 1.37 | * 5.37 | ** 10.85
15 | 2533 | 2813 | 279.25 | 2673 | 4.24 | 0.86 | 43 | 1.23 | * 5.26 | * 10.55
16 | 2531 | 2803 | 271.37 | 2667 | 4.13 | 0.86 | 43 | 1.23 | * 5.26 | * 10.55
17 | 2539 | 2807 | 267.59 | 2673 | 4.12 | 0.85 | 41 | 0.95 | * 5.03 | * 9.96
18 | 2529 | 2812 | 282.53 | 2670 | 4.20 | 0.87 | 41 | 1.71 | * 7.43 | ** 13.41
19 | 2526 | 2810 | 283.88 | 2668 | 4.18 | 0.87 | 41 | 1.71 | * 7.43 | ** 13.41
20 | 2523 | 2813 | 290.14 | 2668 | 4.21 | 0.87 | 40 | 1.54 | * 7.30 | ** 13.07
* Denotes that the null hypothesis is rejected at the 0.05 significance (bilateral) level; ** denotes that the null hypothesis is rejected at the 0.01 significance (bilateral) level.

References

  1. Adam, B.D.; Garcia, P.; Hauser, R.J. The value of information to hedgers in the presence of futures and options. Rev. Agric. Econ. 1996, 18, 437–447. [Google Scholar] [CrossRef]
  2. Byerlee, D.; Anderson, J.R. Risk, Utility and the value of information in farmer decision making. Rev. Mark. Agric. Econ. 1982, 50, 231–246. [Google Scholar]
  3. Su, X.; Wang, Y.; Duan, S.; Ma, J. Detecting chaos from agricultural product price time series. Entropy 2014, 16, 6415–6433. [Google Scholar] [CrossRef]
  4. Bekiros, S.; Nguyen, D.K.; Junior, L.S.; Uddin, G.S. Information diffusion, cluster formation and entropy-based network dynamics in equity and commodity markets. Eur. J. Oper. Res. 2017, 256, 945–961. [Google Scholar] [CrossRef]
  5. Selvakumar, A.I. Enhanced cross-entropy method for dynamic economic dispatch with valve-point effects. Electr. Power Energy Syst. 2011, 33, 783–790. [Google Scholar] [CrossRef]
  6. Fan, X.; Li, S.; Tian, L. Complexity of carbon market from multi-scale entropy analysis. Physica A 2016, 452, 79–85. [Google Scholar]
  7. Billio, M.; Casarin, R.; Costola, M.; Pasqualini, A. An entropy-based early warning indicator for systemic risk. J. Int. Financ. Mark. Inst. Money 2016, 5, 1042–4431. [Google Scholar] [CrossRef]
  8. Ma, J.; Si, F. Complex Dynamics of a Continuous Bertrand Duopoly Game Model with Two-Stage Delay. Entropy 2016, 18, 266. [Google Scholar] [CrossRef]
9. Teigen, L.D.; Bell, T.M. Confidence intervals for corn price and utilization forecasts. Agric. Econ. Res. 1978, 30, 23–29. [Google Scholar]
10. Prescott, D.M.; Stengos, T. Bootstrapping confidence intervals: An application to forecasting the supply of pork. Am. J. Agric. Econ. 1987, 9, 266–273. [Google Scholar] [CrossRef]
  11. Bessler, D.A.; Kling, J.L. The forecast and policy analysis. Am. J. Agric. Econ. 1989, 71, 503–506. [Google Scholar] [CrossRef]
12. Sanders, D.R.; Manfredo, M.R. USDA livestock price forecasts: A comprehensive evaluation. J. Agric. Resour. Econ. 2003, 28, 316–334. [Google Scholar]
13. Isengildina-Massa, O.; Irwin, S.; Good, D.L.; Massa, L. Empirical confidence intervals for USDA commodity price forecasts. Appl. Econ. 2011, 43, 379–3803. [Google Scholar] [CrossRef]
14. Arroyo, J.; Espínola, R.; Maté, C. Different approaches to forecast interval time series: A comparison in finance. Comput. Econ. 2011, 37, 169–191. [Google Scholar] [CrossRef]
  15. Bratu, M. Proposal of new forecast measures: M indicator for global accuracy of forecast intervals. J. Bus. Econ. 2012, 4, 216–235. [Google Scholar]
  16. Demetrescu, M. Optimal forecast intervals under asymmetric loss. J. Forecast. 2007, 26, 227–238. [Google Scholar] [CrossRef]
17. Yaniv, I.; Foster, D.P. Graininess of judgment under uncertainty: An accuracy-informativeness trade-off. J. Exp. Psychol. Gen. 1995, 124, 424–432. [Google Scholar] [CrossRef]
18. Gardner, E.S., Jr. A simple method of computing prediction intervals for time-series forecasts. Manag. Sci. 1988, 34, 541–546. [Google Scholar] [CrossRef]
  19. Bowerman, B.L.; Koehler, A.B.; Pack, D.J. Forecasting time series with increasing seasonal variation. J. Forecast. 1990, 9, 419–436. [Google Scholar] [CrossRef]
20. Makridakis, S.; Winkler, R.L. Sampling distributions of post-sample forecasting errors. Appl. Stat. 1989, 38, 331–342. [Google Scholar] [CrossRef]
21. Allen, P.G.; Morzuch, B.J. Comparing probability forecasts derived from theoretical distributions. Int. J. Forecast. 1995, 11, 147–157. [Google Scholar] [CrossRef]
  22. Stoto, M.A. The accuracy of population projections. J. Am. Stat. Assoc. 1983, 78, 13–20. [Google Scholar] [CrossRef] [PubMed]
23. Cohen, J.E. Population forecasts and confidence intervals for Sweden: A comparison of model-based and empirical approaches. Demography 1986, 23, 105–126. [Google Scholar] [CrossRef] [PubMed]
24. Shlyakhter, A.I.; Kammen, D.M.; Broido, C.L.; Wilson, R. Quantifying the credibility of energy projections from trends in past data. Energy Policy 1994, 22, 119–130. [Google Scholar] [CrossRef]
25. Williams, W.H.; Goodman, M.L. A simple method for the construction of empirical confidence limits for economic forecasts. J. Am. Stat. Assoc. 1971, 66, 752–754. [Google Scholar] [CrossRef]
  26. Chatfield, C. Calculating interval forecasts. J. Bus. Econ. Stat. 1993, 11, 121–135. [Google Scholar]
27. Taylor, J.W.; Bunn, D.W. Investigating improvements in the accuracy of prediction intervals for combinations of forecasts: A simulation study. Int. J. Forecast. 1999, 15, 325–339. [Google Scholar] [CrossRef]
28. Hansen, B.E. Interval forecasts and parameter uncertainty. J. Econom. 2006, 135, 377–398. [Google Scholar] [CrossRef]
29. Jorgensen, M.; Sjoberg, D.I.K. An effort prediction interval approach based on the empirical distribution of previous estimation accuracy. Inf. Softw. Technol. 2003, 45, 123–136. [Google Scholar] [CrossRef]
  30. Yan, J.; Liu, Y.; Han, S.; Wang, Y.; Feng, S. Reviews on uncertainty analysis of wind power forecasting. Renew. Sustain. Energy Rev. 2015, 52, 1322–1330. [Google Scholar] [CrossRef]
  31. Ma, J.; Ren, W.; Zhan, X. Study on the inherent complex features and chaos control of IS-LM fractional-order systems. Entropy 2016, 18, 332. [Google Scholar] [CrossRef]
  32. Ma, J.; Xie, L. The comparison and complex analysis on dual-channel supply chain under different channel power structures and uncertain demand. Nonlinear Dyn. 2016, 83, 1379–1393. [Google Scholar] [CrossRef]
  33. Ma, J.; Zhang, Q.; Gao, Q. Stability of a three-species symbiosis model with delays. Nonlinear Dyn. 2012, 67, 567–572. [Google Scholar] [CrossRef]
  34. Ma, J.; Pu, X. The research on Cournot–Bertrand duopoly model with heterogeneous goods and its complex characteristics. Nonlinear Dyn. 2013, 72, 895–903. [Google Scholar] [CrossRef]
35. Martínez-Ballesteros, M.; Martínez-Álvarez, F.; Troncoso, A.; Riquelme, J.C. An evolutionary algorithm to discover quantitative association rules in multidimensional time series. Soft Comput. 2011, 15, 2065–2084. [Google Scholar] [CrossRef]
  36. Bowman, A.W.; Azzalini, A. Applied Smoothing Techniques for Data Analysis; OUP Oxford: Oxford, UK, 1997. [Google Scholar]
  37. Christoffersen, P.F. Evaluating interval forecasts. Int. Econ. Rev. 1998, 39, 841–862. [Google Scholar] [CrossRef]
Figure 1. Stratifying historical errors.
Figure 2. The historical errors of the weighted local region method.
Figure 3. The prediction interval of the 95% confidence level: the weighted local region.
Figure 4. The prediction interval of the 90% confidence level: the weighted local region.
Figure 5. The prediction interval of the 80% confidence level: the weighted local region.
Table 1. The interval forecasts of soybean meal. OI, optimal interval; EI, equal probability interval; SI, shortest interval. WLR, the weighted local region; RBF, the RBF neural network.

Confidence Level | Method | Lower Limit (WLR) | Upper Limit (WLR) | L (WLR) | Lower Limit (RBF) | Upper Limit (RBF) | L (RBF)
80% | OI | 2642.34 | 2768.68 | 3.30 | 2535.81 | 2843.22 | 4.52
80% | EI | 2646.98 | 2773.55 | 5.14 | 2528.49 | 2835.77 | 5.95
80% | SI | 2644.50 | 2770.60 | 5.14 | 2530.93 | 2838.34 | 5.95
90% | OI | 2617.39 | 2792.50 | 4.62 | 2468.68 | 2893.44 | 5.33
90% | EI | 2625.68 | 2800.38 | 5.38 | 2470.05 | 2895.32 | 6.21
90% | SI | 2622.51 | 2796.43 | 5.38 | 2458.21 | 2881.95 | 6.22
95% | OI | 2593.38 | 2826.23 | 5.10 | 2426.98 | 2946.08 | 5.46
95% | EI | 2427.04 | 2802.54 | 6.18 | 2309.04 | 2899.63 | 6.57
95% | SI | 2597.83 | 2829.53 | 5.61 | 2419.10 | 2938.11 | 6.38
Table 2. The interval forecasts of non-GMO soybean.

Confidence Level | Method | Lower Limit (WLR) | Upper Limit (WLR) | L (WLR) | Lower Limit (RBF) | Upper Limit (RBF) | L (RBF)
80% | OI | 4147.66 | 4285.14 | 3.10 | 4174.33 | 4440.23 | 3.44
80% | EI | 4150.06 | 4287.24 | 5.34 | 4139.06 | 4412.03 | 6.03
80% | SI | 4152.56 | 4289.65 | 5.34 | 4168.17 | 4433.39 | 6.05
90% | OI | 4087.47 | 4353.79 | 5.08 | 4113.83 | 4471.03 | 5.06
90% | EI | 4120.84 | 4316.16 | 5.57 | 4084.13 | 4444.14 | 6.19
90% | SI | 4119.94 | 4315.14 | 5.57 | 4103.69 | 4458.77 | 6.20
95% | OI | 4087.47 | 4353.79 | 5.08 | 4068.18 | 4507.38 | 5.50
95% | EI | 4087.80 | 4354.21 | 5.80 | 4036.21 | 4473.38 | 6.33
95% | SI | 4092.47 | 4359.28 | 5.81 | 4052.54 | 4486.67 | 6.33
Table 3. The weighted local region: soybean meal. H, historical error data; P, forecast prices.

Noise | SNR | 80% Lower Limit | 80% Upper Limit | 80% L | 90% Lower Limit | 90% Upper Limit | 90% L | 95% Lower Limit | 95% Upper Limit | 95% L
H | 1000 | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00%
H | 100 | 0.00% | 0.00% | 0.00% | 0.02% | 0.02% | 0.00% | 0.02% | 0.02% | 0.00%
H | 10 | 0.21% | 0.36% | 2.17% | 0.21% | 0.33% | 2.61% | 0.39% | 0.14% | 1.62%
H | 1 | 1.59% | 1.78% | 9.07% | 1.54% | 1.56% | 12.91% | 2.17% | 2.36% | 9.53%
P | 1000 | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00%
P | 100 | 0.05% | 0.04% | 0.01% | 0.02% | 0.02% | 0.00% | 0.02% | 0.02% | 0.00%
P | 10 | 40.78% | 40.75% | 9.45% | 19.32% | 19.31% | 11.77% | 7.47% | 7.46% | 2.26%
P | 1 | 17.72% | 17.69% | 4.62% | 60.96% | 60.95% | 65.87% | 46.34% | 46.34% | 10.29%
H,P | 10,1000 | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00%
H,P | 100,100 | 0.00% | 0.00% | 0.00% | 0.02% | 0.02% | 0.00% | 0.02% | 0.02% | 0.00%
H,P | 10,10 | 37.02% | 37.61% | 10.01% | 39.77% | 40.52% | 18.57% | 8.20% | 8.67% | 3.47%
H,P | 5,5 | 13.98% | 15.75% | 8.12% | 4.38% | 6.08% | 11.53% | 1.13% | 2.65% | 4.13%
H,P | 1,1 | 85.22% | 91.29% | 24.40% | 87.23% | 92.80% | 39.09% | 11.87% | 16.00% | 9.58%
Table 4. The RBF neural network: non-GMO soybean.

Noise | SNR | 80% Lower Limit | 80% Upper Limit | 80% L | 90% Lower Limit | 90% Upper Limit | 90% L | 95% Lower Limit | 95% Upper Limit | 95% L
H | 1000 | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00%
H | 100 | 0.00% | 0.00% | 0.00% | 0.04% | 0.06% | 0.00% | 0.00% | 0.00% | 0.00%
H | 10 | 0.28% | 0.01% | 2.96% | 0.00% | 0.00% | 0.00% | 0.23% | 0.06% | 2.40%
H | 1 | 1.16% | 1.01% | 18.56% | 0.03% | 0.05% | 0.01% | 1.99% | 1.24% | 7.95%
P | 1000 | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00%
P | 100 | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00%
P | 10 | 16.90% | 15.88% | 0.00% | 0.01% | 0.01% | 0.00% | 0.00% | 0.00% | 0.00%
P | 1 | 51.15% | 48.08% | 0.00% | 0.02% | 0.01% | 0.00% | 44.03% | 39.76% | 0.00%
H,P | 1000,1000 | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00%
H,P | 100,100 | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00%
H,P | 10,10 | 2.86% | 3.19% | 2.41% | 2.60% | 2.93% | 1.56% | 6.10% | 5.60% | 0.93%
H,P | 5,5 | 13.26% | 13.63% | 10.78% | 2.78% | 3.53% | 3.85% | 19.43% | 18.64% | 3.74%
H,P | 1,1 | 64.64% | 62.53% | 14.30% | 22.02% | 23.02% | 10.35% | 0.39% | 2.34% | 8.94%
Table 5. The weighted local region: the optimal interval, confidence level of 90%.

Layer Number | Lower Limit | Upper Limit | Interval Width | Interval Midpoint | L | Coverage | n1 | LRuc | LRind | LRcc
1 | 2617 | 2793 | 175.11 | 2705 | 4.62 | 0.92 | 49 | 0.38 | 0.67 | 1.21
2 | 2625 | 2796 | 170.17 | 2710 | 4.39 | 0.89 | 47 | 0.10 | 1.57 | 1.91
3 | 2622 | 2795 | 173.22 | 2708 | 4.40 | 0.89 | 47 | 0.10 | 1.57 | 1.91
4 | 2630 | 2795 | 165.20 | 2712 | 4.17 | 0.89 | 47 | 0.10 | 1.57 | 1.91
5 | 2632 | 2795 | 163.09 | 2713 | 4.15 | 0.89 | 47 | 0.10 | 1.57 | 1.91
6 | 2632 | 2797 | 164.66 | 2715 | 4.17 | 0.92 | 47 | 0.28 | 0.70 | 1.14
7 | 2629 | 2802 | 172.92 | 2715 | 4.21 | 0.94 | 46 | 0.94 | 0.40 | 1.46
8 | 2627 | 2803 | 176.13 | 2715 | 4.21 | 0.96 | 46 | 2.28 | 0.18 | 2.54
9 | 2631 | 2804 | 173.21 | 2718 | 4.08 | 0.96 | 45 | 2.15 | 0.18 | 2.42
10 | 2633 | 2805 | 171.64 | 2719 | 3.95 | 0.95 | 40 | 1.56 | 0.21 | 1.86
11 | 2630 | 2808 | 178.00 | 2718 | 3.97 | 0.95 | 38 | 1.34 | 0.22 | 1.66
12 | 2632 | 2807 | 174.36 | 2719 | 3.85 | 0.95 | 38 | 1.34 | 0.22 | 1.66
13 | 2634 | 2806 | 171.08 | 2720 | 3.80 | 0.95 | 36 | 1.13 | 0.23 | 1.46
14 | 2630 | 2808 | 177.45 | 2719 | 3.82 | 0.94 | 33 | 0.83 | 0.25 | 1.20
15 | 2629 | 2812 | 183.15 | 2720 | 3.87 | 0.94 | 32 | 0.74 | 0.26 | 1.12
16 | 2625 | 2812 | 187.29 | 2718 | 3.88 | 0.94 | 32 | 0.74 | 0.26 | 1.12
17 | 2622 | 2817 | 194.52 | 2719 | 3.93 | 0.97 | 31 | 2.24 | 0.07 | 2.37
18 | 2618 | 2815 | 196.46 | 2717 | 3.95 | 0.96 | 26 | 1.53 | 0.08 | 1.69
19 | 2615 | 2825 | 210.03 | 2720 | 4.02 | 0.96 | 26 | 1.53 | 0.08 | 1.69
20 | 2611 | 2827 | 216.10 | 2719 | 4.07 | 0.95 | 21 | 0.89 | 0.10 | 1.09
