Article

An Ultra-Short-Term Electrical Load Forecasting Method Based on Temperature-Factor-Weight and LSTM Model

1 Hunan Provincial Key Laboratory of Intelligent Processing of Big Data on Transportation, Changsha University of Science and Technology, Changsha 410114, China
2 School of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha 410114, China
3 Hunan Provincial Key Laboratory of Smart Roadway and Cooperative Vehicle-Infrastructure Systems, Changsha University of Science and Technology, Changsha 410114, China
4 School of Computer Science and Engineering, Hunan University of Science and Technology, Xiangtan 411004, China
* Author to whom correspondence should be addressed.
Submission received: 19 August 2020 / Revised: 11 September 2020 / Accepted: 14 September 2020 / Published: 17 September 2020
(This article belongs to the Special Issue Computational Intelligence and Load Forecasting in Power Systems)

Abstract: Ultra-short-term electrical load forecasting is an important guarantee of the safety and efficiency of energy system operation, and temperature is an important factor affecting changes in electric load. However, the impact of temperature on load forecasting varies greatly from case to case and can sometimes even reduce forecasting accuracy, which complicates the forecasting task considerably. In order to make more scientific use of the temperature factor in ultra-short-term electrical load forecasting, and in particular to avoid its negative influence, in this paper we propose an ultra-short-term electrical load forecasting method based on a temperature factor weight and a long short-term memory model. The proposed method evaluates the importance of temperature for the current prediction task based on the magnitude of recent load changes and the correlation between temperature and load, so that the negative impact of temperature on the model can be avoided. On average, the mean absolute percentage error of the proposed method is 1.24%, 1.86%, and 6.21% lower than that of the traditional long short-term memory model, the back-propagation neural network, and the gray model, respectively. The experimental results demonstrate that this method has obvious advantages in prediction accuracy and generalization ability.

Graphical Abstract

1. Introduction

Ultra-short-term electrical load forecasting (USTLF), which refers to forecasting the load within one day [1], is the basis of the safe, reliable, and economical operation of an energy system. Owing to the growing penetration of distributed energy resources and the diversity of users, the increasing randomness of load changes makes load forecasting more difficult. USTLF also provides a basis for determining trends in the electricity market price [2]; an excessive prediction error results in higher operating costs.
Many load forecasting methods are in use at present. Jia et al. [3] generated a new data sequence from the original one by means of an accumulating generation operation and used it to establish an improved GM(1,1) model. Song et al. [4] built a combined model based on the BP network model and GM(1,1) residual correction, designed to improve the precision of load forecasting models. Liu et al. [5] introduced the idea of fractional-order accumulation into the GM(1,1) model, then improved the traditional BP neural network through the use of a layered training algorithm. Long et al. [6] devised a monthly power load combination prediction model based on a seasonal adjustment method and the BP neural network. Moreover, Ge et al. [7] developed a power load prediction algorithm based on fuzzy BP neural networks and a combined adaptive cubature Kalman filter. Hu [8] proposed a neural-network-based GM(1,1) model to address the dependence of the development coefficient and control variables on the fluctuating background value in the traditional gray prediction model. Rim et al. [9] built artificial neural networks to predict the half-hourly electric load demand in Tunisia over the period from 2000 to 2008. Behm et al. [10] developed a methodology that uses artificial neural networks to provide weather-dependent countrywide electricity load profiles; the method could serve as a basis for much-needed long-term load predictions for European countries. Pal et al. [11] proposed a hybridized forecasting model based on weight adjustment of neural networks with BP learning using general type-2 fuzzy sets. Parvez et al. [12] proposed a multilayer perceptron (MLP)-based photovoltaic forecasting method for the rooftop photovoltaic systems of smart homes.
Currently, with the rapid development of deep learning [13,14,15,16,17], various deep neural network models, especially LSTM and its variants, are widely employed in load forecasting tasks. Hochreiter et al. [18] proposed the LSTM model, which adds several control gates to the traditional recurrent neural network (RNN) for processing long-term dependencies in time series problems. Shahzad et al. [19] used long short-term memory artificial neural networks to predict the power load in different time periods. Santra et al. [20] utilized LSTM and a genetic algorithm to increase the robustness of short-term load forecasting. For their part, Qing et al. [21] proposed an hourly solar radiation intensity prediction method based on weather forecast data; moreover, Li [22] employed a deep learning LSTM recurrent neural network algorithm based on the TensorFlow intelligent learning system for short-term power load prediction. Chen et al. [23] combined the LSTM forecasting model with the XGBoost forecasting model to achieve power load forecasting. Zhang et al. [24] developed an LSTM network model scheme suitable for power load forecasting in Yichang. Liu et al. [25] proposed a stacked denoising autoencoder model for short-term load forecasting. In addition, some researchers have applied a third-generation artificial neural network, the spiking neural network, to power load forecasting [26,27,28].
Ambient temperature is one of the important factors that drive changes in electric load [29]; for example, scorching heat increases the air conditioning load. However, when the scope of the research is extended to the general case, temperature analysis does not necessarily improve prediction accuracy. In fact, when the correlation between temperature and load is weak, temperature analysis can even decrease the load prediction accuracy. Moreover, in some relatively stable load cases, the influence of temperature on load is already captured in the recent historical data; even if a strong correlation exists between temperature and load, higher accuracy can then be achieved without considering temperature. In previous studies [30,31], researchers simply considered the correlation between temperature and load in load forecasting; no further assessment of the impact of temperature on the model was conducted. In order to make better use of the temperature factor, we propose a method that combines the temperature factor weight (TFW) and the long short-term memory (LSTM) model (TFW-LSTM). By analyzing the historical load data and historical temperature data in the current prediction task, the module feeds back the TFW value, which determines whether the system needs to consider the temperature factor. The TFW-LSTM method can therefore improve the forecasting accuracy of power load, which benefits the utilization rate of power generation equipment and the effectiveness of economic dispatching.
Section 2 chiefly describes the basic principle of LSTM artificial neural networks, while Section 3 describes the TFW-LSTM method in more detail. Section 4 mainly presents the experimental results and the discussion thereof. Finally, Section 5 outlines the conclusion.

2. Long Short-Term Memory Artificial Neural Networks

LSTM artificial neural networks are a special type of recurrent neural network (RNN). LSTM mainly mitigates the "gradient explosion" and "gradient vanishing" phenomena that arise in RNNs, making it better able to deal with the problem of long-distance dependence. A multilayer LSTM network structure model is shown in Figure 1.
The construction of the LSTM network unit is depicted in Figure 2. Here, $C$ represents the long-term memory of the LSTM, to which new memories are added in real time as the network operates. $h_{t-1}$ denotes the output at the previous time point, while $h_t$ is the output at the current time point; $x_t$ represents the current input. The internal function modules of the LSTM unit are introduced below.
  • Forget gate: the forget gate produces a forgetting coefficient that determines how much of the long-term memory $C_{t-1}$ of the previous moment should be retained. It concatenates the output $h_{t-1}$ of the previous time point with the input $x_t$ of the current time point into an input matrix $[h_{t-1}, x_t]$. Finally, the sigmoid activation function outputs a real number in the range (0, 1); here, 1 means that all memories should be retained, while 0 indicates that all memories should be forgotten:
    $$f_t = \sigma(W_f \cdot [h_{t-1}, x_t] + b_f) \tag{1}$$
    Here, $\sigma$ is the activation function, $W_f$ represents the weight matrix of the fully connected layer, $b_f$ indicates its bias matrix, and $f_t$ is the forgetting coefficient.
  • Input gate: the input gate determines how much of the current input $x_t$ is saved into the long-term memory $C_t$:
    $$i_t = \sigma(W_i \cdot [h_{t-1}, x_t] + b_i), \qquad \tilde{C}_t = \tanh(W_C \cdot [h_{t-1}, x_t] + b_C) \tag{2}$$
    Here, $W_i$ and $b_i$ denote the weight and bias parameters of the sigmoid function at the fully connected layer, while $W_C$ and $b_C$ are the weight and bias parameters of the tanh function. The long-term memory is then updated as $C_t = f_t \cdot C_{t-1} + i_t \cdot \tilde{C}_t$.
  • Output gate: the intermediate parameter $o_t$ determines the extent to which the long-term memory $C_t$ affects the current cell output:
    $$o_t = \sigma(W_o \cdot [h_{t-1}, x_t] + b_o), \qquad h_t = o_t \cdot \tanh(C_t) \tag{3}$$
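The gate equations above can be sketched as a single forward step. The following is a minimal NumPy illustration (the dimensions, initialization, and function names are ours, not from the paper), including the standard cell-state update $C_t = f_t \cdot C_{t-1} + i_t \cdot \tilde{C}_t$ by which the gates combine:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell_step(x_t, h_prev, c_prev, W, b):
    """One LSTM step implementing the forget, input, and output gates.
    W and b hold the parameters W_f, W_i, W_C, W_o and b_f, b_i, b_C, b_o,
    each applied to the concatenated input [h_{t-1}, x_t]."""
    z = np.concatenate([h_prev, x_t])        # [h_{t-1}, x_t]
    f_t = sigmoid(W["f"] @ z + b["f"])       # forget gate, Equation (1)
    i_t = sigmoid(W["i"] @ z + b["i"])       # input gate, Equation (2)
    c_tilde = np.tanh(W["C"] @ z + b["C"])   # candidate memory, Equation (2)
    c_t = f_t * c_prev + i_t * c_tilde       # cell-state update
    o_t = sigmoid(W["o"] @ z + b["o"])       # output gate, Equation (3)
    h_t = o_t * np.tanh(c_t)                 # hidden output, Equation (3)
    return h_t, c_t
```

Because $o_t \in (0, 1)$ and $\tanh$ is bounded, every component of the hidden output $h_t$ lies in $(-1, 1)$.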

3. TFW-LSTM Method

The traditional USTLF method does not treat temperature comprehensively enough. When the correlation between temperature and power load is strong, temperature can improve the precision of power load prediction; when this correlation is weak, however, the precision decreases. Similarly, load forecasting that does not consider temperature can in fact achieve higher prediction accuracy, provided that the recent load is stable enough. In order to make better use of the temperature information, we propose a USTLF method that combines TFW and the LSTM model to solve the above problems. Accordingly, we add a TFW calculation module to the LSTM-based load forecasting method. After analyzing the historical load and temperature data in the current prediction task, the module feeds back the TFW value, which determines whether the system needs to consider the temperature.

3.1. Data Acquisition and Preprocessing

We use the electrical load and temperature data of a city in Hunan Province in 2019. The temporal resolution of the load data provided by the power company is 15 min, while the weather data were obtained from an open-source weather website (http://www.tianqihoubao.com/lishi/changsha.html) using a web crawler. The original electric load data cannot be used directly in the present experiment: some data are missing, which the power company marks with the value −999. Therefore, missing values are simulated and filled according to the changing trends of the data across time. To accomplish this, a data filling algorithm is proposed to fill in the missing values so that they are as close as possible to the real values. This algorithm, which averages the values in the cells adjacent to a missing-data cell, is named the adjacent cell average (ACA) method and operates as follows.
Step 1: Get the next Excel cell location (row, col) and check its data; repeat this step if the data are normal, and execute Step 2 if they are abnormal.
Step 2: Determine whether the abnormal cell is surrounded by normal data in adjacent cells; if not, record the location and wait for manual processing; if so, perform Step 3.
Step 3: Execute the ACA method to calculate the load value of the abnormal data point.
Step 4: Determine whether all data have been traversed; if so, exit the program; if not, return to Step 1.
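The four steps can be sketched as follows. This is an illustrative Python version under two stated assumptions: the readings are held in a 2-D grid (rows and columns as in the spreadsheet) with −999 marking missing values, and "adjacent cells" is taken to mean the eight surrounding cells, since the paper does not specify the neighborhood:

```python
import numpy as np

MISSING = -999  # marker used by the power company for absent readings

def aca_fill(grid):
    """Adjacent cell average (ACA) sketch: replace each missing cell with
    the mean of its valid neighboring cells (Steps 1 and 3); cells with no
    valid neighbor are recorded for manual processing (Step 2)."""
    grid = np.asarray(grid, dtype=float)
    filled = grid.copy()
    manual = []                                # locations needing manual fixes
    rows, cols = grid.shape
    for r in range(rows):                      # Steps 1 and 4: traverse all cells
        for c in range(cols):
            if grid[r, c] != MISSING:
                continue
            neighbors = [grid[rr, cc]
                         for rr in (r - 1, r, r + 1)
                         for cc in (c - 1, c, c + 1)
                         if (rr, cc) != (r, c)
                         and 0 <= rr < rows and 0 <= cc < cols
                         and grid[rr, cc] != MISSING]
            if neighbors:                      # Step 3: average the adjacent cells
                filled[r, c] = sum(neighbors) / len(neighbors)
            else:                              # Step 2: no usable neighbor
                manual.append((r, c))
    return filled, manual
```

A single missing reading surrounded by valid neighbors is thus replaced by the mean of those neighbors, which tracks the local trend of the series.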

3.2. Construction of LSTM Model

As for the selection of hyperparameters, we compare different combinations experimentally and select the combination with the lowest mean absolute percentage error (MAPE%), as listed in Table 1. The input data include the historical load of the four most recent time points, the sampling point, and the temperature at the current time point (if the system determines that temperature should not be taken into account, the input dimension is 5). The parameter keep-prob makes the neurons work with a certain probability during training. The LSTM neural network structure employed in this paper is illustrated in Figure 3; the number of cells is the same in every hidden layer. The LSTM model consists of one input layer, three hidden layers, and one output layer.
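As a small illustration of the input layout described above, the following hypothetical helper (the name and signature are ours, not the paper's) assembles one input vector from the four recent load points, the sampling-point index, and an optional temperature reading:

```python
def build_input(recent_loads, sample_idx, temperature=None):
    """Assemble one model input: the four most recent load readings plus
    the sampling-point index; the temperature is appended only when the
    TFW module decides it should be considered."""
    assert len(recent_loads) == 4, "expects the four most recent load points"
    x = list(recent_loads) + [sample_idx]
    if temperature is not None:
        x.append(temperature)   # 6-dimensional input with temperature
    return x                    # otherwise 5-dimensional
```

The input dimension is therefore 6 when temperature is considered and 5 when it is not.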

3.3. TFW

In this paper, a TFW calculation module is proposed to reflect the degree to which temperature is worth considering when predicting the current power load. The structure diagram of the TFW calculation module is presented in Figure 4, and the algorithm flow is shown in Algorithm 1. The module takes historical temperature data and historical load data as input, and outputs the TFW value $W_{temp}$ through the intermediate temperature influence coefficient (TIC) $T_{if}$ and the mapping relation $f: T_{if} \rightarrow W_{temp}$. The module calculates the variance of the load value at each of the 96 daily time points across all dates in the historical data; the sum of these variances is represented by Var. Here, Var is used to reflect the degree of load fluctuation in the training data. As load fluctuation is mainly derived from weather-sensitive load, this variable can reflect the degree to which abrupt changes in weather-sensitive load are present in the training data. The variance calculation block is used to compute Var.
Algorithm 1 TFW Calculation Module.
Input: Historical load data L; historical temperature data T.
Output: TFW W_temp.
1: Initialize: sum = 0.
2: for each j ∈ [1, 96] do
3:   Calculate the variance x of the sequence of historical load values at the j-th time point of the day;
4:   sum = sum + x;
5: end for
6: Var = sum / 96
7: Calculate the covariance Cov between L and T;
8: Calculate the correlation coefficient r between L and T;
9: Standardize r and Var;
10: Calculate the temperature influence coefficient T_if;
11: Calculate the TFW according to the mapping relation f;
12: return W_temp
Moreover, Var is calculated as in Equation (4):
$$\mathrm{Var} = \sum_{i=1}^{96} \frac{1}{N} \sum_{j=1}^{N} \left( L_{ij} - \frac{1}{N} \sum_{j=1}^{N} L_{ij} \right)^2 \tag{4}$$
Here, $N$ represents the total number of days of historical data used, while $L_{ij}$ represents the load value at the $i$-th time point on the $j$-th day. The covariance and correlation coefficient are calculated as shown in Equations (5) and (6):
$$\mathrm{COV}(X, Y) = E\left[ (X - E(X))(Y - E(Y)) \right] \tag{5}$$
$$R(X, Y) = \frac{\mathrm{COV}(X, Y)}{\sqrt{\mathrm{var}(X)\,\mathrm{var}(Y)}} \tag{6}$$
Here, COV represents the covariance, while $X$ and $Y$ denote the temperature and power load, respectively. The normalization module is used to normalize the data. $T_{if}$ is calculated according to Equation (7):
$$T_{if} = \bar{R} \cdot \overline{\mathrm{Var}} \tag{7}$$
The TFW mapping block maps the calculated TIC value $T_{if}$ to the corresponding interval. As shown in Figure 5, the add-temperature-factor interval corresponds to a TFW $W_{temp}$ of 100%, indicating that the temperature must be considered in the calculation, while the no-temperature-factor interval corresponds to $W_{temp} = 0$, indicating that the temperature should not be considered. However, the fuzzy endpoint $T_\sigma$ is a critical value characterized by volatility, which is in turn caused by the randomness and volatility of the power load and temperature. In this paper, the floating range of the fuzzy endpoint $T_\sigma$ is obtained via experimental study: when the TIC $T_{if}$ falls within the fuzzy-endpoint interval (0.450, 0.533), $W_{temp}$ is calculated according to Equation (8):
$$W_{temp} = \frac{T_{if} - 0.45}{0.083} \times 100\% \tag{8}$$
Finally, the mapping relation f : T i f W t e m p between the TIC and the TFW is as presented in Table 2.
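Algorithm 1 and the mapping can be sketched numerically as follows. This is an assumption-laden illustration, not the paper's implementation: the load is taken as an N × 96 array, the correlation is computed between daily mean load and daily temperature, and the standardization of r and Var is a hypothetical scaling onto [0, 1], since the paper does not give the exact scheme:

```python
import numpy as np

def temperature_factor_weight(load, temp, t_lo=0.450, t_hi=0.533):
    """Sketch of the TFW calculation. load is an (N, 96) array of historical
    loads (N days, 96 daily sampling points); temp is a length-N array of
    daily temperatures. Returns (T_if, W_temp) with W_temp in [0, 1]."""
    var = load.var(axis=0).mean()            # Var = sum of per-point variances / 96
    daily = load.mean(axis=1)                # daily mean load (assumption)
    r = abs(np.corrcoef(daily, temp)[0, 1])  # correlation between load and temperature
    # hypothetical standardization onto [0, 1]; the paper only says
    # "standardize r and Var" without giving the formula
    r_bar = min(r, 1.0)
    var_bar = var / (var + 1.0)
    t_if = r_bar * var_bar                   # Equation (7)
    if t_if >= t_hi:
        return t_if, 1.0                     # add-temperature-factor interval
    if t_if <= t_lo:
        return t_if, 0.0                     # no-temperature-factor interval
    return t_if, (t_if - t_lo) / (t_hi - t_lo)   # Equation (8), as a fraction
```

Note that the width of the fuzzy interval, 0.533 − 0.450 = 0.083, is exactly the denominator in Equation (8), so the linear map covers the interval continuously from 0 to 100%.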

3.4. Implementation of TFW-LSTM Method

3.4.1. Structure of TFW-LSTM Method

The present paper proposes a short-term power load forecasting method based on TFW and the LSTM model; its block diagram is illustrated in Figure 6. In phase 1, the historical load data and historical temperature data are input into the TFW calculation module, after which the corresponding $W_{temp}$ is calculated and output to the control block. The control block is a logical unit that controls whether historical temperature data are input into the neural network training module. When the TFW meets $W_{temp} \geq 50\%$, the control block decides that the temperature factor should be considered in the current prediction task, and the historical temperature data pass through the control block; otherwise, the historical temperature data are blocked and the output value is None. Next, the training block simultaneously receives the historical load data and the historical sampling-point data. The AdamOptimizer, under the TensorFlow framework, is used for training to find the optimal parameters of the model. Once training is completed, the optimal parameters are passed on to the LSTM model for testing.
The control block of phase 2 receives the $W_{temp}$ calculated in phase 1 to control the temperature data used in the current forecast. The LSTM model receives the sampling point, the return value of the control block, and the optimal model parameters as input, then outputs the corresponding power load prediction results.
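The gating behavior of the control block described above can be expressed as a short sketch (the function name is ours); it simply passes the temperature data through when the 50% threshold is met and returns None otherwise:

```python
def control_block(w_temp, temperature_data):
    """Pass temperature data through only when the TFW meets the
    W_temp >= 50% threshold; otherwise return None so the model is
    trained and queried without the temperature feature."""
    return temperature_data if w_temp >= 0.5 else None
```

Both the phase 1 training path and the phase 2 forecasting path reuse this same decision, so training and inference always agree on whether temperature is a model input.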

3.4.2. Experimental Configuration

In this paper, 39 dates in 2019 are randomly selected as the testing set. The data of the 10 days prior to each prediction date are used for training; thus, the ratio of the training set to the test set is 10:1. During the experiments, the trained model outputs the predicted load value for each predicted time point, which is compared with the label value to calculate the error. Finally, four test sets are extracted to compare the proposed method with traditional power load forecasting methods. Owing to the large number of missing data points, the data for February are not used in this paper.
This article employs three performance metrics to evaluate the results of the model testing: MAPE, mean absolute error (MAE), and root mean square error (RMSE).
The MAPE is defined as follows:
$$\mathrm{MAPE} = \frac{1}{n} \sum_{i=1}^{n} \left| \frac{\tilde{y}_i - y_i}{y_i} \right| \times 100\% \tag{9}$$
The MAE is defined as follows:
$$\mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} \left| \tilde{y}_i - y_i \right| \tag{10}$$
The RMSE is defined as follows:
$$\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left( \tilde{y}_i - y_i \right)^2} \tag{11}$$
Here, $\tilde{y}_i$ denotes the model output, $y_i$ represents the true value, and $n$ is the total number of calculated values.
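The three metrics can be computed directly from the predicted and true series; a NumPy sketch:

```python
import numpy as np

def mape(y_pred, y_true):
    """Mean absolute percentage error, in percent."""
    return np.mean(np.abs((y_pred - y_true) / y_true)) * 100.0

def mae(y_pred, y_true):
    """Mean absolute error, in the units of the load."""
    return np.mean(np.abs(y_pred - y_true))

def rmse(y_pred, y_true):
    """Root mean square error, in the units of the load."""
    return np.sqrt(np.mean((y_pred - y_true) ** 2))
```

MAPE is scale-free and thus comparable across dates with different load levels, while MAE and RMSE are in load units; RMSE penalizes large individual errors more heavily.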

4. Results and Discussion

In order to verify the performance of the TFW-LSTM method, data from a region in Hunan, China in 2019 were selected for comparative experiments. Among them, 39 dates were randomly selected to compare the performance of the proposed method with that of the traditional LSTM, and the TIC $T_{if}$ for each date was calculated simultaneously. The experimental results are shown in Table 3.
The experiments were conducted on a laptop with an Intel(R) Core(TM) i7-8750H CPU at 2.20 GHz, a 64-bit Windows 10 operating system, and 8 GB of memory, using Python 3.7.4 with the TensorFlow framework. The MAPE% comparison between the TFW-LSTM method and the traditional LSTM model is presented in Figure 7; we also selected four typical days for further study, with the comparison results illustrated in Figure 8.
It can be concluded from the experimental results that the proposed method performs better overall and is generally more stable than the traditional LSTM model. Moreover, because the TFW-LSTM method can flexibly apply the temperature factor in the power load forecasting process, it absorbs the advantages of utilizing the temperature factor while avoiding the associated disadvantages.
Next, to demonstrate the superiority of the proposed method more objectively, it was compared with the BP neural network and the traditional gray model on the four typical dates above. The metric results are listed in Table 4, and the line graph is presented in Figure 9.
As the results show, the TFW-LSTM method is clearly superior to the other traditional methods in every metric. On the four typical dates, the proposed method reduced the MAPE by 1.24%, 1.68%, and 6.21% on average compared with the traditional LSTM, BP, and GM, respectively. Compared with LSTM, the TFW-LSTM method adds a dynamic feature-control mechanism and shows higher stability and prediction accuracy in USTLF. In contrast with the other traditional prediction methods, the TFW-LSTM method has a great advantage because of its inherent nonlinear processing ability and temporal data processing ability.

5. Conclusions

In order to eliminate the negative influence of temperature on load prediction in USTLF, we propose a method for USTLF based on TFW and the LSTM model. The TFW calculation module is the core of the proposed method, which determines whether the temperature factor should be considered.
The proposed method was verified using real data from a region in Hunan Province, China in 2019. The results show that, compared with traditional load forecasting methods, the proposed method evaluates the importance of temperature to forecasting at the current time, dynamically avoids the negative impact of temperature, and achieves higher prediction accuracy in combination with the LSTM model. The performance metrics MAPE, MAE, and RMSE reflect the superiority of the proposed method.
In the future, as deep learning theory comes to be utilized more widely in data processing [32,33,34], we will attempt to use additional methods to improve both the accuracy of power load prediction and the overall model stability. In recent years, with the development of nonlinear system theory and research [35,36], we will try to adopt nonlinear time series forecasting models based on chaos theory for power load forecasting. We will also consider adopting image data processing methods [37,38] for power load forecasting.

Author Contributions

Conceptualization, D.Z. and H.T.; methodology, D.Z., H.T. and F.L.; software, H.T.; validation, F.L., L.X. and X.D.; formal analysis, F.L.; investigation, D.Z.; resources, F.L.; data curation, H.T.; writing—original draft preparation, H.T. and D.Z.; writing—review and editing, D.Z., L.X. and X.D.; visualization, X.D.; supervision, L.X.; project administration, X.D.; funding acquisition, F.L. All authors have read and agreed to the published version of the manuscript.

Funding

This project is supported by the National Natural Science Foundation of China under grant 61972057, the Hunan Provincial Natural Science Foundation of China under Grants 2019JJ50655 and 2020JJ4626, the Scientific Research Fund of Hunan Provincial Education Department of China under Grants 18B160 and 19B004, the Open Fund of Hunan Key Laboratory of Smart Roadway and Cooperative Vehicle Infrastructure Systems (Changsha University of Science and Technology) under Grant kfj180402, the “Double First-class” International Cooperation and Development Scientific Research Project of Changsha University of Science and Technology under Grant 2018IC25, and the Young Teacher Growth Plan Project of Changsha University of Science and Technology under Grant 2019QJCZ076.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript.
USTLF: Ultra-short-term electrical load forecasting
TFW: Temperature factor weight
LSTM: Long short-term memory
MAPE: Mean absolute percentage error
BP: Back propagation
GM: Grey model
MLP: Multilayer perceptron
TFW-LSTM: The proposed method combining TFW and the LSTM model
ACA: Adjacent cell average
TIC: Temperature influence coefficient

References

  1. Hong, T.; Fan, S. Probabilistic electric load forecasting: a tutorial review. Int. J. Forecast. 2016, 32, 914–938. [Google Scholar] [CrossRef]
  2. Wang, Q.; Zhang, C.; Ding, Y.; Xydis, G.; Wang, J.; Østergaard, J. Review of real-time electricity markets for integrating distributed energy resources and demand response. Appl. Energy 2015, 138, 695–706. [Google Scholar] [CrossRef]
  3. Jia, Z.; Fan, Z.; Li, C.; Jiang, M. The Application of Improved Grey GM(1,1) Model in Power System Load Forecast. Future Wirel. Netw. Inf. Syst. 2011, 144, 603–608. [Google Scholar]
  4. Song, J.; Shu, H.; Dong, J.; Liang, Y.; Li, Y.; Yang, B. Comprehensive Load Forecast Based on GM(1,1) and BP Neural Network. Electr. Power Constr. 2020, 41, 75–80. [Google Scholar] [CrossRef]
  5. Liu, C.-W.; Zhao, H.-K.; Yan, H.; Wang, J.-H. Power Load Forecasting Based on Fractional GM(1,1) and BP Neural Network. Math. Pract. Theory 2018, 48, 145–151. [Google Scholar]
  6. Long, Y.; Su, Z.; Wang, Y. Monthly load forecasting model based on seasonal adjustment and BP neural network. Syst. Eng. Theory Pract. 2018, 38, 1052–1060. [Google Scholar] [CrossRef]
  7. Ge, Q.; Jiang, H.; He, M.; Zhu, Y.; Zhang, J. Power Load Forecast Based on Fuzzy BP Neural Networks with Dynamical Estimation of Weights. Int. J. Fuzzy Syst. 2020, 22, 956–969. [Google Scholar] [CrossRef]
  8. Hu, Y.-C. Electricity consumption prediction using a neural network based grey forecasting approach. J. Oper. Res. Soc. 2017, 68, 1259–1264. [Google Scholar] [CrossRef]
  9. Rim, H.; Mourad, Z.; Ousama, B.-S. Short-term electric load forecasting in Tunisia using artificial neural networks. Energy Syst. 2020, 11, 357–375. [Google Scholar] [CrossRef]
  10. Behm, C.; Nolting, L.; Praktiknjo, A. How to Model European Electricity Load Profiles using Artificial Neural Networks. Appl. Energy 2020, 277, 115564. [Google Scholar] [CrossRef]
  11. Pal, S.S.; Kar, S. A hybridized forecasting method based on weight adjustment of neural network using generalized type-2 fuzzy set. Int. J. Fuzzy Syst. 2019, 21, 308–320. [Google Scholar] [CrossRef]
  12. Imtiaz, P.; Arif, S.; Anjan, D.; Temitayo, O.; Md Golam, D. Multi-Layer Perceptron Based Photovoltaic Forecasting for Rooftop PV Applications in Smart Grid. 2020. Available online: https://www.researchgate.net/publication/338852633 (accessed on 26 August 2020).
  13. Tong, Y.; Liu, Y.; Wang, J.; Xin, G. Text Steganography on RNN-Generated Lyrics. Math. Biosci. Eng. 2019, 16, 5451–5463. [Google Scholar] [CrossRef]
  14. Xiang, L.; Yang, S.; Liu, Y.; Li, Q.; Zhu, C. Novel Linguistic Steganography Based on Character-Level Text Generation. Mathematics. 2020, 8, 1558. Available online: https://0-doi-org.brum.beds.ac.uk/10.3390/math8091558 (accessed on 17 September 2020). [CrossRef]
  15. Wang, J.; Qin, J.; Xiang, X.; Tan, Y.; Pan, N. CAPTCHA recognition based on deep convolutional neural network. Math. Biosci. Eng. 2019, 16, 5851–5861. [Google Scholar] [CrossRef]
  16. Zhang, Y.; Lu, W.; Ou, W.; Zhang, G.; Zhang, X.; Cheng, J.; Zhang, W. Chinese medical question answer selection via hybrid models based on CNN and GRU. Multimed. Tools Appl. 2020, 79, 14751–14776. [Google Scholar] [CrossRef]
  17. Luo, Y.; Qin, J.; Xiang, X.; Tan, Y.; Liu, Q.; Xiang, L. Coverless real-time image information hiding based on image block matching and Dense Convolutional Network. J. Real-Time Image Proc. 2020, 17, 125–135. [Google Scholar] [CrossRef]
  18. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef]
  19. Muzaffar, S.; Afshari, A. Short-Term Load Forecasts Using LSTM Networks. Energy Procedia 2019, 158, 2922–2927. [Google Scholar] [CrossRef]
  20. Santra, A.S.; Lin, J.-L. Integrating Long Short-Term Memory and Genetic Algorithm for Short-Term Load Forecasting. Energies 2019, 12, 2040. [Google Scholar] [CrossRef] [Green Version]
  21. Qing, X.; Niu, Y. Hourly day-ahead solar irradiance prediction using weather forecasts by LSTM. Energy 2018, 148, 461–468. [Google Scholar] [CrossRef]
  22. Li, S. LSTM Recurrent Neural Network Short-Term Power Load Forecasting Based on TensorFlow. Shanghai Energy Conserv. 2018, 7, 974–977. [Google Scholar]
  23. Chen, Z.; Liu, J.; Li, C.; Ji, X.; Li, D.; Huang, Y.; Di, F. Ultra Short-term Power Load Forecasting Based on Combined LSTM-XGBoost Model. Power System Technol. 2019, 44, 614–620. [Google Scholar] [CrossRef]
  24. Zhang, M.; Du, Y.; Hong, G. Application of LSTM in Yichang Short-term Power Load Forecasting. J. Meteorol. Res. Appl. 2019, 40, 72–77. [Google Scholar] [CrossRef]
  25. Liu, P.; Zheng, P.; Chen, Z. Deep Learning with Stacked Denoising Auto-Encoder for Short-Term Electric Load Forecasting. Energies 2019, 12, 2445. [Google Scholar] [CrossRef] [Green Version]
  26. Kulkarni, S.; Simon, S.P. A spiking neural network (SNN) forecast engine for short-term electrical load forecasting. Appl. Soft Comput. 2013, 13, 28–35. [Google Scholar] [CrossRef]
  27. Wang, H.; Xue, W.; Liu, Y.; Peng, J.; Jiang, H. Probabilistic wind power forecasting based on spiking neural network. Energy 2020, 196. [Google Scholar] [CrossRef]
  28. Udaya, B.R.; Alberto, M.; Anton, S.; Henrik, J.; Calogero, M.O. Cuneate spiking neural network learning to classify naturalistic texture stimuli under varying sensing conditions. Neural Netw. 2020, 123, 273–287. [Google Scholar] [CrossRef]
  29. Zhang, N.; Li, Z.; Zou, X.; Quiring, S.M. Comparison of three short-term load forecast models in Southern California. Energy 2019, 189, 1–11. [Google Scholar] [CrossRef]
  30. Li, B.; Men, D.; Yang, J.; Zhou, J. Bus load Forecasting Based on Numerical Weather Prediction. Autom. Electr. Power Syst. 2015, 39, 137–140. [Google Scholar] [CrossRef]
  31. Yuan, S.; Zhang, G.; Zhang, J. Integrated Forecasting Model of Bus Load Based on Numerical Weather Prediction. Power Syst. Autom. 2019, 41, 62–65. [Google Scholar] [CrossRef]
  32. Wang, J.; Gu, X.; Liu, W. An Empower Hamilton Loop based Data Collection Algorithm with MobileAgent for WSNs. Hum.-Cent. Comput. Inf. Sci. 2019, 9, 1–14. [Google Scholar] [CrossRef]
  33. Wang, J.; Yang, Y.; Wang, T. Big Data Service Architecture: A Survey. J. Internet Technol. 2020, 21, 393–405. [Google Scholar]
  34. Zhang, J.; Zhong, S.; Wang, T. Blockchain-Based Systems and Applications: A Survey. J. Internet Technol. 2020, 21, 1–14. [Google Scholar]
  35. Yu, F.; Liu, L.; Xiao, L.; Li, K.; Ca, S. A robust and fixed-time zeroing neural dynamics for computing time-variant nonlinear equation using a novel nonlinear activation function. Neurocomputing 2019, 350, 108–116. [Google Scholar] [CrossRef]
  36. Yu, F.; Liu, L.; He, B.; Huang, Y.; Shi, C.; Cai, S.; Song, Y.; Du, S.; Wan, Q. Analysis and FPGA Realization of a Novel 5D Hyperchaotic Four-Wing Memristive System, Active Control Synchronization, and Secure Communication Application. Complexity 2019, 1, 1–18. [Google Scholar] [CrossRef] [Green Version]
  37. Zhang, D.; Yang, G.; Li, F. Detecting seam carved images using uniform local binary patterns. Multimed. Tools Appl. 2018, 79, 8415–8430. [Google Scholar] [CrossRef]
  38. Ding, X.; Zhang, D. Detection of motion-compensated frame-rate up-conversion viaoptical flow-based prediction residue. Optik 2020, 207, 1637–1649. [Google Scholar] [CrossRef]
Figure 1. Long short-term memory (LSTM) network structure.
Figure 2. Structure diagram of LSTM cell.
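As a hedged illustration of the gating structure shown in Figure 2, one LSTM time step can be written out in NumPy using the standard LSTM equations (the weight shapes, gate ordering, and sigmoid/tanh choices below are the textbook formulation, not the authors' exact implementation):

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def lstm_cell_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step (standard formulation).

    x: input vector (D,); h_prev, c_prev: previous hidden/cell state (H,).
    W: (4H, D), U: (4H, H), b: (4H,) hold the stacked gate parameters,
    assumed ordered as [input gate, forget gate, cell candidate, output gate].
    """
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b            # all four gate pre-activations at once
    i = sigmoid(z[:H])                    # input gate
    f = sigmoid(z[H:2 * H])               # forget gate
    g = np.tanh(z[2 * H:3 * H])           # candidate cell state
    o = sigmoid(z[3 * H:])                # output gate
    c = f * c_prev + i * g                # new cell state
    h = o * np.tanh(c)                    # new hidden state
    return h, c
```

With the hyperparameters selected in Table 1 (input size 6, 40 RNN units), `D = 6` and `H = 40`; since `h = o · tanh(c)`, every component of the hidden state is bounded in magnitude by 1.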
Figure 3. The structure of LSTM neural network.
Figure 4. Structure of Temperature Factor Weight Calculation Module.
Figure 5. Significance map of temperature influence coefficient.
Figure 6. Block diagram of the proposed method.
Figure 7. The mean absolute percentage error (MAPE%) comparison of TFW-LSTM method and traditional LSTM model.
Figure 8. Prediction results for (a) 20 March 2019, (b) 31 July 2019, (c) 7 November 2019, and (d) 24 December 2019. "Real" denotes the actual load, "TFW-LSTM" is the proposed method, and "LSTM" is the traditional LSTM model.
Figure 9. Prediction results for (a) 20 March 2019, (b) 31 July 2019, (c) 7 November 2019, and (d) 24 December 2019. "Real" denotes the actual load, "TFW-LSTM" is the proposed method, "GM" is the traditional gray model, and "BP" is the traditional back-propagation neural network model.
Table 1. The results of the hyperparameter selection experiments.

| RNN-Unit | Input Size | Learning Rate | RNN-Hid Layer | Batch Size | Keep-Prob | MAPE% |
|---|---|---|---|---|---|---|
| 40 | 6 | 0.0006 | 3 | 96 | 1 | 0.516494 |
| 60 | 7 | 0.0014 | 1 | 20 | 1 | 0.928832 |
| 40 | 7 | 0.0001 | 4 | 48 | 0.9 | 1.87681 |
| 80 | 5 | 0.001 | 3 | 96 | 1 | 1.065048 |
| 60 | 4 | 0.0006 | 5 | 96 | 1 | 0.586579 |
Table 2. Mapping relation table.

| Interval | TFW |
|---|---|
| Add Temperature Factor Interval | 100% |
| No Temperature Factor Interval | 0% |
| Fuzzy Endpoint T_σ | 12.048 · (T_if − 0.45) × 100% |
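Reading the mapping in Table 2 as a linear ramp inside the fuzzy interval, clamped to 0% and 100% outside it (the minus sign and the clamping are our interpretation of the flattened table, not stated explicitly), the temperature factor weight can be sketched as:

```python
def temperature_factor_weight(t_if, low=0.45, scale=12.048):
    """Map the temperature influence coefficient T_if to a weight in [0%, 100%].

    Inside the fuzzy interval the weight ramps linearly as
    12.048 * (T_if - 0.45); below the lower endpoint it saturates at 0%
    (no temperature factor), and above the upper endpoint at 100%
    (temperature factor fully added). Constants follow Table 2.
    """
    w = scale * (t_if - low)              # linear ramp inside the fuzzy interval
    return min(max(w, 0.0), 1.0) * 100.0  # clamp, express as a percentage
```

Under this reading the weight reaches 100% at T_if ≈ 0.45 + 1/12.048 ≈ 0.533, which is consistent with the high-T_if summer days in Table 3 receiving the full temperature factor.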
Table 3. Experimental results from January to December 2019.

| Prediction Date | MAPE% (Proposed) | MAE (Proposed) | RMSE (Proposed) | MAPE% (LSTM) | MAE (LSTM) | RMSE (LSTM) | T_if |
|---|---|---|---|---|---|---|---|
| 12 January 2019 | 1.227 | 63.163 | 103.183 | 1.463 | 77.701 | 114.354 | 0.075 |
| 18 January 2019 | 1.153 | 56.607 | 85.589 | 1.630 | 83.032 | 127.219 | 0.102 |
| 24 January 2019 | 1.241 | 54.110 | 84.717 | 1.946 | 82.014 | 117.868 | 0.206 |
| 12 March 2019 | 1.246 | 43.658 | 65.677 | 1.357 | 47.994 | 74.596 | 0.162 |
| 17 March 2019 | 1.202 | 42.803 | 69.275 | 2.405 | 85.847 | 110.384 | 0.110 |
| 20 March 2019 | 0.790 | 27.635 | 38.617 | 1.981 | 63.316 | 84.662 | 0.039 |
| 28 March 2019 | 1.718 | 56.029 | 117.487 | 2.399 | 82.745 | 144.635 | 0.059 |
| 13 April 2019 | 1.525 | 49.922 | 71.111 | 1.968 | 65.622 | 93.890 | 0.099 |
| 20 April 2019 | 0.918 | 30.824 | 43.712 | 1.654 | 55.777 | 76.264 | 0.012 |
| 30 April 2019 | 1.096 | 35.621 | 50.803 | 1.619 | 52.045 | 73.962 | 0.046 |
| 4 May 2019 | 1.293 | 42.546 | 66.575 | 1.836 | 61.701 | 90.558 | 0.253 |
| 12 May 2019 | 1.118 | 36.729 | 54.485 | 1.597 | 51.431 | 70.312 | 0.126 |
| 23 May 2019 | 1.062 | 37.021 | 56.133 | 1.507 | 49.843 | 74.977 | 0.007 |
| 31 May 2019 | 1.192 | 39.722 | 61.448 | 1.821 | 57.460 | 74.052 | 0.137 |
| 12 June 2019 | 1.451 | 57.441 | 78.488 | 1.526 | 60.385 | 82.375 | 0.578 |
| 20 June 2019 | 1.119 | 55.520 | 87.815 | 1.572 | 80.519 | 119.348 | 0.284 |
| 30 June 2019 | 1.390 | 56.590 | 88.353 | 1.458 | 58.247 | 85.244 | 0.904 |
| 13 July 2019 | 1.564 | 57.286 | 80.040 | 1.583 | 60.550 | 94.416 | 0.637 |
| 20 July 2019 | 1.421 | 78.423 | 110.185 | 1.345 | 74.223 | 107.056 | 0.980 |
| 27 July 2019 | 1.025 | 66.570 | 97.223 | 0.957 | 61.890 | 88.563 | 0.594 |
| 31 July 2019 | 0.502 | 32.783 | 42.402 | 1.159 | 68.649 | 86.243 | 0.533 |
| 6 August 2019 | 0.638 | 42.122 | 61.446 | 0.766 | 48.252 | 65.227 | 0.209 |
| 10 August 2019 | 0.604 | 36.360 | 52.501 | 0.836 | 49.441 | 65.596 | 0.236 |
| 20 August 2019 | 0.726 | 46.776 | 66.012 | 0.882 | 55.004 | 75.880 | 0.281 |
| 31 August 2019 | 1.150 | 51.518 | 67.147 | 1.194 | 77.397 | 94.842 | 0.699 |
| 8 September 2019 | 1.059 | 55.734 | 89.783 | 1.145 | 61.295 | 92.147 | 0.540 |
| 24 September 2019 | 1.313 | 52.112 | 78.539 | 1.510 | 56.266 | 85.089 | 0.450 |
| 28 September 2019 | 1.117 | 42.166 | 59.831 | 1.257 | 51.505 | 75.338 | 0.048 |
| 3 October 2019 | 1.247 | 46.173 | 65.438 | 2.061 | 69.810 | 88.587 | 0.196 |
| 13 October 2019 | 1.215 | 43.057 | 66.225 | 1.926 | 66.577 | 89.706 | 0.336 |
| 19 October 2019 | 1.106 | 39.967 | 59.553 | 1.218 | 43.722 | 62.159 | 0.026 |
| 30 October 2019 | 1.120 | 40.293 | 67.258 | 1.377 | 47.215 | 62.717 | 0.010 |
| 7 November 2019 | 0.651 | 23.465 | 29.651 | 1.205 | 44.648 | 69.212 | 0.038 |
| 14 November 2019 | 0.919 | 33.960 | 56.952 | 1.145 | 43.583 | 67.244 | 0.000 |
| 20 November 2019 | 1.158 | 43.012 | 67.328 | 1.359 | 52.550 | 85.253 | 0.043 |
| 30 November 2019 | 1.918 | 87.432 | 117.466 | 3.755 | 184.180 | 244.727 | 0.000 |
| 8 December 2019 | 0.908 | 39.555 | 63.833 | 1.631 | 72.002 | 106.587 | 0.120 |
| 24 December 2019 | 0.797 | 38.797 | 50.369 | 3.375 | 156.705 | 201.106 | 0.088 |
| 31 December 2019 | 1.402 | 72.795 | 106.010 | 1.636 | 80.844 | 112.853 | 0.093 |
Table 4. The comparison result table of the proposed method and traditional methods.

| Date | MAPE% (TFW-LSTM) | MAE (TFW-LSTM) | RMSE (TFW-LSTM) | MAPE% (BP) | MAE (BP) | RMSE (BP) | MAPE% (GM) | MAE (GM) | RMSE (GM) |
|---|---|---|---|---|---|---|---|---|---|
| 20 March | 0.79 | 27.63 | 38.61 | 2.85 | 89.11 | 106.56 | 7.76 | 252.16 | 302.10 |
| 31 July | 0.50 | 32.78 | 42.40 | 2.47 | 165.66 | 200.96 | 5.32 | 341.48 | 427.58 |
| 7 November | 0.65 | 23.46 | 29.65 | 2.04 | 73.38 | 96.71 | 6.80 | 241.92 | 286.86 |
| 24 December | 0.79 | 39.69 | 53.18 | 2.87 | 144.66 | 176.36 | 7.70 | 362.18 | 442.44 |
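The three error metrics reported in Tables 3 and 4 follow their standard definitions; a minimal sketch (the authors' evaluation code is not shown, so this is only the textbook formulation):

```python
import math

def mape(actual, pred):
    """Mean absolute percentage error, in percent.
    Undefined when any actual value is zero, which does not occur for load data."""
    return 100.0 / len(actual) * sum(abs((a - p) / a) for a, p in zip(actual, pred))

def mae(actual, pred):
    """Mean absolute error, in the units of the load (e.g., MW)."""
    return sum(abs(a - p) for a, p in zip(actual, pred)) / len(actual)

def rmse(actual, pred):
    """Root mean squared error; penalizes large deviations more than MAE."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual))
```

For example, with actual loads `[100, 200]` and predictions `[110, 190]`, MAPE is 7.5%, and both MAE and RMSE are 10.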

Zhang, D.; Tong, H.; Li, F.; Xiang, L.; Ding, X. An Ultra-Short-Term Electrical Load Forecasting Method Based on Temperature-Factor-Weight and LSTM Model. Energies 2020, 13, 4875. https://0-doi-org.brum.beds.ac.uk/10.3390/en13184875
