Article

Forest Fire Smoke Detection Using Back-Propagation Neural Network Based on MODIS Data

State Key Laboratory of Fire Science, University of Science and Technology of China, Jinzhai 96, Hefei 2300027, China
*
Author to whom correspondence should be addressed.
Remote Sens. 2015, 7(4), 4473-4498; https://doi.org/10.3390/rs70404473
Submission received: 14 December 2014 / Revised: 15 March 2015 / Accepted: 15 March 2015 / Published: 15 April 2015

Abstract

Satellite remote sensing provides global observations of the Earth’s surface and useful information for monitoring smoke plumes emitted from forest fires. The aim of this study is to automatically separate smoke plumes from the background by analyzing MODIS data. An identification algorithm was improved based on spectral analysis of smoke, cloud and the underlying surface. In order to obtain satisfactory results, a multi-threshold method is used to extract training sample sets for a back-propagation neural network (BPNN) classifier, which is merged into the smoke detection algorithm. MODIS data from three forest fires were used to develop the algorithm and determine parameter values. These fires occurred in (i) China on 16 October 2004, (ii) Northeast Asia on 29 April 2009 and (iii) Russia on 29 July 2010, i.e., in different seasons. Data from four other fires were then used to validate the algorithm. Results indicate that the algorithm captures both thick smoke and thin dispersed smoke over land, as well as mixed pixels of smoke over the ocean. These results could provide valuable information concerning forest fire location, fire spreading and so on.


1. Introduction

Forest fires are among the major natural disasters in the world, and they have occurred more and more frequently in recent years [1]. Large amounts of smoke particles are emitted from forest fires and enter the atmosphere. As a component of aerosol, fire smoke has a tremendous impact on regional air quality and on the long-term climate of the fire regions and of downwind regions through long-range transport [2,3,4,5], and it harms local public health [6]. This is because particulate matter [6] and greenhouse gases such as CO2 [7] are produced by fires. They can affect the chemistry of the troposphere [8,9] and cause chronic obstructive pulmonary disease, bronchitis, asthma and chest pain [6,10,11]. Moreover, smoke particles scatter and absorb incoming solar radiation and thereby affect the local climate [3,12]. Smoke plumes emitted from fires may travel hundreds or even thousands of kilometers horizontally and reach up to the stratosphere under certain atmospheric circulation conditions [13,14,15]. Thus, good knowledge of forest fire smoke is of critical importance to crisis management [16].
To understand the complex effects of smoke, smoke detection is one of the most important issues in current research. It has potential applications in air quality assessment, fire detection and fire behavior analysis [17], e.g., the detection of small and cool fires [18,19] and the deduction of the process of fire propagation [20]. Smoke detection based on remote sensing has been a research hotspot over the past several decades.
Satellite remote sensing provides global observations of the Earth’s surface. The Moderate Resolution Imaging Spectroradiometer (MODIS) sensor onboard the Aqua and Terra satellites has 36 spectral channels covering the visible to far-infrared bands and can obtain more abundant information for smoke detection than other sensors such as the Advanced Very High Resolution Radiometer (AVHRR). However, as a mixture of chemical particles, smoke has no stable spectral reflectance curve because the mixing level varies in different situations. A large overlap exists in the spectral signal of satellite sensor measurements between smoke and other cover types such as cloud, water and vegetation [21]. As a result, it is difficult to identify smoke accurately.
So far, several smoke detection methods have been developed. One of the most common approaches is to combine three bands of satellite data into either true- or false-color images by assigning them to the red, green and blue channels, respectively [22,23,24,25,26]. For example, MODIS RGB true-color composition images are generated from bands 1, 4 and 3. These images can then be used to visually separate the smoke aerosol from other cover types, such as vegetation, cloud and water. However, such color-based approaches only provide basic visual information about smoke and fail when used in automatic smoke detection procedures.
In order to develop an automatic and accurate smoke detection method, another effective approach, the multi-threshold method, which is based on the differences in physical properties between smoke and other cover types such as cloud, water and vegetation, was introduced. Generally speaking, the method employs a set of thresholds to automatically check all image pixels and filter out non-smoke pixels step by step [21,26,27,28,29,30]. Baum and Trepte [27] adopted a grouped threshold method for scene identification in NOAA/AVHRR imagery that may contain clouds, fire, smoke or snow. Randriambelo et al. [26] proposed improved multispectral methods for detecting fire smoke plumes in NOAA/AVHRR imagery by using multiple channels in the visible and thermal ranges over south-eastern Africa and Madagascar. Xie et al. [28,30] developed a multi-threshold method for smoke detection using eight MODIS spectral bands based on the spectral characteristics of different cover types. Zhao et al. [29] used MODIS images for smoke detection based on spectral and spatial threshold tests along with uniformity texture measures. Li et al. [21] also developed a multi-threshold method to detect smoke plumes in NOAA/AVHRR images based on channels 1, 2 and 4. Although the multi-threshold approach is effective for smoke detection, it is difficult to find fixed thresholds for images in different regions and seasons. Li et al. [21] also developed a neural network algorithm based on AVHRR images; however, it fails to identify thin dispersed smoke pixels in the downwind direction. Thus, Chrysoulakis et al. [16] proposed a multi-temporal change detection approach using two AVHRR images of the same target area at different times (one acquired during the fire event and the other before it). Anomalies in NDVI (Normalized Difference Vegetation Index) and infrared radiances were detected to identify the core of the plume, and the plume core was then enlarged to detect the complete smoke area. These AVHRR-based methods [16,21] may be affected by the limited spatial resolution and the lack of sufficient channel information [31]. In contrast, the MODIS sensor with 36 channels can provide more surface information for smoke detection.
The purpose of this article is to develop a new, flexible and automatic smoke plume identification algorithm using MODIS data. The algorithm, which is based on the method developed by Li et al. [21], has two major modifications: firstly, in the study of Li et al. [21], all the spectral bands of AVHRR were used as input vectors of a feed-forward neural network, whereas we perform a spectral analysis of the typical MODIS bands to determine the input vectors of the back-propagation neural network (BPNN); secondly, in the study of Li et al. [21], the training pixels were obtained from representative polygons containing smoke, cloud, land cover and water, whereas we apply the multi-threshold method to extract seasonal training data sets and then construct the BPNN to identify smoke plumes.
The article is organized as follows. In Section 2, data sources are introduced. In Section 3, the algorithm developed in this study is described, and different modules that constitute the algorithm are described in detail in the subsections. Application cases in different locations are presented and discussed in Section 4. The main advantages of the proposed algorithm are considered in the conclusions.

2. Data Source

Data from MODIS onboard the Terra or Aqua satellites were used in this study for the development and validation of the algorithm. The MODIS sensor is a 36-channel instrument covering the wavelength range from 0.4 to 14.2 μm [30,32,33]. Reflectance from the MODIS solar reflective channels and brightness temperature from the thermal channels, at 1 km resolution, are employed to distinguish smoke pixels from other cover types. It has been demonstrated that smoke plumes have strong reflectivity in the short-wavelength bands but become more transparent in the long-wavelength ones [30]. In other words, the short-wavelength bands are more sensitive to smoke than the long ones. This is because the average geometrical radii of smoke particles are small, which leads to less Mie scattering [30]. Meanwhile, the brightness temperature in the thermal infrared region, in conjunction with the reflectance at 1.38 μm [34,35], can be used to separate smoke plumes from clouds [30]. Therefore, bands 1–8 and band 26 in the visible and near-infrared region, band 20 at 3.7 μm, band 31 at 11 μm and band 32 at 12 μm were used to analyze and further select the feature vectors for the input layer of the BPNN. The characteristics of these spectral bands are listed in Table 1.
More specifically, the algorithm was developed using the MODIS data covering the Daxing’anling area, China (16 October 2004), Northeast Asia (29 April 2009) and the Ryazan Oblast region, Russia (29 July 2010). Moreover, the MODIS data of fires that occurred in Russia on 3 August 2012, Canada on 19 June 2013, Greece on 24 August 2007 and Australia on 30 September 2011 were used to validate the applicability of the algorithm.
All data were downloaded from the MODIS website [36] (post 2002) or the Satellite Remote Sensing Facilities for Fire Monitoring in our laboratory (located in Hefei, China; post 2009), including MODIS Level 1B Radiance product (MOD02/MYD02) and geolocation data set (MOD03/MYD03). Then these data were geometrically rectified and radiometrically calibrated to produce the reflectance or brightness temperature of the channels and the geographic information of the studied area.
Table 1. The properties of the MODIS bands used.
Band | Bandwidth * | Signal-to-Noise Ratio | Spatial Resolution | Primary Use
1 | 620~670 | 128 | 250 m | Land/Cloud/Aerosols Boundaries
2 | 841~876 | 201 | 250 m | Land/Cloud/Aerosols Boundaries
3 | 459~479 | 243 | 500 m | Land/Cloud/Aerosols Properties
4 | 545~565 | 228 | 500 m | Land/Cloud/Aerosols Properties
5 | 1230~1250 | 74 | 500 m | Land/Cloud/Aerosols Properties
6 | 1628~1652 | 275 | 500 m | Land/Cloud/Aerosols Properties
7 | 2105~2155 | 110 | 500 m | Land/Cloud/Aerosols Properties
8 | 405~420 | 880 | 1000 m | Ocean Color/Phytoplankton/Biogeochemistry
9 | 438~448 | 838 | 1000 m | Ocean Color/Phytoplankton/Biogeochemistry
19 | 915~965 | 250 | 1000 m | Atmospheric Water Vapor
20 | 3.66~3.84 | 0.05 | 1000 m | Surface/Cloud Temperature
26 | 1.36~1.39 | 150 | 1000 m | Cirrus Clouds Water Vapor
31 | 10.78~11.28 | 0.05 | 1000 m | Surface/Cloud Temperature
32 | 11.77~12.27 | 0.05 | 1000 m | Surface/Cloud Temperature
Note: * The units of bands 1–19 in this table are nm; those of bands 20, 26, 31 and 32 are μm.

3. Algorithm

In order to distinguish smoke plumes from other cover types more accurately and to identify thin dispersed smoke, the integrated detection algorithm based on the multi-threshold method [18,28] and the BPNN [37] consists of the following steps:
(1) Extraction of training samples: seasonal training samples of different cover types are extracted by the multi-threshold method;
(2) Spectral analysis for selecting feature vectors: spectral analysis of different cover types and selection of feature vectors for the input layer of the BPNN;
(3) Training of the BPNN and elimination of noise pixels.
The flowchart of the proposed algorithm is presented in Figure 1. These steps are described below in detail.
Figure 1. Flowchart of the proposed algorithm for smoke detection.

3.1. Extraction of Training Samples

Sample pixels of smoke, cloud, water and vegetation contain enough information for separating smoke plumes from other cover types. In this study, we extracted sample pixels with the multi-threshold method and then built the training sample sets for the BPNN. In addition, the true-color composition RGB image generated from MODIS bands 1, 4 and 3 was used to further confirm the validity of the extracted samples.

3.1.1. Multi-Thresholds for Various Cover Types

The multi-thresholds adopted in this study were modified based on the smoke detection algorithms developed by Xie et al. [28] and Wang et al. [18]. The following criteria were used to identify smoke pixels:
0.4 ≤ (R8 − R19)/(R8 + R19) ≤ 0.85   (1)
(R9 − R7)/(R9 + R7) ≥ 0.3   (2)
(R8 − R3)/(R8 + R3) ≤ 0.09   (3)
R8 ≥ 0.09   (4)
where Ri refers to the reflectance of band i.
The cloud detection approach was modified based on the technique used in the International Geosphere Biosphere Program (IGBP) AVHRR-derived Global Fire Product [38]. Even though there are different kinds of clouds in the atmosphere, in this study we treat clouds as a unified entity (large and cool clouds). The following conditions have been proven adequate for identifying large and cool clouds [18,39]. Thus, a pixel that satisfies the following criteria was considered as cloud:
(R1 + R2 > 0.9) or (T32 < 265 K) or (R1 + R2 > 0.7 and T32 < 285 K)   (5)
where R1 and R2 are the reflectance of bands 1 and 2, respectively, whereas T32 is the brightness temperature of band 32 at 12 μm.
A simple test based on the reflectance of band 2 (R2) and band 7 (R7), in conjunction with the Normalized Difference Vegetation Index (NDVI) [39,40], was used to identify water pixels:
R2 < 0.15 and R7 < 0.05 and NDVI = (R2 − R1)/(R2 + R1) < 0   (6)
NDVI is widely used for assessing vegetation cover and condition [41,42,43,44,45]. In general, a pixel is considered a vegetation pixel if its NDVI value is greater than 0.3. The criterion [46] is shown below:
NDVI = (R2 − R1)/(R2 + R1) > 0.3   (7)
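For illustration, the following Python sketch shows how the criteria of Equations (1)–(7) could be applied to calibrated MODIS arrays to build candidate masks for the four cover types. It is a minimal sketch under stated assumptions: the function and variable names are illustrative, the reflectance and brightness-temperature arrays are assumed to be already calibrated and co-registered, and it is not the authors' code.

```python
import numpy as np

def extract_training_masks(R, T32):
    """Boolean candidate masks for smoke, cloud, water and vegetation.

    R   : dict mapping MODIS band number -> 2-D reflectance array
    T32 : 2-D brightness temperature of band 32 (K)
    """
    def ndi(a, b):  # normalized difference index
        return (a - b) / (a + b + 1e-12)

    # Smoke candidates, Equations (1)-(4)
    smoke = ((ndi(R[8], R[19]) >= 0.4) & (ndi(R[8], R[19]) <= 0.85) &
             (ndi(R[9], R[7]) >= 0.3) &
             (ndi(R[8], R[3]) <= 0.09) &
             (R[8] >= 0.09))

    # Large, cool clouds, Equation (5)
    cloud = ((R[1] + R[2] > 0.9) | (T32 < 265.0) |
             ((R[1] + R[2] > 0.7) & (T32 < 285.0)))

    # Water, Equation (6)
    ndvi = ndi(R[2], R[1])
    water = (R[2] < 0.15) & (R[7] < 0.05) & (ndvi < 0)

    # Vegetation, Equation (7)
    vegetation = ndvi > 0.3

    return smoke, cloud, water, vegetation
```

Pixels selected by these masks, after visual confirmation against the true-color composition image, would form the seasonal training sample sets described in the next subsection.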

3.1.2. Seasonal Training Sample Set

Due to the differences in spectral characteristics between seasons, seasonal training sample sets were formed to identify smoke plumes. Thus, three cases were used to extract samples with the multi-threshold method detailed above: the first case, shown in Figure 2a, covers the plumes emitted from major fires in the Daxing’anling area, China in autumn (16 October 2004); the second case, shown in Figure 2b, covers the plumes emitted from fires in northeastern Asia in spring (29 April 2009); the third case, shown in Figure 2c, covers the plumes emitted from fires in Russia in summer (29 July 2010). In general, winter temperatures are so low that fires rarely occur during that period.
Figure 2. True-color composition RGB images of the three cases used to extract seasonal samples: (a) Smoke plumes emitted from a fire in the Daxing’anling area, China in autumn, (b) Smoke plumes emitted from major fires in northeastern Asia in spring, (c) Smoke plumes emitted from fires in Russia in summer.

3.2. Spectral Analysis for Selecting Feature Vectors

In order to select feature vectors for the integrated algorithm, spectral analysis of different MODIS bands is conducted in this study. Due to the similar characteristics of smoke and cloud, the spectral analysis consists of the following two sub-steps:
(1) Spectral analysis of potential smoke plumes and underlying surface;
(2) Spectral analysis of smoke and cloud.

3.2.1. Spectral Analysis of Potential Smoke Plumes and Underlying Surface

Motivated by the study of Xie [30], we selected eight bands (bands 1–8) located in the visible and near-infrared regions of MODIS. In the analysis we focused on four typical cover types, i.e., smoke, cloud, water and vegetation, in the study areas. Bare soil can be masked by the criteria mentioned in Xie’s study [28].
Five hundred samples of each cover type were extracted based on the multi-threshold method and visual inspection of the true-color composition image acquired over northeastern Asia on 28 June 2010. The smoke and cloud samples are shown in Figure 3. Response curves of the four cover types in bands 1–8 are presented in Figure 4. It can be seen that: (1) potential smoke plumes, which consist of true smoke pixels and cloud pixels, have higher reflectivity than underlying surface pixels (water and vegetation); (2) it appears possible to differentiate potential smoke plumes from vegetation and water based on the difference in reflectance in bands 3 and 8, the two shortest-wavelength bands of MODIS, because they show high reflectivity and little overlap between smoke pixels and underlying pixels; (3) the mean reflectance of water in band 7 is the lowest, and band 7 can also penetrate the smoke layer with relatively small influence [28]. Overall, it is possible to separate potential smoke pixels from the underlying surface by using bands 3, 8 and 7.
Figure 3. Samples used for spectral analysis were extracted in this area during the forest fire that occurred on 28 June 2010: (a) True-color composition RGB image acquired over the Daxing’anling area, China on 28 June 2010, (b) The extracted smoke samples are shown in the bright white area, (c) The extracted cloud samples are marked in bright white.
To demonstrate this quantitatively, the normalized distance (D) was introduced to analyze the degree of separation between two classes [47]. The distance D is given by the following formula:
D = |μ1 − μ2| / (σ1 + σ2)   (8)
where μ1 and μ2 are the mean values of class 1 and class 2, while σ1 and σ2 are the corresponding standard deviations.
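As a minimal sketch, Equation (8) can be evaluated band by band from two extracted sample sets as follows; the sample arrays are assumed to be shaped (n_samples, n_bands), which is an illustrative convention rather than the authors' implementation.

```python
import numpy as np

def normalized_distance(class1, class2):
    """Per-band normalized distance D of Equation (8).

    class1, class2 : arrays of shape (n_samples, n_bands) holding the
    reflectance of the extracted samples of two cover types.
    """
    mu1, mu2 = class1.mean(axis=0), class2.mean(axis=0)
    s1, s2 = class1.std(axis=0), class2.std(axis=0)
    return np.abs(mu1 - mu2) / (s1 + s2)

# Example: D between 500 smoke and 500 cloud samples in bands 1-8
# d = normalized_distance(smoke_samples, cloud_samples)   # shape (8,)
```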
Figure 4. Response curves of the four cover types.
The larger D is, the better the discrimination between class 1 and class 2. The D values of three pairs, namely smoke vs. cloud, smoke vs. water and smoke vs. vegetation, were computed and are shown in Figure 5. The figure indicates the degree of separation between smoke and the other cover types. It can be seen that all bands have good discrimination power between smoke and water, but bands 3 and 8 are better for discriminating smoke from cloud and vegetation. The normalized distances D between smoke and cloud in bands 3 and 8 are 1.980 and 1.973, respectively, which are larger than those in the other bands. In addition, the D values between smoke and vegetation in bands 3 and 8 are 0.732 and 1.670, which indicates better discrimination power than in the other bands.
Thus, in this study we selected the reflectance of bands 3, 7 and 8 for separating potential smoke plumes from other cover types.

3.2.2. Spectral Analysis of Smoke and Cloud

It is difficult to separate smoke plumes from clouds because the two share similar characteristics (high reflectivity). However, clouds have a very low brightness temperature in long-wavelength infrared bands [48]. Since the long wavelengths are insensitive to smoke, the brightness temperature changes only slightly over smoke areas [30]. Therefore, brightness temperature (BT) and brightness temperature difference (BTD) can be used to distinguish smoke from cloud. In this study, the BT of band 31 with the center wavelength of 11 μm, denoted BT11, and the BTD between band 20 at 3.7 μm and band 32 at 12 μm, denoted BTD (3.7–12), were used [47]. In addition, the reflectance of band 26 with the center wavelength near 1.38 μm, denoted R26, was also used. This band was designed for high cirrus cloud detection [34,35] and is sensitive to thin cirrus [49].
Figure 6 shows the three-dimensional (3D) scatter plot of the selected smoke and cloud samples in BT11, BTD (3.7–12) and R26. It indicates that smoke can be clearly distinguished from cloud by employing these three features. Only a very small part of the cloud may be misclassified as smoke due to similar spectral characteristics. Therefore, BT11, BTD (3.7–12) and R26 can be used as feature vectors for the input layer of the neural network to distinguish smoke from cloud.
Figure 5. Normalized distances between smoke and cloud, smoke and water, smoke and vegetation in reflectance of MODIS bands 1–8.
Figure 6. 3D scatter plot between smoke and cloud in BT11, BTD (3.7–12) and R26.
As a result, R3, R8, R7, BT11, BTD (3.7–12) and R26 were selected as the six feature vectors of the input layer of BPNN to identify the smoke plumes.
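As a small illustration of how these inputs might be assembled per pixel, the sketch below stacks the six features; the array names are assumptions, and BTD (3.7–12) is formed as the difference between the band 20 and band 32 brightness temperatures as described above.

```python
import numpy as np

def build_feature_vectors(R3, R7, R8, R26, BT20, BT31, BT32):
    """Stack the six BPNN input features for every pixel.

    R*  : 2-D reflectance arrays; BT* : 2-D brightness temperatures (K).
    Returns an array of shape (n_pixels, 6) ordered as
    [R3, R8, R7, BT11, BTD(3.7-12), R26].
    """
    btd = BT20 - BT32                          # BTD(3.7-12)
    feats = np.stack([R3, R8, R7, BT31, btd, R26], axis=-1)
    return feats.reshape(-1, 6)
```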

3.3. Training of BPNN and Elimination of Noise Pixels

The artificial neural network (ANN) has been widely used for pattern recognition [37,50,51,52] and has been demonstrated to be suitable for smoke identification [21] and analysis [53] due to its ability to find and learn complex linear and nonlinear relationships in the radiometric data between smoke, clouds and the underlying surface.
In this study, we designed the BPNN shown in Figure 7 to distinguish smoke plumes from other cover types. The network has one input layer with the six feature vectors discussed in Section 3.2, one hidden layer with twenty processing elements and one output layer with one neuron. The final value of the output layer is used to determine the cover type of a pixel. Each hidden and output neuron obtains its value from a series of computations: each input is multiplied by a corresponding weight and the products are summed; the sum is then transformed by an activation function and passed to the next layer. Different layers are connected by randomly initialized weights that are updated by the error back-propagation algorithm [54,55]. Two kinds of activation functions were used in this neural network: the logarithmic sigmoid transfer function was used in the hidden layer, while the linear function was used in the output layer. The linear function yields a value between negative and positive infinity for determining the cover type of the pixel. Thus, a pixel is considered as smoke if it satisfies the condition given by the following computation:
yj = f(Σi ωji xi − θj)   (9)
ok = f(Σj ωkj yj − θk)   (10)
ok > 0.5   (11)
where xi is the input feature vector selected in Section 3.2, yj is the value computed from the input vectors and the connected weights of the hidden layer, and ok is the output computed from yj and the connected weights of the output layer. ω and θ are the connected weights and thresholds between layers, respectively; they are produced by random initialization and then updated by the error back-propagation (BP) algorithm during the training process.
We investigated four different optimization learning algorithms to test the previously described BPNN: (1) the Levenberg-Marquardt (LM) back-propagation optimization algorithm [56,57]; (2) resilient back-propagation; (3) scaled conjugate-gradient back-propagation [58] and (4) gradient descent back-propagation. The best results were obtained with the gradient descent method applied to a one-hidden-layer BPNN. Thus, in our study, we chose gradient descent as the optimization learning algorithm.
During the training process, the training data sets were selected from MODIS images containing forest fires in three different seasons, as indicated in Section 3.1. The input feature vectors of the BPNN included the reflectance of bands 3, 8, 7 and 26, the brightness temperature of band 31 at 11 μm, and the combination of the 3.7 μm and 12 μm bands, i.e., BTD (3.7–12). Training pixels containing smoke, cloud, vegetation and water were obtained by applying the multi-threshold method. The output value was generated by the BPNN and then compared with the desired response of the cover types shown in Table 2. The connected weights and thresholds between layers were then modified by gradient descent in order to minimize the error function (Mean Squared Error, MSE), which is calculated by Equation (12). Thus, we could obtain the final connected weights and thresholds for identifying smoke plumes.
MSE = (1/N) Σ (ti − oi)²   (12)
where N is the number of output vectors, ti are the target values and oi are the output values, with the sum running over i = 1, …, N.
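The following numpy sketch illustrates a 6-20-1 network of this kind, with a logarithmic sigmoid hidden layer, a linear output neuron and batch gradient descent on the MSE of Equation (12). It is a simplified illustration rather than the authors' implementation: the biases play the role of the negative thresholds (b = −θ), and the learning rate, initialization and class names are arbitrary choices.

```python
import numpy as np

class SmokeBPNN:
    """Minimal 6-20-1 back-propagation network (illustrative sketch)."""

    def __init__(self, n_in=6, n_hidden=20, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)            # b1 = -theta_hidden
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, 1))
        self.b2 = np.zeros(1)                   # b2 = -theta_output

    @staticmethod
    def _logsig(z):                             # logarithmic sigmoid
        return 1.0 / (1.0 + np.exp(-z))

    def forward(self, X):
        self.h = self._logsig(X @ self.W1 + self.b1)   # Equation (9)
        return self.h @ self.W2 + self.b2              # Equation (10), linear

    def train(self, X, t, lr=0.01, epochs=8000):
        t = t.reshape(-1, 1)                    # targets from Table 2: 1, 0, -1
        n = len(X)
        for _ in range(epochs):
            o = self.forward(X)
            err = o - t                         # gradient of MSE up to a constant
            dW2 = self.h.T @ err / n
            db2 = err.mean(axis=0)
            dh = (err @ self.W2.T) * self.h * (1.0 - self.h)
            dW1 = X.T @ dh / n
            db1 = dh.mean(axis=0)
            self.W2 -= lr * dW2; self.b2 -= lr * db2
            self.W1 -= lr * dW1; self.b1 -= lr * db1

    def classify_smoke(self, X):
        return self.forward(X).ravel() > 0.5    # Equation (11)
```

In this sketch, the rows of X would be the six-feature vectors of the training pixels, t would hold the desired responses of Table 2, and pixels whose output exceeds 0.5 would be flagged as smoke.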
Figure 7. The structure of the BP neural network designed for identifying smoke plumes. S means smoke, C represents cloud, whereas W/V is underlying surface.
Table 2. Desired response of the cover types.
Cover Types | Smoke | Underlying Surface | Cloud
Desired Output | 1 | 0 | −1
The described BPNN consists of one input layer with six feature vectors (R3, R8, R7, BT11, BTD (3.7–12) and R26), one hidden layer with twenty neurons and one output layer with one neuron, which is used to determine the cover type of a pixel. The number of hidden neurons was determined by varying it from 10 to 40; the best result was obtained with 20 neurons. The logarithmic sigmoid transfer function and the linear function were used as activation functions in the hidden and output layers, respectively. The gradient descent method was used to update the connected weights and thresholds. The BPNN was trained with 4000 samples, of which 50% were used as training samples and the others as validation samples. The evolution of the network performance with epoch is shown in Figure 8. The training and validation MSE curves become almost horizontal after about 7300 epochs, and the best validation MSE of 0.0254 is reached at epoch 8000. Thus, 8000 iterations were adopted in this classification model.
Figure 8. The evolution of BPNN performance with epoch.
Finally, an additional test is carried out to further filter out noisy pixels. A median filter is adopted to remove small clusters of noisy pixels [21], and discrete single pixels are considered as noise and eliminated because of the spatial continuity of smoke [28,30]. The proposed algorithm described in this section was applied to the training-set cases to verify its availability, applicability and potential in smoke detection, as presented in Section 4.2. In order to check the robustness of the algorithm’s performance and its applicability to a broader area, it was also applied to other cases in different locations, as presented in Section 4.3.
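As an illustration of the noise-removal step, a 3 × 3 median filter over the binary smoke mask suppresses isolated single pixels; scipy.ndimage is used here only as one possible implementation, not necessarily the one adopted by the authors.

```python
import numpy as np
from scipy.ndimage import median_filter

def clean_smoke_mask(smoke_mask, size=3):
    """Remove small clusters of noisy pixels from a binary smoke mask,
    exploiting the spatial continuity of smoke plumes (window size is a choice)."""
    return median_filter(smoke_mask.astype(np.uint8), size=size).astype(bool)
```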

4. Results and Discussion

4.1. Accuracy Evaluation of the Algorithm

In order to evaluate the performance of the algorithm, we used the MODIS data containing active forest fires in northeastern Asia on 29 April 2009 as an example; the scene is shown in Figure 2b and was acquired at 04:20 UTC. The total number of pixels used for training and testing the BPNN was more than 3000; 67% of the pixels were randomly selected from each class and used for training the BPNN, while the remaining pixels served as test samples. The error matrix [19], which describes the probabilities of each cover type being correctly identified (diagonal elements) and misclassified into other categories (off-diagonal elements), is presented in Table 3. The reference data given in the columns represent the real number of pixels belonging to each category, that is, the column sums are considered as truth, while the numbers in the rows are the output results obtained by the BPNN. The matrix can be used to compute the overall accuracy, omission error, commission error and Kappa coefficient included in the table. The overall accuracy was computed as the sum of the diagonal elements divided by the total number of test pixels. The omission error is the number of pixels of one type misclassified into other types divided by the total number of pixels of that type, while the commission error is the number of pixels misclassified as one type divided by the total number classified as that type. The Kappa coefficient is widely accepted in the field of content analysis and is commonly used for accuracy assessment in remote sensing. The larger the difference between the two results, the smaller the Kappa coefficient; it is interpretable and well suited to validating the accuracy of classification results. The Kappa coefficient K is computed by the following formula:
K = (N Σ xii − Σ xi+ x+i) / (N² − Σ xi+ x+i)   (13)
where the sums run over i = 1, …, r; r is the number of columns (i.e., the number of cover types), xii is the diagonal element of the error matrix (the number of correctly classified pixels), xi+ and x+i are the sums of the ith row and the ith column, respectively, and N is the total number of pixels used for accuracy evaluation.
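As a check, the overall accuracy and the Kappa coefficient of Equation (13) can be computed directly from the error matrix; the short sketch below, applied to the matrix of Table 3, approximately reproduces the values reported below.

```python
import numpy as np

def accuracy_and_kappa(error_matrix):
    """Overall accuracy and Kappa coefficient (Equation (13)) from an r x r
    error matrix whose rows are BPNN outputs and columns are reference data."""
    m = np.asarray(error_matrix, dtype=float)
    N = m.sum()
    diag = np.trace(m)
    chance = (m.sum(axis=1) * m.sum(axis=0)).sum()   # sum of x_i+ * x_+i
    overall = diag / N
    kappa = (N * diag - chance) / (N ** 2 - chance)
    return overall, kappa

# Error matrix of Table 3 -> roughly 0.976 and 0.963
# overall, kappa = accuracy_and_kappa([[296, 18, 0],
#                                      [5, 521, 4],
#                                      [0, 0, 296]])
```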
It can be seen that the overall accuracy is about 97.63%, and the omission and commission errors of smoke are about 1.66% and 5.73%, respectively, which are relatively low, although the commission error is a bit higher than those of the underlying surface and cloud. This may be due to the absence of real ground-truth information, so this accuracy contains uncertainties. In addition, the Kappa coefficient of the algorithm is about 96.29%, which means that the output results are in high agreement with the desired response.
Table 3. The error matrix and classification results of the algorithm.
Cover Types | Smoke | Underlying Surface | Cloud | Omission Error | Commission Error
Smoke | 296 | 18 | 0 | 1.66% | 5.73%
Underlying Surface | 5 | 521 | 4 | 3.34% | 1.70%
Cloud | 0 | 0 | 296 | 1.35% | 0
Overall Accuracy | 97.63%
Kappa Coefficient | 96.29%
Meanwhile, in order to demonstrate the importance of constructing seasonal detection models, the summer training data extracted from Russia on 29 July 2010 were used as training samples to test the spring data extracted from northeastern Asia on 29 April 2009. The error matrix of this case is shown in Table 4, and the decrease of accuracy and the increase of error are shown in Figure 9. The overall accuracy and Kappa coefficient are about 58.23% and 36.92%, i.e., they decrease by 39.4% and 59.73% compared with the case in which spring training samples are used to test spring samples. In addition, the omission error and commission error increase by about 8.42% and 23.05%. From this point of view, season can be considered an important factor in constructing smoke detection models.
Table 4. The error matrix and classification results obtained by using summer samples to test spring samples.
Cover Types | Smoke | Underlying Surface | Cloud | Omission Error | Commission Error
Smoke | 1178 | 228 | 248 | 10.08% | 28.78%
Underlying Surface | 132 | 1151 | 1062 | – | –
Cloud | 0 | 1 | 0 | – | –
Overall Accuracy | 58.23%
Kappa Coefficient | 36.92%
Figure 9. The decrease of accuracy and the increase of error when comparing the two tests (in one, the training and testing samples are extracted in the same season; in the other, the training samples are extracted in summer while the testing samples are extracted in spring).

4.2. Seasonal Applicability of the Algorithm

In order to test the applicability of the algorithm in different seasons, we verified it with the MODIS data of three fire-affected areas: the first is the Daxing’anling area (China) shown in Figure 10a, where the fire occurred on 16 October 2004 in autumn; the second, shown in Figure 10b, is northeastern Asia, including the Daxing’anling area (China) and the Amur region (Russia), where the fires occurred on 29 April 2009 in spring; the third, shown in Figure 10c, is the Ryazan region (Russia), where the fire occurred on 29 July 2010 in summer. The potential smoke plumes, which consist of both smoke plumes and clouds, are depicted in white in the true-color images (Figure 10a–c). The detected smoke plumes after eliminating clouds are marked in red in Figure 10d, Figure 10f and Figure 10h. Figure 10e, Figure 10g and Figure 10i show the results for the rectangle areas in Figure 10a–c, respectively.
Comparing the detected results with the true-color images in Figure 10, it can be seen that not only the major plumes but also most of the dispersed and thin smoke areas were successfully detected. Moreover, the cloud pixels marked in blue (Figure 10f–i) and the smoke pixels marked in red (Figure 10d–i) are clearly discriminated from each other. However, a small part of the thin smoke in the downwind direction was not identified, as can be seen by comparing Figure 10h,i with Figure 10c. This is because of the similar spectral characteristics and overlap between thin smoke pixels and the background. Smoke and low-level clouds also have similar properties, so some misclassifications exist (Figure 10f–h). However, the undetected and incorrectly detected smoke pixels are far less numerous than those successfully detected. Thus, the improved algorithm can be considered reliable and used to detect smoke plumes with the low error analyzed in Section 4.1. As a result, it could provide more abundant information for fire detection and analysis.
Figure 10. (a) True-color composition RGB image acquired over Daxing’anling area, China on 16 October 2004, (b) True-color composition RGB image acquired over Daxing’anling area, China and Amur region, Russia on 29 April 2009, (c) True-color composition RGB image acquired over Ryazan region, Russia on 29 July 2010, Panels (d) and (e) are the results of panel (a) and the rectangle area in (a) by using the algorithm, Panels (f) and (g) are the results of panel (b) and the rectangle area in (b), Panels (h) and (i) are the results of panel (c) and the rectangle area in (c).

4.3. Robustness of the Algorithm

To check the robustness of its performance, the algorithm was applied to MODIS images acquired during several past forest fire events in different locations, and it produced relatively satisfactory results in all cases. Four application cases are presented in this study: in the first case, the plumes emitted from major fires in the eastern Saha Region, Russia (3 August 2012) were detected; in the second case, the major plume emitted from a large, complex fire burning in Quebec, Canada (19 June 2013) was identified; in the third case, the plume caused by a forest fire in Greece (24 August 2007) was detected; and in the fourth case, the plumes generated by several wildfires around Alice Springs in Australia (30 September 2011) were identified.
Figure 11. Smoke plumes detection by using the improved algorithm in Russia on 3 August 2012 (summer): (a) True-color composition RGB image of MODIS bands 1, 4 and 3 covering the detected area, (b) The rectangle area shown in panel (a), (c) The detected result of panel (a) and panel (d) is the result of rectangle area. The smoke plumes are depicted in red color.
Figure 11a shows the MODIS image acquired during the forest fire in the Saha Region (Russia) on 3 August 2012 at 03:30 UTC. Several large plumes had formed along with hot spots and are depicted in dark white tones. The proposed integrated algorithm based on the multi-threshold method and BP neural network classification was applied to detect the smoke plumes, with the training samples for this case extracted in summer from Figure 2c. The detected result for Figure 11a is shown in Figure 11c. It can be seen that the outputs of the smoke pixels are larger than 0.5 and are depicted in red. The main plumes and even the thin smoke pixels were well discriminated from other cover types, especially the clouds, which are marked in blue in Figure 11c. Figure 11d is the detected result for Figure 11b, the rectangle area in Figure 11a. As seen from Figure 11d, the algorithm successfully discriminated the smoke plumes, including the thin smoke area shown in Figure 11b. In addition, the smoke plumes are clearly different from the clouds marked in blue in the detected results. In this case, the application of the proposed algorithm produced relatively satisfactory results (the major smoke plumes, the thin smoke area and the clouds were well identified). Thus, the proposed algorithm has the potential to distinguish smoke plumes from other cover types and thereby provide signals for fire detection and fire behavior analysis.
Figure 12. Smoke plumes detection in western Quebec, Canada on 19 June 2013 (spring): (a) True-color composition RGB image of MODIS bands 1, 4 and 3 covering the detected area, (b) The rectangle area shown in panel (a), (c) The detected result of panel (a) and panel (d) is the result of rectangle area, the smoke plumes are depicted in red color.
Figure 12a shows the true-color composition RGB image of MODIS bands 1, 4 and 3 acquired during a large, complex forest fire burning in Quebec (Canada) on 19 June 2013 at 18:15 UTC. A huge plume had also formed in this case and is depicted in dark white tones. The detected plume, obtained with the proposed algorithm using the training samples extracted from Figure 2b, is shown in red in Figure 12c. As seen from this image, the main part of the emitted plume could be identified; however, a small part of the plume in the downwind direction could not be detected, and some misclassification occurred over the ocean in the west of Figure 12c. Figure 12d is the detected result for Figure 12b. It further indicates that there is a significant difference between smoke and cloud, even though a small part of the cloud pixels were misclassified as smoke pixels in the east of the image. The missed detections and misclassifications may be caused by the following reasons: (1) similar spectral characteristics and large overlap between smoke and other cover types; (2) geographic differences between the area of the training samples and the detected area, which lead to different ecology. However, the main plume can be detected to provide a fire signal by applying the proposed algorithm, although the hot spots are covered by high-density smoke and cloud interference. Thus, in this case, the proposed algorithm is considered reliable for smoke detection although a small number of missed detections and misclassifications exist.
Figure 13a shows the MODIS true-color composition RGB image generated by assigning red, green and blue to bands 1, 4 and 3, respectively. It was acquired during a forest fire in Greece on 24 August 2007 at 09:45 UTC. A huge plume was dispersed from the land to the ocean. The algorithm was applied to this MODIS image by using the training samples extracted from Figure 2c in summer. The detected plume, whose outputs are larger than 0.5, is shown in red in Figure 13c, whereas the detected result for the rectangle area in Figure 13a is shown in Figure 13d. As can be seen from Figure 13c and Figure 13d, the algorithm detected the part of the plume located over land; however, the dispersed, lower-density part of the plume could not be identified successfully. This might be due to the existence of mixed pixels of smoke over the ocean. Nevertheless, the outputs of the mixed pixels are higher than those of pure water pixels, and as a fire signal, the detected part of the plume over land is sufficient to assist fire detection.
Figure 13. Smoke plumes detection by using the algorithm in Greece on 24 August 2007 (summer): (a) True-color composition RGB image of MODIS bands 1, 4 and 3 covering the detected area, (b) The rectangle area shown in panel (a), (c) The detected result of panel (a) and panel (d) is the result of rectangle area, the smoke plumes are depicted in red color.
Figure 14a shows the MODIS true-color composition RGB image of bands 1, 4 and 3. It was acquired during several wildfires that occurred near Alice Springs in Australia on 30 September 2011 at 05:05 UTC. Several large plumes had formed in this case and are depicted in grey tones. The proposed algorithm was applied to identify the smoke plumes emitted from these wildfires based on the seasonal training sample sets extracted from Figure 2b in spring. The detected smoke pixels, whose outputs are larger than 0.5, are marked in red. The detected result for the rectangle area in Figure 14a is shown in Figure 14d. Comparing Figure 14a with Figure 14c and Figure 14b with Figure 14d, it can be seen that the algorithm discriminated the major smoke plumes. However, in this case, the dispersed and thin smoke areas could not be successfully detected. This may be due to the geographic differences between the area of the training samples and the detected area, which have different ecology, as well as the similar characteristics and large overlap between smoke pixels and the underlying surface. Nevertheless, because the major smoke plumes can be efficiently identified, the proposed algorithm is considered reliable for smoke detection and can provide fire signals for further fire management, even though part of the thin smoke area in the downwind direction is missed.
Figure 14. Smoke plumes detection by using the algorithm around the Alice Spring (Australia) on 30 September 2011 (spring): (a) True-color composition RGB image of MODIS bands 1, 4 and 3 covering the detected area, (b) The rectangle area shown in panel (a), (c) The detected result of panel (a) and panel (d) is the result of rectangle area, the smoke plumes are depicted in red color.
On the one hand, the concentration of smoke changes as it disperses into the atmosphere, which leads to overlap in spectral characteristics between smoke and other cover types; that is to say, smoke pixels with low density have spectral characteristics different from those of the detected ones. On the other hand, the geographic differences between the areas of the seasonal training sample sets and the detected areas, which have different ecology, lead to different spectral characteristics of the underlying surface. Thus, misclassifications and missed detections are inevitable when applying the proposed algorithm. However, the previous results indicate that the major smoke plumes, and even some of the dispersed and thin smoke areas, can be efficiently identified by the BPNN classification method, as confirmed by comparison with the true-color composition MODIS images.
In fact, as smoke plumes are a product of the early stage of a forest fire, information regarding their presence and evolution could clearly help fire detection, and the cases presented above support this idea. The basic principle of fire detection is that fires increase the response of the brightness temperature bands compared with the background temperature. However, the following situations encountered in practice limit the possibility of fire detection: (1) the presence of smoke and cloud may hide the ground fire information from the satellite sensors; (2) fire spots cannot be found under a thick canopy; (3) the fire can be masked by a hot background during summer afternoons; (4) some hot ground regions can easily be misclassified as fires [16]. Therefore, smoke plumes can be used as a signal for fire detection and fire behavior analysis even without associated hotspots. For instance, the forest fire burning in Quebec (Canada) shown in Figure 12a was masked by hot dense smoke. Thus, smoke detection using the algorithm proposed in this study could help to refine the results of fire detection.

4.4. Results of the Multi-Threshold Method

As mentioned in the introduction, previous studies have demonstrated that the multi-threshold method is simple and practical [18,21,26,27,28,29]. However, it has some limitations; for example, the thresholds need to be adjusted to account for geographic differences, and a single group of thresholds cannot produce relatively accurate smoke identification. The multi-threshold method [18,28] was therefore also applied to the previous cases. The results are shown in Figure 15, and the thresholds adopted in these scenes are listed in Table 5.
It can be seen that many smoke pixels are missed and many non-smoke pixels are misclassified in these detected results. Figure 15a shows that a large number of smoke pixels were not identified when using the group of thresholds given in the second column of Table 5, whereas Figure 15b shows that a number of non-smoke pixels were misclassified as smoke and some parts of the smoke plumes could not be detected with the group of thresholds given in the third column of Table 5. Figure 15c shows that a large number of non-smoke pixels were detected as smoke even though the major plume was successfully detected, while Figure 15d not only contains misclassifications but also fails to detect the smoke plume. Comparing Figure 15e with Figure 15f, different parts of the smoke plumes were detected with different thresholds. These results indicate that the thresholds need to be changed from scene to scene in order to obtain relatively satisfactory results. However, adjusting the thresholds requires experience and prior knowledge of the different locations, so it is difficult for researchers outside a specific study area to identify smoke plumes using multi-thresholds. In this study, we improved the self-adapted classification method to accomplish accurate and effective smoke detection over a relatively broad area, even globally. Comparing Figure 15 with the previous results, the proposed algorithm can capture most of the smoke pixels (including the major plumes and some of the diffuse areas of the plumes) and produces more satisfactory results than the multi-threshold method. Thus, we consider the applicability of the proposed algorithm to be better than that of the multi-threshold method.
Figure 15. Smoke plumes detection by using the multi-threshold method in different locations. Panel (a) and Panel (b) are the results of two groups of thresholds in Russia on 3 August 2012. Panel (c) and Panel (d) are the results of two groups of thresholds in Quebec, Canada on 19 June 2013. Panel (e) and Panel (f) are the results of two groups of thresholds in Greece on 24 August 2007.
Table 5. The thresholds adopted in Figure 15.
Equations | Figure 15 a,c,e | Figure 15 b,d,f
(R8 − R19)/(R8 + R19) | [0.15, 0.5] | [0.4, 0.85]
(R9 − R7)/(R9 + R7) | [0.3, +∞) | [0.3, +∞)
(R8 − R3)/(R8 + R3) | (−∞, 0.09] | (−∞, 0.09]
R8 | [0.09, +∞) | [0.09, +∞)

5. Conclusions

It is difficult to detect small forest fires or those under a thick canopy by satellite remote sensing. As smoke plumes are concomitant with forest fires, smoke detection can be used as an indicative signal to provide valuable information and help in refining fire detection. The MODIS sensor makes global observations with 36 bands, covering the wavelength range from 0.4 to 14.2 μm. Thus, it can provide more abundant information for smoke detection.
The algorithm proposed in this study was developed using MODIS data. It combines the multi-threshold method and BPNN classification to carry out efficient smoke detection based on the analysis of the spectral characteristics of smoke and other cover types. The algorithm contains the following parts: (1) pre-processing the MODIS data and converting band information to reflectance or brightness temperature; (2) extracting seasonal training samples of smoke, cloud and underlying surface by the multi-threshold method; (3) analyzing the spectral characteristics of smoke and other cover types to select the input feature vectors of the BPNN; (4) training the BPNN to separate smoke plumes from other types in different seasons based on the seasonal training sample sets and eliminating interference noise. The trained BPNN can then be used to identify smoke. The performance of the algorithm was evaluated using the error matrix to calculate the overall accuracy and Kappa coefficient, in conjunction with visual inspection of true-color composition RGB images. The algorithm was validated with MODIS data of several forest fires that occurred in different places and on different dates.
The results of all cases are relatively satisfactory when compared with true-color composition images. The algorithm can capture both thick smoke plumes and dispersed thin plumes over land. It can also be noticed that the outputs of smoke plumes over the ocean differ significantly from those over land or over pure water pixels due to the existence of mixed pixels. Only a small part of the plumes is missed, and misclassification is inevitable due to the similar spectral characteristics of smoke and other cover types and the geographic differences between the areas of the seasonal sample sets and the detected areas. The main advantage of the proposed algorithm is that it can be used to detect smoke plumes in different seasons using the seasonal training data sets. Moreover, it provides quantitative and continuous outputs for smoke as well as for other objects; the smoke outputs provide a measure of both the concentration of smoke and its mixing with other cover types. The plumes in different locations are efficiently detected by the algorithm, and these results could provide valuable information concerning fire location, fire spread direction and so on. However, the main disadvantage of the proposed algorithm is that geographic differences exist between the areas of the seasonal training sample sets and the detected areas; thus, while it can be used to identify smoke plumes emitted from wildfires in different seasons, misclassifications in different locations are inevitable. Therefore, seasonal and region-dependent training sample sets will be established in the future. In addition, misclassifications and missed detections are also generated because of the overlap between smoke and other cover types.

Acknowledgements

This research was supported by the National Natural Science Foundation of China (Grant No. 51120165001), the CAS (China) and CNR (Italy) joint project “Development of Models and Time Series Analysis Tools for the Analysis of the Fire Sequences” and the Specialized Research Fund for the Doctoral Program of Higher Education of China (No. 20133402110009).

Author Contributions

Xiaolian Li and Weiguo Song had the original idea for the study and, with all co-authors carried out the work. Weiguo Song was responsible for organization of study participants. Liping Lian and Xiaoge Wei were responsible for collecting and pre-processing the data. Xiaolian Li was responsible for data analyses and drafted the manuscript, which was revised by all authors. All authors read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Telesca, L.; Kanevski, M.; Tonini, M.; Pezzatti, G.B.; Conedera, M. Temporal patterns of fire sequences observed in canton of Ticino (Southern Switzerland). Nat. Hazard. Earth Syst. 2010, 10, 723–728. [Google Scholar] [CrossRef]
  2. Kaufman, Y.J.; Fraser, R.S. The effect of smoke particles on clouds and climate forcing. Science 1997, 277, 1636–1639. [Google Scholar] [CrossRef]
  3. Randerson, J.T.; Liu, H.; Flanner, M.G.; Chambers, S.D.; Jin, Y.; Hess, P.G.; Pfister, G.; Mack, M.C.; Treseder, K.K.; Welp, L.R.; et al. The impact of boreal forest fire on climate warming. Science 2006, 314, 1130–1132. [Google Scholar] [CrossRef] [PubMed]
  4. Wang, J.; Christopher, S.A.; Nair, U.S.; Reid, J.S.; Prins, E.M.; Szykman, J.; Hand, J.L. Mesoscale modeling of Central American smoke transport to the United States: 1. “Top-down” assessment of emission strength and diurnal variation impacts. J. Geophys. Res.: Atmos. 2006, 111. [Google Scholar] [CrossRef]
  5. Ge, C.; Wang, J.; Reid, J.S. Mesoscale modeling of smoke transport over the Southeast Asian maritime continent: Coupling of smoke direct radiative effect below and above the low-level clouds. Atmos. Chem. Phys. 2014, 14, 159–174. [Google Scholar] [CrossRef]
  6. Mott, J.A.; Meyer, P.; Mannino, D.; Redd, S.C.; Smith, E.M.; Gotway-Crawford, C.; Chase, E. Wildland forest fire smoke: Health effects and intervention evaluation, Hoopa, California, 1999. Western J. Med. 2002, 176, 157–162. [Google Scholar] [CrossRef]
  7. Penner, J.E.; Dickinson, R.E.; Oneill, C.A. Effects of aerosol from biomass burning on the global radiation budget. Science 1992, 256, 1432–1434. [Google Scholar] [CrossRef] [PubMed]
  8. Crutzen, P.J.; Andreae, M.O. Biomass burning in the tropics—Impact on atmospheric chemistry and biogeochemical cycles. Science 1990, 250, 1669–1678. [Google Scholar] [CrossRef] [PubMed]
  9. Kaskaoutis, D.G.; Kharol, S.K.; Sifakis, N.; Nastos, P.T.; Sharma, A.R.; Badarinath, K.V.S.; Kambezidis, H.D. Satellite monitoring of the biomass-burning aerosols during the wildfires of August 2007 in Greece: Climate implications. Atmos. Environ. 2011, 45, 716–726. [Google Scholar] [CrossRef]
  10. Duclos, P.; Sanderson, L.M.; Lipsett, M. The 1987 forest fire disaster in California—Assessment of emergency room visits. Arch. Environ. Health 1990, 45, 53–58. [Google Scholar] [CrossRef] [PubMed]
  11. Shusterman, D.; Kaplan, J.Z.; Canabarro, C. Immediate health-effects of an urban wildfire. Western J. Med. 1993, 158, 133–138. [Google Scholar]
  12. Li, Z.Q. Influence of absorbing aerosols on the inference of solar surface radiation budget and cloud absorption. J. Climate 1998, 11, 5–17. [Google Scholar] [CrossRef]
  13. Cahoon, D.R.; Stocks, B.J.; Levine, J.S.; Cofer, W.R.; Pierson, J.M. Satellite analysis of the severe 1987 forest-fires in northern China and southeastern Siberia. J. Geophys. Res.: Atmos. 1994, 99, 18627–18638. [Google Scholar] [CrossRef]
  14. Prins, E.M.; Feltz, J.M.; Menzel, W.P.; Ward, D.E. An overview of GOES-8 diurnal fire and smoke results for SCAR-B and 1995 fire season in South America. J. Geophys. Res.: Atmos. 1998, 103, 31821–31835. [Google Scholar] [CrossRef]
  15. Fromm, M.; Alfred, J.; Hoppel, K.; Hornstein, J.; Bevilacqua, R.; Shettle, E.; Servranckx, R.; Li, Z.Q.; Stocks, B. Observations of boreal forest fire smoke in the stratosphere by Poam III, Sage II, and Lidar in 1998. Geophys. Res. Lett. 2000, 27, 1407–1410. [Google Scholar] [CrossRef]
  16. Chrysoulakis, N.; Herlin, I.; Prastacos, P.; Yahia, H.; Grazzini, J.; Cartalis, C. An improved algorithm for the detection of plumes caused by natural or technological hazards using AVHRR imagery. Remote Sens. Environ. 2007, 108, 393–406. [Google Scholar] [CrossRef]
  17. San-Miguel-Ayanz, J.; Ravail, N.; Kelha, V.; Ollero, A. Active fire detection for fire emergency management: Potential and limitations for the operational use of remote sensing. Nat. Hazards 2005, 35, 361–376. [Google Scholar] [CrossRef]
  18. Wang, W.T.; Qu, J.J.; Hao, X.J.; Liu, Y.Q.; Sommers, W.T. An improved algorithm for small and cool fire detection using MODIS data: A preliminary study in the southeastern United States. Remote Sens. Environ. 2007, 108, 163–170. [Google Scholar] [CrossRef]
  19. Wang, W.T.; Qu, J.J.; Hao, X.J.; Liu, Y.Q. Analysis of the moderate resolution imaging spectroradiometer contextual algorithm for small fire detection. J. Appl. Remote Sens. 2009, 3. [Google Scholar] [CrossRef]
  20. Liu, C.C.; Kuo, Y.C.; Chen, C.W. Emergency responses to natural disasters using Formosat-2 high-spatiotemporal-resolution imagery: Forest fires. Nat. Hazards 2013, 66, 1037–1057. [Google Scholar] [CrossRef]
  21. Li, Z.Q.; Khananian, A.; Fraser, R.H.; Cihlar, J. Automatic detection of fire smoke using artificial neural networks and threshold approaches applied to AVHRR imagery. IEEE Trans. Geosci. Remote Sens. 2001, 39, 1859–1870. [Google Scholar] [CrossRef]
  22. Chung, Y.S.; Le, H.V. Detection of forest-fire smoke plumes by satellite imagery. Atmos. Environ. 1984, 18, 2143–2151. [Google Scholar] [CrossRef]
  23. Christopher, S.A.; Chou, J. The potential for collocated AGLP and ERBE data for fire, smoke, and radiation budget studies. Int. J. Remote Sens. 1997, 18, 2657–2676. [Google Scholar] [CrossRef]
  24. Chrysoulakis, N.; Opie, C. Using NOAA and FY imagery to track plumes caused by the 2003 bombing of Baghdad. Int. J. Remote Sens. 2004, 25, 5247–5254. [Google Scholar] [CrossRef]
  25. Kaufman, Y.J.; Tucker, C.J.; Fung, I. Remote-sensing of biomass burning in the tropics. J. Geophys. Res.: Atmos. 1990, 95, 9927–9939. [Google Scholar] [CrossRef]
  26. Randriambelo, T.; Baldy, S.; Bessafi, M. An improved detection and characterization of active fires and smoke plumes in south-eastern Africa and Madagascar. Int. J. Remote Sens. 1998, 19, 2623–2638. [Google Scholar] [CrossRef]
  27. Baum, B.A.; Trepte, Q. A grouped threshold approach for scene identification in AVHRR imagery. J. Atmos. Ocean. Tech. 1999, 16, 793–800. [Google Scholar] [CrossRef]
  28. Xie, Y.; Qu, J.J.; Xiong, X.; Hao, X.; Che, N.; Sommers, W. Smoke plume detection in the eastern United States using MODIS. Int. J. Remote Sens. 2007, 28, 2367–2374. [Google Scholar] [CrossRef]
  29. Zhao, T.X.P.; Ackerman, S.; Guo, W. Dust and smoke detection for multi-channel imagers. Remote Sens. 2010, 2, 2347–2368. [Google Scholar] [CrossRef]
  30. Xie, Y. Detection of Smoke and Dust Aerosols Using Multi-Sensor Satellite Remote Sensing Measurements; George Mason University: Fairfax, VA, USA, 2009. [Google Scholar]
  31. Gong, P.; Pu, R.L.; Li, Z.Q.; Scarborough, J.; Clinton, N.; Levien, L.M. An integrated approach to wildland fire mapping of California, USA using NOAA/AVHRR data. Photogramm. Eng. Remote Sens. 2006, 72, 139–150. [Google Scholar] [CrossRef]
  32. Kaufman, Y.J.; Herring, D.D.; Ranson, K.J.; Collatz, G.J. Earth Observing System AM1 mission to Earth. IEEE Trans. Geosci. Remote Sens. 1998, 36, 1045–1055. [Google Scholar] [CrossRef]
  33. Remer, L.A.; Kaufman, Y.J.; Tanre, D.; Mattoo, S.; Chu, D.A.; Martins, J.V.; Li, R.R.; Ichoku, C.; Levy, R.C.; Kleidman, R.G.; et al. The MODIS aerosol algorithm, products, and validation. J. Atmos. Sci. 2005, 62, 947–973. [Google Scholar] [CrossRef]
  34. Gao, B.-C.; Goetz, A.F.H.; Wiscombe, W.J. Cirrus cloud detection from airborne imaging Spectrometer data using the 1.38 µm water vapor band. Geophys. Res. Lett. 1993, 20, 301–304. [Google Scholar] [CrossRef]
  35. Gao, B.-C.; Yang, P.; Han, W.; Li, R.-R.; Wiscombe, W.J. An algorithm using visible and 1.38-μm channels to retrieve cirrus cloud reflectances from aircraft and satellite data. IEEE Trans. Geosci. Remote Sens. 2002, 40, 1659–1668. [Google Scholar] [CrossRef]
  36. NASA: MODIS Level 1 Data, Geolocation, Cloud Mask, and Atmosphere Products. Available online: http://ladsweb.nascom.nasa.gov/ (accessed on 14 December 2014).
  37. Clark, C.; Canas, A. Spectral identification by artificial neural-network and genetic algorithm. Int. J. Remote Sens. 1995, 16, 2255–2275. [Google Scholar] [CrossRef]
  38. Stroppiana, D.; Pinnock, S.; Gregoire, J.M. The global fire product: Daily fire occurrence from April 1992 to December 1993 derived from NOAA AVHRR data. Int. J. Remote Sens. 2000, 21, 1279–1288. [Google Scholar] [CrossRef]
  39. Giglio, L.; Descloitres, J.; Justice, C.O.; Kaufman, Y.J. An enhanced contextual fire detection algorithm for MODIS. Remote Sens. Environ. 2003, 87, 273–282. [Google Scholar] [CrossRef]
  40. Huang, S.F.; Li, J.G.; Xu, M. Water surface variations monitoring and flood hazard analysis in Dongting Lake area using long-term TERRA/MODIS data time series. Nat. Hazards 2012, 62, 93–100. [Google Scholar] [CrossRef]
  41. Ben-Ze’Ev, E.; Karnieli, A.; Agam, N.; Kaufman, Y.; Holben, B. Assessing vegetation condition in the presence of biomass burning smoke by applying the aerosol-free vegetation index (AFRI) on MODIS images. Int. J. Remote Sens. 2006, 27, 3203–3221. [Google Scholar] [CrossRef]
  42. Escuin, S.; Navarro, R.; Fernandez, P. Fire severity assessment by using NBR (normalized burn ratio) and NDVI (normalized difference vegetation index) derived from Landsat TM/ETM images. Int. J. Remote Sens. 2008, 29, 1053–1073. [Google Scholar] [CrossRef]
  43. Telesca, L.; Lasaponara, R. Investigating fire-induced behavioural trends in vegetation covers. Commun. Nonlinear Sci. 2008, 13, 2018–2023. [Google Scholar] [CrossRef]
  44. Leon, J.R.R.; van Leeuwen, W.J.D.; Casady, G.M. Using MODIS-NDVI for the modeling of post-wildfire vegetation response as a function of environmental conditions and pre-fire restoration treatments. Remote Sens. 2012, 4, 598–621. [Google Scholar] [CrossRef]
  45. Clerici, N.; Weissteiner, C.J.; Gerard, F. Exploring the use of MODIS NDVI-based phenology indicators for classifying forest general habitat categories. Remote Sens. 2012, 4, 1781–1803. [Google Scholar] [CrossRef] [Green Version]
  46. Chowdhury, E.H.; Hassan, Q.K. Use of remote sensing-derived variables in developing a forest fire danger forecasting system. Nat. Hazards 2013, 67, 321–334. [Google Scholar] [CrossRef]
  47. Li, X.L.; Wang, J.; Song, W.G.; Ma, J.; Telesca, L.; Zhang, Y.M. Automatic smoke detection in MODIS satellite data based on K-means clustering and Fisher linear discrimination. Photogramm. Eng. Remote Sens. 2014, 80, 971–982. [Google Scholar]
  48. Gao, B.-C.; Xiong, X.X.; Li, R.R.; Wang, D.Y. Evaluation of the moderate resolution imaging spectrometer special 3.95-μm fire channel and implications on fire channel selections for future satellite instruments. J. Appl. Remote Sens. 2007, 1. [Google Scholar] [CrossRef]
  49. Ben-Dor, E. A precaution regarding cirrus cloud detection from airborne imaging spectrometer data using the 1.38-µm water-vapor band. Remote Sens. Environ. 1994, 50, 346–350. [Google Scholar] [CrossRef]
  50. Hsieh, W.W.; Tang, B.Y. Applying neural network models to prediction and data analysis in meteorology and oceanography. Bull. Am. Meteorol. Soc. 1998, 79, 1855–1870. [Google Scholar] [CrossRef]
  51. Li, L.M.; Song, W.G.; Ma, J.; Satoh, K. Artificial neural network approach for modeling the impact of population density and weather parameters on forest fire risk. Int. J. Wildland. Fire 2009, 18, 640–647. [Google Scholar] [CrossRef]
  52. Vasilakos, C.; Kalabokidis, K.; Hatzopoulos, J.; Matsinos, I. Identifying wildland fire ignition factors through sensitivity analysis of a neural network. Nat. Hazards 2009, 50, 125–143. [Google Scholar] [CrossRef]
  53. Albayrak, A.; Wei, J.; Petrenko, M.; Lynnes, C.; Levy, R.C. Global bias adjustment for MODIS aerosol optical thickness using neural network. J. Appl. Remote Sens. 2013, 7. [Google Scholar] [CrossRef]
  54. Rumelhart, D.E.; Mcclelland, J.L. Parallel Distributed Processing: Explorations in the Microstructure of Cognition; The MIT Press: Cambridge, MA, USA, 1986. [Google Scholar]
  55. Bishop, C.M. Neural Networks for Pattern Recognition; Oxford University Press: Oxford, UK, 1995. [Google Scholar]
  56. Levenberg, K. A method for the solution of certain non-linear problems in least squares. Q. Appl. Math. 1944, 2, 164–168. [Google Scholar]
  57. Marquardt, D.W. An algorithm for least-squares estimation of nonlinear parameters. J. Soc. Ind. Appl. Math. 1963, 11, 431–441. [Google Scholar] [CrossRef]
  58. Taylor, M.; Kazadzis, S.; Tsekeri, A.; Gkikas, A.; Amiridis, V. Satellite retrieval of aerosol microphysical and optical parameters using neural networks: A new methodology applied to the Sahara Desert dust peak. Atmos. Meas. Tech. 2014, 7, 3151–3175. [Google Scholar] [CrossRef] [Green Version]
