Article

Automatic Extraction of Seismic Landslides in Large Areas with Complex Environments Based on Deep Learning: An Example of the 2018 Iburi Earthquake, Japan

1 Institute of Geology, China Earthquake Administration, Beijing 100029, China
2 Key Laboratory of Seismic and Volcanic Hazards, China Earthquake Administration, Beijing 100029, China
3 National Institute of Natural Hazards, Ministry of Emergency Management of China, Beijing 100085, China
4 Southern Yunan Observatory for Cross-block Dynamic Process, Yuxi 652799, China
5 Xichang Observatory for Natural Disaster Dynamics of Strike-slip Fault System in the Tibetan Plateau, Xichang 615000, China
6 School of Engineering and Technology, China University of Geosciences (Beijing), Beijing 100083, China
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(23), 3992; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12233992
Submission received: 23 October 2020 / Revised: 20 November 2020 / Accepted: 3 December 2020 / Published: 6 December 2020

Abstract

After a major earthquake, rapid identification and mapping of co-seismic landslides across the whole affected area is of great significance for emergency rescue and loss assessment. In recent years, researchers have achieved good results on this problem for small areas with relatively uniform environmental characteristics. However, for whole earthquake-affected areas, which are large and environmentally complex, the accuracy of co-seismic landslide extraction remains low, and no ideal method has yet been established. In this paper, Planet satellite images with a spatial resolution of 3 m are used to train a deep learning model for seismic landslide recognition and to carry out rapid, automatic extraction of the landslides triggered by the 2018 Iburi earthquake, Japan. The study area covers about 671.87 km2, of which 60% is used to train the model and the remaining 40% to verify its accuracy. The results show that most of the co-seismic landslides can be identified by this method: the verification precision of the model is 0.7965 and the F1 score is 0.8288. The method can intelligently identify and map earthquake-triggered landslides from Planet images, with strong practicability and high accuracy, and can support earthquake emergency rescue and rapid disaster assessment.

Graphical Abstract

1. Introduction

Co-seismic landslides are a major secondary effect of earthquakes, and the losses they cause usually account for a large proportion of the total loss of an earthquake disaster [1,2]. Co-seismic landslides can damage roads, block rivers, bury houses and collapse bridges, making emergency rescue and on-site investigation difficult and seriously affecting life rescue and earthquake disaster assessment. Therefore, quickly and accurately obtaining the location, extent and size of co-seismic landslides is of great significance for guiding earthquake emergency rescue, disaster assessment and reconstruction [3]. In recent years, researchers have carried out considerable work in this field, most of it focused on small areas with relatively uniform environments, and have achieved good results [4,5,6]. However, for whole earthquake-affected areas, which are large and environmentally complex, the extraction accuracy of co-seismic landslides remains low, and there is as yet no ideal method to solve this problem [7,8].
After a major earthquake, the rapid acquisition of satellite remote sensing images and UAV photos permits investigation and cataloging of co-seismic landslides. The most commonly used landslide identification method relies on visual interpretation of such images. Although this method is highly accurate, it requires interpreters with substantial professional knowledge and experience, and for images covering large areas with many earthquake-triggered landslides it cannot meet the timeliness requirements of earthquake emergency response and rapid assessment. One possible way to tackle this problem is to develop automatic identification of co-seismic landslides.
To realize automatic identification of landslides, automatic classification of remote sensing images is a first step; such methods are divided into pixel-based and object-oriented approaches. Traditional pixel-based methods include supervised classification, unsupervised classification and decision tree classification based on expert knowledge, and researchers have done much work on them [9]. Most of this work is based on traditional statistical analysis and machine learning, including maximum likelihood [10], principal component transformation [11], bag-of-words features [12], support vector machines [13], transfer learning [14,15] and so on. When post-earthquake images of different dates are available, some experts use change detection to identify landslides [16,17,18,19]. These methods are automatic or semiautomatic in a certain sense, but their samples cannot be reused. They analyze pixels according to the spectral characteristics of the remote sensing image, so landslide recognition performance differs across regions and situations. They also have limitations for high-resolution images: different objects may share the same spectrum, and the same object may show different spectra, so the extraction results exhibit a "salt-and-pepper" appearance similar to black-and-white television noise. Most machine learning algorithms only perform nonlinear mapping of the internal features of the data. Given multi-source detection data, explosive data growth and other problems, traditional machine learning cannot extract complex features from high-dimensional data [20].
Object-oriented classification segments homogeneous images, groups adjacent pixels into analysis objects, and uses high-resolution and multispectral data to achieve high-precision classification [21]. Many scholars have used object-oriented classification to extract landslides automatically, with higher accuracy than traditional statistical analysis and machine learning methods [5,22,23,24,25,26,27,28,29,30]. To further improve the accuracy of automatic extraction, some scholars have combined traditional statistical analysis and machine learning methods to optimize the object-oriented approach [31,32,33,34,35,36]. However, the multi-level landslide identification schemes built with object-oriented classification require experience and manual parameter tuning when setting segmentation parameters and classification rules, which is time-consuming and cannot meet the needs of an earthquake emergency [5]. Moreover, this approach is better suited to Unmanned Aerial Vehicle (UAV) images with very high spatial resolution, and its applicability to satellite remote sensing images covering larger areas at relatively low spatial resolution still needs to be improved.
In recent years, with the development of deep learning, many researchers have applied it to automate the extraction of landslides [37,38,39,40,41]. For example, Zhang et al. proposed a new pattern of early rapid intelligent recognition of geological hazards, which includes image recognition, deformation recognition, displacement recognition, internal cause recognition, inducement recognition and comprehensive recognition, using mature rule-based methods, traditional machine learning, representation learning and partial deep learning [42]. Some scholars have explored automatic extraction of landslides based on convolutional neural networks and proposed improved schemes [4,43,44,45,46]. In this respect, the U-Net model is an improved Fully Convolutional Network (FCN) structure that combines an encoding–decoding structure with skip connections [47]; its compact architecture can produce accurate classification results with fewer training samples. Many scholars have therefore studied automatic landslide identification by improving the U-Net model [6,48,49]. Compared with traditional machine learning methods, deep learning algorithms can automatically extract the most effective features by using deep convolutional layers on large data sets [50]. Although models with more convolutional layers offer better separability than shallower models, deep neural networks are harder to train and require a large number of annotated training samples. The ENVI Deep Learning Module used in this paper is a U-Net model developed on the TensorFlow framework that can classify remote sensing images. Compared with other approaches, it is more convenient: it requires no programming, is simple to operate and has a low error rate. In addition, it can be applied to the identification of landslides in large areas with complex environments.
In view of the urgency of disasters and to improve the timeliness of applying remote sensing to earthquake emergencies, this work attempts to extract the distribution of landslides from images quickly and accurately. Based on high-resolution (3 m) satellite images acquired before and after an earthquake, the spatial distribution of landslides can be accurately determined with this tool during the detailed post-earthquake investigation. It permits establishing a high-precision landslide identification model in a short time after the earthquake and realizing automatic identification of the spatial distribution of landslides. Its core is training on a large number of landslide samples to improve the accuracy and efficiency of landslide recognition, thereby improving on the traditional working mode of visual interpretation of post-disaster data.

2. Study Area

The study area of this work is located in the Iburi–Tobu district, Hokkaido, Japan (Figure 1). At 03:08 a.m. on 6 September 2018 (Japan Time), an Mw 6.6 earthquake occurred in the Oshima Belt region, east of Tomakomai on the island of Hokkaido, Japan, with its epicenter at 42.72°N, 142.0°E. This shock induced at least 9295 landslides [51], which were widely distributed and highly destructive. According to reports, the earthquake caused 44 deaths, 36 of which were due to landslides triggered by seismic ground motion [52].
Figure 2 shows a group of aerial photos of landslides caused by the 2018 Iburi earthquake. The density of landslides triggered by this earthquake is relatively high, and most are shallow slope failures with continuous distribution. The affected area has extensive vegetation and a large number of houses, roads, farms and other features, so the study area of this work is a large, environmentally complex region covering the entire earthquake-affected area. The co-seismic landslides destroyed ridges, many houses and farmland, blocked roads and valleys, and changed the terrain. The hazard also uprooted a large number of trees across the disaster-stricken mountainous areas. After the landslides occurred, rock and soil mixed together and piled up at the foot of the slopes and on flat ground, changing the original landform features.

3. Data

This study uses high-resolution (3 m) Planet satellite images obtained five days after the earthquake [54]. The spectral bands of the Planet images include blue (455–515 nm), green (500–590 nm), red (590–670 nm) and near infrared (780–860 nm). The product level used is 3B, an orthorectified image that has undergone sensor and radiometric calibration and orthorectification. A true color composite of the red, green and blue bands is used for mosaicking. These images cover the entire affected area with low cloud cover. High-resolution (3 m) Planet images taken shortly before the earthquake are also used to ensure that landslides existing before the earthquake are not identified as co-seismic landslides. The study area is a rectangular region of about 671.87 km2, spanning longitudes 141°50′24.25″–142°8′18.93″ E and latitudes 42°38′21.29″–42°53′15.72″ N. Figure 3A,B shows remote sensing images before and after the earthquake, respectively.
The landslide inventory map of the Iburi earthquake (Figure 3B) was prepared by our visual interpretation of the Planet satellite images. Figure 4 compares remote sensing images and manually interpreted landslides before and after the earthquake in some areas; it shows that the earthquake caused a large number of co-seismic landslides. Areas with a flowing texture, light hue and damaged vegetation cover can be regarded as co-seismic landslides. Owing to the thick vegetation cover in the study area, the hue of non-seismic landslides is usually darker than that of co-seismic landslides. Visual interpretation shows that at least 9314 landslides in the study area were triggered by the 2018 Iburi earthquake.
To enable the model to extract co-seismic landslides, a set of labeled raster cells indicating the characteristics of landslides must be used to train the model. The landslide vector data are converted into polygonal ROIs (regions of interest; here, the landslide sample areas) of the corresponding region, which contain targets with a variety of shapes, colors and textures, improving the final classification accuracy. In general, the more ROIs drawn, the higher the classification accuracy. The ROIs are used to create the label rasters of the training area and verification area (Figure 5). The image of the study area is 8034 × 9292 pixels. The northern 60% of the image is cropped to create the training raster (Figure 5A), with a size of 8034 × 5576 pixels and containing 7019 landslide samples totaling 2,851,789 landslide pixels. The remaining southern 40% of the image is used to create the validation raster (Figure 5B), with a size of 8034 × 3717 pixels and containing 2349 landslide samples totaling 925,134 landslide pixels. The label raster contains the original bands of the input image and a mask band. The original bands of the experimental image are red, green and blue. The DN value of the mask band indicates the probability that a pixel is a landslide: a value of 1 represents a landslide pixel and 0 a non-landslide pixel. Figure 5 displays the three image bands and the mask band combined into an RGB display.
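For readers who prefer a scriptable alternative to ENVI's ROI tools, the following is a minimal sketch of how such a label raster could be produced with open-source Python libraries. The file names planet_post.tif and landslides.shp are hypothetical placeholders, and this is not the workflow actually used in this study.

```python
# Minimal sketch (not the ENVI workflow used here): burn landslide polygons
# into a mask band and stack it with the RGB bands to form a label raster.
import geopandas as gpd
import rasterio
from rasterio import features

with rasterio.open("planet_post.tif") as src:      # hypothetical post-event scene
    meta = src.meta.copy()
    rgb = src.read([1, 2, 3])                      # red, green, blue bands
    shape = (src.height, src.width)
    transform = src.transform
    crs = src.crs

landslides = gpd.read_file("landslides.shp").to_crs(crs)   # interpreted polygons

# Mask band convention as described above: DN 1 = landslide, 0 = non-landslide.
mask = features.rasterize(
    ((geom, 1) for geom in landslides.geometry),
    out_shape=shape,
    transform=transform,
    fill=0,
    dtype="uint8",
)

meta.update(count=4, dtype=rgb.dtype.name)
with rasterio.open("label_raster.tif", "w", **meta) as dst:
    for band in range(3):
        dst.write(rgb[band], band + 1)             # original image bands
    dst.write(mask.astype(rgb.dtype), 4)           # mask band
```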

4. Method

4.1. ENVINet5 Network Architecture

TensorFlow is an open source software library that uses data flow graphs for numerical computation [55]. The nodes in a data flow graph represent mathematical operations, and the edges represent the multidimensional data arrays (tensors) passed between nodes. TensorFlow is characterized by high flexibility, portability and performance optimization. ENVI deep learning relies on the TensorFlow model to perform classification, which is the core of the whole process.
The ENVINet5 architecture used by the ENVI Deep Learning Module is based on the U-Net architecture developed by Ronneberger et al. [47]. Like U-Net, ENVINet5 is a mask-based encoder–decoder architecture that can be used to classify each pixel of a remote sensing image.
Figure 6 shows the network architecture of ENVINet5, which has 5 levels and 23 convolutional layers. Each level represents a different pixel resolution in the model. Downsampling increases robustness to small disturbances of the input image, such as translation and rotation, reduces the risk of overfitting, reduces the amount of computation and increases the size of the receptive field. Upsampling restores and decodes the abstract features to the size of the original image to produce the segmentation result. Shallow layers retain detailed content information, whereas deeper layers carry less content but more abstract features; the merge (skip) connections add content information back into the deep layers.
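The exact internals of ENVINet5 are not published in detail, but the encoder–decoder-with-merge idea it borrows from U-Net can be sketched in a few lines of Keras. The example below is a deliberately shallow stand-in (two levels instead of five, reduced filter counts), not the ENVINet5 implementation:

```python
# A shallow U-Net-style sketch illustrating downsampling, upsampling and
# merge (skip) connections; NOT the actual ENVINet5 network.
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    # Two 3 x 3 convolutions, as in the U-Net building block.
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def tiny_unet(input_shape=(572, 572, 3)):
    inputs = layers.Input(shape=input_shape)

    # Encoder: convolutions followed by max pooling (downsampling).
    c1 = conv_block(inputs, 32)
    p1 = layers.MaxPooling2D(2)(c1)
    c2 = conv_block(p1, 64)
    p2 = layers.MaxPooling2D(2)(c2)

    # Bottleneck at the coarsest resolution.
    b = conv_block(p2, 128)

    # Decoder: up-convolutions merged with the encoder feature maps.
    u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(b)
    c3 = conv_block(layers.Concatenate()([u2, c2]), 64)
    u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(c3)
    c4 = conv_block(layers.Concatenate()([u1, c1]), 32)

    # 1 x 1 convolution producing a per-pixel landslide probability map.
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c4)
    return Model(inputs, outputs)

model = tiny_unet()
```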

4.2. Initializing TensorFlow Model

Before starting training, an initialized TensorFlow model must be set up. A patch is a small image tile that is fed into the model for training; it cannot be larger than the minimum edge length (in pixels) of the clipped subregions. The number of bands is set to the number of bands of the raster image. Accordingly, the patch size of the initial model used in this work is 572 × 572 pixels with 3 bands. Both the initialization model and the trained model in ENVI are stored in HDF5 (.h5) format.
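Outside ENVI, the equivalent initialization step amounts to instantiating the network for the chosen patch size and saving it in HDF5 format. A minimal sketch, reusing the hypothetical tiny_unet() builder from the previous example:

```python
# Build an untrained model for 572 x 572 x 3 patches and store it as an
# HDF5 (.h5) file, mirroring ENVI's initialized model file.
import tensorflow as tf

init_model = tiny_unet(input_shape=(572, 572, 3))
init_model.save("initialized_model.h5")            # hypothetical file name

# Training can later resume from this file.
restored = tf.keras.models.load_model("initialized_model.h5")
```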

4.3. Training TensorFlow Models

The established landslide training raster is used to train the initialization model. Model training consists of repeatedly exposing the training raster to the model. As training progresses, the model learns to convert the spectral and spatial information in the training raster into a class activation map/raster (CAM) that highlights the target to be extracted. In the first pass, the model makes an initial guess and generates a random CAM grayscale image, which is compared with the mask band in the training raster. Through the goodness-of-fit function (also known as the loss function), the model learns where its guesses are wrong, and its internal parameters or weights are then adjusted to make it more accurate.
ENVINet5 adopts the binary cross-entropy loss function with the weight map used by U-Net [47]:

$$E = \sum_{x \in \Omega} \omega(x)\, \log\!\left(p_{\ell(x)}(x)\right),$$

where $p_{\ell(x)}(x)$ is the softmax probability for the true class of pixel $x$; $\ell : \Omega \rightarrow \{1, \ldots, K\}$ is the label of each pixel and $\omega : \Omega \rightarrow \mathbb{R}$ is a weight map that gives higher weight to pixels close to boundary points in the image.
The optimizer used in this experiment is stochastic gradient descent (SGD) with a momentum coefficient of 0.99. The class weight highlights feature pixels at the beginning of training; its value is approximately 0.8022. The loss weight highlights feature pixels when checking the training effect; its value is approximately 0.70. The blur distance blurs the label ROIs by up to the specified maximum distance at the beginning of training to help the model find target edges, and is reduced to the minimum as training proceeds; its value is approximately 3.8566.
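ENVI does not expose its training code, but a rough Keras analogue of a pixel-weighted cross-entropy loss combined with SGD (momentum 0.99), using the model variable from the earlier sketch, is shown below. The learning rate and the way the weight map is packed into the label tensor are assumptions for illustration only.

```python
# Rough analogue (not ENVI's internal code) of the weighted cross-entropy
# loss above, compiled with SGD and a momentum coefficient of 0.99.
# y_true is assumed to carry two channels: the binary mask and a precomputed
# per-pixel weight map w(x); y_pred is the sigmoid output of the network.
import tensorflow as tf

def weighted_binary_crossentropy(y_true, y_pred):
    mask = y_true[..., 0:1]                        # 1 = landslide, 0 = background
    weights = y_true[..., 1:2]                     # weight map w(x)
    bce = tf.keras.losses.binary_crossentropy(mask, y_pred)
    return tf.reduce_mean(weights * tf.expand_dims(bce, axis=-1))

# The learning rate is a placeholder; ENVI does not expose this parameter.
sgd = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.99)
model.compile(optimizer=sgd, loss=weighted_binary_crossentropy)
```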
To prevent overfitting, data augmentation is used. Data augmentation is a technique commonly used in deep learning to supplement the original training data; with more information to extract from the training data, the trainer and classifier can learn the appearance of features of interest more effectively. During each epoch, ENVI creates a new training dataset with a randomly assigned rotation angle per training example, and likewise with a randomly assigned scale factor per training example.
In traditional deep learning, an epoch is the period in which the entire data set is passed through the training model. The ENVI Deep Learning Module differs: it intelligently extracts patches from the training raster, so areas with many bright feature pixels are encountered more often than low-density regions at the beginning of training, while at the end of training all areas are sampled more uniformly. Because this biased patch extraction is adjusted over time, an epoch in ENVI deep learning refers to the number of patches trained between adjustments of the sampling bias.
To obtain a better model, multiple epochs are needed to fully train it. The number of epochs and the number of patches per epoch depend on the diversity of the feature set to be learned and have no exact optimal values; in general, enough epochs are needed to adjust the weights smoothly. The number of epochs is set to 30 in this work. The number of patches per epoch determines the amount of training; it is set lower for small datasets and higher for large ones, and is set to 500 in this work. The number of patches per batch is automatically set by the system to 1. The remaining parameters are left at their defaults or determined automatically by ENVI.
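A simplified stand-in for this training loop (ENVI's biased patch sampling is not reproduced) would draw random 572 × 572 patches from the label raster, apply random rotations as augmentation, and train for 30 epochs of 500 patches each. The arrays train_image/train_labels below are assumed to come from the label rasters described in Section 3.

```python
# Simplified training loop: random patches, simple rotation augmentation,
# 30 epochs of 500 patches, one patch per batch (as in this work).
import numpy as np
import tensorflow as tf

PATCH, EPOCHS, PATCHES_PER_EPOCH = 572, 30, 500

def random_patches(image, labels):
    # image: (H, W, 3) array; labels: (H, W, 2) array of mask + weight map.
    h, w = image.shape[:2]
    while True:
        r = np.random.randint(0, h - PATCH)
        c = np.random.randint(0, w - PATCH)
        x, y = image[r:r + PATCH, c:c + PATCH], labels[r:r + PATCH, c:c + PATCH]
        k = np.random.randint(4)                   # random 90-degree rotation
        yield np.rot90(x, k).copy(), np.rot90(y, k).copy()

def make_dataset(image, labels):
    ds = tf.data.Dataset.from_generator(
        lambda: random_patches(image, labels),
        output_signature=(
            tf.TensorSpec(shape=(PATCH, PATCH, 3), dtype=tf.float32),
            tf.TensorSpec(shape=(PATCH, PATCH, 2), dtype=tf.float32),
        ),
    )
    return ds.batch(1)                             # one patch per batch

# history = model.fit(make_dataset(train_image, train_labels),
#                     steps_per_epoch=PATCHES_PER_EPOCH, epochs=EPOCHS,
#                     validation_data=make_dataset(val_image, val_labels),
#                     validation_steps=100)        # validation_steps is an assumption
```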

4.4. Model Training and Validation Indexes

After training, the evaluation indexes of the deep learning model include the loss, accuracy, precision, recall and F-measure (F1) of each batch, the training data set and the verification data set. The loss is a unitless number indicating how well the classifier matches the verification training data; a value of 0 indicates a perfect fit, and the further the value is from 0, the lower the fit accuracy. The accuracy is the proportion of all samples whose predicted label matches the reference landslide label. The precision is the proportion of predicted positive samples that are actually positive. The recall is the proportion of actual positive samples that are correctly predicted as positive. The F-measure, also known as the F1 score, is the harmonic mean of precision and recall and ranges from 0 to 1; the larger the F1, the better the prediction. These indexes are expressed as follows:
$$\mathrm{Accuracy} = \frac{\mathrm{TP} + \mathrm{TN}}{\mathrm{TP} + \mathrm{TN} + \mathrm{FP} + \mathrm{FN}},$$
$$\mathrm{Precision} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FP}},$$
$$\mathrm{Recall} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FN}},$$
$$F1 = \frac{2}{\frac{1}{\mathrm{Precision}} + \frac{1}{\mathrm{Recall}}} = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}},$$
where TP is true positive, meaning positive samples correctly predicted as positive; FN is false negative, meaning positive samples wrongly predicted as negative; FP is false positive, meaning negative samples wrongly predicted as positive; and TN is true negative, meaning negative samples correctly predicted as negative.
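As a worked example of these definitions, the four indexes can be computed from flattened predicted and reference masks with scikit-learn:

```python
# Computing accuracy, precision, recall and F1 from predicted and reference
# labels (for rasters, flatten the two mask arrays first).
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = np.array([1, 1, 0, 1, 0, 0, 1, 0])        # toy reference labels
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])        # toy predicted labels

print("accuracy :", accuracy_score(y_true, y_pred))   # (TP+TN)/(TP+TN+FP+FN)
print("precision:", precision_score(y_true, y_pred))  # TP/(TP+FP)
print("recall   :", recall_score(y_true, y_pred))     # TP/(TP+FN)
print("f1       :", f1_score(y_true, y_pred))
```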

4.5. Work Flow of Landslide Extraction

In order to realize automatic identification of co-seismic landslides, the following preparatory work is carried out: (1) establishing the co-seismic landslide database, i.e., visual interpretation and cataloging of landslides for the 2018 Iburi earthquake; (2) model construction, in which part of the study area is selected to build samples and train the deep learning landslide classification model, and (3) image classification, in which the trained model is applied to recognize earthquake-triggered landslides across the whole study area and its accuracy is evaluated against the visual interpretation results.
TensorFlow is the second generation of Google's open source software library for numerical computing [55]. ENVI uses it to perform deep learning. Its framework is based on a convolutional neural network (CNN) that looks for spatial and spectral patterns in the image pixels matching the training data provided. Figure 7 shows the workflow of using the ENVI Deep Learning Module, from model building to training to landslide extraction.

5. Results

5.1. Model Training and Validation

The device used in this work is a DELL Precision 7920 Tower with an NVIDIA GeForce GTX 1080 GPU (8 GB memory) running Windows. The training time of the model is 1 h, 40 min and 41 s. The curves of training accuracy, training loss, training precision and training recall are shown in Figure 8. Ideally, the training loss should decrease rapidly in the first few epochs and then converge toward 0 as the number of epochs increases.
Similarly, the verification accuracy, loss, precision and recall curves are shown in Figure 9. ENVI retains the model from the epoch with the lowest validation loss, that is, the epoch with the best match between the classifier and the verification data. With smoothing set to 0, Figure 9 shows that the loss of the fifth epoch is the lowest, so the trained model has a loss of 0.1507, accuracy of 0.8815, precision of 0.7965, recall of 0.8638 and F1 of 0.8288.
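In a Keras setting, keeping the model from the epoch with the lowest validation loss corresponds to a ModelCheckpoint callback; a brief sketch (the output file name is hypothetical):

```python
# Keep only the weights from the epoch with the lowest validation loss,
# analogous to how ENVI selects the trained model (epoch 5 in this work).
import tensorflow as tf

checkpoint = tf.keras.callbacks.ModelCheckpoint(
    "best_landslide_model.h5",    # hypothetical output file
    monitor="val_loss",
    save_best_only=True,
    mode="min",
)
# Used as: model.fit(..., callbacks=[checkpoint])
```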

5.2. Image Classification

The trained model can be used to identify landslides in other images. The model obtained in this work was applied to the image of the whole study area; on the same DELL Precision 7920 Tower with the NVIDIA GeForce GTX 1080 GPU (8 GB memory) under Windows, recognition takes 1 min and 33.40 s. After recognition, we obtain a class activation raster (Figure 10). Each pixel in this grayscale image roughly represents the probability of belonging to a landslide, ranging from 0 to 1. White areas in the class activation raster represent areas with high probability, i.e., areas identified as landslides. The result produced by the trained model is thus a landslide probability map; the larger the value, the more confident the identification.
The classification results can be exported as a polygon shapefile for analysis, evaluation and other related work. To prepare a final landslide identification map, a threshold must be set: the minimum value in the class activation raster used to differentiate feature pixels from background pixels. If the feature class value of a given pixel is greater than or equal to the threshold, the pixel is designated as that feature class; pixels with probability values above the threshold are thus classified as landslides. Generally speaking, a lower threshold identifies more real landslides but lowers accuracy and increases misjudgment, so choosing an appropriate threshold is important. The threshold in this work is set to 0.56.
This work adopts the Otsu automatic thresholding method [56], which is based on discriminant analysis and uses the zero-order and first-order cumulative moments of the histogram to calculate the threshold level. The threshold sets the minimum value in the class activation raster used to distinguish feature pixels from background pixels: if the feature class value of a given pixel is greater than or equal to the threshold, the pixel is designated as a landslide; otherwise it is assigned to the background.
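Outside ENVI, the same Otsu step can be reproduced with scikit-image on an exported class activation raster; a minimal sketch, with class_activation.tif as a hypothetical file name:

```python
# Otsu thresholding of the class activation (landslide probability) raster.
import rasterio
from skimage.filters import threshold_otsu

with rasterio.open("class_activation.tif") as src:
    cam = src.read(1)                         # probability band, values in [0, 1]

t = threshold_otsu(cam)                       # data-driven threshold level
landslide_mask = cam >= t                     # True = landslide, False = background
print(f"Otsu threshold: {t:.2f}; landslide pixels: {int(landslide_mask.sum())}")
```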
By trying different thresholds, the best threshold for extracting landslides is determined. Figure 11 shows the post-earthquake image and the visual interpretation of a selected area within the study area. Figure 12 shows the results of setting the threshold from 0.1 to 0.9. Comparison with the visual interpretation shows that thresholds between 0.5 and 0.6 perform best, so Figure 13 shows the classification results for thresholds of 0.51–0.59. Although the differences among these results are not very obvious, the result with a threshold of 0.56 is considered the best, so the threshold for converting the classification result to a shapefile is set to 0.56 (Figure 14).
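The threshold sweep itself can be expressed as scoring each candidate value against the visually interpreted reference mask and keeping the best one; a sketch, assuming reference_mask is a boolean array rasterized from the manual interpretation and cam is the class activation raster from the previous example:

```python
# Score candidate thresholds against the reference mask with the F1 index
# and keep the best one (coarse sweep 0.1-0.9, fine sweep 0.51-0.59).
import numpy as np
from sklearn.metrics import f1_score

def sweep_thresholds(cam, reference_mask, thresholds):
    ref = reference_mask.ravel().astype(int)
    return {round(float(t), 2): f1_score(ref, (cam >= t).ravel().astype(int))
            for t in thresholds}

# coarse = sweep_thresholds(cam, reference_mask, np.arange(0.1, 1.0, 0.1))
# fine   = sweep_thresholds(cam, reference_mask, np.arange(0.51, 0.60, 0.01))
# best_threshold = max(fine, key=fine.get)     # 0.56 in this work
```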
In order to further illustrate the mapping results of the model in the study area, four typical regions in the image are selected (Figure 15). Because the automatically identified landslides are not compared with landslides that existed before the earthquake, this work only analyzes the landslide identification performance in the image, regardless of whether a given landslide was induced by this earthquake. The results in Figure 15A,B indicate that in areas with dense co-seismic landslides and obvious characteristics, landslide identification is very good and the landslide boundaries are accurately recognized with only a few omissions. Figure 15C shows that even for small landslides that are difficult to distinguish by eye, the model performs well and extracts the boundaries accurately. Figure 15D shows that landslide characteristics are less obvious where farmland and unpaved roads lie close to towns and villages: there are some omissions and misjudgments, some farmland and unpaved roads are identified as landslides, some boundaries are not extracted accurately, and the identified landslides are smaller than the actual ones. Nevertheless, the overall effect is still good.

6. Discussion

The deep learning method can extract deep hierarchical features from the data step by step and supports pixel-level classification. Because the deep learning model has self-learning ability, training samples can be added at any time and the model can be applied to different data.
Much previous work has focused on using the ENVI Deep Learning Module to extract buildings, roads, aircraft, glaciers and so on [57], while it has rarely been applied to the extraction of co-seismic landslides. This research shows that a deep learning model trained in ENVI can extract co-seismic landslides across a whole earthquake-affected area with a complex environment and with good precision, which will be helpful for emergency rescue and disaster investigation.
In traditional machine learning classification, feature selection (such as NDVI and texture information) is the key to identifying landslides [58,59]. The selection of the most effective features as the input of the machine learning model largely depends on the experience of experts. The deep learning model is developed on the basis of large data sets and high computing power. When training the deep learning model, the model input is the label raster that can automatically generate semantic features, which are used to identify landslides. Compared with the traditional machine learning model, the main advantage of the deep learning model is to transform low-level (spectral) features into high-level (semantic) features.
In the process from creating the initialization model to training the landslide extraction model, the choice of parameters has a certain impact on the final trained model. Training a deep learning model involves random processes, so even with identical parameters, different models will be obtained. This is related to the way the algorithm converges and is a fundamental part of the training process.
The accuracy of landslide extraction in this study is high, and the model can easily distinguish landslides from dense vegetation. However, there is a small amount of bare land in the study area whose spectrum is similar to that of landslides, and it is still difficult to accurately separate landslides from such exposed ground using spectral information alone. The images of the study area are of good quality, with no cloud interference and few water bodies or unpaved roads; because the area is heavily vegetated, the contrast between landslides and vegetation is obvious and the trained model performs well, but misjudgments may occur when the model is applied to images containing features with similar spectral characteristics, such as water bodies and unpaved roads. The amount of sample data in this work is also small; landslide databases from other earthquake events could be used to further train the model and improve its precision. This work uses three bands of the remote sensing images (red, green and blue); in future research, other band combinations could be tried to examine which model extracts seismic landslides more accurately. For example, Qi et al. used the near infrared, red and green bands and obtained relatively good accuracy [49]. The new version of the ENVI Deep Learning Module adds a multi-class architecture that supports extracting multiple categories of targets at once, so a model could be trained to extract landslides, clouds, unpaved roads and water bodies at the same time to improve the accuracy of landslide extraction. If the extraction of damaged buildings and roads is added, the model could provide this disaster information very soon after an earthquake to support emergency rescue. Landslides that existed before the earthquake can be excluded by combining NDVI and DEM data from before and after the earthquake [60,61]. It is also possible to extend ENVI with IDL or other languages to improve the automatic extraction model.
This work demonstrates that the automatic extraction method is effective in dealing with co-seismic landslides in a whole earthquake area characterized by large-scale and complex environments. It can also be used to identify other types of landslides, such as regional landslides triggered by heavy rainfall. The purpose of this paper is to provide a scheme for quickly identifying landslides immediately after a major earthquake to obtain landslide cataloging maps. It is also possible to consider the extraction of landslides in different regions, integrate different environmental information and improve the accuracy of model extraction. This method can also be used to analyze the sensitivity of landslide identification and other related work. In future work, landslide samples of other earthquake events can be increased to continue to train the model to improve its accuracy and applicability to different regions.

7. Conclusions

This paper uses the ENVI Deep Learning Module to train an automatic identification model for co-seismic landslides. Our work shows a good extraction effect for the landslides triggered by the Hokkaido Iburi earthquake in Japan, which occurred in an area with a complex environment. The loss of the model on the verification data set is 0.1507, the accuracy is 0.8815, the precision is 0.7965, the recall is 0.8638 and the F1 is 0.8288. In the future, landslide databases from other earthquake events can be used to train the model to improve its accuracy, and the model can be applied to the identification of rainfall-induced and other types of landslides. ENVI's deep learning capability has strong learning and transfer ability, and this kind of approach is a promising direction for remote sensing image classification. The ENVI deep learning training model has the advantages of simple operation and low error probability, and it can be put into use quickly. The method and results meet the timeliness requirements of an earthquake emergency and provide support for life rescue and rapid disaster assessment after an earthquake. They can also lay a foundation for follow-up landslide risk assessment and disaster investigation.
At present, all methods for remote sensing image classification have limitations, and none is absolutely the best. According to the spectral features, texture features and required accuracy of the remote sensing images, a variety of classification algorithms can be combined to optimize the deep learning training model, improving classification efficiency as much as possible while ensuring accuracy.

Author Contributions

C.X. proposed the research concept, organized the landslide interpretation work and provided the basic data. P.Z. designed the framework of this research, conducted the experiments and wrote the manuscript. The other authors participated in landslide interpretation. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Key Research and Development Program of China (2018YFC1504703).

Acknowledgments

We thank Planet for providing us with pre- and post-earthquake images. We also express our gratitude to the anonymous reviewers for their comments and suggestions, which improved the quality of our paper.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

References

  1. Keefer, D.K. Landslides caused by earthquakes. Geol. Soc. Am. Bull. 1984, 95, 406–421. [Google Scholar] [CrossRef]
  2. Xu, C.; Xu, X.; Zhou, B.; Shen, L. Probability of coseismic landslides: A new generation of earthquake-triggered landslide hazard model. J. Eng. Geol. 2019, 27, 1122–1130. [Google Scholar]
  3. Li, H.; Xu, Q.; He, Y.; Fan, X.; Li, S. Modeling and predicting reservoir landslide displacement with deep belief network and EWMA control charts: A case study in Three Gorges Reservoir. Landslides 2019, 17, 693–707. [Google Scholar] [CrossRef]
  4. Ghorbanzadeh, O.; Meena, S.R.; Blaschke, T.; Aryal, J. UAV-based slope failure detection using deep-learning convolutional neural networks. Remote Sens. 2019, 11, 2046. [Google Scholar] [CrossRef] [Green Version]
  5. Li, Q.; Zhang, J.; Luo, Y.; Jiao, Q. Recognition of earthquake-induced landslide and spatial distribution patterns triggered by the Jiuzhaigou earthquake in August 8, 2017. J. Remote Sens. 2019, 23, 785–795. [Google Scholar]
  6. Prakash, N.; Manconi, A.; Loew, S. Mapping landslides on EO data: Performance of deep learning models vs. traditional machine learning models. Remote Sens. 2020, 12, 346. [Google Scholar] [CrossRef] [Green Version]
  7. Ouyang, Z.; Xu, W.; Wang, X.; Wang, W.; Dong, R.; Zheng, H.; Li, D.; Li, Z.; Zhang, H.; Zhuang, C. Impact assessment of Wenchuan Earthquake on ecosystems. Acta Ecol. Sin. 2008, 28, 5801–5809. [Google Scholar]
  8. Parker, R.N.; Densmore, A.L.; Rosser, N.J.; de Michele, M.; Li, Y.; Huang, R.; Whadcoat, S.; Petley, D.N. Mass wasting triggered by the 2008 Wenchuan earthquake is greater than orogenic growth. Nat. Geosci. 2011, 4, 449–452. [Google Scholar] [CrossRef] [Green Version]
  9. Marc, O.; Hovius, N. Amalgamation in landslide maps: Effects and automatic detection. Nat. Hazards Earth Syst. Sci. 2015, 15, 723–733. [Google Scholar] [CrossRef] [Green Version]
  10. Xu, C. Automatic extraction of earthquake-triggered landslides based on maximum likelihood method and its validation. Chin. J. Geol. Hazard Control 2013, 24, 19–25. [Google Scholar]
  11. Chen, W.; Hou, Y.; Li, N.; Zhong, C.; Amu, L.; Chen, C.; Sun, J.; Li, H. Post-Earthquake Landslides Detection in Nepal Based on Principal Component Analysis (PCA). J. Yangtze River Sci. Res. Inst. 2020, 37, 166–171, (In Chinese with English abstract). [Google Scholar] [CrossRef]
  12. Li, Z.; Li, Y.; Guo, J.; Zhang, S.; Liu, K. An automatic landslide interpretation model of UAV imagery based on BoW. Remote Sens. Inf. 2016, 31, 24–29. [Google Scholar] [CrossRef]
  13. Fu, W.; Hong, J. Discussion on application of support vector machine technique in extraction of information on landslide hazard from remote sensing images. Res. Soil Water Conserv. 2006, 13, 120–121+124. [Google Scholar]
  14. Fu, X.; Guo, J.; Liu, X.; Lu, H.; Yang, Z.; Xiang, X. Method of earthquake landslide information extraction based on high resolution unmanned aerial vehicle images. J. Seismol. Res. 2018, 41, 186–191. [Google Scholar]
  15. Guo, J.; Li, Y.; Li, Z.; Liu, K.; Zhang, S. An automatic interpretation model for mountains landslide disaster of high-resolution remote sensing images based on transfer learning. J. Geomat. Sci. Technol. 2016, 33, 496–501. [Google Scholar] [CrossRef]
  16. Cheng, K.S.; Wei, C.; Chang, S.C. Locating landslides using multi-temporal satellite images. Adv. Space Res. 2004, 33, 296–301. [Google Scholar] [CrossRef]
  17. Hervás, J.; Barredo, J.I.; Rosin, P.L.; Pasuto, A.; Franco, M.; Silvano, S. Monitoring landslides from optical remotely sensed imagery: The case history of Tessina landslide, Italy. Geomorphol. 2003, 54, 63–75. [Google Scholar] [CrossRef]
  18. Nichol, J.; Wong, M.S. Satellite remote sensing for detailed landslide inventories using change detection and image fusion. Int. J. Remote Sens. 2005, 26, 1913–1926. [Google Scholar] [CrossRef]
  19. Park, N.W.; Chi, K.H. Quantitative assessment of landslide susceptibility using high-resolution remote sensing data and a generalized additive model. Int. J. Remote Sens. 2008, 29, 247–264. [Google Scholar] [CrossRef]
  20. Labrinidis, A.; Jagadish, H.V. Challenges and Opportunities With Big Data. In Proceedings of the Vldb Endowment; VLDB Endowment: Istanbul, Turkey, 2012; pp. 2032–2033. [Google Scholar]
  21. Deng, S. ENVI Remote Sensing Image Processing Method; Science Press: Peking, China, 2010. [Google Scholar]
  22. Budha, P.B.; Bhardwaj, A. Landslide extraction from sentinel-2 image in Siwalik of Surkhet District, Nepal. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, 4, 9–15. [Google Scholar] [CrossRef] [Green Version]
  23. Ding, H. Study on Landslides Geo-Hazard Zoning Base on Remote Sensing. Ph.D. Thesis, Chang’an University, Xi’an, China, 2011. [Google Scholar]
  24. Huang, T.; Bai, X.; Zhuang, Q.; Xu, J. Research on landslides extraction based on the Wenchuan Earthquake in GF-1 remote sensing image. Bull. Surv. Mapp. 2018, 82, 67–71. [Google Scholar]
  25. Jiao, Q.S.; Luo, Y.; Shen, W.H.; Li, Q.; Wang, X. Rapid extraction of landslide and spatial distribution analysis after Jiuzhaigou Ms 7.0 earthquake based on UAV images. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, 42, 685–690. [Google Scholar] [CrossRef] [Green Version]
  26. Li, J. Landslide Information Extraction and Risk Assessment of High Resolution Imagery in Weizhou Town, Wenchuan County. Master’s Thesis, Chengdu University of Technology, Chengdu, China, 2019. [Google Scholar]
  27. Li, X. Loess Landsliderecognition Bases on Remote Sensing Image and DEM-a Case Study of North Mountain and South Mountain in Tianshui. Master’s Thesis, Lanzhou University, Lanzhou, China, 2017. [Google Scholar]
  28. Lin, L. Research on Information Extraction Methods of Landslide in the Red-Bed of Mayang. Master’s Thesis, China University of Geosciences, Peking, China, 2020. [Google Scholar]
  29. Lin, Q.; Zou, Z.; Zhu, Y.; Wang, Y. Object-oriented detection of landslides based on the spectral, spatial and morphometric properties of landslides. Remote Sens. Technol. Appl. 2017, 32, 931–937. [Google Scholar] [CrossRef]
  30. Zhang, Q. Landslide Recognition Based on High Resolution Remote Sensing Images in Heifangtai. Master’s Thesis, Chang’an University, Xi’an, China, 2017. [Google Scholar]
  31. Cheng, T.; Hu, Z.; Wei, L.; Hu, S. Data processing and landslide information extraction based on UAV remote sensing. J. Geo Inf. Sci. 2017, 19, 692–701. [Google Scholar] [CrossRef]
  32. Li, Q.; Zhang, W.; Yi, Y. An information extraction method of earthquake-induced landslide: A case study of the Jiuzhaigou earthquake in 2017. J. Univ. Chin. Acad. Sci. 2020, 37, 93–102. [Google Scholar]
  33. Lu, H.; Ma, L.; Fu, X.; Liu, C.; Wang, Z.; Tang, M.; Li, N. Landslides information extraction using object-oriented image analysis paradigm based on deep learning and transfer learning. Remote Sens. 2020, 12, 752. [Google Scholar] [CrossRef] [Green Version]
  34. Wang, N.; Chen, F.; Yu, B. A Object-oriented landslide extraction method based on morphological opening operation. Remote Sens. Technol. Appl. 2018, 33, 520–529. [Google Scholar]
  35. Wang, X.; Lu, H.; Liu, X.; Yang, Z.; Xiang, X.; Cai, S. Rapid detection of seismic landslide information based on SHALSTAB model and object-oriented remote sensing image. J. Seismol. Res. 2019, 42, 273–279+306. [Google Scholar]
  36. Zhang, S. The method of landslide extraction with high resolution remote sensing image combining change detection and object oriented method. Master’s Thesis, Southwest Jiaotong University, Chengdu, China, 2017. [Google Scholar]
  37. Gao, H. Object-Oriented Classification Based on Deep Features for High Resolution Remotely Sensed Imagery. Master’s Thesis, University of Chinese Academy of Sciences (Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences), Peking, China, 2018. [Google Scholar]
  38. Hao, Y. Object-based image classification of post-earthquake high resolution imagery based on deep learning. Master’s Thesis, China University of Petroleum (East China), Tsingtao, China, 2018. [Google Scholar]
  39. Li, Y. Research on Landslide Detection Algorithm Based on Deep Learning. Master’s Thesis, Chengdu University of Technology, Chengdu, China, 2018. [Google Scholar]
  40. Yu, B.; Chen, F.; Xu, C. Landslide detection based on contour-based deep learning framework in case of national scale of Nepal in 2015. Comput. Geosci. 2020, 135, 104388. [Google Scholar] [CrossRef]
  41. Zhao, P.; Li, J.; Kang, F. Fast recognition method for mountain hazards in river courses based on convolutional neural networks. Hydro Sci. Eng. 2019, 65–70. [Google Scholar]
  42. Zhang, M.; Jia, J.; Wang, Y.; Niu, Q.; Mao, Y.; Dong, Y. Construction of geological disaster prevention and control system based on AI. Northwest. Geol. 2019, 52, 103–116. [Google Scholar]
  43. Ji, S.; Yu, D.; Shen, C.; Li, W.; Xu, Q. Landslide detection from an open satellite imagery and digital elevation model dataset using attention boosted convolutional neural networks. Landslides 2020, 17, 1337–1352. [Google Scholar] [CrossRef]
  44. Mutlu, B.; Nefeslioglu, H.A.; Sezer, E.A.; Akcayol, M.A.; Gokceoglu, C. An experimental research on the use of recurrent neural networks in landslide susceptibility mapping. ISPRS Int. J. Geo Inf. 2019, 8, 578. [Google Scholar] [CrossRef] [Green Version]
  45. Shi, W.; Zhang, M.; Ke, H.; Fang, X.; Zhan, Z.; Chen, S. Landslide recognition by deep convolutional neural network and change detection. IEEE Trans. Geosci. Remote Sens. 2020, 1–19. [Google Scholar] [CrossRef]
  46. Ye, C.; Li, Y.; Cui, P.; Li, L.; Pirasteh, S.; Marcato, J.; Goncalves, W.N.; Li, J. Landslide detection of hyperspectral remote sensing data based on deep learning with constrains. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 1–14. [Google Scholar] [CrossRef]
  47. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation; Springer International Publishing: Cham, Switzerland, 2015; pp. 234–241. [Google Scholar]
  48. Liu, P.; Wei, Y.; Wang, Q.; Chen, Y.; Xie, J. Research on post-earthquake landslide extraction algorithm based on improved U-Net model. Remote Sens. 2020, 12, 894. [Google Scholar] [CrossRef] [Green Version]
  49. Qi, W.; Wei, M.; Yang, W.; Xu, C.; Ma, C. Automatic mapping of landslides by the ResU-Net. Remote Sens. 2020, 12, 2487. [Google Scholar] [CrossRef]
  50. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  51. Shao, X.; Ma, S.; Xu, C.; Zhang, P.; Wen, B.; Tian, Y.; Zhou, Q.; Cui, Y. Planet image-based inventorying and machine learning-based susceptibility mapping for the landslides triggered by the 2018 Mw6.6 Tomakomai, Japan Earthquake. Remote Sens. 2019, 11, 978. [Google Scholar] [CrossRef] [Green Version]
  52. Yamagishi, H.; Yamazaki, F. Landslides by the 2018 Hokkaido Iburi-Tobu Earthquake on September 6. Landslides 2018, 15, 2521–2524. [Google Scholar] [CrossRef] [Green Version]
  53. Amante, C.; Eakins, B.W. ETOPO1 1 Arc-Minute Global Relief Model: Procedures, Data Sources and Analysis; US Department of Commerce, National Oceanic and Atmospheric Administration, National Environmental Satellite, Data, and Information Service, National Geophysical Data Center, Marine Geology and Geophysics Division: Boulder, CO, USA, 2009. Available online: https://www.ngdc.noaa.gov/mgg/global/relief/ETOPO1/docs/ETOPO1.pdf (accessed on 10 January 2019).
  54. Team, P. Planet Application Program Interface: In Space for Life on Earth. Planet Company: San Francisco, CA, USA, 2018. Available online: https://api.planet.com (accessed on 10 January 2019).
  55. Li, J. Getting Started and Best Practices With TensorFlow for Deep Learning; China Machine Press: Peking, China, 2018. [Google Scholar]
  56. Otsu, N. A threshold selection method from Gray-Level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef] [Green Version]
  57. Li, D. Application research in glacier information extraction based on U-NET Model of deep learning. Master’s Thesis, China University of Geosciences, Peking, China, 2020. [Google Scholar]
  58. Lu, P.; Qin, Y.; Li, Z.; Mondini, A.C.; Casagli, N. Landslide mapping from multi-sensor data through improved change detection-based Markov random field. Remote Sens. Environ. 2019, 231. [Google Scholar] [CrossRef]
  59. Mondini, A.C.; Guzzetti, F.; Reichenbach, P.; Rossi, M.; Cardinali, M.; Ardizzone, F. Semi-automatic recognition and mapping of rainfall induced shallow landslides using optical satellite images. Remote Sens. Environ. 2011, 115, 1743–1757. [Google Scholar] [CrossRef]
  60. Fiorucci, F.; Ardizzone, F.; Mondini, A.C.; Viero, A.; Guzzetti, F. Visual interpretation of stereoscopic NDVI satellite images to map rainfall-induced landslides. Landslides 2018, 16, 165–174. [Google Scholar] [CrossRef]
  61. Tang, C.; Tanyas, H.; van Westen, C.J.; Tang, C.; Fan, X.; Jetten, V.G. Analysing post-earthquake mass movement volume dynamics with multi-source DEMs. Eng. Geol. 2018, 248, 89–101. [Google Scholar] [CrossRef]
Figure 1. Map showing the study area. The base map is from Google Earth. The topographic background is from National Geophysical Data Center, NOAA [53].
Figure 2. Aerial photos of co-seismic landslides, damaged houses, buried roads and destroyed farmland in the affected area of the 2018 Iburi earthquake, taken by Asia Air Survey and Aero Asahi Corporation [52].
Figure 3. High-resolution remote sensing images: (A) before the Iburi earthquake (3 August 2018) and (B) after the Iburi earthquake (11 September 2018).
Figure 4. Visual interpretation of co-seismic landslides using remote sensing images: (A) before the Iburi earthquake (3 August 2018) and (B) after the Iburi earthquake (11 September 2018). Red polygons show the boundaries of individual landslides triggered by the earthquake.
Figure 5. Label raster of the training area (A) and verification area (B).
Figure 6. ENVINet5 architecture (image from ENVI Deep Learning 1.1.1 Help). The yellow box represents the input patch, the green boxes represent feature maps, the blue arrows indicate 3 × 3 convolutions for feature extraction, the purple arrows indicate merging for feature fusion, the green arrows indicate max pooling for downsampling, the red arrows represent up-convolutions for dimension recovery and the yellow arrow represents the 1 × 1 convolution for the output results.
Figure 7. Workflow of extracting landslides based on the ENVI Deep Learning Module.
Figure 8. Line chart of the accuracy, loss, precision and recall in the training data set experiment.
Figure 9. Line chart of the accuracy, loss, precision and recall in the validation data set experiment.
Figure 10. Image class activation raster.
Figure 11. (A) Post-earthquake image. (B) Results of manual visual interpretation of the same region shown in (A).
Figure 12. Classification results with thresholds of 0.1–0.9 (panels (A–I) correspond to thresholds of 0.1, 0.2, ..., 0.9, respectively). Yellow polygons show the result of automatic recognition and red polygons indicate manual visual interpretation.
Figure 13. Classification results with thresholds of 0.51–0.59 (panels (A–I) correspond to thresholds of 0.51, 0.52, ..., 0.59, respectively). Yellow polygons show the result of automatic recognition and red polygons indicate manual visual interpretation.
Figure 14. Landslide automatic cataloging map with threshold 0.56.
Figure 15. Recognition results of the model in four subregions (A–D). The red polygons represent the result of manual visual interpretation, and the yellow polygons represent the automatic extraction result.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
