Article

Damage Signature Generation of Revetment Surface along Urban Rivers Using UAV-Based Mapping

1 School of Water Resources & Environmental Engineering, East China University of Technology, Nanchang 330013, China
2 School of Geomatics, East China University of Technology, Nanchang 330013, China
3 Key Laboratory of Watershed Ecology and Geographical Environment Monitoring, National Administration of Surveying, Mapping and Geoinformation, Nanchang 330013, China
* Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2020, 9(4), 283; https://doi.org/10.3390/ijgi9040283
Submission received: 2 March 2020 / Revised: 17 April 2020 / Accepted: 22 April 2020 / Published: 24 April 2020
(This article belongs to the Special Issue Geo-Information Technology and Its Applications)

Abstract: Comprehensive inspection of the geometric structures of revetments along urban rivers by conventional field visual inspection is technically complex and time-consuming. In this study, an approach using dense point clouds derived from low-cost unmanned aerial vehicle (UAV) photogrammetry is proposed to recognize the signatures of revetment damage automatically and efficiently. To recover the finely detailed surface of a revetment quickly and accurately, an object space-based dense matching approach, namely region growing coupled with semi-global matching, is exploited to generate pixel-by-pixel dense point clouds that characterize the signatures of revetment damage. Damage recognition is then conducted using a proposed operator, a self-adaptive and multiscale gradient operator, designed to extract damaged regions of different sizes from the slope intensity image of the revetment. A revetment with slope protection along urban rivers was selected to evaluate the performance of damage recognition. The results indicate that the proposed approach is an effective alternative to field visual inspection for revetment damage recognition along urban rivers: it not only recovers the finely detailed surface of the revetment but also remarkably improves the accuracy of damage recognition.

1. Introduction

Revetment systems in urban rivers are constructed to protect riverbanks, infrastructures, and people, in an effort to control floods. Revetments are usually designed as slope protection and covered in concrete [1]. Floods can trigger revetment erosion to weaken revetments continuously and cause damage. In addition, revetments are damaged by complex factors, such as land subsidence, ground collapse, erosion, vegetation presence, riverbed degradation, and human interference [2,3]. Therefore, monitoring the condition of revetments is an essential task in the management of flood defense infrastructure and important in providing evidence for maintenance or improvements [4].
At present, some studies have assessed the condition of revetments using remotely sensed data in countries such as England [4] and France [5], but these methods are not applicable to urban revetment monitoring; in China, revetment condition is assessed by visual inspection conducted by the Municipal Engineering Management Agency. However, field visual inspection is time-consuming and technically complex when complete information on revetments must be obtained. Moreover, assessing the subsurface condition of revetments is difficult because visual inspection relies on observable surface signs, such as surface collapse. Notably, early damage recognition is highly beneficial in enabling maintenance and improvements before further deterioration occurs [4].
Apart from field visual inspection, remote sensing technologies, such as unmanned aerial vehicle (UAV)-based photogrammetry, have become useful techniques for the creation of digital surface models (DSMs) and are widely used in obtaining revetment information owing to their advantages in high-precision three-dimensional (3D) geometry reconstruction [6,7,8]. Terrestrial laser scanning is usually used to monitor revetment damage caused by erosion in small areas rather than in large-scale areas [9,10,11] because it requires frequent measurements that usually involve expensive sensors and field logistics when monitoring large areas. For instance, airborne laser scanning was used to estimate the volume change in river valley walls caused by revetment erosion [12], and point clouds were used to analyze the protection of a revetment rock beach [13]. Pye et al. [14] assessed beach and dune erosion and accretion for coastal management. Ternate et al. [15] modeled water-related structures to assist the design of revetments. Although these sensors enable the generation of dense 3D points for good reconstruction of the geometric structure for revetment monitoring, point clouds cannot directly provide the color texture of the revetment and are thus less intuitive for damage interpretation. Moreover, noise in the point clouds is difficult to remove: a portion of the revetment surface is typically covered with vegetation (e.g., grass), which appears as 3D points with fluctuating height values within the dense point clouds. Other platforms have been used for revetment monitoring [16], but they are unsuitable in areas with shallow water, such as urban rivers. Meanwhile, 3D point clouds derived from these sensors may be much more expensive than their image-derived counterparts, such as those from low-cost consumer-grade UAV-based photogrammetry [17,18,19].
In addition, image-derived 3D point clouds from UAV photogrammetry can capture the spatially detailed structure of the ground surface and offer competitive accuracy compared with laser scanning-based products [20]. Unlike laser scanning, aerial photogrammetry requires ground control points (GCPs), and GCP measurement is a time-consuming task. Fortunately, GCPs can be marked and measured once in advance; that is, multiple measurements are not required to collect GCPs for absolute orientation. Therefore, although laser scanning can produce high-resolution, dense 3D point clouds, it requires more complex operations and incurs a higher cost when collecting revetment information on urban rivers than low-cost UAV-based mapping does. In this study, a low-cost UAV platform equipped with a consumer-grade onboard camera (e.g., DJI Phantom quadcopters) is used to show that it is suitable for recognizing damage signatures on finely detailed revetment surfaces.
In recent years, research on UAVs has focused on understanding and modeling revetments, and high-resolution 3D data derived from low-cost UAV mapping have been widely used for the efficient and accurate monitoring of revetments in support of maintenance management strategies [19,21,22,23,24,25]. Hallermann et al. [21] and Kubota et al. [22] used dense point clouds derived from low-cost UAV photogrammetry to visualize the deformation of revetments when assessing structural stability. Pitman et al. [19] derived high-resolution DSMs of revetments from UAV-based mapping with accuracy competitive with real-time kinematic global positioning system (RTK GPS) surveys, opening new possibilities for measuring, monitoring, and understanding revetment deformation relative to traditional geomorphological observation. This method achieves DSMs whose accuracy is approximately equal to that obtained via airborne laser scanning. Although these studies reconstruct a good 3D geometric structure of a revetment using UAV-based mapping for monitoring, they ignored automatic damage recognition from the image-derived point clouds. Moreover, photogrammetric surveying using UAVs has often been used to monitor changes in revetments for river management. Pires et al. [26] combined mapping and photogrammetric surveying in a revetment model to investigate coastal dynamics and shoreline evolution, contributing to coastal management. Jayson et al. [25] used UAV photogrammetry to reconstruct delta revetment topography and analyze changes in beach sediments. Although many applications of low-cost UAV photogrammetry are effective for revetment monitoring, studies on the use of UAV-based mapping for revetment damage recognition along urban rivers have rarely been reported.
Most importantly, the effectiveness and efficiency of UAV-based damage recognition are two indicators that determine whether this approach can be applied. Furthermore, the quality and efficiency of point cloud generation are critical to accurately characterize the surface of a revetment, and the reliability of damage signature generation from the derived point clouds is also equally important to damage recognition in revetments.
Revetments along urban rivers are usually designed as a relatively flat slope or curved surface, that is, the revetment surface is generally a simple irregular surface that can be modeled using a mathematical function. On this basis, this study proposes a dense point cloud-based approach derived from low-cost photogrammetry to extract the signatures of revetment damage from a slope intensity image instead of the prerequisite multitemporal data. For revetment damage recognition along urban rivers, information on damaged and nondamaged revetment surfaces is generally needed for comparison and analysis. In many cases, prior information on nondamaged surfaces is not typically obtained or finely reconstructed in municipal engineering management. Failure to accumulate historical data related to the surface of a revetment may result in poor revetment management due to unclear understanding of damage signatures. As an alternative to applications dependent on multitemporal data [20,25], we exploit an approach for revetment damage recognition that does not require nondamaged surface reconstruction or prior information. On the basis of the assumption that the surface of the revetment has roughly the same slope, dense point clouds are first transformed into a slope intensity image, in which feature extraction is then performed to generate the features of revetment damage. A self-adaptive and multiscale gradient operator (SMGO) is proposed for collecting damage information by using the omnidirectional (horizontal, vertical, and diagonal) operation, especially in feature extraction. SMGO is used to ensure that damage of different scales can be accurately extracted.
This study aims to exploit the workflow of revetment damage recognition along urban rivers through the dense point clouds derived from low-cost UAV photogrammetry, and the proposed point cloud and damage signature generation are both introduced to address damage recognition using UAV-based mapping. The main contribution of this study is the proposed approach based on photogrammetric point clouds, which offers new possibilities in revetment damage recognition. In our approach, pixel-by-pixel dense matching is simultaneously used with the combination of region growing and semi-global matching (SGM), which can reconstruct a finely detailed surface of a revetment by considering the contributions of adjacent 3D object points. In particular, feature image generation based on the proposed SMGO is suitable for recognizing the damage signatures on the surface of a revetment designed with slope protection under the assumption that the majority of the 3D points on the revetment surface remain unchanged and prior information is unnecessary.

2. Study Area and Materials

2.1. Test Site

Nanchang City (28°42′29″N, 115°48′58″E) in Jiangxi Province, China (Figure 1a,b), is the study area of this work. A low-cost quadcopter UAV (DJI Mavic Air; DJI, Shenzhen, China) was used to investigate the revetments along urban rivers, and two parts (450 and 570 m long, respectively) of the concrete revetment in the west of Nanchang City were selected for testing (Figure 1c) for the following reasons. Different types of riverbank defense structures have been constructed along different portions of the bank to manage the impact of lateral fluvial erosion; among such structures, the revetment is typically designed with a slope angle to protect riverbanks and infrastructure in urban rivers [25]. The waterway is often covered with silt and gravel, and large amounts of sediment may mobilize during intense rainfall events and erode the revetment. In addition to mass movements, complex external factors, such as groundwater penetration, also contribute remarkably to revetment erosion and damage. The revetments are characterized morphologically by a slope of approximately 40°. The waterway basement geologically comprises unconsolidated sediments of clay, loose sand, and gravel deposits. The revetments are usually covered with weeds and continuously affected by lateral fluvial erosion, ground collapse, and riverbed degradation.

2.2. Acquisition of UAV Remote-Sensed Images and Measurement of Ground Control Points (GCPs)

Given its low cost and flexible operation, a consumer-grade quadcopter, the DJI Mavic Air (DJI; Shenzhen, China) [27,28] (Figure 2a), was selected to capture high-resolution true-color remote sensing images in August 2019. The DJI Mavic Air is easily carried owing to its 430 g weight and folding design; it does not require a professional take-off and landing site, and the aircraft is simple to operate, allowing flexible flight plans for a variety of missions. The operator can safely monitor the revetment even under ultralow-altitude photogrammetry in urban areas through signature DJI technologies, such as obstacle avoidance and intelligent flight modes [27]. The DJI platform is inexpensive, efficient, and requires minimal expertise. Several user-friendly applications provided by DJI, including the mission planning software package Ground Station Pro, were used to conduct autonomous flights with waypoints and nadir orientation of the consumer-grade camera during the acquisition of stereo remote sensing images [28]. Figure 2b (Part 1) shows an example of the survey extent and some survey parameters in the graphical user interface of this UAV survey application. Parameters such as flight altitude, flight speed, and image overlap can be set on the basis of the survey mission. To acquire high-resolution, non-blurry remote sensing images, a low-altitude flight at 30 m above ground level and a flight speed of 2.8 m/s was conducted to reduce atmospheric and environmental limitations; the resulting ground sample distance was approximately 2.0 cm/pixel. To ensure reliable image matching with large overlaps, the front and side image overlaps were set to 80% and 60%, respectively. Once the flight parameters were set, the UAV operated largely automatically, with the operator acquiring remote sensing images under wind speeds <10 m/s and non-rainy conditions.
The total surveying flight time over the two parts (450 and 570 m) of the concrete revetment along the urban rivers was around 10 and 14 min, respectively (less than the maximum flight time of 21 min), each achievable on one battery charge. The UAV took approximately 232 and 287 images for the two parts to cover the study area, which also includes a buffer of approximately 15 m around the revetment. Moreover, to improve the performance of image matching, the systematic errors and interior orientation of the consumer-grade camera were corrected using the methods of a previous study [28].
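The relationship between forward overlap, exposure spacing, and image count stated above can be sketched as follows. The 40 m along-track footprint used here is an illustrative assumption for a 30 m flight, not the Mavic Air's exact specification:

```python
import math

# Sketch: relating flight parameters to photo spacing and count.
# Footprint value is an illustrative assumption, not the camera's spec.

def photo_spacing(footprint_m: float, overlap: float) -> float:
    """Distance between successive exposures for a given forward overlap."""
    return footprint_m * (1.0 - overlap)

def photos_needed(strip_length_m: float, footprint_m: float, overlap: float) -> int:
    """Number of exposures to cover one strip, ignoring turn-arounds."""
    spacing = photo_spacing(footprint_m, overlap)
    return math.ceil(strip_length_m / spacing) + 1

# Assumed 40 m along-track footprint at 30 m AGL, 80% forward overlap:
spacing = photo_spacing(40.0, 0.80)            # 8 m between exposures
count = photos_needed(450.0, 40.0, 0.80)       # one strip over Part 1
print(spacing, count)
```

With side strips and the 15 m buffer included, the totals rise toward the roughly 232 images actually captured for Part 1.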
Additionally, 40 GCPs were evenly distributed on the revetment. The GCPs were placed across the study area and measured to validate the accuracy of the image-based DSM using RTK GPS. The GCPs were marked on the site, as shown in Figure 2c. Pixel-by-pixel dense point clouds were georeferenced with 5 and 7 GCPs for Parts 1 and 2, respectively. The other 28 GCPs (13 and 15 GCPs for Parts 1 and 2, respectively) were selected as check points (CPs), which were used to evaluate the accuracy of the surface reconstruction of the revetment.
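The accuracy check against the CPs amounts to computing the root-mean-square error of the height residuals between the RTK GPS measurements and the image-derived DSM. A minimal sketch with synthetic residuals (not the study's actual values):

```python
import numpy as np

# Sketch: RMSE of the reconstructed surface at check points (CPs).
# The residuals below are synthetic; in practice dz is the difference
# between RTK-GPS heights and DSM heights at each CP.

def rmse(errors: np.ndarray) -> float:
    return float(np.sqrt(np.mean(np.square(errors))))

dz = np.array([0.02, -0.03, 0.01, 0.04, -0.02])  # height residuals (m)
print(round(rmse(dz), 4))
```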

3. Method

This study aims to exploit the workflow of revetment damage recognition along urban rivers through dense point clouds derived from low-cost UAV photogrammetry, and the proposed point cloud and damage signature generation are both introduced to address damage recognition using UAV-based mapping. The proposed approach demonstrated in Figure 3 mainly includes the following stages:
(1) Photogrammetric technologies are used to generate high-precision pixel-by-pixel dense point clouds for surface reconstruction of the revetment through a series of steps, that is, feature extraction and matching, incremental structure-from-motion (SfM), bundle adjustment, and region growing coupled with SGM.
(2) The slope intensity map of revetment is calculated and generated in terms of the height of the dense point clouds. The areas of revetment on both sides along the urban river are then extracted by segmenting and merging the superpixels, which are generated on the slope intensity map by using a simple linear iterative clustering (SLIC)-based algorithm.
(3) The signature of revetment damage is generated from the slope intensity image through vegetation removal, omnidirectional gradient operation and nonmaximum suppression, and denoising.
(4) Accuracy assessment is performed to validate the accuracy of the dense point clouds derived from the algorithm (i.e., region growing coupled with SGM) and evaluate the performance of revetment damage recognition along the urban rivers with quantitative analysis (e.g., indicators such as Precision, Recall, and F1_score) and visual assessment (i.e., ground field observation).
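The quantitative indicators named in stage (4) can be computed from per-pixel counts of true positives (TP), false positives (FP), and false negatives (FN) for the damaged class. The counts used below are illustrative:

```python
# Sketch of the Precision, Recall, and F1_score indicators in stage (4).

def precision_recall_f1(tp: int, fp: int, fn: int):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

p, r, f1 = precision_recall_f1(tp=90, fp=10, fn=30)
print(p, r, round(f1, 3))
```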

3.1. Surface Reconstruction of Revetment

Cameras mounted on low-cost UAVs (e.g., consumer-grade DJI Phantom quadcopters) have large perspective distortions and poor camera geometry [28,29], which may cause systematic errors that must be eliminated by distortion correction for each UAV remote sensing image. Similar to previous studies [18], the digital camera should be calibrated strictly before aerial photography. Distortion correction is then performed using the camera parameters and two radial and two tangential distortion coefficients, which are calculated from several views of a two-dimensional (2D) calibration pattern. These parameters are further optimized by the subsequent self-calibrating bundle adjustment.
Similar to previous studies, feature extraction and matching are performed using a sub-Harris operator coupled with the scale-invariant feature transform algorithm, which can find evenly distributed corresponding points even in overlapping areas of remote sensing images with illumination changes and weak texture [30]. In traditional aerial photogrammetry, the poses of the airborne camera, that is, its positions and orientations, must be known to provide initial exterior orientation parameters for aerial triangulation. However, low-altitude platforms, such as low-cost consumer-grade UAVs, are usually not equipped with high-precision devices for obtaining camera positions and orientations. Hence, traditional aerial triangulation relying on initial exterior orientation parameters may be unavailable for UAV-based aerial triangulation. UAV-based SfM algorithms have been applied to bank retreat at streams and can generate DSMs with smaller errors than those obtained with terrestrial laser scanning [31]. Therefore, SfM is used to estimate the poses of the airborne camera and reconstruct a sparse 3D geometry from the overlapping images without initial exterior orientation parameters [18]. Notably, incremental SfM [32,33] is employed in this study to reconstruct the sparse 3D model incrementally and iteratively because it allows 3D reconstruction in an incremental process with repeated self-calibrating bundle adjustments (i.e., sparse bundle adjustment software [34]) to optimize the 3D model and the interior and exterior orientation parameters.
Unlike DSM generation via interpolation of point-based elevation data [35], a novel method of region growing coupled with SGM for dense matching illustrated in Figure 4 is exploited to generate the pixel-by-pixel dense point clouds and reconstruct the finely detailed surface of the revetment. The SGM algorithm is a popular technique to minimize the image matching cost along several one-dimensional path directions through the images for image-based 3D reconstruction, and it may significantly increase the computational expense for most mapping applications which mainly deal with sets of overlapping images. Damage recognition of revetments along urban rivers requires high implementation efficiency, which is also highly valuable in enabling maintenance and improvements in advance before further deterioration occurs. SGM-based matching is one of the most time-consuming steps in photogrammetric point cloud generation, and thus it is of great significance to improve the efficiency of this step.
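The SGM cost minimization along one-dimensional paths mentioned above follows a standard recurrence: the aggregated cost at a pixel equals its matching cost plus the cheapest transition from the previous pixel on the path, with a small penalty P1 for a disparity change of one and a larger penalty P2 for any larger jump. A minimal single-path sketch with illustrative costs and penalties (not the paper's actual values):

```python
import numpy as np

# Minimal sketch of SGM cost aggregation along one 1D path.
# cost: (n_pixels, n_disparities) matching-cost slice along the path.

def aggregate_path(cost: np.ndarray, p1: float = 1.0, p2: float = 4.0) -> np.ndarray:
    n, d = cost.shape
    agg = np.zeros_like(cost, dtype=float)
    agg[0] = cost[0]
    for i in range(1, n):
        prev = agg[i - 1]
        best_prev = prev.min()
        # candidate transitions: same disparity, +-1 disparity with P1,
        # or any disparity with P2
        shifted = np.full((3, d), np.inf)
        shifted[0] = prev
        shifted[1, 1:] = prev[:-1] + p1
        shifted[2, :-1] = prev[1:] + p1
        candidate = np.minimum(shifted.min(axis=0), best_prev + p2)
        agg[i] = cost[i] + candidate - best_prev   # subtract to bound growth
    return agg

costs = np.array([[5, 1, 5], [5, 2, 4], [6, 1, 5]], dtype=float)
disp = aggregate_path(costs).argmin(axis=1)      # winner-take-all per pixel
print(disp.tolist())
```

Full SGM sums such aggregations over several path directions before the winner-take-all step; the object space-based variant described next performs the analogous optimization over voxels.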
In this paper, to reduce the computational expense of redundant point clouds, an object space-based dense matching approach is exploited to satisfy the need for rapid 3D reconstruction of revetments. Unlike photogrammetry software such as Agisoft Metashape and Pix4Dmapper, the height of each grid cell on the revetment is used in pixel-by-pixel dense matching while considering the impact of adjacent 3D object points, thereby obtaining the finely detailed surface of the revetment. Unlike many computer vision applications, our study focuses on the 3D reconstruction (i.e., height values) of the top surface of the ground. Rather than generating normalized stereo image pairs, the approach performs dense matching in the voxel object space; hence, there is no need to derive the matching cost in image space. Instead, all images can be used to compute the matching cost directly on each voxel, which is more suitable for UAV-based mapping applications. That is, semi-global optimization can be performed in voxel space, and image-based dense point clouds can be obtained directly. Unlike most SGM-based image matching [36,37], the sparse point clouds obtained from bundle adjustment are used to simplify the image-based point cloud generation procedure. Specifically, by using prior knowledge of the reconstructed objects, the search scope for corresponding points can be narrowed in the voxel space, which improves the accuracy and efficiency of reconstruction. In this regard, the height values on vertical sides are not essential; the object space-based approach computes only the height values on the top surface, which reduces the computational expense of redundant point clouds. The object space-based dense matching approach used in this paper is therefore suitable for accurate, rapid, and cost-effective revetment damage recognition.
The proposed region growing coupled with SGM mainly includes: (a) a triangulated irregular network (TIN) in 3D space is generated to initialize the 3D object surface; (b) inverse distance weighted interpolation is used to obtain initial height values; and (c) a region growing strategy is explored to gradually generate the pixel-by-pixel dense point clouds for surface reconstruction of revetment considering accuracy and efficiency.
Our method proceeds as follows. Let $obj_{set}^{sparse}$ denote the set of 3D sparse points derived from SfM, and let $P_{obj}^{i}(X,Y,Z)$ denote the $i$th 3D point in $obj_{set}^{sparse}$ at object position $(X,Y,Z)$, with $i \in \{1,\dots,N\}$. As shown in Figure 5a, an object point obtained from the sparse 3D model is related to $n$ 2D UAV remote sensing images, where $n \geq 2$. We denote by $p_{img}^{ij}(x,y)$ the corresponding position in the $j$th UAV remote sensing image at image coordinates $(x,y)$. In this study, the corresponding points in the UAV remote sensing images reprojected from the 3D sparse points are considered salient correspondences and seeds, which are extended through region growing in the four neighborhoods $\mathbb{N}_4$ illustrated in Figure 5b. Then, the pixel-by-pixel dense point clouds are iteratively determined using the cost function and the SGM algorithm with a known epipolar geometry, as shown in Figure 5c.
Specifically, a set $obj_{set}^{dense}$ of dense point clouds is initially assigned using the set $obj_{set}^{sparse}$. Assuming that all 3D points $P_{obj}^{i}(X,Y,Z)$, $i \in \{1,\dots,N\}$, in $obj_{set}^{dense}$ are seeds and backprojected onto the relevant images, the position $p_{img}^{ij}(x,y)$ in the first relevant image is considered the seed and extended with region growing in the four neighborhoods $\mathbb{N}_4$. For example, the query point $p_x$ is fixed in Image 1, and the correspondences in the other relevant images are determined using the SGM algorithm with a known epipolar geometry. On the basis of the SGM algorithm [36,38], the 3D points in the direction of region growing are determined and saved in the set $obj_{set}^{dense}$. We repeat these dense matching steps until no 3D point can be added to $obj_{set}^{dense}$. Although the SGM algorithm can appropriately generate dense point clouds, some local areas with weak texture are likely to be reconstructed poorly.
In this study, we introduce a novel 3D scene patching approach to generate the 3D points in these local areas. A triangulated irregular network (TIN) is established using the set $obj_{set}^{dense}$ of dense point clouds. Subsequently, the coordinates of 3D points within the TIN can be calculated using inverse distance weighted interpolation, which is expressed as
$$Z = \frac{1}{m}\sum_{i=1}^{m} w_i Z_i, \quad w_i = \frac{1}{d_i},$$
where $Z$ is the height value of an unknown 3D point, $m$ is the number of 3D points surrounding the unknown 3D point, $Z_i$ is the height value of the $i$th surrounding 3D point, $w_i$ is the weight corresponding to $Z_i$, and $d_i$ is the distance between the unknown 3D point and the $i$th known surrounding 3D point. The proposed dense matching method is expressed below.
Algorithm 1: Region growing coupled with SGM
Input: 3D sparse points $obj_{set}^{sparse}$, exterior orientation parameters $EOP$, and UAV remote sensing images.
Parameters: 3D dense points $obj_{set}^{dense}$, four neighborhoods $\mathbb{N}_4$, query point $p_x$, relevant image positions $p_{img}^{ij}(x,y)$, disparity $d$, minimum cost path $L_r(p_x, d)$, new 3D point $P_{obj}^{new}(X,Y,Z)$, and unknown 3D points $obj_{set}^{dense,unknown}$.
Initialize $obj_{set}^{dense} \leftarrow obj_{set}^{sparse}$.
repeat
    for each 3D point $P_{obj}^{i}(X,Y,Z)$, $i \in \{1,\dots,N\}$, in set $obj_{set}^{dense}$ do
        Assign $P_{obj}^{i}(X,Y,Z)$ as a seed.
        Reproject $P_{obj}^{i}(X,Y,Z)$ onto $p_{img}^{ij}(x,y)$.
        Compute epipolar geometry based on $EOP$.
        for k = 1 to 4 do
            Calculate $L_r(p_x, d)$ corresponding to $p_x$ in $p_{img}^{ij}(x,y)$ with known epipolar geometry.
        end for
        Compute the coordinates of $P_{obj}^{new}(X,Y,Z)$ using SGM and aerial triangulation.
        Update $obj_{set}^{dense}$ by adding the new 3D point.
    end for
until no 3D point needs to be added.
Find $obj_{set}^{dense,unknown}$ in the local areas that have not been reconstructed well.
Establish the TIN.
for each 3D point in $obj_{set}^{dense,unknown}$ do
    Compute the coordinates of the 3D point by inverse distance weighted interpolation.
    Update $obj_{set}^{dense}$ by adding the 3D point.
end for
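The inverse-distance-weighted patching step at the end of Algorithm 1 can be sketched directly from the formula in the text, with $w_i = 1/d_i$ and the stated $1/m$ normalization. The points below are synthetic:

```python
import numpy as np

# Sketch of the IDW patching step for poorly reconstructed local areas,
# implementing Z = (1/m) * sum(w_i * Z_i) with w_i = 1/d_i as given.

def idw_height(query_xy, known_xyz):
    """known_xyz: (m, 3) array of surrounding 3D points (X, Y, Z)."""
    known_xyz = np.asarray(known_xyz, dtype=float)
    d = np.linalg.norm(known_xyz[:, :2] - np.asarray(query_xy, float), axis=1)
    w = 1.0 / d                      # w_i = 1 / d_i (assumes d_i > 0)
    m = len(known_xyz)
    return float(np.sum(w * known_xyz[:, 2]) / m)

# Four surrounding TIN vertices at unit distance from the query point:
pts = [(0.0, 1.0, 10.0), (1.0, 0.0, 12.0), (0.0, -1.0, 10.0), (-1.0, 0.0, 12.0)]
print(idw_height((0.0, 0.0), pts))
```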

3.2. Damage Signature Generation

Dense point clouds derived from UAV photogrammetry can generate a finely detailed geometry structure of the revetment and be regarded as an alternative to the visual inspection method. The categories of revetment damage are mainly collapse and crack. On flat ground, the place of collapse is usually characterized by an uneven region below the surface height of the ground with an irregular boundary, and a crack is typically shown as a linear object.
Unlike flat ground, revetments along urban rivers are built in a sloping pattern. Therefore, we transform the dense point clouds into a slope intensity image for damage recognition on the basis of the assumption that the revetment is constructed with a fixed slope angle. Ideally, the values of the slope intensity image located in the revetment regions are then approximately equal. A slope intensity image is generated via slope calculation, which identifies the slope in each cell of the rasterized surface of the dense point clouds using the slope module of ArcGIS. A portion of the revetment surface may be covered with vegetation (e.g., grass), which appears as 3D points with fluctuating height values within the dense point clouds. UAV photogrammetry clearly has limitations in surveying the revetment surface in the presence of vegetation, which may affect the accuracy of revetment damage recognition. To eliminate the influence of vegetation in the slope intensity image, vegetation removal is first conducted with a gamma-transform green leaf index [39]. Subsequently, damage recognition is performed using a proposed operator called SMGO, which is designed to extract abnormal regions of different sizes from the slope intensity image. Specifically, an omnidirectional (horizontal, vertical, and diagonal) gradient operation is conducted using a self-adaptive operator with degraded weights; hence, a variable gradient operator is used in each cell to determine whether it belongs to a damaged or nondamaged region. A multiscale architecture is introduced into this operator for the recognition of damaged regions of different sizes.
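The slope calculation described above can be sketched without GIS software: finite differences of the rasterized heights give the gradient, whose magnitude converts to a slope angle. This is a minimal numpy sketch, not the ArcGIS implementation, and the DSM below is a toy plane:

```python
import numpy as np

# Sketch: converting a rasterized DSM into a slope intensity image,
# analogous in spirit to the ArcGIS slope module.

def slope_degrees(dsm: np.ndarray, cell: float) -> np.ndarray:
    """Per-cell slope angle in degrees from a height grid with square cells."""
    dz_dy, dz_dx = np.gradient(dsm, cell)          # height change per metre
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

# A plane rising 1 m per metre in x should yield a 45-degree slope everywhere.
x = np.arange(5, dtype=float)
dsm = np.tile(x, (5, 1))                           # cell size 1 m
print(float(slope_degrees(dsm, 1.0)[2, 2]))
```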
The main goal of this study is to identify the damaged areas of the revetment; thus, automatic revetment recognition is an essential task on the dense point clouds. In this study, we extract the area of interest (AOI), that is, the area covered by the revetment, from the slope intensity image. On the basis of the assumption that the AOIs of the revetment have approximately equal slope angles, SLIC and superpixel merging from a previous study [39] are jointly used to extract the revetment regions from the intensity map, as shown in Figure 6. First, the slope intensity image is segmented into a set of superpixels in terms of similar slope values. Second, the superpixels are merged into a series of regions on the basis of approximately equal slope values. Third, the AOI of the revetment is determined using the average slope value of the slope intensity image. The main steps of revetment region extraction are as follows.
Step 1: The dense point clouds derived from low-cost UAV photogrammetry are rasterized using the grid size $\Delta d \times \Delta d$, where $\Delta d$ is the resolution of the UAV remote sensing images.
Step 2: The slope of the rasterized image is computed, and the slope intensity image $I_{slope}(x,y)$ is generated using ArcGIS.
Step 3: The intensity image $I_{slope}(x,y)$ is segmented into superpixels with the SLIC algorithm, and the superpixels are merged into a series of regions on the basis of approximately equal slope values.
Step 4: The AOI of the revetment is determined using the slope value of the region within $[slope\_value - 10°, slope\_value + 10°]$, where $slope\_value$ is the average slope of multiple samples in this study.
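The slope-band selection in Step 4 reduces to a per-cell threshold test. A minimal sketch of that test alone (the SLIC segmentation and superpixel merging of Steps 1–3 are omitted here), using the 40° average slope of this revetment and toy slope values:

```python
import numpy as np

# Sketch of Step 4: keep cells whose slope lies within
# [slope_value - 10, slope_value + 10] degrees.

def revetment_aoi(slope_img: np.ndarray, slope_value: float, tol: float = 10.0):
    return (slope_img >= slope_value - tol) & (slope_img <= slope_value + tol)

slope_img = np.array([[2.0, 38.0, 45.0],
                      [41.0, 55.0, 33.0]])        # toy slope intensity image
mask = revetment_aoi(slope_img, slope_value=40.0)  # near-horizontal and
print(mask.astype(int).tolist())                   # over-steep cells drop out
```

In the full method, the test is applied to merged superpixel regions rather than to individual cells, which suppresses isolated misclassified pixels.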
Then, the damage feature image $I_{damage}(x, y)$ is generated from the slope intensity image $I_{slope}(x, y)$ via the proposed SMGO operator. Mathematically, the gradient $\mathbf{grad}(x, y)$ in each cell $(x, y)$ is computed as
$$\mathbf{grad}(x, y) = \frac{\partial I_{slope}}{\partial x}\,\mathbf{i} + \frac{\partial I_{slope}}{\partial y}\,\mathbf{j} + \frac{\partial I_{slope}}{\partial diagL}\,\mathbf{k} + \frac{\partial I_{slope}}{\partial diagR}\,\mathbf{l},$$
where $\partial I_{slope}/\partial x$, $\partial I_{slope}/\partial y$, and $\partial I_{slope}/\partial diag$ are the gradients in the horizontal, vertical, and diagonal directions, respectively. The multiscale architecture of SMGO is illustrated in Figure 7, in which two scales are shown. The adjacent area surrounding the cell $p \in I_{slope}(x, y)$ is defined on the basis of the following equation:
$$r = \mathrm{INT}(k\sigma + 0.5),$$
where $r$ is the radius of the area surrounding the cell $p$, $\mathrm{INT}(\cdot)$ is the integer (rounding) operation, and $\sigma$ is the initial scale factor of the SMGO, set to 1.6 in this study. $k \in \{1, 2, 3, \dots, s\}$ ($s \ge 2$) is the set of multiple factors, which are the key values in determining the scope of the area surrounding the cell $p$. Gradient calculation is performed on each scale on the basis of suboperators, which are illustrated in Figure 7c–f (Scale 1) and Figure 7h–k (Scale 2). The gradient of each suboperator is mathematically calculated using the following convolutional operation:
$$grad(x, y) = G(x, y, k\sigma) \otimes I_{slope}(x, y),$$
where $G(\cdot)$ denotes the matrix of weights in the gradient operator and is defined by the nonlinear inverse distance as
$$G(x, y, k\sigma) = \frac{8}{\sqrt{\pi k^2 \sigma^2}}\, e^{-\frac{\Delta x^2 + \Delta y^2}{2 k^2 \sigma^2}},$$
where $(\Delta x, \Delta y)$ is the shift between the adjacent cell and the center $(x, y)$. Then, the matrices of the suboperators can be determined. For example, Figure 7c–f are represented by the matrices
$$\begin{pmatrix} 1.91 & 2.32 & 1.91 \\ 0 & 0 & 0 \\ 1.91 & 2.32 & 1.91 \end{pmatrix}, \quad \begin{pmatrix} 1.91 & 0 & 1.91 \\ 2.32 & 0 & 2.32 \\ 1.91 & 0 & 1.91 \end{pmatrix},$$
$$\begin{pmatrix} 0 & 0 & 1.29 & 0 & 0 \\ 0 & 1.91 & 0 & 0 & 0 \\ 1.29 & 0 & 0 & 0 & 1.29 \\ 0 & 0 & 0 & 1.91 & 0 \\ 0 & 0 & 1.29 & 0 & 0 \end{pmatrix}, \quad \text{and} \quad \begin{pmatrix} 0 & 0 & 1.29 & 0 & 0 \\ 0 & 0 & 0 & 1.91 & 0 \\ 1.29 & 0 & 0 & 0 & 1.29 \\ 0 & 1.91 & 0 & 0 & 0 \\ 0 & 0 & 1.29 & 0 & 0 \end{pmatrix},$$
respectively. Similar to the output of a neural network, a max activation function is utilized to determine the gradient of cell $p$ by taking the maximum value over all the suboperators. The mathematical expression of the max activation function is
$$\max grad(x, y) = \max\left( \frac{\partial I_{slope}}{\partial x}, \frac{\partial I_{slope}}{\partial y}, \frac{\partial I_{slope}}{\partial diagL}, \frac{\partial I_{slope}}{\partial diagR} \right).$$
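The printed suboperator entries can be regenerated from the weight function: with σ = 1.6 and k = 1, the nearest axial neighbor weighs ≈2.32, the nearest diagonal neighbor ≈1.91, and a distance-2 neighbor ≈1.29, matching the matrices above. A minimal sketch, assuming the weight takes the form 8/√(πk²σ²)·exp(−(Δx² + Δy²)/(2k²σ²)), a prefactor reconstructed here from the printed values; the helper name `weight` is illustrative:

```python
import math

def weight(dx, dy, k=1, sigma=1.6):
    """Weight of a neighboring cell at offset (dx, dy), reconstructed as
    G = 8 / sqrt(pi * k^2 * sigma^2) * exp(-(dx^2 + dy^2) / (2 * k^2 * sigma^2))."""
    s2 = (k * sigma) ** 2
    return 8.0 / math.sqrt(math.pi * s2) * math.exp(-(dx * dx + dy * dy) / (2.0 * s2))

# Reproduce the Scale 1 entries of the suboperator matrices:
w_axis = weight(0, 1)   # nearest axial neighbor   -> ~2.32
w_diag = weight(1, 1)   # nearest diagonal neighbor -> ~1.91
w_far  = weight(2, 0)   # distance-2 neighbor       -> ~1.29
```

Placing these weights at the nonzero positions of Figure 7c–f reproduces the Scale 1 kernels; increasing k widens the Gaussian and yields the Scale 2 kernels.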
Notably, the number of scales is not fixed but adaptive: if the gradient $G$ is less than the given threshold $t_{gradient}$, the value of $k$ is not increased further. In this study, at least two scales of SMGO are needed to establish the multiscale architecture. The proposed SMGO is given as Algorithm 2.
Algorithm 2: Gradient calculation using SMGO
Input: intensity image $I_{slope}(x, y)$ with width W and height H, constant value $\sigma$, and gradient threshold $t_{gradient}$.
Parameters: multiple factor $k$ and radius $r$ of the area surrounding the cell $p$.
for col = 1 to W do
    for row = 1 to H do
        repeat
            $r \leftarrow \mathrm{INT}(k\sigma + 0.5)$
            suboperators $\leftarrow \frac{8}{\sqrt{\pi k^2 \sigma^2}}\, e^{-\frac{\Delta x^2 + \Delta y^2}{2 k^2 \sigma^2}}$
            Compute the gradients $grad(x, y)$ using the suboperators.
            Gradient $G$ located in $(x, y)$ $\leftarrow \max(\partial I_{slope}/\partial x, \partial I_{slope}/\partial y, \partial I_{slope}/\partial diagL, \partial I_{slope}/\partial diagR)$
            $k \leftarrow k + 1$
        until gradient $G < t_{gradient}$ and $k > 2$
    end for
end for
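The loop above can be sketched in Python; this is an illustrative simplification, not the authors' implementation: each direction uses a single Gaussian-weighted center difference at radius r instead of the full kernels of Figure 7, and the adaptive stop is applied image-wide rather than per cell. The function name `smgo` and the toy step image are assumptions:

```python
import numpy as np

def smgo(I, sigma=1.6, t_gradient=0.1, k_max=4):
    """Simplified SMGO sketch: per scale k, measure the directional
    contrast of each cell against neighbors at radius r = INT(k*sigma + 0.5),
    weighted by the Gaussian weight; the response is the max over the four
    directions and all visited scales."""
    H, W = I.shape
    P = int(k_max * sigma + 0.5) + 1           # padding covering the largest radius
    Ip = np.pad(I.astype(np.float64), P, mode="edge")
    out = np.zeros((H, W))
    for k in range(1, k_max + 1):
        r = int(k * sigma + 0.5)
        s2 = (k * sigma) ** 2
        g = np.zeros((H, W))
        # horizontal, vertical, and two diagonal center differences
        for dy, dx in [(0, r), (r, 0), (r, r), (r, -r)]:
            w = 8.0 / np.sqrt(np.pi * s2) * np.exp(-(dx * dx + dy * dy) / (2.0 * s2))
            a = Ip[P + dy:P + dy + H, P + dx:P + dx + W]
            b = Ip[P - dy:P - dy + H, P - dx:P - dx + W]
            g = np.maximum(g, np.abs(w * (a - b)))
        out = np.maximum(out, g)
        if g.max() < t_gradient and k > 2:     # adaptive stop (image-wide here)
            break
    return out

# A sharp step (e.g., a collapse edge) yields a much stronger response
# than a cell on the uniform part of the slope.
I = np.zeros((10, 10))
I[:, 5:] = 1.0
out = smgo(I)
```

Cells near the step receive a large multiscale response, while cells far from it stay close to zero, which is the behavior the damage signature relies on.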
After the damage signature generation, the set of damaged regions, denoted $damage$, is determined by a binary operation based on a given condition, which can be defined as
$$grad(x, y) - \mathrm{mean}(grad_{img}) > 3.0 \times \mathrm{std}(grad_{img}), \quad (x, y) \in damage,$$
where $\mathrm{mean}$ and $\mathrm{std}$ denote the mean and standard deviation over the damage signature map. If $grad(x, y)$ satisfies this condition, the cell is considered to be within a damaged region. Then, collapses and cracks are separated within the damaged regions via two criteria: if $Area_{damage} > 0.25\ \mathrm{m}^2$ and $Perimeter_{damage}/Area_{damage} < 1.5$, the damaged region is classified as a collapse; otherwise, it is considered a crack, where $Area$ and $Perimeter$ denote the area and perimeter calculations, which can be conducted using the ArcGIS software.
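The thresholding and the collapse/crack separation can be sketched as follows; a simple 4-connected flood fill stands in for the ArcGIS area and perimeter tools, and `classify_damage` and `cell_area_m2` are illustrative names (with the paper's 3.2 cm grid the cell area would be ≈0.001 m²; a coarser toy value is used below):

```python
import numpy as np

def classify_damage(grad_img, cell_area_m2, area_min=0.25, ratio_max=1.5):
    """A cell is damaged when its gradient exceeds mean + 3*std of the
    signature map; connected damaged regions are labeled 'collapse' when
    area > 0.25 m^2 and perimeter/area < 1.5, otherwise 'crack'."""
    damaged = grad_img - grad_img.mean() > 3.0 * grad_img.std()
    H, W = damaged.shape
    seen = np.zeros_like(damaged, dtype=bool)
    side = np.sqrt(cell_area_m2)               # cell edge length in meters
    results = []
    for sy in range(H):
        for sx in range(W):
            if not damaged[sy, sx] or seen[sy, sx]:
                continue
            stack, cells = [(sy, sx)], []      # flood fill one connected region
            seen[sy, sx] = True
            while stack:
                y, x = stack.pop()
                cells.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < H and 0 <= nx < W and damaged[ny, nx] and not seen[ny, nx]:
                        seen[ny, nx] = True
                        stack.append((ny, nx))
            area = len(cells) * cell_area_m2
            # perimeter: cell edges bordering a non-damaged cell or the image border
            perim = sum(
                side
                for y, x in cells
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                if not (0 <= ny < H and 0 <= nx < W and damaged[ny, nx])
            )
            kind = "collapse" if area > area_min and perim / area < ratio_max else "crack"
            results.append((kind, area))
    return results

# Toy signature map: a large compact anomaly (collapse) and an isolated cell (crack).
grad = np.zeros((20, 20))
grad[2:8, 2:8] = 1.0
grad[15, 15] = 1.0
regions = classify_damage(grad, cell_area_m2=0.25)
```

The compact 6 × 6 block passes both criteria and is labeled a collapse, while the isolated cell falls below the area threshold and is labeled a crack.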

4. Results

In the experiments, dense point clouds are generated using the proposed method, implemented through C++ programming and an open-source library (i.e., OpenCV). Our software mainly includes distortion correction, sparse matching, dense matching, absolute orientation, image stitching, DSM generation, and orthophoto generation. The sparse matching module supports two modes, i.e., matching all images without any supporting information and GPS/IMU-supported trajectory matching, and the dense matching module runs on the sparse matching results. The performance of low-cost UAV-based (i.e., DJI Mavic Air) mapping is critical to the accurate reconstruction of the revetment surface for damage recognition. Taking Part 1 as an example, the GCPs and CPs (i.e., check points) are laid out widely and evenly in the survey areas, as shown in Figure 8. The residual error and root mean square error (RMSE) were calculated on the basis of 13 and 15 CPs for Parts 1 and 2, respectively, measured with RTK GPS; their corresponding 3D points were determined from the dense point clouds. The X, Y, and Z RMSE values are calculated using Equation (8), and the error statistics of the CPs are summarized in Figure 9 and Table 1. Additionally, the re-projection errors $RMSE_{img}$ of the CPs are calculated using Equation (9), and the error statistics are summarized in Table 2. Figure 10 (Part 1) illustrates the pixel-by-pixel dense point clouds textured with colors from the UAV remote sensing images. The revetment consists of $1.96 \times 10^7$ points, which corresponds to a density of approximately 963 points/m² and a grid size of 3.2 cm × 3.2 cm. The proposed dense matching method reconstructs the fine details of the revetment surface. The results show that the X and Y RMSE values obtained via the proposed dense matching method were less than 4 cm, which is a relatively small horizontal error.
Moreover, the vertical (Z) RMSE value was less than 6 cm, and the re-projection errors were less than one pixel. These RMSE values are therefore satisfactory for the high-precision reconstruction of the revetment surface, and the accuracy was deemed sufficient for recognizing damage signatures on the surface of revetments along urban rivers.
$$RMSE_X = \sqrt{\frac{\sum (X_{dense} - X_{GCP})^2}{n}}, \quad RMSE_Y = \sqrt{\frac{\sum (Y_{dense} - Y_{GCP})^2}{n}}, \quad RMSE_Z = \sqrt{\frac{\sum (Z_{dense} - Z_{GCP})^2}{n}},$$
$$RMSE_{img} = \sqrt{\frac{\sum_{i=1}^{n} \sum_{j=1}^{m} \rho_{ij} \left\| P(X_i, C_j) - x_{ij} \right\|^2}{\sum_{i=1}^{n} \sum_{j=1}^{m} \rho_{ij}}},$$
where $X_i$ and $C_j$ denote a 3D point and a camera, respectively; $P(X_i, C_j)$ is the predicted projection of point $X_i$ on camera $C_j$; $x_{ij}$ is the observed image point; $\|\cdot\|$ denotes the L2-norm; and $\rho_{ij}$ is an indicator function with $\rho_{ij} = 1$ if point $X_i$ is visible in camera $C_j$ and $\rho_{ij} = 0$ otherwise.
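Equation (8) amounts to a per-axis RMSE over the n check points; a minimal sketch with hypothetical residuals (the function name `rmse_xyz` and the sample values are assumptions):

```python
import numpy as np

def rmse_xyz(dense_pts, cp_pts):
    """Per-axis RMSE: dense_pts are 3D points sampled from the point
    cloud at the check points, cp_pts the RTK-GPS coordinates; both are
    (n, 3) arrays of X, Y, Z in meters."""
    d = np.asarray(dense_pts, dtype=float) - np.asarray(cp_pts, dtype=float)
    return np.sqrt((d ** 2).mean(axis=0))  # [RMSE_X, RMSE_Y, RMSE_Z]

# Hypothetical residuals of three check points (meters).
dense = np.array([[0.03, -0.02, 0.05],
                  [-0.01, 0.04, -0.06],
                  [0.02, 0.01, 0.04]])
cps = np.zeros((3, 3))
rx, ry, rz = rmse_xyz(dense, cps)
```

As in Table 1, the vertical component typically dominates the total error for nadir UAV photogrammetry.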
We also compared the performance of surface reconstruction with that of commercial software, namely, Agisoft Metashape Professional 1.5.3 (www.agisoft.com) and Pix4Dmapper 4.4 (www.pix4d.com), which are widely used photogrammetric packages for 3D surface reconstruction and revetment monitoring [19,23]. To balance accuracy and efficiency, medium precision is set for sparse and dense matching in Agisoft Metashape Professional 1.5.3, and the default settings are used in Pix4Dmapper 4.4. To evaluate the effect of the multiscale architecture in the proposed SMGO, a non-multiscale gradient operator (NMGO) is compared with our method. The gradient intensity images, with values normalized from 0 to 1, generated by the non-multiscale and multiscale gradient operators are shown in Figure 11h,i, respectively.
The indicators Precision, Recall, and F1_score are used to evaluate the proposed method in our experiments as follows:
$$Precision = \frac{TP}{TP + FP},$$
$$Recall = \frac{TP}{TP + FN},$$
$$F1\_score = \frac{2 \times Precision \times Recall}{Precision + Recall},$$
where $TP$ is the number of damaged regions that are correctly identified, $FP$ is the number of damaged regions that are incorrectly identified, and $FN$ is the number of unrecognized damaged regions. Table 3 lists the statistical results of Precision, Recall, and F1_score for collapse and crack recognition. Furthermore, field visual inspection, NMGO-based damage recognition, and the proposed method are compared in Table 3. Note that the ground truth of the collapses and cracks is obtained through manual inspection. Specifically, the experimental areas are divided into grids on the map, and professionals then check in detail whether any collapse or crack exists in each grid; if so, the coordinates are marked using a GPS measuring instrument. For a fair comparison, field visual inspection is conducted according to the commonly used process by three different surveyors, and the average values of $TP$, $FP$, and $FN$ are calculated.
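As a worked example of these indicators, the Part 1 collapse row of Table 3 for the proposed method (Precision = Recall = F1_score = 92.85%) is consistent with TP = 13, FP = 1, and FN = 1 out of 14 collapses; these counts are inferred from the percentages, not reported in the paper:

```python
def prf(tp, fp, fn):
    """Precision, recall, and F1 from region counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# 13 of 14 collapses found, with 1 false detection and 1 miss.
p, r, f1 = prf(tp=13, fp=1, fn=1)  # each ~0.9285, i.e., 92.85%
```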

5. Discussion

In this study, the proposed dense matching method performs better at surface reconstruction than Pix4Dmapper and Agisoft Metashape in terms of the RMSE values shown in Table 1 and Table 2. Notably, the time consumption of the proposed method is only 87% of that of Pix4Dmapper and Agisoft Metashape in the same operating environment. These results can be attributed to the object space-based dense matching approach, which only computes the height values on the top surface of the ground and thus avoids the computational expense of redundant point clouds. For a detailed description, the extracted subarea in Figure 10a illustrates the details of the geometry structure, and Figure 11a,b present the corresponding results of the subarea. Two examples of cross-sections of the dense point clouds derived from UAV mapping are demonstrated in Figure 11c,d, without damage (marked by a yellow line) and with damage (marked by a red line), respectively. Subsequently, Figure 11e,f show the slope intensity image generated via the slope calculation and the superpixels segmented with the SLIC-based algorithm [39], respectively. Figure 11g exhibits the revetment regions obtained through superpixel merging on the basis of similar gradients and adjacency; enlarged views of the three damaged regions I, II, and III from the ground field observation (i.e., RGB ground photos) are also shown.
In terms of visual assessment, the profile in Figure 11d is the geometry structure corresponding to region I. Figure 11h,i show that the regions detected by the SMGO are more consistent with the geometric structure than those detected by the NMGO. The NMGO has difficulty identifying all the damaged regions: it ignores the spatial continuity of cracks or even over-recognizes edges. By comparison, the proposed SMGO extracts the damaged regions with accurate boundaries and improves the accuracy of revetment damage recognition by highlighting the gap between the damaged and nondamaged areas. The proposed SMGO therefore enables collapses and cracks with a height drop relative to the surrounding areas to be detected. This finding is attributed to the SMGO achieving feature extraction in all orientations, with the multiscale operator applied in the horizontal, vertical, and diagonal directions. Notably, the strip regions with a vertical drop close to the river (e.g., region III shown in Figure 11h) are also detected but are not considered damaged regions in this study. In addition, the proposed method achieves better performance than the two other methods in terms of Precision, Recall, and F1_score, especially in crack recognition. In field visual inspection, inconspicuous cracks may easily be overlooked, and manual recognition is easily confounded because crack damage often coexists with other types of damage (e.g., collapse). As mentioned above, the NMGO-based method ignores the multiscale character of damage features and performs poorly.

6. Conclusions

This study aims to achieve revetment damage recognition along urban rivers through dense point clouds derived from low-cost UAV photogrammetry. Two improvements of the proposed approach confirm that our method can be used as an effective alternative to field visual inspection for the damage recognition of revetments (with slope protection) along urban rivers. (1) Region growing coupled with SGM is proposed to generate pixel-by-pixel dense point clouds from UAV remote sensing images and reconstruct the fine details of the high-precision revetment surface. This reconstruction is considered satisfactory in terms of the horizontal error (<4 cm) and vertical error (<6 cm) relative to the GCPs. (2) On the basis of the in situ visual assessment and quantitative analysis (e.g., at least 90% for the Precision, Recall, and F1_score values), the accuracy of revetment damage recognition is confirmed after comparing the results with those of the field visual inspection and the NMGO-based method. Notably, UAV-based mapping offers a new possibility for fully measuring, monitoring, and understanding revetment damage at low operational cost, and it has the potential to transform how revetment damage is observed and investigated. Furthermore, it could help the government and local authorities develop revetment management plans and provide evidence for maintenance or improvements.
This study is suitable for recognizing damage signatures on revetments designed with slope protection. The use of the proposed method on revetments with steep slopes still needs further investigation because the nadir orientation of the camera in photogrammetry makes high-precision surface reconstruction of steep revetments difficult. In future studies, we will optimize the proposed approach by using oblique photogrammetry and deep learning to achieve satisfactory damage recognition of steep revetments.

Author Contributions

Ting Chen proposed the framework of revetment damage recognition and the paper. Haiqing He wrote the source code of dense matching. Dajun Li designed the experiments and revised the paper. Puyang An and Zhenyang Hui generated the datasets and performed the experiments. All authors have read and agreed to the published version of the manuscript.

Funding

This study was financially supported by the National Natural Science Foundation of China (41861062 and 41401526) and the Natural Science Foundation of Jiangxi Province of China (20171BAB213025 and 20181BAB203022).

Acknowledgments

The authors thank Jing Yu for providing datasets. The authors also want to thank the anonymous reviewers for their constructive comments that significantly improved our manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Chibana, T. Urban river management: Harmonizing river ecosystem conservation. In Urban Environmental Management Technology; Springer: Tokyo, Japan, 2008; pp. 47–66. [Google Scholar]
  2. Osman, A.M.; Thorne, C.R. Riverbank stability analysis. I: Theory. J. Hydraul. Eng. 1988, 114, 134–150. [Google Scholar] [CrossRef]
  3. Hesp, P. Foredunes and blowouts: Initiation, geomorphology and dynamics. Geomorphology 2002, 48, 245–268. [Google Scholar] [CrossRef]
  4. Tarrant, O.; Hambidge, C.; Hollingsworth, C.; Normandale, D.; Burdett, S. Identifying the signs of weakness, deterioration, and damage to flood defense infrastructure from remotely sensed data and mapped information. J. Flood Risk Manag. 2017, 11, 317–330. [Google Scholar] [CrossRef] [Green Version]
  5. Baghdadi, N.; Gratiot, N.; Lefebvre, J.-P.; Oliveros, C.; Bourguignon, A. Coastline and mudbank monitoring in French Guiana: Contributions of radar and optical satellite imagery. Can. J. Remote Sens. 2004, 30, 109–122. [Google Scholar] [CrossRef]
  6. Royet, P. Rapid and Cost-Effective Dike Condition Assessment Methods: Geophysics and Remote Sensing. FloodProbe. 2012. Available online: http://www.floodprobe.eu/partner/assets/documents/Floodprobe-D3.2_V1_3Dec2012.pdf (accessed on 1 September 2019).
  7. Hagenaars, G.; Luijendijk, A.; de Vries, S.; de Boer, W. Long term coastline monitoring derived from satellite imagery. Coast. Dyn. 2017, 122, 1551–1562. [Google Scholar]
  8. Choi, C.E.; Cui, Y.; Au, K.Y.K.; Liu, H.; Wang, J.; Liu, D.; Wang, H. Case study: Effects of a partial-debris dam on riverbank erosion in the Parlung Tsangpo river, China. Water 2018, 10, 250. [Google Scholar] [CrossRef] [Green Version]
  9. Rosser, N.J.; Petley, D.N.; Lim, M.; Dunning, S.A.; Allison, R.J. Terrestrial laser scanning for monitoring the process of hard rock coastal cliff erosion. Q. J. Eng. Geol. Hydrogeol. 2005, 38, 363–375. [Google Scholar] [CrossRef]
  10. Longoni, L.; Papini, M.; Brambilla, D.; Barazzetti, L.; Roncoroni, F.; Scaioni, M.; Ivanov, V.I. Monitoring riverbank erosion in mountain catchments using terrestrial laser scanning. Remote Sens. 2016, 8, 241. [Google Scholar] [CrossRef] [Green Version]
  11. Cheng, Y.-J.; Qiu, W.; Lei, J. Automatic extraction of tunnel lining cross-sections from terrestrial laser scanning point clouds. Sensors 2016, 16, 1648. [Google Scholar] [CrossRef]
  12. Thoma, D.P.; Gupta, S.C.; Bauer, M.E.; Kirchoff, C.E. Airborne laser scanning for riverbank erosion assessment. Remote Sens. Environ. 2005, 95, 493–501. [Google Scholar] [CrossRef]
  13. Yang, B.; Hwang, C.; Cordell, H.K. Use of LiDAR shoreline extraction for analyzing revetment rock beach protection: A case study of Jekyll island state park, USA. Ocean Coast. Manag. 2012, 69, 1–15. [Google Scholar] [CrossRef]
  14. Pye, K.; Blott, S.J. Assessment of beach and dune erosion and accretion using LiDAR: Impact of the stormy 2013-14 winter and longer term trends on the Sefton coast, UK. Geomorphology 2016, 266, 146–167. [Google Scholar] [CrossRef]
  15. Ternate, J.R.; Celeste, M.I.; Pineda, E.F.; Tan, F.J.; Uy, F.A.A. Floodplain modelling of Malaking-ilog river in southern Luzon, Philippines using LiDAR digital elevation model for the design of water-related structures. In Proceedings of the 2nd International Conference on Civil Engineering and Materials Science, Seoul, Korea, 26–28 May 2017; pp. 1–9. [Google Scholar]
  16. Drummond, H.; Weiner, H.M.; Kaminsky, G.M.; McCandless, D.; Hacking, A. Assessing bulkhead removal and shoreline restoration using boat-based lidar. In Proceedings of the Salish Sea Ecosystem Conference, Seattle, WA, USA, 5 April 2018. [Google Scholar]
  17. Cook, K.L. An evaluation of the effectiveness of low-cost UAVs and structure from motion for geomorphic change detection. Geomorphology 2017, 278, 195–208. [Google Scholar] [CrossRef]
  18. He, H.; Chen, T.; Zeng, H.; Huang, S. Ground control point-free unmanned aerial vehicle-based photogrammetry for volume estimation of stockpile carried on barges. Sensors 2019, 19, 3534. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  19. Pitman, S.J.; Hart, D.E.; Katurji, M.H. Application of UAV techniques to expand beach research possibilities: A case study of coarse clastic beach cusps. Cont. Shelf Res. 2019, 184, 44–53. [Google Scholar] [CrossRef]
  20. Hemmelder, S.; Marra, W.; Markies, H.; De Jong, S.M. Monitoring river morphology & bank erosion using UAV imagery-a case study of the river Buëch, Hautes-Alpes, France. Int. J. Appl. Earth Obs. Geoinf. 2018, 73, 428–437. [Google Scholar]
  21. Hallermann, N.; Morgenthal, G.; Rodehorst, V. Vision-based deformation monitoring of large scale structures using unmanned aerial systems. IABSE Symp. Rep. 2014, 102, 2852–2859. [Google Scholar] [CrossRef]
  22. Nakagawa, M.; Yamamoto, T.; Tanaka, S.; Noda, Y.; Kashimoto, K.; Ito, M.; Miyo, M. Location-based infrastructure inspection for sabo facilities. In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XL-3/W3, ISPRS Geospatial Week, La Grande Motte, France, 28 September–3 October 2015; pp. 257–262. [Google Scholar]
  23. Kubota, S.; Kawai, Y.; Kadotani, R. Accuracy validation of point clouds of UAV photogrammetry and its application for river management. In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XLII-2/W6, International Conference on Unmanned Aerial Vehicles in Geomatics, Bonn, Germany, 4–7 September 2017; pp. 195–199. [Google Scholar]
  24. Starek, M.J.; Giessel, J. Fusion of UAS-based structure-from-motion and optical inversion for seamless topo-bathymetric mapping. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Fort Worth, TX, USA, 23–28 July 2017; pp. 2999–3002. [Google Scholar]
  25. Jayson, P.-N.; Appeaning Addo, K.; Amisigo, B.; Wiafe, G. Assessment of short-term beach sediment change in the Volta Delta coast in Ghana using data from Unmanned Aerial Vehicles (Drone). Ocean Coast. Manag. 2019, 182, 104952. [Google Scholar] [CrossRef]
  26. Pires, A.; Chaminé, H.I.; Piqueiro, F.; Pérez-Alberti, A.; Rocha, F. Combing coastal geoscience mapping and photogrammetric surveying in maritime environments (Northwestern Iberian Peninsula): Focus on methodology. Environ. Earth Sci. 2016, 75, 196. [Google Scholar] [CrossRef]
  27. DJI. Mavic Air User Manual. 2018. Available online: https://dl.djicdn.com/downloads/phantom_4_pro/Phantom+4+Pro+Pro+Plus+User+Manual+v1.0.pdf (accessed on 15 December 2018).
  28. He, H.; Yan, Y.; Chen, T.; Cheng, P. Tree height estimation of forest plantation in mountainous terrain from bare-earth points using a DoG-coupled radial basis function neural network. Remote Sens. 2019, 11, 1271. [Google Scholar] [CrossRef] [Green Version]
  29. Puliti, S.; Ørka, H.O.; Gobakken, T.; Næsset, E. Inventory of small forest areas using an unmanned aerial system. Remote Sens. 2015, 7, 9632–9654. [Google Scholar] [CrossRef] [Green Version]
  30. He, H.; Chen, X.; Liu, B.; Lv, Z. A sub-Harris operator coupled with SIFT for fast images matching in low-altitude photogrammetry. Int. J. Signal Process. Image Process. Pattern Recognit. 2014, 7, 395–406. [Google Scholar] [CrossRef]
  31. Hamshaw, S.D.; Bryce, T.; Rizzo, D.M.; O’Neil-Dunne, J.; Frolik, J.; Dewoolkar, M.M. Quantifying streambank movement and topography using unmanned aircraft system photogrammetry with comparison to terrestrial laser scanning. River Res. Appl. 2017, 33, 1354–1367. [Google Scholar] [CrossRef]
  32. Wu, C. Towards linear-time incremental structure from motion. In Proceedings of the 3DV-Conference, International Conference on IEEE Computer Society, Seattle, WA, USA, 29 June–1 July 2013; pp. 127–134. [Google Scholar]
  33. Schönberger, J.L.; Frahm, J.M. Structure-from-motion revisited. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 4104–4113. [Google Scholar]
  34. Sba: A Generic Sparse Bundle Adjustment C/C++ Package. 2018. Available online: http://users.ics.forth.gr/~{}lourakis/sba/ (accessed on 5 August 2019).
  35. Bhattacharya, A.; Arora, M.; Sharma, M. Usefulness of adaptive filtering for improved digital elevation model generation. J. Geol. Soc. India 2013, 82, 153–161. [Google Scholar] [CrossRef]
  36. Hirschmuller, H. Accurate and efficient stereo processing by semi-global matching and mutual information. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June 2005; pp. 807–814. [Google Scholar]
  37. Humenberger, M.; Engelke, T.; Kubinger, W. A census-based stereo vision algorithm using modified semi-global matching and plane fitting to improve matching quality. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), San Francisco, CA, USA, 13–18 June 2010; pp. 77–84. [Google Scholar]
  38. Viola, P.; Wells, W.M. Alignment by maximization of mutual information. Int. J. Comput. Vis. 1997, 24, 137–154. [Google Scholar] [CrossRef]
  39. He, H.; Zhou, J.; Chen, M.; Chen, T.; Li, D.; Cheng, P. Building extraction from UAV images jointly using 6D-SLIC and multiscale Siamese convolutional networks. Remote Sens. 2019, 11, 1040. [Google Scholar] [CrossRef] [Green Version]
Figure 1. (a) Nanchang City located in Jiangxi Province in southeast China, (b) location of the study area in Nanchang City, and (c,d) landscape of the study area.
Figure 2. Unmanned aerial vehicle (UAV) data acquisition and ground control point measurement. (a) DJI Mavic Air, (b) DJI graphical user interface for mission planning, (c) ground control point (GCP) marked by a white cross with a pink center, and (d) field measurement of GCPs.
Figure 3. Workflow of the revetment damage recognition along urban rivers through dense point clouds derived from low-cost UAV photogrammetry.
Figure 4. Object space-based surface reconstruction of revetment.
Figure 5. Region growing coupled with semi-global matching (SGM). (a) Object point backprojected onto multiple views. (b) Region growing in the four neighborhoods. (c) Process of candidate matches determined by using the cost function under the constraint of epipolar lines in two views.
Figure 6. Area of interest (AOI) extraction.
Figure 7. Multiscale architecture of the proposed self-adaptive and multiscale gradient operator (SMGO) and damage signature generation. (a) Scope of the area surrounding a cell in Scales 1 and 2 for gradient computation. (b) Scope of the area surrounding a cell in Scale 1. (c–f) Kernels of the gradient computation in Scale 1. (g) Scope of the area surrounding a cell in Scale 2. (h–k) Kernels of the gradient computation in Scale 2. The white and nonwhite grids denote the null and nonzero values, respectively.
Figure 8. Placements of ground control points and check points.
Figure 9. Residuals of 28 check points (CPs) for Part 1 and Part 2 measured on real-time kinematic (RTK) GPS and their corresponding 3D points determined from the dense point clouds. Residuals X, Y, and Z are shown in (a), (b), and (c) respectively.
Figure 10. Dense point clouds derived from UAV photogrammetry. (a) Top view of the dense point clouds. (b) Oblique view of the dense point clouds. The viewpoints of the camera are marked by blue dots.
Figure 11. Revetment damage recognition along an urban river. The dense point clouds, depth map, and slope intensity image of the subarea in Figure 10a are shown in (a,b,e), respectively. (c) Cross-section without damage. (d) Cross-section with a collapse. (f) Superpixels marked by the cyan boundaries that are generated via the simple linear iterative clustering (SLIC)-based algorithm. The true color (RGB) point cloud of the revetment is exhibited in (g), and the enlarged three damaged regions of I, II, and III from the ground field observation are also shown in (g). In addition, the gradient intensity images generated using the non-multiscale gradient operator (NMGO) and SMGO are shown in (h) and (i), respectively.
Table 1. Comparison of the obtained RMSE values of CPs via Pix4Dmapper 4.4, Agisoft Metashape Professional 1.5.3, and the object space-based approach.

| Area | Method | RMSE X (cm) | RMSE Y (cm) | RMSE Z (cm) | Total RMSE (cm) |
|---|---|---|---|---|---|
| Part 1 | Pix4Dmapper | 3.85 | 3.84 | 5.70 | 4.56 |
| | Agisoft Metashape | 5.08 | 4.53 | 6.59 | 5.47 |
| | object space-based | 3.76 | 3.72 | 5.67 | 4.48 |
| Part 2 | Pix4Dmapper | 3.83 | 4.27 | 5.07 | 4.42 |
| | Agisoft Metashape | 4.89 | 4.63 | 6.43 | 5.38 |
| | object space-based | 3.49 | 3.30 | 5.31 | 4.13 |
Table 2. Comparison of the re-projection errors $RMSE_{img}$ of CPs via Pix4Dmapper 4.4, Agisoft Metashape Professional 1.5.3, and the object space-based approach.

| Area | Method | RMSE (pixel) |
|---|---|---|
| Part 1 | Pix4Dmapper | 0.611 |
| | Agisoft Metashape | 0.679 |
| | object space-based | 0.597 |
| Part 2 | Pix4Dmapper | 0.752 |
| | Agisoft Metashape | 0.783 |
| | object space-based | 0.730 |
Table 3. Comparison of the three indicators obtained through field visual inspection, NMGO-based method, and our method.

| Site | Category | Number | Indicator (%) | Field Visual Inspection | NMGO-Based | Our Method |
|---|---|---|---|---|---|---|
| Part 1 | Collapse | 14 | Precision | 86.67 | 73.33 | 92.85 |
| | | | Recall | 92.85 | 78.57 | 92.85 |
| | | | F1_score | 89.66 | 75.86 | 92.85 |
| | Crack | 36 | Precision | 91.18 | 79.41 | 89.18 |
| | | | Recall | 86.11 | 75.00 | 91.67 |
| | | | F1_score | 88.57 | 77.14 | 90.41 |
| Part 2 | Collapse | 18 | Precision | 84.21 | 73.68 | 89.47 |
| | | | Recall | 88.89 | 77.78 | 94.44 |
| | | | F1_score | 86.49 | 75.67 | 91.89 |
| | Crack | 54 | Precision | 88.46 | 82.97 | 90.91 |
| | | | Recall | 85.18 | 72.22 | 92.59 |
| | | | F1_score | 86.79 | 77.23 | 91.74 |
