Article

Analysis of Mine Change Using 3D Spatial Information Based on Drone Image

1 Department of Civil Engineering, Donga University, Busan 49315, Korea
2 Department of Drone and Spatial Information Engineering, Youngsan University, Yangsan 50510, Korea
* Author to whom correspondence should be addressed.
Sustainability 2022, 14(6), 3433; https://0-doi-org.brum.beds.ac.uk/10.3390/su14063433
Submission received: 17 January 2022 / Revised: 3 March 2022 / Accepted: 11 March 2022 / Published: 15 March 2022

Abstract

Mine development requires continuous management because it causes rapid topographic changes and environmental damage. Drones can produce three-dimensional spatial information by quickly and accurately photographing areas that are difficult or dangerous for humans to approach. In this study, we investigated the possibility of using drone photogrammetry to determine changes and recovery in mines. The accuracy of the drone photogrammetry results was analyzed using checkpoints, and the earthwork volume was calculated and compared with that obtained through a field survey. We also determined whether the site conditions were consistent with the mountain recovery plan using the drone images. Orthoimages and a digital surface model were generated from the drone images, and the checkpoint accuracy analysis yielded RMSEs of 0.085–0.091 m in the plane and 0.121–0.128 m in elevation; these results satisfy the tolerance range for 1/1000-scale digital map descriptions. The earthwork volume derived from drone photogrammetry differed from that of the conventional measurement method by an average of 11.9%. The usability of the images was further demonstrated by confirming the vegetation and rockfall prevention nets in the captured images. The usability of drone photogrammetry in mines is expected to increase as more spatial information is produced and analyzed in the future.

1. Introduction

Drones have recently played a critical role in the generation of spatial information. Producing and utilizing spatial information from drone images requires that the positional accuracy of the results be verified. Therefore, studies have been actively conducted to verify positional accuracy by producing orthoimages and a digital surface model (DSM) from images acquired by sensors mounted on drones and comparing them with field-acquired results using checkpoints [1,2,3]. The usability and limitations of drone images have also been examined by implementing stereoscopic vision based on drone images [4,5].
Furthermore, drone images are being applied in a variety of areas because drones can photograph desired sites and provide the images quickly and accurately. In addition, drones can carry various sensors to acquire high-resolution images as needed, which can be used for various purposes. Consequently, drones equipped with high-resolution sensors are being used in the safety inspection of large structures, such as in crack detection on bridges [6,7,8]. The traditional method of visual inspection by people carries the risk of accidents, whereas drones eliminate the risk of personal injury, making them more suitable for such inspections [9].
Mines undergo continuous topographic changes due to mining and explosions, and surveys are indispensable for estimating quantity. However, if mine surveying is performed using the conventional surveying method, large-scale equipment is required, and a safety accident may occur because the volume of the mined rock may be large. Therefore, a method is needed to obtain and utilize safe and accurate survey results [10].
Various studies have been conducted to utilize drones in mines. In 2017, Esposito et al. [11] produced three-dimensional (3D) spatial data in point-cloud form by acquiring multi-temporal images of open-pit mines in Italy. Their results for calculating the volumes of mined mineral resources using the 3D spatial data suggested the possibility of accurate change analysis.
In 2020, Le Van Canh et al. [12] captured images of open-pit mines using a post-processed kinematic (PPK) drone and produced a highly accurate DSM. In addition, they experimentally demonstrated the possibility of topographic surveys of open-pit mines using drones by analyzing the accuracy improvement obtained with an increasing number of ground control points (GCPs). In 2020, Rushikesh et al. [13] developed an application for automatic terrain-following flight along an optimized route and verified that it could produce high-resolution 3D models of open-pit slopes. In 2020, Medinac et al. [14] asserted the need for periodic surveys of road conditions in mines for safety, productivity, and maintenance cost reduction. Moreover, they demonstrated the possibility of evaluating road conditions by creating orthoimages and a DEM using an unmanned aerial vehicle (UAV). In addition, mine monitoring using drones, technical forecasting, and reviews of drones and related programs have been conducted [15].
Here, we aim to determine whether drone photogrammetry is applicable at a mine development site. In this study, we comprehensively analyzed checkpoint accuracy, performed volume calculation, and assessed the status of mountain restoration (field management). For this purpose, 3D point clouds, orthoimages, and a DSM were produced from drone-acquired images. In addition, the XY RMSE and Z RMSE were obtained using the checkpoints. The volume analysis was conducted by comparing the quantity calculated from a generated cross-sectional map with the quantity calculated in the field. In addition, the recovery of the mountain was assessed using orthoimages and original images.

2. Theory

2.1. UAV

UAVs or drones can survey inaccessible areas without risking the safety of individuals by simply mounting sensors on them. In addition, they can rapidly photograph desired areas because there are fewer constraints compared to traditional manned aerial photogrammetry, and they can acquire high-resolution images by capturing images at low altitudes. Image processing is required to convert the images acquired from drones to 3D spatial data [16,17].
Representative image processing programs follow this process: the images input into the program are matched using the scale-invariant feature transform (SIFT) technique, and a high-density point cloud is created through the structure-from-motion (SfM) process. In this process, aerotriangulation is performed, and then the mesh, orthoimages, and DSM are produced. Figure 1 shows the image processing workflow of commercial programs such as Pix4D Mapper and Metashape [18,19].

2.2. SIFT

In 2004, Lowe developed the SIFT technique, which can extract feature points from drone images without exterior orientation parameters and find feature points that remain stable even when the image is rotated. These feature points are generated in the form of point clouds and used in image matching. SIFT comprises four stages: scale-space extrema detection, keypoint localization, orientation assignment, and keypoint descriptor generation. In the scale-space extrema detection step, adjacent Gaussian-blurred images are differenced, and candidate feature points are determined by finding extrema in the scale space formed by the enlarged and reduced images. Subsequently, in the keypoint localization step, unstable candidates are removed using a second-order Taylor expansion. In the orientation assignment step, the gradient magnitude and direction are computed within a window around each feature point, and the dominant direction is assigned as the direction of the feature point. Finally, a keypoint descriptor is generated for automatic image matching. The images are matched using the SIFT process comprising these four steps [20].
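For illustration, the following is a minimal sketch of SIFT detection and matching between two overlapping drone images using OpenCV. The image file names are hypothetical placeholders, and opencv-python 4.4 or later (where SIFT is in the main module) and a 0.75 ratio-test threshold are assumed; all four SIFT stages described above run inside the detector.

```python
# Minimal SIFT detection and matching sketch (assumptions noted in the lead-in).
import cv2

# Load two overlapping drone images as grayscale (hypothetical file names).
img1 = cv2.imread("drone_frame_001.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("drone_frame_002.jpg", cv2.IMREAD_GRAYSCALE)

# Scale-space extrema detection, keypoint localization, orientation
# assignment, and descriptor generation are all handled internally.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Match descriptors with a k-nearest-neighbour search and apply
# Lowe's ratio test to discard ambiguous matches.
matcher = cv2.BFMatcher(cv2.NORM_L2)
knn_matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in knn_matches if m.distance < 0.75 * n.distance]

print(f"{len(kp1)} and {len(kp2)} keypoints, {len(good)} good matches")
```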

2.3. SfM

SfM is a method of implementing 3D shapes by extracting the same points as feature points from multiple images based on collinearity and coplanarity conditions. It can determine the position of the camera even without exterior orientation parameters, and it uses SIFT as a representative method for extracting feature points [21].
The process of producing 3D shapes using SfM consists of extracting feature points with a matching technique such as SIFT, and the optimal solution is then determined by bundle adjustment of the matched feature points. This is the process of determining the orientation parameters of the images. In addition, 3D point clouds with absolute coordinates are generated using ground reference points after the orientation parameters of the images have been determined. Through this process, dense point clouds are created using all images in a step referred to as multi-view stereo (MVS) [22,23,24]. Figure 2 shows the concept of reconstructing 3D positions using SfM [25].
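As a rough illustration, the sketch below shows a minimal two-view SfM step with OpenCV: matched SIFT points are used to estimate the relative camera pose and triangulate a sparse point cloud. The image names and camera intrinsics are hypothetical assumptions, and the result remains in an arbitrary model space until GCPs are applied; commercial tools such as Pix4D Mapper or Metashape perform these steps for many images together with bundle adjustment.

```python
# Minimal two-view SfM sketch (hypothetical images and intrinsics).
import cv2
import numpy as np

img1 = cv2.imread("drone_frame_001.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("drone_frame_002.jpg", cv2.IMREAD_GRAYSCALE)

# SIFT feature extraction and ratio-test matching (as in the previous sketch).
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# Assumed camera matrix (focal length and principal point in pixels).
K = np.array([[3600.0, 0.0, 2640.0],
              [0.0, 3600.0, 1978.0],
              [0.0, 0.0, 1.0]])

# Estimate the essential matrix with RANSAC and recover the relative pose.
E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)

# Triangulate the correspondences into 3D points (model space, no GCPs yet).
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
points3d = (pts4d[:3] / pts4d[3]).T
print(points3d.shape)
```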

3. Materials and Methods

In this study, we aimed to determine whether 3D spatial data produced from drone images of a mine can be used for quantity estimation and mountain restoration. The accuracy of the produced results was analyzed, the volume of earthwork change was calculated using multi-temporal images, and the calculation results were compared and analyzed. In addition, the usability of drone images was assessed by analyzing the progress of vegetation restoration using the original multi-temporal drone images. The flowchart of this study is shown in Figure 3.

3.1. Study Site

The study site is an open-pit mine located in Gyeongsangnam-do, South Korea, where active mining is being performed. Data were collected by selecting a place where vegetation restoration is taking place for mountain restoration in the upper part where mining has been completed. Figure 4 shows the images captured at the study site and its location on Google Maps.

3.2. Measurement of Ground Control Points

Ground reference points were occupied and measured before image acquisition to produce spatial data with accurate absolute coordinates. The ground reference point measurements were used in aerial triangulation to calculate absolute coordinates, which is essential for improving positional accuracy. If the ground control point measurements are not used in image processing, the error is large because the coordinates are then generated only from the metadata of the original images. Therefore, this study used image processing results based on a GCP survey.
A pre-fabricated square GCP target was installed at the site, and the ground reference point coordinates were obtained using the global positioning system (GPS) network real-time kinematic–virtual reference station (RTK–VRS) method. Twelve points were installed in the mine: five ground control points and seven checkpoints. Figure 5 shows the GCP target and the process of acquiring the coordinates of the ground reference points using GPS at the site; Figure 6 shows the layout of the ground reference points and checkpoints.

3.3. Image Acquisition

The images were captured twice at a six-month interval to analyze topographic changes and mountain restoration changes using the DJI Inspire 2 drone. Images covering an area of approximately 0.4 km² of the entire mine were captured along an automatic path at an altitude of 200 m above the middle part of the mine. The upper part of the mine was photographed manually to determine whether the vegetation had been restored. However, the resolution at the lower part of the mine is low because the images were taken at the initially set altitude by automatic navigation along a preset route. Therefore, images with a ground sample distance (GSD) of approximately 6 cm were acquired by taking additional photographs of the lower central part of the open-pit mine. The image overlap was set to 65% end lap and 80% side lap. The second image acquisition followed the flying route stored from the first acquisition and used the same GCPs. Figure 7 shows the flying route and the positions of the acquired images. Table 1 lists the detailed specifications of the drone and sensor used for image acquisition and the photographing conditions.
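As a rough illustration of these photographing conditions, the sketch below computes the GSD, single-image footprint, and exposure/flight-line spacing from the altitude and overlap values above and the focal length and resolution in Table 1. The sensor width (17.3 mm) and the camera orientation relative to the flight direction are assumptions not stated in the paper, so the numbers are indicative only.

```python
# Back-of-the-envelope flight planning sketch (assumed sensor width and orientation).
FOCAL_LENGTH_M = 0.015      # 15 mm focal length (Table 1)
SENSOR_WIDTH_M = 0.0173     # assumed 4/3-type sensor width
IMAGE_WIDTH_PX = 5280       # image resolution (Table 1)
IMAGE_HEIGHT_PX = 3956
ALTITUDE_M = 200.0          # flight altitude above the middle part of the mine

# Ground sample distance: ground distance covered by one pixel.
gsd = SENSOR_WIDTH_M / IMAGE_WIDTH_PX * ALTITUDE_M / FOCAL_LENGTH_M

# Ground footprint of a single image.
footprint_w = gsd * IMAGE_WIDTH_PX
footprint_h = gsd * IMAGE_HEIGHT_PX

# Spacing between exposures (65% end lap, assumed along the image height)
# and between flight lines (80% side lap, assumed along the image width).
end_lap, side_lap = 0.65, 0.80
photo_spacing = footprint_h * (1.0 - end_lap)
line_spacing = footprint_w * (1.0 - side_lap)

print(f"GSD ~ {gsd * 100:.1f} cm/px")
print(f"Footprint ~ {footprint_w:.0f} m x {footprint_h:.0f} m")
print(f"Photo spacing ~ {photo_spacing:.0f} m, line spacing ~ {line_spacing:.0f} m")
```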

3.4. Image Processing and Data Acquisition

Image processing for spatial data production was performed using Pix4D Mapper, a representative program for drone photogrammetry. The image processing consists of initial point cloud generation, orthoimage and DSM production, and 3D modeling. The ground reference points measured during image acquisition were accurately marked in the images before processing to assign absolute coordinates and improve accuracy. The orthoimages and DSMs of the mine produced by processing the multi-temporal images are shown in Figure 8.
The cross-sections were then drawn with the smart construction application, based on the 3D point clouds created after image processing, to acquire data for the analysis of topographic change (mining output). Reference data were acquired for the quantity comparison: a status map was surveyed using the GPS-RTK method before the cross-sections were created. Figure 9 shows the cross-sections created from the point clouds and from the GPS-RTK survey.

4. Results

4.1. Analysis of Accuracy

The accuracy of the 3D spatial data of the mine generated using drone images was analyzed with seven checkpoints for both the first and second photographs. The 3D coordinates acquired using the orthoimages and DSM produced after image processing were compared with the 3D coordinates acquired using GPS. The analysis was divided into plane (XY) and elevation (Z), and the RMSE values were calculated using Equations (1)–(4), where $RMSE_x$, $RMSE_y$, $RMSE_z$, and $RMSE_{xy}$ are the root mean square errors of the X, Y, Z, and XY components, respectively; $x_{ci}$, $y_{ci}$, and $z_{ci}$ are the coordinates acquired from the image; and $x_{vi}$, $y_{vi}$, and $z_{vi}$ are the coordinates acquired in the field [26].

$$RMSE_x = \sqrt{\frac{\sum_{i=1}^{n}\left(x_{ci}-x_{vi}\right)^2}{n}} \tag{1}$$

$$RMSE_y = \sqrt{\frac{\sum_{i=1}^{n}\left(y_{ci}-y_{vi}\right)^2}{n}} \tag{2}$$

$$RMSE_z = \sqrt{\frac{\sum_{i=1}^{n}\left(z_{ci}-z_{vi}\right)^2}{n}} \tag{3}$$

$$RMSE_{xy} = \sqrt{RMSE_x^{2}+RMSE_y^{2}} \tag{4}$$
Based on the accuracy analysis of the first and second images, listed in Table 2, the RMSEs of the first and second images were 0.085 and 0.091 m in the plane and 0.121 and 0.128 m in elevation, respectively. The largest errors of the first and second images were 0.107 m and 0.099 m in the plane and 0.138 m and 0.147 m in elevation, respectively. These results satisfy the tolerance for 1/1000 detail drawing description errors in the aerial photogrammetry rules of the National Geographic Information Institute. Therefore, the data could be applied to open-pit mine development sites.
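For illustration, a minimal sketch of this checkpoint comparison following Equations (1)–(4) is given below; the coordinate arrays are hypothetical placeholders, not the values behind Table 2.

```python
# Checkpoint RMSE sketch following Equations (1)-(4), with placeholder data.
import numpy as np

# Columns: X, Y, Z. One row per checkpoint.
image_xyz = np.array([[512300.10, 383200.25, 152.41],   # read from orthoimage/DSM
                      [512410.55, 383150.02, 148.90],
                      [512520.80, 383305.77, 161.33]])
field_xyz = np.array([[512300.02, 383200.31, 152.53],   # surveyed with GPS
                      [512410.49, 383150.10, 148.78],
                      [512520.71, 383305.70, 161.45]])

diff = image_xyz - field_xyz
rmse_x, rmse_y, rmse_z = np.sqrt(np.mean(diff ** 2, axis=0))  # Equations (1)-(3)
rmse_xy = np.hypot(rmse_x, rmse_y)                            # Equation (4)

print(f"RMSE_X={rmse_x:.3f} m, RMSE_Y={rmse_y:.3f} m, "
      f"RMSE_XY={rmse_xy:.3f} m, RMSE_Z={rmse_z:.3f} m")
```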

4.2. Analysis of Topographic Changes

To analyze the topographic changes of the mine, the point clouds created by drone photogrammetry were input into the smart construction application, and cross-sections were created by generating chains. Cross-sections were acquired at the two epochs using point clouds produced from images captured three months apart. Figure 10 shows the topographic changes of the mine.
The mining area was divided into five sections for the comparative analysis. Figure 11a shows the cross-sectional view extracted from the 3D spatial information produced from drone images, and Figure 11b shows the status survey results obtained directly in the field. The red line is the cross-section produced from the first acquisition, and the blue line is the cross-section produced from the second acquisition.
The mining output was calculated using the average end-area method. The interval between route cross-sections was set to 20 m, and five route cross-sections with large topographic changes due to mining were compared. Using the previously acquired cross-sectional views, the cut area was calculated as the difference between the first and second acquisitions, and the two end areas were averaged and multiplied by the interval for each section to obtain the earthwork volume. Equation (5) expresses this end-area method [27].
$$V = L \cdot \frac{A_1 + A_2}{2} \tag{5}$$

where $V$ is the volume, $A_1$ and $A_2$ are the areas of the two end cross-sections, and $L$ is the interval between sections.
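A minimal sketch of this computation is shown below; the cut areas are hypothetical placeholders, not the data behind Table 3, and the 20 m interval follows the text above.

```python
# Average end-area volume sketch following Equation (5), with placeholder areas.
def end_area_volume(areas, interval=20.0):
    """Sum of L * (A1 + A2) / 2 over consecutive cross-section areas (m^2)."""
    return sum(interval * (a1 + a2) / 2.0
               for a1, a2 in zip(areas[:-1], areas[1:]))

# Cut areas (first-epoch surface minus second-epoch surface) at each chain, m^2.
cut_areas = [12.4, 18.9, 22.1, 16.7, 9.3]   # hypothetical values
print(f"Mined volume ~ {end_area_volume(cut_areas):.0f} m^3")
```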
In this mine, mining and covering are performed together; therefore, the amount of change was calculated only for the parts where mining was performed. In Figure 11, the earthwork amount is calculated where the red line is higher than the blue line. As shown in Table 3, the mining output calculated from the 3D spatial data based on drone images was larger than the field survey result by 41–179 m³, an average difference of 11.9%. This difference arises because the DSM produced by drone photogrammetry reproduces the height of the surface in detail, including small stones, whereas field surveys using GPS are performed mainly at inflection points and are therefore expected to contain errors. For this reason, a more detailed analysis than the checkpoint-based accuracy assessment alone was performed. However, for a more accurate analysis, it is necessary to compare the results with those obtained using equipment such as a terrestrial laser scanner, which produces more precise 3D spatial information.

4.3. Analysis of Usability for Mountain Restoration

The mountain restoration plan for the mine development site was qualitatively analyzed to evaluate the usability of drone images in mountain restoration and safety management. According to the plan, the upper part had already been mined, and the installation of vegetation and rockfall prevention nets was planned. To verify this, orthoimages and original images of the corresponding parts were analyzed. The analysis shows that the rockfall prevention net was installed in both the first and second images. The vegetation distribution also confirmed that forestation under the mountain restoration was in progress in both the first and second images, although the vegetation color in the second image was different because it was winter. Figure 12a shows the rockfall prevention net installed according to the mountain restoration plan, and Figure 12b shows the vegetation changes. In this study, this posed no problem because the objective was only to judge whether mountain restoration was in progress in the upper part of the mine. However, it is necessary to acquire high-resolution images through additional flights over the lower part, considering the height difference between the upper and lower parts of the mine.

5. Discussion

5.1. Analysis of Mining Changes Using 3D Spatial Information Based on Drone Images

Based on previous studies, the need for drone utilization has been raised continuously because producing 3D spatial information on mines from drone images is efficient in terms of safety and economy. Accordingly, analyzing the accuracy of the produced 3D spatial information is an important issue, and many studies have been conducted on this topic. The most representative method compares 3D absolute coordinates at checkpoints: the coordinates extracted from the produced 3D spatial information are compared with coordinates acquired in the field using equipment such as GPS and a total station (TS). This method is suitable for determining accuracy. However, simply comparing and analyzing coordinates at specific points has limitations when assessing the accuracy of linear features, areas, and volumes. Previous studies have also compared volumes, as shown in Figure 13, where the volume of a small site was obtained and compared with the field result.
However, this method alone is insufficient. A detailed analysis of the topography is required to ensure that the approach is practically usable for the analysis of topographic changes. Therefore, in this study, the shape of the topography was extracted for a designated section of the 3D spatial information, and the shape of the actual topography was acquired in the field for a comparative analysis. The greatest difference between the 3D spatial information produced from drone images and the field survey result lies in the linearity. In the information acquired directly by a surveyor in the field, the points are captured at inflection points; hence, a smooth alignment, similar to a blueprint, appears, as shown in Figure 14. In contrast, in the information extracted from the 3D spatial information produced from drone images, the surface of the terrain is expressed as it is in reality. Although results with these differences can both be considered accurate, such differences can occasionally lead to errors. In this study, an average difference of 11.9% was noted between the drone result and the field survey result, with the drone result being higher. Given the accuracy confirmed by the checkpoint analysis, this is attributed to the drone data expressing the topography in greater detail. However, one of the disadvantages of drone photogrammetry is that the skill level of the operator plays a significant role; therefore, the proficiency and experience of the worker are important factors when acquiring and processing these images.

5.2. High Usability of Drone Images

Drone images can be used in ways that direct surveying by humans cannot match. In direct surveying, the results are mainly acquired as three-dimensional coordinates and can be used in computer-aided design (CAD). However, there is little visual record because no image data are collected, and the record of the field may become distorted over time because it relies on the memory of the worker. In contrast, drone images can be stored indefinitely, unless they are deleted, and reproduce the site as it was at the time of filming, regardless of when they are viewed. A major advantage is that additional information can be obtained without visiting the site. When surveying the top of a mine, drones are a suitable alternative to humans because they can capture the images without personnel having to enter the area. Therefore, in this study, images of the upper part of the mine, which had already been mined and restored, were acquired manually and analyzed. The upper part can be checked in detail using the acquired image shown in Figure 15.

6. Conclusions

In this study, we sought to determine whether 3D spatial information based on drone images is useful for the development of mines, which are dangerous and difficult to survey directly. A detailed analysis was performed to determine whether drone images can be applied to tasks such as development planning (change), current status assessment, and quantity confirmation. First, an accuracy analysis using checkpoints was performed. Subsequently, the volume calculation results obtained through cross-sectional map production were compared. Finally, the restoration status of the mountainous area was assessed using the images. The following conclusions were drawn.
First, in the accuracy analysis using checkpoints, the RMSEs of the first and second images were 0.085 m and 0.091 m in the plane and 0.121 m and 0.128 m in elevation, respectively. These data could be used in mine development because the results satisfied the tolerance for 1/1000 detail drawing description errors according to the aerial photogrammetry rules.
Second, topographic changes were analyzed by creating cross-sections and calculating the mining output. The results revealed that the calculated quantity was larger by an average of 11.9% when the 3D spatial data based on drone images were used, compared with the reference data over five chains. This is because drone photogrammetry captures the surface in more detail than the GPS-RTK method, so less volume is lost. This method could replace the existing method if the accuracy and error range of the quantity estimation satisfy the standards of the corresponding industry. In addition, this method is applicable for determining whether the planned excavation line is being followed.
Third, the drone images were compared with the actual mountain restoration plan to determine whether they can be used in mountain restoration. The results showed that although the rockfall prevention net and vegetation distribution could be determined, it was difficult to distinguish the vegetation types. Therefore, closer images are needed to solve this problem.
We evaluated the usability of 3D spatial data based on drone images in mine development sites to determine whether they can be used in the maintenance of mines while avoiding the safety problems of direct surveys and measurements. The 3D spatial data created using drone images were determined to have sufficient applicability, demonstrating the possibility of safe, fast, and accurate survey and measurement. This study can provide basic data for determining the applicability of detailed design in the future because the accuracy and usability of drone photogrammetry were analyzed in various ways. In addition, it will be possible to expand the utilization of drone photogrammetry based on 3D spatial information, such as Smart City and BIM. Moreover, this work can be used as data to prove accuracy in confirming the amount of cut and fill by drone photogrammetry at construction sites. Therefore, the results of this study can provide useful data to formulate methods and standards for applying drone photogrammetry to mines.
However, it is necessary to analyze the accuracy of drones equipped with better sensors by acquiring images and generating 3D spatial data, in order to reduce quantity estimation errors and identify vegetation types. Further research should be conducted using the latest drone light detection and ranging (LiDAR) technology.

Author Contributions

Conceptualization, S.-B.K.; methodology, K.-S.B.; software, D.-P.K.; validation, K.-S.B.; formal analysis, D.-P.K.; investigation, K.-S.B.; resources, S.-B.K.; data curation, S.-B.K.; writing—original draft preparation, S.-B.K.; writing—review and editing, D.-P.K.; visualization, S.-B.K.; supervision, S.-B.K.; project administration, D.-P.K.; funding acquisition, S.-B.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Sung, S.M.; Lee, J.O. Accuracy assessment of parcel boundary surveying with a fixed-wing UAV versus rotary-wing UAV. J. Korea Soc. Surv. Geod. Photogramm. Cartogr. 2017, 35, 535–543.
2. Soares, G.; Inocencio, L.C.; Veronez, M.R.; da Silveira, L.G.; Bordin, F.; Marson, F.P. Analysis of Positional and Geometric Accuracy of Objects in Survey with Unmanned Aerial Vehicle (UAV). In Proceedings of the 2018 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Valencia, Spain, 22–27 July 2018.
3. Lee, J.O.; Kim, D.P. Analysis of three dimensional positioning accuracy of vectorization using UAV-photogrammetry. J. Korea Soc. Surv. Geod. Photogramm. Cartogr. 2019, 37, 525–533.
4. Rhee, S.; Kim, T. Investigation of 1:1000 Scale Map Generation by Stereo Plotting using UAV Images. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, XLII-2/W6, 319–324.
5. Lee, J.O.; Kim, D.P. Accuracy assessment of feature collection method with unmanned aerial vehicle images using stereo plotting program StereoCAD. Korean Soc. Civ. Eng. 2020, 40, 257–264.
6. Yu, H.; Yang, W.; Zhang, H.; He, W. A UAV-based Crack Inspection System for Concrete Bridge Monitoring. In Proceedings of the 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Fort Worth, TX, USA, 23–28 July 2017.
7. Kucuksubasi, F.; Sorguc, A.G. Transfer Learning-Based Crack Detection by Autonomous UAVs. In Proceedings of the 35th ISARC, Berlin, Germany, 20–25 July 2018.
8. Lei, B.; Wang, N.; Xu, P.; Song, G.B. New crack detection method for bridge inspection using UAV incorporating image processing. J. Aerosp. Eng. 2018, 31, 04018058.
9. Lee, S.J.; Choi, Y.S. Topographic survey at small-scale open-pit mines using a popular rotary-wing Unmanned Aerial Vehicle (Drone). Korean Soc. Rock Mech. 2015, 25, 462–469.
10. Park, J.K.; Um, D.Y. Evaluation of accuracy and utilization of the drone photogrammetry for open-pit mine monitoring. J. Digit. Converg. 2019, 17, 191–196.
11. Esposito, G.; Mastrocco, G.; Salvini, R.; Oliveti, M.; Starita, P. Application of UAV photogrammetry for the multi-temporal estimation of surface extent and volumetric excavation in the Sa Pigada Bianca open-pit mine, Sardinia, Italy. Environ. Earth Sci. 2017, 76, 103.
12. Van Canh, L.; Cuong, C.X.; Long, N.Q.; Ha, L.T.T.; Anh, T.T.; Bui, X.-N. Experimental investigation on the performance of DJI Phantom 4 RTK in the PPK mode for 3D mapping open-pit mines. Inz. Miner. 2020, 1, 35–74.
13. Rushikesh, B.; Garrett, W.; Jorge, V.; Masoud, Z.N.; Bijan, P.; Behrooz, A.; Bahram, P.; Javad, S. A practical methodology for generating high-resolution 3D models of open-pit slopes using UAVs: Flight path planning and optimization. Remote Sens. 2020, 12, 2283.
14. Medinac, F.; Bamford, T.; Hart, M.; Kowalczyk, M.; Esmaeili, K. Haul road monitoring in open pit mines using unmanned aerial vehicles: A case study at Bald Mountain Mine Site. Min. Metall. Explor. 2020, 73, 1877–1883.
15. Shahmoradi, J.; Talebi, E.; Roghanchi, P.; Hassanalian, M. A comprehensive review of applications of drone technology in the mining industry. Drones 2020, 4, 34.
16. Karel, P.; Jaroslav, S.; Eva, M. High resolution drone surveying of the Pista geoglyph in Palpa, Peru. Geosciences 2018, 8, 479.
17. David, C.P.; Elliott, C. How to use a drone safely and effectively for geological studies. Geol. Today 2020, 36, 146–155.
18. Simon, M.; Loredana, C.; Popescu, C.; Luminita, C. 3D mapping of a village with a WingtraOne VTOL tailsitter drone using Pix4D Mapper. Res. J. Agric. Sci. 2021, 53, 228–237.
19. Marion, J.; Sophie, P.; Pascal, A.; Nicolas, L.D.; Philippe, G.; Christophe, D. Suggestions to limit geometric distortions in the reconstruction of linear coastal landforms by SfM photogrammetry with PhotoScan® and MicMac® for UAV surveys with restricted GCPs pattern. Drones 2019, 3, 2.
20. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
21. Yan, L.; Zhu, R.; Mo, N.; Liu, Y. Improved class-specific codebook with two-step classification for scene-level classification of high resolution remote sensing images. Remote Sens. 2017, 9, 233.
22. Snavely, N.; Seitz, S.M.; Szeliski, R. Photo tourism: Exploring photo collections in 3D. ACM Trans. Graph. 2006, 25, 835–846.
23. Westoby, M.J.; Brassington, J.; Glasser, N.F.; Hambrey, M.J.; Reynolds, J.M. 'Structure-from-Motion' photogrammetry: A low-cost, effective tool for geoscience applications. Geomorphology 2012, 179, 300–314.
24. Johnson, K.; Nissen, E.; Saripalli, S. Rapid mapping of ultrafine fault zone topography with structure from motion. Geosphere 2014, 10, 969–986.
25. Nissen, E.; Arrowsmith, J.R.; Crosby, C. SfM Acquisition Concepts & Applications. UNAVCO, 2016. Available online: https://kb.unavco.org/kb/article/2016-gsa-introduction-to-structure-from-motion-sfm-photogrammetry-for-earth-science-research-and-education-short-course-859.html (accessed on 1 November 2021).
26. Sergio, I.J.; Waldo, O.B.; Mariana, J.M.; Juan, E. Digital terrain models generated with low-cost UAV photogrammetry: Methodology and accuracy. ISPRS Int. J. Geo-Inf. 2021, 10, 285.
27. Autodesk. 2021. Available online: https://knowledge.autodesk.com/support/civil-3d/learn-explore/caas/CloudHelp/cloudhelp/2017/ENU/Civil3D-UserGuide/files/GUID-5D4A56B4-2438-452B-BC75-DC9F0D82D7E8-htm.html (accessed on 12 November 2018).
Figure 1. Flowchart of commercial image processing.
Figure 2. Concept of SfM [25].
Figure 3. Workflow of data acquisition and processing for the analysis of results.
Figure 4. Study area in Gyeongsangnam-do, South Korea.
Figure 5. GCP survey and target.
Figure 6. Arrangement of GCPs and CPs.
Figure 7. Flight trajectory and image location of UAV.
Figure 8. Outcomes of multi-temporal drone image processing: (a,b) orthoimages at each acquisition time; (c,d) digital surface models at each acquisition time.
Figure 9. Extraction of the cross-section map using smart construction.
Figure 10. Geomorphological changes due to mining: (a,c) initial acquisition data; (b,d) acquisition data after an interval of three months.
Figure 11. Cross-section map: (a) produced by drone data; (b) produced by status map (field surveying).
Figure 12. Changes in the rockfall prevention net and vegetation: (a) rockfall prevention net in the first (top) and second (bottom) acquisitions; (b) vegetation in the first (top) and second (bottom) acquisitions.
Figure 13. Volume calculation using an image processing program, which is generally used to calculate the volume of objects stacked over a small range.
Figure 14. Difference between the cross-sectional views derived from drone acquisition data and field survey data. In the upper picture, the image data were extracted unevenly, and in the lower picture, they were extracted in a smooth, straight line.
Figure 15. Image taken at the top of the mine. Vegetation is growing on the terrain as part of the mountain restoration process.
Table 1. Specification of UAV and camera.

UAV: DJI Inspire 2; Camera: FC6520
Image resolution: 5280 × 3956 pix
Focal length: 15 mm
Flight time: approx. 25 min
Weight: 3.44 kg
Table 2. Residuals of checkpoints (CPs).

Point No. | First XY (m) | First Z (m) | Second XY (m) | Second Z (m)
CP1       | 0.095        | 0.109       | 0.095         | 0.127
CP2       | 0.107        | 0.128       | 0.099         | 0.125
CP3       | 0.089        | 0.122       | 0.091         | 0.138
CP4       | 0.068        | −0.133      | 0.088         | −0.116
CP5       | 0.060        | −0.138      | 0.079         | −0.112
CP6       | 0.081        | 0.103       | 0.097         | 0.147
CP7       | 0.088        | −0.111      | 0.084         | −0.125
RMSE      | 0.085        | 0.121       | 0.091         | 0.128
Table 3. Analysis of mining earthworks.

Section   | (A) Drone Photogrammetry (m³) | (B) Field Surveying (m³) | (A − B) Residual (m³) | Residual (%)
Section 1 | 327                           | 286                      | 41                    | 12.5
Section 2 | 918                           | 779                      | 139                   | 16.6
Section 3 | 1108                          | 929                      | 179                   | 15.7
Section 4 | 1791                          | 1627                     | 164                   | 6.5
Section 5 | 1573                          | 1514                     | 59                    | 8.0
Average   |                               |                          |                       | 11.9
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
