Article

Structure-from-Motion 3D Reconstruction of the Historical Overpass Ponte della Cerra: A Comparison between MicMac® Open Source Software and Metashape®

by Matteo Cutugno 1,*, Umberto Robustelli 2 and Giovanni Pugliano 1

1 DICEA, University of Naples Federico II, 80125 Naples, Italy
2 Department of Engineering, University of Naples Parthenope, 80143 Naples, Italy
* Author to whom correspondence should be addressed.
Submission received: 3 August 2022 / Revised: 2 September 2022 / Accepted: 4 September 2022 / Published: 6 September 2022
(This article belongs to the Special Issue Unconventional Drone-Based Surveying)

Abstract

In recent years, the performance of free-and-open-source software (FOSS) for image processing has increased significantly. This trend, together with technological advancements in the unmanned aerial vehicle (UAV) industry, has opened new opportunities for both researchers and surveyors. In this study, we assessed the quality of the sparse point cloud obtained with a consumer UAV and a FOSS package. To this end, the same image dataset was also processed with a commercial software package, whose results served as a term of comparison. Various analyses were conducted: image residual analysis, statistical analysis of GCP and CP errors, relative accuracy assessment, and Cloud-to-Cloud distance comparison. A support survey was conducted to measure 16 markers identified on the object; 12 of these were used as ground control points to scale the 3D model, while the remaining 4 served as check points to assess the quality of the scaling procedure through their residuals. Results indicate that the sparse clouds obtained are comparable. MicMac® shows mean image residuals of 0.770 pixels versus 0.735 pixels for Metashape®. The 3D errors on check points are also similar: the mean 3D error for MicMac® is 0.037 m with a standard deviation of 0.017 m, whereas for Metashape® it is 0.031 m with a standard deviation of 0.015 m. The present work represents a preliminary study: comparing software packages is difficult, given the secrecy of commercial code and the theoretical differences between the approaches. This case study analyzes an object with extremely complex geometry, placed in an urban canyon where GNSS support cannot be exploited and where the scenario changes continuously due to vehicular traffic.

1. Introduction

In recent years, technological advancements in the unmanned aerial vehicle (UAV) industry have drastically transformed survey techniques for 3D model reconstruction. These improvements exploit the evolution of computer vision algorithms, once considered the bottleneck of such techniques, which now require less time and are largely automated.
Moreover, the performance of free-and-open-source software (FOSS) for image processing keeps improving, allowing users to conduct the entire photogrammetric process without having to purchase an expensive software license. These two crucial trends present opportunities for both researchers and surveyors: with a consumer UAV and FOSS, one can survey an object at a considerably lower cost than in the past, when UAVs were available to only a few and professional results required expensive software licenses. The current tendency in the photogrammetric community is to employ a larger number of images to ensure high overlap values. Today, several photogrammetric software solutions are available, both commercial and FOSS; Table 1 presents an overview of the available packages. Structure-from-Motion (SfM) algorithms have been shown to perform well in many applications, and SfM 3D surveys have become a viable, cost-effective alternative to aerial LiDAR [1]. In addition, 3D photogrammetry has improved in terms of point density and geometric accuracy through increased overlap between images, improved radiometry, and significant progress in multi-view matching, while graphics processing unit (GPU) computing power has increased continuously [2]. Nevertheless, the advantages and disadvantages of photogrammetry and LiDAR compensate for each other: horizontal errors in photogrammetry are usually smaller than those in LiDAR, whereas LiDAR can achieve higher vertical than horizontal accuracy [3].
In the past decade, several photogrammetric applications employing UAVs have been documented. UAVs have been efficiently employed in various fields, including geomorphological analyses [26,27,28], hydraulics modelling [29], agriculture and forest analyses [30,31,32,33], emergency management support [34,35,36], infrastructure monitoring [37,38,39], and cultural heritage monitoring and 3D reconstruction [40,41,42,43]. The literature also provides numerous examples of combined photogrammetric and LiDAR surveys [44,45,46].
This research aims to determine whether MicMac® can be employed for the 3D reconstruction of complex objects in non-optimal survey conditions, obtaining results comparable with a commercial software solution. MicMac® has been employed by various researchers for both aerial and terrestrial photogrammetry. Griffiths and Burningham [47] compared PhotoScan® and MicMac®, concluding that the latter can generate significantly more accurate Digital Surface Models (DSMs) because a more complex lens distortion model is included. Altman et al. [48] tested MicMac® for terrestrial photogrammetry, highlighting that feature extraction results from MicMac® are comparable with those from PhotoScan®; complete models can be produced in MicMac® as well, although this likely requires a more refined image set. Jaud et al. [49] compared the DSMs obtained by MicMac® and PhotoScan®, concluding that both packages can provide satisfying results; PhotoScan® is more straightforward to use but its source code is closed, whereas MicMac® is recommended for experienced users as it is more flexible. In the present work, we focused on the capabilities of MicMac® to reconstruct a complex object from aerial imagery obtained with a consumer UAV. For comparison, the same dataset was processed with both the FOSS and the commercial software package. The quality of the generated tie point (TP) clouds was analyzed by evaluating the results of the bundle block adjustment (BBA), examining the image residuals and the 3D errors on ground control points (GCPs). Subsequently, the 3D errors on check points (CPs), which are not included in the BBA, were investigated. Additionally, the quality of the results was assessed through the statistical analysis of the Cloud-to-Cloud distances and the relative accuracy.
The remainder of this paper is organized as follows: Section 2 reports a brief introduction to photogrammetry, highlighting the differences and similarities in approach between the two software packages investigated. Section 3 presents the case study of the Ponte della Cerra overpass, and Section 4 compares the results of the reconstructed models obtained with MicMac® and Agisoft Metashape®. Section 5 draws conclusions, including a discussion of the results as well as future developments. Lastly, Appendix A and Appendix B define the workflow of the FOSS and the commercial software package, respectively.

2. Material and Methods

Nowadays, SfM photogrammetry has largely replaced traditional photogrammetry; its principles can be found in [50]. Figure 1 depicts the main stages of this process along with their implementation in the investigated software packages, namely Agisoft Metashape® Pro and MicMac®. Regarding the acquisition scheme, image acquisition depends primarily on the type of object. For approximately planar objects the parallel method is useful: none of the images covers the entire scene; instead, every image is a tile of it. The present work concerns the 3D reconstruction of a historical overpass; given the complexity of the object, it was split into three separate parts, namely the north facade, the south facade, and the extrados. When surveying each facade, images capturing the corresponding side of the intrados were also collected, for a total of 222 images. The facades and the extrados were treated as planar objects, taking horizontal and vertical images, respectively. Then, to improve object reconstruction, reduce shadow areas, and include common elements for project merging, 52 oblique images were collected. Lastly, one of the most important parameters to consider when planning a survey with the parallel method is the overlap between images. Two different overlaps must be considered: the along-track overlap and the cross-track overlap. In the present survey, the mean along-track overlap was 80%, while the cross-track overlap was 55%.
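As a side note, the camera spacing needed to reach a target overlap can be estimated from the sensor geometry and the object distance. The following is a minimal Python sketch; the sensor dimensions (13.2 × 8.8 mm) and focal length (10.26 mm) are nominal Mavic 2 Pro values used as illustrative assumptions, not calibrated parameters:

def footprint(sensor_dim_mm: float, focal_mm: float, distance_m: float) -> float:
    """Extent of the scene covered by one image along one sensor axis [m]."""
    return sensor_dim_mm / focal_mm * distance_m

def spacing(sensor_dim_mm: float, focal_mm: float,
            distance_m: float, overlap: float) -> float:
    """Camera base (or strip separation) yielding the target overlap [m]."""
    return footprint(sensor_dim_mm, focal_mm, distance_m) * (1.0 - overlap)

d = 4.0  # distance from the facade [m], as in the close-up strips
print(f"along-track base for 80% overlap: {spacing(13.2, 10.26, d, 0.80):.2f} m")
print(f"cross-track base for 55% overlap: {spacing(8.8, 10.26, d, 0.55):.2f} m")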

2.1. MicMac® Photogrammetric Processing

The photogrammetric process (TP search, estimation of camera poses, densification) was conducted using MicMac® [13,51,52]. MicMac® is a FOSS (Cecill-B license) photogrammetric suite developed by IGN® (French National Geographic Institute) and ENSG® (French National School for Geographic Sciences). It can be used in a variety of 3D reconstruction scenarios [13] and allows the creation of both 3D models and ortho-imagery. The software, which aims to be a cross-platform project, runs on all main operating systems (Windows, Mac OS, and Linux), although, in our experience, it has proven more stable and complete under Linux. The MicMac® processing chain is completely under user control, and most of the parameters can be fine-tuned. MicMac® comprises several tools, each of which is described on the dedicated wiki page [53]. For TP extraction, MicMac® uses the Pastis algorithm, which is essentially an interface to the well-known SIFT++, a lightweight distribution of the Scale-Invariant Feature Transform (SIFT) [54]. Building on advances in image feature recognition, characteristic image objects can be automatically detected, described, and matched between images. After that, Apero® starts from the tie points generated by Pastis and computes external and internal orientations compatible with these measurements [55]. Knowledge of the interior orientation of the camera used for image acquisition is a fundamental requisite for precise photogrammetric object reconstruction: parameters such as the principal distance, the principal point coordinates in the image coordinate system, and correction terms for lens distortion are determined by camera calibration [56]. Nowadays, photogrammetric camera calibration is usually carried out along with the calculation of object coordinates within a self-calibrating bundle adjustment. Several distortion models are implemented in MicMac®, including Radial, Fraser, and Fisheye. All theoretical and practical aspects concerning bundle block adjustment with MicMac® are described in the official documentation [13,55]. The main processing steps of MicMac® can be summarized as follows:
  • Tie point computation: the Pastis tool uses the SIFT++ algorithm [57] for tie point pair generation. This algorithm creates an invariant descriptor that can be used to identify points of interest and match them even under a variety of perturbing conditions (scale changes, rotation, changes in illumination, viewpoints, or image noise). In this work, this was achieved with Tapioca, a tool interface to SIFT++,
  • External orientation: in this step, the external orientations of the cameras are computed. The relative orientations were computed with the Tapas tool following the free-network approach, which calculates the exterior parameters in an arbitrary coordinate system [58],
  • Bundle Block Adjustment: this step also includes the internal parameters and is therefore known as “Self-Calibration”; it is conducted by introducing at least three control points and integrating them within the computation matrix. MicMac® solves the BBA with the Levenberg–Marquardt (L-M) method [59], which is in essence the Gauss–Newton method enriched with a damping factor to handle rank-deficient Jacobian matrices [60]; a minimal sketch of this damped step follows the list. This stage was achieved by exploiting the GCPBascule and Campari tools.
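To make the role of the damping factor concrete, the following is a minimal, self-contained Python sketch of an L-M iteration on a toy least-squares problem; it illustrates the damped Gauss–Newton step, not MicMac®'s actual implementation:

import numpy as np

def levenberg_marquardt(residual, jacobian, x0, lam=1e-3, iters=100):
    # Gauss-Newton normal equations damped by lam*I: the damping keeps the
    # system solvable even when the Jacobian is rank-deficient.
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r, J = residual(x), jacobian(x)
        A = J.T @ J + lam * np.eye(x.size)
        step = np.linalg.solve(A, -J.T @ r)
        r_new = residual(x + step)
        if r_new @ r_new < r @ r:         # step reduced the cost:
            x, lam = x + step, lam * 0.5  # accept it, relax toward Gauss-Newton
        else:
            lam *= 2.0                    # reject it, increase damping
        if np.linalg.norm(step) < 1e-12:
            break
    return x

# Toy usage: recover a and b from samples of y = a * exp(b * t).
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(1.3 * t)
res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.column_stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)])
print(levenberg_marquardt(res, jac, [1.0, 0.5]))  # approximately [2.0, 1.3]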

2.2. Agisoft Metashape Photogrammetric Processing

To compare the quality of the generated point cloud, the same dataset was also processed with a commercial package, namely the Metashape® software by Agisoft (ver. 1.7.2, build 12070, 2021) [6]. Metashape® is a stand-alone software product that performs the photogrammetric processing of digital images and generates 3D spatial data for GIS applications, cultural heritage documentation, and visual effects production, as well as for indirect measurements of objects of various scales [6]. The standard photogrammetric pipeline of Metashape® is reported below:
  • The first step of the photogrammetric processing is feature matching across the images: Metashape® detects points in the source images that are stable under viewpoint and lighting variations and generates a descriptor for each point based on its local neighborhood; these descriptors are then used to detect correspondences across the photos. This is similar to the well-known SIFT approach but uses different algorithms for slightly higher alignment quality [61],
  • The second stage comprises the computation of the camera intrinsic and extrinsic orientation parameters: Metashape® uses a proprietary algorithm to find approximate camera locations and refines them later using a bundle-adjustment algorithm. This should be similar to the Bundler algorithm by Snavely et al. (see [62,63]).
The specific algorithms implemented in the package are not detailed in the manual for business reasons; nevertheless, a description of the Structure-from-Motion (SfM) procedure in Metashape® and commonly used parameters can be found in [64,65]. During the alignment step, the accuracy was set to high rather than highest, to provide the same conditions as the MicMac® processing (TP search on full-resolution images); in fact, according to the Metashape® manual [6], at the high accuracy setting the software works with images at their original size, while the highest setting upscales the images by a factor of 4. The software is user-friendly, but parameter adjustment is limited to pre-defined values; the user manual describes mainly the general workflow and gives only very limited details on the theoretical basis of the underlying calculations and the associated parameters, as confirmed in [49,58]. On the contrary, at each step of the MicMac® processing, the user can choose any numerical value, whereas Metashape® only offers preset values: for example, during the alignment step one can choose among ultra-low, low, medium, high, and ultra-high, whereas MicMac® allows a multitude of choices. According to [66], the alignment time depends mainly on the number of images, while the densification step also depends on their resolution and overlap. These characteristics of the dataset determine the RAM capacity needed, which is often the hardware bottleneck when dealing with commercial photogrammetric packages. In contrast to commercial solutions, MicMac® is generally less demanding in RAM, given that it writes all temporary files to disk instead of keeping them in memory. In the present study, both photogrammetric processes were run on the same workstation, namely an Asus computer with 16 GB of RAM, 1 TB of SSD storage, an Intel Core i5-3330 CPU at 3.00 GHz, and an Nvidia Quadro 600 GPU. Metashape® does not support the GPU of the workstation employed in this work [66]; on the other hand, according to [53], MicMac® provides GPU support only from dense matching onward. Hence, to ensure the same processing conditions, GPU support was not used at all.

3. Case Study

3.1. Ponte della Cerra Overpass (Italy)

The surveyed object is an overpass in the Vomero neighborhood of Naples, Italy, as shown in Figure 2; built in the seventeenth century, it carries Suarez street over Conte della Cerra street. The overpass structure comprises Neapolitan yellow tuff blocks; its geometric model can be assimilated to a barrel vault whose generating curve is a lowered arch with a span of 16.50 m and a rise of 3.20 m. The whole width, including the left and right abutments, is 21.40 m. The overpassing roadway comprises a single carriageway with one lane in each direction and a total width of 13.45 m, including parking areas, bus stops, and two sidewalks, each 3.27 m wide. The structure is subject to degradation phenomena and, for this reason, was secured with a tessellated mesh to prevent the detachment of the damaged parts. It also bears some prestigious coats of arms on both facades. The survey presented here is aimed at the geometric measurements and 3D modeling supporting the structural studies conducted in [67].

3.2. Unmanned Aerial Vehicle Photogrammetric Flight

For the field test, as shown in Figure 3, a consumer UAV, the DJI Mavic 2 Pro, was used to survey the test area. It weighs about 1 kg and is equipped with a 1-inch CMOS Hasselblad sensor, whose characteristics are reported in Table 2; for exhaustive technical documentation, one can refer to [68]. The UAV carries a dual-frequency, multi-constellation (GPS and GLONASS) receiver which, in normal scenarios, delivers geotagged images with an accuracy of about 10 m. In this application, however, the survey location is a highly degraded urban canyon where the UAV was surrounded by tall buildings, and the geotagging procedure failed for several images; moreover, part of the survey was conducted under the overpass, where no GNSS fix is possible at all. Therefore, after some failures in the alignment step, the coordinates stored in the EXIF data of each image were erased. This procedure allows the photogrammetric process to succeed but drastically increases computing times, since the software, ignoring the position of each image, must search for homologous points in every pair of images; in normal conditions, the availability of the camera positions speeds up the processing, since the software searches for homologous points only in overlapping pairs.
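As an illustration of this step, GPS tags can be stripped from the EXIF metadata with a short script; a minimal sketch assuming the third-party piexif library (the dataset path is hypothetical):

import glob
import piexif

# Remove the GPS IFD from every image so the photogrammetric software
# ignores the unreliable geotags and matches all image pairs instead.
for path in glob.glob("/data/ponte/DJI_*.JPG"):    # hypothetical dataset path
    exif_dict = piexif.load(path)
    exif_dict["GPS"] = {}                          # drop all GPS tags
    piexif.insert(piexif.dump(exif_dict), path)    # rewrite EXIF in place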

3.3. Dataset Description

The acquired dataset comprises horizontal, nadiral, and oblique images. According to [69], to improve detail reconstruction and minimize holes in the 3D model, it is convenient to also collect oblique images: they reduce the shadow areas in which data cannot be acquired, especially when dealing with complex objects, as in this case. In particular, the following types of images were acquired, depending on the part of the overpass being reconstructed:
  • For the south facade, 111 images (12 oblique and 99 horizontal) of which there are:
    21 images at 10 m from the object;
    90 images at 4 m from the object.
  • For the north facade, 70 images (29 oblique and 41 horizontal) of which there are:
    32 images at 10 m from the object;
    38 images at 4 m from the object.
  • For the extrados, 41 images at a flying altitude of 40 m of which there are:
    29 nadiral;
    12 oblique.
The photogrammetric survey was conducted according to the Comité International de Photogrammétrie Architecturale (CIPA) 3 × 3 rules (see [70]). These simple rules, written, tested, and published at the CIPA Symposium in Sofia in 1988, should be observed when photographing with non-metric cameras. They are divided into geometric rules (control, wide-area stereo photo cover, and detail stereo photo cover), camera rules (camera properties, camera calibration, and image exposure), and procedural rules (record the photo layout, log the metadata, and archive). To reach a proper level of detail on the facades, which present features such as coats of arms, images were taken at two different distances, as suggested in [70]: at 10 m for the wide-area stereo photo cover, and at 4 m for close-up images providing the detail stereo photo cover.
A critical aspect of the 3D reconstruction of such complex objects, where facade images are taken at very low height, much like a terrestrial acquisition scheme, is ensuring that the algorithm can identify the same key points as tie points between images. To achieve this, the survey strategy focused on a noteworthy image overlap, both cross- and along-track: the mean along-track overlap was 80%, while the cross-track overlap was 55%.

3.4. Ground Control Points

The scaling of the 3D model was based on a support survey conducted with a professional TOPCON total station. This choice was driven by two main factors: the impracticability of employing a GNSS receiver in the highly degraded urban canyon, and the type of object being modeled. An open polygonal consisting of five ground points was thus created, as shown in Figure 4; then, stationing on those points, a detailed survey was carried out to measure several support points, of which 12 were selected for model scaling: 4 on the south facade, 4 on the north facade, and 4 on the extrados. The GCPs were chosen with a preference for well-identifiable features such as tuff edges and net-holding anchors, as shown in Figure 5, Figure 6 and Figure 7. All surveyed points were expressed in a local coordinate system, and care was taken to ensure that the gathered points were also visible in the model. Given the historical value of the overpass, we were not allowed to place dedicated targets during the survey; the accuracy associated with the chosen natural points was therefore set to 0.005 m, to account for the uncertainty related to this type of feature.

4. Results

4.1. Internal and External Orientation Results

Figure 8 shows a visual representation of the BBA orientation results from Metashape® and MicMac®. Panel (a) reports the image poses, represented as blue rectangles, oriented according to the Metashape® external orientation; panel (b) depicts the camera poses, represented as pyramid-shaped 3D objects, computed by MicMac®. The two visualizations differ because the two software packages use different strategies to display camera pose information.
To choose the best camera calibration model for this case study, various relative orientation processes were carried out with the Radial, Fraser, Four15x4, Brown, and Ebner models. Some did not converge at all, while others exhibited high image residuals. For this dataset, the best trade-off between convergence and residual minimization was the Fraser model [71], a radial model for camera self-calibration with decentering and affine parameters. In particular, 12 degrees of freedom are taken into account: 1 for the focal length ($F$), 2 for the principal point ($PP_1$ and $PP_2$), 2 for the distortion center ($C_{dist,1}$ and $C_{dist,2}$), 3 for the radial distortion coefficients ($r_3$, $r_5$, and $r_7$), 2 for the decentering parameters ($P_1$ and $P_2$), and 2 for the affine (in-plane distortion) parameters ($b_1$ and $b_2$). The parameters computed by the self-calibration algorithm implemented in MicMac® and then imposed in Metashape® are reported in Table 3.
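For reference, a standard formulation of the Fraser self-calibration corrections (following [71]), where $(\bar{x}, \bar{y})$ are image coordinates relative to the principal point, $r^2 = \bar{x}^2 + \bar{y}^2$, and $K_1$, $K_2$, $K_3$ correspond to the radial coefficients $r_3$, $r_5$, $r_7$ above, is:

\begin{aligned}
\Delta x &= \bar{x}\,(K_1 r^2 + K_2 r^4 + K_3 r^6) + P_1\,(r^2 + 2\bar{x}^2) + 2 P_2\,\bar{x}\bar{y} + b_1 \bar{x} + b_2 \bar{y},\\
\Delta y &= \bar{y}\,(K_1 r^2 + K_2 r^4 + K_3 r^6) + P_2\,(r^2 + 2\bar{y}^2) + 2 P_1\,\bar{x}\bar{y},
\end{aligned}

with the focal length, principal point, and distortion center as the remaining estimated parameters.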
Parameters computed with the self-calibration method were then imposed in the Metashape® camera calibration to set the same conditions for the comparison. The Apero® tool of MicMac® generates the external and internal orientations of the camera. The relative orientations were computed with the Tapas tool in two steps: first on a small set of images, then using the resulting calibration as the initial value for the global orientation of all images. The Campari command was then used to compensate heterogeneous measurements [13]. In Metashape®, camera alignment by bundle adjustment is achieved by detecting common tie points and matching them across images to compute the external camera orientation parameters for each picture; the software then solves for the camera internal and external orientation parameters, finding approximate camera locations and refining them later through the BBA. Figure 9 depicts the probability density estimate of the image residuals, expressed in pixels, obtained by computing the normalized histogram of the residuals: the height of each bar is the number of residual samples falling in that bin divided by the bin width and the total number of samples, so that the area of each bar equals the relative number of observations. This type of normalization is commonly used to estimate the PDF. Both histograms were calculated with a bin width of 0.03 pixels. The figure shows that the MicMac® image residual distribution is long-tailed, whereas the Metashape® one is short-tailed; indeed, the image residual standard deviation obtained with MicMac® is higher: 0.277 pixels, versus 0.173 pixels for Metashape®. Regarding the mean image residuals, the two packages show similar values: MicMac® reaches 0.770 pixels and Metashape® 0.735 pixels.
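This normalization is straightforward to reproduce; a minimal Python sketch, where the residuals file is a hypothetical export from either package's BBA report:

import numpy as np
import matplotlib.pyplot as plt

residuals = np.loadtxt("image_residuals_px.txt")     # hypothetical export

bin_width = 0.03                                     # pixels, as in the text
bins = np.arange(residuals.min(), residuals.max() + bin_width, bin_width)

# density=True divides each count by (N * bin_width), so bar areas sum to 1
plt.hist(residuals, bins=bins, density=True)
plt.xlabel("Image residual [px]")
plt.ylabel("Probability density")
plt.show()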

4.2. TP Clouds Results

The point cloud obtained with the commercial software was used as a reference for the quality assessment of the TP cloud generated by MicMac®. The analyses presented here were conducted with the FOSS CloudCompare ver. 2.11.3 (Anoia, 64-bit) and the Matlab R2021b Statistics toolbox. Various properties of the clouds were compared: number of points, point densities, GCP errors, and CP errors. Table 4 reports the main properties of the resulting TP clouds.
Table 4 demonstrates that the MicMac® TP cloud has significantly more points than the Metashape® cloud; this is attributable to the way points are managed in the two packages. Metashape® merges TPs into a single point via its gradual selection algorithm, whereas MicMac® performs no such merging; in fact, zooming into a MicMac® TP or dense cloud reveals clusters of points. A post-processing decimation can be conducted in MicMac® via the Schnaps tool to clean and reduce tie points. This is reflected in the other metrics reported here (computed considering a sphere with a radius of 0.1 m): the surface density of the TP cloud generated by MicMac® has a mean of 7503 points/m², 8863 points/m², and 32,192 points/m² for the extrados, the north facade, and the south facade, respectively; the corresponding values for the Metashape® TP cloud are 1410 points/m², 1947 points/m², and 2428 points/m². The standard deviations of the surface densities for the MicMac® TP cloud are 8276 points/m², 7537 points/m², and 25,250 points/m² for the extrados, north, and south facades, respectively, against 1004 points/m², 984 points/m², and 1489 points/m² for Metashape®. Lastly, another key aspect is the computation time. According to the Metashape® manual [66], the software does not support the GPU of the workstation employed in this work; on the other hand, according to [53], MicMac® provides GPU support only from dense matching onward. Hence, to ensure the same conditions, GPU support was not used at all. MicMac® took 23.48 h to complete the photogrammetric pipeline, while Metashape® took 45% less time, namely 12.34 h.
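The surface-density metric can also be approximated outside CloudCompare; the sketch below assumes an exported N×3 point file and the disc-area convention (neighbors within the sphere divided by πr²), which to our understanding matches CloudCompare's surface density estimate:

import numpy as np
from scipy.spatial import cKDTree

def surface_density(points: np.ndarray, radius: float = 0.1) -> np.ndarray:
    """Per-point surface density: neighbors within 'radius' over pi*r^2."""
    tree = cKDTree(points)
    counts = np.array([len(n) for n in tree.query_ball_point(points, radius)])
    return counts / (np.pi * radius ** 2)            # points per square metre

pts = np.loadtxt("micmac_tp_cloud.xyz")              # hypothetical Nx3 export
dens = surface_density(pts)
print(f"mean {dens.mean():,.0f} pts/m^2, std {dens.std():,.0f} pts/m^2")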
Figure 10 shows the TP clouds obtained with both software packages. As the visualization confirms, the MicMac® cloud is denser, especially on the facades; on the other hand, Metashape® provides a more homogeneous reconstruction of the whole model, whereas MicMac® shows some holes in the intrados reconstruction.
Table 5 and Table 6 report the 3D errors on GCPs and CPs for both software packages. Regarding the MicMac® GCP errors, except for marker P60, all 3D errors are below 0.018 m; for Metashape®, markers P07, P09, P18, and P60 exhibit 3D errors of about 0.025 m. Three of these belong to the south facade, suggesting that this was the most difficult part to reconstruct; this is confirmed by the MicMac® errors on the same markers, which, together with marker P68, are the only ones greater than 0.010 m. Table 6 shows that, except for marker P50, the 3D errors on CPs are comparable between the packages. Table 7 contains the corresponding error statistics. The errors are of the same order of magnitude. Regarding the GCPs, MicMac® shows smaller error statistics: the mean 3D error and standard deviation are 0.010 m and 0.007 m, respectively, against 0.015 m and 0.008 m for Metashape®. Conversely, regarding the CPs, the mean 3D error for MicMac® is 0.037 m with a standard deviation of 0.017 m, while for Metashape® it is 0.031 m with a standard deviation of 0.015 m.

4.3. Relative Accuracy

To assess the relative accuracy of both reconstructed models, four distances were computed between various GCPs and CPs. The true distances were calculated from the total station support survey; the same measurements were then taken on the 3D models using the CloudCompare software. Figure 11 reports an example for the measurement P7-P9, while Table 8 summarizes all the measurements carried out. The errors obtained are comparable; however, Metashape® always shows positive values, whereas for MicMac® three out of four values are negative. Moreover, the errors for Metashape® range from 0.007 m to 0.018 m, while for MicMac® they range from 0.004 m to 0.018 m.
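The relative-accuracy figures in Table 8 reduce to differences of Euclidean distances; a minimal sketch with placeholder coordinates (not the surveyed values):

import numpy as np

def dist(a, b) -> float:
    return float(np.linalg.norm(np.asarray(a) - np.asarray(b)))

# placeholder coordinates in the local system [m]
p7_ts,  p9_ts  = (0.00, 0.00, 0.00), (10.00, 0.50, 0.20)   # total station
p7_mod, p9_mod = (0.01, 0.00, 0.00), (10.00, 0.51, 0.21)   # picked on the model

error = dist(p7_mod, p9_mod) - dist(p7_ts, p9_ts)
print(f"relative error on P7-P9: {error:+.3f} m")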

4.4. Cloud-to-Cloud Distance

This section presents the Cloud-to-Cloud distance, computed with the M3C2 plugin of the CloudCompare software package [72]. The nearest neighbour principle is used to compute the distance between the two clouds: for each point in the compared cloud, the nearest point in the reference cloud is found and their Euclidean distance is computed [73]. Figure 12 shows the results of the Cloud-to-Cloud distance computation: the radius of the sphere in which the algorithm searches for nearest neighbors was set to 0.100 m, the Metashape® cloud was chosen as reference, and the results are visualized in color scale. The figure shows that the Cloud-to-Cloud distances are homogeneous, mainly on the facades, while some misalignment between the TP clouds is present under the intrados; nonetheless, this misalignment is below 0.002 m. Figure 13 shows the probability density function (PDF) estimate of the Cloud-to-Cloud distance between the MicMac® and Metashape® TP clouds: the mean distance between the two clouds is 0 mm and the standard deviation is 0.254 mm. More than 95% of the distance values fall within the interval [−1, 1] mm. These results confirm the high-level performance achievable with MicMac®, given that the two TP clouds are, on average, superimposable.
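The plain nearest-neighbour variant of this computation is straightforward to reproduce; a minimal sketch (the M3C2 algorithm additionally uses surface normals and local averaging, which are omitted here), with hypothetical file names:

import numpy as np
from scipy.spatial import cKDTree

ref = np.loadtxt("metashape_tp.xyz")   # reference cloud (hypothetical file)
cmp_ = np.loadtxt("micmac_tp.xyz")     # compared cloud (hypothetical file)

# Euclidean distance from each compared point to its nearest reference point
d, _ = cKDTree(ref).query(cmp_)

print(f"mean {d.mean() * 1000:.3f} mm, std {d.std() * 1000:.3f} mm")
print(f"within 1 mm: {np.mean(d <= 0.001) * 100:.1f}%")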

4.5. Other Products

Both software packages can produce several other products derived from the TP cloud. First, the densification algorithms generate the dense cloud; textured and tiled models can then be obtained, as well as raster products such as the Digital Elevation Model (DEM) and orthophotos. In this section, as an example, the dense clouds generated by both packages are shown and their main features compared. Figure 14 and Figure 15 report the 3D views of the dense clouds obtained with MicMac® and Metashape®, respectively.
Table 9 shows the number of points and the surface densities after densification. Following the Metashape® manual [66] and the MicMac® wiki [53], equivalent parameters were employed to ensure proper comparability: in Metashape®, the quality was set to high, whereas in MicMac®, when launching the Malt tool, the parameter ZoomF was set to 2. As shown in Table 9, for the extrados, the dense cloud obtained with MicMac® has about 20% more points than the Metashape® dense cloud; for the north facade, the FOSS-derived dense cloud has twice the points of the commercial one; lastly, the MicMac® dense cloud of the south facade has about 35% more points than the corresponding Metashape® cloud. However, a comparison between dense clouds from different software packages is difficult, given the different algorithms and strategies employed; hence, for this product, we limit ourselves to reporting the resulting values. A dedicated study would be needed to deepen this specific aspect.

5. Conclusions and Future Works

Unmanned aerial vehicles are playing a major role in the acquisition of geospatial information. With the significantly lower costs of UAV surveys and the high quality of the derived products, this technique proves useful for several applications and is most certainly a valid alternative to traditional techniques. Due to the payload limitations of prosumer UAVs, the reliability of the results may not match that of other techniques, e.g., classical aerial photogrammetry and LiDAR; for this reason, an accuracy assessment of different software packages can prove truly interesting. The present work is a case study of the reconstruction of a historical overpass in Italy carried out with the MicMac® open source software and compared against the Metashape® commercial photogrammetry solution. Particular attention was paid to the image acquisition procedure since, as is well known in the literature, it has a relevant impact on the precision of the results.
The quality of the photogrammetric results was assessed by analyzing the image residuals, the statistics of GCP and CP errors, the relative accuracy, and the Cloud-to-Cloud distance. Regarding the mean image residuals, the two packages show similar values: MicMac® reaches 0.770 pixels and Metashape® 0.735 pixels, with standard deviations of 0.277 pixels and 0.173 pixels, respectively. Analyzing the 3D errors on CPs, MicMac® revealed slightly worse statistics: its mean 3D error is 0.037 m with a standard deviation of 0.017 m, while the mean 3D error obtained with Metashape® is 0.031 m with a standard deviation of 0.015 m. As for the relative accuracy assessment, four linear distances measured on the 3D models revealed mean errors of 0.008 m and 0.011 m for MicMac® and Metashape®, respectively. Lastly, the Cloud-to-Cloud distance analysis between the TP clouds showed that more than 95% of the distance values fall within the interval [−1, 1] mm, and the probability density estimate has zero mean. Beyond this numerical analysis, which demonstrates similar results, a visual investigation reveals that the Metashape® TP cloud presents fewer holes than the MicMac® one, especially under the overpass.

A relevant advantage of MicMac® resides in the accessibility of intermediate results and in the capability of finely tuning practically all parameters; the trade-off of this extreme flexibility lies in the difficulty of use. Conversely, the nontrivial license cost of the commercial software is justified by its ease of use and user-friendly interface, which allows anyone with minimal experience to perform photogrammetric 3D model reconstruction, but it offers few intermediate outputs and limited parameter control. It should be highlighted that a comparison between software packages is difficult, given the secrecy of the commercial software, the long computation times, and the theoretical differences underlying the approaches. Moreover, the reconstructed object is particularly complex, being a historical overpass with several details; it is placed in an urban canyon where GNSS support cannot be exploited and where the scenario changes continuously due to vehicular traffic.

The analyses conducted show that the two software packages deliver products of comparable quality even though their approaches differ: a fully customizable open-source project versus a commercial black-box software. The experiment thus confirms the potential of FOSS for photogrammetric applications, and we can state that MicMac® can produce professional results comparable with those generated by a market-leading commercial solution. Future work will consider other types of objects to reconstruct via photogrammetry, to characterize more completely the behaviour of the investigated FOSS when dealing with different geometries; in-depth analyses of other products, e.g., the DEM and orthomosaic, will also be carried out. Another interesting future goal is to further improve the cost-effectiveness of the equipment by employing a compact, ultra-low-cost UAV; indeed, the literature so far offers few relevant studies on the use of such cost-effective platforms for surveying and 3D reconstruction. This can represent a promising avenue of research to define a correct procedure for 3D model generation and quality assessment employing ultra-low-cost equipment.

Author Contributions

Conceptualization, M.C., U.R. and G.P.; methodology, M.C., U.R. and G.P.; software, M.C.; validation, M.C. and U.R.; formal analysis, M.C., U.R. and G.P.; investigation, M.C., U.R. and G.P.; data curation, M.C. and G.P.; writing—original draft preparation, M.C. and U.R.; writing—review and editing, M.C., U.R. and G.P.; visualization, M.C. and U.R.; supervision, U.R. and G.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
CIPA	Comité International de Photogrammétrie Architecturale
CMOS	Complementary Metal-Oxide-Semiconductor
CP	Check Point
DEM	Digital Elevation Model
ENSG	French National School for Geographic Sciences
EXIF	Exchangeable Image File
FOSS	Free and Open-Source Software
GCP	Ground Control Point
GIS	Geographical Information System
GLONASS	Global’naja Navigacionnaja Sputnikovaja Sistema
GPS	Global Positioning System
GPU	Graphics Processing Unit
GNSS	Global Navigation Satellite System
IGN	French National Geographic Institute
LiDAR	Light Detection and Ranging
SIFT	Scale-Invariant Feature Transform
TP	Tie Point
UAV	Unmanned Aerial Vehicle

Appendix A. MicMac® Processing Pipeline

Once the image dataset is ready for processing (inappropriate images must be removed manually), the first command to run is Tapioca, which computes the TPs:
mm3d Tapioca MulScale "DJI.*.JPG" 500 -1
The argument MulScale means that MicMac® computes TPs first on low-resolution images (500-pixel width) and then at the highest resolution (-1); this implementation speeds up the process. The next command is Tapas, which computes the internal and external orientation parameters:
mm3d Tapas Fraser "DJI.*.JPG" Out=All-Rel
When dealing with large datasets, a convenient procedure is to run the Tapas command over a sub-dataset, or over another dataset acquired with the same camera settings; in this manner, when the command is run on the whole dataset, the process is less time-consuming. Tapas creates a named directory containing the camera calibration file, with the camera parameters (focal length, principal point, distortion parameters), and one orientation file per image, with the camera orientation, the TPs used for orientation, and the rotation parameters. Further important information is stored in the file named “Residuals.xml” (image residuals, number of TPs used per image, etc.). Then, the AperiCloud command generates a visualization of the 3D TP cloud previously computed by Tapas, along with the camera positions:
mm3d Apericloud "DJI.*.JPG" All-Rel
At this point, the measurement of the GCPs on the images must be conducted; this can be performed with the graphic tool SaisieAppuisInitQT as follows:
mm3d SaisieAppuisInitQT "DJI_[8||9].JPG" All-Rel 0001 GCP.xml
where “All-Rel” is the previously generated oriented model folder, “0001” is the name of the marker, and “GCP.xml” is a properly formatted XML file containing the information about the markers (name, X, Y, and Z coordinates, and uncertainties). A minimum of three support points must be defined, each visible on at least two images.
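For reference, the GCP file follows MicMac®'s DicoAppuisFlottant structure; the example below is a minimal sketch with hypothetical coordinates, using the 0.005 m uncertainty adopted in this work:

<?xml version="1.0"?>
<DicoAppuisFlottant>
     <OneAppuisDAF>
          <Pt>1043.512 998.274 101.330</Pt>
          <NamePt>0001</NamePt>
          <Incertitude>0.005 0.005 0.005</Incertitude>
     </OneAppuisDAF>
</DicoAppuisFlottant>

The computation of the absolute orientation then follows: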
mm3d GCPBascule ".*JPG" Ori-All Ori-All-Basc GCP.xml GCP-S2D.xml
where “Ori-All-Basc” is the output name of the GCP orientation folder. The previously defined GCPs must then be validated on the other images composing the dataset; this is conducted with the SaisieAppuisPredicQT command, as follows:
mm3d SaisieAppuisPredicQT "DJI*.*JPG" Ori-All-Basc GCP.xml GCP-Final.xml
The computation of the absolute orientation is updated on the basis of the GCPs validated on the whole dataset with the following command:
mm3d GCPBascule ".*JPG" Ori-All Ori-All-Basc2 GCP.xml GCP-Final-S2D.xml
The last step of the measurement process is computing the final adjustment with Campari:
mm3d Campari ".*JPG" Ori-All-Basc2 Ori-Terrain GCP=[GCP.xml,0.02,GCP-Final-S2D.xml,0.5]
where 0.02 is the support point accuracy (in meters) and 0.5 is the accuracy of the tie points (in pixels). Once the measurement process is accomplished, the AperiCloud tool is launched to create the 3D TP cloud, on which the user can define a 3D mask to limit the densification area, as follows:
mm3d AperiCloud ".*JPG" Ori-Terrain
mm3d SaisieMasqQT AperiCloud_Ori-Terrain.ply
Once the mask is created, the 3D reconstruction (densification) can be conducted via the following:
mm3d C3DC MicMac "DJI_*.*JPG" Ori-Terrain Masq3D=AperiCloud_Ori-Terrain.ply Out=C3DC_MicMac_ponte.ply
Once the photogrammetric process is finished, products other than the TP and dense clouds can be built, primarily with the following tools:
  • Tawny, which mosaics the individual orthorectified images into an orthophoto;
  • GrShade, which creates a shaded relief image; and,
  • to8Bits, which converts the output images to 8 bits, e.g., for a hypsometric color image.
Figure A1 depicts the MicMac® processing pipeline limited to TP cloud generation and scaling. Since the generation of further products is not of interest for the present work, the flowchart does not include the subsequent steps, e.g., dense cloud and orthomosaic generation.
Figure A1. Flowchart of MicMac® processing pipeline. Please note that the pipeline is limited to the TP cloud generation and scaling.

Appendix B. Agisoft Metashape® Processing Pipeline

The described standard workflow can be summarized as follows (a scripted equivalent is sketched after the list):
  • Add photos to the project;
  • Align photos to create the TP cloud (accuracy = high, key point limit = none, TP limit = none);
  • Build dense cloud to densify the TP cloud;
  • Build mesh to create a triangular mesh on the point cloud and to obtain a surface model;
  • Build texture to create a texture and wrap it on the model; and,
  • Insert markers and define which are GCPs and which CPs.
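The same workflow can also be driven from a script. Below is a minimal sketch assuming the Metashape® Pro Python module (1.7-era API); the paths are hypothetical, downscale=1 is our understanding of the GUI “High” accuracy (full-resolution matching), and the limits are set to 0 to disable them, matching the “none” settings above:

import Metashape  # Metashape Pro Python module (1.7-era API assumed)

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(["/data/ponte/DJI_0001.JPG", "/data/ponte/DJI_0002.JPG"])

# downscale=1 ~ GUI "High" accuracy; 0 would upscale the images ("Highest")
chunk.matchPhotos(downscale=1, keypoint_limit=0, tiepoint_limit=0)
chunk.alignCameras()                     # TP cloud and camera orientations

chunk.buildDepthMaps(downscale=2)        # ~ GUI "High" quality
chunk.buildDenseCloud()
chunk.buildModel()                       # triangular mesh on the point cloud
chunk.buildUV()
chunk.buildTexture()
doc.save("/data/ponte/ponte.psx")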

References

  1. Berrett, B.E.; Vernon, C.A.; Beckstrand, H.; Pollei, M.; Markert, K.; Franke, K.W.; Hedengren, J.D. Large-Scale Reality Modeling of a University Campus Using Combined UAV and Terrestrial Photogrammetry for Historical Preservation and Practical Use. Drones 2021, 5, 136. [Google Scholar] [CrossRef]
  2. Leberl, F.; Bischof, H.; Pock, T.; Irschara, A.; Kluckner, S. Aerial Computer Vision for a 3D Virtual Habitat. IEEE Comput. 2010, 43, 24–31. [Google Scholar] [CrossRef]
  3. Lichti, D.; Gordon, S.; Stewart, M.; Franke, J.; Tsakiri, M. Comparison of digital photogrammetry and laser scanning, laser scanner behaviour and accuracy, close-range imaging, long-range vision. In Proceedings of the ISPRS Commission V, Symposium, Corfu, Greece, 2–6 September 2002; pp. 39–44. [Google Scholar]
  4. 3DFlow SRL Website. Available online: https://www.3dflow.net/it/software-di-fotogrammetria-3df-zephyr (accessed on 10 March 2022).
  5. Autodesk Inc. Available online: https://www.autodesk.it/products/recap/overview (accessed on 10 March 2022).
  6. Agisoft Website. Available online: https://www.agisoft.com/ (accessed on 11 March 2022).
  7. BAE Systems. Available online: https://www.geospatialexploitationproducts.com/content/ (accessed on 31 March 2022).
  8. Bentley Systems Inc. Available online: https://www.bentley.com/it/products/brands/contextcapture (accessed on 10 March 2022).
  9. ColMap Documentation. Available online: https://colmap.github.io/index.html (accessed on 10 March 2022).
  10. DroneDeploy. Available online: https://www.dronedeploy.com/ (accessed on 10 March 2022).
  11. Planetek Italia s.r.l. Available online: https://www.planetek.it/prodotti/tutti_i_prodotti/imagine_photogrammetry_lps (accessed on 1 August 2022).
  12. AliceVision. Available online: https://alicevision.org/#meshroom (accessed on 10 March 2022).
  13. MicMac Wiki. Available online: https://micmac.ensg.eu/index.php/Accueil (accessed on 10 March 2022).
  14. University of Darmstadt. Available online: https://www.gcc.tu-darmstadt.de/home/proj/mve/ (accessed on 10 March 2022).
  15. Photometrix Software. Available online: https://www.photometrix.com.au/it/iwitness/ (accessed on 31 March 2022).
  16. Photomodeler Technologies. Available online: https://www.photomodeler.com/ (accessed on 10 March 2022).
  17. Pix4d. Available online: https://www.pix4d.com/product/pix4dmapper-photogrammetry-software (accessed on 10 March 2022).
  18. PMS, AG. Available online: https://en.elcovision.com/ (accessed on 31 March 2022).
  19. OpenDroneMap. Available online: https://www.opendronemap.org/webodm/ (accessed on 10 March 2022).
  20. OpenMVG Github Page. Available online: https://github.com/openMVG/openMVG/wiki (accessed on 10 March 2022).
  21. Capturing Reality Website. Available online: https://www.capturingreality.com/ (accessed on 10 March 2022).
  22. SimActive Inc. Available online: https://www.simactive.com/correlator3d-mapping-software-features.html (accessed on 10 March 2022).
  23. Regard3D. Available online: https://www.regard3d.org/ (accessed on 10 March 2022).
  24. Trimble Inc. Available online: https://geospatial.trimble.com/products-and-solutions/inpho (accessed on 10 March 2022).
  25. Wu, C. Available online: http://ccwu.me/vsfm/ (accessed on 10 March 2022).
  26. Śledź, S.; Ewertowski, M.; Piekarczyk, J. Applications of unmanned aerial vehicle (UAV) surveys and Structure from Motion photogrammetry in glacial and periglacial geomorphology. Geomorphology 2021, 378, 107620. [Google Scholar] [CrossRef]
  27. Tomczyk, A.M.; Ewertowski, M.W. UAV-based remote sensing of immediate changes in geomorphology following a glacial lake outburst flood at the Zackenberg river, northeast Greenland. J. Maps 2020, 16, 86–100. [Google Scholar] [CrossRef]
  28. Fabbri, S.; Grottoli, E.; Armaroli, C.; Ciavola, P. Using High-Spatial Resolution UAV-Derived Data to Evaluate Vegetation and Geomorphological Changes on a Dune Field Involved in a Restoration Endeavour. Remote Sens. 2021, 13, 1987. [Google Scholar] [CrossRef]
  29. Lama, G.F.C.; Crimaldi, M.; Pasquino, V.; Padulano, R.; Chirico, G.B. Bulk Drag Predictions of Riparian Arundo donax Stands through UAV-Acquired Multispectral Images. Water 2021, 13, 1333. [Google Scholar] [CrossRef]
  30. Goodbody, T.R.; Coops, N.C.; Marshall, P.L.; Tompalski, P.; Crawford, P. Unmanned aerial systems for precision forest inventory purposes: A review and case study. For. Chron. 2017, 93, 71–81. [Google Scholar] [CrossRef]
  31. Feng, Q.; Liu, J.; Gong, J. UAV Remote Sensing for Urban Vegetation Mapping Using Random Forest and Texture Analysis. Remote Sens. 2015, 7, 1074–1094. [Google Scholar] [CrossRef]
  32. Liang, X.; Wang, Y.; Pyörälä, J.; Lehtomäki, M.; Yu, X.; Kaartinen, H.; Kukko, A.; Honkavaara, E.; Issaoui, A.E.I.; Nevalainen, O.; et al. Forest in situ observations using unmanned aerial vehicle as an alternative of terrestrial measurements. For. Ecosyst. 2019, 6, 20. [Google Scholar] [CrossRef]
  33. Belcore, E.; Pittarello, M.; Lingua, A.; Lonati, M. Mapping Riparian Habitats of Natura 2000 Network (91E0*, 3240) at Individual Tree Level Using UAV Multi-Temporal and Multi-Spectral Data. Remote Sens. 2021, 13, 1756. [Google Scholar] [CrossRef]
  34. Rajan, J.; Shriwastav, S.; Kashyap, A.; Ratnoo, A.; Ghose, D. Chapter 6—Disaster management using unmanned aerial vehicles. In Unmanned Aerial Systems; Advances in Nonlinear Dynamics and Chaos, (ANDC); Koubaa, A., Azar, A.T., Eds.; Academic Press: Cambridge, MA, USA, 2021; pp. 129–155. [Google Scholar] [CrossRef]
  35. Luo, C.; Miao, W.; Ullah, H.; McClean, S.; Parr, G.; Min, G. Geological Disaster Monitoring Based on Sensor Networks. In Unmanned Aerial Vehicles for Disaster Management; Geological Disaster Monitoring Based on Sensor Networks: Singapore, 2019; pp. 83–107. [Google Scholar] [CrossRef]
  36. Erdelj, M.; Natalizio, E. UAV-assisted disaster management: Applications and open issues. In Proceedings of the 2016 International Conference on Computing, Networking and Communications (ICNC), Kauai, HI, USA, 15–18 February 2016; pp. 1–5. [Google Scholar] [CrossRef]
  37. Congress, S.S.; Puppala, A.J.; Lundberg, C.L. Total system error analysis of UAV-CRP technology for monitoring transportation infrastructure assets. Eng. Geol. 2018, 247, 104–116. [Google Scholar] [CrossRef]
  38. Maltezos, E.; Skitsas, M.; Charalambous, E.; Koutras, N.; Bliziotis, D.; Themistocleous, K. Critical infrastructure monitoring using UAV imagery. In Proceedings of the RSCy 2016 Fourth International Conference on Remote Sensing and Geoinformation of the Environment, Paphos, Cyprus, 4–8 April 2016. [Google Scholar] [CrossRef]
  39. Ham, Y.; Han, K.; Lin, J.; Golparvar-Fard, M. Visual monitoring of civil infrastructure systems via camera-equipped Unmanned Aerial Vehicles (UAVs): A review of related works. Vis. Eng. 2016, 4, 1. [Google Scholar] [CrossRef]
  40. Lo Brutto, M.; Garraffa, A.; Meli, P. UAV Platforms for Cultural Heritage Survey: First Results. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, II-5. [Google Scholar] [CrossRef]
  41. Themistocleous, K. The Use of UAVs for Cultural Heritage and Archaeology. In Remote Sensing for Archaeology and Cultural Landscapes: Best Practices and Perspectives Across Europe and the Middle East; Springer International Publishing: Cham, Switzerland, 2020; pp. 241–269. [Google Scholar] [CrossRef]
  42. Bakirman, T.; Bayram, B.; Akpinar, B.; Karabulut, M.F.; Bayrak, O.C.; Yigitoglu, A.; Seker, D.Z. Implementation of ultra-light UAV systems for cultural heritage documentation. J. Cult. Herit. 2020, 44, 174–184. [Google Scholar] [CrossRef]
  43. Barazzetti, L.; Binda, L.; Cucchi, M.; Scaioni, M.; Taranto, P. Photogrammetric reconstruction of the My Son G1 temple in Vietnam. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2009, 38, 8. [Google Scholar]
  44. Righetti, G.; Serafini, S.; Brondi, F.; Church, W.; Garnero, G. Survey of a Peruvian Archaeological Site Using LiDAR and Photogrammetry: A Contribution to the Study of the Chachapoya. In Computational Science and Its Applications, 21st ed.; Springer: Cham, Switzerland, 2021; pp. 613–628. [Google Scholar]
  45. Liu, Q.; Li, S.; Tian, X.; Fu, L. Dominant Trees Analysis Using UAV LiDAR and Photogrammetry. In Proceedings of the IGARSS 2020—2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, HI, USA, 26 September–2 October 2020; pp. 4649–4652. [Google Scholar] [CrossRef]
  46. Marques Freguete, L.; Chu, T.; Starek, M. Mapping with LIDAR and structure-from-motion photogrammetry: Accuracy assessment of point cloud over multiple platforms. In Proceedings of Remote Sensing Technologies and Applications in Urban Environments VI; SPIE Remote Sensing; SPIE: Bellingham, WA, USA, 2021; p. 11. [Google Scholar] [CrossRef]
  47. Griffiths, D.; Burningham, H. Comparison of pre-and self-calibrated camera calibration models for UAS-derived nadir imagery for a SfM application. Prog. Phys. Geogr. Earth Environ. 2019, 43, 215–235. [Google Scholar] [CrossRef]
  48. Altman, S.; Xiao, W.; Grayson, B. Evaluation of low-cost terrestrial photogrammetry for 3d reconstruction of complex buildings. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 4, 199–206. [Google Scholar] [CrossRef]
  49. Jaud, M.; Passot, S.; Le Bivic, R.; Delacourt, C.; Grandjean, P.; Le Dantec, N. Assessing the Accuracy of High Resolution Digital Surface Models Computed by PhotoScan® and MicMac® in Sub-Optimal Survey Conditions. Remote Sens. 2016, 8, 465. [Google Scholar] [CrossRef]
  50. Smith, M.; Carrivick, J.; Quincey, D. Structure from motion photogrammetry in physical geography. Prog. Phys. Geogr. Earth Environ. 2016, 40, 247–275. [Google Scholar] [CrossRef]
  51. Deseilligny, M.; Clery, I. APERO, an open source bundle adjusment software for automatic calibration and orientation of set of images. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2011, XXXVIII-5/W16, 269–276. [Google Scholar] [CrossRef]
  52. Pierrot-Deseilligny, M. MicMac, Apero, Pastis and Other Beverages in a Nutshell! Available online: http://logiciels.ign.fr/IMG/pdf/docmicmac-2.pdf (accessed on 11 April 2022).
  53. MicMac Wiki. Available online: https://micmac.ensg.eu/index.php/MicMac_tools (accessed on 1 March 2022).
  54. Lowe, D. Distinctive Image Features from Scale-Invariant Keypoints. Int. J. Comput. Vis. 2004, 60, 91. [Google Scholar] [CrossRef]
  55. Deseilligny, M.; Bosser, P.; Pichard, F.; Thom, C. UAV onboard photogrammetry and GPS positioning for earthworks. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2015, XL-3/W3, 293–298. [Google Scholar] [CrossRef]
  56. Peipe, A.J.; Tecklenburg, B.W. Photogrammetric Camera Calibration Software—A Comparison; ISPRS Commission V, WG V/1; Elsevier: Hannover, Germany, 2006. [Google Scholar]
  57. Vedaldi, A. An Open Implementation of the SIFT Detector and Descriptor; UCLA CSD Technical Report 070012; UCLA CSD: Los Angeles, CA, USA, 2007. [Google Scholar]
  58. Murtiyoso, A.; Grussenmeyer, P.; Börlin, N.; Vandermeerschen, J.; Freville, T. Open Source and Independent Methods for Bundle Adjustment Assessment in Close-Range UAV Photogrammetry. Drones 2018, 2, 3. [Google Scholar] [CrossRef]
  59. Rupnik, E.; Daakir, M.; Deseilligny, M.P. MicMac—A free, open-source solution for photogrammetry. Open Geospat. Data Softw. Stand. 2017, 2, 1–9. [Google Scholar] [CrossRef]
  60. Nocedal, J.; Wright, S.J. Conjugate gradient methods. In Numerical Optimization; Springer: Berlin/Heidelberg, Germany, 2006; pp. 101–134. [Google Scholar]
  61. Chiabrando, F.; Donadio, E.; Rinaudo, F. SfM for orthophoto to generation: A winning approach for cultural heritage knowledge. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, 40, 91. [Google Scholar] [CrossRef]
  62. Snavely, N.; Seitz, S.; Szeliski, R. Photo tourism: Exploring photo collections in 3D. ACM Trans. Graph. 2006, 25, 835–846. [Google Scholar] [CrossRef]
  63. Snavely, N.; Seitz, S.; Szeliski, R. Modeling the World from Internet Photo Collections. Int. J. Comput. Vis. 2008, 80, 189–210. [Google Scholar] [CrossRef]
  64. Verhoeven, G. Taking computer vision aloft: Archaeological three-dimensional reconstructions from aerial photographs with PhotoScan. Archaeol. Prospect. 2011, 18, 67–73. [Google Scholar] [CrossRef]
  65. Doneus, M.; Verhoeven, G.; Fera, M.; Briese, C.; Kucera, M.; Neubauer, W. From deposit to point cloud: A study of low-cost computer vision approaches for the straightforward documentation of archaeological excavations. Geoinformatics 2011, 6, 81–88. [Google Scholar] [CrossRef]
  66. Agisoft LLC. Available online: https://www.agisoft.com/pdf/metashape-pro_1_7_en.pdf (accessed on 15 September 2021).
  67. Colarullo, M. Rilievo Topografico e Valutazione della Sicurezza del Ponte della Cerra in Napoli. Bachelor’s Thesis, Federico II University of Naples, Napoli, Italy, 2021. [Google Scholar]
  68. DJI. Available online: https://www.dji.com/it/mavic-2/info#specs (accessed on 13 March 2022).
  69. Pádua, L.; Adão, T.; Hruška, J.; Marques, P.; Sousa, A.; Morais, R.; Lourenço, M.; Sousa, J.; Peres, E. UAS-based photogrammetry of cultural heritage sites: A case study addressing Chapel of Espírito Santo and photogrammetric software comparison. In Proceedings of the IGARSS 2020—2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, HI, USA, 26 September–2 October 2020. [Google Scholar]
  70. Atkinson, K.B. Close Range techniques and machine vision. Photogramm. Rec. 1994, 14, 1001–1003. [Google Scholar] [CrossRef]
  71. Fraser, C.S. Digital camera self-calibration. ISPRS J. Photogramm. Remote Sens. 1997, 52, 149–159. [Google Scholar] [CrossRef]
  72. Lague, D.; Brodu, N.; Leroux, J. Accurate 3D comparison of complex topography with terrestrial laser scanner: Application to the Rangitikei canyon (N-Z). ISPRS J. Photogramm. Remote Sens. 2013, 82, 10–26. [Google Scholar] [CrossRef]
  73. Shen, Y.; Lindenbergh, R.; Wang, J. Change Analysis in Structural Laser Scanning Point Clouds: The Baseline Method. Sensors 2016, 17, 26. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Comparison between the commands in the two software packages investigated. The top row indicates commands for the FOSS, the middle row refers to the commercial software, and the bottom row reports the corresponding photogrammetric processing stages.
Figure 2. Test area: (a) location map; (b) location in Southern Italy; (c) Conte della Cerra overpass.
Figure 3. DJI Mavic 2 Pro.
Figure 4. Open traverse created for the topographic support survey, projected onto the cartography.
Figure 5. View of the south facade with marker locations. GCPs are shown in red, while the CP is shown in green.
Figure 6. View of the north facade with marker locations. GCPs are shown in red, while the CP is shown in green.
Figure 7. View of the north side of the extrados with marker locations. The left image shows the left part of the extrados, while the right image shows the right part. GCPs are shown in red, while CPs are shown in green.
Figure 8. Results of the external orientation process showing the positions and attitudes of each camera station, together with the TP clouds generated from the respective feature matching process: (a) Metashape®; (b) Apero®.
Figure 9. Image residual probability density function estimates. The x-axis reports image residuals in pixels, while the y-axis reports the estimated probability density. Blue and red bins represent MicMac® and Metashape®, respectively.
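For readers who wish to reproduce this kind of plot, the sketch below builds a normalized-histogram density estimate with NumPy. The residual values are synthetic placeholders, not the residuals analyzed in the paper.

```python
import numpy as np

# Synthetic stand-ins for per-tie-point image residuals in pixels;
# the paper's actual residuals come from the bundle adjustment reports.
rng = np.random.default_rng(seed=42)
micmac_residuals = rng.rayleigh(scale=0.60, size=10_000)
metashape_residuals = rng.rayleigh(scale=0.57, size=10_000)

# density=True normalizes bin heights so each histogram integrates to 1,
# yielding an empirical probability density estimate as in Figure 9.
bins = np.linspace(0.0, 3.0, 61)
micmac_pdf, _ = np.histogram(micmac_residuals, bins=bins, density=True)
metashape_pdf, _ = np.histogram(metashape_residuals, bins=bins, density=True)
```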
Figure 10. 3D view of the RGB TP clouds obtained with MicMac® and Metashape®. Panel (a) refers to the MicMac® TP cloud; panel (b) refers to the Metashape® TP cloud.
Figure 11. Relative model accuracy measurement between markers P7 and P9 displayed with model screenshots: (a) MicMac®; (b) Metashape®.
Figure 12. Cloud-to-Cloud distance of the MicMac® and Metashape® TP clouds computed with the M3C2 plugin in CloudCompare.
Figure 13. MicMac® and Metashape® Cloud-to-Cloud distance probability density estimate. The x-axis reports Cloud-to-Cloud distances in meters, while the y-axis reports the estimated probability density.
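The distances in Figures 12 and 13 were computed with the M3C2 algorithm [72] as implemented in CloudCompare. As a minimal illustration of the simpler nearest-neighbour Cloud-to-Cloud (C2C) variant, the sketch below uses the open-source Open3D library; Open3D is an assumption of this example (the paper itself does not use it), and the file names are hypothetical placeholders.

```python
import numpy as np
import open3d as o3d

# Load the two tie-point clouds to be compared (hypothetical file names).
source = o3d.io.read_point_cloud("micmac_tp_cloud.ply")
target = o3d.io.read_point_cloud("metashape_tp_cloud.ply")

# Plain nearest-neighbour C2C: for each point of `source`, the Euclidean
# distance to its closest point in `target`. M3C2 is more robust because it
# measures along locally estimated surface normals and averages over scales.
distances = np.asarray(source.compute_point_cloud_distance(target))

print(f"mean C2C distance: {distances.mean():.4f} m")
print(f"std of C2C distance: {distances.std():.4f} m")
```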
Figure 14. 3D view of the RGB dense cloud obtained with MicMac®.
Figure 15. 3D view of the RGB dense cloud obtained with Metashape®.
Table 1. Overview of available photogrammetric software packages.
Name | OS | Pricing
3DFlow Zephyr [4] | Windows | 360 EUR/month
Autodesk Recap [5] | Windows | 55 EUR/month
Agisoft Metashape [6] | Windows, macOS, Linux | 4075 EUR
BAE Systems SOCET GXP [7] | Windows | On request
Bentley ContextCapture [8] | Windows | from 211 EUR/month
ColMap [9] | Windows, macOS, Linux | Free
Drone Deploy [10] | Windows, macOS, Android, iOS | 299 EUR/month
Planetek IMAGINE [11] | Windows | On request
Meshroom [12] | Windows, Linux | Free
MicMac [13] | Windows, macOS, Linux | Free
Multi-view environment [14] | Windows, macOS | Free
Photometrix IWitness Pro [15] | Windows | 986 EUR
PhotoModeler [16] | Windows | from 50 EUR/month
Pix4D Mapper [17] | Windows, macOS, Android, iOS | from 185 EUR/month
PMS AG Elcovision 10 [18] | Windows | On request
OpenDroneMap WebODM [19] | Windows, macOS | from 50 EUR
OpenMVG [20] | Windows, macOS, Linux | Free
RealityCapture [21] | Windows | 3220 EUR
SimActive Correlator 3D [22] | Windows | from 250 EUR/month
Regard3D [23] | Windows, macOS, Linux | Free
Trimble InPho [24] | Windows | On request
VisualSFM [25] | Windows, macOS, Linux | Free
Table 2. Camera features and image settings.
Camera model | Hasselblad L1D-20c
Focal length | 10.3 mm
Image format | JPEG
Image width | 5472 pixels
Image height | 3648 pixels
Exposure time | 1/80 s
ISO sensitivity | 400
Pixel size | 2.41 μm × 2.41 μm
Table 3. Fraser camera model self-calibration parameters.
Parameter | Symbol | Value (pix)
Focal length | F | 4276.067
Principal point coordinates | PP1 | 2702.974
Principal point coordinates | PP2 | 1836.010
Distortion center coordinates | Cdist1 | 2686.749
Distortion center coordinates | Cdist2 | 1809.302
Radial distortion coefficients | r3 | 8.259 × 10⁻¹⁰
Radial distortion coefficients | r5 | 7.756 × 10⁻¹⁷
Radial distortion coefficients | r7 | 4.838 × 10⁻²⁴
Decentering parameters | P1 | 2.469 × 10⁻⁷
Decentering parameters | P2 | 2.251 × 10⁻⁷
Affine parameters | b1 | 1.017 × 10⁻⁴
Affine parameters | b2 | 1.802 × 10⁻⁴
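For context, a common textbook form of Fraser's self-calibration model [70,71] into which these parameters enter is sketched below. This is an illustrative formulation, not necessarily MicMac®'s exact implementation, which may differ in sign conventions and in how the separate distortion center is handled; here (x_p, y_p) stands for the distortion center.

```latex
% Image-coordinate corrections in a standard Fraser-type self-calibration
% model: radial, decentering, and affine terms.
\begin{align}
  \bar{x} &= x - x_{p}, \qquad \bar{y} = y - y_{p}, \qquad
  r^{2} = \bar{x}^{2} + \bar{y}^{2}, \\
  \Delta x &= \bar{x}\,(r_{3}\,r^{2} + r_{5}\,r^{4} + r_{7}\,r^{6})
            + P_{1}\,(r^{2} + 2\bar{x}^{2}) + 2P_{2}\,\bar{x}\bar{y}
            + b_{1}\bar{x} + b_{2}\bar{y}, \\
  \Delta y &= \bar{y}\,(r_{3}\,r^{2} + r_{5}\,r^{4} + r_{7}\,r^{6})
            + 2P_{1}\,\bar{x}\bar{y} + P_{2}\,(r^{2} + 2\bar{y}^{2}).
\end{align}
```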
Table 4. Summary of the main properties of the TP clouds.
Software | Section | Number of Points | Mean Surface Density (points/m²) | Std Surface Density (points/m²)
MicMac® | extrados | 648,197 | 7503 | 8276
MicMac® | north facade | 660,720 | 8863 | 7537
MicMac® | south facade | 2,265,025 | 32,192 | 25,250
Metashape® | extrados | 243,213 | 1410 | 1004
Metashape® | north facade | 263,958 | 1947 | 984
Metashape® | south facade | 430,178 | 2428 | 1489
Table 5. Summary of MicMac® and Metashape® residuals on GCPs.
Marker Label | MicMac® 3D Err. (m) | Metashape® 3D Err. (m)
P07 | 0.017 | 0.026
P09 | 0.014 | 0.023
P18 | 0.018 | 0.026
P33 | 0.005 | 0.013
P35 | 0.004 | 0.004
P54 | 0.005 | 0.006
P55 | 0.005 | 0.009
P60 | 0.023 | 0.025
P63 | 0.006 | 0.009
P66 | 0.007 | 0.012
P67 | 0.003 | 0.013
P68 | 0.013 | 0.009
Table 6. Summary of MicMac® and Metashape® residuals on CPs.
Marker Label | MicMac® 3D Err. (m) | Metashape® 3D Err. (m)
P20 | 0.020 | 0.023
P50 | 0.048 | 0.017
P64 | 0.024 | 0.034
P69 | 0.055 | 0.052
Table 7. Statistics on ground control points and check points errors.
Software | GCP 3D Error, Mean (m) | GCP 3D Error, Std (m) | CP 3D Error, Mean (m) | CP 3D Error, Std (m)
MicMac® | 0.010 | 0.007 | 0.037 | 0.017
Metashape® | 0.015 | 0.008 | 0.031 | 0.015
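The statistics in Table 7 are the mean and standard deviation of per-marker 3D errors, each error being the Euclidean norm of the residual between surveyed and model-estimated marker coordinates. A minimal NumPy sketch follows; the coordinates are illustrative placeholders, not the paper's data.

```python
import numpy as np

# Surveyed (total-station) and model-estimated marker coordinates in meters.
# Placeholder values for illustration only.
surveyed = np.array([
    [0.000, 0.000, 0.000],
    [2.389, 0.000, 0.000],
    [2.400, 5.100, 1.250],
])
estimated = np.array([
    [0.009, -0.004, 0.006],
    [2.395, 0.008, -0.011],
    [2.412, 5.093, 1.262],
])

# Per-marker 3D error: Euclidean norm of the coordinate residual.
errors_3d = np.linalg.norm(estimated - surveyed, axis=1)
print(f"mean 3D error: {errors_3d.mean():.3f} m, std: {errors_3d.std():.3f} m")
```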
Table 8. Relative accuracy measurements summary.
Control Line | True Dist. (m) | MicMac® Meas. (m) | MicMac® 3D Err. (m) | Metashape® Meas. (m) | Metashape® 3D Err. (m)
P7–P9 | 2.389 | 2.394 | 0.005 | 2.396 | 0.007
P11–P18 | 15.396 | 15.390 | −0.006 | 15.405 | 0.009
P9–P35 | 5.442 | 5.438 | −0.004 | 5.460 | 0.018
P56–P68 | 6.959 | 6.941 | −0.018 | 6.968 | 0.009
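Each control line in Table 8 compares an inter-marker distance measured on the scaled model against the corresponding total-station distance; the 3D error is their signed difference. A short sketch under the same placeholder-coordinate assumption as above:

```python
import numpy as np

# Marker coordinates picked on the scaled model (hypothetical values)
# and the ground-truth distance from the support survey.
p7_model = np.array([10.215, 4.330, 1.020])
p9_model = np.array([12.480, 4.912, 1.245])
true_distance = 2.389  # meters, from the total-station survey

measured = np.linalg.norm(p9_model - p7_model)
error_3d = measured - true_distance  # signed, as reported in Table 8
print(f"measured: {measured:.3f} m, 3D error: {error_3d:+.3f} m")
```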
Table 9. Summary of the main properties of the dense clouds.
Software | Section | Number of Points | Mean Surface Density (points/m²) | Std Surface Density (points/m²)
MicMac® | extrados | 12,445,984 | 170,382 | 30,687
MicMac® | north facade | 19,490,308 | 268,875 | 175,817
MicMac® | south facade | 51,604,250 | 1,000,355 | 480,666
Metashape® | extrados | 10,543,654 | 133,463 | 101,654
Metashape® | north facade | 9,341,169 | 118,795 | 65,411
Metashape® | south facade | 38,479,447 | 534,579 | 224,562