Article

Structural Elements Detection and Reconstruction (SEDR): A Hybrid Approach for Modeling Complex Indoor Structures

1  School of Environment Science and Spatial Informatics, China University of Mining and Technology, Xuzhou 221116, China
2  Department of Land Surveying and Geo-Informatics, Smart Cities Research Institute, The Hong Kong Polytechnic University, Hong Kong 999077, China
3  Public Works Department, Faculty of Engineering, Cairo University, Giza 12316, Egypt
*  Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2020, 9(12), 760; https://0-doi-org.brum.beds.ac.uk/10.3390/ijgi9120760
Submission received: 19 November 2020 / Revised: 15 December 2020 / Accepted: 18 December 2020 / Published: 19 December 2020

Abstract

We present a hybrid approach for modeling complex interior structural elements from an unstructured point cloud without additional information. The proposed approach focuses on an integrated modeling strategy that can reconstruct structural elements while balancing model completeness and quality. First, a data-driven approach detects the complete structure points of indoor scenarios, including curved wall structures and detailed structures. After down-sampling the point cloud dataset, ceiling and floor points are detected by RANSAC. The ceiling boundary points are selected as seed points of a growing algorithm to acquire points belonging to the wall segments. Detailed structure points are detected using the grid-slices analysis approach. Second, a model-driven refinement is applied to the structure points to decrease the impact of point cloud accuracy on the quality of the model. The RANSAC algorithm is used to detect a more accurate layout, and holes in the structure points are repaired in this refinement step. Lastly, the Screened Poisson surface reconstruction approach generates the model from the refined structure points. Our approach was validated on a backpack laser dataset, a handheld laser dataset, and a synthetic dataset, and the experimental results demonstrate that our approach can preserve curved wall structures and detailed structures in the model with high accuracy.

1. Introduction

Three-dimensional (3D) indoor models with high quality are widely used in many applications, such as construction planning and monitoring [1], indoor location and navigation [2], and virtual reality [3]. Up-to-date drawings of a 3D model of indoor scenarios are potentially required over the whole closed-loop lifecycle of a building, which includes design and planning, progress monitoring, construction quality control, facilities management, refurbishment, and deconstruction [4]. According to Volk et al. [5], most as-built buildings are not yet maintained, refurbished, or deconstructed with BIM, and uncertainties about building conditions and deficient documentation remain prevalent in existing buildings. Although different sources are used for data acquisition of interior structures, the point cloud dataset is the main source for reconstructing the interior model. Acquiring a point cloud of an indoor scenario can be accomplished with different platforms. We focus on 3D laser scanners in this paper; these scanners can be categorized as terrestrial laser scanners (TLS) and mobile laser scanners (MLS) based on the movement of the platform. Point clouds generated from a terrestrial laser scanner have the benefit of high accuracy but are time-consuming to acquire [1], since multiple setups are required to capture the whole indoor scene. On the contrary, the mobile laser scanner overcomes the disadvantage of fixed scanning stations, but with lower accuracy compared with TLS [6]. The backpack laser scanner is designed to acquire data at different levels or even in small spaces of indoor scenes. In terms of equipment price, laser scanners are more expensive than RGB-D cameras [7], which are not laser-based scanners. The indoor environment is characterized by a complex layout and the presence of various furniture and objects, and current practice still demands manual or interactive processing, which is time-consuming and requires professional skills. Therefore, the key challenge is to automatically reconstruct interior models with complex structures.
Reconstruction of 3D interior models from the acquired point cloud depends mainly on the detection and modeling of structural elements. The reconstruction procedures can be classified into two categories: plane-based and line-based approaches. Plane-based approaches classify the whole scene into main primitives and represent the structural elements by planar surfaces. When a vertical wall surface is projected onto the horizontal plane, it becomes a straight or curved line, and that line is used to reconstruct the 3D model; this is the basic strategy of line-based reconstruction. The 3D models generated by these approaches can be categorized as surface models [8,9] and volume models [4,10], and the main difference between the two is the description of wall structures. The BIM model is a semantically rich representation of buildings that includes not only geometry but also semantics and topology [9], and the Industry Foundation Classes (IFC) format is one of the common indoor modeling standards. Semi-automatic software such as Trimble RealWorks [11], CloudCompare [12], and 3DReshaper [13] was developed to generate meshes or geometric primitives from the point cloud [14]. However, generating a BIM model from the reconstructed 3D model requires a conversion step. Murali et al. [15] used the interior design software Planner5D [16] to correct small errors from automatic modeling and add furniture to the BIM model. In the work of Previtali et al. [17], the commercial software Rhinoceros [18] was used to transfer the surface model to a volumetric model in IFC format. The obj format developed by Wavefront Technologies was chosen by Macher et al. [14] as the transition format towards IFC: all structural elements were saved in an obj file, and the open-source 3D CAD software FreeCAD [19] was used to convert the obj file to IFC format.
Line-based reconstruction approaches represent the main planar surfaces by line segments. Wang et al. [20] proposed an approach based on a decomposition-and-reconstruction strategy to process unorganized point clouds and the corresponding trolley trajectory, which can identify each room and reconstruct the 3D building model. The average errors of the reconstructed models are 0.394 cm to 3.528 cm. Although this approach can detect holes and classify them as open doors, the trajectory remains an essential input. The same limitation can be found in the approach proposed by Mura et al. [8]. That approach starts with the segmentation of planar patches, and normal deviation and the least-median-of-squares algorithm are used to detect vertical planar patches as potential wall patches. A lightweight visibility test based on the scanner position and infinite shadow volumes is utilized to recover the unoccluded extent of each candidate wall patch. The candidate wall patches are then projected onto the xy-plane, and the representative lines obtained from mean-shift clustering are assembled into a 2D cell complex from which the final model is reconstructed. Regarding the quality of the reconstructed models, the error is under 1 mm for synthetic datasets and approximately 3 cm to 7 cm for real-world datasets. To deal with the challenge of occlusions caused by furniture, Previtali et al. [17] proposed an integrated approach of graph-cut and ray-tracing. The planar primitives are detected by a hybrid technique combining the RANSAC algorithm and connected component analysis. The floorplan is generated from the projected ceiling point cloud and decomposed into 2D cells, and a graph-cut algorithm solves the resulting labeling problem. Windows and doors are detected using a ray-tracing algorithm and differentiated by their position. The precision of the reconstructed models is about 3 cm to 4 cm. Shi et al. [21] proposed a framework for the automatic reconstruction of indoor building models from a backpack laser scanner (BLS), which requires only the point cloud as input. Based on a new hybrid segmentation approach and enriched wall-surface object detection, this approach succeeds in generating 3D semantic indoor models with doors and windows; the average error of the reconstructed model is 0.5 cm to 2.5 cm. However, curved wall structures may cause problems in wall extraction. Therefore, Yang et al. [9] focused on indoor reconstruction of multi-room environments with curved walls and proposed a novel straight-line and curved-line tracking method to detect the boundary lines of walls. The quality of the models is evaluated by the distance from the modeled wall corners to the corresponding corner points, and the average distance is about 5 cm. Nonetheless, specific structures created by decoration are still ignored in that study. In the approach developed by Xie et al. [22], several horizontal slices at different heights are used to detect the layout of the structure. The average fitting error of the reconstructed models is 3.35 cm on the real-world dataset and 0.407 cm to 3.21 cm on synthetic data with different levels of Gaussian noise. However, the same limitation in detecting curved wall structures still exists in this approach.
Plane-based approaches rely on the detection of the main planar primitives, and PCA (principal component analysis) and RANSAC (random sample consensus) are frequently used algorithms for detecting structural elements. Murali et al. [15] decomposed the modeling approach into three sub-tasks: plane detection, Manhattan-world fitting, and plane labeling. Plane detection is performed using RANSAC model fitting for 3D plane surfaces. The Manhattan-world assumption states that most man-made construction follows a Cartesian reference system, which means that the building structures can be substituted by planar surfaces parallel to one of the three principal planes of this reference coordinate system [9]. The generated models were evaluated by the absolute distances from the model to the ground truth, and the approach obtained a mean error of less than 10 cm on average. The indoor volume sweep reconstruction proposed by Budroni and Böhm [23] also assigns the point clouds to each plane surface of the structural element, and the normal direction of the plane is used to recognize each part. The obvious limitation of this approach is that the structures need to follow the Manhattan-world assumption. Furthermore, a clutter-free interior space is required to apply the method, which is almost impossible for an as-built building that has been in service for a long time. To overcome these drawbacks, Macher et al. [14] presented a semi-automatic, segmentation-based approach to extract point clouds of structural elements. The maximum likelihood estimation sample consensus (MLESAC) was implemented to segment point clouds into several planes. In the geometric quality assessment of the reconstruction, the precision of the reconstructed walls is 1 cm on average, and all mean deviations of the floor are under 2 cm. Sanchez and Zakhor [24] utilized PCA and classification strategies to divide the point cloud into ceiling points, wall points, floor points, and remaining points; RANSAC was then used to find the best-fitting planar primitive to represent each part of the structural element. A 3D plane-intersection reconstruction approach was proposed by Ochmann et al. [25]. Planes were acquired by RANSAC shape detection, and a mutual visibility-based clustering approach was conducted to remove outliers and segment the rooms. All clustered planes were intersected to generate the 3D cells and reconstruct the final model. Tran et al. [26] integrated the 3D-cell strategy with grammar rules to merge or split cells and reconstruct the topological relation of each cell. The input point cloud was clustered by normal direction, and multi-scale surface extraction was implemented to obtain the horizontal and vertical surfaces. After the arrangement of those surfaces is decomposed into 3D cells, an indoor shape grammar containing geometric transformation rules, semantic conversion rules, and topological relation rules determines the final 3D model. The quantitative evaluation of the reconstructed models shows that the approach obtained a median absolute distance under 0.5 cm on the synthetic dataset and around 2.5 cm on the real-world dataset. Nikoohemat et al. [10] proposed an approach that segments the point cloud using a planar surface growing algorithm and reconstructs volumetric walls by detecting the parallel surfaces of a wall; curved walls are decomposed into several smaller rectangles.
The correctness and completeness of the reconstructed models are 0.88 to 0.98 and 0.96 to 1.0, respectively. Evidently, plane-based approaches fail to detect and model detailed structures.
Although existing approaches succeed in reconstructing interior models, two drawbacks remain: (1) relying on the Manhattan-world assumption, or on planes and straight lines, is problematic when representing curved wall structures; (2) detailed structures are ignored in most proposed methods, as shown in Figure 1.
In contrast to existing approaches, structural elements detection and reconstruction (SEDR) automatically reconstructs the full structures of the ceiling, walls, and floor from point clouds without any prior knowledge such as the trajectory or position of the scanner, and the structure is not required to conform to the Manhattan-world assumption. Both detailed structures and curved walls can be represented in the reconstructed model. The method combines data-driven and model-driven algorithms to preserve complete structures. The capacity of our approach is validated on BLS (backpack laser scanner), HLS (handheld laser scanner), and synthetic datasets. Compared to existing approaches, the contributions of this paper can be summarized as follows:
  • A hybrid data-driven and model-driven approach for reconstructing indoor structural elements is presented. The proposed approach detects and models curved wall structures in the 3D domain.
  • A fusion of grid and slice strategy to detect detailed structures of the indoor scenario.
  • An eight-connected domain algorithm that keeps the main structures unaffected during outlier removal.
The remainder of this paper is organized as follows: the principles of SEDR and its implementation steps are presented in Section 2. Section 3 presents and discusses the experimental results of the proposed approach on different datasets. The main conclusions and future work are provided in Section 4.

2. Methodology

Existing algorithms cannot simultaneously handle curved wall structures and detailed structures. Therefore, we choose a data-driven strategy that continues to use the point cloud itself to represent those structures. After detecting all point clouds of the ceiling, floor, and walls, a model-driven refinement post-process is implemented to decrease the influence of point cloud quality on the final model. In this section, we briefly introduce the strategy and algorithms used in the approach in Section 2.1. The details of the pre-process, structural elements detection, and refinement and reconstruction are discussed in Section 2.2, Section 2.3, and Section 2.4.

2.1. Overview

SEDR is designed to process unstructured point clouds without any additional information. As shown in Figure 2, the approach starts with voxel-based down-sampling, which significantly improves the efficiency of the algorithm. RANSAC is applied for the detection of 3D planimetric surfaces, and the difference in height is used to distinguish the floor and ceiling surfaces. A grid-based outlier removal algorithm removes the outliers from the ceiling and floor. A grid-slices analysis strategy determines whether specific structures exist among the structural elements. In the detection of wall segments, as in line-based reconstruction approaches, the boundary points of the ceiling are traced using the normal direction angle. In contrast to existing approaches that fit straight or curved lines to those boundary points to represent the wall, the boundary points in our approach are used as control points to detect wall points within a predefined distance threshold. After detecting all points that represent the structural elements, a model-driven refinement is implemented. Finally, the reconstruction step converts the point clouds into a watertight indoor model. The code of SEDR was written in C++ in Microsoft Visual Studio Community 2015; the voxel-based down-sampling and RANSAC algorithms were implemented with the open-source Point Cloud Library (PCL) [28], version 1.8.0, under the BSD license.

2.2. Pre-Process

The only input of the approach is the point cloud dataset, and a 3D laser scanner is an accurate and reliable instrument for acquiring point clouds of indoor scenarios. On the one hand, a terrestrial laser scanner requires careful planning of the scanner locations during data collection to acquire a complete point cloud in indoor spaces that contain occlusions from walls or other clutter [1], and the point clouds scanned from several locations must be registered into the same coordinate system. On the other hand, a mobile laser scanner can quickly acquire the point cloud over large areas, but aligning the point clouds of each frame requires simultaneous localization and mapping (SLAM) [29]. However, point cloud data obtained from a 3D laser scanner usually contain redundancy. To improve the efficiency of the algorithm, a voxel-based down-sampling algorithm is implemented to reduce the number of points. The bounding box of the unstructured point cloud is divided into sub-voxels. When a sub-voxel contains more than one point, the algorithm calculates the arithmetic average of the coordinates of all points in this voxel, and those points are replaced by a new point with the calculated coordinates. The size of the voxel is determined by the point cloud resolution and the computational efficiency of the subsequent processing. In our experiments, a voxel size of 5 cm is used.
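This down-sampling step can be sketched with PCL's VoxelGrid filter, which replaces each occupied voxel with the centroid of its points. This is an illustrative sketch, not the authors' exact code; the input and output file names are placeholders, and the 5 cm leaf size is the value reported in Table 1.

```cpp
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/filters/voxel_grid.h>

int main()
{
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::PointCloud<pcl::PointXYZ>::Ptr filtered(new pcl::PointCloud<pcl::PointXYZ>);

    // "indoor_scan.pcd" is a placeholder name for the raw laser scan.
    pcl::io::loadPCDFile("indoor_scan.pcd", *cloud);

    // Voxel-based down-sampling: each occupied voxel is replaced by the
    // centroid of the points it contains (cf. the description above).
    pcl::VoxelGrid<pcl::PointXYZ> vg;
    vg.setInputCloud(cloud);
    vg.setLeafSize(0.05f, 0.05f, 0.05f);   // 5 cm voxel size from Table 1
    vg.filter(*filtered);

    pcl::io::savePCDFile("indoor_scan_down.pcd", *filtered);
    return 0;
}
```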

2.3. Structural Elements Detection

The structural elements are the main description of the interior layout. Compared to existing approaches that represent the structural elements by planes or lines, our proposed approach detects the point clouds of all the structural elements. Details of the proposed algorithm will be discussed in the following subsections.

2.3.1. Ceiling and Floor Detection

The horizontal planar surface is the most common shape of the floor and ceiling, so RANSAC is applied to detect 3D planimetric surfaces. The tolerance is an important parameter of the RANSAC algorithm and is normally determined by the accuracy of the lidar scanner; the tolerance in our approach is selected to cover the scanner accuracy. Non-horizontal surfaces are excluded, and aligning the scene with the gravitational direction helps to differentiate between floor and ceiling based on their reference level. In Figure 3b, the ceiling part points acquired from the RANSAC algorithm contain some outliers (red points) that do not belong to the ceiling but lie in the same plane detected by RANSAC. Those outliers in the red rectangles arise when the scanning system records objects beyond an open door or window. The open-source software CloudCompare [12] is used to visualize the point cloud. A simple and effective grid-based outlier removal algorithm is implemented. The algorithm starts with a regular gridding step: all ceiling points detected by RANSAC are assigned to a 2D grid based on their X and Y coordinates, and the size of the grid is determined by the resolution of the point cloud dataset; in our experiments, we use the same value as the voxel size in the down-sampling process. If the number of points in a grid cell is greater than zero, the cell value is set to 1. After checking all cells, a binary grid map is generated and passed to the filter, which is implemented with the eight-connected domain algorithm. As shown in Figure 3b, a seed cell with value 1 (at the center of the red arrows) is randomly selected to start the connection in eight directions, and each detected cell is assigned to one region. This process is repeated until all cells have been checked, after which the binary grid map becomes the grid region map in Figure 3c. Based on the difference in cell count between outlier regions and ceiling regions, an adaptive threshold is calculated from the maximum difference of the sorted cell counts of the regions. Regions with a cell count larger than the threshold are ceiling regions, and the ceiling segment is extracted from the ceiling part points falling in those cells; the points in cells of outlier regions are removed, as shown in Figure 3d. Meanwhile, the layout of the ceiling structures is well preserved, and an initial room segmentation is also accomplished in this step.
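The grid-based outlier removal can be illustrated with a small C++ sketch (C++ being the implementation language of SEDR): eight-connected regions of the binary grid map are labeled with a flood fill, and the adaptive threshold is taken at the largest gap in the sorted region sizes. The row-major occupancy layout and the function names are our own assumptions for illustration.

```cpp
#include <algorithm>
#include <cstddef>
#include <queue>
#include <vector>

// Label eight-connected regions in a row-major binary occupancy grid
// (1 = occupied).  Returns one vector of cell indices per region.
std::vector<std::vector<int>> labelRegions(const std::vector<int>& grid,
                                           int rows, int cols)
{
    std::vector<std::vector<int>> regions;
    std::vector<bool> visited(grid.size(), false);
    const int dr[8] = {-1, -1, -1,  0, 0, 1, 1, 1};
    const int dc[8] = {-1,  0,  1, -1, 1, -1, 0, 1};

    for (int start = 0; start < rows * cols; ++start) {
        if (grid[start] == 0 || visited[start]) continue;
        std::vector<int> region;
        std::queue<int> q;
        q.push(start);
        visited[start] = true;
        while (!q.empty()) {                        // flood fill in 8 directions
            const int idx = q.front(); q.pop();
            region.push_back(idx);
            const int r = idx / cols, c = idx % cols;
            for (int k = 0; k < 8; ++k) {
                const int nr = r + dr[k], nc = c + dc[k];
                if (nr < 0 || nr >= rows || nc < 0 || nc >= cols) continue;
                const int nidx = nr * cols + nc;
                if (grid[nidx] == 1 && !visited[nidx]) {
                    visited[nidx] = true;
                    q.push(nidx);
                }
            }
        }
        regions.push_back(region);
    }
    return regions;
}

// Adaptive threshold: sort the region sizes and cut at the largest gap, so
// that large ceiling regions are kept and small outlier regions are discarded.
std::size_t adaptiveThreshold(std::vector<std::size_t> sizes)
{
    std::sort(sizes.begin(), sizes.end());
    std::size_t cut = sizes.front();
    std::size_t bestGap = 0;
    for (std::size_t i = 1; i < sizes.size(); ++i) {
        const std::size_t gap = sizes[i] - sizes[i - 1];
        if (gap >= bestGap) { bestGap = gap; cut = sizes[i]; }
    }
    return cut;   // regions with at least this many cells are kept as ceiling
}
```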
In modern architecture, decorated building structures are very common, so those specific structures should not be ignored. Inspired by Xie et al. [22], we choose several slices at different heights to obtain the real room layout. According to Yang et al. [9] and Previtali et al. [17], the ceiling is generally less influenced by clutter and occlusions because of its location, and there is a clean band of wall between the ceiling and the top of the doors in which the room layout is well preserved. However, the offset space proposed in those studies is not correct for rooms with specific structures. Thus, we introduce our grid-slices analysis approach. If specific structures exist, the ceiling part of the point cloud cannot describe the complete room layout, and our approach relies on the difference between the ceiling part and the wall-slice part of the point cloud. The heights of the slices are chosen by a modified bisection method. As shown by the blue dotted lines in Figure 4a, slices at one half and one quarter of the room height below the ceiling are checked first; in consideration of the furniture in rooms, slices below half the room height from the ceiling are ignored. The points of each slice are assigned to the grid cells by their coordinates, and the binary grid map is updated with the newly added slice cells. If the slice cells (orange in Figure 4b) extend beyond the coverage of the ceiling cells (blue in Figure 4b) and the number of zero-value regions increases, the zero-value cells (white in Figure 4b) belonging to such a region are possibly located in a structural detail area. A threshold on the minimum number of cells in a zero-value region is used to remove misclassified initial specific-structure cells; in our experiments, a region of 20 cells is the minimum region that can be classified as a specific structure. If structural details cannot be detected using the 1/4- and 1/2-height slices (blue dotted lines in Figure 4a), additional slices at 1/8 and 3/8 of the room height are used (green dotted lines in Figure 4a). The algorithm stops after the third trial, and structural details are added if they exist.
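A simplified reading of the grid-slices test is sketched below: slice heights are tried in the modified-bisection order, and grid cells occupied by a slice but not covered by the ceiling grid are collected as candidate structural-detail cells. The function names, the flat grid layout, and the reduction of the region test to a simple cell count are assumptions made for illustration; the full pipeline groups the cells with the eight-connected labeling described above.

```cpp
#include <cstddef>
#include <vector>

// Slice heights below the ceiling, as fractions of the room height, in the
// order tested by the modified bisection (slices below half the room height
// are skipped because of furniture).
const double kSliceFractions[4] = {0.25, 0.5, 0.125, 0.375};

// Simplified test for one slice: cells occupied by slice points but not
// covered by the ceiling grid are candidate structural-detail cells.  They
// are accepted only if the candidate region holds at least minGrids cells
// (20 in our experiments).
std::vector<int> detailCandidates(const std::vector<int>& ceilingGrid,
                                  const std::vector<int>& sliceGrid,
                                  std::size_t minGrids)
{
    std::vector<int> candidates;
    for (std::size_t i = 0; i < ceilingGrid.size(); ++i)
        if (sliceGrid[i] == 1 && ceilingGrid[i] == 0)
            candidates.push_back(static_cast<int>(i));
    if (candidates.size() < minGrids)
        candidates.clear();   // too small: treat as misclassification
    return candidates;
}
```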
The presence of various furniture and objects in the indoor scenario is the main cause of the incompleteness of the floor part of the point cloud. After detecting the ceiling and the specific structures, those structures describe the scope of the room, so a projection recovery method can be applied to the missing floor points. First, all floor part points are assigned to the same 2D grid for comparison. The differences between the room-scope cells and the floor cells occur in the cells with missing floor points and in the outlier cells of the floor part. As shown in Figure 5, for the cells that do not contain any floor points, the corresponding ceiling points are projected to the average floor height to recover that part of the floor, and all points in outlier cells are removed.
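The projection recovery of the floor can be sketched as follows, assuming the missing floor cells have already been identified by comparing the room-scope grid with the floor grid; the grid hash, the function signature, and the cell size are illustrative assumptions consistent with Table 1.

```cpp
#include <cmath>
#include <cstdint>
#include <unordered_set>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Hash a 2D grid cell from a point's X/Y coordinates (room-scale grids only).
inline long long cellKey(float x, float y, double cellSize)
{
    const long long gx = static_cast<long long>(std::floor(x / cellSize));
    const long long gy = static_cast<long long>(std::floor(y / cellSize));
    return gx * 1000003LL + gy;
}

// Recover missing floor cells by projecting the ceiling points that fall in
// those cells down to the average floor height.
pcl::PointCloud<pcl::PointXYZ>::Ptr recoverFloor(
    const pcl::PointCloud<pcl::PointXYZ>& ceiling,
    const std::unordered_set<long long>& missingFloorCells,
    double cellSize, float floorHeight)
{
    pcl::PointCloud<pcl::PointXYZ>::Ptr recovered(new pcl::PointCloud<pcl::PointXYZ>);
    for (const auto& p : ceiling.points)
        if (missingFloorCells.count(cellKey(p.x, p.y, cellSize)))
            recovered->points.push_back(pcl::PointXYZ(p.x, p.y, floorHeight));
    recovered->width  = static_cast<uint32_t>(recovered->points.size());
    recovered->height = 1;
    return recovered;
}
```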

2.3.2. Wall Detection

Unlike other algorithms that use lines or planes to represent the walls, our approach extracts all the wall points to preserve all wall structures. Points related to the wall segment are detected from the boundary points of the ceiling part after outlier removal, and the difference in normal direction is used to detect those boundary points. The main idea is that the angle difference between the normal directions of neighboring points is larger at the intersection of the ceiling and wall. Based on this criterion, the boundary points are detected and selected as the seed points of a growing algorithm. The algorithm starts with a seed point and its neighborhood points within a certain distance: the 2D distance between the seed point and each neighborhood point is calculated from the X and Y coordinates, and if the distance is smaller than a threshold estimated from the dataset quality, the neighborhood point is classified as a wall point. As shown in Figure 6, for datasets that contain specific structures, the boundary points of the floor and of the slice height are used to split the process into a part below the slice height and a part above it, and the specific structures are also added to the wall segment.
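Table 1 lists an angle threshold and a neighborhood size for boundary estimation, which is consistent with PCL's BoundaryEstimation feature; the sketch below uses that class as a plausible stand-in for the normal-direction boundary detection described above. Whether this is exactly the estimator used by the authors is an assumption, and the subsequent 2D-distance growing step is omitted.

```cpp
#include <cstdint>
#include <pcl/point_types.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/boundary.h>
#include <pcl/search/kdtree.h>

// Boundary points of the ceiling segment, using the angle threshold (60 deg)
// and neighborhood size (200) listed in Table 1 for the BLS/HLS datasets.
pcl::PointCloud<pcl::PointXYZ>::Ptr ceilingBoundary(
    const pcl::PointCloud<pcl::PointXYZ>::Ptr& ceiling)
{
    pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(
        new pcl::search::KdTree<pcl::PointXYZ>);

    // Estimate point normals first.
    pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
    pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
    ne.setInputCloud(ceiling);
    ne.setSearchMethod(tree);
    ne.setKSearch(200);
    ne.compute(*normals);

    // Boundary estimation with the parameters from Table 1.
    pcl::PointCloud<pcl::Boundary> flags;
    pcl::BoundaryEstimation<pcl::PointXYZ, pcl::Normal, pcl::Boundary> be;
    be.setInputCloud(ceiling);
    be.setInputNormals(normals);
    be.setSearchMethod(tree);
    be.setKSearch(200);
    be.setAngleThreshold(1.0472f);          // 60 degrees in radians
    be.compute(flags);

    pcl::PointCloud<pcl::PointXYZ>::Ptr boundary(new pcl::PointCloud<pcl::PointXYZ>);
    for (std::size_t i = 0; i < flags.points.size(); ++i)
        if (flags.points[i].boundary_point)
            boundary->points.push_back(ceiling->points[i]);
    boundary->width  = static_cast<uint32_t>(boundary->points.size());
    boundary->height = 1;
    return boundary;
}
```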

2.4. Refinement and Reconstruction

Our approach preserves all the details in the structural elements' point clouds. To bring the structural details into the model, a data-driven reconstruction approach is applied instead of line-based or plane-based reconstruction. Data-driven reconstruction approaches are widely used in outdoor scenarios and, as the name implies, are driven by the point cloud data. Because point cloud accuracy varies, model-driven approaches have an advantage in terms of the quality and visual appearance of the model, so we apply an integrated model-driven refinement to the initial structural-element points. The main idea is to adjust part of the points according to the corresponding structural feature while keeping the remaining points unchanged. The ceiling and floor points are optimized by moving them to the average-height plane; because a larger tolerance is used when detecting the ceiling and floor, a RANSAC step with a smaller tolerance is applied to the ceiling and floor point clouds respectively to obtain a more precise height before refinement. For wall refinement, the 2D lines or circles detected from the boundary points of the ceiling segment by a 2D RANSAC algorithm compose the floorplan of the wall structures. As shown in Figure 7a, if the dataset contains specific structures, the boundary points of both the ceiling and floor segments are projected onto the 2D plane to detect the lines. The X and Y coordinates of the wall points are adjusted onto the line or circle while keeping the Z coordinate unchanged. For the missing data in the wall points caused by open doors, windows, and furniture occlusions, a grid-analysis repair along the Z-axis is implemented to fill the openings. The wall segment and all structural elements after refinement are shown in Figure 7b,c.
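The straight-wall case of this refinement can be sketched as a RANSAC line fit on the projected boundary points followed by an orthogonal projection of the wall points' X/Y coordinates onto the fitted line. The sketch assumes the boundary points have already been flattened to z = 0 and that a line is found; the circle case and the Z-axis hole filling are omitted, and the 0.05 m tolerance is the "tolerance of boundary" value from Table 1.

```cpp
#include <cmath>
#include <pcl/point_types.h>
#include <pcl/segmentation/sac_segmentation.h>
#include <pcl/ModelCoefficients.h>
#include <pcl/PointIndices.h>

// Fit a line to the projected boundary points with RANSAC, then snap the
// X/Y coordinates of the wall points onto that line while keeping Z.
void refineWall(const pcl::PointCloud<pcl::PointXYZ>::Ptr& boundary2d,
                pcl::PointCloud<pcl::PointXYZ>& wall)
{
    pcl::ModelCoefficients coeff;   // values 0..2: point on line, 3..5: direction
    pcl::PointIndices inliers;
    pcl::SACSegmentation<pcl::PointXYZ> seg;
    seg.setModelType(pcl::SACMODEL_LINE);
    seg.setMethodType(pcl::SAC_RANSAC);
    seg.setDistanceThreshold(0.05);         // boundary tolerance from Table 1
    seg.setInputCloud(boundary2d);
    seg.segment(inliers, coeff);

    const float px = coeff.values[0], py = coeff.values[1];
    float dx = coeff.values[3], dy = coeff.values[4];
    const float len = std::sqrt(dx * dx + dy * dy);
    dx /= len; dy /= len;                   // unit direction in the XY plane

    for (auto& p : wall.points) {
        // Orthogonal projection of (x, y) onto the fitted line; Z is unchanged.
        const float t = (p.x - px) * dx + (p.y - py) * dy;
        p.x = px + t * dx;
        p.y = py + t * dy;
    }
}
```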
After refinement of the structural element points, we choose the Screened Poisson algorithm [30] to generate the final indoor 3D model. This implicit-function-based approach uses an indicator function to separate the inside and outside space of the model and reconstructs the model's surface. As shown in Equation (1), the gradient of the indicator function equals the inward surface normal field at points near the surface, where χ is the indicator function and V is the vector field defined by the oriented input points. Therefore, for the vector field V: ℝ³ → ℝ³, the energy function in Equation (2) is minimized by solving for the scalar function χ: ℝ³ → ℝ. To find the indicator function whose gradient best approximates the inward surface normal field, the divergence operator is applied to both sides of Equation (1). In Equation (3), ∇χ becomes Δχ, the Laplacian of the indicator function, which equals the divergence of the vector field V. The problem thus becomes a Poisson equation, and the best-fitting indicator function is obtained by solving it. Moreover, to overcome the drift of the indicator function caused by errors in the point cloud, the Screened Poisson surface reconstruction explicitly incorporates the points as interpolation constraints; the resulting energy function is shown in Equation (4), where p ranges over the set of input points P with weights w(p), α is a weight that trades off the two terms, and Area(P) is the area of the reconstructed surface. To estimate the indicator function accurately near the reconstructed surface, the problem is discretized: an octree built over the point set P is used to express the indicator function as a sum of functions attached to the octree nodes. In our experiments, the maximum tree depth is set to 9. After solving for the indicator function, the Marching Cubes algorithm extracts the isosurface to reconstruct the model.
$$\nabla \chi = \vec{V} \quad (1)$$
$$E(\chi) = \int \left\lVert \nabla \chi(p) - \vec{V}(p) \right\rVert^{2} \, dp \quad (2)$$
$$\Delta \chi \equiv \nabla \cdot \nabla \chi = \nabla \cdot \vec{V} \quad (3)$$
$$E(\chi) = \int \left\lVert \nabla \chi(p) - \vec{V}(p) \right\rVert^{2} \, dp \;+\; \alpha \, \frac{\mathrm{Area}(P)}{\sum_{p \in P} w(p)} \sum_{p \in P} w(p)\, \chi^{2}(p) \quad (4)$$
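For orientation, a minimal reconstruction call is sketched below using pcl::Poisson, which is based on Kazhdan's Poisson code and serves here only as an approximate stand-in for the Screened Poisson implementation [30] used in the paper; the refined structural-element points are assumed to carry consistently oriented normals (pcl::PointNormal).

```cpp
#include <pcl/point_types.h>
#include <pcl/surface/poisson.h>
#include <pcl/PolygonMesh.h>

// Generate the final watertight mesh from the refined structural-element
// points (with normals), using the maximum octree depth from Table 1.
pcl::PolygonMesh reconstructModel(
    const pcl::PointCloud<pcl::PointNormal>::Ptr& structurePoints)
{
    pcl::Poisson<pcl::PointNormal> poisson;
    poisson.setInputCloud(structurePoints);
    poisson.setDepth(9);                    // maximum tree depth (Table 1)
    pcl::PolygonMesh mesh;
    poisson.reconstruct(mesh);              // isosurface extracted internally
    return mesh;
}
```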

3. Experiments and Discussion

3.1. Datasets Description and Parameters Settings

We use three kinds of datasets to validate the capability of our approach; these datasets have different layouts and were acquired in rooms full of furniture. Table 1 shows the main parameters and the values used in the experiments. The BLS dataset in Figure 8a was collected in a study room by the backpack mapping system presented in Fan et al. [6], and the area in the red circle is the same place as the specific structures shown in Figure 1. The HLS dataset is used to highlight the issue of curved wall structures, shown in the red circle of Figure 9a. The synthetic dataset "synth1" in Figure 10a was presented in Mura et al. [8] and is used to evaluate the accuracy of the reconstruction approach.

3.2. Reconstruction Quality

The point cloud and the reconstructed model are visualized with CloudCompare [12] and MeshLab [27], respectively. The BLS dataset contains outliers captured through the open door and noise from glass reflections at the windows, and our approach shows robustness to these outliers and noise. The comparison with the approach proposed by Shi et al. [21] is shown in Figure 8c,d. Detailed structures are well preserved in the model, and the other structural elements also have good quality thanks to the refinement of the structural-element point clouds; the integrated modeling strategy is shown to balance model completeness and quality. As shown in Figure 9a,b, due to the high level of clutter, the occlusions of the wall and floor are severe in the HLS dataset. The curved wall structure (red circle) is decomposed into several planar primitives in the model generated by the approach of Shi et al. [21] in Figure 9c. In our model in Figure 9d, the curved wall structure (red circle) is well represented and smooth. Moreover, the pillar (blue circle) is also recovered by our ceiling boundary point detection, which shows that our algorithm has the capacity to process non-planar structures. However, as shown in the orange circle in Figure 9a,d, a special object was misclassified as a pillar. The synthetic dataset shown in Figure 10a,b contains three rooms and a corridor; the models in Figure 10c,d show that our approach also handles multi-room datasets. The outliers are removed automatically, and the dataset is segmented into four parts that are processed separately. The occlusions caused by furniture and the openings of windows and doors are well repaired.
In addition to the visual validation, we also applied quantitative analysis to the synthetic data to evaluate the accuracy of the reconstructed model. The synthetic model was generated manually in 3D modeling software, and the point cloud dataset simulates a virtual TLS scan from several positions. Gaussian noise with σ = 0.1 cm is added to the simulated point cloud to make it more realistic. The projected perpendicular distance from the point cloud to the reconstructed model is calculated and shown in Figure 11, which was also generated with CloudCompare [12]. The average distance is approximately 0.06 cm for the synthetic dataset. Due to a limitation of the Screened Poisson reconstruction algorithm, the wall intersection parts in the red circle of Figure 11a cannot achieve the same high accuracy as the main part of the wall, but the error there is still acceptable at around 2 to 3 cm. Compared with the evaluation of Shi et al. [21] in Figure 11b, the accuracy is significantly improved: the maximum error there is 12.5 to 15 cm, and the error of the wall structure shown in the blue circle of Figure 11b is around 5 to 7.5 cm.

4. Conclusions

Structural elements detection and reconstruction (SEDR) is a hybrid approach for modeling complex interior structures. The proposed approach combines an integrated modeling strategy, an eight-connected domain outlier removal algorithm, and a grid-slices analysis approach to overcome the problems of detecting detailed structures and curved wall structures in indoor scenarios. In the experiments, the outliers were detected and removed by the eight-connected domain outlier removal algorithm, the detailed structure around the ceiling was detected by the grid-slices analysis approach, and curved wall structures were well preserved in the models. The visual validation shows that the SEDR approach detects all structural elements and balances model completeness and quality. In terms of accuracy, the average distance from the point cloud to the reconstructed model is 0.06 cm, and the maximum distance is approximately 3 cm on the SYN dataset. The indoor 3D models generated by SEDR have many further uses in practice. The detailed structures that the approach can reconstruct could be used in change detection of indoor structures for facility management and structural health monitoring, implemented by comparing two models generated from point clouds acquired at different times. The 3D models also support redesign by directly editing the triangle mesh. In the future, we intend to enrich our approach with the modeling of non-vertical wall segments and arbitrary ceiling shapes, to process multi-floor datasets automatically, and to enrich the reconstructed models with openings such as doors and windows.

Author Contributions

Conceptualization, Ke Wu; methodology, Ke Wu; validation, Ke Wu; resources, Wael Ahmed, Wenzhong Shi; writing—original draft preparation, Ke Wu; writing—review and editing, Wael Ahmed, Wenzhong Shi; Supervision, Wenzhong Shi; project administration, Wenzhong Shi; funding acquisition, Wenzhong Shi. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by The Hong Kong Polytechnic University, grant number (1-ZVN6, 4-BCF7) and The State Bureau of Surveying and Mapping, P.R. China (1-ZVE8).

Acknowledgments

The authors would like to thank Visualization and MultiMedia Lab at University of Zurich (UZH) and Claudio Mura for the synthetic dataset.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lehtola, V.; Kaartinen, H.; Nüchter, A.; Kaijaluoto, R.; Kukko, A.; Litkey, P.; Honkavaara, E.; Rosnell, T.; Vaaja, M.; Virtanen, J.-P.; et al. Comparison of the Selected State-Of-The-Art 3D Indoor Scanning and Point Cloud Generation Methods. Remote Sens. 2017, 9, 796. [Google Scholar] [CrossRef] [Green Version]
  2. Fellner, I.; Huang, H.; Gartner, G. “Turn Left after the WC, and Use the Lift to Go to the 2nd Floor”—Generation of Landmark-Based Route Instructions for Indoor Navigation. ISPRS Int. J. Geo Inf. 2017, 6, 183. [Google Scholar] [CrossRef] [Green Version]
  3. Natephra, W.; Motamedi, A.; Fukuda, T.; Yabuki, N. Integrating building information modeling and virtual reality development engines for building indoor lighting design. Vis. Eng. 2017, 5, 1–21. [Google Scholar] [CrossRef] [Green Version]
  4. Jung, J.; Stachniss, C.; Ju, S.; Heo, J. Automated 3D volumetric reconstruction of multiple-room building interiors for as-built BIM. Adv. Eng. Inform. 2018, 38, 811–825. [Google Scholar] [CrossRef]
  5. Volk, R.; Stengel, J.; Schultmann, F. Building Information Modeling (BIM) for existing buildings—Literature review and future needs. Autom. Constr. 2014, 38, 109–127. [Google Scholar] [CrossRef] [Green Version]
  6. Fan, W.; Shi, W.; Xiang, H.; Ding, K. A Novel Method for Plane Extraction from Low-Resolution Inhomogeneous Point Clouds and its Application to a Customized Low-Cost Mobile Mapping System. Remote Sens. 2019, 11, 2789. [Google Scholar] [CrossRef] [Green Version]
  7. Tang, S.; Zhu, Q.; Chen, W.; Darwish, W.; Wu, B.; Hu, H.; Chen, M. Enhanced RGB-D Mapping Method for Detailed 3D Indoor and Outdoor Modeling. Sensors 2016, 16, 1589. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  8. Mura, C.; Mattausch, O.; Villanueva, A.J.; Gobbetti, E.; Pajarola, R. Automatic room detection and reconstruction in cluttered indoor environments with complex room layouts. Comput. Graph. 2014, 44, 20–32. [Google Scholar] [CrossRef] [Green Version]
  9. Yang, F.; Zhou, G.; Su, F.; Zuo, X.; Tang, L.; Liang, Y.; Zhu, H.; Li, L. Automatic Indoor Reconstruction from Point Clouds in Multi-room Environments with Curved Walls. Sensors 2019, 19, 3798. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  10. Nikoohemat, S.; Diakité, A.; Zlatanova, S.; Vosselman, G. Indoor 3D reconstruction from point clouds for optimal routing in complex buildings to support disaster management. Autom. Constr. 2020, 113, 103109. [Google Scholar] [CrossRef]
  11. Trimble RealWorks. Available online: www.meteo.ru (accessed on 2 September 2020).
  12. CloudCompare. Available online: http://www.cloudcompare.org/ (accessed on 15 September 2020).
  13. 3DReshaper. Available online: http://www.3dreshaper.com/en/ (accessed on 15 September 2020).
  14. Macher, H.; Landes, T.; Grussenmeyer, P. From Point Clouds to Building Information Models: 3D Semi-Automatic Reconstruction of Indoors of Existing Buildings. Appl. Sci. 2017, 7, 1030. [Google Scholar] [CrossRef] [Green Version]
  15. Murali, S.; Speciale, P.; Oswald, M.R.; Pollefeys, M. Indoor Scan2BIM: Building information models of house interiors. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 6126–6133. [Google Scholar]
  16. Planner 5D. Available online: http://www.planner5d.com/ (accessed on 15 September 2020).
  17. Previtali, M.; Díaz-Vilariño, L.; Scaioni, M. Indoor Building Reconstruction from Occluded Point Clouds Using Graph-Cut and Ray-Tracing. Appl. Sci. 2018, 8, 1529. [Google Scholar] [CrossRef] [Green Version]
  18. Rhinoceros. Available online: https://www.rhino3d.com/ (accessed on 24 January 2020).
  19. FreeCAD. Available online: https://www.freecadweb.org/ (accessed on 15 September 2020).
  20. Wang, R.; Xie, L.; Chen, D. Modeling Indoor Spaces Using Decomposition and Reconstruction of Structural Elements. Photogramm. Eng. Remote Sens. 2017, 83, 827–841. [Google Scholar] [CrossRef]
  21. Shi, W.; Ahmed, W.; Li, N.; Fan, W.; Xiang, H.; Wang, M. Semantic Geometric Modelling of Unstructured Indoor Point Cloud. ISPRS Int. J. Geo Inf. 2018, 8, 9. [Google Scholar] [CrossRef] [Green Version]
  22. Xie, L.; Wang, R.; Ming, Z.; Chen, D. A Layer-Wise Strategy for Indoor As-Built Modeling Using Point Clouds. Appl. Sci. 2019, 9, 2904. [Google Scholar] [CrossRef] [Green Version]
  23. Budroni, A.; Böhm, J. Automatic 3D Modelling of Indoor Manhattan-World Scenes from Laser Data. In Proceedings of the International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XXXVIII, Part 5 Commission V Symposium, Newcastle Upon Tyne, UK, 1–5 October 2010; pp. 115–120. [Google Scholar]
  24. Sánchez, V.; Zakhor, A. Planar 3D modeling of building interiors from point cloud data. In Proceedings of the 2012 19th IEEE International Conference on Image Processing, Orlando, FL, USA, 30 September–3 October 2012; pp. 1777–1780. [Google Scholar]
  25. Ochmann, S.; Vock, R.; Klein, R. Automatic reconstruction of fully volumetric 3D building models from oriented point clouds. ISPRS J. Photogramm. Remote Sens. 2019, 151, 251–262. [Google Scholar] [CrossRef] [Green Version]
  26. Tran, H.; Khoshelham, K. Procedural Reconstruction of 3D Indoor Models from Lidar Data Using Reversible Jump Markov Chain Monte Carlo. Remote Sens. 2020, 12, 838. [Google Scholar] [CrossRef] [Green Version]
  27. MeshLab. Available online: https://www.meshlab.net/ (accessed on 15 September 2020).
  28. Point Cloud Library. Available online: https://pointclouds.org/ (accessed on 15 September 2020).
  29. Lauterbach, H.A.; Borrmann, D.; Heß, R.; Eck, D.; Schilling, K.; Nüchter, A. Evaluation of a Backpack-Mounted 3D Mobile Scanning System. Remote Sens. 2015, 7, 13753–13781. [Google Scholar] [CrossRef] [Green Version]
  30. Kazhdan, M.; Hoppe, H. Screened poisson surface reconstruction. ACM Trans. Graph. 2013, 32, 1–13. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Example of the approach that ignores detailed structures: (a) photo of real indoor scenario, (b) the reconstructed model based on the approach of Shi et al. [21], and visualization by the opensource software MeshLab [27].
Figure 2. Flowchart of the proposed structural elements detection and reconstruction (SEDR) approach.
Figure 3. Analysis of detected ceiling surface: (a) raw point cloud dataset for visualization of the outlier, (b) binary map of ceiling part points (“1” for the grid’s value, the red arrow for the process of eight-connected domain algorithm), (c) region map of ceiling part points (“C” for the ceiling grid, “O” for the grid of outliers), (d) detected ceiling segment after outlier removal (red points are the outliers).
Figure 4. Grid-slices analysis approach: (a) several slices of different distances to the ceiling, (b) region detection in slices (“C” for ceiling grid, “S” for slice grid, “0” for the empty grid).
Figure 5. Analysis of the detected floor surface: (a) initial detected floor segment (the points in red rectangle are outliers; the holes in blue ellipses are the occlusions of floor), (b) final floor segment.
Figure 6. Analysis of detected wall segment: (a) wall segment above the slice height, (b) wall segment below the slice height, (c) final wall segment.
Figure 7. Model-driven refinement of structural elements: (a) the boundary point of ceiling segment (points in black color) and floor segment (points in red color) after projection, (b) top view of wall segment after refinement, (c) all points of structural elements after refinement.
Figure 8. Comparison of the models reconstructed from the backpack laser scanner (BLS) dataset: (a) raw point cloud dataset (the detailed structure points are in the red circle), (b) raw point cloud dataset without ceiling, (c) model generated from Shi’s approach (the detailed structure is ignored in the red circle), (d) model generated from our approach (the detailed structure is reconstructed in the red circle).
Figure 9. Comparison of the model reconstructed from the handheld laser scanner (HLS) dataset with curved wall structure: (a) raw point cloud dataset (the curved wall structure points are in the red circle, and the pillar points and the specific object points are in the blue circle and orange circle respectively), (b) raw point cloud dataset without ceiling, (c) model generated from Shi’s approach (the curved wall structure is in the red circle), (d) model generated from our approach (the curved wall structure is in the red circle, and the pillar and the specific object are in the blue circle and orange circle respectively).
Figure 10. Models reconstructed from the synthetic dataset: (a) raw point cloud dataset, (b) raw point cloud dataset without ceiling, (c) our model, (d) reference model.
Figure 11. Comparison of the accuracy of the model generated from the synthetic dataset (a histogram of errors is shown at the right side): (a) accuracy of our model (the accuracy of the wall intersection part in the red circle is around 2 to 3 cm), (b) accuracy of Shi’s model (the accuracy of the wall part in the blue circle is around 5 to 7.5 cm).
Table 1. List of main parameters involved in the proposed approach and values in experiments.

| Parameter | Description | BLS | HLS | SYN |
| --- | --- | --- | --- | --- |
| Voxel size | The size of a voxel in down-sampling | 0.05 m | 0.05 m | 0.05 m |
| Tolerance of plane | The distance tolerance of RANSAC in detecting planes | 0.07 m | 0.1 m | 0.1 m |
| Grid size | The size of the grid in outlier removal and grid-slices | 0.05 m | 0.05 m | 0.05 m |
| Angle and neighbors | The angle and number of neighbor points in boundary estimation | 60°, 200 | 60°, 200 | 90°, 100 |
| Minimum of grids | The minimum number of grids in the structural detail region | 20 | 20 | 20 |
| Tolerance of boundary | The tolerance of RANSAC in wall refinement | 0.05 m | 0.05 m | 0.01 m |
| Tree depth | The maximum tree depth in Screened Poisson reconstruction | 9 | 9 | 9 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
