Pointwise classification of mobile laser scanning point clouds of urban scenes using raw data

Qiujie Li, Pengcheng Yuan, Yusen Lin, Yuekai Tong, Xu Liu
Abstract

Mobile laser scanning (MLS), which can quickly collect a high-resolution and high-precision point cloud of the surroundings of a vehicle, is an appealing technology for three-dimensional (3D) urban scene analysis. In this regard, the classification of MLS point clouds is a common and core task. We focus on pointwise classification, in which each individual point is categorized into a specific class by applying a binary classifier to a set of local features derived from the neighborhoods of the point. To speed up the neighbor search and enhance feature distinctiveness for pointwise classification, we exploit the topological and semantic information in the raw data acquired by light detection and ranging (LiDAR) and recorded in scan order. First, a two-dimensional (2D) scan grid for data indexing is recovered, and the relative 3D coordinates with respect to the LiDAR position are calculated. Subsequently, a set of local features is extracted using an efficient neighbor search method with a low computational complexity independent of the number of points in a point cloud. These features are further merged to produce a variety of binary classifiers for specific classes via a GentleBoost supervised learning algorithm combining decision trees. The experimental results on the Paris-rue-Cassette database demonstrate that the proposed approach outperforms the state-of-the-art methods with a 10% improvement in the F1 score, while using simpler geometric features derived from a spherical neighborhood with a radius of 0.5 m.

1.

Introduction

With the development of light detection and ranging (LiDAR) technology, mobile laser scanning (MLS) systems, which deploy one or multiple LiDARs on a ground-based vehicle,1 can quickly collect a high-resolution and high-precision point cloud of the surroundings of the vehicle and have gained increasing attention in three-dimensional (3D) urban scene analysis,2 including urban 3D modeling3 and automated urban driving.4

Classification of MLS point clouds, in which each point in an MLS point cloud is determined to belong to a specific class, e.g., ground,5 road,6 road markings,7 vehicles,8 power lines,9 and street trees,10,11 is a common and core task for various applications of 3D urban scene analysis.12 Weinmann et al.13 proposed a pointwise classification framework, whereby each individual point is classified by a binary classifier involving a set of local geometric features derived from the neighborhoods of the point. The framework does not require expert knowledge in the specific domain12 and thus can be applied to label a variety of urban object classes. However, the main challenges of pointwise classification include a low distinctiveness of local geometric features and a high computational complexity of the neighbor search. The commonly used neighbor search approaches for MLS point clouds are based on the k-D tree algorithm, in which a k-dimensional index tree is constructed and the average complexity for the nearest-neighbor search is O(log N) for a point cloud with N points. To enhance the discrimination of local low-level geometric features, multiple neighborhood scales14–16 or a selected optimal neighborhood scale13,17,18 are recovered via the k-D tree, resulting in a higher computational cost. To address the above issues, Hackel et al.14 downsampled the point cloud and built a multiscale pyramid of k-D trees to help improve the efficiency of neighbor searching.

An MLS point cloud is usually georeferenced by merging data from LiDAR and other sensors, such as inertial measurement units and global positioning systems.1 However, additional semantic information exists in the raw data acquired by the LiDAR, i.e., the relative positions of the measured points with respect to the LiDAR, which can be viewed as the relative positions of these points with respect to the road surface. This information can help to identify most constructed objects, since the spatial distributions of these objects along the road are generally regular. In addition, the raw data recorded in scan order are beneficial for organizing the point cloud.19

To reduce the neighbor search time and enhance the distinctiveness of local geometric features for pointwise classification, this study fully exploits the contextual and topological information in the raw data of the MLS point clouds, as shown in Fig. 1. First, a two-dimensional (2D) scan grid for data indexing is recovered, and the relative 3D coordinates with respect to the LiDAR position are calculated. Subsequently, a set of local features is extracted using an efficient neighbor search method with a low computational complexity independent of the number of points N in a point cloud. These features are further merged to produce a variety of binary classifiers for specific classes via a boosting supervised learning algorithm combining decision trees.

Fig. 1

Framework of the proposed pointwise classification approach.


2.

Methods

This study considers an MLS system with a single 2D LiDAR sensor used in push-broom mode;20 i.e., the scan plane of the sensor is orthogonal to the direction of vehicle movement.

2.1.

Scan Grid Construction

A scan line is defined as a 2D profile acquired by a single rotation of the 2D LiDAR mirror.19,21 Thus, an MLS point cloud can be organized by constructing a scan grid in which each row represents a scan line, as shown in Fig. 2(a). In the grid, a point p measured by the i’th beam on the j’th scan line can be indexed by (i,j). The scan grid provides a compact representation of the point cloud with a size of Nsl×Nb, where Nsl is the number of scan lines and Nb is the number of laser beams per scan line.

Fig. 2

One hundred scan lines acquired by an MLS system with a single 2D LiDAR sensor, for which the scan plane is orthogonal to the direction of vehicle movement. The 2D LiDAR sensor performs a 360 deg scan at an interval of 0.12 deg, recording 3000 points per scan line. Points are colorized according to their class labels. (a) Scan grid, in which each row represents a scan line. The black points represent points in which the laser beam does not return. (b) Georeferenced 3D coordinates. (c) Relative 3D coordinates.


To construct a scan grid from the MLS data, the scan angles of the point cloud should be recorded. Some LiDARs do not record the point if the laser beam does not return. In this case, we can segment the point cloud into scan lines to calculate the scan line index j for point p by finding a significant jump between the scan angles of adjacent points. Then, the beam index i for point p can be calculated as

Eq. (1)

$$i = \left[\frac{\theta - \theta_{0}}{\Delta\theta}\right] + 1,$$
where θ is the scan angle of point p, θ0 is the start scan angle of a scan line, and Δθ is the scan resolution, i.e., the nominal angular increment between adjacent beams on a scan line.

If the scan resolution Δθ is not provided, it can be estimated using the mean or median of the differences between scan angles of adjacent points on the same scan line.
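As a minimal illustration of this section, the sketch below segments a stream of scan angles into scan lines and computes the beam index of Eq. (1). All names are hypothetical; the jump threshold for scan line segmentation and the use of rounding for the bracket in Eq. (1) are assumptions, not details fixed by the paper.

```python
import numpy as np

def build_scan_grid_indices(theta, theta0, delta_theta=None):
    """Recover (beam index i, scan line index j) per point from scan
    angles recorded in scan order. A sketch: `theta` holds per-point
    scan angles in degrees."""
    theta = np.asarray(theta, dtype=float)
    # Segment into scan lines at significant jumps between adjacent
    # scan angles (threshold of 180 deg is a heuristic assumption).
    jumps = np.abs(np.diff(theta)) > 180.0
    j = np.concatenate(([0], np.cumsum(jumps)))   # scan line index per point
    if delta_theta is None:
        # Estimate the scan resolution as the median angular increment
        # between adjacent points on the same scan line.
        delta_theta = np.median(np.abs(np.diff(theta)[~jumps]))
    # Eq. (1): i = [(theta - theta0) / delta_theta] + 1,
    # interpreting [.] as rounding to the nearest integer (assumption).
    i = np.rint((theta - theta0) / delta_theta).astype(int) + 1
    return i, j, delta_theta
```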

2.2.

Relative Coordinate Calculation

To obtain more contextual information from the LiDAR data, a relative 3D coordinate system is constructed for the MLS point cloud, where x is the distance traveled from the origin along the trajectory of the LiDAR, y is the horizontal displacement with respect to the LiDAR position, and z is the vertical displacement with respect to the LiDAR position. The relative coordinates of point p(i,j) are calculated as

Eq. (2)

$$\begin{cases} x(i,j) = \sum_{k=1}^{j} v(k)\,\Delta t \\ y(i,j) = r(i,j)\cos\theta(i,j) \\ z(i,j) = r(i,j)\sin\theta(i,j), \end{cases}$$
where v(k) is the vehicle speed at the time when the k’th scan line is acquired by the LiDAR, Δt is the time interval between adjacent scan lines, r(i,j) is the radial distance of point p(i,j), and θ(i,j) is the scan angle of point p(i,j).
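A minimal sketch of Eq. (2), assuming per-point ranges and scan angles plus a per-line speed profile are available; the function and variable names are illustrative only.

```python
import numpy as np

def relative_coordinates(r, theta, v, j, dt):
    """Relative 3D coordinates of Eq. (2). `r` and `theta` are per-point
    range (m) and scan angle (rad), `v[k]` the vehicle speed when scan
    line k was acquired, `j` the per-point scan line index, and `dt`
    the time interval between adjacent scan lines (s)."""
    # x: cumulative distance traveled along the trajectory up to line j.
    x_per_line = np.cumsum(np.asarray(v, dtype=float) * dt)
    x = x_per_line[np.asarray(j)]
    # y, z: horizontal and vertical displacement w.r.t. the LiDAR.
    y = r * np.cos(theta)
    z = r * np.sin(theta)
    return x, y, z
```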

Since the attitude of the LiDAR (vehicle) over a short period of time can be regarded as constant, the relative coordinates can describe a local spatial distribution of points as well as the georeferenced coordinates, as shown in Figs. 2(b) and 2(c).

2.3.

Neighbor Search

The neighborhood of each point should be recovered for local feature extraction. A spherical neighborhood S(i,j) of point p(i,j) is defined as the set of points within a sphere centered at p(i,j) with a radius of δ. We propose a fast procedure for searching S(i,j) with a computational complexity of O(1), i.e., independent of the number of points N in a point cloud.

Consider the measurement resolutions Δi and Δj of the scan grid at point p(i,j). The resolution Δi, the minimum distance between point p(i,j) and its adjacent points p(i±1,j) on the j’th scan line, can be estimated from the range r(i,j) and the scan resolution Δθ, as shown in Fig. 3. The resolution Δj, the distance between point p(i,j) and its adjacent points p(i,j±1), can be estimated by the distance traveled by the vehicle between the j’th scan line and the adjacent scan lines along the trajectory of the LiDAR. Thus, the measurement resolutions Δi and Δj are calculated as

Eq. (3)

$$\begin{cases} \Delta_i = r(i,j)\sin\Delta\theta \\ \Delta_j = v(j)\,\Delta t. \end{cases}$$

Fig. 3

Resolution Δi at point p(i,j), i.e., the minimum distance between point p(i,j) and its adjacent points on the j’th scan line.


Then, a candidate neighborhood R(i,j) can be derived from the scan grid:

Eq. (4)

$$R(i,j) = \left\{ p(i_R, j_R) \;\middle|\; i_R \in \left[\, i - \tfrac{\delta}{\Delta_i},\; i + \tfrac{\delta}{\Delta_i} \right],\; j_R \in \left[\, j - \tfrac{\delta}{\Delta_j},\; j + \tfrac{\delta}{\Delta_j} \right] \right\}.$$

The spherical neighborhood S(i,j) is finally determined as

Eq. (5)

$$S(i,j) = \left\{ p(i_S, j_S) \;\middle|\; \left\| p(i_S, j_S) - p(i,j) \right\| \le \delta,\; p(i_S, j_S) \in R(i,j) \right\}.$$

The computational complexity of the proposed neighbor search method can be measured by the number of points in R(i,j):

Eq. (6)

$$N_R(i,j) = \frac{4\delta^{2}}{\Delta_i\,\Delta_j} = \frac{4\delta^{2}}{r(i,j)\sin\Delta\theta \cdot v(j)\,\Delta t},$$
which depends on the measurement resolutions of the scan grid at point p(i,j) and is independent of the number of points in the whole point cloud. Note that, because of the variation in point density, the computational cost of the proposed method increases when the range r(i,j) is small.
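The procedure of Eqs. (3)–(5) could be sketched as follows, assuming the point cloud has been arranged into an (Nsl, Nb, 3) grid of relative coordinates with NaN entries where the beam did not return; names and boundary handling are illustrative.

```python
import numpy as np

def spherical_neighbors(grid_xyz, r, v, i, j, delta, d_theta, dt):
    """Candidate window plus exact filtering, Eqs. (3)-(5). `grid_xyz`
    is an (Nsl, Nb, 3) array indexed [scan line j, beam i]; `r` is the
    range of p(i, j); `v` the per-line vehicle speed; `d_theta` the
    scan resolution in radians; indices are assumed 0-based."""
    # Eq. (3): local measurement resolutions of the scan grid.
    res_i = r * np.sin(d_theta)   # along a scan line
    res_j = v[j] * dt             # across scan lines
    # Eq. (4): candidate window R(i, j) in grid indices; note the window
    # grows as the range shrinks, cf. Eq. (6).
    di = int(np.ceil(delta / res_i))
    dj = int(np.ceil(delta / res_j))
    window = grid_xyz[max(j - dj, 0):j + dj + 1,
                      max(i - di, 0):i + di + 1].reshape(-1, 3)
    window = window[~np.isnan(window).any(axis=1)]   # drop no-return beams
    # Eq. (5): keep only candidates within the sphere of radius delta.
    dist = np.linalg.norm(window - grid_xyz[j, i], axis=1)
    return window[dist <= delta]
```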

2.4.

Feature Extraction

To demonstrate the effectiveness of the relative 3D coordinates, a set of intuitive geometric features is extracted from the spherical neighborhood with a radius of δ = 0.5 m, as shown in Table 1. The density feature is corrected with the measurement resolutions to eliminate the influence of varying vehicle speeds and measurement ranges:

Eq. (7)

$$d(i,j) = N_S(i,j)\,\Delta_i\,\Delta_j,$$
where NS(i,j) is the number of points in the spherical neighborhood.

Table 1

Local features extracted from the spherical neighborhood.

Symbol | Description
x, y, z, I, and n | Relative 3D coordinates, intensity, and number of echoes of the central point
μx, μy, μz, μI, and μn | Means within the neighborhood
σx, σy, σz, σI, and σn | Standard deviations within the neighborhood
Δx, Δy, Δz, ΔI, and Δn | Ranges within the neighborhood
Lλ, Pλ, Sλ, and Oλ | Shape features within the neighborhood13
d | Density corrected with measurement resolutions

In addition, radiometric and penetrating features are derived based on the intensities I and numbers of echoes n measured by the LiDAR, providing further distinctive properties not covered by geometric features.
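For illustration, the shape features of Table 1 can be computed from the eigenvalues of the neighborhood's 3D structure tensor, following the standard eigenvalue-based definitions of Weinmann et al.13 (an assumption here, since the paper cites but does not restate them), together with the corrected density of Eq. (7).

```python
import numpy as np

def shape_and_density_features(nbr_xyz, res_i, res_j):
    """Shape features of Table 1 and corrected density of Eq. (7).
    `nbr_xyz` is an (Ns, 3) array of neighborhood points."""
    centered = nbr_xyz - nbr_xyz.mean(axis=0)
    cov = centered.T @ centered / len(nbr_xyz)   # 3D structure tensor
    lam = np.linalg.eigvalsh(cov)[::-1]          # l1 >= l2 >= l3 >= 0
    l1, l2, l3 = np.maximum(lam, 1e-12)          # guard against zeros
    L = (l1 - l2) / l1                           # linearity  L_lambda
    P = (l2 - l3) / l1                           # planarity  P_lambda
    S = l3 / l1                                  # scattering S_lambda
    O = (l1 * l2 * l3) ** (1.0 / 3.0)            # omnivariance O_lambda
    d = len(nbr_xyz) * res_i * res_j             # Eq. (7), corrected density
    return L, P, S, O, d
```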

2.5.

Classifier Learning

The local features are merged into a series of binary classifiers for a variety of specific classes by a supervised learning algorithm. A decision tree adopts a divide-and-conquer strategy, partitioning the feature space into subregions of high class purity on the training set.22 A top-down tree structure is constructed, and each node chooses the best feature from the feature set to split the training data. Because a single decision tree has a very low bias but an extremely high variance, we use a boosting framework to combine multiple decision trees into an ensemble, improving the classification accuracy and generalization ability.

We chose the GentleBoost algorithm, in which the total exponential loss on the training set is minimized using a functional Newton-like numerical optimization method.23 The ensemble classifier is

Eq. (8)

F=m=1MTm,
where Tm is a decision tree generated as an incremental function in the m’th iteration and M is the number of iterations.
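A compact sketch of the GentleBoost loop with regression trees as weak learners, following Friedman et al.:23 each round fits a tree to the ±1 labels by weighted least squares (the Newton-like step) and reweights the training samples. The tree depth and the use of scikit-learn are assumptions, not details from the paper.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gentleboost(X, y, M=500, max_depth=3):
    """Train the ensemble F of Eq. (8); `y` takes values in {-1, +1}.
    `max_depth` is an illustrative choice."""
    w = np.full(len(y), 1.0 / len(y))
    trees = []
    for _ in range(M):
        tree = DecisionTreeRegressor(max_depth=max_depth)
        tree.fit(X, y, sample_weight=w)   # weighted least-squares fit f_m
        f_m = tree.predict(X)
        w *= np.exp(-y * f_m)             # emphasize misclassified samples
        w /= w.sum()
        trees.append(tree)
    return trees

def predict_score(trees, X):
    """Ensemble score F(x) = sum_m T_m(x); its sign is the binary label."""
    return np.sum([t.predict(X) for t in trees], axis=0)
```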

3.

Results

3.1.

Dataset

To compare our method with prior state-of-the-art methods, we use a publicly available and labeled database, namely Paris-rue-Cassette.20 It contains 12 million points recorded on a street section in Paris with a length of approximately 200 m. A 2D LiDAR sweeps from −180 deg to 180 deg with a time interval of Δt = 10 ms. The starting scan angle θ0 is −180 deg, i.e., the upward direction. All coordinates are georeferenced (E, N, U) in the Lambert 93 and altitude IGN1969 (grid RAF09) reference systems. For the points with a laser beam return, the range r, scan angle θ, intensity I, number of echoes n, and georeferenced LiDAR position (x0,y0,z0) are recorded in scan order in addition to the georeferenced coordinates. The object classes for the experiments include façade, ground, cars, two-wheelers, road inventory, pedestrians, and vegetation. See Table 2 for details.

Table 2

Object classes in the Paris-rue-Cassette database.

Class name | Urban objects | No. of points
Façade | Outside of a building | 7,027,016
Ground | Roads, sidewalks, curbs, and other ground | 4,229,639
Car | Cars | 368,271
Two-wheelers | Bicycles and other two-wheeled vehicles | 40,331
Road inventory | Bollards, lampposts, traffic signs, meters, grids, and other road objects | 46,105
Pedestrian | Still pedestrians, walking pedestrians, standing pedestrians, and other pedestrians | 23,999
Vegetation | Trees and potted plants | 212,131
Total | | 11,947,492

3.2.

Recovery of the Scan Grid

The point cloud was first divided into scan lines at the positions where the sign of the scan angle changed from positive to negative. Then, the angle resolution Δθ = 0.12 deg was estimated by analyzing the distribution of the differences between scan angles of adjacent points on the same scan line. Finally, a scan grid with a size of 4642×3000 was constructed, as shown in Fig. 4. Given the real-time speed v of the LiDAR estimated from the georeferenced LiDAR positions, the relative 3D coordinates were computed using Eq. (2).

Fig. 4

Scan grid for the Paris-rue-Cassette database.


3.3.

Neighbor Search Efficiency

To compare the computational complexity of the k-D tree algorithm and the proposed neighbor search method in searching spherical neighborhoods, we ran the two algorithms on a laptop PC with an AMD Ryzen 5 4600H CPU (hexa-core, 3.0 GHz) and 16 GB of RAM. The k-D tree algorithm was run on the raw Paris-rue-Cassette dataset, whereas the proposed neighbor search method was run on the scan grid of the Paris-rue-Cassette dataset. The Point Cloud Library was used for the k-D tree neighbor search, and the search radius δ was set to 0.2, 0.5, and 0.8 m. As shown in Table 3, our method is much faster.
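The timings in Table 3 were obtained with the Point Cloud Library (C++). Purely as a hedged illustration of how such a comparison could be scripted, the sketch below times index creation and radius search with SciPy's cKDTree instead; it is not the setup used for the reported numbers.

```python
import time
import numpy as np
from scipy.spatial import cKDTree

def time_kdtree_radius_search(points, queries, delta=0.5):
    """Rough timing sketch for k-D tree spherical neighbor search.
    `points` is an (N, 3) array; `queries` an (M, 3) array."""
    t0 = time.perf_counter()
    tree = cKDTree(points)                     # index creation
    t_build = time.perf_counter() - t0
    t0 = time.perf_counter()
    tree.query_ball_point(queries, r=delta)    # spherical neighbor search
    t_search = time.perf_counter() - t0
    return t_build, t_search
```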

Table 3

Computational complexity of the neighbor search.

Paris-rue-Cassette | k-D tree | Proposed approach
Time of index creation | 31.872 s | 0.392 s
Time of neighbor search per 1000 pts, δ=0.2 m | 1.944 s | 0.010 s
Time of neighbor search per 1000 pts, δ=0.5 m | 3.314 s | 0.041 s
Time of neighbor search per 1000 pts, δ=0.8 m | 5.568 s | 0.088 s

Figure 5 shows the frequency distribution of the search rate NS/NR on the Paris-rue-Cassette dataset with an average search rate of 49.49%, demonstrating the efficiency of the proposed neighbor search method.

Fig. 5

Frequency distribution of search rate on Paris-rue-Cassette. The average search rate is 49.49%.


3.4.

Effectiveness of the Relative 3D Coordinates

To demonstrate the effectiveness of the relative 3D coordinates, we compare the distinctiveness of the geometric features derived from the georeferenced and relative 3D coordinates. The geometric features in Table 1 are divided into three kinds for comparison: (i) single coordinates of the central point; (ii) basic features derived from the neighborhood using a single coordinate, i.e., the means, standard deviations, and ranges within the neighborhood; and (iii) shape features derived from the structure tensor of the neighborhood using all three coordinates, i.e., the linearity Lλ, planarity Pλ, scattering Sλ, and omnivariance Oλ.

The Bayes error22 for each feature is numerically estimated as follows:

Eq. (9)

$$e = \frac{1}{2}\sum_{n_h=1}^{N_h} \min\!\left[ h^{+}(n_h),\, h^{-}(n_h) \right],$$
where h+(nh) and h−(nh) are the nh’th bins of the probability histograms with Nh = 100 bins for a single feature, corresponding to the object class and the nonobject class, respectively. All features are mapped to the interval [0, 1].
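Equation (9) amounts to histogramming one feature separately for the object and nonobject classes and summing the bin-wise minima; a minimal sketch, assuming the feature and class mask are supplied as NumPy arrays:

```python
import numpy as np

def bayes_error(feature, is_object, n_bins=100):
    """Numerical Bayes error of Eq. (9) for a single feature.
    `is_object` is a boolean mask marking points of the object class."""
    f = np.asarray(feature, dtype=float)
    f = (f - f.min()) / (np.ptp(f) + 1e-12)    # map feature to [0, 1]
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    # Probability histograms h+ and h- of the two classes.
    h_pos, _ = np.histogram(f[is_object], bins=edges)
    h_neg, _ = np.histogram(f[~is_object], bins=edges)
    h_pos = h_pos / max(h_pos.sum(), 1)
    h_neg = h_neg / max(h_neg.sum(), 1)
    return 0.5 * np.minimum(h_pos, h_neg).sum()
```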

The search radius δ is set to 0.5 m. Figure 6 shows the average Bayes error for each kind of geometric feature on the seven classes. Figures 6(a) and 6(b) show that single relative coordinates are more distinctive than single georeferenced coordinates, implying that the relative coordinates introduce more semantic information. Figure 6(c) shows that shape features using the three relative coordinates have distinctiveness similar to those using the three georeferenced coordinates, implying that the relative 3D coordinates preserve the local spatial distribution of points as well as the georeferenced 3D coordinates do.

Fig. 6

Distinctiveness of the three kinds of geometric features using the georeferenced and relative coordinates, measured as the average of numerically estimated Bayes errors. (a) Single coordinates of the central point. (b) Basic features using a single coordinate, i.e., the means, standard deviations, and ranges within the neighborhood. (c) Shape features Lλ, Pλ, Sλ, and Oλ using three coordinates. F, façade; G, ground; C, cars; 2W, two-wheelers; RI, road inventory; P, pedestrians; and V, vegetation.


3.5.

Pointwise Classification Accuracy

The best pointwise classification results we are aware of are those of Refs. 14, 24, and 25. In Ref. 25, a hierarchical framework composed of ground filtering, structural segmentation, and contextual classification was proposed. In Refs. 14 and 24, geometric features are derived at multiple scales or at an optimal scale and then combined into pointwise object classifiers. To improve the classification accuracy, statistical features derived from the 2D projection of the point cloud, 3D shape context features, and signature of histograms of orientations (SHOT) features are also utilized in Refs. 14 and 24 in addition to fundamental geometric features.

The experiments use the same training and test sets as in Refs. 14 and 24; i.e., 1000 points per class are randomly selected as training samples, and the remaining data are used as test samples. The number of iterations for GentleBoost is M=500. Table 4 shows the classification results. Results of a variety of pointwise classification approaches are provided in Ref. 24, and the best result for each class is used for comparison. Compared with the prior state-of-the-art methods, our approach achieves an approximately 10% improvement in terms of the F1 score; the improvement increases to 15% when the radiometric and penetrating features described in Sec. 2.4 are added to the classifiers.
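The sampling protocol can be made concrete with a short sketch (hypothetical names; the paper does not specify a random seed):

```python
import numpy as np

def sample_training_indices(labels, n_per_class=1000, seed=0):
    """Draw 1000 random points per class for training; all remaining
    points form the test set, as in Refs. 14 and 24."""
    rng = np.random.default_rng(seed)
    train = []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        train.append(rng.choice(idx, size=n_per_class, replace=False))
    train = np.concatenate(train)
    test = np.setdiff1d(np.arange(len(labels)), train)
    return train, test
```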

Table 4

Pointwise classification results.

Class | Hackel et al.14 (F1) | Landrieu et al.24 (F1) | Li et al.25 (F1) | Proposed, geometric features (F1 / AUC) | Proposed, all features (F1 / AUC)
F | 0.9685 | 0.957 | 0.9860 | 0.9656 / 0.9923 | 0.9740 / 0.9941
G | 0.9847 | 0.982 | 0.9815 | 0.9806 / 0.9986 | 0.9842 / 0.9987
C | 0.8943 | 0.835 | 0.8401 | 0.8835 / 0.9936 | 0.9057 / 0.9951
2W | 0.6784 | 0.667 | 0.5725 | 0.7723 / 0.9983 | 0.8594 / 0.9993
RI | 0.3134 | 0.327 | 0.2540 | 0.5639 / 0.9917 | 0.6736 / 0.9932
P | 0.3960 | 0.659 | 0.5454 | 0.7674 / 0.9984 | 0.8003 / 0.9988
V | 0.6790 | 0.549 | 0.9099 | 0.7788 / 0.9951 | 0.8213 / 0.9963
Avg. | 0.7020 | 0.711 | 0.7270 | 0.8160 / 0.9954 | 0.8598 / 0.9965

Note: F, façade; G, ground; C, cars; 2W, two-wheelers; RI, road inventory; P, pedestrians; V, vegetation. The last two columns report our experimental results, which are better than the state of the art (the second, third, and fourth columns).

The F1 score depends on the decision threshold; when the threshold changes, so does the F1 score. The area under the receiver operating characteristic (ROC) curve (AUC), by contrast, does not depend on the decision threshold: it summarizes the relationship between the true- and false-positive rates of a binary classifier across all decision thresholds and is therefore better suited to evaluating classifier performance. Hence, we also report the AUC of our approach in Table 4. Figure 7 shows the ROC curves for the proposed approach with all features.
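As an illustration of this point, the short sketch below computes the AUC from the ensemble scores and shows how F1 shifts with the decision threshold; the scikit-learn utilities and the two example thresholds are assumptions for illustration, not the paper's tooling.

```python
from sklearn.metrics import roc_curve, auc, f1_score

def threshold_free_evaluation(scores, labels):
    """AUC summarizes all thresholds at once, whereas F1 depends on the
    chosen threshold. `scores` are ensemble outputs F(x); `labels` are
    the true binary labels."""
    fpr, tpr, _ = roc_curve(labels, scores)
    area = auc(fpr, tpr)
    # F1 at two different decision thresholds on the ensemble score.
    f1_at_zero = f1_score(labels, scores > 0.0)
    f1_at_one = f1_score(labels, scores > 1.0)
    return area, f1_at_zero, f1_at_one
```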

Fig. 7

ROC curves for the pointwise classifiers.


4.

Discussion and Conclusion

This study aims to speed up the neighbor search and enhance feature distinctiveness for pointwise classification by exploiting the topological and contextual information in the raw data. Considering an MLS system with a single 2D LiDAR sensor, the cores of our approach are (i) constructing a scan grid according to the scan pattern to organize an MLS point cloud, (ii) computing the relative 3D coordinates with respect to the LiDAR position, and (iii) recovering the neighborhood with a fast search method. The computational complexity of the proposed neighbor search strategy is independent of the number of points in a point cloud, with an average search rate of 49.49%. In terms of the Bayes error, geometric features using the relative coordinates are more distinctive than the same features using georeferenced coordinates. Compared with the state-of-the-art methods, the proposed pointwise classification achieves an approximately 10% improvement in terms of the F1 score, while using simpler geometric features derived for a search radius of 0.5 m. Furthermore, our approach is straightforward to parallelize and could be made even faster by taking advantage of parallel programming.

The proposed approach has several limitations: (i) the proposed neighbor search approach only works for an MLS system with a single 2D LiDAR sensor used in push-broom mode; (ii) to construct or recover a scan grid from the MLS data, the scan angles of the point cloud must be recorded, and the MLS data must either be recorded in scan order or carry time stamps; and (iii) because the proposed approach uses local features derived from several adjacent scan lines, it will not work well if the vehicle performs multiple drive-runs in different directions (e.g., driving forward and backward along the road).

In future work, the proposed approach will be extended to MLS systems with a single 3D LiDAR or multiple 2D/3D LiDARs by exploring the measurement geometry of 3D LiDARs as well as the spatial relationship of multiple LiDARs. Furthermore, postprocessing, such as soft labeling,24 will be considered to improve the classification accuracy.

Acknowledgments

This research was supported by the National Natural Science Foundation of China under Grant No. 31901239.

References

1. I. Puente et al., “Land-based mobile laser scanning systems: a review,” Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XXXVIII-5/W12, 163–168 (2011). https://doi.org/10.5194/isprsarchives-XXXVIII-5-W12-163-2011

2. Y. Wang et al., “A survey of mobile laser scanning applications and key techniques over urban areas,” Remote Sens., 11(13), 1540 (2019). https://doi.org/10.3390/rs11131540

3. C. Wang et al., “Urban 3D modeling with mobile laser scanning: a review,” Virtual Real. Intell. Hardware, 2, 175–212 (2020). https://doi.org/10.1016/j.vrih.2020.05.003

4. R. W. Wolcott and R. M. Eustice, “Visual localization within LIDAR maps for automated urban driving,” in IEEE Int. Conf. Intell. Robots and Syst. (2014). https://doi.org/10.1109/IROS.2014.6942558

5. H. Zhao et al., “Ground surface recognition at voxel scale from mobile laser scanning data in urban environment,” IEEE Geosci. Remote Sens. Lett., 17(2), 317–321 (2020). https://doi.org/10.1109/LGRS.2019.2919297

6. C. Ye et al., “Robust lane extraction from MLS point clouds towards HD maps especially in curve road,” IEEE Trans. Intell. Transp. Syst., 1–14 (2020). https://doi.org/10.1109/TITS.2020.3028033

7. Y. Li et al., “Localization and extraction of road poles in urban areas from mobile laser scanning data,” Remote Sens., 11(4), 401 (2019). https://doi.org/10.3390/rs11040401

8. J. Zhang et al., “Vehicle tracking and speed estimation from roadside Lidar,” IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., 13, 5597–5608 (2020). https://doi.org/10.1109/JSTARS.2020.3024921

9. S. Xu and R. Wang, “Power line extraction from mobile LiDAR point clouds,” IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., 12, 734–743 (2019). https://doi.org/10.1109/JSTARS.2019.2893967

10. Y. Q. Li et al., “Street tree information extraction and dynamics analysis from mobile lidar point cloud,” Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLIII-B2-2020, 271–277 (2020). https://doi.org/10.5194/isprs-archives-XLIII-B2-2020-271-2020

11. S. Xu et al., “Automatic extraction of street trees’ nonphotosynthetic components from MLS data,” Int. J. Appl. Earth Obs. Geoinf., 69, 64–77 (2018). https://doi.org/10.1016/j.jag.2018.02.016

12. E. Che, J. Jung, and M. J. Olsen, “Object recognition, segmentation, and classification of mobile laser scanning point clouds: a state of the art review,” Sensors, 19(4), 810 (2019). https://doi.org/10.3390/s19040810

13. M. Weinmann et al., “Semantic point cloud interpretation based on optimal neighborhoods, relevant features and efficient classifiers,” ISPRS J. Photogramm. Remote Sens., 105, 286–304 (2015). https://doi.org/10.1016/j.isprsjprs.2015.01.016

14. T. Hackel, J. D. Wegner, and K. Schindler, “Fast semantic segmentation of 3D point clouds with strongly varying density,” ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., III-3, 177–184 (2016). https://doi.org/10.5194/isprs-annals-III-3-177-2016

15. M. Weinmann et al., “Distinctive 2D and 3D features for automated large-scale scene analysis in urban areas,” Comput. Graphics, 49, 47–57 (2015). https://doi.org/10.1016/j.cag.2015.01.006

16. T. Hackel, J. D. Wegner, and K. Schindler, “Joint classification and contour extraction of large 3D point clouds,” ISPRS J. Photogramm. Remote Sens., 130, 231–245 (2017). https://doi.org/10.1016/j.isprsjprs.2017.05.012

17. M. Weinmann et al., “Geometric features and their relevance for 3D point cloud classification,” ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., IV-1/W1, 157–164 (2017). https://doi.org/10.5194/isprs-annals-IV-1-W1-157-2017

18. J. Demantké et al., “Dimensionality based scale selection in 3D lidar point clouds,” Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XXXVIII-5/W12, 97–102 (2011). https://doi.org/10.5194/isprsarchives-XXXVIII-5-W12-97-2011

19. E. Che and M. J. Olsen, “An efficient framework for mobile lidar trajectory reconstruction and Mo-norvana segmentation,” Remote Sens., 11(7), 836 (2019). https://doi.org/10.3390/rs11070836

20. B. Vallet et al., “TerraMobilita/iQmulus urban point cloud analysis benchmark,” Comput. Graphics, 49, 126–133 (2015). https://doi.org/10.1016/j.cag.2015.03.004

21. Y. Zhou et al., “A fast and accurate segmentation method for ordered LiDAR point cloud of large-scale scenes,” IEEE Geosci. Remote Sens. Lett., 11(11), 1981–1985 (2014). https://doi.org/10.1109/LGRS.2014.2316009

22. R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification, 2nd ed., John Wiley & Sons, New York (1998).

23. J. Friedman, T. Hastie, and R. Tibshirani, “Additive logistic regression: a statistical view of boosting,” Ann. Stat., 28(2), 337–407 (2000). https://doi.org/10.1214/aos/1016218223

24. L. Landrieu et al., “A structured regularization framework for spatially smoothing semantic labelings of 3D point clouds,” ISPRS J. Photogramm. Remote Sens., 132, 102–118 (2017). https://doi.org/10.1016/j.isprsjprs.2017.08.010

25. Y. Li, B. Wu, and X. Ge, “Structural segmentation and classification of mobile laser scanning point clouds with large variations in point density,” ISPRS J. Photogramm. Remote Sens., 153, 151–165 (2019). https://doi.org/10.1016/j.isprsjprs.2019.05.007

Biography

Qiujie Li is an associate professor working at the College of Mechanical and Electronic Engineering, Nanjing Forestry University. Her research focuses on mobile laser scanning, point cloud processing, and forestry informatization.

Biographies of the other authors are not available.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Qiujie Li, Pengcheng Yuan, Yusen Lin, Yuekai Tong, and Xu Liu "Pointwise classification of mobile laser scanning point clouds of urban scenes using raw data," Journal of Applied Remote Sensing 15(2), 024523 (29 June 2021). https://doi.org/10.1117/1.JRS.15.024523
Received: 20 April 2021; Accepted: 11 June 2021; Published: 29 June 2021