Integrative image segmentation optimization and machine learning approach for high quality land-use and land-cover mapping using multisource remote sensing data
Mohamed Barakat A. Gibril, Mohammed Oludare Idrees, Helmi Zulhaidi Mohd Shafri, Kouame Yao
Journal of Applied Remote Sensing | Open Access | Published: 9 March 2018
Abstract
The growing use of optimization for geographic object-based image analysis and the possibility of deriving a wide range of information about the image in textual form make machine learning (data mining) a versatile tool for information extraction from multiple data sources. This paper presents an application of data mining for land-cover classification by fusing SPOT-6, RADARSAT-2, and derived datasets. First, the images and other derived indices (normalized difference vegetation index, normalized difference water index, and soil adjusted vegetation index) were combined and subjected to a segmentation process, with the optimal segmentation parameters obtained using a combination of spatial and Taguchi statistical optimization. The image objects, which carry all the attributes of the input datasets, were extracted and related to the target land-cover classes through a data mining algorithm (decision tree) for classification. To evaluate the performance, the result was compared with two nonparametric classifiers: support vector machine (SVM) and random forest (RF). Furthermore, the decision tree classification result was evaluated against six unoptimized trials segmented using arbitrary parameter combinations. The results show that the optimized process produces better land-use land-cover classification, with an overall classification accuracy of 91.79%, compared with 87.25% and 88.69% for SVM and RF, respectively, while the six unoptimized classifications yield overall accuracies between 84.44% and 88.08%. The higher accuracy of the optimized data mining classification approach compared with the unoptimized results indicates that the optimization process has a significant impact on classification quality.

1. Introduction

Fusion of optical and SAR images has been used extensively to improve the quality of feature extraction in many applications. The advantages of combining multisource spatial data to enhance land use and land cover (LULC) classification have been widely reported.1–6 However, the quality of the extracted information also relies on the classification algorithm.7 Today, geographic object-based image analysis (GEOBIA) has become the standard feature extraction approach in the remote sensing community, a paradigm shift from the conventional pixel-based image classification method.8–10 GEOBIA permits multiscale and hierarchical image object representation with the additional benefit of employing knowledge-driven mechanisms using the image semantics, such as spectral, spatial, textural, and contextual information.

Object-based mapping techniques are even more desirable for generating informative and accurate maps when data from different sensors are combined. Data fusion is a popular technique in remote sensing; however, studies combining data for the object-oriented approach have generally relied on standard fusion techniques, such as the wavelet transform, Brovey transform, IHS (intensity, hue, and saturation), pan-sharpening, and Ehlers fusion.2,4,11–14 Although these fusion techniques enhance the spatial resolution of the resulting image, it is well documented that they suffer from spectral distortion.4,15 Also, there is a limitation on the number of datasets that can be combined. Nevertheless, improvement in feature extraction and mapping through fusion and object-based techniques has been reported for various applications, including urban land cover,13,16 landslide inventory and mapping,11,17 vegetation mapping,5,18,19 and flood extent extraction.14 More recently, the idea of integrating spectral and nonspectral information to enrich feature attribute extraction for better classification and quality map production has been advanced.8 In line with this development and the growing application of machine learning algorithms,4,13 fusion by layer stacking is gaining prominence in remote sensing image analysis.3,20,21

Image segmentation is the foundation of GEOBIA. The segmentation process divides the image into smaller nonoverlapping regions using the color, texture, and shape properties of the image, usually governed by three parameters available to users, namely scale, shape, and compactness.22,23 Identifying the optimum combination of these parameters is very challenging.24 Thus, using an optimization technique can be very effective in reducing the time and effort involved in a trial-and-error strategy and in improving feature detection accuracy. Various strategies have been used to evaluate segmentation quality, including visual analysis, system-level evaluation, empirical discrepancy methods, and empirical goodness methods.16,25

The first method employs visual analysis, comparing multiple segmentation outputs to select the best parameter combination. However, the approach is subjective, time-consuming, and does not include any quantitative measure of quality. The second method operates at the system level, treating segmentation as a major part of the classification that directly affects the final result; it evaluates quality using classification accuracy as the indicator.26–28 The third is the empirical discrepancy method, which uses reference polygons (e.g., manually digitized features) to examine the optimal combination of parameters by measuring the discrepancies between the segmentation output and the digitized image objects. If the discrepancy between the segmentation and the reference objects is small, it indicates high segmentation quality.12,16,26,29–34 The drawbacks of this technique include the extensive manual effort needed to prepare the reference objects, which can be subjective, labor-intensive, and time-consuming.

The last quality evaluation method is the empirical goodness method.29 Empirical goodness methods involve the adoption of statistical quality criteria to score and rank multiple image segmentations and find the optimal combination of segmentation parameters.25 An empirical goodness objective function proposed by Espindola et al.35 uses a quantitative statistical criterion that combines the weighted variance and spatial autocorrelation (Moran's index) of the image pixels to determine segmentation quality. A plethora of empirical goodness methods has been proposed and tested.25,27,33,36–39 The issue with these empirical methods is that they only optimize the scale parameter and do not emphasize finding the optimal combination of the three parameters.40,41

The robust Taguchi statistical technique, a fractional factorial design established by Genichi Taguchi, has been widely adopted in engineering analysis to optimize design variables and the performance characteristics of combinations of design parameters.42 It provides a straightforward and efficient tool to find the optimum range of designs for a high-quality system and significantly minimizes the overall testing time and experimental cost.43 It uses an orthogonal array from the design of experiments, which provides a straightforward and systematic way to optimize the design, and assesses performance by measuring the signal-to-noise ratio (SNR) of each experiment. The combination of Espindola's objective function and the Taguchi optimization technique has been applied recently and is gaining relevance in remote sensing applications, such as landslide inventory mapping,17,40 flood mapping,14 asbestos cement roof detection,20 and automatic bird nest detection and counting.41 In this study, LULC classification was improved using GEOBIA. First, segmentation is optimized using the Taguchi statistical technique and, subsequently, classification is carried out using the machine learning C4.5 decision tree algorithm. The proposed method is then compared with support vector machine (SVM; optimized with the Taguchi statistical technique), random forest (RF), and a set of unoptimized trials segmented using arbitrary parameter combinations to investigate the efficiency of the optimization process.

2. Study Area, Data, and Method

The study was conducted in Perak, a state in Peninsular Malaysia. Geographically, the site is located between longitudes 100°51′22″E and 101°14′17″E and latitudes 4°13′21″N and 3°51′60″N (Fig. 1), covering 1741.5 km². The land use comprises residential settlements, water bodies, and agricultural land, including oil palm, rice paddy, and vegetable crops.

Fig. 1

Map of Malaysian states showing the location of the study area (right) and SPOT-6 imagery of the site in RGB color combination (left).


The SPOT-6 and RADARSAT-2 images used in this study were provided by the Agensi Remote Sensing Malaysia (ARSM). The SPOT-6 image, acquired in February 2014, has 6-m spatial resolution and four multispectral bands within the 0.455- to 0.890-μm wavelength range. The RADARSAT-2 image was acquired on March 15, 2015, in fine-beam SGF mode with HH single polarization at 20-deg to 50-deg incidence angles. The SAR image was terrain geocoded and resampled to 12.5 m on delivery. Table 1 presents the characteristics of the datasets.

Table 1

Data used and their properties.

Properties | SPOT-6 | RADARSAT-2
Acquisition date | February 2, 2014 | March 15, 2015
Spatial resolution | 6 m | 12.5 m
Wavelength | 0.455 to 0.525 μm (blue); 0.530 to 0.590 μm (green); 0.625 to 0.695 μm (red); 0.760 to 0.890 μm (NIR) | C-band
Polarization | — | HH

2.1. Data Processing

For the preprocessing task, the SPOT-6 image was corrected for radiometric and atmospheric effects. Speckle in the SAR data was reduced using a 5×5 kernel local sigma filter. Subsequently, the SAR image was enhanced texturally with a 3×3 kernel occurrence texture measure and resampled to 6-m spatial resolution to match the SPOT-6 image. The two datasets were coregistered into correct alignment using image-to-image registration and then reprojected to the Universal Transverse Mercator projection. Finally, the two datasets were fused by layer stacking. Unlike other fusion methods that transform the image spectral values, layer stacking combines datasets from different sensors without altering the spatial and spectral characteristics of the original data.1 This ability to retain the image fidelity is an advantage for object-based image analysis because the algorithm exploits all image properties to detect physical objects in the image.
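As an illustration of this layer-stacking step, the following minimal sketch merges a coregistered four-band optical raster and a single-band SAR raster into one multiband file. It assumes the inputs are already on a common 6-m UTM grid; the file names and the use of the rasterio and NumPy libraries are assumptions for illustration, not part of the original workflow.

```python
# Minimal layer-stacking sketch (assumed inputs: coregistered, resampled rasters).
import numpy as np
import rasterio

with rasterio.open("spot6_utm_6m.tif") as opt, rasterio.open("rsat2_hh_utm_6m.tif") as sar:
    optical = opt.read()                       # (4 bands, rows, cols)
    radar = sar.read(1).astype(optical.dtype)  # single HH backscatter band
    profile = opt.profile                      # reuse grid, CRS, and transform

# Stack without altering the original spectral or backscatter values.
stack = np.concatenate([optical, radar[np.newaxis, :, :]], axis=0)

profile.update(count=stack.shape[0])
with rasterio.open("spot6_rsat2_stack.tif", "w", **profile) as dst:
    dst.write(stack)
```

Because no spectral transformation is applied, each band in the stack keeps the radiometry of its source sensor, which is the property the object-based analysis relies on.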

2.2. Optimization and Segmentation

Following preprocessing, the segmentation process was optimized using the integrated Taguchi-objective function optimization strategy.41 Based on the optimal parameter combination, the image was segmented, and the result was classified using different algorithms, including data mining (DM) with a developed rule-set. Finally, the effect of the optimization was evaluated. Figure 2 presents the data processing and analysis workflow.

Fig. 2

Methodological workflow.


Image segmentation is the first and most fundamental step in GEOBIA.27 The widely used region-growing multiresolution segmentation (MRS) was employed. MRS starts with single-pixel image objects as seeds, and neighboring image pixels are then merged in successive steps to produce larger objects until the predefined criteria are met.44 However, the quality of this process depends on the proper selection of the segmentation parameters: scale and homogeneity (shape and compactness) values. The scale value controls the size of the segments: a high scale value produces large image segments and a small scale value generates small segments.10,45 The other parameter, homogeneity, combines color and shape properties. In this research, the robust Taguchi-objective function optimization technique was used to derive the appropriate combination of scale, shape, and compactness.11,14,17,20,40,41 The technique integrates the statistical Taguchi and spatial objective function optimization methods iteratively in a single processing workflow to produce an accurate result.

In GEOBIA, segmentation of image spectral values relative to their spatial arrangement (autocorrelation) is fundamental to feature identification and grouping. The segmentation technique divides the image into homogeneous contiguous regions that enclose identical pixels as objects within each segment, based on the assumption that an image pixel most likely belongs to the same object as its neighboring pixels.46 Accurate partitioning of the image into distinct image objects depends on the appropriate selection of the segmentation parameters: scale, shape, and compactness. Over time, the objective function,47 which attempts to select parameters that produce the best quality segmentation based on intrasegment homogeneity and intersegment separability, has been widely used. However, this optimization technique relies on arbitrary selection of a range of parameters for experimentation. Moreover, it emphasizes varying the scale factor while keeping the shape and compactness factors constant, even though the quality of the resulting segments depends on the correct combination of all three parameters, which is perceived as a bias of the objective function. Hence, the idea of incorporating the Taguchi method into the optimization process by, first, optimizing the design of experiments using the Taguchi orthogonal array and, second, modeling a unique optimal parameter combination with the Taguchi SNR is adopted here. Comprehensive details of this approach can be found in the literature.41,48,49
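To make the objective function concrete, the sketch below shows one way the criterion of Espindola et al.35 can be computed for a set of candidate segmentations: an area-weighted within-segment variance, Moran's I of the segment mean values, and a normalized sum of the two. This is a simplified, single-band illustration under stated assumptions (each candidate is a label image and a segment adjacency list is available); it is not the implementation used in the paper.

```python
# Sketch of Espindola-style objective-function scoring for candidate segmentations.
import numpy as np

def weighted_variance(band, labels):
    """Area-weighted within-segment variance (intrasegment homogeneity)."""
    acc, area = 0.0, 0
    for seg_id in np.unique(labels):
        values = band[labels == seg_id]
        acc += values.size * values.var()
        area += values.size
    return acc / area

def morans_i(band, labels, adjacency):
    """Moran's I of segment mean values (intersegment separability);
    `adjacency` maps each segment id to the ids of its neighbours."""
    ids = np.unique(labels)
    means = np.array([band[labels == i].mean() for i in ids])
    dev = means - means.mean()
    pos = {seg: k for k, seg in enumerate(ids)}
    num = sum(dev[pos[i]] * dev[pos[j]] for i in ids for j in adjacency.get(i, []))
    n_links = sum(len(adjacency.get(i, [])) for i in ids)
    return (len(ids) / n_links) * num / (dev ** 2).sum()

def objective_scores(variances, morans):
    """Normalize both criteria over all candidates and add them;
    the candidate with the highest score is taken as the best segmentation."""
    norm = lambda x: (np.max(x) - np.asarray(x)) / (np.max(x) - np.min(x))
    return norm(variances) + norm(morans)
```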

For the experimental design, five levels were defined for each of the three segmentation parameters (Table 2). The orthogonal array reduces the number of experiments to only 25, coded L25 (3^5), compared with 243 experiments using the standard full factorial design. The plateau objective function (POF)35 was measured to evaluate the segmentation quality of each experiment. The POF combines the weighted variance and spatial autocorrelation (Moran's index) to evaluate both the intrasegment homogeneity and the intersegment heterogeneity of the image objects. To assess the experimental results, the SNR is calculated as the quality measure. The SNR values yield the optimum segmentation parameters, which were used for image segmentation, followed by classification to extract the land-cover classes. The different land-cover features were then classified by exploiting the spectral, textural, and spatial relationships of the image objects (segments).
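The Taguchi step converts the POF response of each run into a 'larger is better' SNR, SNR = −10 log10[(1/n) Σ 1/y²], and averages the SNRs of the runs at each factor level; the level with the highest mean SNR is selected for each parameter. The sketch below illustrates this calculation; the level assignments and POF values are hypothetical placeholders rather than the responses in Table 4.

```python
# Sketch of the Taguchi "larger is better" SNR and main-effect calculation.
import numpy as np

def snr_larger_is_better(y):
    """SNR = -10 * log10(mean(1 / y^2)) for the response(s) of one run."""
    y = np.atleast_1d(np.asarray(y, dtype=float))
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

def main_effects(factor_levels, responses, n_levels=5):
    """Mean SNR of the runs at each level of one factor (its main-effect curve)."""
    snr = np.array([snr_larger_is_better(r) for r in responses])
    levels = np.asarray(factor_levels)
    return np.array([snr[levels == lvl].mean() for lvl in range(1, n_levels + 1)])

# Hypothetical example for the scale factor: 25 runs, 5 runs per level.
scale_levels = np.repeat([1, 2, 3, 4, 5], 5)
pof_values = np.random.default_rng(1).uniform(0.9, 1.6, size=25)  # placeholder POFs
effects = main_effects(scale_levels, pof_values)
best_scale_level = int(np.argmax(effects)) + 1  # level with the highest mean SNR
```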

Table 2

Segmentation parameter level and variable definition.

Level | Scale | Shape | Compactness
1 | 30 | 0.5 | 0.1
2 | 40 | 0.6 | 0.3
3 | 50 | 0.7 | 0.5
4 | 60 | 0.8 | 0.7
5 | 70 | 0.9 | 0.9

2.3. Classification and Accuracy Assessment

The outcome of segmentation is a set of unclassified image objects with a database of layer values, such as spectral indices, backscattering values, and textural parameters (Table 3), which allows the feature characteristics to be manipulated for DM. The optimized image objects were classified into nine land-cover classes (palm oil, initial paddy stage, intermediate paddy stage, matured paddy stage, bare soil, flooded soil, built-up areas, water bodies, and grass or other vegetation) using SVM, RF, and the rule-based DM approach. Theoretical bases of SVM, RF, and DM can be found in the literature.13,18,19,50–55

Table 3

Description of the feature space used in the SVM, RF, and rule-based classifications to classify the image objects derived from the multisource data and optimized by the Taguchi technique.

Feature types | Feature names and descriptions
Spectral | Mean reflectance of the blue, green, red, and NIR bands and of the backscattering layer.
 | Standard deviation of the reflectance of the blue, green, red, and NIR bands and of the backscattering layer.
 | Normalized difference vegetation index, (R_NIR − R_Red)/(R_NIR + R_Red).
 | Normalized difference water index, (R_Green − R_NIR)/(R_Green + R_NIR).
 | Soil adjusted vegetation index, [(R_NIR − R_Red)/(R_NIR + R_Red + L)] × (1 + L).
 | Ratio G index,56 R_Green/(R_Blue + R_Green + R_Red + R_NIR).
 | Brightness values.
Textural | Grey-level co-occurrence matrix (GLCM) mean, GLCM contrast, GLCM entropy, GLCM dissimilarity, GLCM homogeneity, GLCM correlation, GLCM standard deviation, GLCM angular second moment, grey-level difference vector (GLDV) mean, GLDV contrast, GLDV entropy, and GLDV angular second moment.
Spatial and geometric | Density, compactness, asymmetry, shape index, rectangular fit, and elliptic fit.
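The spectral indices listed in Table 3 can be computed directly from the object mean reflectances, as in the short sketch below. The function and variable names are illustrative, and the soil adjustment factor default of 0.5 is a common convention rather than a value stated in the paper.

```python
# Sketch of the per-object spectral indices used as DM features (Table 3).
import numpy as np

def spectral_indices(blue, green, red, nir, soil_factor=0.5):
    """Return NDVI, NDWI, SAVI, and ratio-G for arrays of object mean reflectances."""
    blue, green, red, nir = map(np.asarray, (blue, green, red, nir))
    ndvi = (nir - red) / (nir + red)
    ndwi = (green - nir) / (green + nir)
    savi = (nir - red) / (nir + red + soil_factor) * (1 + soil_factor)
    ratio_g = green / (blue + green + red + nir)
    return ndvi, ndwi, savi, ratio_g
```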

The SVM and RF classifiers were applied by selecting training samples representative of each feature class. For the rule-based classification, a decision tree (Fig. 3) was constructed by implementing the C4.5 algorithm in Weka,55 an open-source software package, using the image object attributes and indices (Table 3). The defined relationships between the image attributes and the land-cover classes were used to build the rule-sets for classifying the image objects. Performance of the optimization process was evaluated by comparing its result with six different trials in which the segmentation parameters were arbitrarily combined and classified using the rule-based classification method. Evaluation of the classifiers was based on the classification accuracy using the traditional confusion matrix and its measures (overall accuracy, kappa coefficient, etc.).57 Selection of training image objects for classification and accuracy assessment was done randomly. The GPS points collected during the site visit do not cover the entire range of classes defined in this study because of coverage and accessibility constraints. Therefore, the selection of training samples was guided by the GPS points, the land-use map provided by the Town and Country Planning Department, and Google Earth imagery. In addition, a quantitative assessment was carried out to determine whether there is a significant change in the classification accuracy before and after optimization using McNemar's test.58
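As a hedged illustration of the rule-set construction step, the sketch below trains a decision tree on labeled image-object attributes and prints its split thresholds, which is the kind of output that is translated into a GEOBIA rule-set. The paper used the C4.5 (J48) implementation in Weka; this stand-in uses scikit-learn's CART tree with an entropy criterion, and the feature names, class labels, and data are purely hypothetical.

```python
# Sketch: learn decision-tree thresholds from object attributes (C4.5 stand-in).
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# X: object attributes drawn from Table 3 (spectral, textural, spatial features);
# y: land-cover class labels of the training objects. Random data for illustration.
rng = np.random.default_rng(0)
X = rng.random((300, 4))       # e.g., NDVI, NDWI, brightness, GLCM mean
y = rng.integers(0, 3, 300)    # e.g., paddy, water, built-up

tree = DecisionTreeClassifier(criterion="entropy", max_depth=5).fit(X, y)

# The printed if/else splits are the thresholds that would be rewritten as
# membership rules for the image objects in the GEOBIA environment.
print(export_text(tree, feature_names=["NDVI", "NDWI", "brightness", "glcm_mean"]))
```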

Fig. 3

Decision tree generated in Weka for developing the rule-sets used for the classification.


3. Results and Discussion

3.1. Optimization Output

The first two stages of the optimization, derived from the 25 experiments, produced a preliminary indication of the optimization of the MRS and SVM parameters, with the POF and kappa values as indicators (Table 4). However, at this level, the operation still poses the challenge of choosing the correct optimal combination because a number of closely related POF and kappa values from different combinations appear optimal.

Table 4

L25 orthogonal array for the MRS and SVM parameters and the experiment responses.

(L25 combination of MRS parameters: scale, shape, compactness, with POF response; L25 combination of SVM parameters: C, γ, with kappa response.)
Experiment | Scale | Shape | Compactness | POF | C | γ | Kappa
1 | 30 | 0.5 | 0.1 | 1 | 10 | 0.1 | 0.78
2 | 30 | 0.6 | 0.3 | 1.0001 | 10 | 0.3 | 0.80
3 | 30 | 0.7 | 0.5 | 1.1358 | 10 | 0.5 | 0.78
4 | 30 | 0.8 | 0.7 | 1.2186 | 10 | 0.7 | 0.79
5 | 30 | 0.9 | 0.9 | 1.3166 | 10 | 0.9 | 0.78
6 | 40 | 0.5 | 0.3 | 1.1841 | 30 | 0.1 | 0.80
7 | 40 | 0.6 | 0.5 | 1.2618 | 30 | 0.3 | 0.82
8 | 40 | 0.7 | 0.7 | 1.3595 | 30 | 0.5 | 0.85
9 | 40 | 0.8 | 0.9 | 1.3901 | 30 | 0.7 | 0.85
10 | 40 | 0.9 | 0.1 | 1.2895 | 30 | 0.9 | 0.83
11 | 50 | 0.5 | 0.5 | 1.3786 | 50 | 0.1 | 0.80
12 | 50 | 0.6 | 0.7 | 1.5315 | 50 | 0.3 | 0.84
13 | 50 | 0.7 | 0.9 | 1.4264 | 50 | 0.5 | 0.85
14 | 50 | 0.8 | 0.1 | 1.4821 | 50 | 0.7 | 0.85
15 | 50 | 0.9 | 0.3 | 1.0454 | 50 | 0.9 | 0.83
16 | 60 | 0.5 | 0.7 | 1.5421 | 70 | 0.1 | 0.82
17 | 60 | 0.6 | 0.9 | 1.5149 | 70 | 0.3 | 0.87
18 | 60 | 0.7 | 0.1 | 1.5327 | 70 | 0.5 | 0.85
19 | 60 | 0.8 | 0.3 | 1.2681 | 70 | 0.7 | 0.85
20 | 60 | 0.9 | 0.5 | 1.0814 | 70 | 0.9 | 0.83
21 | 70 | 0.5 | 0.9 | 1.4635 | 90 | 0.1 | 0.82
22 | 70 | 0.6 | 0.1 | 1.525 | 90 | 0.3 | 0.87
23 | 70 | 0.7 | 0.3 | 1.3883 | 90 | 0.5 | 0.86
24 | 70 | 0.8 | 0.5 | 1.1012 | 90 | 0.7 | 0.85
25 | 70 | 0.9 | 0.7 | 0.8548 | 90 | 0.9 | 0.83

Further iteration using the SNR "larger is better" option provided refined optimal values (Fig. 4 and Table 5) that eliminate the ambiguity discussed above. Ultimately, the results yield the optimum parameter combination of 60, 0.7, and 0.9 for the scale, shape, and compactness factors, respectively, for MRS, and 90 and 0.3 for C and γ, respectively, for SVM.

Fig. 4

Main effect plot of the SNR “larger is better” for (a) MRS and (b) SVM parameters.


Table 5

Statistical SNR evaluations for MRS and SVM parameters.

Level | MRS: Scale | MRS: Shape | MRS: Compactness | SVM: C | SVM: γ
1 | 1.0427 | 2.2652 | 2.6001 | 2.059 | 1.856
2 | 2.2448 | 2.5992 | 1.3534 | 1.593 | 1.486
3 | 2.6758 | 2.6835 | 1.4862 | 1.557 | 1.489
4 | 2.7643 | 2.1792 | 2.0974 | 1.434 | 1.544
5 | 1.8595 | 0.86 | 3.0501 | 1.408 | 1.675
Delta | 1.7216 | 1.8235 | 1.6968 | 0.651 | 0.37
Rank | 2 | 1 | 3 | 1 | 2

3.2. Classification Results and Accuracy

The classification results for the optimized (SVM, RF, and DT) and unoptimized (DT) segmentation processes are presented in Figs. 5 and 6. Each classified image contains nine classes: palm oil, initial paddy stage, intermediate paddy stage, matured paddy stage, bare soil, flooded soil, built-up areas, water bodies, and grass or other vegetation types.

Fig. 5

LULC map using the optimized segments: (a) SVM, (b) RF, and (c) machine learning-based DT.


Fig. 6

Classification results of the unoptimized process using arbitrary segmentation parameter combinations (a) 100, 0.9, 0.3; (b) 90, 0.7, 0.3; (c) 80, 0.7, 0.5; (d) 70, 0.7, 0.7; (e) 60, 0.9, 0.5; and (f) 50, 0.9, 0.1.


Qualitatively, the use of multiple attributes of the surface characteristics from the images and their derivatives enhanced the quality of the resulting map. Particularly in tropical agricultural areas, the integration of optical and radar sensor data has provided additional information to separate intraspecies vegetation and various land-cover classes.3,5,59 The classification maps obtained from the optimized SVM, RF, and DM classifiers do not show much difference except for the flooded soil class in the southern part of the image, which is well classified by SVM and RF but partly misclassified as grass/other vegetation by DM. Aside from this class, DM shows superiority in identifying subtle features, as can be seen in the lot demarcation boundaries and the road network, most of which are not detected by SVM and RF. The HH polarization is highly sensitive to moisture content and vegetation;5 this accounts for the well-classified vegetation types. Also, the rivers and their tributaries are well mapped by SVM and DM; but farther away from the main river, the tributaries were not detected by RF. Visual analysis reveals that SVM and RF exhibit misclassification among the built-up area, bare soil, and matured paddy classes and also between flooded soil and grassland, particularly within the paddy fields, all of which are distinctly separated into their respective classes by the DM classifier.

Quantitative evaluation of the classifications yielded overall accuracies of 87.25%, 88.69%, and 91.79% for SVM, RF, and DM, respectively (Table 6). These accuracies are similar to those reported in closely related works. For example, Zhang and Xie19 obtained classification accuracies of 85% and 89% with SVM and RF, respectively, after fusion. Similarly, Ribeiro and Fonseca13 obtained an overall accuracy of 85.66% from pan-sharpened WorldView-2 imagery using object-oriented techniques and a DT classifier. On a scale of performance, the three maps produced acceptable results; however, DM using the decision-tree-based approach produces superior quality.

Table 6

Error matrix for the SVM, RF, and DT classifiers with producer's accuracy (PA) and user's accuracy (UA).

Class | SVM PA (%) | SVM UA (%) | RF PA (%) | RF UA (%) | Rule-based DM PA (%) | Rule-based DM UA (%)
Palm oil | 92.21 | 96.2 | 98.01 | 96.01 | 96.99 | 92.41
Initial paddy | 96.58 | 95.47 | 98.37 | 85.07 | 98.41 | 79.59
Intermediate paddy | 67.54 | 100 | 86.72 | 88.42 | 90.58 | 96.98
Matured paddy | 89.59 | 99.71 | 84.08 | 96.06 | 92.36 | 80.59
Grass | 86.73 | 100 | 86.47 | 96.16 | 86.32 | 99.91
Bare soil | 94.05 | 76.88 | 77.42 | 81.64 | 78.09 | 92.21
Flooded soil | 91.57 | 58.99 | 99.81 | 68.29 | 90.87 | 89.00
Built-up areas | 96.1 | 72.63 | 90.12 | 91.52 | 98.34 | 88.67
Water bodies | 90.48 | 83.54 | 86.03 | 82.99 | 96.92 | 100.00
Overall accuracy (%) | 87.25 | 88.69 | 91.79
Kappa | 0.87 | 0.87 | 0.90
Note: Values in bold face indicate the best result.

The classification maps produced from the unoptimized segmentations show somewhat similar results, with an obvious misclassification of the matured paddy class as built-up area. This phenomenon is more pronounced for the larger scale values (i.e., 100, 90, and 80); as the scale decreases, the degree of misclassification also decreases. In all outputs, the main water bodies are well represented, but most of the tributaries are misclassified as built-up area, intermediate paddy, or bare soil [Figs. 6(a)-6(e)], whereas at scale 50 [Fig. 6(f)] the tributaries are correctly classified as water. This indicates that using a large scale can result in grouping two or more features into one segment (undersegmentation); conversely, if the scale is too small, it leads to oversegmentation, which can equally complicate the classification process.32,33,60 The trend observed in the visual analysis is reflected in the quantitative evaluation results (Table 7).

Table 7

Classification accuracy of the unoptimized parameter combination.

Parameters (scale, shape, compactness) | 100, 0.9, 0.3 | 90, 0.7, 0.3 | 80, 0.7, 0.5 | 70, 0.7, 0.7 | 60, 0.9, 0.5 | 50, 0.9, 0.1
Class | PA (%) | UA (%) | PA (%) | UA (%) | PA (%) | UA (%) | PA (%) | UA (%) | PA (%) | UA (%) | PA (%) | UA (%)
Palm oil | 95.07 | 93.23 | 93.16 | 88.97 | 95.07 | 88.83 | 95.07 | 88.8 | 92.62 | 93.34 | 95.01 | 91.97
Initial paddy | 81.44 | 74.42 | 84.4 | 98.3 | 77.49 | 98.23 | 79.88 | 73.58 | 86.95 | 98.27 | 79.38 | 97.59
Intermediate paddy | 89.51 | 88.55 | 86.88 | 86.94 | 81.27 | 89.62 | 81.35 | 93.48 | 86.06 | 90.1 | 85.1 | 91.32
Matured paddy | 80.38 | 78.26 | 88.7 | 79.39 | 90.51 | 79.03 | 86.98 | 79.04 | 92.57 | 80.34 | 82.83 | 77.39
Grass | 88.75 | 85.89 | 80.59 | 83.34 | 79.58 | 86.72 | 79.64 | 91.49 | 88.67 | 86.94 | 86 | 92.96
Bare soil | 73.76 | 94.44 | 66.92 | 81.1 | 67.86 | 81.88 | 69.69 | 84.62 | 76.07 | 92.12 | 74.95 | 83.89
Flooded soil | 70.51 | 99.8 | 89.51 | 88.85 | 78.37 | 66.45 | 86.42 | 67.1 | 77.2 | 67.36 | 98.31 | 83.07
Built-up areas | 95.64 | 73.31 | 96.51 | 81.54 | 97.3 | 80.87 | 89.59 | 72.87 | 95.84 | 83.38 | 93.54 | 78.1
Water bodies | 89.27 | 100 | 88.45 | 94.71 | 90.46 | 86.71 | 89.27 | 91.69 | 90.91 | 91.36 | 90.57 | 87.62
Overall accuracy (%) | 86.93 | 86.16 | 85.04 | 84.44 | 88.08 | 87.06
Kappa coefficient | 0.85 | 0.84 | 0.828 | 0.822 | 0.863 | 0.8513

In Table 7, it can also be observed that the classifications based on the unoptimized segmentations perform well, with overall accuracies ranging between 84.44% and 88.08%. However, the optimized DT result outperformed them all. On this basis, the optimized DT classification output was selected as the best LULC map (Fig. 7). Statistically, the computed McNemar's (two-sided) p < 0.0001 is less than the conventional 0.05 threshold at the 95% confidence level, so the null hypothesis is rejected. Therefore, the conclusion is that there is a significant difference between the optimized and unoptimized classification processes. In general, optimization significantly improved the quality of the classification; however, careful selection of the segmentation parameter combination can also produce acceptable results, as demonstrated in Table 7. The inability to obtain sufficient independent ground data for the evaluation process may affect the result; nonetheless, the use of the base map provided and Google Earth imagery is adequate for this task. The decision tree based on DM provides a better qualitative LULC classification map with the optimized segmentation.
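For readers who wish to reproduce the significance test, the sketch below applies McNemar's test to a 2 × 2 agreement table built from the same validation objects classified by the optimized and unoptimized processes. The counts are hypothetical placeholders, and the use of statsmodels is an assumption; only the testing procedure follows the paper.

```python
# Sketch of McNemar's test comparing two classifications of the same objects.
from statsmodels.stats.contingency_tables import mcnemar

# Rows: optimized (correct, wrong); columns: unoptimized (correct, wrong).
table = [[820, 95],
         [30, 55]]

result = mcnemar(table, exact=False, correction=True)  # chi-square form of the test
print(result.statistic, result.pvalue)  # p < 0.05 -> significant accuracy difference
```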

Fig. 7

Final LULC map generated from the optimization process and DT classifier.


4. Conclusion

Advances in information and data science are rapidly unifying different disciplines in exploratory knowledge-based applications. In the field of remote sensing, there is a growing use of machine learning algorithms for complex decisions. The process translates human cognition into machine intelligence in a more sophisticated way. This study demonstrates a key application in LULC mapping. GEOBIA enables all of the rich information in the images to be mined. The combination of segmentation optimization and DM increases the accuracy and reliability of feature extraction even in a complex environment. Mapping of intraclass land-cover types, such as the different paddy growth stages, bare and wet soil, and different vegetation covers, would ordinarily be difficult without detailed information in the image to discriminate one feature from another. This does not mean that SVM and RF are unsatisfactory; they also perform well. However, they do not provide the explicit, attribute-based decision rules that the DM approach offers. In summary, the DM approach is a resourceful method for high-quality LULC mapping and will also be useful for mapping in different and more heterogeneous environments.

Acknowledgments

The SPOT-6 and RADARSAT-2 data used in this study were provided by the Malaysian Remote Sensing Agency (ARSM). Our appreciation also goes to the Department of Town and Country Planning, Kuala Lumpur, for providing the land-use map of the study area.

References

1. 

M. B. A. Gibril et al., “Fusion of RADARSAT-2 and multispectral optical remote sensing data for LULC extraction in a tropical agricultural area,” Geocarto Int., 32 735 –748 (2017). https://doi.org/10.1080/10106049.2016.1170893 Google Scholar

2. 

F. B. Sanli et al., “Evaluation of image fusion methods using PALSAR, RADARSAT-1 and SPOT images for land use/land cover classification,” J. Indian Soc. Remote Sens., 45 591 –601 (2017). https://doi.org/10.1007/s12524-016-0625-y Google Scholar

3. 

C. Hütt et al., “Best accuracy land use/land cover (LULC) classification to derive crop types using multitemporal, multisensor, and multi-polarization SAR satellite images,” Remote Sens., 8 684 (2016). https://doi.org/10.3390/rs8080684 Google Scholar

4. 

M. I. Sameen et al., “A refined classification approach by integrating Landsat operational land imager (OLI) and RADARSAT-2 imagery for land-use and land-cover mapping in a tropical area,” Int. J. Remote Sens., 37 2358 –2375 (2016). https://doi.org/10.1080/01431161.2016.1176273 IJSEDK 0143-1161 Google Scholar

5. 

S. Chauhan and H. S. Srivastava, “Comparative evaluation of the sensitivity of multi-polarised SAR and optical data for various land cover,” Int. J. Adv. Remote Sens., GIS, Geogr., 4 1 –14 (2016). Google Scholar

6. 

V. Kumar, P. Agrawal and S. Agrawal, “ALOS PALSAR and hyperion data fusion for land use land cover feature extraction,” J. Indian Soc. Remote Sens., 45 (3), 407 –416 (2017). https://doi.org/10.1007/s12524-016-0605-2 Google Scholar

7. 

R. C. Estoque, Y. Murayama and C. M. Akiyama, “Pixel-based and object-based classifications using high- and medium-spatial-resolution imageries in the urban and suburban landscapes,” Geocarto Int., 30 1113 –1129 (2015). https://doi.org/10.1080/10106049.2015.1027291 Google Scholar

8. 

E. Hussain and J. Shan, “Object-based urban land cover classification using rule inheritance over very high-resolution multisensor and multitemporal data,” GISci. Remote Sens., 53 164 –182 (2016). https://doi.org/10.1080/15481603.2015.1122923 Google Scholar

9. 

R. Momeni, P. Aplin and D. S. Boyd, “Mapping complex urban land cover from spaceborne imagery: the influence of spatial resolution, spectral band set and classification approach,” Remote Sens., 8 88 (2016). https://doi.org/10.3390/rs8020088 Google Scholar

10. 

A. Mui, Y. He and Q. Weng, “An object-based approach to delineate wetlands across landscapes of varied disturbance with high spatial resolution satellite imagery,” ISPRS J. Photogramm. Remote Sens., 109 30 –46 (2015). https://doi.org/10.1016/j.isprsjprs.2015.08.005 IRSEE9 0924-2716 Google Scholar

11. 

V. Moosavi et al., “Application of Taguchi method to satellite image fusion for object-oriented mapping of Barchan dunes,” Geosci. J., 18 45 –59 (2014). https://doi.org/10.1007/s12303-013-0044-9 GEJLD7 0252-1970 Google Scholar

12. 

H. Tong et al., “A supervised and fuzzy-based approach to determine optimal multi-resolution image segmentation parameters,” Photogramm. Eng. Remote Sens., 78 1029 –1044 (2012). https://doi.org/10.14358/PERS.78.10.1029 Google Scholar

13. 

B. M. G. Ribeiro and L. M. G. Fonseca, “Urban land cover classification using WorldView-2 images and C4.5 algorithm,” in Joint Urban Remote Sensing Event (JURSE ’11), 250 –253 (2013). Google Scholar

14. 

B. Pradhan, M. S. Tehrany and M. N. Jebur, “A new semiautomated detection mapping of flood extent from TerraSAR-X satellite image using rule-based classification and taguchi optimization techniques,” IEEE Trans. Geosci. Remote Sens., 54 4331 –4342 (2016). https://doi.org/10.1109/TGRS.2016.2539957 IGRSD2 0196-2892 Google Scholar

15. 

M. Idrees, H. Z. M. Shafri and V. Saeidi, “Imaging spectroscopy and light detection and ranging data fusion for urban feature extraction,” Am. J. Appl. Sci., 10 1575 –1585 (2013). https://doi.org/10.3844/ajassp.2013.1575.1585 Google Scholar

16. 

X. Zhang et al., “Segmentation quality evaluation using region-based precision and recall measures for remote sensing images,” ISPRS J. Photogramm. Remote Sens., 102 73 –84 (2015). https://doi.org/10.1016/j.isprsjprs.2015.01.009 IRSEE9 0924-2716 Google Scholar

17. 

B. Pradhan et al., “Data fusion technique using wavelet transform and Taguchi methods for automatic landslide detection from airborne laser scanning data and Quickbird satellite imagery,” IEEE Trans. Geosci. Remote Sens., 54 (3), 1610 –1622 (2016). https://doi.org/10.1109/TGRS.2015.2484325 IGRSD2 0196-2892 Google Scholar

18. 

B. W. Heumann, “An object-based classification of mangroves using a hybrid decision tree-support vector machine approach,” Remote Sens., 3 2440 –2460 (2011). https://doi.org/10.3390/rs3112440 Google Scholar

19. 

C. Zhang and Z. Xie, “Data fusion and classifier ensemble techniques for vegetation mapping in the coastal everglades,” Geocarto Int., 29 228 –243 (2014). https://doi.org/10.1080/10106049.2012.756940 Google Scholar

20. 

M. B. A. Gibril, H. Z. M. Shafri and A. Hamedianfar, “New semi-automated mapping of asbestos cement roofs using rule-based object-based image analysis and Taguchi optimization technique from WorldView-2 images,” Int. J. Remote Sens., 38 467 –491 (2017). https://doi.org/10.1080/01431161.2016.1266109 IJSEDK 0143-1161 Google Scholar

21. 

U. Peeroo, M. O. Idrees and V. Saeidi, “Building extraction for 3D city modelling using airborne laser scanning data and high-resolution aerial photo,” S. Afr. J. Geomatics, 6 363 –376 (2017). https://doi.org/10.4314/sajg.v6i3.7 Google Scholar

22. 

Y. Liu et al., “Discrepancy measures for selecting optimal combination of parameter values in object-based image analysis,” ISPRS J. Photogramm. Remote Sens., 68 144 –156 (2012). https://doi.org/10.1016/j.isprsjprs.2012.01.007 IRSEE9 0924-2716 Google Scholar

23. 

D. Liu and F. Xia, “Assessing object-based classification: advantages and limitations,” Remote Sens. Lett., 1 187 –194 (2010). https://doi.org/10.1080/01431161003743173 Google Scholar

24. 

H. Zhang, J. E. Fritts and S. A. Goldman, “Image segmentation evaluation: a survey of unsupervised methods,” Comput. Vision Image Understanding, 110 260 –280 (2008). https://doi.org/10.1016/j.cviu.2007.08.003 CVIUF4 1077-3142 Google Scholar

25. 

B. Johnson and Z. Xie, “Unsupervised image segmentation evaluation and refinement using a multi-scale approach,” ISPRS J. Photogramm. Remote Sens., 66 473 –483 (2011). https://doi.org/10.1016/j.isprsjprs.2011.02.006 IRSEE9 0924-2716 Google Scholar

26. 

I. Dronova et al., “Landscape analysis of wetland plant functional types: the effects of image segmentation scale, vegetation classes and classification methods,” Remote Sens. Environ., 127 357 –369 (2012). https://doi.org/10.1016/j.rse.2012.09.018 Google Scholar

27. 

Y. Gao et al., “Optimal region growing segmentation and its effect on classification accuracy,” Int. J. Remote Sens., 32 3747 –3763 (2011). https://doi.org/10.1080/01431161003777189 IJSEDK 0143-1161 Google Scholar

28. 

A. Smith, “Image segmentation scale parameter optimization and land cover classification using the random forest algorithm,” J. Spat. Sci., 55 69 –79 (2010). https://doi.org/10.1080/14498596.2010.487851 Google Scholar

29. 

C. Witharana and D. L. Civco, “Optimizing multi-resolution segmentation scale using empirical methods: exploring the sensitivity of the supervised discrepancy measure Euclidean distance 2 (ED2),” ISPRS J. Photogramm. Remote Sens., 87 108 –121 (2014). https://doi.org/10.1016/j.isprsjprs.2013.11.006 IRSEE9 0924-2716 Google Scholar

30. 

Z. Guo and S. Du, “Mining parameter information for building extraction and change detection with very high-resolution imagery and GIS data,” GISci. Remote Sens., 54 38 –63 (2017). https://doi.org/10.1080/15481603.2016.1250328 Google Scholar

31. 

A. Räsänen et al., “What makes segmentation good? A case study in boreal forest habitat mapping,” Int. J. Remote Sens., 34 8603 –8627 (2013). https://doi.org/10.1080/01431161.2013.845318 IJSEDK 0143-1161 Google Scholar

32. 

N. Clinton et al., “Accuracy assessment measures for object-based image segmentation goodness,” Photogramm. Eng. Remote Sens., 76 289 –299 (2010). https://doi.org/10.14358/PERS.76.3.289 Google Scholar

33. 

L. Draguţ, D. Tiede and S. R. Levick, “ESP: a tool to estimate scale parameter for multiresolution image segmentation of remotely sensed data,” Int. J. Geogr. Inf. Sci., 24 859 –871 (2010). https://doi.org/10.1080/13658810903174803 Google Scholar

34. 

P. R. Marpu et al., “Enhanced evaluation of image segmentation results,” J. Spat. Sci., 55 55 –68 (2010). https://doi.org/10.1080/14498596.2010.487850 Google Scholar

35. 

G. M. Espindola et al., “Parameter selection for region-growing image segmentation algorithms using spatial autocorrelation,” Int. J. Remote Sens., 27 3035 –3040 (2006). https://doi.org/10.1080/01431160600617194 IJSEDK 0143-1161 Google Scholar

36. 

M. Kim and M. Madden, “Determination of optimal scale parameters for alliance-level forest classification of multispectral IKONOS images,” in Proc. of the 1st Int. Conf. on Object-based Image Analysis (OBIA ’06), (2006). Google Scholar

37. 

T. R. Martha et al., “Segment optimization and data-driven thresholding for knowledge-based landslide detection by object-based image analysis,” IEEE Trans. Geosci. Remote Sens., 49 4928 –4943 (2011). https://doi.org/10.1109/TGRS.2011.2151866 IGRSD2 0196-2892 Google Scholar

38. 

H. Luo et al., “Development of a multi-scale object-based shadow detection method for high spatial resolution image,” Remote Sens. Lett., 6 59 –68 (2015). https://doi.org/10.1080/2150704X.2014.1001079 Google Scholar

39. 

T. Kavzoglu, M. Y. Erdemir and H. Tonbul, “A region-based multi-scale approach for object-based image analysis,” in Int. Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 241 –247 (2016). Google Scholar

40. 

V. Moosavi, A. Talebi and B. Shirmohammadi, “Producing a landslide inventory map using pixel-based and object-oriented approaches optimized by Taguchi method,” Geomorphology, 204 646 –656 (2014). https://doi.org/10.1016/j.geomorph.2013.09.012 Google Scholar

41. 

M. O. Idrees and B. Pradhan, “Hybrid Taguchi-objective function optimization approach for automatic cave bird detection from terrestrial laser scanning intensity image,” Int. J. Speleol., 45 289 –301 (2016). https://doi.org/10.5038/1827-806X ISPEAV Google Scholar

42. 

R. S. Rao et al., “The Taguchi methodology as a statistical tool for biotechnological applications: a critical appraisal,” Biotechnol. J., 3 510 –523 (2008). https://doi.org/10.1002/(ISSN)1860-7314 Google Scholar

43. 

M. H. Shojaeefard et al., “Application of Taguchi optimization technique in determining aluminum to brass friction stir welding parameters,” Mater. Des., 52 587 –592 (2013). https://doi.org/10.1016/j.matdes.2013.06.003 MADSD2 0264-1275 Google Scholar

44. 

Trimble, Trimble eCognition Developer User Guide, 1 –266 Trimble Navigation Limited, Westminster, USA (2014). Google Scholar

45. 

W. Yu et al., “A new approach for land cover classification and change analysis: integrating backdating and an object-based method,” Remote Sens. Environ., 177 37 –47 (2016). https://doi.org/10.1016/j.rse.2016.02.030 Google Scholar

46. 

M. O. Idrees and B. Pradhan, “Hybrid Taguchi-objective function optimization approach for automatic cave bird detection from terrestrial laser scanning intensity image,” Int. J. Speleol., 45 289 –301 (2016). https://doi.org/10.5038/1827-806X ISPEAV Google Scholar

47. 

G. M. Espindola et al., “Parameter selection for region-growing image segmentation algorithms using spatial autocorrelation,” Int. J. Remote Sens., 27 3035 –3040 (2006). https://doi.org/10.1080/01431160600617194 IJSEDK 0143-1161 Google Scholar

48. 

M. O. Idrees et al., “Assessing the transferability of a hybrid Taguchi-objective function method to optimize image segmentation for detecting and counting cave roosting birds using terrestrial laser scanning data,” J. Appl. Remote Sens., 10 035023 (2016). https://doi.org/10.1117/1.JRS.10.035023 Google Scholar

49. 

B. Pradhan et al., “Data Fusion technique using wavelet transform and Taguchi methods for automatic landslide detection from airborne laser scanning data and Quickbird satellite imagery,” IEEE Trans. Geosci. Remote Sensing, 54 1 –13 (2015). https://doi.org/10.1109/TGRS.2015.2484325 IGRSD2 0196-2892 Google Scholar

50. 

A. Reyes, M. Solla and H. Lorenzo, “Comparison of different object-based classifications in LandsatTM images for the analysis of heterogeneous landscapes,” Meas. J. Int. Meas. Confed., 97 29 –37 (2017). https://doi.org/10.1016/j.measurement.2016.11.012 Google Scholar

51. 

M. Belgiu and L. Drăgu, “Random forest in remote sensing: a review of applications and future directions,” ISPRS J. Photogramm. Remote Sens., 114 24 –31 (2016). https://doi.org/10.1016/j.isprsjprs.2016.01.011 IRSEE9 0924-2716 Google Scholar

52. 

F. J. Aguilar et al., “A quantitative assessment of forest cover change in the Moulouya River watershed (Morocco) by the integration of a subpixel-based and object-based analysis of Landsat data,” Forests, 7 23 (2016). https://doi.org/10.3390/f7010023 FOPEA4 Google Scholar

53. 

M. P. dos Santos Silva et al., “Remote-sensing image mining: detecting agents of land-use change in tropical forest areas,” Int. J. Remote Sens., 29 4803 –4822 (2008). https://doi.org/10.1080/01431160801950634 IJSEDK 0143-1161 Google Scholar

54. 

P. Thamilselvana and J. G. R. Sathiaseelan, “A comparative study of data mining algorithms for image classification,” Int. J. Educ. Manage. Eng., 5 1 –9 (2015). https://doi.org/10.5815/ijeme Google Scholar

55. 

M. A. Vieira et al., “Object based image analysis and data mining applied to a remotely sensed Landsat time-series to map sugarcane over large areas,” Remote Sens. Environ., 123 553 –562 (2012). https://doi.org/10.1016/j.rse.2012.04.011 Google Scholar

56. 

B. Salehi et al., “Object-based classification of urban areas using VHR imagery and height points ancillary data,” Remote Sens., 4 2256 –2276 (2012). https://doi.org/10.3390/rs4082256 Google Scholar

57. 

J. Nichol and M. S. Wong, “Habitat mapping in rugged terrain using multispectral ikonos images,” Photogramm. Eng. Remote Sens., 74 1325 –1334 (2008). https://doi.org/10.14358/PERS.74.11.1325 Google Scholar

58. 

G. M. Foody, “Thematic map comparison: evaluating the statistical significance of differences in classification accuracy,” Photogramm. Eng. Remote Sens., 70 627 –633 (2004). https://doi.org/10.14358/PERS.70.5.627 Google Scholar

59. 

M. B. A. Gibril et al., “Fusion of RADARSAT-2 and multispectral optical remote sensing data for LULC extraction in a tropical agricultural area,” Geocarto Int., 32 (7), 735 –748 (2016). https://doi.org/10.1080/10106049.2016.1170893 Google Scholar

60. 

T. R. Martha et al., “Segment optimization and data-driven thresholding for knowledge-based landslide detection by object-based image analysis,” IEEE Trans. Geosci. Remote Sens., 49 4928 –4943 (2011). https://doi.org/10.1109/TGRS.2011.2151866 IGRSD2 0196-2892 Google Scholar

Biography

Mohamed Barakat A. Gibril graduated with a first-class honors degree in surveying (geodesy) from Sudan University of Science and Technology, Khartoum, Sudan, in 2010. He completed his master’s degree in remote sensing and GIS from Universiti Putra Malaysia in 2015. He is currently a lecturer at the University of Prince Mugrin, Madinah, Saudi Arabia. His research focuses on satellite image analysis, data fusion, urban mapping from very high-resolution satellite imageries, data mining, and geographic object-based image analysis.

Mohammed Oludare Idrees graduated with distinction in surveying and geoinformatics from Federal Polytechnic, Ado-Ekiti, Nigeria. He obtained his master’s degree in remote sensing and GIS and PhD degree in GIS and geomatic engineering from Universiti Putra Malaysia in 2013 and 2017, respectively. He has over 5 years of industrial and more than 3 years of teaching experience. He has published over 25 research papers in refereed technical journals. His research interests are satellite image analysis and spatial modeling.

Kouame Yao obtained his bachelor’s degree in computing in 2009 from UCSI University, Malaysia and a master’s degree in information technology management in 2011 from Staffordshire, United Kingdom. Inspired by the growing application of information technology in geosciences, he obtained two master’s degrees in remote sensing and GIS from Universiti Putra Malaysia and geosciences from Macquarie University, Australia, in 2015 and 2018, respectively. His research interest is in application of remote sensing in mineral exploration.

Helmi Zulhaidi Mohd Shafri graduated with a first-class honors degree in surveying from RMIT University, Melbourne, Australia, in 1998. He completed his PhD degree in remote sensing from the University of Nottingham, United Kingdom, in 2003. Now, he is the coordinator of the remote sensing and GIS program at the Faculty of Engineering, UPM. He is actively involved in research related to algorithm development and new applications of remote sensing especially in urban engineering and environmental-informatics areas. He has more than 12 years of teaching, research, administrative, and consultancy experience with more than 80 papers in refereed technical journals.

© 2018 Society of Photo-Optical Instrumentation Engineers (SPIE) 1931-3195/2018/$25.00
Mohamed Barakat A. Gibril, Mohammed Oludare Idrees, Helmi Zulhaidi Mohd Shafri, and Kouame Yao "Integrative image segmentation optimization and machine learning approach for high quality land-use and land-cover mapping using multisource remote sensing data," Journal of Applied Remote Sensing 12(1), 016036 (9 March 2018). https://doi.org/10.1117/1.JRS.12.016036
Received: 10 October 2017; Accepted: 16 February 2018; Published: 9 March 2018
Keywords: image segmentation, image quality, image processing, image classification, remote sensing, vegetation, associative arrays