Article

Pansharpening Using Guided Filtering to Improve the Spatial Clarity of VHR Satellite Imagery

1 Department of Civil Engineering, Chungbuk National University, Chungdae-ro 1, Seowon-Gu, Cheongju, Chungbuk 28644, Korea
2 Korea Aerospace Research Institute, Gwahak-ro, Yuseong-Gu, Daejeon 34133, Korea
* Author to whom correspondence should be addressed.
Submission received: 18 February 2019 / Revised: 10 March 2019 / Accepted: 12 March 2019 / Published: 15 March 2019
(This article belongs to the Special Issue Multispectral Image Acquisition, Processing and Analysis)

Abstract:
Pansharpening algorithms are designed to enhance the spatial resolution of multispectral images using panchromatic images with high spatial resolutions. Panchromatic and multispectral images acquired from very high resolution (VHR) satellite sensors used as input data in the pansharpening process are characterized by spatial dissimilarities due to differences in their spectral/spatial characteristics and time lags between panchromatic and multispectral sensors. In this manuscript, a new pansharpening framework is proposed to improve the spatial clarity of VHR satellite imagery. This algorithm aims to remove the spatial dissimilarity between panchromatic and multispectral images using guided filtering (GF) and to generate the optimal local injection gains for pansharpening. First, we generate optimal multispectral images with spatial characteristics similar to those of panchromatic images using GF. Then, multiresolution analysis (MRA)-based pansharpening is applied using normalized difference vegetation index (NDVI)-based optimal injection gains and spatial details obtained through GF. The algorithm is applied to Korea multipurpose satellite (KOMPSAT)-3/3A satellite sensor data, and the experimental results show that the pansharpened images obtained with the proposed algorithm exhibit a superior spatial quality and preserve spectral information better than those based on existing algorithms.

1. Introduction

Very high resolution (VHR) satellite sensors, such as WorldView-3, Pléiades, and the Korea multipurpose satellite (KOMPSAT)-3/3A, provide panchromatic images with high spatial resolutions and multispectral images with low spatial resolutions. Generally, pansharpening is a methodology used to sharpen the spatial resolution or clarity of a multispectral image by adding spatial details from panchromatic images with high spatial resolutions [1]. Over approximately two decades of research, various pansharpening techniques have been proposed to extract spatial details from panchromatic images and then add those details through global/local methods [2,3,4,5]. An additional technique for enhancing image spatial resolution is hypersharpening, which is defined as enhancing the spatial resolution of a hyperspectral image by using a multispectral or panchromatic image with a high spatial resolution [6,7]. General pansharpening algorithms have been classified into component substitution (CS)-based and multiresolution analysis (MRA)-based methods depending on how the spatial details are generated [2]. CS-based algorithms generate pansharpened images by adding spatial details based on high-frequency information from panchromatic images with a high spatial resolution and synthetic intensity images with a low spatial resolution [8,9,10]. CS-based methods have the advantage of enhancing the spatial clarity of pansharpened images as the effects of aliasing, artifacts, and texture blurring are minimized during the pansharpening process [4]. The generalized intensity-hue-saturation (GIHS), Gram–Schmidt (GS), GS adaptive (GSA), and band-dependent spatial detail (BDSD) methods are representative CS-based pansharpening techniques [2,11,12]. Additionally, hybrid algorithms, such as partial replacement adaptive component substitution (PRACS) and generalized BDSD algorithms, have been developed in addition to various CS algorithms using global and local injection gains [13,14,15,16].
Alternatively, MRA-based pansharpening techniques generate pansharpened images using the differences in spatial characteristics between a panchromatic image with a high spatial resolution and a spatially degraded panchromatic image [2]. MRA-based algorithms, such as wavelet-based methods, high-pass filtering (HPF), generalized Laplacian pyramids with modulation transfer function (MTF)-matched filtering (MTF-GLP), and MTF-based algorithms using spatial principal component analysis (SPCA), efficiently preserve the spectral information of the original multispectral image [2,17,18,19,20,21]. However, some artifacts and texture blurring can occur in pansharpened images when applying MRA-based algorithms by utilizing the spatial dissimilarity between panchromatic and multispectral images [5,13]. Consequently, some studies have developed pansharpening algorithms to enhance the spatial clarity of pansharpened images based on MRA-based methods [22,23,24,25]. Nevertheless, most researchers have developed various pansharpening algorithms based on either CS or MRA aimed at generating multispectral images with a spatial resolution similar to that of a panchromatic image while preserving the spectral information of the former [2,18].
Various pansharpening algorithms have been proposed to solve the spectral distortion issues common among such techniques. Xu et al. [26] performed pansharpening to reduce the spectral distortion of pansharpened images by dividing panchromatic and multispectral images into several classes using the K-means algorithm and multiple regression equations. Restaino et al. [27] proposed a method for extracting synthetic panchromatic images by applying morphological operators in MRA-based fusion techniques, improving the spatial resolution compared to traditional MRA-based techniques. Li et al. [28] proposed a segmentation-based pansharpening method for minimizing spectral distortion and increasing the sharpness of pansharpened images between vegetation and non-vegetation objects. Wang et al. [29] developed a new pansharpening model based on global and nonlocal spatial similarity regularizers to minimize local dissimilarities. Moreover, a pansharpening algorithm for preserving changes in vegetation cover was also proposed [30]. Furthermore, various injection gains using global, local, moving window, and segmentation methods have been applied to various pansharpening algorithms. Accordingly, a segmentation method was proposed by evaluating the time and accuracy associated with the calculation of injection gains [31]. Choi et al. [5] proposed a new hybrid pansharpening algorithm using local injection gains based on the normalized difference vegetation index (NDVI) to reduce computational costs.
Additionally, various pansharpening techniques based on deep learning techniques have been developed. Yang et al. [32] proposed PanNet, which is a deep learning architecture for solving the pansharpening problem associated with spectral and spatial preservation. Masi et al. [33] used a convolutional neural network composed of a three-layer architecture that includes several nonlinear spectral indices for pansharpening. Moreover, a learning method was developed for an efficient convolutional neural network by using a dilated multilevel block and deep residual network [34,35].
Recently, guided filtering (GF) has been applied to generate spatial details and injection gains during the pansharpening process. In the improved adaptive intensity-hue-saturation (IAIHS) fusion algorithm, GF was used to compute the optimal weight of pansharpening [36]. Zheng et al. [37] utilized GF to properly add spatial details to imagery from the GaoFen-2 high-resolution imaging satellite. Liu and Liang [38] developed a pansharpening algorithm using GF to extract the missing spatial details of multispectral images by minimizing the difference between a panchromatic image and the filtered output image. Additionally, GF based on three-layer decomposition was utilized in a pansharpening algorithm to efficiently extract spatial details from high-spatial-resolution images [39]. In the abovementioned algorithms, multispectral images with a low spatial resolution are used as guidance images to optimize panchromatic images with a low spatial resolution; however, due to the time lag between panchromatic and multispectral image sensors, spatial dissimilarity can occur between panchromatic and multispectral images. Although these issues are important for improving the spatial clarity of pansharpened images, most methodologies have focused on minimizing spectral distortion rather than solving these problems.
Therefore, in this manuscript, we minimize the spatial dissimilarity between panchromatic and multispectral images and optimize the spatial clarity. In the proposed algorithm, GF is used to generate optimal multispectral images for pansharpening, in contrast to conventional GF-based pansharpening algorithms that use GF to extract spatial details. Additionally, the optimal panchromatic image possessing spatial characteristics similar to those of the multispectral image regenerated by GF is used to maximize the spatial details for pansharpening. Finally, during pansharpening, we modify the methodology for determining the NDVI-based optimal injection gains based on previous works [4,5]. In particular, the injection gains are optimized by a sigmoid function based on the characteristics of the NDVI, which exhibits a spectral pattern similar to general local injection gains, for KOMPSAT-3A. The proposed algorithm is then applied to satellite image products of KOMPSAT-3A to evaluate its performance on pansharpened products of full scenes. The new pansharpening algorithm based on GF and the modification of local injection gains are proposed in Section 2. In Section 3, the study area and materials are described. Section 4 and Section 5 provide an analysis and discussion of the experimental results based on a comparison of the quantitative and qualitative qualities of the pansharpened images obtained with our algorithm versus those obtained from existing state-of-the-art algorithms. Conclusions are presented in Section 6.

2. Guided Filtering (GF)-Based Pansharpening Algorithm

Recently, various studies of pansharpening algorithms using GF have been conducted. In this manuscript, we aim to generate an optimal multispectral image using GF, while most existing algorithms use GF to extract spatial details. The details of the proposed algorithm are shown in Figure 1.

2.1. Guided Filtering

Generally, GF algorithms are among the most effective at removing noise from digital images while preserving edge information. The GF output image is obtained through a local linear model relating the filter output image $Q$ and the guidance image $Y$ in a local window $\omega_k$, as shown in Equation (1) [40,41]:

$$Q_i = a_k Y_i + b_k, \quad i \in \omega_k \qquad (1)$$

where $Q$ is modeled as a local linear transform of the guidance image $Y$ that approximates the filter input image $X$ with unwanted texture or noise removed. The linear coefficients of Equation (1) are determined by minimizing the squared difference $E$ between the filter output image $Q$ and the filter input image $X$, with a local ridge regularization parameter $\varepsilon$, as in Equation (2):

$$E(a_k, b_k) = \sum_{i \in \omega_k} \left( (a_k Y_i + b_k - X_i)^2 + \varepsilon a_k^2 \right) \qquad (2)$$

Therefore, the linear coefficients $a_k$ and $b_k$ of Equation (2) are determined by linear ridge regression according to Equations (3) and (4):

$$a_k = \frac{\frac{1}{|\omega|} \sum_{i \in \omega_k} Y_i X_i - \mu_k \bar{X}_k}{\sigma_k^2 + \varepsilon} \qquad (3)$$

$$b_k = \bar{X}_k - a_k \mu_k, \quad \bar{X}_k = \frac{1}{|\omega|} \sum_{i \in \omega_k} X_i \qquad (4)$$

where $\mu_k$ and $\sigma_k^2$ are the mean and variance of $Y$ in $\omega_k$, $\bar{X}_k$ is the mean of $X$ in $\omega_k$, and $\varepsilon$ is a GF regularization parameter [40]. After determining the GF coefficients using Equations (3) and (4), the filter output image is defined by Equation (5), a reformulation of Equation (1):

$$Q_i = \frac{1}{|\omega|} \sum_{k:\, i \in \omega_k} (a_k Y_i + b_k) = \bar{a}_i Y_i + \bar{b}_i \qquad (5)$$

where $\bar{a}_i = \frac{1}{|\omega|} \sum_{k \in \omega_i} a_k$, $\bar{b}_i = \frac{1}{|\omega|} \sum_{k \in \omega_i} b_k$, and $|\omega|$ is the number of pixels in the local window. For convenience, in this manuscript, the GF output image $Q$ is abbreviated as in Equation (6), in terms of the filter input image $X$, guidance image $Y$, window size $\omega$, and regularization parameter $\varepsilon$ [42]:

$$Q = GF_{\omega, \varepsilon}(X, Y) \qquad (6)$$
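The derivation above maps directly to a box-filter implementation. The following is a minimal sketch of Equations (1)–(6) for single-band float images; the edge-replication boundary handling and the cumulative-sum box mean are implementation assumptions, as the paper does not specify them.

```python
import numpy as np

def box_mean(A, r):
    """Mean over a (2r+1)x(2r+1) window, with edge replication at the borders."""
    n = 2 * r + 1
    P = np.pad(A, r, mode='edge')
    S = np.cumsum(np.cumsum(P, axis=0), axis=1)
    S = np.pad(S, ((1, 0), (1, 0)))  # leading zero row/column for window sums
    return (S[n:, n:] - S[:-n, n:] - S[n:, :-n] + S[:-n, :-n]) / (n * n)

def guided_filter(X, Y, omega=2, eps=0.1):
    """Q = GF_{omega,eps}(X, Y) of Eqs. (1)-(6): X filter input, Y guidance."""
    mu = box_mean(Y, omega)                      # mu_k: mean of Y in w_k
    Xbar = box_mean(X, omega)                    # Xbar_k: mean of X in w_k
    var = box_mean(Y * Y, omega) - mu ** 2       # sigma_k^2: variance of Y in w_k
    cov = box_mean(Y * X, omega) - mu * Xbar     # (1/|w|) sum Y_i X_i - mu_k Xbar_k
    a = cov / (var + eps)                        # Eq. (3): linear ridge regression
    b = Xbar - a * mu                            # Eq. (4)
    # Eq. (5): average the per-window coefficients over all windows covering i
    return box_mean(a, omega) * Y + box_mean(b, omega)
```

Here $\omega$ is interpreted as the window radius, so $|\omega| = (2\omega + 1)^2$; with a resized multispectral band as $X$ and the panchromatic image as $Y$, this corresponds to the $GF_{2, 0.1}(\widetilde{MS}_k, P)$ usage of Section 2.2.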

2.2. GF-Based Pansharpening Algorithm

In general, the MRA-based pansharpened image $\widehat{PS}_k$ of the kth band can be determined using Equation (7):

$$\widehat{PS}_k = \widetilde{MS}_k + g_k (P - P^l), \quad k = 1, \ldots, N \qquad (7)$$

where $\widetilde{MS}_k$ is the resized multispectral image of the kth band, $g_k$ denotes the injection gains of the pansharpening algorithm, and $P^l$ is a panchromatic image with a low spatial resolution. As noted in the previous section, CS-based pansharpening algorithms generate $P^l$ using linear combinations based on weight parameters and the relationship between the panchromatic and multispectral images. Alternatively, MRA-based algorithms obtain $P^l$ by degrading the panchromatic image using various image degradation methods.

2.2.1. Generation of an Optimal Multispectral Image with a Low Spatial Resolution Based on a Pansharpening Framework

In Equation (7), the spatial details are determined by the difference between $P$ and $P^l$, which has a low spatial resolution. In most GF-based pansharpening algorithms, the filter output image $Q_k$ of the kth band is applied as the synthetic intensity image of the pansharpening process to inject the optimal spatial details of the panchromatic image into each multispectral band [36,37,38]. These methods are similar to the MRA method insomuch as they generate an optimal panchromatic image with a low spatial resolution. However, some pansharpened images do not have abundant and clear spatial details, either because these details cannot be effectively injected into each band or because multispectral images with a lower spatial clarity than the original panchromatic image are employed. Figure 2a,b show the spatial characteristics of the target, which is composed of black and white tarps, acquired from panchromatic and resized KOMPSAT-3A multispectral images. In Figure 2b, the resized multispectral image is generated by cubic interpolation. As shown in Figure 2a,b, although the boundaries between the black and white tarps are very clear in the panchromatic image, various aliasing and noise sources around the boundaries of the tarps are included in the resized multispectral image. Although general pansharpening algorithms attempt to inject the spatial details of the panchromatic image into the multispectral bands, these approaches also attempt to preserve the spectral information of the multispectral image as much as possible. Thus, the differences in spatial details between the panchromatic and resized multispectral images can reduce the spatial clarity of pansharpened images. However, when GF is applied to a resized multispectral image, it is possible to generate an optimal multispectral image by removing these spatial characteristics.
To generate filtered output images, previous studies of GF used multispectral images as guidance images and panchromatic images as input images. The GF technique uses a guidance image to generate an output image with noise-removed spectral characteristics similar to those of the input image. In this process, the output image has spatial characteristics similar to those of the guidance image. Therefore, when a multispectral image with a low spatial resolution is used as a guidance image, a panchromatic image with a low spatial resolution is generated. However, in this study, to remedy the degradation of the sharpness of the pansharpened image that occurs when the spatial characteristics of multispectral and panchromatic images are dissimilar, the original panchromatic image is used as a guidance image to generate multispectral images with spatial characteristics similar to those of the panchromatic image. Therefore, GF is applied to each band of the resized multispectral image using a panchromatic image as a guidance image. Then, the noise in the multispectral image regenerated by GF is removed using an MTF-matched filter. Figure 2c shows a multispectral image obtained by GF according to a resized multispectral image and a panchromatic image as a guidance image.
As shown in Figure 2c, the boundaries of the tarps and the edges of each object in the multispectral image generated by GF are clearer than those in the resized original multispectral image. This means that the optimal resized multispectral image for pansharpening can be generated using GF. When applying GF, the window size ω and regularization parameter ε are set to 2 and 0.1, respectively, in reference to the results of Zheng et al. [37] related to pansharpening. In the proposed algorithm, pansharpening is performed by utilizing the filter output image obtained by GF as a multispectral image. Therefore, in the proposed algorithm, the general pansharpening framework in Equation (7) is revised to that in Equation (8):
$$\widehat{PS}_k = \widetilde{MS}_{GF,k} + g_k (P - P^l), \quad k = 1, \ldots, N \qquad (8)$$

where $\widetilde{MS}_{GF,k} = GF_{2, 0.1}(\widetilde{MS}_k, P)$ is the optimal resized multispectral image of the kth band obtained by GF. To generate $\widetilde{MS}_{GF,k}$, the original panchromatic image $P$ is used as the guidance image of GF, and the filter input image is set as the resized multispectral image $\widetilde{MS}_k$.

2.2.2. Local Injection Gains Based on a Sigmoid Function

The injection gain $g_k$ is also an important factor in determining the pansharpening quality. Therefore, in this manuscript, the local injection gains are determined by modifying the NDVI-based local injection gains of [5]. Xu et al. [26] indicated that the local injection gains of vegetated areas differ from those of non-vegetated areas. Additionally, Choi et al. [5] demonstrated that the spectral NDVI pattern exhibits a moderate-to-high correlation with local injection gains based on moving windows. The local injection gain $g_k$ based on the NDVI is determined by Equations (9) and (10) [5]:

$$g_k = \left\{ (-1)^a \times NDVI + \overline{NDVI} \right\} \times \frac{\sigma(\widetilde{MS}_k)}{\sigma(I_L)} \times (C_k)^3 \qquad (9)$$

$$a = \begin{cases} 1, & \text{if } \mathrm{corr}(\widetilde{MS}_k, NDVI) < 0 \\ 0, & \text{if } \mathrm{corr}(\widetilde{MS}_k, NDVI) > 0 \end{cases} \qquad (10)$$

where $\sigma(A)$ is the standard deviation of $A$ and $C_k$ is the high-frequency correlation value obtained by Laplacian filtering between $\widetilde{MS}_k$ and $I_L$. In Equation (9), $g_k$ is determined under the assumption that the NDVI and the local injection gains are moderately or highly correlated; accordingly, the mean NDVI value is substituted as the global injection gain. However, in Equation (9), $C_k$, which is obtained by spatial correlation, might be underestimated due to spatial dissimilarity or misalignment between $\widetilde{MS}_{GF,k}$ and $I_L$. If $C_k$ is underestimated during pansharpening, the spatial details of the pansharpened image might remain similar to those of the original multispectral image. Therefore, in this manuscript, we replace $C_k$ with the maximum correlation value associated with spatial or spectral information, defined through Equations (11)–(13):
$$C_{max,k} = \max \left\{ C_{spectral,k}, \; C_{spatial,k} \right\} \qquad (11)$$

$$C_{spectral,k} = \frac{\mathrm{cov}(\widetilde{MS}_{GF,k}, I_L)}{\sigma_{\widetilde{MS}_{GF,k}} \, \sigma_{I_L}} \qquad (12)$$

$$C_{spatial,k} = \frac{\mathrm{cov}\left(HPF(\widetilde{MS}_{GF,k}), HPF(I_L)\right)}{\sigma_{HPF(\widetilde{MS}_{GF,k})} \, \sigma_{HPF(I_L)}} \qquad (13)$$

where $\mathrm{cov}(A, B)$ is the covariance between $A$ and $B$, $HPF(A)$ is the high-pass-filtered image of $A$ obtained using the Laplacian filter, and $I_L$ is the synthetic intensity image generated by linear multiple regression between $\widetilde{MS}_{GF}$ and $P$.
Moreover, $g_k$ from Equation (9) might be underestimated if the NDVI values of the image have a large dynamic range. In such cases, some areas of $g_k$ take values that are negative or close to zero, and the spatial details may not be injected correctly in Equation (8). Conversely, overestimation could occur due to NDVI outliers or noise, causing the excessive injection of spatial details and producing spectral distortions. Therefore, we reformulate Equation (9) using a sigmoid function, as described by Equations (14)–(16):

$$g_k = \left( \frac{1}{1 + e^{3\left\{(-1)^a \times NDVI + \overline{NDVI}\right\}}} + 0.5 \right) g_G \qquad (14)$$

$$g_G = \frac{\sigma(\widetilde{MS}_{GF,k})}{\sigma(I_L)} \times (C_{max,k})^3 \qquad (15)$$

$$a = \begin{cases} 1, & \text{if } \mathrm{corr}(\widetilde{MS}_{GF,k}, NDVI) < 0 \\ 0, & \text{if } \mathrm{corr}(\widetilde{MS}_{GF,k}, NDVI) > 0 \end{cases} \qquad (16)$$

By using the sigmoid function in Equation (14), the spatial clarity in non-vegetated areas can be increased by setting a high $g_k$ value, while in vegetated areas, spectral distortion is minimized by adjusting the value of $g_k$. Additionally, the underestimation of $g_k$ in some regions is minimized by adjusting the parameters of the sigmoid function.
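A per-band sketch of the gain computation of Equations (11)–(16) follows. The Laplacian high-pass filter is passed in as a callable, and the Pearson correlation is computed globally over each image; the function name, the callable interface, and the global (rather than windowed) correlations are assumptions of this sketch rather than the paper's exact implementation.

```python
import numpy as np

def injection_gains(MS_gf_k, I_L, NDVI, hpf):
    """Local injection gains g_k of Eqs. (11)-(16) for one band.

    MS_gf_k : GF-optimized, resized multispectral band
    I_L     : synthetic intensity image (regression of the MS bands on P)
    NDVI    : normalized difference vegetation index image
    hpf     : callable applying a Laplacian high-pass filter
    """
    def corr(A, B):
        # global Pearson correlation coefficient between two images
        return np.corrcoef(A.ravel(), B.ravel())[0, 1]

    # Eqs. (11)-(13): take the larger of the spectral and spatial correlations
    C_max = max(corr(MS_gf_k, I_L), corr(hpf(MS_gf_k), hpf(I_L)))

    # Eq. (15): global gain from the standard-deviation ratio and C_max cubed
    g_G = MS_gf_k.std() / I_L.std() * C_max ** 3

    # Eq. (16): sign selector from the band-NDVI correlation
    a = 1 if corr(MS_gf_k, NDVI) < 0 else 0

    # Eq. (14): sigmoid of the (possibly sign-flipped) NDVI plus its mean,
    # giving gains in the range (0.5, 1.5) times g_G
    s = (-1.0) ** a * NDVI + NDVI.mean()
    return (1.0 / (1.0 + np.exp(3.0 * s)) + 0.5) * g_G
```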

2.2.3. Extracting Spatial Details for Pansharpening

In this manuscript, we use a multispectral image based on GF. Therefore, the effect of GF must also be reflected when extracting spatial details from the original panchromatic image. Furthermore, by adjusting the panchromatic image, we avoid producing a pansharpened image whose spatial clarity is not effectively increased. First, a panchromatic image $P_h$ with increased spatial clarity is generated using the injection gains and the MTF-matched filter, as shown in Equations (17) and (18). In Equation (17), the constant value 0.5 was determined to be optimal through trial-and-error experiments on 25 full-scene products of KOMPSAT-3A:

$$P_h = P + \frac{1}{2} g'_k (P - P_{MTF}) \qquad (17)$$

$$g'_k = \frac{1}{1 + e^{3\left\{(-1)^a \times NDVI + \overline{NDVI}\right\}}} + 0.5 \qquad (18)$$

where $P$ is the original panchromatic image, $P_{MTF}$ is the MTF-filtered image of $P$, and $g'_k = g_k / g_G$ is the sigmoid factor of Equation (14). In Equation (17), $P - P_{MTF}$ can be interpreted as the initial spatial details, and $g'_k$ adjusts these details to ensure that excessive sharpening, which would result in spectral distortion during the pansharpening process, does not occur in vegetated areas. However, as $P_{MTF}$ in Equation (17) is only a filtered image and has not been subjected to the image downscaling and upsampling processes, we generate a synthetic panchromatic image, which can be used as $P^l$ in Equation (8), to extract the spatial details in the pansharpening process. Specifically, since the multispectral images in this manuscript were generated using GF, we perform pansharpening by generating a synthetic panchromatic image $P^l_{GF}$ with characteristics similar to those of $\widetilde{MS}_{GF,k}$. For this purpose, a low-resolution panchromatic image $\tilde{P}^l$ is generated by downscaling and then upsampling the MTF-filtered image $P_{MTF}$. Then, $P^l_{GF}$ is generated by applying GF with $\tilde{P}^l$ as the input image and $P$ as the guidance image. As $P^l_{GF}$ is generated from the original panchromatic image, the proposed pansharpening algorithm can be classified as an MRA-based algorithm. However, through the extraction of spatial details by $P_h$ and $P^l_{GF}$, the spatial clarity of pansharpened images can be increased more efficiently than with traditional CS- and MRA-based algorithms. In the case of non-vegetated areas, the spatial details obtained by the proposed method are clearer than those produced by traditional MRA-based algorithms, while the spatial details in vegetated areas produced by the existing and proposed algorithms are similar. Therefore, the spatial details extracted by the proposed algorithm can minimize spectral distortion in vegetated areas and effectively improve the spatial clarity.
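As a concrete illustration of the intermediate image $\tilde{P}^l$, the sketch below downscales and then upsamples an MTF-filtered panchromatic image. The ratio $R = 4$, the block-mean decimation, and the nearest-neighbor upsampling are simplifying assumptions (the paper does not specify its resampling kernels); applying GF with the original $P$ as the guidance image to this result would then yield $P^l_{GF}$.

```python
import numpy as np

def low_res_pan(P_mtf, R=4):
    """Sketch: generate the low-resolution panchromatic image P~l by
    downscaling and then upsampling the MTF-filtered image P_MTF.
    Block-mean decimation and nearest-neighbor upsampling stand in
    for the paper's (unspecified) resampling kernels."""
    H, W = P_mtf.shape
    assert H % R == 0 and W % R == 0, "sketch assumes dimensions divisible by R"
    # downscale: average each R x R block down to the multispectral grid
    low = P_mtf.reshape(H // R, R, W // R, R).mean(axis=(1, 3))
    # upsample back to the panchromatic grid
    return np.repeat(np.repeat(low, R, axis=0), R, axis=1)
```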
Finally, the proposed pansharpening algorithm can be defined as shown in Equation (19) by modifying Equation (8):

$$\widehat{PS}_k = \widetilde{MS}_{GF,k} + g_k (P_h - P^l_{GF}) = \widetilde{MS}_{GF,k} + g_k (P - P^l_{GF}) + \frac{g_k^2}{2 g_G} (P - P_{MTF}), \quad k = 1, \ldots, N \qquad (19)$$
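The fusion rule of Equation (19) reduces to a per-band combination of precomputed images. The sketch below assumes the inputs (the GF-optimized band, the synthetic low-resolution panchromatic image, the MTF-filtered panchromatic image, and the gains) have already been generated; the function and argument names are illustrative.

```python
import numpy as np

def fuse_band(MS_gf_k, P, P_gf_low, P_mtf, g_k, g_G):
    """Pansharpened band of Eq. (19).

    MS_gf_k  : GF-optimized, resized multispectral band (Eq. (8))
    P        : original panchromatic image
    P_gf_low : synthetic low-resolution panchromatic image P^l_GF
    P_mtf    : MTF-matched filtered panchromatic image
    g_k, g_G : local injection gains (Eq. (14)) and global gain (Eq. (15))
    """
    # g_k (P - P^l_GF): the main MRA spatial-detail injection
    detail = g_k * (P - P_gf_low)
    # g_k^2 / (2 g_G) (P - P_MTF): the extra clarity term contributed by P_h
    boost = g_k ** 2 / (2.0 * g_G) * (P - P_mtf)
    return MS_gf_k + detail + boost
```

When the panchromatic image carries no extra detail (all three panchromatic inputs identical), both terms vanish and the band is returned unchanged, which is a quick sanity check on the sign conventions.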

3. Materials

The proposed pansharpening algorithm was applied to satellite imagery acquired by KOMPSAT-3A, which was launched by the Korea Aerospace Research Institute (KARI) on 26 March 2015. The specifications and characteristics of KOMPSAT-3A are described in Table 1.
In the experiment, two study areas, including targets based on tarps, were selected. The first site was located in the Salon region of France, and the second site was located in the Baotou region of China. The Salon region of France is complex, as it includes both urban and vegetated areas, whereas the Baotou region is composed primarily of cropland and natural terrain. In particular, all the satellite images used in the experiment were L1R products that were radiometrically corrected and covered full scenes. Table 2 describes the characteristics of the satellite images over the two sites, and Figure 3 illustrates the two study areas.

4. Experimental Results

4.1. Quality Assessment of Pansharpened Images

Various quality assessment methods and quality indices have been proposed to estimate the spectral and spatial quality of pansharpened images. Ideally, a multispectral image with a high spatial resolution would serve as a reference; however, such images are often unavailable. Therefore, many studies have proposed quality assessment protocols to address this problem. Such methods can be roughly divided into synthesis property, consistency property, and quality with no reference (QNR) protocols [17,43]. To utilize the original multispectral image as a reference, the synthesis property method generates a pansharpened image after downgrading the original multispectral and panchromatic images. Since the generated pansharpened image has the same spatial resolution as the original multispectral image, the two can be quantitatively compared under the synthesis property. In the consistency property protocol, the pansharpened image is generated using the original multispectral and panchromatic images and is then spatially downgraded for comparison with the original multispectral image. The QNR protocol can be applied to a pansharpened image generated by the synthesis and consistency property methods, and the results are evaluated based on the relative similarity between the pansharpened and original multispectral images. Palsson et al. [17] showed that the consistency approach is the most reasonable evaluation method, although it exhibits a tendency similar to that of the synthesis property. The QNR protocols cannot efficiently reflect the spatial and spectral quality of pansharpened images. Therefore, this manuscript performs evaluations using the consistency and synthesis properties.
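As a minimal illustration of the consistency check described above, the sketch below degrades a pansharpened band back to the multispectral scale and measures its deviation from the original band. The ratio $R = 4$, the block-mean degradation, and the mean-absolute-deviation score are simplifying assumptions, not the protocol's prescribed MTF-matched filtering or quality indices.

```python
import numpy as np

def degrade(img, R=4):
    """Spatially degrade an image by block-averaging with ratio R
    (a simple stand-in for MTF-matched filtering plus decimation)."""
    H, W = img.shape
    H, W = H - H % R, W - W % R                   # crop to a multiple of R
    return img[:H, :W].reshape(H // R, R, W // R, R).mean(axis=(1, 3))

def consistency_check(PS, MS, R=4):
    """Consistency property: degrade the pansharpened band PS back to the
    multispectral scale and compare it with the original band MS."""
    PS_low = degrade(PS, R)
    return float(np.abs(PS_low - MS).mean())      # mean absolute deviation
```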

4.2. Quality Indices for Estimating the Quality of a Pansharpened Image

To evaluate the spectral and spatial quality of the pansharpened images using the consistency property approach, several quality indices—namely, the erreur relative globale adimensionnelle de synthèse (ERGAS), the spectral angle mapper (SAM), the universal image quality index (UIQI), and the correlation coefficient (CC) [2,17,44,45]—were employed. ERGAS estimates the global spectral/spatial error of pansharpened images using Equation (20) [2]:

$$\mathrm{ERGAS} = \frac{100}{R} \sqrt{\frac{1}{N} \sum_{k=1}^{N} \left( \frac{\mathrm{RMSE}(I_k, J_k)}{\mu(I_k)} \right)^2} \qquad (20)$$
where $I_k$ is the reference multispectral image, $J_k$ is the pansharpened image, and $R$ is the spatial resolution ratio between the multispectral and panchromatic images. The closer the ERGAS value is to zero, the less spectrally distorted the pansharpened image is. In the case of SAM, the average spectral angle between corresponding pixel vectors of the reference and pansharpened images is calculated using Equation (21) [2,17]:

$$\mathrm{SAM} = \arccos \left( \frac{\left\langle I_{\{k\}}, J_{\{k\}} \right\rangle}{\left\| I_{\{k\}} \right\| \, \left\| J_{\{k\}} \right\|} \right) \qquad (21)$$
where $I_{\{k\}}$ denotes the spectral (band-wise) pixel vector of image $I$ at pixel $k$. Similar to ERGAS, the closer the SAM value is to zero, the less distorted the pansharpened image is. UIQI, developed by Wang and Bovik [44], reflects the loss of correlation, the luminance distortion, and the contrast distortion, and is calculated using Equation (22):

$$\mathrm{UIQI} = \frac{\sigma_{IJ}}{\sigma_I \sigma_J} \cdot \frac{2 \bar{I} \bar{J}}{(\bar{I})^2 + (\bar{J})^2} \cdot \frac{2 \sigma_I \sigma_J}{\sigma_I^2 + \sigma_J^2} \qquad (22)$$

where $\sigma_{IJ}$ is the covariance of $I$ and $J$. The closer the UIQI value is to one, the less distorted the pansharpened image is. Additionally, CC is a representative spectral quality index for pansharpened images; it measures the spectral similarity between the reference multispectral and pansharpened images using Pearson's correlation coefficient. The closer the CC value is to one, the greater the spectral similarity between the pansharpened image and the reference dataset [45].
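The three indices above can be sketched directly from Equations (20)–(22). This is a minimal global implementation, assuming (N, H, W) image stacks and that $R$ is the multispectral-to-panchromatic resolution ratio; published implementations typically compute UIQI over sliding windows and average the results, which is omitted here.

```python
import numpy as np

def ergas(ref, fused, R=4):
    """ERGAS of Eq. (20); ref/fused are (N, H, W) stacks, R the resolution ratio."""
    terms = [np.sqrt(np.mean((r - f) ** 2)) / r.mean() for r, f in zip(ref, fused)]
    return 100.0 / R * np.sqrt(np.mean(np.square(terms)))

def sam(ref, fused):
    """SAM of Eq. (21): mean per-pixel spectral angle in radians."""
    dot = (ref * fused).sum(axis=0)
    norm = np.linalg.norm(ref, axis=0) * np.linalg.norm(fused, axis=0)
    return float(np.mean(np.arccos(np.clip(dot / norm, -1.0, 1.0))))

def uiqi(I, J):
    """UIQI of Eq. (22), computed globally over two single-band images."""
    mI, mJ = I.mean(), J.mean()
    sI, sJ = I.std(), J.std()
    cov = ((I - mI) * (J - mJ)).mean()                 # sigma_IJ
    return (cov / (sI * sJ)) \
        * (2 * mI * mJ / (mI ** 2 + mJ ** 2)) \
        * (2 * sI * sJ / (sI ** 2 + sJ ** 2))
```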

4.3. Experimental Results and Analysis

To evaluate the performance of the proposed algorithm, we selected two pansharpening algorithms, namely, the GSA and MTF-GLP algorithms, for a comparison of the spectral and spatial quality of the pansharpened images [2,15]. In this manuscript, GFNDVI denotes the proposed GF-based pansharpening algorithm using local injection gains based on the NDVI. Figure 4 and Figure 5 show the pansharpening results of each algorithm together with detailed views of each pansharpened image. In the vegetated area (upper left area of Figure 4), the spectral distortion of the pansharpened image generated by the MTF-GLP (Figure 4c) is greater than that of the pansharpened images generated by the GSA and GFNDVI. Additionally, as shown in Figure 5, the colors of some cultivated areas in the GSA and MTF-GLP results are very bright; this brightness is caused by the excessive injection of spatial details into the vegetated area. Meanwhile, the images pansharpened by the GFNDVI have spectral and spatial characteristics similar to those of the original panchromatic and multispectral images. Furthermore, the images pansharpened by the GFNDVI have the best spatial clarity among the three techniques, as shown in Figure 4e and Figure 5e. This means that a multispectral image generated by GF can be utilized for pansharpening and that our methodology for extracting local injection gains is effective.
Table 3 presents the quality index results for the pansharpened images generated by each algorithm. For the consistency property, the downgraded pansharpened images were compared with the original multispectral images. For the synthesis property, the pansharpened images were generated using downgraded panchromatic and multispectral images, and the evaluation indices were calculated by comparing them with the original multispectral images. Therefore, a pansharpened image generated under the synthesis property should have the same spatial and spectral characteristics as the original multispectral image, whereas under the consistency property, the pansharpened image should have the same spatial characteristics as the original panchromatic image, and its degraded version should be similar to the original multispectral image. As shown in Table 3, the proposed method yields the best CC and ERGAS values, except for the synthesis-property results in the Salon region. However, in the cases of SAM and UIQI, the MTF-GLP and GSA show the best results, respectively. The reason for this discrepancy is that both evaluations quantify the pixel-value difference from the original multispectral image, whereas the proposed method applies GF to the original multispectral image to generate a spatially optimized multispectral image whose characteristics are converted to resemble those of the original panchromatic image. Because the original multispectral image is used as the reference for both the synthesis and consistency property evaluations, the quantitative indices for the pansharpened images generated by the GFNDVI algorithm can appear decreased, since GFNDVI aims to generate a pansharpened image whose spatial characteristics are similar to those of the original panchromatic image.
Additionally, since spectral and spatial quality are generally in a tradeoff relationship, spectral distortion may occur in a pansharpened image with better spatial clarity. The evaluation indices for the pansharpened images of the Salon region are therefore lower because the spatial clarity is greatly improved, as shown in Figure 4e, and the spectral characteristics of the spatially enhanced areas are emphasized even more. Nevertheless, considering the best ERGAS and CC values for both the Salon and Baotou regions, the proposed method effectively preserves the spectral information of the original multispectral image.
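As a minimal sketch (not the evaluation code used in the study), the SAM and ERGAS indices discussed above can be computed as follows; the resolution ratio of 4 corresponds to the KOMPSAT-3A pan/MS ratio (0.55 m versus 2.2 m).

```python
import numpy as np

def sam_degrees(reference, fused, eps=1e-12):
    """Mean spectral angle mapper (SAM) in degrees for (rows, cols, bands)
    arrays; 0 means identical per-pixel spectral directions."""
    dot = np.sum(reference * fused, axis=-1)
    norms = np.linalg.norm(reference, axis=-1) * np.linalg.norm(fused, axis=-1)
    angles = np.arccos(np.clip(dot / (norms + eps), -1.0, 1.0))
    return float(np.degrees(angles.mean()))

def ergas(reference, fused, ratio=4):
    """Erreur relative globale adimensionnelle de synthese; lower is better."""
    rmse2 = ((reference - fused) ** 2).mean(axis=(0, 1))  # per-band MSE
    mean2 = reference.mean(axis=(0, 1)) ** 2              # per-band squared mean
    return float(100.0 / ratio * np.sqrt((rmse2 / mean2).mean()))
```

Note that SAM is invariant to a per-pixel scaling of the spectrum, which is why it responds to color (spectral-shape) distortion rather than brightness differences.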
Additionally, to quantitatively analyze the spatial clarity of the pansharpened images, we analyzed the spatial characteristics of edge targets existing within the image. The results of enlarging the edge target area existing in the pansharpened image are shown in Figure 6 and Figure 7; evidently, the pansharpened images generated by the existing techniques do not show an edge target with definite linearity. In particular, the MTF-GLP, which is an MRA-based pansharpening algorithm, greatly distorts the edge characteristics of the target. Aliasing and blurring are observed in the edges around the target, even in the case of the images fused by the GSA. However, the pansharpened image generated by the GFNDVI effectively represents the edge information of the target, as shown in Figure 6e and Figure 7e. In particular, as shown in the specific area of the red rectangle in Figure 6a and Figure 7a, the edge lines between the black and white areas in the pansharpened images generated by the MTF-GLP and GSA include the effects of aliasing and artifacts. However, the edge target in the pansharpened image produced by the proposed algorithm does not exhibit artifacts. Figure 8 presents magnified views of the area encompassed by the red rectangle in Figure 7a. As shown in Figure 8b,c, the images pansharpened by the MTF-GLP and GSA exhibit aliasing around the edges between the black and white areas of the edge target. Furthermore, the image pansharpened by the MTF-GLP displays spectral distortion around the cross line of the edge target, as shown in Figure 8b. However, the image pansharpened by the proposed algorithm has similar spatial characteristics to those of the original panchromatic image, as shown in Figure 8a,d. Therefore, our proposed algorithm preserves the spatial information of the original panchromatic image during the pansharpening process, while minimizing spectral distortion.
To quantitatively verify the spatial quality results in Figure 6 and Figure 7, the edges of each target were extracted, and the signal-to-noise ratio (SNR) and the Nyquist value of the MTF were calculated based on both an edge spread function (ESF) and a line spread function (LSF) [46,47]. The reference values for the SNR and Nyquist comparisons were obtained from the original panchromatic image. Since the center of the target is composed of four edges, four edges were extracted from each image, after which the SNR and Nyquist values were calculated for each edge. The clearer the edges in the image, the higher the SNR and MTF-Nyquist values calculated from them. Table 4 shows the average SNR and Nyquist values over the four edges. As shown in Table 4, the SNR and Nyquist values for the edges in the images pansharpened by the GFNDVI are higher than those of the MTF-GLP and GSA results and are close to those of the original panchromatic image. Thus, whereas the existing pansharpening algorithms introduce aliasing and blurring effects along edge boundaries, the proposed algorithm effectively reflects the spatial characteristics of the original panchromatic image.
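The ESF-to-MTF chain described above can be sketched as follows (an illustrative reconstruction, not the exact procedure of [46,47]): differentiating the ESF yields the LSF, and the magnitude of its Fourier transform, normalized to 1 at zero frequency, gives the MTF, which is then read off at the Nyquist frequency (0.5 cycles per pixel).

```python
import numpy as np

def mtf_nyquist(esf):
    """MTF value at the Nyquist frequency from a sampled 1-D ESF."""
    lsf = np.gradient(esf)             # LSF = derivative of the ESF
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                      # normalize to 1 at zero frequency
    freqs = np.fft.rfftfreq(len(lsf))  # cycles per pixel
    return float(mtf[np.argmin(np.abs(freqs - 0.5))])

# A sharp step edge retains more contrast at Nyquist than a blurred edge.
x = np.linspace(-8.0, 8.0, 33)
sharp_edge = (x >= 0).astype(float)
blurred_edge = 1.0 / (1.0 + np.exp(-x))  # smooth sigmoid-shaped edge
```

This matches the qualitative statement in the text: the blurrier the edge, the lower the MTF-Nyquist value, since blurring spreads the LSF and suppresses its high-frequency content.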

5. Discussion

The experimental results confirmed that the proposed GFNDVI technique produces pansharpened images with similar or superior spectral and spatial quality in comparison with existing pansharpening techniques. Beyond this comparison, the aim of this study is to generate spatially optimal pansharpened images by removing the spatial dissimilarity between multispectral and panchromatic images. Therefore, in this section, the efficiency of the proposed approach for extracting the local injection gains is discussed. This approach derives optimal gain values that are neither overestimated nor underestimated. To verify this claim, we compared the local injection gains extracted using Equation (9) with those extracted using the proposed method. The average, maximum, and minimum values of the local injection gains for each band extracted by each technique are shown in Table 5, which demonstrates that the averages of both methods are similar but that the minimum local injection gains of the blue and NIR bands generated by Equation (9) are close to zero, while the maximum values are excessively large. This tendency is eliminated in the proposed scheme, which bounds the gains using a sigmoid function.
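The effect of sigmoid bounding can be illustrated as follows. This is a generic sketch rather than Equation (14) itself, and the `lower`, `upper`, and `slope` parameters are hypothetical values chosen for demonstration: near-zero and excessively large raw gains are pulled toward the interior of the interval, while mid-range gains change little, which mirrors the behavior reported in Table 5.

```python
import numpy as np

def sigmoid_bounded_gains(raw_gains, lower=0.3, upper=2.2, slope=1.0):
    """Squash raw local injection gains into (lower, upper) with a sigmoid,
    suppressing underestimated (near-zero) and overestimated values."""
    centre = 0.5 * (lower + upper)
    span = upper - lower
    return lower + span / (1.0 + np.exp(-slope * (raw_gains - centre)))

# Raw gains spanning the kind of range seen in Table 5 for Equation (9).
raw = np.array([0.06, 0.69, 1.12, 1.48, 2.61])
adjusted = sigmoid_bounded_gains(raw)
```

After the transform, every gain lies strictly inside the chosen interval, so the dynamic range of the gain map shrinks, consistent with the histogram comparison in Figure 9.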
Figure 9 presents a histogram plot of the local injection gains for the blue band for the Salon region of France shown in Figure 3a. As shown in Figure 9b, the histogram plot generated by the proposed method shows a decreased dynamic range due to the minimization of overestimated and underestimated values in comparison with Figure 9a. Therefore, the technique for extracting local injection gains proposed in this manuscript is slightly more stable than the existing technique.
Moreover, the spatial quality of the pansharpened images is further analyzed using the ESF and LSF. Figure 10 and Figure 11 show the ESF and LSF curves of the blue band in the along and cross directions for each algorithm in the Salon and Baotou regions, respectively. As shown in Figure 10 and Figure 11, the ESF curves of the image pansharpened by the GFNDVI follow a pattern more similar to that of the original panchromatic image than do those of the images pansharpened by the GSA and MTF-GLP. Additionally, the LSF curves of the images pansharpened by the GSA and MTF-GLP have a relatively wide full width at half maximum (FWHM), and the distortion of these LSF curves is large compared with that of the original panchromatic image. These results confirm that the edge target generated by the proposed algorithm exhibits high linearity with a small error and high SNR and MTF values; this trend is common to both experimental sites, as shown in Figure 10 and Figure 11. Therefore, the proposed GFNDVI algorithm effectively preserves the spatial clarity of the original panchromatic image during the pansharpening process, yielding ESF and LSF curve shapes closer to the ideal than those of the existing techniques, along with high SNR and MTF-Nyquist values.
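The FWHM comparison above can be illustrated with a generic sketch (not the measurement code used in the study): the width of a sampled LSF at half its maximum is estimated by linearly interpolating the two half-maximum crossings, and a wider FWHM indicates a more blurred edge response.

```python
import numpy as np

def fwhm(lsf, spacing=1.0):
    """Full width at half maximum of a sampled LSF, with linear
    interpolation of the two half-maximum crossings."""
    half = lsf.max() / 2.0
    above = np.where(lsf >= half)[0]
    i0, i1 = above[0], above[-1]
    left = i0 - (lsf[i0] - half) / (lsf[i0] - lsf[i0 - 1]) if i0 > 0 else float(i0)
    right = i1 + (lsf[i1] - half) / (lsf[i1] - lsf[i1 + 1]) if i1 < len(lsf) - 1 else float(i1)
    return (right - left) * spacing

# Gaussian LSFs sampled every 0.5 pixel; analytically FWHM = 2.3548 * sigma.
x = np.arange(-10.0, 10.5, 0.5)
narrow = np.exp(-x**2 / (2 * 1.0**2))  # sigma = 1 pixel
wide = np.exp(-x**2 / (2 * 2.0**2))    # sigma = 2 pixels
```

Doubling the Gaussian blur doubles the measured FWHM, which is the sense in which the wider LSF curves of the GSA and MTF-GLP results indicate a loss of spatial clarity relative to the panchromatic reference.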

6. Conclusions

In this manuscript, a new GF-based pansharpening algorithm is proposed to minimize the spectral and spatial distortion in pansharpened images caused by spatial dissimilarities between panchromatic and multispectral images, including those related to the time lag between the sensors. Specifically, the proposed algorithm focuses on maintaining the spatial clarity of the original panchromatic image while minimizing spectral distortion within the pansharpened image. The main cause of a decrease in spatial clarity is that a resized multispectral image does not have the same spatial characteristics as a panchromatic image. Therefore, GF is used to generate an optimal multispectral image with the same spectral characteristics as the resized multispectral image and spatial characteristics similar to those of the panchromatic image. Additionally, to extract the local injection gains specific to the GF-based resized multispectral image, the existing injection gains were optimized using a sigmoid function. The quality of the pansharpened images generated through the proposed technique was analyzed based on existing evaluation techniques for pansharpened images and on the spectral and spatial characteristics of targets within the images. The experimental results show that the proposed method yields less spectral distortion and better spatial clarity than conventional pansharpening algorithms. The computational costs of extracting the local injection gains and of the pansharpening model are similar to those of the general GSA and MTF-GLP algorithms; however, further work using parallel processing or graphics processing units will be needed, since GF incurs a relatively high computational cost.

Author Contributions

Conceptualization, J.C.; methodology, J.C.; software, J.C. and H.P.; validation, J.C., H.P., and D.S.; formal analysis, H.P.; investigation, D.S.; resources, D.S.; data curation, D.S.; writing—original draft preparation, J.C.; writing—review and editing, J.C.; visualization, H.P.; supervision, J.C.; project administration, D.S.; funding acquisition, D.S.

Funding

This research was funded by the Korea Aerospace Research Institute (KARI) (grant no. FR19920) and the Ministry of Education (NRF-2017R1D1A3B03034602).

Acknowledgments

This work was supported by the Korea Aerospace Research Institute (KARI).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhang, Y. Understanding image fusion. Photogramm. Eng. Remote Sens. 2004, 70, 653–660. [Google Scholar]
  2. Vivone, G.; Alparone, L.; Chanussot, J.; Dalla Mura, M.; Garzelli, A.; Licciardi, G.A.; Restaino, R.; Wald, L. A critical comparison among pansharpening algorithms. IEEE Trans. Geosci. Remote Sens. 2015, 53, 2565–2586. [Google Scholar] [CrossRef]
  3. González-Audícana, M.; Otazu, X.; Fors, O.; Seco, A. Comparison between Mallat’s and the ‘à trous’ discrete wavelet transform based algorithms for the fusion of multispectral and panchromatic images. Int. J. Remote Sens. 2005, 26, 595–614. [Google Scholar] [CrossRef]
  4. Choi, J.; Yeom, J.; Chang, A.; Byun, Y.; Kim, Y. Hybrid pansharpening algorithm for high spatial resolution satellite imagery to improve spatial quality. IEEE Geosci. Remote Sens. Lett. 2013, 10, 490–494. [Google Scholar] [CrossRef]
  5. Choi, J.; Kim, G.; Park, N.; Park, H.; Choi, S. A hybrid pan-sharpening algorithm of VHR satellite images that employs injection gains based on NDVI to reduce computational costs. Remote Sens. 2017, 9, 976. [Google Scholar] [CrossRef]
  6. Kwan, C.; Choi, J.H.; Chan, S.H.; Zhou, J.; Budavari, B. A super-resolution and fusion approach to enhancing hyperspectral images. Remote Sens. 2018, 10, 1416. [Google Scholar] [CrossRef]
  7. Selva, M.; Aiazzi, B.; Burera, F.; Chiarantini, L.; Baronti, S. Hyper-sharpening: A first approach on SIM-GA data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 3008–3024. [Google Scholar] [CrossRef]
  8. Dou, W.; Che, Y.; Li, X.; Sui, D.Z. A general framework for component substitution image fusion: An implementation using the fate image fusion method. Comput. Geosci. 2007, 33, 219–228. [Google Scholar] [CrossRef]
  9. Tu, T.M.; Huang, P.S.; Hung, C.L.; Chang, C.P. A fast intensity-hue-saturation fusion technique with spectral adjustment for IKONOS imagery. IEEE Geosci. Remote Sens. Lett. 2004, 1, 309–312. [Google Scholar] [CrossRef]
  10. Rahmani, S.; Strait, M.; Merkurjev, D.; Moeller, M.; Wittman, T. An adaptive IHS pan-sharpening method. IEEE Geosci. Remote Sens. Lett. 2010, 7, 746–750. [Google Scholar] [CrossRef]
  11. Aiazzi, B.; Baronti, S.; Selva, M. Improving component substitution pansharpening through multivariate regression of MS+ Pan data. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3230–3239. [Google Scholar] [CrossRef]
  12. Garzelli, A.; Nencini, F.; Capobianco, L. Optimal MMSE pan sharpening of very high resolution multispectral images. IEEE Trans. Geosci. Remote Sens. 2008, 46, 228–236. [Google Scholar] [CrossRef]
  13. Choi, J.; Yu, K.; Kim, Y. A new adaptive component-substitution based satellite image fusion by using partial replacement. IEEE Trans. Geosci. Remote Sens. 2011, 49, 295–309. [Google Scholar] [CrossRef]
  14. Zhong, S.; Zhang, Y.; Chen, Y.; Wu, D. Combining component substitution and multiresolution analysis: A novel generalized BDSD pansharpening algorithm. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 2867–2875. [Google Scholar] [CrossRef]
  15. Aiazzi, B.; Baronti, S.; Lotti, F.; Selva, M. A comparison between global and context-adaptive pansharpening of multispectral images. IEEE Geosci. Remote Sens. Lett. 2009, 6, 302–306. [Google Scholar] [CrossRef]
  16. Oh, K.; Jung, H.; Jeong, N. Pansharpening method for KOMPSAT-2/3 high-spatial resolution satellite image. Korean J. Remote Sens. 2015, 31, 161–170. [Google Scholar] [CrossRef]
  17. Palsson, F.; Sveinsson, J.R.; Ulfarsson, M.O.; Benediktsson, J.A. Quantitative quality evaluation of pansharpened imagery: Consistency versus synthesis. IEEE Trans. Geosci. Remote Sens. 2016, 54, 1247–1259. [Google Scholar] [CrossRef]
  18. Alparone, L.; Wald, L.; Chanussot, J.; Thomas, C.; Gamba, P.; Bruce, L.M. Comparison of pansharpening algorithms: Outcome of the 2006 GRS-S data-fusion contest. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3012–3021. [Google Scholar] [CrossRef]
  19. Aiazzi, B.; Alparone, L.; Baronti, S.; Garzelli, A.; Selva, M. MTF-tailored multiscale fusion of high-resolution MS and Pan imagery. Photogramm. Eng. Remote Sens. 2006, 72, 591–596. [Google Scholar] [CrossRef]
  20. Aiazzi, B.; Alparone, L.; Baronti, S.; Garzelli, A. Context-driven fusion of high spatial and spectral resolution images based on oversampled multiresolution analysis. IEEE Trans. Geosci. Remote Sens. 2002, 40, 2300–2312. [Google Scholar] [CrossRef]
  21. Kim, Y.; Kim, M.; Choi, J.; Kim, Y. Image fusion of spectrally nonoverlapping imagery using SPCA and MTF-based filters. IEEE Geosci. Remote Sens. Lett. 2017, 14, 2295–2299. [Google Scholar] [CrossRef]
  22. Palsson, F.; Sveinsson, J.R.; Ulfarsson, M.O.; Benediktsson, J.A. MTF-based deblurring using a wiener filter for CS and MRA pansharpening methods. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 2255–2269. [Google Scholar] [CrossRef]
  23. Massip, P.; Blanc, P.; Wald, L. A method to better account for modulation transfer functions in ARSIS-based pansharpening methods. IEEE Trans. Geosci. Remote Sens. 2012, 50, 800–808. [Google Scholar] [CrossRef]
  24. Vivone, G.; Simões, M.; Dalla Mura, M.; Restaino, R.; Bioucas-Dias, J.M.; Licciardi, G.A.; Chanussot, J. Pansharpening based on semiblind deconvolution. IEEE Trans. Geosci. Remote Sens. 2015, 53, 1997–2010. [Google Scholar] [CrossRef]
  25. Vivone, G.; Addesso, P.; Restaino, R.; Dalla Mura, M.; Chanussot, J. Pansharpening based on deconvolution for multiband filter estimation. IEEE Trans. Geosci. Remote Sens. 2019, 57, 540–553. [Google Scholar] [CrossRef]
  26. Xu, Q.; Zhang, Y.; Li, B.; Ding, L. Pansharpening using regression of classified MS and Pan images to reduce color distortion. IEEE Geosci. Remote Sens. Lett. 2015, 12, 28–32. [Google Scholar]
  27. Restaino, R.; Vivone, G.; Dalla Mura, M.; Chanussot, J. Fusion of multispectral and panchromatic images based on morphological operators. IEEE Trans. Image-Process. 2016, 25, 2882–2895. [Google Scholar] [CrossRef] [PubMed]
  28. Li, H.; Jing, L.; Tang, Y.; Wang, L. An image fusion method based on image segmentation for high-resolution remotely-sensed imagery. Remote Sens. 2018, 10, 790. [Google Scholar] [CrossRef]
  29. Wang, W.; Liu, H.; Liang, L.; Liu, Q.; Xie, G. A regularized model-based pan-sharpening method for remote sensing images with local dissimilarities. Int. J. Remote Sens. 2018, 1–25. [Google Scholar]
  30. Garzelli, A.; Aiazzi, B.; Alparone, L.; Lolli, S.; Vivone, G. Multispectral pansharpening with radiative transfer-based detail-injection modeling for preserving changes in vegetation cover. Remote Sens. 2018, 10, 1308. [Google Scholar] [CrossRef]
  31. Restaino, R.; Dalla Mura, M.; Vivone, G. Context-adaptive pan-sharpening based on image segmentation. IEEE Trans. Geosci. Remote Sens. 2017, 55, 753–766. [Google Scholar] [CrossRef]
  32. Yang, J.; Fu, X.; Hu, Y.; Huang, Y.; Ding, X.; Paisley, J. PanNet: A deep network architecture for pan-sharpening. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 1753–1761. [Google Scholar]
  33. Masi, G.; Cozzolino, D.; Verdoliva, L.; Scarpa, G. Pansharpening by convolutional neural networks. Remote Sens. 2016, 8, 594. [Google Scholar] [CrossRef]
  34. Guo, Y.; Ye, F.; Gong, H. Learning an efficient convolution neural network for pansharpening. Algorithms 2019, 12, 16. [Google Scholar] [CrossRef]
  35. Wei, Y.; Yuan, Q.; Shen, H.; Zhang, L. Boosting the accuracy of multispectral image pansharpening by learning a deep residual network. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1795–1799. [Google Scholar] [CrossRef]
  36. Jameel, A.; Riaz, M.M.; Ghafoor, A. Guided filter and IHS-based pan-sharpening. IEEE Sens. J. 2015, 16, 192–194. [Google Scholar]
  37. Zheng, Y.; Dai, Q.; Tu, Z.; Wang, L. Guided image filtering-based pan-sharpening method: A case study of GaoFen-2 imagery. ISPRS Int. J. Geo-Inf. 2017, 6, 404. [Google Scholar] [CrossRef]
  38. Liu, J.; Liang, S. Pan-sharpening using a guided filter. Int. J. Remote Sens. 2016, 37, 1777–1800. [Google Scholar] [CrossRef]
  39. Meng, X.; Li, J.; Shen, H.; Zhang, L.; Zhang, H. Pansharpening with a guided filter based on three-layer decomposition. Sensors 2016, 16, 1068. [Google Scholar] [CrossRef] [PubMed]
  40. He, K.; Sun, J.; Tang, X. Guided image filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1397–1409. [Google Scholar] [CrossRef] [PubMed]
  41. Choi, J.; Park, H.; Kim, D.; Choi, S. Unsupervised change detection of KOMPSAT-3 satellite imagery based on cross-sharpened images by guided filter. Korean J. Remote Sens. 2018, 34, 777–786. [Google Scholar]
  42. Cho, K.; Kim, Y.; Kim, Y. Disaggregation of Landsat-8 thermal data using guided SWIR imagery on the scene of a wildfire. Remote Sens. 2018, 10, 105. [Google Scholar]
  43. Jeong, N.; Jung, H.; Oh, K.; Park, S.; Lee, S. Comparison analysis of quality assessment protocols for image fusion of KOMPSAT-2/3/3A. Korean J. Remote Sens. 2016, 32, 453–469. [Google Scholar] [CrossRef]
  44. Wang, Z.; Bovik, A.C. A universal image quality index. IEEE Signal Process. Lett. 2002, 9, 81–84. [Google Scholar] [CrossRef]
  45. Otazu, X.; González-Audícana, M.; Fors, O.; Núñez, J. Introduction of sensor spectral response into image fusion methods. Application to wavelet-based methods. IEEE Trans. Geosci. Remote Sens. 2005, 43, 2376–2385. [Google Scholar] [CrossRef] [Green Version]
  46. Crespi, M.; De Vendictis, L. A procedure for high resolution satellite imagery quality assessment. Sensors 2009, 9, 3289–3313. [Google Scholar] [CrossRef] [PubMed]
  47. Javan, F.D.; Samadzadegan, F.; Reinartz, P. Spatial quality assessment of pan-sharpened high resolution satellite imagery based on an automatically estimated edge based metric. Remote Sens. 2013, 5, 6539–6559. [Google Scholar] [CrossRef]
Figure 1. Workflow of the proposed algorithm based on guided filtering (GF).
Figure 2. Examples of the spatial characteristics of each band for a target: (a) panchromatic image; (b) blue band of the multispectral image; (c) GF-based image of the blue band.
Figure 3. Study areas: (a) Salon region, France; (b) Baotou region, China.
Figure 4. The details of pansharpened images according to each algorithm in the Salon region, France: (a) panchromatic image; (b) resized multispectral image; (c) image pansharpened by the generalized Laplacian pyramids with modulation transfer function-matched filtering (MTF-GLP) method; (d) image pansharpened by the Gram–Schmidt adaptive (GSA) method; (e) image pansharpened by the GF-based pansharpening algorithm using local injection gains based on the normalized difference vegetation index (GFNDVI) method.
Figure 5. The details of pansharpened images according to each algorithm in the Baotou region: (a) panchromatic image; (b) resized multispectral image; (c) image pansharpened by the MTF-GLP method; (d) image pansharpened by the GSA method; (e) image pansharpened by the GFNDVI.
Figure 6. The details of the edge target according to each algorithm in the Salon region: (a) panchromatic image; (b) resized multispectral image; (c) image pansharpened by the MTF-GLP; (d) image pansharpened by the GSA; (e) image pansharpened by the GFNDVI.
Figure 7. The details of the edge target according to each algorithm in the Baotou region: (a) panchromatic image; (b) resized multispectral image; (c) image pansharpened by the MTF-GLP; (d) image pansharpened by the GSA; (e) image pansharpened by the GFNDVI.
Figure 8. The magnified views of the red rectangle in Figure 7a according to each algorithm in the Baotou region: (a) panchromatic image; (b) image pansharpened by the MTF-GLP; (c) image pansharpened by the GSA; (d) image pansharpened by the GFNDVI.
Figure 9. Histogram plot of the local injection gains for the blue band of the Salon region, France: (a) result by Equation (9) [5]; (b) result by the proposed algorithm (Equation (14)).
Figure 10. The edge spread function (ESF) and line spread function (LSF) curves in the cross direction according to the pansharpened image generated by each algorithm (Salon region). DN: digital number; RER: relative edge response; SNR: signal-to-noise ratio.
Figure 11. The ESF and LSF curves along the pansharpened image generated by each algorithm (Baotou region).
Table 1. The specifications of the Korea multipurpose satellite (KOMPSAT)-3A satellite sensor.
Sensor                           KOMPSAT-3A
Multispectral resolution/size    2.2 m
Panchromatic resolution/size     0.55 m
Wavelength    Panchromatic       450–900 nm
              Blue               450–520 nm
              Green              520–600 nm
              Red                630–690 nm
              NIR                760–900 nm
Table 2. Descriptions of the experimental datasets.
                                   Site 1 (Salon)            Site 2 (Baotou)
Image size (panchromatic image)    24,060 × 23,800 pixels    24,060 × 23,120 pixels
Image size (multispectral image)   6015 × 5950 pixels        6015 × 5780 pixels
Acquisition date                   16 July 2017              15 October 2017
Table 3. Comparative pansharpening results corresponding to each region. ERGAS: erreur relative globale adimensionnelle de synthèse; SAM: the spectral angle mapper; CC: spatial correlation coefficient; UIQI: universal image quality index.
Region           Algorithm   Synthesis Property                 Consistency Property
                             ERGAS   SAM     CC      UIQI       ERGAS   SAM     CC      UIQI
Salon (France)   MTF-GLP     2.838   3.417   0.948   0.749      1.473   1.327   0.987   0.910
                 GSA         2.630   3.352   0.955   0.777      1.453   1.315   0.987   0.945
                 GFNDVI      2.728   3.850   0.951   0.742      1.290   1.644   0.991   0.936
Baotou (China)   MTF-GLP     0.826   0.884   0.966   0.867      0.434   0.493   0.993   0.952
                 GSA         0.980   0.998   0.958   0.880      0.862   0.740   0.973   0.949
                 GFNDVI      0.744   0.995   0.970   0.876      0.417   0.617   0.994   0.945
Table 4. Comparative spatial clarity results corresponding to each region. SNR: signal-to-noise ratio; MTF-Nyquist: Nyquist value based on the MTF.
Region    Algorithm      SNR (dB)   MTF-Nyquist (%)
Salon     Panchromatic   67.55      17.11
          MTF-GLP        52.17      14.74
          GSA            51.67      16.37
          GFNDVI         63.11      17.90
Baotou    Panchromatic   47.65      26.63
          MTF-GLP        42.21      17.79
          GSA            42.29      20.76
          GFNDVI         47.37      23.91
Table 5. Statistical characteristics of local injection gains according to algorithms used to extract the local injection gains.
Algorithm                          Band    Average   Max. Value   Min. Value
Method by Equation (9) [5]         Blue    0.6931    2.1160       0.1158
                                   Green   1.1217    2.5444       0.5444
                                   Red     1.2281    2.6077       0.6508
                                   NIR     1.4841    2.0614       0.0614
Proposed method by Equation (14)   Blue    0.6929    1.0301       0.4508
                                   Green   1.1213    1.6671       0.7295
                                   Red     1.2276    1.8252       0.7987
                                   NIR     1.4847    2.0030       0.7625

MDPI and ACS Style

Choi, J.; Park, H.; Seo, D. Pansharpening Using Guided Filtering to Improve the Spatial Clarity of VHR Satellite Imagery. Remote Sens. 2019, 11, 633. https://0-doi-org.brum.beds.ac.uk/10.3390/rs11060633