Article

Directional $\ell_0$ Sparse Modeling for Image Stripe Noise Removal

School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 611731, China
*
Author to whom correspondence should be addressed.
Submission received: 29 December 2017 / Revised: 11 February 2018 / Accepted: 22 February 2018 / Published: 26 February 2018
(This article belongs to the Special Issue Data Restoration and Denoising of Remote Sensing Data)

Abstract

Remote sensing images are often polluted by stripe noise, which has a negative impact on visual quality. It is therefore necessary to remove stripe noise for subsequent applications, e.g., classification and target recognition. This paper aims to remove stripe noise to enhance the visual quality of images, while preserving the image details of stripe-free regions. Instead of solving for the underlying image directly by a variety of algorithms, we first estimate the stripe noise from the degraded image, and then compute the final destriped image as the difference between the observed image and the estimated stripe noise. In this paper, we propose a non-convex $\ell_0$ sparse model for remote sensing image destriping by taking full consideration of the intrinsically directional and structural priors of stripe noise, as well as the locally continuous property of the underlying image. Moreover, the proposed non-convex model is solved by a proximal alternating direction method of multipliers (PADMM) based algorithm. In addition, we also give the corresponding theoretical analysis of the proposed algorithm. Extensive experimental results on simulated and real data demonstrate that the proposed method outperforms recent competitive destriping methods, both visually and quantitatively.

Graphical Abstract

1. Introduction

Stripe noise (denoted as “stripes” throughout this paper) is a universal phenomenon in remote sensing images, generally caused by inconsistency among the scanning detector elements, detector motion, temperature changes, etc. Stripes have a negative influence not only on visual quality but also on subsequent remote sensing applications. Therefore, it is necessary to remove stripes from degraded images while preserving the healthy pixels. In general, stripes carry strongly directional and structural information; e.g., pixels are typically damaged row by row or column by column.
Recently, many approaches for destriping problems have been proposed, which may be roughly divided into three categories, mainly including filtering-based methods, statistics-based methods and optimization-based methods. Note that the proposed method belongs to the category of optimization-based methods.
The filtering-based methods, which generally obtain results with a variety of filters, have been widely utilized for remote sensing image destriping [1,2,3,4]. In Ref. [1], Chen et al. proposed an approach for remote sensing image destriping based on a finite-impulse-response (FIR) filter in the frequency domain. Although this method exhibits good results on the experimental CMODIS data, it inevitably leads to ringing and ripple artifacts. In Ref. [3], wavelet analysis and adaptive Fourier zero-frequency amplitude normalization were used for hyperspectral image destriping, and this wavelet-based method has shown promising ability against both stripes and random noise.
The statistics-based methods mainly analyze the distribution of stripes, exploiting their strongly directional characteristic as a prior for remote sensing image destriping [5,6,7,8,9,10,11]. In Ref. [7], Weinreb et al. introduced a method based on matching empirical distribution functions (EDFs) for GOES-7 data, but its limitations and instability stem from assuming similarity and regularity among the stripes. To conquer the instability when the stripes are irregular or nonlinear, Rakwatin et al. [9] introduced a method combining a histogram-matching algorithm with local least squares fitting to remove the stripes of Aqua MODIS band 6. In Ref. [10], a spectral moment matching (SpcMM) method, which can automatically remove stripes of various frequencies in a specific band, was proposed for Hyperion image destriping. In addition, Shen et al. [11] employed a piece-wise destriping method for nonlinear and irregular stripes, which computes correction coefficients for each portion by considering neighboring normal rows, but it cannot automatically select a threshold value to divide the image into different parts.
Recently, the optimization-based methods have shown superiority for remote sensing image destriping problems [12,13,14,15,16,17,18,19,20,21,22,23]. Image destriping is generally an ill-posed problem, for which direct and traditional methods fail to obtain a meaningful, stable and unique solution. A common strategy for ill-posed problems is therefore to construct a regularization model by investigating the priors of the underlying image. Optimization-based methods focus on searching for and discovering intrinsic prior knowledge to generate reasonable regularization models. In Ref. [24], Zorzi et al. proposed a sparse and low-rank decomposition for models specified in the time domain, and the authors of [25] proposed a sparse and low-rank decomposition of the inverse of the manifest spectral density to obtain a simpler graphical model. In Ref. [17], the authors presented a unidirectional total variation (UTV) model for MODIS (i.e., moderate-resolution imaging spectroradiometer) image stripe removal by fully considering the directional information of stripes. The UTV model is motivated by the classical TV model and the analysis of directional stripes. Chang et al. [21] proposed an optimization model combining UTV with sparse priors of stripes, applied to denoising and destriping simultaneously. In Ref. [22], the authors utilized the split Bregman iteration method with an anisotropic spectral-spatial total variation regularization to remove multispectral image stripes.
In summary, although these optimization-based methods can yield effective stripe removal results, there is still much room for improvement. Most of them are implemented only from the perspective of noise removal, without considering the typical properties of stripes, e.g., their directional and structural properties. Even when they consider these properties, the formulated sparse destriping models fail to accurately depict the typical properties of stripes [26,27]. Moreover, the algorithms designed for non-convex models, e.g., $\ell_0$ sparse models, generally come without a convergence analysis. These observations motivate us to develop a new model and design a corresponding effective algorithm, with a theoretical convergence guarantee, to solve the remote sensing destriping problem.
In this paper, to remove the stripes of remote sensing images, we propose a non-convex sparse model which mainly consists of three sparse priors: an $\ell_0$ sparse prior that fully considers the directional property of stripes (y-axis), an $\ell_1$ sparse prior that considers the discontinuity of the underlying image (x-axis), and a sparsity prior on the stripes themselves that considers their structural property. The framework of the proposed method is shown in Figure 1. Moreover, we design a proximal alternating direction method of multipliers (PADMM) based algorithm to solve the proposed non-convex sparse model. PADMM mainly incorporates a proximal term into the well-known ADMM method, which has been applied to many image processing applications, e.g., image deblurring [28], image denoising [29], tensor completion [30], etc. Note that, after adding this additional proximal term, the resulting algorithm is no longer a standard ADMM algorithm. The Karush–Kuhn–Tucker (KKT) conditions are the first-order necessary conditions for a solution of a nonlinear program to be optimal. In particular, the convergence of the proposed algorithm to a KKT point of the optimization problem is theoretically proven in this work.
The goal of this work is to propose a new model and design a corresponding effective algorithm for remote sensing image destriping, aiming to obtain promising results. The contributions of this work are summarized as follows:
(1)
Fully considering the latent priors of stripes, we formulate an $\ell_0$ sparse model to depict the intrinsically sparse characteristic of the stripes.
(2)
We solve the non-convex model by a designed PADMM based algorithm, for which the corresponding theoretical analysis is given in this paper (see Appendix A).
(3)
The proposed method outperforms several recent competitive image destriping methods.
The outline of this paper is organized as follows. In Section 2, we will briefly introduce the related work. The proposed model and detailed solving algorithm will be shown in Section 3. In Section 4, we compare the proposed method with some competitive remote sensing image destriping methods, and give the discussions about the proposed method. Finally, conclusions are drawn in Section 5.

2. Related Work

2.1. Destriping Problem Formulation

The striping effects in remote sensing images mainly consist of additive and multiplicative components [15]. However, multiplicative stripes can be converted to the additive case by taking the logarithm [31]. Thus, most research focuses on the additive stripe model
$$ b(x, y) = u(x, y) + s(x, y), \qquad (1) $$
where $b(x,y)$, $u(x,y)$ and $s(x,y)$ denote the components of the observed image, the underlying image and the stripes at location $(x,y)$, respectively. For convenience, a matrix-vector form can be written as follows
$$ b = u + s, \qquad (2) $$
where $b, u, s \in \mathbb{R}^n$ represent the lexicographically ordered vectors of $b(x,y)$, $u(x,y)$ and $s(x,y)$, respectively. The purpose of our work is to estimate the stripes $s$; the underlying image is then recovered via $u = b - s$.
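As a minimal numerical sketch of this additive model (the toy image and the single synthetic stripe below are our own illustrative choices), estimating $s$ and subtracting it recovers the underlying image exactly:

```python
import numpy as np

# Hypothetical toy data: a random "underlying" image plus one
# vertical stripe that is constant along the stripe direction.
rng = np.random.default_rng(0)
u = rng.random((4, 4))        # underlying stripe-free image
s = np.zeros((4, 4))
s[:, 1] = 0.3                 # a single column stripe

b = u + s                     # observed image, Equation (2)

# Once the stripes s are known (or estimated), the underlying
# image is recovered as u = b - s.
u_rec = b - s
print(np.allclose(u_rec, u))  # True
```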

2.2. UTV for Remote Sensing Image Destriping

The total variation (TV) model, first proposed by Rudin, Osher, and Fatemi (ROF) [32], has shown powerful ability in many image applications, e.g., image unmixing [33], image deblurring [34], image inpainting [35], etc. It has the following form
$$ E(u) = \frac{1}{2} \int_\Omega ( u - b )^2 \, dx \, dy + \lambda \, TV(u), \qquad (3) $$
where $\lambda$ is a positive regularization parameter, and $TV(u)$ represents the regularization term expressed as
$$ TV(u) = \int_\Omega | \nabla u | = \int_\Omega \sqrt{ u_x^2 + u_y^2 } \, dx \, dy. \qquad (4) $$
In many approaches, $s(x,y)$ is regarded as constant along a given line. Although this assumption has shown stability for MOS-B data, it fails for MODIS, whose emissive bands exhibit not only predominant nonlinear effects but also random stripes. Thus, more realistic assumptions are needed to design an efficient destriping method.
Without loss of generality, we can assume that the stripes are along the vertical direction (y-axis). In mathematical terms, most pixels of the stripe noise $s$ satisfy the following property [17]:
$$ \left| \frac{\partial s(x,y)}{\partial y} \right| \ll \left| \frac{\partial s(x,y)}{\partial x} \right|, \qquad (5) $$
where the y-axis is along the stripe direction, and the x-axis is across the stripe direction. When $(x,y)$ is not the location of a pixel belonging to the stripes, both sides of Equation (5) are almost the same, denoted as
$$ \left| \frac{\partial s(x,y)}{\partial y} \right| \approx \left| \frac{\partial s(x,y)}{\partial x} \right|. \qquad (6) $$
By the relations in Equations (5) and (6), we get
$$ \int_\Omega \left| \frac{\partial s(x,y)}{\partial y} \right| dx \, dy \ll \int_\Omega \left| \frac{\partial s(x,y)}{\partial x} \right| dx \, dy, \qquad (7) $$
which means
$$ TV_y(s) \ll TV_x(s), \qquad (8) $$
where $TV_x$ and $TV_y$ are the horizontal and vertical variations, respectively. The authors of [17] encourage robust stripe removal by minimizing the unidirectional total variation (UTV) model as follows
$$ E(u) = TV_y(u - b) + \lambda \, TV_x(u), \qquad (9) $$
which can be solved by an Euler–Lagrange equation based algorithm.
In Ref. [17], the UTV model effectively deals with remote sensing image destriping problems and has demonstrated promising ability on Aqua and Terra MODIS data. Although the TV model preserves image edges well, it cannot accurately depict the specific directional property of stripes, which leads to undesired results. The UTV model, with its unidirectional constraint, can remove stripes while not destroying the underlying image details. Inspired by the UTV model, we fully consider the intrinsically directional and structural priors of stripes and the continuity of the underlying image, and finally form a unidirectional and sparsity-based optimization model.
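The directional inequality in Equation (8) is easy to verify numerically. The sketch below (a synthetic stripe field of hypothetical size, using simple forward differences) computes the discrete unidirectional variations of purely vertical stripes:

```python
import numpy as np

def tv_y(img):
    # discrete variation along the stripe direction (down the columns)
    return np.abs(np.diff(img, axis=0)).sum()

def tv_x(img):
    # discrete variation across the stripes (along the rows)
    return np.abs(np.diff(img, axis=1)).sum()

# Column stripes: each column is constant along y, but columns differ.
rng = np.random.default_rng(1)
stripes = np.tile(rng.normal(0, 20, size=(1, 64)), (64, 1))

print(tv_y(stripes))                   # 0.0: no variation along stripes
print(tv_y(stripes) < tv_x(stripes))   # True: TV_y(s) << TV_x(s)
```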

3. The Proposed Method

Combining the stripe model in Equation (2), we now give the proposed optimization model with a unidirectional prior, motivated by an extension of the UTV model. In what follows, the detailed explanation of the proposed model and the corresponding solving algorithm will be exhibited.
Section 3.1 presents the proposed model, Section 3.2 reviews the existing work we build on, and Section 3.3 describes the designed algorithm based on the work in Section 3.2.

3.1. The Proposed Model

3.1.1. Local Smoothness Along Stripe Direction

The stripes of remote sensing images generally appear column by column (y-axis) or row by row (x-axis). Without loss of generality, we treat all stripes as the column-by-column case when formulating the final directional model (row-by-row stripes can simply be rotated to column-by-column stripes to fit the proposed model). Considering the smoothness of the stripes, the difference between adjacent pixels along the stripe direction (y-axis) is quite small, or even close to zero; we therefore impose a sparse prior on this characteristic. The first regularization term, on the differences within the stripes, is given as follows
$$ R_1 = \| \partial_y s \|_0, \qquad (10) $$
where $\partial_y$ is the partial differential operator along the stripe direction. We denote by $\partial_y u$ the vector form of $\partial_y U$, where $U$ is a 2D image; the definition of $\partial_x u$ is similar. Compared with popular sparse measures, e.g., the $\ell_1$-norm and the $\ell_p$-norm ($0 < p < 1$), the $\ell_0$-norm, which counts the number of non-zero elements of a vector, is a promising measure of sparsity; thus we employ the $\ell_0$-norm to describe the sparsity of $\partial_y s$. Although this term makes the proposed model non-convex, we utilize the designed PADMM based algorithm to guarantee that the solution converges to a KKT point.

3.1.2. Local Continuity of the Underlying Image

In general, the underlying image $u$ is viewed as continuous along the x-axis. When column-by-column stripes $s$ are added to the underlying image, the local continuity of $u$ is broken, which means that we should force $\partial_x u$ to be small to preserve the continuity of $u$. By this assumption and the relation $u = b - s$, we utilize the following $\ell_1$-norm regularization to describe the local continuity of the underlying image
$$ R_2 = \| \partial_x (b - s) \|_1, \qquad (11) $$
where $\partial_x$ represents the difference operator in the across-stripe direction. Note that this term is exactly the second term of the UTV model in Equation (9).

3.1.3. Global Sparsity of Stripes

In many destriping approaches [26,27,36,37], the stripes can naturally be viewed as sparse when they are not dense. Here, we take the $\ell_1$-norm to depict the sparsity of the stripes:
$$ R_3 = \| s \|_1. \qquad (12) $$
Even when the stripes are dense, the sparse term in Equation (12) should be retained, since it effectively avoids undesired effects and keeps the proposed method robust (see more discussion in the Results section).
Combining the above three regularization terms, we finally formulate the $\ell_0$ sparse model for remote sensing image destriping,
$$ \min_s \; \| \partial_y s \|_0 + \mu \| s \|_1 + \lambda \| \partial_x (b - s) \|_1, \qquad (13) $$
where μ and λ are two positive parameters.
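As an illustrative sketch (the function name and the forward-difference discretization of $\partial_y$, $\partial_x$ are our own choices), the objective in Equation (13) can be evaluated as:

```python
import numpy as np

def destriping_objective(s, b, mu, lam):
    """Value of the objective in Equation (13):
    ||d_y s||_0 + mu * ||s||_1 + lam * ||d_x (b - s)||_1,
    with d_y / d_x realized as forward differences along / across
    the stripe direction."""
    dy_s = np.diff(s, axis=0)            # along-stripe differences
    dx_u = np.diff(b - s, axis=1)        # across-stripe differences of u = b - s
    return (np.count_nonzero(dy_s)       # l0 term
            + mu * np.abs(s).sum()       # l1 sparsity of stripes
            + lam * np.abs(dx_u).sum())  # l1 continuity of u

# A stripe field that is constant along y contributes nothing to the l0 term.
b = np.full((8, 8), 10.0)
s = np.zeros((8, 8)); s[:, 3] = 2.0
print(destriping_objective(s, b, mu=0.1, lam=1.0))  # about 33.6
```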
Note that the proposed model in Equation (13) is similar to the model in [26], since both employ the directional property of stripes. However, there is an important difference: the model in [26] enforces the $\ell_1$ norm on $\partial_y s$ and the $\ell_0$ norm on $s$, whereas our model enforces the $\ell_1$ norm on $s$ and the $\ell_0$ norm on $\partial_y s$. The experimental results show that our model achieves better performance based on the PADMM algorithm. For instance, Figure 2 shows the number of non-zero elements of $s$ (Figure 2a) and $\partial_y s$ (Figure 2b), where $s$ is estimated from a real image example by the method of [26]; it is clear that $\partial_y s$ is almost entirely concentrated around 0, whereas $s$ is not. Since the $\ell_0$ norm is a promising way to depict sparsity, our model enforces the $\ell_0$ norm on $\partial_y s$.
In what follows, we will exhibit how to solve the proposed non-convex sparse model by introducing the PADMM based algorithm, as well as give the theoretical analysis of the convergence.

3.2. The Solution

For the non-convex $\ell_0$ regularization term, many approaches exist to approximate it, e.g., the $\ell_1$-norm [38], the logarithm function [39] or the penalty decomposition algorithm (PDA) [40]. In this work, we first present a result in Lemma 1, i.e., the mathematical program with equilibrium constraints (MPEC) [36], to transfer the non-convex $\ell_0$ regularization term into an equivalent one. Then, we design a PADMM based algorithm to efficiently solve the equivalent model, while theoretically guaranteeing convergence.
Lemma 1
(MPEC equation [36]). For any given $w \in \mathbb{R}^n$, it holds that
$$ \| w \|_0 = \min_{0 \le v \le 1} \; \langle \mathbf{1}, \mathbf{1} - v \rangle, \quad \mathrm{s.t.} \;\; v \odot |w| = 0, \qquad (14) $$
where $\langle \cdot, \cdot \rangle$ represents the inner product of two vectors, and $\odot$ denotes the elementwise product. $|w|$ stands for the vector of elementwise absolute values of $w$. Then, $v^* = \mathbf{1} - \mathrm{sign}(|w|)$ is the unique optimal solution of the minimization problem in Equation (14), with
$$ \mathrm{sign}(x) = \begin{cases} 1, & x > 0, \\ 0, & x = 0, \\ -1, & x < 0. \end{cases} $$
Proof. 
See details in [36]. ☐
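Lemma 1 can be checked numerically on a small vector; the sketch below (with an arbitrary example vector of our choosing) evaluates the closed-form minimizer $v^* = \mathbf{1} - \mathrm{sign}(|w|)$:

```python
import numpy as np

w = np.array([0.0, -1.5, 0.0, 2.0, 0.3])
v_star = 1 - np.sign(np.abs(w))     # v* = 1 - sign(|w|), Lemma 1

# <1, 1 - v*> recovers the l0 norm of w, and the equilibrium
# constraint v* o |w| = 0 is satisfied.
print(np.sum(1 - v_star))                 # 3.0, i.e. ||w||_0
print(np.all(v_star * np.abs(w) == 0))    # True
```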
From Lemma 1, the $\ell_0$-norm sparse optimization model in Equation (13) is equivalent to
$$ \min_{0 \le v \le 1, \; s} \; \langle \mathbf{1}, \mathbf{1} - v \rangle + \mu \| s \|_1 + \lambda \| \partial_x (b - s) \|_1, \quad \mathrm{s.t.} \;\; v \odot | \partial_y s | = 0. \qquad (15) $$
According to the analysis in [36], if $s^*$ is the globally optimal solution of Equation (13), then $( s^*, \mathbf{1} - \mathrm{sign}( | \partial_y s^* | ) )$ is the unique global minimizer of Equation (15).
Note that Equation (15) is still a non-convex problem, and the non-convexity is caused only by the constraint $v \odot | \partial_y s | = 0$. However, the problem in Equation (15) is similar to the main problem in [36], which is efficiently solved by a PADMM based algorithm with a theoretical convergence guarantee. Therefore, we employ the designed PADMM based algorithm to solve the resulting problem in Equation (15), and give the corresponding convergence analysis.
In the following, we use the PADMM based algorithm to solve the optimization problem in Equation (15).

3.3. PADMM Based Algorithm

Considering the non-smooth $\ell_1$ terms in the problem in Equation (15), we introduce the following variable substitutions to obtain the new optimization problem,
$$ \min_{0 \le v \le 1, \; s, h, z, w} \; \langle \mathbf{1}, \mathbf{1} - v \rangle + \mu \| z \|_1 + \lambda \| w \|_1, \quad \mathrm{s.t.} \;\; v \odot |h| = 0, \;\; \partial_y s = h, \;\; s = z, \;\; \partial_x (b - s) = w, \qquad (16) $$
with the auxiliary variables $h, z, w \in \mathbb{R}^n$. The augmented Lagrangian function $L$ of Equation (16) is as follows
$$ \begin{aligned} L(h, z, w, v, s, \pi_1, \pi_2, \pi_3, \pi_4) = {} & \langle \mathbf{1}, \mathbf{1} - v \rangle + \mu \| z \|_1 + \lambda \| w \|_1 + \langle \partial_y s - h, \pi_1 \rangle + \frac{\beta_1}{2} \| \partial_y s - h \|_2^2 \\ & + \langle s - z, \pi_2 \rangle + \frac{\beta_2}{2} \| s - z \|_2^2 + \langle \partial_x (b - s) - w, \pi_3 \rangle + \frac{\beta_3}{2} \| \partial_x (b - s) - w \|_2^2 \\ & + \langle v \odot |h|, \pi_4 \rangle + \frac{\beta_4}{2} \| v \odot |h| \|_2^2, \end{aligned} \qquad (17) $$
where $\pi_1, \pi_2, \pi_3, \pi_4$ are Lagrange multipliers, and $\beta_1, \beta_2, \beta_3, \beta_4$ are positive parameters. The minimization problem in Equation (17) can be solved by the PADMM based algorithm. Next, we discuss the solution of each subproblem.
(1) The $h$-subproblem can be written as the minimization problem
$$ \min_h \; \langle \partial_y s^k - h, \pi_1^k \rangle + \frac{\beta_1}{2} \| \partial_y s^k - h \|_2^2 + \langle v^k \odot |h|, \pi_4^k \rangle + \frac{\beta_4}{2} \| v^k \odot |h| \|_2^2. \qquad (18) $$
Let $h_i$ denote the $i$-th element of $h$; we discuss the two cases with $h_i \neq 0$. If $h_i > 0$,
$$ h_i = \frac{ \beta_1 (\partial_y s^k)_i + (\pi_1^k)_i - (\pi_4^k)_i (v^k)_i }{ \beta_1 + \beta_4 (v^k)_i (v^k)_i }; \qquad (19) $$
if $h_i < 0$,
$$ h_i = (-1) \, \frac{ \big| \beta_1 (\partial_y s^k)_i + (\pi_1^k)_i \big| - (\pi_4^k)_i (v^k)_i }{ \beta_1 + \beta_4 (v^k)_i (v^k)_i }. \qquad (20) $$
In summary, the $h$-subproblem has the closed-form solution
$$ h^{k+1} = \mathrm{sign}(q^k) \odot \frac{ | q^k | - \pi_4^k \odot v^k }{ \beta_1 + \beta_4 \, v^k \odot v^k }, \qquad (21) $$
where $q^k = \beta_1 \partial_y s^k + \pi_1^k$.
(2) The $z$-subproblem is given as follows
$$ \min_z \; \mu \| z \|_1 + \langle s^k - z, \pi_2^k \rangle + \frac{\beta_2}{2} \| s^k - z \|_2^2, \qquad (22) $$
which has the closed-form solution given by the soft-thresholding strategy [41]
$$ z^{k+1} = \mathrm{Shrink}\!\left( s^k + \frac{\pi_2^k}{\beta_2}, \; \frac{\mu}{\beta_2} \right), \qquad (23) $$
where $\mathrm{Shrink}(a, T) = \mathrm{sign}(a) \odot \max( |a| - T, 0 )$.
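The soft-thresholding operator used in Equations (23) and (25) is a one-liner in practice; this sketch follows the definition above (the example input values are arbitrary):

```python
import numpy as np

def shrink(a, t):
    """Elementwise soft-thresholding: sign(a) * max(|a| - t, 0)."""
    return np.sign(a) * np.maximum(np.abs(a) - t, 0.0)

# Entries with magnitude below the threshold are set to zero;
# the rest are shrunk toward zero by t.
print(shrink(np.array([-2.0, -0.5, 0.0, 0.3, 1.5]), 1.0))
```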
(3) Similar to the $z$-subproblem, the $w$-subproblem is written as follows
$$ \min_w \; \lambda \| w \|_1 + \frac{\beta_3}{2} \left\| \partial_x (b - s^k) - w + \frac{\pi_3^k}{\beta_3} \right\|_2^2. \qquad (24) $$
The problem in Equation (24) has the following closed-form solution by the soft-shrinkage formulation,
$$ w^{k+1} = \mathrm{Shrink}\!\left( p^k, \; \frac{\lambda}{\beta_3} \right), \qquad (25) $$
where $p^k = \partial_x (b - s^k) + \frac{\pi_3^k}{\beta_3}$.
(4) The $v$-subproblem can be written as follows
$$ \min_{0 \le v \le 1} \; -\langle v, c^k \rangle + \frac{\beta_4}{2} \| v \odot | h^{k+1} | \|_2^2, \qquad (26) $$
where $c^k = \mathbf{1} - \pi_4^k \odot | h^{k+1} |$. Combined with the constraint $0 \le v \le 1$, it has the closed-form solution
$$ v^{k+1} = \min\!\left( \mathbf{1}, \; \max\!\left( \mathbf{0}, \; \frac{c^k}{\beta_4 \, | h^{k+1} | \odot | h^{k+1} |} \right) \right). \qquad (27) $$
(5) The PADMM based algorithm introduces an extra convex proximal term $\frac{1}{2} \| s - s^k \|_D^2$, where $\| x \|_D^2 = x^T D x$ and $D$ is a symmetric positive definite matrix. The $s$-subproblem then becomes the strongly convex optimization problem
$$ \begin{aligned} \min_s \; & \langle \partial_y s - h^{k+1}, \pi_1^k \rangle + \frac{\beta_1}{2} \| \partial_y s - h^{k+1} \|_2^2 + \langle s - z^{k+1}, \pi_2^k \rangle + \frac{\beta_2}{2} \| s - z^{k+1} \|_2^2 \\ & + \langle \partial_x (b - s) - w^{k+1}, \pi_3^k \rangle + \frac{\beta_3}{2} \| \partial_x (b - s) - w^{k+1} \|_2^2 + \frac{1}{2} \| s - s^k \|_D^2, \end{aligned} \qquad (28) $$
where
$$ D = \frac{1}{\kappa} I - \left( \beta_1 \partial_y^T \partial_y + \beta_2 I + \beta_3 \partial_x^T \partial_x \right), \quad \kappa \in \left( 0, \; \frac{1}{ \beta_1 \| \partial_y \|_2^2 + \beta_2 + \beta_3 \| \partial_x \|_2^2 } \right). $$
Then, Equation (28) is equivalent to
$$ s^{k+1} = \arg\min_s \; \frac{1}{2} \| s - g^k \|_2^2, \qquad (29) $$
where $g^k = s^k - \kappa \left[ \beta_1 \partial_y^T ( \partial_y s^k - h^{k+1} ) + \beta_2 ( s^k - z^{k+1} ) - \beta_3 \partial_x^T ( \partial_x b - \partial_x s^k - w^{k+1} ) \right]$.
(6) Finally, we update the Lagrangian multipliers by
$$ \begin{aligned} \pi_1^{k+1} &= \pi_1^k + \beta_1 ( \partial_y s^{k+1} - h^{k+1} ), \\ \pi_2^{k+1} &= \pi_2^k + \beta_2 ( s^{k+1} - z^{k+1} ), \\ \pi_3^{k+1} &= \pi_3^k + \beta_3 ( \partial_x ( b - s^{k+1} ) - w^{k+1} ), \\ \pi_4^{k+1} &= \pi_4^k + \beta_4 ( v^{k+1} \odot | h^{k+1} | ). \end{aligned} \qquad (30) $$
Combining Steps (1)–(6), we obtain the final algorithm for iteratively solving the proposed $\ell_0$ sparse model in Equation (13). In particular, all subproblems have closed-form solutions, which ensures the accuracy of the algorithm. The solving process is summarized in Algorithm 1.
In Algorithm 1, $\lambda$, $\mu$, $\beta_1$, $\beta_2$, $\beta_3$, and $\beta_4$ are pre-defined parameters, while $tol$ and $Miter$ represent the positive tolerance value and the maximum number of iterations, respectively. In this work, we set $tol = 1/255$ and $Miter = 10^3$. In the following, we discuss the convergence of Algorithm 1.
Algorithm 1 The algorithm for the model in Equation (13)
Input: The observed image $b$ (with stripes), the parameters $\lambda$, $\mu$, $\beta_i$, $i = 1, 2, 3, 4$,
  the constant $\kappa \in \left( 0, \frac{1}{ \beta_1 \| \partial_y \|_2^2 + \beta_2 + \beta_3 \| \partial_x \|_2^2 } \right)$, the maximum number of iterations $Miter$,
  and the calculation accuracy $tol$.
Output: The stripes $s$
Initialize:
(1) $k \leftarrow 0$, $v^0 \leftarrow \mathbf{1}$, $s^0 \leftarrow b$, $rho \leftarrow 1$
While $rho > tol$ and $k < Miter$
(2) $k \leftarrow k + 1$
(3) Solve $h^k$ by Equation (21)
(4) Solve $z^k$ by Equation (23)
(5) Solve $w^k$ by Equation (25)
(6) Solve $v^k$ by Equation (27)
(7) Solve $s^k$ by Equation (29)
(8) Update the multipliers $\pi_i$, $i = 1, 2, 3, 4$, by Equation (30)
(9) Calculate
   $rho = \| \partial_y s^{k+1} - h^{k+1} \|_2^2 + \| s^{k+1} - z^{k+1} \|_2^2 + \| \partial_x ( b - s^{k+1} ) - w^{k+1} \|_2^2 + \| v^{k+1} \odot | h^{k+1} | \|_2^2$
Endwhile
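The loop above can be sketched compactly in NumPy. This is an illustrative re-implementation, not the authors' released code: the parameter values, the zero-boundary forward-difference operators and their adjoints, and the spectral bound $\|\partial\|_2^2 \le 4$ used to pick $\kappa$ are all our own choices, and, following the paper's $g^k$, the multipliers enter only through the closed-form updates and Equation (30).

```python
import numpy as np

def dy(a):
    # forward difference along the stripe direction (down the columns)
    out = np.zeros_like(a)
    out[:-1] = a[1:] - a[:-1]
    return out

def dyT(a):
    # adjoint of dy (zero boundary)
    out = np.zeros_like(a)
    out[0] = -a[0]
    out[1:-1] = a[:-2] - a[1:-1]
    out[-1] = a[-2]
    return out

def dx(a):
    # forward difference across the stripes (along the rows)
    return dy(a.T).T

def dxT(a):
    return dyT(a.T).T

def shrink(a, t):
    # soft-thresholding, Equations (23) and (25)
    return np.sign(a) * np.maximum(np.abs(a) - t, 0.0)

def destripe_padmm(b, mu=0.01, lam=1.0, betas=(1.0, 1.0, 1.0, 10.0),
                   tol=1/255, max_iter=1000):
    """Estimate the stripe component s of b; the destriped image is b - s."""
    b1, b2, b3, b4 = betas
    kappa = 0.9 / (4*b1 + b2 + 4*b3)     # ||dy||^2, ||dx||^2 <= 4
    s, v = b.copy(), np.ones_like(b)
    h, z, w = np.zeros_like(b), np.zeros_like(b), np.zeros_like(b)
    p1, p2, p3, p4 = (np.zeros_like(b) for _ in range(4))
    for _ in range(max_iter):
        q = b1*dy(s) + p1                                    # Equation (21)
        h = np.sign(q) * (np.abs(q) - p4*v) / (b1 + b4*v*v)
        z = shrink(s + p2/b2, mu/b2)                         # Equation (23)
        w = shrink(dx(b - s) + p3/b3, lam/b3)                # Equation (25)
        c = 1.0 - p4*np.abs(h)                               # Equation (27)
        v = np.clip(c / (b4*h*h + 1e-12), 0.0, 1.0)
        s = s - kappa*(b1*dyT(dy(s) - h) + b2*(s - z)        # Equation (29)
                       - b3*dxT(dx(b - s) - w))
        p1 = p1 + b1*(dy(s) - h)                             # Equation (30)
        p2 = p2 + b2*(s - z)
        p3 = p3 + b3*(dx(b - s) - w)
        p4 = p4 + b4*(v*np.abs(h))
        rho = (np.sum((dy(s) - h)**2) + np.sum((s - z)**2)
               + np.sum((dx(b - s) - w)**2) + np.sum((v*np.abs(h))**2))
        if rho < tol:
            break
    return s
```

Note that `dyT` is the exact adjoint of `dy` (so $\langle \partial_y a, c \rangle = \langle a, \partial_y^T c \rangle$), which the linearized $s$-update in Equation (29) relies on.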

4. Experiment Results

In this section, we compare the proposed method with several competitive destriping methods, including the wavelet Fourier adaptive filter (WFAF) [3], the statistical linear destriping (SLD) [31], the unidirectional total variation model (UTV) [17], the global sparsity and local variational (GSLV) [26], and the Low-Rank Single-Image Decomposition (LRSID) [27], on both simulated and real remote sensing data. The codes of these methods, except the GSLV method, are available on the website (http://www.escience.cn/people/changyi/codes.html). Moreover, we provide the code of the proposed method on the website (http://www.escience.cn/people/dengliangjian/codes.html). As suggested in [27], we utilize the same periodic/nonperiodic stripe-generation function, adding stripes with intensity in [0, 255] to the underlying images. Following the same measure as in [27], the degraded images are normalized to [0, 1]. All experiments are conducted in MATLAB (R2016a) on a desktop with 16 GB RAM and an Intel(R) Core(TM) i5-4590 CPU @ 3.30 GHz.
To evaluate the effects of different destriping methods, we compare several qualitative and quantitative assessments. On the qualitative side, we show the visual results, the mean cross-track profile and the power spectrum of the different methods. We also employ several acknowledged indexes, i.e., peak signal-to-noise ratio (PSNR) [42], structural similarity index (SSIM) [42], the inverse coefficient of variation (ICV) [15], mean relative deviation (MRD) [15], and the relative error (ReErr), to evaluate the performance of the different approaches. These indexes are defined as follows:
$$ \mathrm{PSNR} = 10 \log_{10} \frac{ 255^2 \times n }{ \| \hat{u} - u \|_2^2 }, \qquad (31) $$
where $\hat{u}$ and $u$ are the estimated and original underlying images, respectively, and $n$ represents the total number of pixels in an image.
$$ \mathrm{SSIM} = \frac{ ( 2 \mu_x \mu_y + C_1 ) ( 2 \sigma_{xy} + C_2 ) }{ ( \mu_x^2 + \mu_y^2 + C_1 ) ( \sigma_x^2 + \sigma_y^2 + C_2 ) }, \qquad (32) $$
where $\mu_x$ and $\mu_y$ represent the average values of images $x$ and $y$, $\sigma_x^2$ and $\sigma_y^2$ their variances, and $\sigma_{xy}$ the covariance of the two images. $C_1$ and $C_2$ are constants.
$$ \mathrm{ReErr} = \frac{ \| s_{added} - s_{estimated} \|_2 }{ \| s_{added} \|_2 }, \qquad (33) $$
where $s_{added}$ and $s_{estimated}$ represent the added stripes and the stripes estimated by the different methods, respectively.
$$ \mathrm{ICV} = \frac{ R_a }{ R_{sd} }, \qquad (34) $$
where $R_a$ is the average value of the pixels within a window in a homogeneous image region, and $R_{sd}$ is the standard deviation of the pixels of the patch in this window. ICV is calculated over homogeneous striped regions of the estimated underlying image.
$$ \mathrm{MRD} = \frac{1}{n} \sum_{i=1}^{n} \frac{ | \hat{u}_i - u_i | }{ u_i } \times 100\%, \qquad (35) $$
where $\hat{u}_i$ and $u_i$ are the $i$-th pixels of the estimated and original underlying images, respectively. The MRD index measures how well the noise-free regions are retained. Among these indexes, ICV and MRD are no-reference indexes, and the others are full-reference indexes. Since the stripes of the real images are dense, it is difficult to select samples with a 10 × 10 window. In this paper, we select five different samples within a 6 × 6 window, and the mean ICV (MICV) and the mean MRD (MMRD) are reported in Table 4 for the six real experiments. We then discuss how to select parameters. Note that we test the compared methods with the default or suggested parameters from their papers and codes.
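A sketch of the full-reference indexes above (function names are ours; note the SSIM here uses global whole-image statistics for brevity, whereas [42] computes them over local sliding windows):

```python
import numpy as np

def psnr(u_hat, u):
    # Equation (31), for images with dynamic range [0, 255]
    return 10 * np.log10(255.0**2 * u.size / np.sum((u_hat - u)**2))

def ssim_global(x, y, c1=(0.01*255)**2, c2=(0.03*255)**2):
    # Equation (32) with global (whole-image) statistics
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2*mx*my + c1) * (2*cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

def re_err(s_added, s_estimated):
    # Equation (33)
    return np.linalg.norm(s_added - s_estimated) / np.linalg.norm(s_added)

def mrd(u_hat, u):
    # Equation (35), in percent
    return np.mean(np.abs(u_hat - u) / u) * 100.0

u = np.full((8, 8), 100.0)
print(psnr(u + 1.0, u))     # about 48.13 dB (MSE = 1)
print(ssim_global(u, u))    # 1.0 for identical images
```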

4.1. Simulated Experiments

In the simulated experiments, the stripes, either periodic (Per) or nonperiodic (NonPer), are mainly determined by “Intensity” and r. Here, “Intensity” means the absolute value of the added stripe magnitude, and r represents the stripe ratio level within the remote sensing images. For convenience of comparison, the stripes added to a remote sensing image are denoted as a vector with three elements; e.g., (Per, 10, 0.2) represents periodic stripes with “Intensity” 10 and stripe ratio 0.2.
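A sketch of such a stripe generator follows. This is our own illustrative re-implementation of the (Per/NonPer, Intensity, r) convention, not the exact generator of [27]:

```python
import numpy as np

def add_stripes(u, periodic=True, intensity=10.0, r=0.2, seed=0):
    """Add column stripes to image u.

    Each striped column receives a random offset drawn from
    [-intensity, intensity], constant along the column; r is the
    fraction of striped columns. Periodic stripes are placed every
    round(1/r)-th column, nonperiodic ones at random columns."""
    rng = np.random.default_rng(seed)
    n_cols = u.shape[1]
    n_striped = int(round(r * n_cols))
    if periodic:
        cols = np.arange(0, n_cols, max(int(round(1 / r)), 1))[:n_striped]
    else:
        cols = rng.choice(n_cols, size=n_striped, replace=False)
    s = np.zeros_like(u, dtype=float)
    s[:, cols] = rng.uniform(-intensity, intensity, size=n_striped)
    return u + s, s

b, s = add_stripes(np.full((32, 32), 100.0), periodic=False, r=0.25)
print(int(np.any(s != 0, axis=0).sum()))  # 8 striped columns
print(np.abs(s).max() <= 10.0)            # True
```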
We take six experimental images to test the performance of the different methods: the first and second are available from “DigitalGlobe” (http://www.digitalglobe.com/product-samples), the sixth is available at https://earthexplorer.usgs.gov/, and the third, fourth and fifth are MODIS data available at https://ladsweb.nascom.nasa.gov/. The experimental images include: Washington DC Mall band 30, MODIS band 20, MODIS band 32, and two small parts of the downloaded “Map Scale Ortho” image (“Map Scale Ortho” can be freely downloaded from DigitalGlobe, http://www.digitalglobe.com/resources/product-samples/30cm-imagery; we use only two small parts of this image, acquired by the WorldView-3 satellite, since the full “Map Scale Ortho” image is too large). To compare the methods clearly, we zoom in on destriping details at the bottom left or bottom right of each image.

4.1.1. Periodic Stripes

For the periodic stripes case, we take only one example, i.e., the first column of Figure 3 with added stripes (Per, 10, 0.2), to compare performance. Most existing methods remove such stripes quite effectively due to the simple structure of periodic stripes. The first column of Figure 3 demonstrates the consistent conclusion that all compared approaches remove the periodic stripes and preserve the image details of stripe-free regions well.

4.1.2. Nonperiodic Stripes

For the nonperiodic stripes case, we test five remote sensing images, from the second column to the last column in Figure 3, with added stripes (NonPer, 100, 0.6), (NonPer, 50, 0.2), (NonPer, 60, 0.4), (NonPer, 100, 0.4) and (NonPer, 50, 0.6), respectively. We then display the destriping results of WFAF, SLD, UTV, GSLV, LRSID and the proposed method for the different simulated images from the third row to the last row. In the visual results of the second column, the WFAF result has an obvious black line and changes the intensity contrast of the underlying image. Although the other compared methods can remove stripes, some regions on the left and right parts change the intensity contrast of the underlying image, whereas the proposed method shows good performance. From the third to the sixth examples, we can clearly observe residual stripes and blurring effects produced by the other compared methods. Our method, in contrast, not only removes stripes completely but also preserves image details well. In Figure 4, we display smaller patches of Figure 3 for visual quality comparison, and our results perform better than the others.
Figure 5 shows the estimated stripes based on Figure 3. We can see that the other methods may generate blurring effects and change the intensity contrast, while the stripes estimated by the proposed method neither remove image structures nor introduce blurring effects, for both the periodic and nonperiodic stripes cases.
In Table 1, we report the ReErr between the added stripes and the stripes estimated in Figure 5 from the quantitative aspect. The PSNR values for the different images are also shown in Table 1, and it is easy to see that our results outperform the other methods.

4.1.3. Average Quantitative Performance on 32 Test Images

To quantitatively test robustness and effectiveness of the proposed method, Table 2 and Table 3 report the averagely quantitative comparisons of 32 remote sensing images, which are randomly selected from three websites (“DigitalGlobe” with http://www.digitalglobe.com/product-samples; some subimages of “hyperspectral image of Washington DC Mall” with https://engineering.purdue.edu/~biehl/MultiSpec/; “MODIS” data with https://ladsweb.nascom.nasa.gov/). In the tables, the best PSNR and SSIM results have been highlighted in bold. Especially, we compare these methods on 32 remote sensing images with fixed parameters for each method.
Table 2 shows the PSNR and SSIM results on periodic stripes with different stripe levels. Although variance of PSNR is not the smallest, the SSIM of the proposed method holds the best performance, and SSIM is an important index to indicate stability on structural similarity of one method. Therefore, the proposed method can improve the PSNR and SSIM results to remove stripes.
For the nonperiodic stripes, the mean values are reported in Table 3. The WFAF method shows instability, and the PSNR and SSIM of the LRSID method are consistent with the results in [27]. From the two tables, our method consistently performs well.
In Figure 6, we take two cases from Table 2 and Table 3 to show the PSNR and SSIM of all compared methods on each image. The y-axis stands for the PSNR or SSIM value, and the x-axis represents the i-th of the 32 images. Figure 6I,II show the PSNR and SSIM for the stripes (Per, 100, 0.6), and Figure 6III,IV show those for (NonPer, 50, 0.2). Although the PSNR results fluctuate across images, our method achieves the best PSNR on most images. Moreover, our SSIM results are the best and have the smallest variance, consistent with Table 2 and Table 3. From Figure 6, our method is superior to the other compared methods.

4.2. Real Experiments

We also display the destriping results of the six methods on six real remote sensing images, which are also available on the website (https://ladsweb.nascom.nasa.gov/) (see Figure 7). The six real images are: MODIS band 34, original band 28, original Aqua MODIS band 30, MODIS band 136, original MODIS Terra band 27, and Terra Africa MODIS band 33. As in Figure 3, the six real images with different stripes are shown in the first row, and the destriping results of all compared methods are presented from the second row to the last row.
In Figure 8, we show the stripe components extracted by the compared methods for the six real images in Figure 7. It seems that the proposed method removes too much background information in the real image experiments. Moreover, the non-reference indexes MICV and MMRD are reported in Table 4, with the best results highlighted. The proposed method performs well in most cases. Although our results on the first two images are not the best, the differences among the compared methods are small. The MICV and MMRD indexes depend on the selected patches and the window size, and different choices may change the two indexes; here, we use them to partially evaluate the quantitative performance of real image destriping.
In Figure 7, for the first, fifth and last real images, the proposed method not only removes the stripes completely but also preserves image details in stripe-free regions well. Note that GSLV and LRSID fail to obtain desired results for the first image, as mentioned in their papers. For the fourth column, there are several stripe residuals with WFAF and SLD, and wide black shadow areas appear with UTV, GSLV and LRSID. Moreover, the destriping results of WFAF and SLD leave obvious stripes for the second image, and wispy stripes remain in the third example. These real experiments demonstrate the general effectiveness and stability of the proposed method.

4.3. More Analysis

4.3.1. Qualitative Analysis

For further comparison of the destriping methods on simulated and real remote sensing images, we use the following two assessments. One is the mean cross-track profile, in which the x-axis stands for the column number of an image and the y-axis represents the mean value of each column (see Figure 9 and Figure 10). The other is the power spectrum, in which the x-axis shows the normalized frequencies of an image and the y-axis shows the spectral magnitude on a logarithmic scale (see Figure 11 and Figure 12).
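Both assessments are simple to compute. The sketch below is one plausible implementation; the mean removal before the FFT and the logarithmic offset are our implementation choices, not details specified by the paper.

```python
import numpy as np

def mean_cross_track_profile(img):
    # average each column: column-wise stripes appear as sharp spikes
    # in this profile, while a clean image yields a smooth curve
    return img.mean(axis=0)

def profile_power_spectrum(img):
    # power spectrum of the (mean-removed) cross-track profile; periodic
    # stripes show up as peaks at their normalized frequencies
    p = img.mean(axis=0)
    spec = np.abs(np.fft.rfft(p - p.mean())) ** 2
    freqs = np.fft.rfftfreq(p.size)  # normalized frequencies in [0, 0.5]
    return freqs, np.log10(spec + 1e-12)  # log scale, as in the figures
```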
In the simulated experiments, the mean cross-track profile of the first image of Figure 3 is shown in Figure 9. Figure 9a shows the mean cross-track profile of the underlying image, and Figure 9b that of the degraded image. Figure 9c–h shows the mean cross-track profiles of the six destriping methods, respectively. From the overall perspective, Figure 9d,e shows an obvious change of the intensity contrast. Looking at the details, Figure 9c–g has some mild fluctuations that differ from the underlying image in Figure 9a, whereas the profile of the proposed method is almost the same as that of the original.
In addition, the power spectrum results of the second image of Figure 3 are shown in Figure 11. Figure 11a–h represents the power spectra of the underlying image, the degraded image and the destriping results of the six methods, respectively. Figure 11c–g has more fluctuations, indicating that these methods may leave stripe residuals or introduce a little new noise in their destriping processes. Our method (Figure 11h) not only removes all stripes but also preserves almost all essential details such as edges.
In the real experiments, we also show the mean cross-track profile and the power spectrum, in Figure 10 and Figure 12, respectively. Figure 10 shows the mean cross-track profiles for the first column of Figure 7. Figure 10a is the profile of the first real remote sensing image, and Figure 10b–g shows the profiles of the six destriping methods, respectively. In general, the profile of a destriping method should smooth out large fluctuations while maintaining the primary structural information. However, the profiles of WFAF and LRSID have obvious fluctuations where stripes still exist, and that of SLD is over-smoothed, losing many underlying image details. In Figure 10d,e, although the stripes are mostly removed, the profiles show mild blur and excessive smoothness, owing to the unidirectional property of UTV and the global sparsity of GSLV, respectively. The profile of the proposed method (Figure 10g) achieves the desired result, both removing the stripes and keeping the underlying image details.
In Figure 12, the power spectrum results of the fourth example of Figure 7 are plotted. Figure 12a–g represents the power spectra of the fourth real remote sensing image and the six destriping methods, respectively. The real image in Figure 12a has many fluctuations, which correspond to stripes. According to the power spectra of the other compared methods in Figure 12b–f, although the stripes are almost removed, some slight blurring remains, while the proposed method shows the desired performance in Figure 12g.

4.3.2. The Influence of Different Regularization Terms in the Proposed Model

Considering the destriping problem in Equation (2) and the optimization model in Equation (13), we regard R2 as a necessary term, since it is the only term describing the property of the underlying image u. To confirm whether R1 and R3 are also necessary priors and contribute significantly to the destriping performance, Figure 13 gives the mean PSNR and SSIM over the same 32 images as before. Here, R12 represents R1 + R2, R23 stands for R2 + R3, and R123 represents R1 + R2 + R3 (i.e., the proposed model). The definitions of R1, R2 and R3 are given in Equations (10)–(12), respectively.
Figure 13I,II show the mean PSNR and SSIM over the 32 images for periodic stripes; the stripe levels in Figure 13a–f are (Per, 10, 0.2), (Per, 10, 0.6), (Per, 50, 0.2), (Per, 50, 0.6), (Per, 100, 0.2) and (Per, 100, 0.6), respectively. Figure 13III,IV display the mean PSNR and SSIM over the 32 images for nonperiodic stripes; the stripe levels in Figure 13a–f stand for (NonPer, 10, 0.2), (NonPer, 10, 0.6), (NonPer, 50, 0.2), (NonPer, 50, 0.6), (NonPer, 100, 0.2) and (NonPer, 100, 0.6), respectively.
From the results in Figure 13, we can draw three conclusions. (1) The PSNR of the proposed model (i.e., R123) is the best compared with the other two models (i.e., R12 and R23), and the SSIM behaves similarly. (2) R23 is more stable than R12, as the green bars do not change significantly across different stripes. (3) R3 plays a more important role than R1 with respect to PSNR (see Figure 13I,III); on the contrary, R1 plays a more important role than R3 with respect to SSIM (see Figure 13II,IV). Figure 13 demonstrates the effectiveness of the proposed model and the importance of all three terms.

5. Discussion

5.1. Parameter Value Selection

The proposed method mainly involves six parameters: λ, μ, β1, β2, β3 and β4. Stripes of different types can be removed by setting different parameters; for example, if the stripes are dense, μ should be small and λ should be large.
We tune the parameters by the following strategy. Figure 14 shows contour maps of PSNR values for different parameter settings on one example. In Figure 14I, we tune the regularization parameters λ and μ while fixing β1, β2, β3 and β4; the yellow region gives the highest PSNR, and the orange region the next highest. From such contour maps for different examples, we empirically set λ within [0.1, 10] and μ within [0.1, 1]. Similarly, from Figure 14II, we set β2 and β3 within [1, 10], since most of the PSNR results there are similar. Then, β1 is empirically set within [1, 10²] and β4 within [1, 10³]. We tune these parameters on a logarithmic grid with a factor of 10 within each range; for instance, when tuning β4, we choose β4 = 1, 10, 100, 1000. Note that the parameters are not chosen automatically or adaptively for different images; nevertheless, we obtain good performance in most situations. For most of the simulated experiments, we set λ = 1, μ = 0.1, β1 = 100, β2 = β3 = 10 and β4 = 1000. For the real experiments, most examples show superior results with λ = 10, μ = 1 and β1 = β2 = β3 = β4 = 1. Fine-tuning the parameters for each image based on this strategy would yield better results; however, we unify the parameters to simplify the parameter-setting process.
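The factor-of-10 tuning strategy can be sketched as follows; `destripe` and `score` are placeholders for the proposed solver and a quality metric such as PSNR, not the authors' implementation.

```python
import itertools
import numpy as np

# log-spaced grids matching the ranges described above
LAM_GRID   = [0.1, 1, 10]          # lambda in [0.1, 10]
MU_GRID    = [0.1, 1]              # mu in [0.1, 1]
BETA4_GRID = [1, 10, 100, 1000]    # beta_4 in [1, 10^3]

def tune(degraded, clean, destripe, score):
    """Return (best score, best (lambda, mu, beta_4)) over the log grid."""
    best = (-np.inf, None)
    for lam, mu, b4 in itertools.product(LAM_GRID, MU_GRID, BETA4_GRID):
        result = destripe(degraded, lam=lam, mu=mu, beta4=b4)
        best = max(best, (score(result, clean), (lam, mu, b4)))
    return best
```

In practice, the other β parameters would be swept the same way; for real images there is no clean reference, so a non-reference index would replace `score`.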

5.2. Limitation

Table 5 reports the computational cost of two experiments. Since the proposed method iteratively solves the ℓ0-norm sub-problems with the designed algorithm, it indeed increases the computational burden compared with the other methods. In particular, some statistics-based methods are very fast; for example, the SLD method takes only 0.046 s (for an image of size 300 × 300). It is not easy to achieve a trade-off between high speed and high accuracy, and the higher time cost may be viewed as a limitation of the proposed method.

6. Conclusions

In this paper, we proposed a directional non-convex ℓ0 sparse model for remote sensing image destriping. We considered the directional and sparse characteristics of the stripes and the underlying image from both local and global aspects. The non-convex ℓ0 norm is solved effectively in an iterative manner rather than by hard thresholding. The model was efficiently solved by the designed PADMM algorithm based on the MPEC reformulation, and we theoretically proved the convergence of the algorithm to a KKT point. Experimental results on simulated and real data demonstrated the effectiveness of the proposed method, both quantitatively and visually. In particular, the proposed method obtained better quantitative results (e.g., PSNR and SSIM) than some recent competitive methods on the average performance over 32 collected images. The power spectrum and spatial mean cross-track profiles were also employed to illustrate the superiority of the method. Besides, for the real images, we employed non-reference indexes such as MICV and MMRD to compare the performance of the different methods, which also demonstrated the effectiveness of our method.
In the future, we will try to speed up the proposed method using the C language or parallel computing, and release the corresponding source code. In addition, we will extend the proposed model to oblique stripe removal by fully considering the latent properties of oblique stripes. Furthermore, the proposed method was only applied to single-band image destriping; we may extend our framework to multispectral or hyperspectral image destriping by exploiting intrinsic properties, e.g., low-rank and non-local priors.

Acknowledgments

This research is supported by NSFC (61772003 and 61702083) and the Fundamental Research Funds for the Central Universities (ZYGX2016KYQD142, ZYGX2016J132, ZYGX2016J129).

Author Contributions

Hongxia Dou designed the framework, made the experiments and wrote the manuscript. Tingzhu Huang assisted in the prepared work. Liangjian Deng, Xile Zhao, and Jie Huang analyzed the results. Moreover, all coauthors moderated and improved the presentation of the manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Convergence of the Proposed Method

In fact, the global convergence of the ADMM algorithm has been proved under some conditions [43], and that of the generalized ADMM is verified in [44]. Wen et al. [45] show that the sequence generated by ADMM can converge to a KKT point. Moreover, some studies establish the convergence of the proximal ADMM (PADMM); see [36,46]. For our non-convex optimization model, convergence to a stationary point (local minimum) is the best convergence property one can expect. Similarly, in this paper, we design a PADMM-based algorithm to solve the remote sensing image destriping problem and prove that the proposed algorithm converges to a KKT point. Here, the limit of a vector sequence is defined pointwise: for $x^k = (x_1^k, \ldots, x_n^k)^T$, $\lim_{k \to \infty} x^{k+1} - x^k = 0$ means that $\lim_{k \to \infty} x_i^{k+1} - x_i^k = 0$ for $i = 1, \ldots, n$.
Theorem A1 (Convergence of Algorithm 1).
Let $P \triangleq (h, z, w, v, s)$ and $Q \triangleq (\pi_1, \pi_2, \pi_3, \pi_4)$, and let $\{P^k, Q^k\}_{k=1}^{\infty}$ be the sequence of solutions generated by Algorithm 1. Assume that $\lim_{k \to \infty} (Q^{k+1} - Q^k) = 0$ and $\lim_{k \to \infty} s^{k+1} - s^k = 0$. Then any accumulation point of a subsequence of $\{P^k, Q^k\}$ is a KKT point, i.e., it satisfies the KKT conditions.
Proof. 
For convenience, we define
$$\Delta \triangleq \{ z \mid 0 \le z \le 1 \}.$$
Recall our optimization model
$$\min_{v \in \Delta,\, s}\ \langle \mathbf{1}, \mathbf{1} - v \rangle + \mu \|z\|_1 + \lambda \|w\|_1 \quad \text{s.t.} \quad v \odot |h| = 0,\ \ \nabla_y s = h,\ \ s = z,\ \ \nabla_x (b - s) = w.$$
The Lagrange function L is
$$\mathcal{L}(h, z, w, v, s, \pi_1, \pi_2, \pi_3, \pi_4) = \langle \mathbf{1}, \mathbf{1} - v \rangle + \mu \|z\|_1 + \lambda \|w\|_1 + \langle \nabla_y s - h, \pi_1 \rangle + \langle s - z, \pi_2 \rangle + \langle \nabla_x (b - s) - w, \pi_3 \rangle + \langle v \odot |h|, \pi_4 \rangle,$$
where $\pi_1$, $\pi_2$, $\pi_3$ and $\pi_4$ are Lagrange multipliers. We now give the first-order optimality conditions of the proposed problem at $(h^*, z^*, w^*, v^*, s^*, \pi_1^*, \pi_2^*, \pi_3^*, \pi_4^*)$.
$$
\left\{
\begin{aligned}
& 0 = \nabla_y^{T} \pi_1^{*} + \pi_2^{*} - \nabla_x^{T} \pi_3^{*}, \\
& 0 \le \langle \pi_4^{*} \odot |h^{*}| - \mathbf{1},\; v - v^{*} \rangle, \quad \forall\, v \in \Delta, \\
& 0 \in -\pi_1^{*} + \pi_4^{*} \odot v^{*} \odot \partial \|h^{*}\|_1, \\
& 0 \in -\pi_2^{*} + \mu\, \partial \|z^{*}\|_1, \\
& 0 \in -\pi_3^{*} + \lambda\, \partial \|w^{*}\|_1, \\
& 0 = \nabla_y s^{*} - h^{*}, \qquad 0 = s^{*} - z^{*}, \\
& 0 = \nabla_x (b - s^{*}) - w^{*}, \qquad 0 = v^{*} \odot |h^{*}|.
\end{aligned}
\right. \tag{A3}
$$
Robinson's constraint qualification guarantees the existence of the Lagrange multipliers. Next, we confirm the convergence of the designed PADMM-based algorithm under an assumption similar to that in [45]. The augmented Lagrangian function $\mathcal{L}(h, z, w, v, s, \pi_1, \pi_2, \pi_3, \pi_4, \beta_1, \beta_2, \beta_3, \beta_4)$ in Equation (17) is denoted by $\mathcal{L}_\beta$. Note that the Lagrangian function $\mathcal{L}$ is used to obtain the KKT conditions; we then prove that the solution of the augmented Lagrangian $\mathcal{L}_\beta$ computed by Algorithm 1 satisfies the KKT conditions.
(i)
According to the limit of Q k and the update formula of the multipliers Q k + 1 , we can get
$$\lim_{k \to \infty} \nabla_y s^{k+1} - h^{k+1} = 0, \tag{A4}$$
$$\lim_{k \to \infty} s^{k+1} - z^{k+1} = 0, \tag{A5}$$
$$\lim_{k \to \infty} \nabla_x (b - s^{k+1}) - w^{k+1} = 0, \tag{A6}$$
$$\lim_{k \to \infty} v^{k+1} \odot |h^{k+1}| = 0. \tag{A7}$$
(ii)
According to the limit of π 1 k , π 4 k , and the h k + 1 subproblem of L β in Equation (18) , we can get
$$\lim_{k \to \infty} h^{k+1} \in \arg\min_{h}\ \langle \nabla_y s^{k+1} - h, \pi_1^{k+1} \rangle + \frac{\beta_1}{2} \|\nabla_y s^{k+1} - h\|^2 + \langle v^{k+1} \odot |h|, \pi_4^{k+1} \rangle + \frac{\beta_4}{2} \|v^{k+1} \odot |h|\|^2,$$
By the first optimality condition of h , we have
$$\lim_{k \to \infty} -\pi_1^{k+1} + \pi_4^{k+1} \odot v^{k+1} \odot \partial \|h^{k+1}\|_1 \ni 0.$$
(iii)
According to the limit of π 2 k , and the z k + 1 subproblem of L β in Equation (22), we can get
$$\lim_{k \to \infty} z^{k+1} \in \arg\min_{z}\ \mu \|z\|_1 + \langle s^{k+1} - z, \pi_2^{k+1} \rangle + \frac{\beta_2}{2} \|s^{k+1} - z\|^2,$$
By the first optimality condition of z , we have
$$\lim_{k \to \infty} -\pi_2^{k+1} + \mu\, \partial \|z^{k+1}\|_1 \ni 0.$$
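For concreteness, the ℓ1-plus-quadratic subproblem above admits the classical soft-thresholding closed form [41]; completing the square gives the update that Algorithm 1 is assumed to use for z:

```latex
\begin{aligned}
z^{k+1} &= \arg\min_{z}\ \mu \|z\|_1 + \frac{\beta_2}{2} \left\| z - \Big( s^{k+1} + \frac{\pi_2^{k+1}}{\beta_2} \Big) \right\|^2
         = \operatorname{shrink}\!\Big( s^{k+1} + \frac{\pi_2^{k+1}}{\beta_2},\ \frac{\mu}{\beta_2} \Big), \\
\operatorname{shrink}(x, \tau) &\triangleq \operatorname{sign}(x) \max(|x| - \tau,\ 0).
\end{aligned}
```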
(iv)
According to the limit of π 3 k , and the w k + 1 subproblem of L β in Equation (24), we can get
$$\lim_{k \to \infty} w^{k+1} \in \arg\min_{w}\ \lambda \|w\|_1 + \frac{\beta_3}{2} \left\| \nabla_x (b - s^{k+1}) - w + \frac{\pi_3^{k+1}}{\beta_3} \right\|^2,$$
By the first optimality condition of w , we have
$$\lim_{k \to \infty} -\pi_3^{k+1} + \lambda\, \partial \|w^{k+1}\|_1 \ni 0.$$
(v)
According to the limit of π 4 k , h k and the v k + 1 subproblem of L β in Equation (26), we can get
$$\lim_{k \to \infty} v^{k+1} \in \arg\min_{0 \le v \le 1}\ \langle v,\ \pi_4^{k+1} \odot |h^{k+1}| - \mathbf{1} \rangle + \frac{\beta_4}{2} \|v \odot |h^{k+1}|\|^2,$$
By the first optimality condition of v , we have
$$\lim_{k \to \infty} \langle \pi_4^{k+1} \odot |h^{k+1}| - \mathbf{1},\ v - v^{k+1} \rangle \ge 0, \quad \forall\, v \in \Delta.$$
(vi)
According to the limit of $s^{k+1}$ and the update formula of the $s^{k+1}$ subproblem of $\mathcal{L}_\beta$ in Equation (28), the first-order optimality condition for $s$ is
$$\nabla_y^{T} \pi_1^{k+1} + \beta_1 \nabla_y^{T} (\nabla_y s^{k+1} - h^{k+1}) + \pi_2^{k+1} + \beta_2 (s^{k+1} - z^{k+1}) - \nabla_x^{T} \pi_3^{k+1} - \beta_3 \nabla_x^{T} (\nabla_x (b - s^{k+1}) - w^{k+1}) + D (s^{k+1} - s^k) = 0.$$
Combining it with Equations (A4)–(A7), we have
$$\lim_{k \to \infty} \nabla_y^{T} \pi_1^{k+1} + \pi_2^{k+1} - \nabla_x^{T} \pi_3^{k+1} + D (s^{k+1} - s^k) = 0.$$
Since $\lim_{k \to \infty} s^{k+1} - s^k = 0$ and the matrix $D$ is positive definite, we have $\lim_{k \to \infty} D (s^{k+1} - s^k) = 0$. Thus, we have
$$\lim_{k \to \infty} \nabla_y^{T} \pi_1^{k+1} + \pi_2^{k+1} - \nabla_x^{T} \pi_3^{k+1} = 0.$$
Combining Equations (A4)–(A13), we conclude that for the sequence $\{P^k, Q^k\}$ generated by Algorithm 1, as $k \to \infty$, there exists a subsequence whose accumulation point satisfies the KKT conditions in Equation (A3). ☐

References

  1. Chen, J.S.; Shao, Y.; Guo, H.D.; Wang, W.M.; Zhu, B.Q. Destriping CMODIS data by power filtering. IEEE Trans. Geosci. Remote Sens. 2003, 41, 2119–2124. [Google Scholar] [CrossRef]
  2. Münch, B.; Trtik, P.; Marone, F.; Stampanoni, M. Stripe and ring artifact removal with combined wavelet Fourier filtering. Opt. Express 2009, 17, 8567–8591. [Google Scholar]
  3. Pande-Chhetri, R.; Abd-Elrahman, A. De-striping hyperspectral imagery using wavelet transform and adaptive frequency domain filtering. ISPRS J. Photogramm. Remote Sens. 2011, 66, 620–636. [Google Scholar] [CrossRef]
  4. Pal, M.K.; Porwal, A. Destriping of Hyperion images using low-pass-filter and local-brightness-normalization. ISPRS J. Photogramm. Remote Sens. 2011, 66, 620–636. [Google Scholar]
  5. Gadallah, F.L.; Csillag, F.; Smith, E.J.M. Destriping multisensor imagery with moment matching. Int. J. Remote Sens. 2000, 21, 2505–2511. [Google Scholar] [CrossRef]
  6. Horn, B.K.P.; Woodham, R.J. Destriping Landsat MSS images by histogram modification. Comput. Graph. Image Process. 1979, 10, 69–83. [Google Scholar] [CrossRef]
  7. Weinreb, M.P.; Xie, R.; Lienesch, J.H.; Crosby, D.S. Destriping GOES images by matching empirical distribution functions. Remote Sens. Environ. 1989, 29, 185–195. [Google Scholar] [CrossRef]
  8. Wegener, M. Destriping multiple sensor imagery by improved histogram matching. Int. J. Remote Sens. 1990, 11, 859–875. [Google Scholar] [CrossRef]
  9. Rakwatin, P.; Takeuchi, W.; Yasuoka, Y. Restoration of Aqua MODIS band 6 using histogram matching and local least squares fitting. IEEE Trans. Geosci. Remote Sens. 2009, 47, 613–627. [Google Scholar] [CrossRef]
  10. Sun, L.X.; Neville, R.; Staenz, K.; White, H.P. Automatic destriping of Hyperion imagery based on spectral moment matching. Can. J. Remote Sens. 2008, 34, S68–S81. [Google Scholar] [CrossRef]
  11. Shen, H.F.; Jiang, W.; Zhang, H.Y.; Zhang, L.P. A piece-wise approach to removing the nonlinear and irregular stripes in MODIS data. Int. J. Remote Sens. 2014, 35, 44–53. [Google Scholar] [CrossRef]
  12. Fehrenbach, J.; Weiss, P.; Lorenzo, C. Variational algorithms to remove stationary noise: Applications to microscopy imaging. IEEE Trans. Image Process. 2012, 21, 4420–4430. [Google Scholar] [CrossRef] [PubMed]
  13. Fehrenbach, J.; Weiss, P. Processing stationary noise: model and parameter selection in variational methods. SIAM J. Imaging Sci. 2014, 7, 613–640. [Google Scholar] [CrossRef]
  14. Escande, P.; Weiss, P.; Zhang, W.X. A variational model for multiplicative structured noise removal. J. Math. Imaging Vis. 2017, 57, 43–55. [Google Scholar] [CrossRef]
  15. Shen, H.F.; Zhang, L.P. A MAP-based algorithm for destriping and inpainting of remotely sensed images. IEEE Trans. Geosci. Remote Sens. 2009, 47, 1492–1502. [Google Scholar] [CrossRef]
  16. Chen, Y.; Huang, T.Z.; Zhao, X.L.; Deng, L.J.; Huang, J. Stripe noise removal of remote sensing images by total variation regularization and group sparsity constraint. Remote Sens. 2017, 9, 559. [Google Scholar] [CrossRef]
  17. Bouali, M.; Ladjal, S. Toward optimal destriping of MODIS data using a unidirectional variational model. IEEE Trans. Geosci. Remote Sens. 2011, 49, 2924–2935. [Google Scholar] [CrossRef]
  18. Zhang, H.Y.; He, W.; Zhang, L.P.; Shen, H.F.; Yuan, Q.Q. Hyperspectral image restoration using low-rank matrix recovery. IEEE Trans. Geosci. Remote Sens. 2014, 52, 4729–4743. [Google Scholar] [CrossRef]
  19. Zhou, G.; Fang, H.Z.; Yan, L.X.; Zhang, T.X.; Hu, J. Removal of stripe noise with spatially adaptive unidirectional total variation. Opt. Int. J. Light Electron Opt. 2014, 125, 2756–2762. [Google Scholar] [CrossRef]
  20. Liu, H.; Zhang, Z.L.; Liu, S.Y.; Liu, T.T.; Chang, Y. Destriping algorithm with L0 sparsity prior for remote sensing images. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Quebec City, QC, Canada, 27–30 September 2015; pp. 2295–2299. [Google Scholar]
  21. Chang, Y.; Yan, L.X.; Fang, H.Z.; Liu, H. Simultaneous destriping and denoising for remote sensing images with unidirectional total variation and sparse representation. IEEE Geosci. Remote Sens. Lett. 2014, 11, 1051–1055. [Google Scholar] [CrossRef]
  22. Chang, Y.; Yan, L.X.; Fang, H.Z.; Luo, C.N. Anisotropic Spectral-Spatial Total Variation Model for Multispectral Remote Sensing Image Destriping. IEEE Trans. Image Process. 2015, 24, 1852–1866. [Google Scholar] [CrossRef] [PubMed]
  23. He, W.; Zhang, H.Y.; Zhang, L.P.; Shen, H.F. Total-variation-regularized low-rank matrix factorization for hyperspectral image restoration. IEEE Trans. Geosci. Remote Sens. 2016, 54, 178–188. [Google Scholar] [CrossRef]
  24. Zorzi, M.; Chiuso, A. Sparse plus low rank network identification: A nonparametric approach. Automatica 2017, 76, 355–366. [Google Scholar] [CrossRef]
  25. Zorzi, M.; Sepulchre, R. AR identification of latent-variable graphical models. IEEE Trans. Autom. Control 2016, 61, 2327–2340. [Google Scholar] [CrossRef]
  26. Liu, X.X.; Lu, X.L.; Shen, H.F.; Yuan, Q.Q.; Jiao, Y.L.; Zhang, L.P. Stripe Noise Separation and Removal in Remote Sensing Images by Consideration of the Global Sparsity and Local Variational Properties. IEEE Trans. Geosci. Remote Sens. 2016, 54, 3049–3060. [Google Scholar] [CrossRef]
  27. Chang, Y.; Yan, L.; Wu, T.; Zhong, S. Remote Sensing Image Stripe Noise Removal: From Image Decomposition Perspective. IEEE Trans. Geosci. Remote Sens. 2016, 54, 7018–7031. [Google Scholar] [CrossRef]
  28. Liu, J.; Huang, T.Z.; Selesnick, I.W.; Lv, X.G.; Chen, P.Y. Image restoration using total variation with overlapping group sparsity. Inf. Sci. 2015, 295, 232–246. [Google Scholar] [CrossRef]
  29. Mei, J.J.; Dong, Y.Q.; Huang, T.Z.; Yin, W.T. Cauchy noise removal by nonconvex ADMM with convergence guarantees. J. Sci. Comput. 2018, 74, 743–766. [Google Scholar] [CrossRef]
  30. Ji, T.Y.; Huang, T.Z.; Zhao, X.L.; Ma, T.H.; Deng, L.J. A non-convex tensor rank approximation for tensor completion. Appl. Math. Model. 2017, 48, 410–422. [Google Scholar] [CrossRef]
  31. Carfantan, H.; Idier, J. Statistical linear destriping of satellite-based pushbroom-type images. IEEE Trans. Geosci. Remote Sens. 2010, 48, 1860–1871. [Google Scholar] [CrossRef]
  32. Rudin, L.I.; Osher, S.; Fatemi, E. Nonlinear total variation based noise removal algorithms. Phys. D Nonlinear Phenom. 1992, 60, 259–268. [Google Scholar] [CrossRef]
  33. Zhao, X.L.; Wang, F.; Huang, T.Z.; Ng, M.K.; Plemmons, R.J. Deblurring and sparse unmixing for hyperspectral images. IEEE Trans. Geosci. Remote Sens. 2013, 51, 4045–4058. [Google Scholar] [CrossRef]
  34. Ma, T.H.; Huang, T.Z.; Zhao, X.L.; Lou, Y.F. Image Deblurring With an Inaccurate Blur Kernel Using a Group-Based Low-Rank Image Prior. Inf. Sci. 2017, 408, 213–233. [Google Scholar] [CrossRef]
  35. Getreuer, P. Total variation inpainting using split Bregman. Image Process. Line 2012, 2, 147–157. [Google Scholar] [CrossRef]
  36. Yuan, G.Z.; Ghanem, B. l0TV: A new method for image restoration in the presence of impulse noise. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 5369–5377. [Google Scholar]
  37. Shen, H.F.; Li, X.H.; Cheng, Q.; Zeng, C.; Yang, G.; Li, H.F. Missing information reconstruction of remote sensing data: A technical review. IEEE Geosci. Remote Sens. Mag. 2015, 3, 61–85. [Google Scholar] [CrossRef]
  38. Dong, B.; Zhang, Y. An efficient algorithm for ℓ0 minimization in wavelet frame based image restoration. J. Sci. Comput. 2013, 54, 350–368. [Google Scholar] [CrossRef]
  39. Fan, Y.R.; Huang, T.Z.; Ma, T.H.; Zhao, X.L. Cartoon-texture image decomposition via non-convex low-rank texture regularization. J. Frankl. Inst. 2017, 354, 3170–3187. [Google Scholar] [CrossRef]
  40. Lu, Z.S.; Zhang, Y. Sparse approximation via penalty decomposition methods. SIAM J. Optim. 2013, 23, 2448–2478. [Google Scholar] [CrossRef]
  41. Donoho, D.L. De-noising by soft-thresholding. IEEE Trans. Inf. Theory 1995, 41, 613–627. [Google Scholar] [CrossRef]
  42. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed]
  43. He, B.S.; Yuan, X.M. On the O(1/n) Convergence Rate of the Douglas–Rachford Alternating Direction Method. SIAM J. Numer. Anal. 2012, 50, 700–709. [Google Scholar] [CrossRef]
  44. Deng, W.; Yin, W.T. On the global and linear convergence of the generalized alternating direction method of multipliers. J. Sci. Comput. 2016, 66, 889–916. [Google Scholar] [CrossRef]
  45. Wen, Z.W.; Yang, C.; Liu, X.; Marchesini, S. Alternating direction methods for classical and ptychographic phase retrieval. Inverse Probl. 2012, 28, 115010. [Google Scholar] [CrossRef]
  46. Fazel, M.; Pong, T.K.; Sun, D.F.; Tseng, P. Hankel matrix rank minimization with applications to system identification and realization. SIAM J. Matrix Anal. Appl. 2013, 34, 946–977. [Google Scholar] [CrossRef]
Figure 1. Framework of the proposed destriping method.
Figure 2. The number of nonzero elements of $s$ (a) and $\nabla_y s$ (b), where $s$ is estimated from a real image example (see Figure 3) by the method [26]. The y-axis represents the number of nonzero elements of $s$ and $\nabla_y s$, respectively, and the x-axis stands for the image pixel value, whose range is [0, 1].
Figure 3. The visual results of different simulated images. From top to bottom: underlying images, degraded images, the destriping results of WFAF, SLD, UTV, GSLV, LRSID, and Ours. The degraded images in the second row are, respectively, added the stripes (from left to right): (Per, 10, 0.2), (NonPer, 100, 0.4), (NonPer, 50, 0.2), (NonPer, 60, 0.4), (NonPer, 100, 0.4) and (NonPer, 50, 0.6). Readers are recommended to zoom in all figures for better visibility.
Figure 4. The zoom results of different simulated images in Figure 3. From top to bottom: zoom of the underlying images, the degraded images, the destriping results of WFAF, SLD, UTV, GSLV, LRSID, and Ours. Note that the levels of stripes are same as Figure 3.
Figure 5. The stripes s of different simulated images in Figure 3. From top to bottom: the added stripes on the underlying image, the extracted stripe components of WFAF, SLD, UTV, GSLV, LRSID, and Ours. Note that the levels of stripes are same as Figure 3.
Figure 6. The PSNR and SSIM performance on 32 images for the stripes (Per, 100, 0.6) and (NonPer, 50, 0.2). The x-axis represents each image and the quantitative results are shown in the y-axis. (I) and (II) are the PSNR and SSIM results for the stripes (Per, 100, 0.6), respectively. (III) and (IV), respectively, represent the PSNR and SSIM performance of the stripes (NonPer, 50, 0.2).
Figure 7. The visual results of different real images. From top to bottom: the real images, the destriping results of WFAF, SLD, UTV, GSLV, LRSID, and Ours. Readers are recommended to zoom in all figures for better visibility.
Figure 8. The stripes s of different real images in Figure 7. From top to bottom: the extracted stripe components of WFAF, SLD, UTV, GSLV, LRSID, and Ours.
Figure 9. Spatial mean cross-track profiles for the first simulated example of Figure 3: (a) underlying image; (b) degraded image; and destriping results by: (c) WFAF; (d) SLD; (e) UTV; (f) GSLV; (g) LRSID; and (h) Ours.
Figure 10. Spatial mean cross-track profiles for the first real example of Figure 7: (a) real image; and destriping results by: (b) WFAF; (c) SLD; (d) UTV; (e) GSLV; (f) LRSID; and (g) Ours.
Figure 11. Power spectrum for the second simulated example of Figure 3: (a) underlying image; (b) degraded image; and destriping results by: (c) WFAF; (d) SLD; (e) UTV; (f) GSLV; (g) LRSID; and (h) Ours.
Figure 12. Power spectrum for the fourth real example of Figure 7: (a) real image; and destriping results by: (b) WFAF; (c) SLD; (d) UTV; (e) GSLV; (f) LRSID; and (g) Ours.
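The two diagnostics shown in Figures 9–12 reduce to simple array operations. The sketch below assumes the usual definitions (mean cross-track profile = per-column mean; power spectrum = squared magnitude of the FFT of the mean-removed profile), not the authors' exact plotting code.

```python
import numpy as np

def cross_track_profile(image):
    """Column-wise (cross-track) mean of a 2-D image; stripe noise appears
    as rapid oscillations superimposed on the smooth scene profile."""
    return image.mean(axis=0)

def power_spectrum(image):
    """Power spectrum of the mean-removed cross-track profile; stripes
    show up as spurious high-frequency peaks."""
    profile = cross_track_profile(image)
    return np.abs(np.fft.rfft(profile - profile.mean())) ** 2

img = np.tile(np.arange(8, dtype=float), (4, 1))  # smooth horizontal ramp
img[:, ::2] += 10.0                               # period-2 vertical stripes
spec = power_spectrum(img)                        # peak at the Nyquist bin
```

A flat profile and a spectrum without the spurious peak (as in panels (h)/(g) of Figures 9–12) indicate the stripes were removed.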
Figure 13. The influence of the different terms in the proposed model. R12 denotes R1 + R2, R23 denotes R2 + R3, and R123 denotes R1 + R2 + R3 (i.e., the full proposed model). (I) The mean PSNR performance on 32 images for periodic stripes at different stripe levels; (II) the mean SSIM performance on 32 images for periodic stripes at different stripe levels. The stripe levels (a–f) stand for (Per, 10, 0.2), (Per, 10, 0.6), (Per, 50, 0.2), (Per, 50, 0.6), (Per, 100, 0.2), and (Per, 100, 0.6), respectively. (III) The mean PSNR performance on 32 images for nonperiodic stripes at different stripe levels; (IV) the mean SSIM performance on 32 images for nonperiodic stripes at different stripe levels. The stripe levels (a–f) stand for (NonPer, 10, 0.2), (NonPer, 10, 0.6), (NonPer, 50, 0.2), (NonPer, 50, 0.6), (NonPer, 100, 0.2), and (NonPer, 100, 0.6), respectively.
Figure 14. Parameter value selection.
Table 1. The ReErr between s_added and s_estimated, and the PSNR between the ground-truth and the images computed by the different methods.
Images          (a)       (b)       (c)       (d)       (e)       (f)
ReErr   WFAF    0.1588    0.2828    0.2519    0.2468    0.2386    0.2574
        SLD     0.0874    0.1670    0.1723    0.1664    0.1330    0.1346
        UTV     0.0831    0.1542    0.2371    0.1818    0.1314    0.1375
        GSLV    0.0867    0.1030    0.2385    0.1926    0.0912    0.1654
        LRSID   0.0917    0.1884    0.2731    0.2125    0.1450    0.1897
        Ours    0.0193    0.0693    0.0365    0.0892    0.0304    0.0813
PSNR    WFAF    37.1239   21.3097   33.1166   28.6960   24.5539   28.1562
        SLD     42.3121   25.8878   36.4164   32.1200   29.6283   33.7867
        UTV     42.7463   26.5787   33.6413   31.3504   29.7370   33.6047
        GSLV    42.3798   30.0798   33.5899   30.8506   32.9031   31.9996
        LRSID   40.8345   24.7115   32.8728   30.4638   28.9817   31.6380
        Ours    55.4503   33.5287   49.8954   38.1697   42.4454   38.1677
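The two indices in Table 1 can be sketched as follows. The ReErr definition (relative Frobenius-norm error between the added and estimated stripes) and the 8-bit PSNR peak of 255 are assumptions consistent with common usage, not quoted from the paper.

```python
import numpy as np

def re_err(s_added, s_estimated):
    """Relative error ||s_estimated - s_added||_F / ||s_added||_F
    (assumed definition of the ReErr index)."""
    return np.linalg.norm(s_estimated - s_added) / np.linalg.norm(s_added)

def psnr(ground_truth, computed, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    mse = np.mean((ground_truth - computed) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# a uniform error of 2.55 on an 8-bit scale gives exactly 40 dB
print(round(psnr(np.zeros((4, 4)), np.full((4, 4), 2.55)), 2))  # 40.0
```

Under these definitions, smaller ReErr and larger PSNR both indicate a better stripe estimate.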
Table 2. The mean values of PSNR and SSIM over 32 images with periodic noise.
Intensity       Intensity = 10                        Intensity = 50                        Intensity = 100
Ratio        r = 0.2           r = 0.6           r = 0.2           r = 0.6           r = 0.2           r = 0.6
PSNR  WFAF   41.400 ± 3.601    41.702 ± 3.870    37.160 ± 1.975    37.553 ± 1.975    32.196 ± 1.457    32.501 ± 1.732
      SLD    42.037 ± 2.927    41.048 ± 2.909    41.710 ± 2.930    41.957 ± 2.928    40.614 ± 2.549    41.644 ± 2.836
      UTV    42.030 ± 3.229    41.032 ± 2.886    40.920 ± 2.773    43.086 ± 2.298    41.470 ± 3.385    41.058 ± 3.299
      GSLV   42.552 ± 2.955    42.630 ± 2.886    42.202 ± 3.058    43.533 ± 2.856    43.431 ± 3.091    43.801 ± 2.705
      LRSID  43.948 ± 2.104    42.775 ± 2.010    42.308 ± 2.169    44.548 ± 1.976    43.779 ± 2.500    44.035 ± 2.014
      Ours   52.918 ± 4.074    49.497 ± 3.956    52.853 ± 4.910    49.212 ± 4.390    52.854 ± 4.902    49.182 ± 4.368
SSIM  WFAF   0.9934 ± 0.0058   0.9936 ± 0.0062   0.9887 ± 0.0084   0.9905 ± 0.0078   0.9818 ± 0.0103   0.9847 ± 0.0085
      SLD    0.9966 ± 0.0029   0.9966 ± 0.0029   0.9965 ± 0.0031   0.9965 ± 0.0032   0.9962 ± 0.0033   0.9964 ± 0.0037
      UTV    0.9959 ± 0.0027   0.9959 ± 0.0027   0.9911 ± 0.0025   0.9928 ± 0.0023   0.9954 ± 0.0024   0.9937 ± 0.0076
      GSLV   0.9991 ± 0.0077   0.9968 ± 0.0076   0.9916 ± 0.0079   0.9903 ± 0.0082   0.9966 ± 0.0085   0.9969 ± 0.0053
      LRSID  0.9990 ± 0.0107   0.9945 ± 0.0056   0.9932 ± 0.0044   0.9947 ± 0.0032   0.9936 ± 0.0047   0.9957 ± 0.0031
      Ours   0.9994 ± 0.0007   0.9987 ± 0.0011   0.9994 ± 0.0013   0.9986 ± 0.0016   0.9994 ± 0.0062   0.9986 ± 0.0019
Table 3. The mean values of PSNR and SSIM over 32 images with nonperiodic noise.
Intensity       Intensity = 10                        Intensity = 50                        Intensity = 100
Ratio        r = 0.2           r = 0.6           r = 0.2           r = 0.6           r = 0.2           r = 0.6
PSNR  WFAF   40.971 ± 2.523    39.372 ± 2.249    30.536 ± 1.508    37.609 ± 2.263    24.849 ± 1.573    22.594 ± 1.541
      SLD    41.476 ± 2.592    40.935 ± 2.201    35.964 ± 1.510    42.007 ± 3.020    30.963 ± 1.414    28.403 ± 1.729
      UTV    41.153 ± 2.880    38.615 ± 2.041    35.648 ± 1.527    42.505 ± 3.010    31.055 ± 4.687    31.599 ± 2.578
      GSLV   42.282 ± 2.359    39.018 ± 1.654    41.985 ± 1.239    39.838 ± 2.903    36.184 ± 1.399    35.408 ± 2.472
      LRSID  42.672 ± 1.418    39.034 ± 1.302    42.814 ± 1.349    40.497 ± 2.024    37.779 ± 1.212    33.559 ± 1.132
      Ours   48.801 ± 3.985    44.700 ± 3.784    49.057 ± 4.791    49.057 ± 4.492    44.365 ± 5.106    39.452 ± 4.494
SSIM  WFAF   0.9925 ± 0.0056   0.9903 ± 0.0069   0.9744 ± 0.0104   0.9905 ± 0.0081   0.9364 ± 0.0207   0.9029 ± 0.0565
      SLD    0.9965 ± 0.0031   0.9952 ± 0.0031   0.9950 ± 0.0041   0.9964 ± 0.0032   0.9907 ± 0.0060   0.9823 ± 0.0142
      UTV    0.9958 ± 0.0029   0.9934 ± 0.0052   0.9937 ± 0.0042   0.9914 ± 0.0056   0.9886 ± 0.0193   0.9851 ± 0.0122
      GSLV   0.9982 ± 0.0016   0.9917 ± 0.0042   0.9962 ± 0.0101   0.9967 ± 0.0088   0.9956 ± 0.0091   0.9933 ± 0.0152
      LRSID  0.9983 ± 0.0032   0.9934 ± 0.0113   0.9891 ± 0.0070   0.9962 ± 0.0042   0.9975 ± 0.0091   0.9924 ± 0.0402
      Ours   0.9991 ± 0.0006   0.9956 ± 0.0035   0.9990 ± 0.0010   0.9986 ± 0.0016   0.9979 ± 0.0012   0.9942 ± 0.0042
Table 4. The MICV and MMRD index results of the six real images.
Images  Index      WFAF      SLD       UTV       GSLV      LRSID     Ours
(a)     MICV       4.7759    4.9274    6.8851    5.3955    7.4612    5.4162
        MMRD (%)   0.0078    0.0446    0.1646    0.1199    0.1590    0.0945
(b)     MICV       5.5871    5.5740    8.9350    6.4436    7.5592    7.4144
        MMRD (%)   0.0405    0.0377    0.1142    0.0695    0.0900    0.0662
(c)     MICV       3.9619    3.9940    4.3038    4.2884    4.0437    4.9604
        MMRD (%)   0.0286    0.0271    0.0276    0.0258    0.0385    0.0243
(d)     MICV       1.7057    1.6720    3.2297    2.2237    3.5039    2.6574
        MMRD (%)   0.3600    0.3785    0.1707    0.0876    0.2250    0.0262
(e)     MICV       0.8673    0.8533    0.8849    0.8786    0.8955    0.9017
        MMRD (%)   0.0328    0.0311    0.0432    0.0312    0.0265    0.0264
(f)     MICV       13.5619   13.2952   16.1001   16.5111   13.6906   16.7897
        MMRD (%)   0.0196    0.0210    0.0214    0.0190    0.0250    0.0176
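MICV and MMRD are no-reference indices (MICV is typically the mean ICV over several sample windows, MMRD the mean MRD). The sketch below follows common destriping practice (ICV = window mean over window standard deviation; MRD = mean relative deviation in percent) and is only our assumption of the paper's exact formulas; the function names are ours.

```python
import numpy as np

def icv(window):
    """Inverse coefficient of variation of a roughly homogeneous window:
    window mean over window standard deviation (larger = smoother)."""
    return window.mean() / window.std()

def mrd(before, after):
    """Mean relative deviation, in percent, between an image before and
    after destriping (smaller = better detail preservation)."""
    return 100.0 * np.mean(np.abs(after - before) / np.abs(before))

before = np.full((8, 8), 100.0)
after = before.copy()
after[:, 0] = 99.0                   # one column changed by 1%
print(round(mrd(before, after), 3))  # 0.125
```

Under these assumed definitions, a larger MICV indicates smoother (less striped) homogeneous regions, while a smaller MMRD indicates less distortion of stripe-free areas, matching how Table 4 is read.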
Table 5. The computational cost of the methods.
Image Size   WFAF     SLD      UTV      GSLV     LRSID    Ours
300 × 300    0.2274   0.0466   0.2745   1.1442   4.2581   6.6541
400 × 400    0.2603   0.0650   0.5711   1.7004   8.5212   15.2448
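Runtimes like those in Table 5 can be collected with a minimal wall-clock harness. This sketch is illustrative only: `time_method` and the `demo` stand-in (a simple per-column median subtraction) are ours, not any of the compared algorithms.

```python
import time
import numpy as np

def time_method(method, image, repeats=3):
    """Best-of-`repeats` wall-clock runtime of method(image), in seconds."""
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        method(image)
        best = min(best, time.perf_counter() - t0)
    return best

# stand-in "destriping" step: subtract each column's median offset
demo = lambda img: img - np.median(img, axis=0, keepdims=True)
elapsed = time_method(demo, np.random.rand(300, 300))
```

Taking the best of several repeats reduces the influence of transient system load on the reported times.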

Dou, H.-X.; Huang, T.-Z.; Deng, L.-J.; Zhao, X.-L.; Huang, J. Directional ℓ0 Sparse Modeling for Image Stripe Noise Removal. Remote Sens. 2018, 10, 361. https://0-doi-org.brum.beds.ac.uk/10.3390/rs10030361
