Article

An Enhanced Set-Membership PNLMS Algorithm with a Correntropy Induced Metric Constraint for Acoustic Channel Estimation

Zhan Jin, Yingsong Li and Yanyan Wang

1 College of Information and Communications Engineering, Harbin Engineering University, Harbin 150001, China
2 College of Communication and Electronic Engineering, Qiqihar University, Qiqihar 161006, China
3 National Space Science Center, Chinese Academy of Sciences, Beijing 100190, China
* Author to whom correspondence should be addressed.
Submission received: 30 April 2017 / Revised: 11 June 2017 / Accepted: 13 June 2017 / Published: 15 June 2017
(This article belongs to the Special Issue Maximum Entropy and Its Application II)

Abstract: In this paper, a sparse set-membership proportionate normalized least mean square (SM-PNLMS) algorithm integrated with a correntropy induced metric (CIM) penalty is proposed for acoustic channel estimation and echo cancellation. The CIM is used to construct a new cost function within the kernel framework. The proposed CIM penalized SM-PNLMS (CIMSM-PNLMS) algorithm is derived and analyzed in detail. A zero attraction term is incorporated into the updating equation of the proposed CIMSM-PNLMS algorithm to force the inactive coefficients toward zero. The performance of the proposed CIMSM-PNLMS algorithm is investigated for estimating an underwater acoustic communication channel and an echo channel. The obtained results demonstrate that the proposed CIMSM-PNLMS algorithm converges faster and provides a smaller estimation error than the NLMS, PNLMS, IPNLMS, SM-PNLMS and zero-attracting SM-PNLMS (ZASM-PNLMS) algorithms.

1. Introduction

The adaptive filter technique has been widely used for echo cancellation, channel equalization, signal enhancement and active noise control [1]. A great number of adaptive filtering algorithms have been developed to meet the various requirements of practical engineering applications. Among these algorithms, the least mean square (LMS) algorithm and its normalized form have been studied extensively because they are easy to implement and perform well [2,3]; they are the most classical algorithms applied to channel estimation and echo cancellation in recent decades. Furthermore, set-membership (SM) filtering techniques have been proposed not only to reduce the computational burden but also to improve the estimation performance [4,5,6,7,8,9,10,11,12,13,14,15]. The SM filtering technique uses a prescribed bound on the magnitude of the estimation error to split each iteration of an adaptive filtering algorithm into two steps: (1) information evaluation and (2) parameter update. If the second step occurs infrequently, SM filters have a low computational cost; this data-selective updating is what reduces the complexity. SM filtering algorithms also achieve a lower steady-state misadjustment, the SM normalized LMS (SM-NLMS) algorithm being one example [8,14]. However, the SM-NLMS algorithm cannot exploit the sparse characteristics of multi-path channels. Consequently, adaptive filtering algorithms for sparse channel estimation and sparse system identification have been proposed, including the proportionate NLMS (PNLMS) and the zero attracting adaptive filtering algorithms [16,17,18,19,20,21,22,23,24,25,26,27,28,29].
The PNLMS algorithm assigns a different gain to each coefficient according to the coefficient's magnitude. In effect, the PNLMS algorithm is a variable step-size NLMS algorithm in which the gain assignment scheme controls the step size. As a result, the PNLMS algorithm converges faster at the initial iterations than the classical NLMS algorithm. However, the performance of the PNLMS algorithm may become worse than that of the NLMS algorithm once it reaches steady state. An improved PNLMS (IPNLMS) algorithm has therefore been presented to enhance the performance of the PNLMS algorithm [30], but there is still room for improvement. Subsequently, the SM technique was integrated into the PNLMS algorithm to develop the SM-PNLMS algorithm [31]. The results showed that the SM-PNLMS outperforms the PNLMS in terms of both convergence and estimation bias.
With the development of signal processing, sparse signal processing has attracted increasing attention in recent decades. In particular, compressed sensing (CS) [32] has promoted sparse signal processing because it achieves high recovery accuracy for sparse signals. CS concepts have thus been incorporated into adaptive filtering algorithms to exploit the sparseness of sparse channels or systems. Motivated by CS, the $l_1$-norm penalty has been integrated into the cost function of the LMS to create the zero attracting LMS (ZA-LMS) algorithm. Compared with the traditional LMS algorithm, the ZA-LMS adds an attraction term that forces the inactive coefficients toward zero, as sketched below. However, the ZA-LMS penalizes all the coefficients uniformly, which may degrade the estimation performance for less sparse signals. A reweighted ZA-LMS (RZA-LMS) algorithm was therefore proposed by Chen et al. [33], which assigns a different penalty to each coefficient. However, these LMS-based algorithms perform poorly with colored inputs and are sensitive to the scaling of the inputs. To overcome these drawbacks, the zero attracting techniques have been extended to affine projection algorithms [20,21,34,35,36,37,38], combined LMS [32], high-order error criterion algorithms [28,29] and PNLMS algorithms [24]. However, some of these extended zero attracting adaptive algorithms have a high computational complexity.
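To make the zero-attraction idea concrete, the following is a minimal sketch of one ZA-LMS iteration (the function name, step size and regularization weight are illustrative choices, not values from the paper):

```python
import numpy as np

def za_lms_update(w, x, d, mu=0.05, rho=1e-4):
    """One ZA-LMS iteration (sketch): a standard LMS step followed by
    an l1-norm zero attractor that pulls small coefficients toward zero."""
    e = d - x @ w              # a priori estimation error
    w = w + mu * e * x         # LMS gradient-descent step
    w = w - rho * np.sign(w)   # zero attractor from the l1-norm penalty
    return w, e
```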
In this paper, a correntropy induced metric (CIM) penalized SM-PNLMS algorithm is proposed to fully utilize the sparsity of acoustic and echo channels. The CIM scheme is developed within the kernel framework, and the CIM penalty is integrated into the cost function of the SM-PNLMS algorithm to create an approximate $l_0$-norm zero attractor. The proposed CIM constrained SM-PNLMS (CIMSM-PNLMS) algorithm is derived in detail, and its estimation behavior is investigated on an underwater acoustic channel and an echo channel. The simulation results illustrate that the proposed CIMSM-PNLMS algorithm converges faster and provides a smaller estimation error than the NLMS, PNLMS, IPNLMS, SM-PNLMS and zero-attracting SM-PNLMS (ZASM-PNLMS) algorithms.
The structure of the paper is as follows. Section 2 reviews the SM filtering theory and the SM-PNLMS algorithm. Section 3 derives the proposed CIMSM-PNLMS algorithm within the framework of the SM and zero-attraction theories. Section 4 presents the estimation performance of the proposed CIMSM-PNLMS algorithm. Section 5 concludes the paper.

2. Review of Related Algorithms

2.1. Review of the SM Filtering Theory

Assume that the channel input vector is $\mathbf{x}(n) = [x(n), x(n-1), x(n-2), \ldots, x(n-N+1)]^{T}$ and that the unknown impulse response of a finite impulse response (FIR) channel is $\mathbf{h} = [h_0, h_1, \ldots, h_{N-1}]^{T}$, where $N$ is the number of channel coefficients. The output of the adaptive estimator is
$$y(n) = \mathbf{x}^{T}(n)\hat{\mathbf{h}}(n), \tag{1}$$
where $\hat{\mathbf{h}}(n)$ denotes the estimated channel vector at instant $n$. The desired signal is
$$d(n) = \mathbf{x}^{T}(n)\mathbf{h} + v(n), \tag{2}$$
where $v(n)$ is a noise signal assumed to be independent of the input signal $\mathbf{x}(n)$. The estimation error is then
$$e(n) = d(n) - y(n). \tag{3}$$
The SM technique defines a model space $\Theta$ containing the input-output vector pairs, and a specified error bound is used to decide whether each data pair triggers an update. The goal of the SM filtering criterion is to solve an optimization subject to
$$|e(n)|^2 \le \gamma^2, \tag{4}$$
where $\gamma$ represents the specified bound [2]. To ensure $\hat{\mathbf{h}}(n+1) \in \Theta$, the optimization problem of the SM-NLMS becomes [2,6,8,9,10,11]
$$\min \left\| \hat{\mathbf{h}}(n+1) - \hat{\mathbf{h}}(n) \right\|_2^2 \quad \text{s.t.} \quad d(n) - \mathbf{x}^{T}(n)\hat{\mathbf{h}}(n+1) = \gamma. \tag{5}$$
A Lagrange multiplier method is used to solve the minimization in Equation (5). The updating equation of the SM-NLMS [8] is then
$$\hat{\mathbf{h}}(n+1) = \hat{\mathbf{h}}(n) + \mu_{\mathrm{SM}}(n)\,\frac{\mathbf{x}(n)\,e(n)}{\mathbf{x}^{T}(n)\mathbf{x}(n) + \varepsilon_{\mathrm{SM}}}, \tag{6}$$
where
$$\mu_{\mathrm{SM}}(n) = \begin{cases} 1 - \dfrac{\gamma}{|e(n)|}, & \text{if } |e(n)| > \gamma, \\[4pt] 0, & \text{otherwise}, \end{cases} \tag{7}$$
and $\varepsilon_{\mathrm{SM}}$ is a small positive constant that prevents the denominator from being zero. The update in Equation (6) occurs only when $d(n) - \mathbf{x}^{T}(n)\hat{\mathbf{h}}(n) > \gamma$ or $d(n) - \mathbf{x}^{T}(n)\hat{\mathbf{h}}(n) < -\gamma$, i.e., when $|e(n)| > \gamma$ [9,37].
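A minimal sketch of this update rule, written directly from Equations (6) and (7) (function and variable names are illustrative):

```python
import numpy as np

def sm_nlms_update(h_hat, x, d, gamma, eps=1e-8):
    """One SM-NLMS iteration following Eqs. (6)-(7): the filter is
    updated only when the error magnitude exceeds the bound gamma."""
    e = d - x @ h_hat                                # a priori error
    if abs(e) > gamma:                               # information evaluation step
        mu = 1.0 - gamma / abs(e)                    # adaptive step size, Eq. (7)
        h_hat = h_hat + mu * e * x / (x @ x + eps)   # parameter update, Eq. (6)
    return h_hat, e
```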

2.2. Review of the SM-PNLMS Algorithm

Similar to the PNLMS algorithm [22], a gain assignment matrix is incorporated into the SM-NLMS algorithm to construct the SM-PNLMS algorithm, whose updating equation is
$$\hat{\mathbf{h}}(n+1) = \hat{\mathbf{h}}(n) + \mu_{\mathrm{SM}}(n)\,\frac{\mathbf{G}(n)\mathbf{x}(n)\,e(n)}{\mathbf{x}^{T}(n)\mathbf{G}(n)\mathbf{x}(n) + \varepsilon_{\mathrm{SM}}}, \tag{8}$$
where $\mu_{\mathrm{SM}}(n)$ is the overall step size and $\mathbf{G}(n)$ is a diagonal matrix that assigns a different step size to each coefficient, defined as
$$\mathbf{G}(n) = \mathrm{diag}\{g_0(n), g_1(n), \ldots, g_{N-1}(n)\}, \tag{9}$$
where each element of $\mathbf{G}(n)$ is calculated by
$$g_i(n) = \frac{\alpha_i(n)}{\sum_{i=0}^{N-1} \alpha_i(n)}, \quad 0 \le i \le N-1, \tag{10}$$
$$\alpha_i(n) = \max\left\{\rho \max\left[\delta, |\hat{h}_0(n)|, |\hat{h}_1(n)|, \ldots, |\hat{h}_{N-1}(n)|\right], |\hat{h}_i(n)|\right\}. \tag{11}$$
Here, $\rho$ is a positive parameter, usually chosen in the range $1/N$ to $5/N$, that prevents $\hat{h}_i(n)$ from stalling when its magnitude is much smaller than that of the largest coefficient; herein, we set $\rho = 5/N$. The parameter $\delta$ is a regularization parameter that prevents the update from stalling when all the coefficients are zero at the initial iterations. In this paper, we set $\delta = 0.5$.
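The gain computation of Equations (9)-(11) can be sketched as follows (the diagonal of $\mathbf{G}(n)$ is stored as a vector; names are illustrative):

```python
import numpy as np

def pnlms_gain(h_hat, rho, delta):
    """Proportionate gains of Eqs. (9)-(11): larger coefficients receive
    larger step sizes; rho and delta prevent small taps from stalling."""
    h_abs = np.abs(h_hat)
    # Eq. (11): floor each gain by a fraction of the largest magnitude
    alpha = np.maximum(rho * max(delta, h_abs.max()), h_abs)
    return alpha / alpha.sum()   # Eq. (10): normalized gains, diag of G(n)
```

Note that with $\rho = 5/N$ and $\delta = 0.5$, all gains reduce to the uniform value $1/N$ at the all-zero initialization, so the algorithm starts out behaving like the SM-NLMS.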

3. The Proposed CIMSM-PNLMS Algorithm

Although the PNLMS algorithm can exploit sparse channel characteristics and the SM-PNLMS algorithm reduces the computational complexity of the PNLMS by using the SM technique, these algorithms may still perform worse than the NLMS and SM-NLMS, respectively. The PNLMS improves the convergence only at the initial iterations, and it may exhibit worse steady-state estimation behavior or slower convergence as it approaches steady state [22,23]. To further improve the convergence and estimation performance of the SM-PNLMS and to exploit the sparsity of acoustic channels, a CIM penalized SM-PNLMS (CIMSM-PNLMS) algorithm is proposed by introducing a CIM penalty into the cost function of the SM-PNLMS algorithm. The CIM constraint further exploits the sparsity of sparse systems or channels and improves the convergence and steady-state performance at the later iterations. Furthermore, an $l_1$-norm and a reweighted $l_1$-norm penalty are also used to construct the zero attracting SM-PNLMS (ZASM-PNLMS) and reweighted ZASM-PNLMS (RZASM-PNLMS) algorithms for comparison. Herein, the CIM is discussed within the kernel framework [39,40,41]. The correntropy of two arbitrary vectors can be described as
$$V(\mathbf{X}, \mathbf{Y}) = \frac{1}{N} \sum_{i=1}^{N} k(x_i, y_i), \tag{12}$$
where $N$ denotes the number of elements in the vectors and $k(\cdot)$ represents the kernel function. A Gaussian kernel is used to develop the CIM, so $k(\cdot)$ is written as
$$k(x, y) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left(-\frac{(x - y)^2}{2\sigma^2}\right), \tag{13}$$
where $\sigma$ denotes the kernel width. The nonlinear metric CIM is defined as
$$\mathrm{CIM}(\mathbf{X}, \mathbf{Y}) = \left(k(0) - V(\mathbf{X}, \mathbf{Y})\right)^{1/2}. \tag{14}$$
Squaring both sides of Equation (14) and setting $\mathbf{Y} = \mathbf{0}$, we obtain
$$\mathrm{CIM}^2(\mathbf{X}, \mathbf{0}) = \frac{k(0)}{N} \sum_{i=1}^{N} \left(1 - \exp\left(-\frac{x_i^2}{2\sigma^2}\right)\right). \tag{15}$$
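A short sketch of Equation (15) follows (illustrative helper name). For a small kernel width $\sigma$, each summand approaches 1 for a nonzero entry and 0 for a zero entry, so $\mathrm{CIM}^2$ behaves like a smooth, scaled $l_0$-norm count:

```python
import numpy as np

def cim_squared(x, sigma):
    """CIM^2(x, 0) of Eq. (15), a smooth l0-norm approximation;
    k(0) = 1/(sigma*sqrt(2*pi)) for the Gaussian kernel of Eq. (13)."""
    k0 = 1.0 / (sigma * np.sqrt(2.0 * np.pi))
    return (k0 / x.size) * np.sum(1.0 - np.exp(-x**2 / (2.0 * sigma**2)))
```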
Then, the CIM is incorporated into the cost function of the SM-PNLMS algorithm to develop the CIMSM-PNLMS algorithm. To ensure $\hat{\mathbf{h}}(n+1) \in \Theta$, the proposed CIMSM-PNLMS algorithm solves the optimization problem
$$\min \left[\hat{\mathbf{h}}(n+1) - \hat{\mathbf{h}}(n)\right]^{T}\mathbf{G}^{-1}(n)\left[\hat{\mathbf{h}}(n+1) - \hat{\mathbf{h}}(n)\right] + \rho_{\mathrm{CIM}}\,\mathbf{G}^{-1}(n)\,\mathrm{CIM}^2\!\left(\hat{\mathbf{h}}(n+1), \mathbf{0}\right) \quad \text{s.t.} \quad d(n) - \mathbf{x}^{T}(n)\hat{\mathbf{h}}(n+1) = \gamma. \tag{16}$$
The corresponding cost function of the CIMSM-PNLMS is
$$J(n) = \left[\hat{\mathbf{h}}(n+1) - \hat{\mathbf{h}}(n)\right]^{T}\mathbf{G}^{-1}(n)\left[\hat{\mathbf{h}}(n+1) - \hat{\mathbf{h}}(n)\right] + \rho_{\mathrm{CIM}}\,\mathbf{G}^{-1}(n)\,\mathrm{CIM}^2\!\left(\hat{\mathbf{h}}(n+1), \mathbf{0}\right) + \lambda\left[d(n) - \mathbf{x}^{T}(n)\hat{\mathbf{h}}(n+1) - \gamma\right]. \tag{17}$$
Setting
$$\frac{\partial J(n)}{\partial \hat{\mathbf{h}}(n+1)} = \mathbf{0} \tag{18}$$
and
$$\frac{\partial J(n)}{\partial \lambda} = 0, \tag{19}$$
we obtain
$$\mathbf{G}^{-1}(n)\left[\hat{\mathbf{h}}(n+1) - \hat{\mathbf{h}}(n)\right] + \rho_{\mathrm{CIM}}\,\mathbf{G}^{-1}(n)\,\frac{\partial\,\mathrm{CIM}^2\!\left(\hat{\mathbf{h}}(n+1), \mathbf{0}\right)}{\partial\,\hat{\mathbf{h}}(n+1)} - \lambda\,\mathbf{x}(n) = \mathbf{0} \tag{20}$$
and
$$\mathbf{x}^{T}(n)\hat{\mathbf{h}}(n+1) = d(n) - \gamma. \tag{21}$$
Left-multiplying both sides of Equation (20) by $\mathbf{G}(n)$ gives
$$\hat{\mathbf{h}}(n+1) - \hat{\mathbf{h}}(n) + \rho_{\mathrm{CIM}}\,\frac{\partial\,\mathrm{CIM}^2\!\left(\hat{\mathbf{h}}(n+1), \mathbf{0}\right)}{\partial\,\hat{\mathbf{h}}(n+1)} - \lambda\,\mathbf{G}(n)\mathbf{x}(n) = \mathbf{0}. \tag{22}$$
Thus, we have
$$\hat{\mathbf{h}}(n+1) = \hat{\mathbf{h}}(n) + \lambda\,\mathbf{G}(n)\mathbf{x}(n) - \rho_{\mathrm{CIM}}\,\frac{\partial\,\mathrm{CIM}^2\!\left(\hat{\mathbf{h}}(n+1), \mathbf{0}\right)}{\partial\,\hat{\mathbf{h}}(n+1)}. \tag{23}$$
Left-multiplying Equation (23) by $\mathbf{x}^{T}(n)$ and substituting the result into Equation (21) yields
$$\lambda = \frac{e(n) - \gamma + \mathbf{x}^{T}(n)\,\rho_{\mathrm{CIM}}\,\dfrac{\partial\,\mathrm{CIM}^2\!\left(\hat{\mathbf{h}}(n+1), \mathbf{0}\right)}{\partial\,\hat{\mathbf{h}}(n+1)}}{\mathbf{x}^{T}(n)\mathbf{G}(n)\mathbf{x}(n)}. \tag{24}$$
Substituting Equation (24) into Equation (23), we obtain
$$\hat{\mathbf{h}}(n+1) = \hat{\mathbf{h}}(n) + \mu_{\mathrm{CIM}}(n)\,\frac{\mathbf{G}(n)\mathbf{x}(n)\,e(n)}{\mathbf{x}^{T}(n)\mathbf{G}(n)\mathbf{x}(n)} - \rho_{\mathrm{CIM}}\left[\mathbf{I} - \frac{\mathbf{x}(n)\mathbf{x}^{T}(n)\mathbf{G}(n)}{\mathbf{x}^{T}(n)\mathbf{G}(n)\mathbf{x}(n)}\right]\frac{\partial\,\mathrm{CIM}^2\!\left(\hat{\mathbf{h}}(n+1), \mathbf{0}\right)}{\partial\,\hat{\mathbf{h}}(n+1)}. \tag{25}$$
To reduce the computational burden without significantly disturbing the minimum-disturbance solution, the term $\frac{\mathbf{x}(n)\mathbf{x}^{T}(n)\mathbf{G}(n)}{\mathbf{x}^{T}(n)\mathbf{G}(n)\mathbf{x}(n)}$ is ignored, similarly to [24,37,42]. In addition, from Equation (15), we get
$$\mathrm{CIM}^2\!\left(\hat{\mathbf{h}}(n+1), \mathbf{0}\right) = \frac{1}{N\sigma\sqrt{2\pi}} \sum_{i=0}^{N-1}\left(1 - \exp\left(-\frac{\hat{h}_i^2(n+1)}{2\sigma^2}\right)\right). \tag{26}$$
Thus, we have, elementwise,
$$\frac{\partial\,\mathrm{CIM}^2\!\left(\hat{\mathbf{h}}(n+1), \mathbf{0}\right)}{\partial\,\hat{\mathbf{h}}(n+1)} = \frac{1}{N\sigma^{3}\sqrt{2\pi}}\,\hat{\mathbf{h}}(n+1)\exp\left(-\frac{\hat{\mathbf{h}}^{2}(n+1)}{2\sigma^{2}}\right). \tag{27}$$
Assuming that $\hat{\mathbf{h}}(n+1) \approx \hat{\mathbf{h}}(n)$ in the zero-attraction term, the update equation of the CIMSM-PNLMS is
$$\hat{\mathbf{h}}(n+1) = \hat{\mathbf{h}}(n) + \mu_{\mathrm{CIM}}(n)\,\frac{\mathbf{G}(n)\mathbf{x}(n)\,e(n)}{\mathbf{x}^{T}(n)\mathbf{G}(n)\mathbf{x}(n) + \varepsilon_{\mathrm{CIM}}} - \frac{\rho_{\mathrm{CIM}}}{N\sigma^{3}\sqrt{2\pi}}\,\hat{\mathbf{h}}(n)\exp\left(-\frac{\hat{\mathbf{h}}^{2}(n)}{2\sigma^{2}}\right), \tag{28}$$
where $\mu_{\mathrm{CIM}}(n)$ is obtained from Equation (7), and $\varepsilon_{\mathrm{CIM}}$ in the second term of Equation (28) is a small positive constant that prevents the denominator from being zero. $\mathbf{G}(n)$ in the CIMSM-PNLMS is the same as in the SM-PNLMS algorithm. When $\hat{\mathbf{h}}(n)$ already lies in $\Theta$ (i.e., $|e(n)| \le \gamma$), the updating equation of the CIMSM-PNLMS algorithm is simply
$$\hat{\mathbf{h}}(n+1) = \hat{\mathbf{h}}(n). \tag{29}$$
Compared with the SM-PNLMS algorithm, the updating equation of the proposed CIMSM-PNLMS has the additional term $\frac{\rho_{\mathrm{CIM}}}{N\sigma^{3}\sqrt{2\pi}}\,\hat{\mathbf{h}}(n)\exp\left(-\frac{\hat{\mathbf{h}}^{2}(n)}{2\sigma^{2}}\right)$, which is denoted as the CIM zero attractor. The strength of the CIM zero attractor is controlled by the parameter $\rho_{\mathrm{CIM}}$.
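Putting Equations (7), (9)-(11), (28) and (29) together, one iteration of the CIMSM-PNLMS can be sketched as follows (this reuses the `pnlms_gain` helper sketched above; names and defaults are illustrative):

```python
import numpy as np

def cimsm_pnlms_update(h_hat, x, d, gamma, rho_cim, sigma,
                       rho=None, delta=0.5, eps=1e-8):
    """One CIMSM-PNLMS iteration: a set-membership proportionate step
    plus the CIM zero attractor of Eq. (28)."""
    N = h_hat.size
    rho = 5.0 / N if rho is None else rho       # paper's choice: rho = 5/N
    e = d - x @ h_hat                           # a priori error
    if abs(e) > gamma:                          # otherwise Eq. (29): no update
        g = pnlms_gain(h_hat, rho, delta)       # diag of G(n), Eqs. (9)-(11)
        mu = 1.0 - gamma / abs(e)               # step size, Eq. (7)
        gx = g * x                              # G(n) x(n) for diagonal G(n)
        # CIM zero attractor evaluated at h_hat(n), per the approximation
        # h_hat(n+1) ~ h_hat(n) used in Eq. (28)
        c = rho_cim / (N * sigma**3 * np.sqrt(2.0 * np.pi))
        za = c * h_hat * np.exp(-h_hat**2 / (2.0 * sigma**2))
        h_hat = h_hat + mu * e * gx / (x @ gx + eps) - za
    return h_hat, e
```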
In summary, the proposed CIMSM-PNLMS algorithm exploits the sparsity of a sparse channel or system through the zero-attraction penalty. The CIMSM-PNLMS algorithm can be summarized as
$$\hat{\mathbf{h}}(n+1) = \hat{\mathbf{h}}(n) + \underbrace{\mathbf{A}_1}_{P_1:\ \text{SM-PNLMS algorithm}} + \underbrace{\mathbf{A}_2}_{P_2:\ \text{CIMSM-PNLMS algorithm}}, \tag{30}$$
where
$$\mathbf{A}_1 = \mu_{\mathrm{CIM}}(n)\,\frac{\mathbf{G}(n)\mathbf{x}(n)\,e(n)}{\mathbf{x}^{T}(n)\mathbf{G}(n)\mathbf{x}(n) + \varepsilon_{\mathrm{CIM}}}, \tag{31}$$
$$\mathbf{A}_2 = -\frac{\rho_{\mathrm{CIM}}}{N\sigma^{3}\sqrt{2\pi}}\,\hat{\mathbf{h}}(n)\exp\left(-\frac{\hat{\mathbf{h}}^{2}(n)}{2\sigma^{2}}\right). \tag{32}$$
The $\mathbf{A}_1$ term is the adaptive update term, and the $\mathbf{A}_2$ term is the sparsity constraint term, which acts as a zero attractor. The CIMSM-PNLMS algorithm therefore provides two update paths, $P_1$ and $P_2$: path $P_1$ moves $\hat{\mathbf{h}}(n)$ toward the hyperplane defined by $e(n) = 0$, while $P_2$ is the zero attractor that pulls the zero or near-zero coefficients of $\hat{\mathbf{h}}(n)$ toward zero. Similar to the CIMSM-PNLMS, we also propose a zero attracting SM-PNLMS (ZASM-PNLMS) algorithm, which is implemented by integrating an $l_1$-norm penalty into the cost function of the SM-PNLMS algorithm; its updating equation is
$$\hat{\mathbf{h}}(n+1) = \hat{\mathbf{h}}(n) + \mu_{\mathrm{ZASM}}(n)\,\frac{\mathbf{G}(n)\mathbf{x}(n)\,e(n)}{\mathbf{x}^{T}(n)\mathbf{G}(n)\mathbf{x}(n) + \varepsilon_{\mathrm{ZASM}}} - \rho_{\mathrm{ZASM}}\,\mathrm{sgn}\!\left[\hat{\mathbf{h}}(n)\right], \tag{33}$$
where $\mu_{\mathrm{ZASM}}(n)$ is the same as in Equation (7) and $\varepsilon_{\mathrm{ZASM}}$ is a small positive constant that prevents the denominator from being zero. $\mathbf{G}(n)$ in the ZASM-PNLMS is the same as in the SM-PNLMS algorithm.
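For comparison, a ZASM-PNLMS sketch differs from the CIMSM-PNLMS sketch above only in its zero attractor, which comes from the $l_1$-norm penalty (illustrative helper name):

```python
import numpy as np

def zasm_attractor(h_hat, rho_zasm):
    """l1-norm zero attractor of Eq. (33); this term replaces the CIM
    attractor and is subtracted from the SM-PNLMS update."""
    return rho_zasm * np.sign(h_hat)
```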

4. Performance of the Proposed CIMSM-PNLMS Algorithm

To analyze the performance of the proposed CIMSM-PNLMS algorithm, five experiments are constructed within sparse channel estimation scenarios. The first experiment investigates the convergence speed of the proposed CIMSM-PNLMS. The second experiment presents the mean square error (MSE) of the CIMSM-PNLMS at different SNRs. The third experiment evaluates the CIMSM-PNLMS at different sparsity levels. The fourth experiment investigates the key parameters of the CIMSM-PNLMS in detail. The fifth experiment verifies the performance of the CIMSM-PNLMS on an acoustic echo channel.
Experiment 1.
The proposed CIMSM-PNLMS algorithm is first investigated on an underwater acoustic channel model with a length of 64 to assess its convergence speed. Only four active coefficients are considered in this underwater acoustic channel [43]; the active coefficients are 1 and the other coefficients are 0. The signal-to-noise ratio (SNR) is set to 30 dB, where the input signal power is normalized to 1 ($\delta_s^2 = 1$) and the noise power is $\delta_v^2 = 1 \times 10^{-3}$. One hundred Monte Carlo runs are averaged to obtain each point. The convergence speed of the proposed CIMSM-PNLMS is compared with that of the NLMS, PNLMS, IPNLMS, SM-PNLMS, ZASM-PNLMS, RZASM-PNLMS and CIMSM-NLMS [44] algorithms. The noise is assumed to be white Gaussian noise independent of the input signal. The MSE is used to evaluate the estimation performance, and the simulation parameters are: $\mu_{\mathrm{NLMS}} = 0.45$, $\mu_{\mathrm{PNLMS}} = \mu_{\mathrm{IPNLMS}} = 0.4$, $a = 0.5$, $\rho_{\mathrm{ZASM}} = \rho_{\mathrm{RZASM}} = 6 \times 10^{-6}$, $\rho_{\mathrm{CIMN}} = \rho_{\mathrm{CIM}} = 8 \times 10^{-5}$, $\gamma = \sqrt{2\delta_v^2}$, $\sigma = 0.05$. Here, $\mu_{\mathrm{NLMS}}$, $\mu_{\mathrm{PNLMS}}$ and $\mu_{\mathrm{IPNLMS}}$ denote the step sizes of the NLMS, PNLMS and IPNLMS, respectively; $a$ denotes the adjusting parameter of the IPNLMS, whose value range is $[-1, 1]$; $\rho_{\mathrm{ZASM}}$, $\rho_{\mathrm{RZASM}}$, $\rho_{\mathrm{CIMN}}$ and $\rho_{\mathrm{CIM}}$ denote the zero-attraction factors of the ZASM-PNLMS, RZASM-PNLMS, CIMSM-NLMS and CIMSM-PNLMS; $\gamma$ denotes the error bound of the SM technique; and $\sigma$ denotes the kernel width. The simulation result is shown in Figure 1. The proposed CIMSM-PNLMS achieves the fastest convergence. The PNLMS algorithm converges quickly at the initial iterations, but its convergence slows down dramatically at the later iterations. Because our proposed CIMSM-PNLMS algorithm integrates a CIM zero attractor, its convergence is accelerated at the later iterations.
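A minimal sketch of this setup follows (reusing the `cimsm_pnlms_update` sketch above; the iteration count, the random tap locations and the coefficient-deviation definition of the MSE are assumptions for illustration, not specified in the paper):

```python
import numpy as np

N, K, n_iter, n_runs = 64, 4, 1000, 100       # n_iter is illustrative
noise_var = 1e-3                              # SNR = 30 dB, unit input power
gamma = np.sqrt(2 * noise_var)                # SM error bound

mse = np.zeros(n_iter)
rng = np.random.default_rng(0)
for _ in range(n_runs):
    h = np.zeros(N)
    h[rng.choice(N, K, replace=False)] = 1.0  # 4 active unit taps
    h_hat, x_buf = np.zeros(N), np.zeros(N)
    for n in range(n_iter):
        x_buf = np.roll(x_buf, 1)             # tapped delay line x(n)
        x_buf[0] = rng.standard_normal()
        d = x_buf @ h + np.sqrt(noise_var) * rng.standard_normal()
        h_hat, _ = cimsm_pnlms_update(h_hat, x_buf, d, gamma,
                                      rho_cim=8e-5, sigma=0.05)
        mse[n] += np.sum((h - h_hat) ** 2)
mse_db = 10 * np.log10(mse / n_runs)          # averaged MSE curve in dB
```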
Experiment 2.
The estimation behavior of the proposed CIMSM-PNLMS algorithm at different SNRs is analyzed here. In this experiment, the length of the channel is 64 and only one coefficient is active. The SNR is set to 30 dB, 20 dB and 10 dB, respectively. To obtain the same initial convergence speed, the parameters are set as follows: $\mu_{\mathrm{NLMS}} = 0.8$, $\mu_{\mathrm{PNLMS}} = 0.65$, $\mu_{\mathrm{IPNLMS}} = 0.6$, $a = 0.5$, $\rho_{\mathrm{ZASM}} = 3 \times 10^{-5}$, $\rho_{\mathrm{RZASM}} = 6 \times 10^{-5}$, $\rho_{\mathrm{CIMN}} = 7 \times 10^{-5}$, $\rho_{\mathrm{CIM}} = 5 \times 10^{-5}$, $\gamma = \sqrt{2\delta_v^2}$, $\sigma = 0.01$. The simulation results are given in Figure 2, Figure 3 and Figure 4 for SNR = 30 dB, 20 dB and 10 dB, respectively. The proposed CIMSM-PNLMS algorithm has the smallest estimation error in terms of the MSE, and the larger the SNR, the smaller the MSE. Even when the SNR is 10 dB, the CIMSM-PNLMS algorithm still possesses the lowest MSE. In short, the proposed CIMSM-PNLMS algorithm provides good estimation performance even when the channel conditions are poor. It is worth noting that the estimation performance of the ZASM-PNLMS algorithm degrades at SNR = 10 dB.
Experiment 3.
In this experiment, the proposed CIMSM-PNLMS algorithm is studied in detail at various sparsity levels. Here, the sparsity level $K$ is defined as the number of active channel coefficients, and $K = 1$, $K = 4$ and $K = 8$ are used to investigate the estimation performance. The related parameters are $\mu_{\mathrm{NLMS}} = 0.75$, $\mu_{\mathrm{PNLMS}} = 0.65$, $\mu_{\mathrm{IPNLMS}} = 0.5$, $a = 0$, $\rho_{\mathrm{ZASM}} = \rho_{\mathrm{RZASM}} = \rho_{\mathrm{CIMN}} = \rho_{\mathrm{CIM}} = 9 \times 10^{-6}$, $\gamma = \sqrt{2\delta_v^2}$, $\sigma = 0.01$. The numerical results are shown in Figure 5, Figure 6 and Figure 7 for $K = 1$, $K = 4$ and $K = 8$, respectively. The proposed CIMSM-PNLMS outperforms the PNLMS, SM-PNLMS, ZASM-PNLMS, RZASM-PNLMS, CIMSM-NLMS and IPNLMS algorithms. As the sparsity level $K$ increases from 1 to 8, the proposed CIMSM-PNLMS algorithm remains the best of all the mentioned algorithms with respect to the MSE; even at $K = 8$, it still achieves the lowest MSE. However, the estimation gain diminishes as $K$ increases.
Experiment 4.
The key parameters of the proposed CIMSM-PNLMS algorithm are investigated to determine their effects on the convergence and the MSE. Here, we set the sparsity level $K = 2$ and SNR = 30 dB. The effect of $\rho_{\mathrm{CIM}}$ on the MSE is presented in Figure 8. As $\rho_{\mathrm{CIM}}$ decreases from $5 \times 10^{-4}$ to $5 \times 10^{-5}$, the MSE is reduced, and the lowest MSE is achieved at $\rho_{\mathrm{CIM}} = 5 \times 10^{-5}$. As $\rho_{\mathrm{CIM}}$ continues to decrease from $5 \times 10^{-5}$ to $5 \times 10^{-8}$, the MSE increases again, i.e., the estimation error grows. The effect of $\rho_{\mathrm{CIM}}$ on the MSE at different SNRs is also given in Figure 9; $\rho_{\mathrm{CIM}}$ has a similar effect on the MSE at each SNR, and the proposed CIMSM-PNLMS algorithm achieves the smallest MSE at $\rho_{\mathrm{CIM}} = 5 \times 10^{-5}$. The effect of $\gamma$ is reported in Figure 10. As $\gamma$ decreases, the MSE is reduced and the convergence is accelerated; however, the MSE stops decreasing once $\gamma$ is less than 0.1. Thus, proper parameters should be selected to obtain good estimation performance from the proposed CIMSM-PNLMS.
Experiment 5.
Finally, the proposed algorithm is used to estimate an acoustic echo channel to further verify the estimation performance of the CIMSM-PNLMS algorithm. A typical acoustic echo channel with a length of 256 is considered, in which the active coefficients are generated by an exponential function and the other coefficients are zero. The sparsity of this acoustic echo channel is measured by
$$\xi_{12}(\mathbf{h}) = \frac{N}{N - \sqrt{N}}\left(1 - \frac{\|\mathbf{h}\|_1}{\sqrt{N}\,\|\mathbf{h}\|_2}\right),$$
and herein $\xi_{12}(\mathbf{h}) = 0.7416$. The related parameters are $\mu_{\mathrm{PNLMS}} = 0.8$, $\mu_{\mathrm{IPNLMS}} = 0.75$, $\rho_{\mathrm{ZASM}} = 5 \times 10^{-7}$, $\rho_{\mathrm{RZASM}} = 7 \times 10^{-7}$, $\rho_{\mathrm{CIMN}} = 3 \times 10^{-7}$, $\rho_{\mathrm{CIM}} = 3 \times 10^{-6}$. The numerical results are shown in Figure 11. The CIMSM-PNLMS still performs best in comparison with the NLMS, PNLMS, IPNLMS, SM-PNLMS, ZASM-PNLMS, RZASM-PNLMS and CIMSM-NLMS algorithms.
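The sparsity measure can be sketched as follows (illustrative helper name; it assumes a nonzero $\mathbf{h}$):

```python
import numpy as np

def sparsity_xi12(h):
    """Sparsity measure xi_12(h): close to 1 for a maximally sparse
    vector and close to 0 for a uniform (non-sparse) one."""
    N = h.size
    l1 = np.linalg.norm(h, 1)
    l2 = np.linalg.norm(h, 2)
    return (N / (N - np.sqrt(N))) * (1.0 - l1 / (np.sqrt(N) * l2))
```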

5. Conclusions

In this paper, a robust sparse SM-PNLMS algorithm with a CIM zero attractor has been proposed, and its performance has been investigated in detail on various acoustic channels. The CIMSM-PNLMS algorithm has been derived in detail, and the effects of its key parameters on the MSE and convergence have been studied. The proposed CIMSM-PNLMS algorithm has been used to estimate acoustic channels at different sparsity levels to verify its effectiveness. The numerical results show that the proposed CIMSM-PNLMS algorithm provides the fastest convergence speed and the lowest MSE when estimating underwater acoustic channels and acoustic echo channels at different SNRs and sparsity levels.

Acknowledgments

This work was partially supported by the National Key Research and Development Program of China-Government Corporation Special Program (2016YFE0111100), the National Science Foundation of China (61571149), the Science and Technology innovative Talents Foundation of Harbin (2016RAXXJ044), Projects for the Selected Returned Overseas Chinese Scholars of Heilongjiang Province and Ministry of Human Resources and Social Security of the People’s Republic of China (MOHRSS) of China, and the Foundational Research Funds for the Central Universities (HEUCF160815, HEUCFD1433).

Author Contributions

Zhan Jin wrote the draft of the paper, wrote the code, and performed the simulations. Yingsong Li helped check the code and simulations and proposed the idea of the CIMSM-PNLMS and ZASM-PNLMS algorithms. Yanyan Wang provided analysis of the CIMSM-PNLMS and ZASM-PNLMS algorithms. All authors wrote the paper together, and all have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Haykin, S. Adaptive Filter Theory, 4th ed.; Prentice Hall: Englewood Cliffs, NJ, USA, 2002. [Google Scholar]
  2. Diniz, P.S.R. Adaptive Filtering: Algorithms and Practical Implementation, 4th ed.; Springer: New York, NY, USA, 2013. [Google Scholar]
  3. Sayed, A.H. Fundamentals of Adaptive Filtering; Wiley-IEEE: New York, NY, USA, 2003. [Google Scholar]
  4. Combettes, P.L. The foundations of set theoretic estimation. Proc. IEEE 1993, 81, 182–208. [Google Scholar] [CrossRef]
  5. Nagaraj, S.; Gollamudi, S.; Kapoor, S.; Huang, Y. An adaptive set-membership filtering technique with sparse updates. IEEE Trans. Signal Process. 1999, 47, 2928–2941. [Google Scholar] [CrossRef]
  6. Werner, S.; Diniz, P.S.R. Set-membership affine projection algorithm. IEEE Signal Process. Lett. 2001, 8, 231–235. [Google Scholar] [CrossRef]
  7. Gollamudi, S.; Nagaraj, S.; Huang, Y.F. Blind equalization with a deterministic constant modulus cost-a set-membership filtering approach. In Proceedings of the 2000 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Istanbul, Turkey, 5–9 June 2000; pp. 2765–2768. [Google Scholar]
  8. Gollamudi, S.; Nagaraj, S.; Huang, Y.F. Set-membership filtering and a set-membership normalized LMS algorithm with an adaptive step size. IEEE Signal Process. Lett. 1998, 5, 111–114. [Google Scholar] [CrossRef]
  9. Diniz, P.S.R. Adaptive Filtering: Algorithms and Practical Implementations, 2nd ed.; Kluwer: Boston, MA, USA, 2002; pp. 234–237. [Google Scholar]
  10. De Lamare, R.C.; Diniz, P.S.R. Set-membership adaptive algorithms based on time-varying error bounds for CDMA interference suppression. IEEE Trans. Veh. Technol. 2009, 58, 644–654. [Google Scholar] [CrossRef]
  11. Bhotto, M.Z.A.; Antoniou, A. Robust set-membership affine projection adaptive-filtering algorithm. IEEE Trans. Signal Process. 2012, 60, 73–81. [Google Scholar] [CrossRef]
  12. Lin, T.M.; Nayeri, M.; Deller, J.R., Jr. Consistently convergent OBE algorithm with automatic selection of error bounds. Int. J. Adapt. Control Signal Process. 1998, 12, 302–324. [Google Scholar] [CrossRef]
  13. De Lamare, R.C.; Sampaio-Neto, R. Adaptive reduced-rank MMSE filtering with interpolated FIR filters and adaptive interpolators. IEEE Signal Process. Lett. 2005, 12, 177–180. [Google Scholar] [CrossRef]
  14. Clarke, P.; Lamare, R.C.D. Low-complexity reduced-rank linear interference suppression based on set-membership joint iterative optimization for DS-CDMA systems. IEEE Trans. Veh. Technol. 2011, 60, 4324–4337. [Google Scholar] [CrossRef]
  15. Cai, Y.; Lamare, R.C.D. Set-membership adaptive constant modulus beamforming based on generalized sidelobe cancellation with dynamic bounds. In Proceedings of the 10th International Symposium on Wireless Communication Systems, Ilmenau, Germany, 27–30 August 2013; pp. 194–198. [Google Scholar]
  16. Gu, Y.; Jin, J.; Mei, S. l0 norm constraint LMS algorithms for sparse system identification. IEEE Signal Process. Lett. 2009, 16, 774–777. [Google Scholar]
  17. Taheri, O.; Vorobyov, S.A. Sparse channel estimation with lp-norm and reweighted l1-norm penalized least mean squares. In Proceedings of the IEEE International Conference on Acoustic, Speech and Signal Processing (ICASSP), Prague, Czech Republic, 22–27 May 2011; pp. 2864–2867. [Google Scholar]
  18. Li, Y.; Wang, Y.; Jiang, T. Sparse channel estimation based on a p-norm-like constrained least mean fourth algorithm. In Proceedings of the 7th International Conference on Wireless Communications and Signal Processing (WCSP), Nanjing, China, 15–17 October 2015. [Google Scholar]
  19. Gui, G.; Adachi, F. Improved least mean square algorithm with application to adaptive sparse channel estimation. EURASIP J. Wirel. Commun. Netw. 2013, 2013, 204. [Google Scholar] [CrossRef]
  20. Li, Y.; Zhang, C.; Wang, S. Low complexity non-uniform penalized affine projection algorithm for sparse system identification. Circuits Syst. Signal Process. 2016, 35, 1611–1624. [Google Scholar] [CrossRef]
  21. Li, Y.; Li, W.; Yu, W.; Wan, J.; Li, Z. Sparse adaptive channel estimation based on lp-norm penalized affine projection algorithm. Int. J. Antennas Propag. 2014, 2014, 434659. [Google Scholar] [CrossRef]
  22. Duttweiler, D.L. Proportionate normalized least-mean-squares adaptation in echo cancelers. IEEE Trans. Speech Audio Process. 2000, 8, 508–518. [Google Scholar] [CrossRef]
  23. Deng, H.; Doroslovački, M. Improved convergence of the PNLMS algorithm for sparse impulse response identification. IEEE Signal Process. Lett. 2005, 12, 181–184. [Google Scholar] [CrossRef]
  24. Li, Y.; Hamamura, M. An improved proportionate normalized least-mean-square algorithm for broadband multipath channel estimation. Sci. World J. 2014, 2014, 1–9. [Google Scholar] [CrossRef] [PubMed]
  25. Li, Y.; Hamamura, M. Zero-attracting variable-step-size least mean square algorithms for adaptive sparse channel estimation. Int. J. Adapt. Control Signal Process. 2015, 29, 1189–1206. [Google Scholar] [CrossRef]
  26. Gui, G.; Peng, W.; Adachi, F. Sparse least mean fourth algorithm for adaptive channel estimation in low signal-to-noise ratio region. Int. J. Commun. Syst. 2014, 27, 3147–3157. [Google Scholar] [CrossRef]
  27. Gui, G.; Xu, L.; Matsushita, S. Improved adaptive sparse channel estimation using mixed square/fourth error criterion. J. Frankl. Inst. 2015, 352, 4579–4594. [Google Scholar] [CrossRef]
  28. Gui, G.; Mehbodniya, A.; Adachi, F. Least mean square/fourth algorithm for adaptive sparse channel estimation. In Proceedings of the 24th IEEE International Symposium on Personal Indoor and Mobile Radio Communications (PIMRC), London, UK, 8–11 September 2013; pp. 296–300. [Google Scholar]
  29. Li, Y.; Wang, Y.; Jiang, T. Norm-adaption penalized least mean square/fourth algorithm for sparse channel estimation. Signal Process. 2016, 128, 243–251. [Google Scholar] [CrossRef]
  30. Benesty, J.; Gay, S.L. An improved PNLMS algorithm. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, Orlando, FL, USA, 13–17 May 2002. [Google Scholar]
  31. Li, Y.; Wang, Y.; Jiang, T. Sparse-aware Set-membership NLMS algorithms and their application for sparse channel estimation and echo cancellation. Int. J. Electron. Commun. 2016, 70, 895–902. [Google Scholar] [CrossRef]
  32. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306. [Google Scholar] [CrossRef]
  33. Chen, Y.; Gu, Y.; Hero, A.O. Sparse LMS for system identification. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Taipei, Taiwan, 19–24 April 2009. [Google Scholar]
  34. Li, Y.; Hamamura, M. Smooth approximation l0-norm constrained affine projection algorithm and its applications in sparse channel estimation. Sci. World J. 2014, 2014, 937252. [Google Scholar]
  35. Meng, R.; de Lamare, R.C.; Nascimento, V.H. Sparsity-aware affine projection adaptive algorithms for system identification. In Proceedings of the Sensor Signal Processing for Defence (SSPD), London, UK, 27–29 September 2011. [Google Scholar]
  36. Lima, M.V.S.; Martins, W.A.; Diniz, P.S.R. Affine projection algorithms for sparse system identification. In Proceedings of the IEEE International Conference on Acoustic, Speech and Signal Processing (ICASSP), Vancouver, BC, Canada, 26–31 May 2013. [Google Scholar]
  37. Lima, M.V.S.; Ferreira, T.N.; Martins, W.A.; Diniz, P.S.R. Sparsity-Aware Data-Selective Adaptive Filters. IEEE Trans. Signal Process. 2014, 62, 4557–4572. [Google Scholar] [CrossRef]
  38. Lima, M.V.S.; Sobron, I.; Martins, W.A.; Diniz, P.S.R. Stability and MSE analyses of affine projection algorithms for sparse system identification. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Florence, Italy, 4–9 May 2014; pp. 6399–6403. [Google Scholar]
  39. Seth, S.; Principe, J.C. Compressed signal reconstruction using the correntropy induced metric. In Proceedings of the 2008 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Las Vegas, NV, USA, 31 March–4 April 2008; pp. 3845–3848. [Google Scholar]
  40. Chen, B.; Xing, L.; Liang, J.; Zheng, N.; Principe, J.C. Steady-state mean-square error analysis for adaptive filtering under the maximum correntropy criterion. IEEE Signal Process. Lett. 2014, 21, 880–884. [Google Scholar]
  41. Chen, B.; Xing, L.; Zhao, H.; Zheng, N.; Principe, J.C. Generalized correntropy for robust adaptive filtering. IEEE Trans. Signal Process. 2016, 64, 3376–3387. [Google Scholar] [CrossRef]
  42. Das, R.L.; Chakraborty, M. Improving the performance of the PNLMS algorithm using l1 norm regularization. IEEE/ACM Trans. Audio Speech Lang. Process. 2016, 24, 1280–1290. [Google Scholar] [CrossRef]
  43. George, Z. A novel MATLAB-based underwater acoustic channel simulator. J. Commun. Comput. 2013, 10, 1131–1138. [Google Scholar]
  44. Wang, Y.; Li, Y.; Albu, F.; Yang, R. Sparse Channel Estimation Using Correntropy Induced Metric Criterion Based SM-NLMS Algorithm. In Proceedings of the IEEE Wireless Communications and Networking Conference (WCNC), San Francisco, CA, USA, 19–22 March 2017; pp. 1–6. [Google Scholar]
Figure 1. Convergence of the CIM penalized SM-PNLMS (CIMSM-PNLMS) algorithm.
Figure 2. Estimation behavior of the CIMSM-PNLMS for SNR (signal-to-noise ratio) = 30 dB.
Figure 3. Estimation behavior of the CIMSM-PNLMS for SNR = 20 dB.
Figure 4. Estimation behavior of the CIMSM-PNLMS for SNR = 10 dB.
Figure 5. Estimation behavior of the CIMSM-PNLMS for $K = 1$.
Figure 6. Estimation behavior of the CIMSM-PNLMS for $K = 4$.
Figure 7. Estimation behavior of the CIMSM-PNLMS for $K = 8$.
Figure 8. $\rho_{\mathrm{CIM}}$ effects of the CIMSM-PNLMS algorithm on the mean square error (MSE).
Figure 9. $\rho_{\mathrm{CIM}}$ effects on the MSE at different SNRs.
Figure 10. Comparison of the MSE for different $\gamma$ values.
Figure 11. Comparison of the MSE for echo paths.
