Article

A New Variational Bayesian-Based Kalman Filter with Unknown Time-Varying Measurement Loss Probability and Non-Stationary Heavy-Tailed Measurement Noise

1 Department of Intelligent Systems Science and Engineering, Harbin Engineering University, Harbin 150001, China
2 Center for Control Theory and Guidance Technology, Harbin Institute of Technology, Harbin 150001, China
3 Department of Information and Communication Engineering, Harbin Engineering University, Harbin 150001, China
* Author to whom correspondence should be addressed.
Submission received: 7 September 2021 / Revised: 12 October 2021 / Accepted: 14 October 2021 / Published: 16 October 2021
(This article belongs to the Special Issue Advances in Image Fusion)

Abstract:
In this paper, a new variational Bayesian-based Kalman filter (KF) is presented to solve the filtering problem for a linear system with unknown time-varying measurement loss probability (UTVMLP) and non-stationary heavy-tailed measurement noise (NSHTMN). Firstly, the NSHTMN is modelled as a Gaussian-Student's t-mixture distribution by employing a Bernoulli random variable (BM). Secondly, by utilizing another Bernoulli random variable (BL), the likelihood function, which consists of two mixture distributions, is converted from a weighted sum to an exponential product, and a new hierarchical Gaussian state-space model is thereby established. Finally, the system state vector, BM, BL, the intermediate random variables, the mixing probability, and the UTVMLP are jointly inferred by employing the variational Bayesian technique. Simulation results reveal that, in the scenario of NSHTMN, the proposed filter outperforms current algorithms and further improves the estimation accuracy of the UTVMLP.

1. Introduction

Under the minimum mean square error criterion, the KF is the optimal estimator for the linear Gaussian state-space model [1,2] and has been widely employed in a variety of applications [3,4,5]. Unfortunately, in many practical applications, when a sensor suffers intermittent faults, its actual measurements may not be accurately represented by the KF measurement model [6,7]. When a random measurement loss occurs, the sensor output contains only pure noise. In this situation, the estimation accuracy of a typical KF drops significantly, or the filter even diverges. Various filtering methods have been developed to address the measurement loss problem, such as the intermittent KF (IKF) [8,9]. However, the IKF rests on an important assumption: the measurement loss probability is known. In practical applications, the measurement loss probability is usually unknown, and the IKF is then no longer applicable [7].
To address the filtering problem of a linear system with unknown measurement loss probability, the first and second Bayesian Kalman filters were designed by estimating the process and the posterior distribution of the measurement loss, respectively [10]. These two filters, however, are no longer valid if the unknown measurement loss probability is time-varying. Recently, the variational Bayesian-based adaptive KF (VBAKF) was derived for a linear system with unknown time-varying measurement loss probability (UTVMLP), in which the system state vector and the UTVMLP are jointly estimated via the variational Bayesian technique [7]. The VBAKF shows excellent performance under white Gaussian measurement noise with known statistical characteristics. Unfortunately, in realistic engineering applications, measurement outliers may occur at various periods due to environmental changes and unreliable sensors, resulting in non-stationary heavy-tailed measurement noise (NSHTMN): when the system runs healthily, the measurement noise is Gaussian-distributed, and when time-varying measurement outliers corrupt the system, the measurement noise is heavy-tailed [11,12]. In the scenario of NSHTMN, the estimation accuracy of the VBAKF drops sharply.
Recently, some mixture distribution-based algorithms, such as the Gaussian-Student's t-mixture distribution-based KF (GSTKF) [13,14], have been presented to address NSHTMN. However, the filtering problem with both UTVMLP and NSHTMN cannot be solved directly by employing a mixture distribution: under this scenario, the likelihood function is a weighted sum of two mixture distributions, which is an unclosed and unconjugated distribution and therefore makes direct Bayesian inference difficult.
In this paper, a new variational Bayesian-based KF is presented to solve the filtering problem for linear discrete-time systems with UTVMLP and NSHTMN. Firstly, a Gaussian-Student's t-mixture distribution with BM is employed to model the NSHTMN. Secondly, the likelihood function is converted into an exponential product, and a new hierarchical Gaussian state-space model is constructed by utilizing BL. Thirdly, the variational Bayesian method is introduced to simultaneously estimate the system state vector, BM, BL, the intermediate random variables, the mixing probability, and the UTVMLP. Finally, a numerical simulation experiment reveals that the proposed filter achieves better estimation accuracy, at the cost of a longer run time, than existing filtering algorithms in the scenarios of NSHTMN and UTVMLP.
The contributions of this paper are as follows:
(a)
By employing a Bernoulli-distributed variable, the NSHTMN is modelled as a Gaussian-Student’s t-mixture distribution;
(b)
The measurement likelihood function is converted from the weight sum of two mixture distributions to an exponential product and a new hierarchical Gaussian state-space model is therefore derived;
(c)
The system state vector, UTVMLP, and the unknown variables are simultaneously estimated by utilizing the variational Bayesian technique;
(d)
Numerical simulation results indicate that the proposed filter performs better than existing algorithms in the scenarios of NSHTMN and UTVMLP.

2. Problem Formulation

Consider the linear stochastic system with the following state and measurement equations:
$x_t = F_{t-1} x_{t-1} + e_{t-1}$ (1)
$y_t = b_t H_t x_t + g_t$ (2)
where $x_t \in \mathbb{R}^m$ denotes the system state vector; $F_{t-1} \in \mathbb{R}^{m \times m}$ denotes the state transition matrix; $e_t \in \mathbb{R}^m$ represents the Gaussian-distributed white process noise vector with zero mean and covariance matrix $Q_t$; $y_t \in \mathbb{R}^n$ represents the measurement vector; $H_t \in \mathbb{R}^{n \times m}$ is the measurement matrix; $g_t \in \mathbb{R}^n$ is the white NSHTMN vector; and $t$ represents the discrete-time index. The phenomenon of measurement loss is described by introducing the identically distributed, mutually uncorrelated Bernoulli random variable (BL) $b_t$, which is characterized by the following equations.
$p(b_t = 1) = \mathrm{E}[b_t] = 1 - \gamma_t$ (3)
$p(b_t = 0) = 1 - \mathrm{E}[b_t] = \gamma_t$ (4)
where $\gamma_t \in [0, 1]$ denotes the time-varying measurement loss probability. Note that the value of $\gamma_t$ is unknown in this paper. The initial system state vector $x_0$ is Gaussian-distributed with mean $\hat{x}_{0|0} = 0$ and covariance matrix $P_{0|0}$. Additionally, it is assumed that the initial system state vector $x_0$, the noise vectors $e_{t-1}$ and $g_t$, and the Bernoulli random variable $b_t$ are mutually independent.
It can be seen from Equations (1)–(4) that the ideal measurement is received by the sensor when $b_t = 1$, whereas the measurement loss with UTVMLP occurs when $b_t = 0$. Meanwhile, the measurement noise is NSHTMN due to measurement outliers; that is, when the system runs healthily, the measurement noise is Gaussian-distributed, and when measurement outliers corrupt the system, the measurement noise is heavy-tailed. The NSHTMN and UTVMLP can result in estimation errors or even filtering divergence. Therefore, a new variational Bayesian-based Kalman filter accounting for NSHTMN and UTVMLP is proposed.
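The measurement model in Equations (1)–(4) can be exercised with a short simulation; the matrices and loss probability below are illustrative values chosen for this sketch, not the paper's simulation settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative model matrices (assumed values, not the paper's setup)
F = np.array([[0.6, 0.4], [0.1, 0.9]])   # state transition F_{t-1}
H = np.array([[1.0, 2.0]])               # measurement matrix H_t
Q = np.eye(2)                            # process noise covariance Q_t
R = np.array([[1.0]])                    # nominal measurement noise covariance R_t
gamma = 0.2                              # measurement loss probability gamma_t

def simulate(T=1000):
    """Generate (x, y, b) from Eqs. (1)-(4): y_t = b_t H_t x_t + g_t."""
    x = np.zeros(2)
    xs, ys, bs = [], [], []
    for _ in range(T):
        x = F @ x + rng.multivariate_normal(np.zeros(2), Q)
        b = rng.random() > gamma            # b_t = 1 w.p. 1 - gamma (measurement received)
        g = rng.multivariate_normal(np.zeros(1), R)
        y = (H @ x if b else np.zeros(1)) + g   # pure noise when the measurement is lost
        xs.append(x); ys.append(y); bs.append(b)
    return np.array(xs), np.array(ys), np.array(bs)

xs, ys, bs = simulate()
```

The empirical fraction of received measurements should stay close to $1 - \gamma_t$.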

3. Proposed Variational Bayesian-Based Kalman Filter

In this section, a new variational Bayesian-based Kalman filter is proposed to address the filtering issue for a linear system with NSHTMN and UTVMLP. Firstly, the Gaussian-Student’s t-mixture distribution is utilized to model the NSHTMN and the hierarchical form is derived. Secondly, by converting the measurement likelihood function into an exponential multiplication, a new hierarchical Gaussian state-space model is established. Thirdly, by using the variational Bayesian method, the system state and unknown variables are simultaneously estimated. Finally, the required mathematical expectations are given.

3.1. Gaussian-Student’s t-Mixture Distribution

The NSHTMN vector can be modeled as a Gaussian-Student's t-mixture distribution by employing another Bernoulli random variable, the mixing variable (BM) $\zeta_t$; the probability density function (PDF) $p(g_t)$ is given as
$p(g_t) = \sum_{\zeta_t = 0}^{1} \int [\mathrm{N}(g_t; 0, R_t)]^{\zeta_t} [\mathrm{ST}(g_t; 0, R_t, \mu)]^{1-\zeta_t} p(\zeta_t | \varphi_t) p(\varphi_t) \, d\varphi_t, \quad \zeta_t \in \{0, 1\}$ (5)
where $\mathrm{N}(x; 0, \Sigma)$ represents the Gaussian PDF with zero mean vector and covariance matrix $\Sigma$, and $\mathrm{ST}(x; 0, \Upsilon, \omega)$ represents the Student's t PDF with zero mean vector, covariance matrix $\Upsilon$, and degree of freedom (dof) parameter $\omega$. $R_t$ represents the covariance matrix of the nominal measurement noise. The PDF of the mixing probability $\varphi_t$ and the probability mass function (PMF) of $\zeta_t$ are defined, respectively, as
$p(\varphi_t) = \mathrm{Be}(\varphi_t; h_0, 1 - h_0)$ (6)
$p(\zeta_t | \varphi_t) = \varphi_t^{\zeta_t} (1 - \varphi_t)^{1 - \zeta_t}$ (7)
where $\mathrm{Be}(x; \sigma, \kappa)$ represents the Beta PDF with shape parameters $\sigma$ and $\kappa$.
Due to the hierarchical property of the Student's t-distribution, Equation (5) can be rewritten as
$p(g_t) = \sum_{\zeta_t = 0}^{1} \int \int_0^{+\infty} [\mathrm{N}(g_t; 0, R_t)]^{\zeta_t} [\mathrm{N}(g_t; 0, R_t/\beta_t)]^{1-\zeta_t} p(\beta_t) p(\zeta_t | \varphi_t) p(\varphi_t) \, d\beta_t \, d\varphi_t$ (8)
$p(\beta_t) = \mathrm{G}(\beta_t; \mu/2, \mu/2)$ (9)
where $\mathrm{G}(x; a, b)$ represents the Gamma PDF with shape parameter $a$ and rate parameter $b$, and $\beta_t$ represents the intermediate random variable.
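The hierarchical form in Equations (8) and (9) can be checked numerically: drawing $\beta_t$ from the Gamma prior and then $g_t$ from the scaled Gaussian should reproduce the Student's t variance $\mu R/(\mu - 2)$ for $\mu > 2$. A minimal sketch with scalar noise and illustrative values:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, R, n = 5.0, 2.0, 400_000   # dof, scalar noise variance, sample count (illustrative)

# Hierarchical form of Eqs. (8)-(9): beta_t ~ Gamma(mu/2, mu/2) in the *rate*
# parameterization (NumPy's `gamma` takes a scale, so scale = 2/mu), then
# g_t | beta_t ~ N(0, R / beta_t) marginalizes to ST(g_t; 0, R, mu).
beta = rng.gamma(shape=mu / 2, scale=2 / mu, size=n)
g = rng.normal(0.0, np.sqrt(R / beta))

# A Student's t with dof mu > 2 and scale matrix R has variance mu/(mu-2) * R.
empirical_var = g.var()
theoretical_var = mu / (mu - 2) * R
```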

3.2. New Hierarchical Gaussian State-Space Model (HGSSM)

Based on Equations (2)–(4), the measurement likelihood PDF can be derived as follows.
$p(y_t | x_t, \gamma_t) = \sum_{b_t = 0}^{1} p(y_t, b_t | x_t, \gamma_t) = p(y_t | x_t, b_t = 1) p(b_t = 1) + p(y_t | x_t, b_t = 0) p(b_t = 0) = (1 - \gamma_t) p(y_t | x_t, b_t = 1) + \gamma_t p(y_t | x_t, b_t = 0)$ (10)
$p(y_t | x_t, b_t = 1) = p_{g_t}(y_t - H_t x_t)$ (11)
$p(y_t | x_t, b_t = 0) = p_{g_t}(y_t)$ (12)
where $p_{g_t}(\cdot)$ represents the measurement noise PDF. Substituting Equations (11) and (12) into Equation (10) results in
$p(y_t | x_t, \gamma_t) = (1 - \gamma_t) p_{g_t}(y_t - H_t x_t) + \gamma_t p_{g_t}(y_t)$ (13)
Remark 1.
The measurement likelihood PDF in Equation (13) has an unclosed and unconjugated weighted-sum form, so the system state vector and unknown parameters cannot be inferred directly by the variational Bayesian method. The weighted sum will therefore be converted into an exponential multiplication form to address this problem.
The PMF of BL b t is given as
$p(b_t | \gamma_t) = (1 - \gamma_t)^{b_t} \gamma_t^{1 - b_t}$ (14)
Exploiting Equations (13) and (14), the measurement likelihood PDF is reformulated as
$p(y_t | x_t, \gamma_t) = \sum_{b_t = 0}^{1} p(y_t | x_t, b_t) p(b_t | \gamma_t) = \sum_{b_t = 0}^{1} (1 - \gamma_t)^{b_t} [p_{g_t}(y_t - H_t x_t)]^{b_t} \gamma_t^{1 - b_t} [p_{g_t}(y_t)]^{1 - b_t} = \sum_{b_t = 0}^{1} [p_{g_t}(y_t - H_t x_t)]^{b_t} [p_{g_t}(y_t)]^{1 - b_t} p(b_t | \gamma_t)$ (15)
According to Equation (15), the exponential multiplication-formed likelihood PDF $p(y_t | x_t, b_t)$ is given as follows.
$p(y_t | x_t, b_t) = [p_{g_t}(y_t - H_t x_t)]^{b_t} [p_{g_t}(y_t)]^{1 - b_t}$ (16)
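The equivalence between the weighted-sum form (13) and the exponential-product form (15)–(16) is easy to verify numerically for a scalar Gaussian noise model; the values below are illustrative:

```python
import numpy as np

def gauss_pdf(y, mean, var):
    """Scalar Gaussian density N(y; mean, var)."""
    return np.exp(-0.5 * (y - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

# Illustrative scalar values (not from the paper's simulation).
y, Hx, var, gamma = 1.3, 0.9, 2.0, 0.25

# Weighted-sum form, Eq. (13): (1-gamma) p_g(y - Hx) + gamma p_g(y)
weighted_sum = (1 - gamma) * gauss_pdf(y, Hx, var) + gamma * gauss_pdf(y, 0.0, var)

# Exponential-product form, Eqs. (14)-(16): marginalize b_t over {0, 1}
product_form = sum(
    gauss_pdf(y, Hx, var) ** b * gauss_pdf(y, 0.0, var) ** (1 - b)
    * (1 - gamma) ** b * gamma ** (1 - b)
    for b in (0, 1)
)
```

Both forms give the same density value; only the product form is amenable to conjugate variational updates.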
Remark 2.
The variational Bayesian method requires suitable conjugate prior distributions for the unknown variables. Therefore, appropriate prior PDFs are selected to construct a new HGSSM.
The one-step predicted PDF $p(x_t | y_{1:t-1})$ of the system state vector $x_t$ is assumed to be Gaussian-distributed as follows.
$p(x_t | y_{1:t-1}) = \mathrm{N}(x_t; \hat{x}_{t|t-1}, P_{t|t-1})$ (17)
where $\hat{x}_{t|t-1}$ represents the mean vector and $P_{t|t-1}$ represents the covariance matrix. Both $\hat{x}_{t|t-1}$ and $P_{t|t-1}$ can be updated by the time update of the typical Kalman filter, which is given as
$\hat{x}_{t|t-1} = F_{t-1} \hat{x}_{t-1|t-1}$ (18)
$P_{t|t-1} = F_{t-1} P_{t-1|t-1} F_{t-1}^{T} + Q_{t-1}$ (19)
By employing Equations (8), (9) and (16), the conditional likelihood PDF $p(y_t | x_t, b_t, \beta_t, \zeta_t)$ is derived as
$p(y_t | x_t, b_t, \beta_t, \zeta_t) = [\mathrm{N}(y_t; H_t x_t, R_t)^{\zeta_t} \mathrm{N}(y_t; H_t x_t, R_t/\beta_t)^{1-\zeta_t}]^{b_t} \times [\mathrm{N}(y_t; 0, R_t)^{\zeta_t} \mathrm{N}(y_t; 0, R_t/\beta_t)^{1-\zeta_t}]^{1-b_t} = \mathrm{N}(y_t; H_t x_t, R_t)^{\zeta_t b_t} \mathrm{N}(y_t; H_t x_t, R_t/\beta_t)^{(1-\zeta_t) b_t} \times \mathrm{N}(y_t; 0, R_t)^{\zeta_t (1-b_t)} \mathrm{N}(y_t; 0, R_t/\beta_t)^{(1-\zeta_t)(1-b_t)}$ (20)
It can be seen from Equations (6)–(9), (13) and (20) that the measurement vector $y_t$ depends on the system state vector $x_t$, the intermediate random variable $\beta_t$, BM $\zeta_t$, BL $b_t$, the mixing probability $\varphi_t$, and the measurement loss probability $\gamma_t$. The following joint prior PDF must be calculated, i.e.,
$p(\Xi | y_{1:t-1}) = p(x_t | y_{1:t-1}) p(\beta_t) p(b_t | \gamma_t) p(\zeta_t | \varphi_t) p(\varphi_t) p(\gamma_t | y_{1:t-1}), \quad \Xi \triangleq \{x_t, \beta_t, b_t, \zeta_t, \varphi_t, \gamma_t\}$ (21)
where the definitions of $p(\varphi_t)$, $p(\zeta_t | \varphi_t)$, $p(\beta_t)$, $p(b_t | \gamma_t)$, and $p(x_t | y_{1:t-1})$ are given in Equations (6), (7), (9), (14) and (17), respectively. Additionally, $p(\gamma_t | y_{1:t-1})$ denotes the prior PDF of the time-varying measurement loss probability, which can be assumed to follow the Beta distribution
$p(\gamma_t | y_{1:t-1}) = \mathrm{Be}(\gamma_t; \hat{\eta}_{t|t-1}, \hat{\delta}_{t|t-1}) = \int p(\gamma_{t-1} | y_{1:t-1}) \, p(\gamma_t | \gamma_{t-1}) \, d\gamma_{t-1}$ (22)
where the shape parameters $\hat{\eta}_{t|t-1}$ and $\hat{\delta}_{t|t-1}$ can be calculated by introducing the forgetting factor $\rho \in (0, 1]$ as follows.
$\hat{\eta}_{t|t-1} = \rho \hat{\eta}_{t-1|t-1}$ (23)
$\hat{\delta}_{t|t-1} = \rho \hat{\delta}_{t-1|t-1}$ (24)
where $\hat{\eta}_{t-1|t-1}$ and $\hat{\delta}_{t-1|t-1}$ represent the posterior shape parameters.
The proposed new HGSSM comprises Equations (14) and (17)–(24). The system state vector $x_t$, the intermediate random variable $\beta_t$, BM $\zeta_t$, BL $b_t$, the mixing probability $\varphi_t$, and the measurement loss probability $\gamma_t$ will be simultaneously estimated by utilizing the variational Bayesian method.

3.3. Variational Bayesian Approximation of the Joint Posterior PDFs

To estimate the unknown variables of the new HGSSM, the joint posterior PDF $p(\Xi | y_{1:t})$ with $\Xi \triangleq \{x_t, \beta_t, b_t, \zeta_t, \varphi_t, \gamma_t\}$ needs to be solved. However, the analytical solution of $p(\Xi | y_{1:t})$ is not accessible. The variational Bayesian approach is therefore employed to determine an approximate PDF for $p(\Xi | y_{1:t})$ and to compute an approximate solution [15,16,17], i.e.,
$p(\Xi | y_{1:t}) \approx q_a(x_t) q_a(\beta_t) q_a(b_t) q_a(\zeta_t) q_a(\varphi_t) q_a(\gamma_t)$ (25)
where $\theta$ represents an arbitrary element of $\Xi$ and $q_a(\theta)$ denotes the approximate PDF or PMF. By minimizing the Kullback–Leibler divergence (KLD) between the product $q_a(x_t) q_a(\beta_t) q_a(b_t) q_a(\zeta_t) q_a(\varphi_t) q_a(\gamma_t)$ and $p(\Xi | y_{1:t})$, $q_a(\theta)$ can be obtained as follows.
$\{q_a(x_t), q_a(\beta_t), q_a(b_t), q_a(\zeta_t), q_a(\varphi_t), q_a(\gamma_t)\} = \arg\min \mathrm{KLD}\big(q_a(x_t) q_a(\beta_t) q_a(b_t) q_a(\zeta_t) q_a(\varphi_t) q_a(\gamma_t) \,\|\, p(\Xi | y_{1:t})\big)$ (26)
$\mathrm{KLD}(q_a(A) \,\|\, p(A)) \triangleq \int q_a(A) \log \frac{q_a(A)}{p(A)} \, dA$ (27)
where $\mathrm{KLD}(q_a(A) \,\|\, p(A))$ represents the KLD between $q_a(A)$ and $p(A)$, and the optimal solution of Equation (26) can be calculated via the following formula [15,17].
$\log q_a(\theta) = \mathrm{E}_{\Xi^{-\theta}}[\log p(\Xi, y_{1:t})] + c_\theta$ (28)
where $\mathrm{E}[\cdot]$ denotes the mathematical expectation operation, $\Xi^{-\theta}$ signifies the set of all elements in $\Xi$ except $\theta$, and $c_\theta$ denotes a constant with respect to $\theta$. Additionally, since the estimated parameters are mutually coupled, the fixed-point iteration technique is utilized to derive the approximate form of $q_a(\theta)$.
The joint PDF $p(\Xi, y_{1:t})$ in Equation (28) can be derived as
$p(\Xi, y_{1:t}) = \mathrm{N}(y_t; H_t x_t, R_t)^{\zeta_t b_t} \mathrm{N}(y_t; H_t x_t, R_t/\beta_t)^{(1-\zeta_t) b_t} \times \mathrm{N}(y_t; 0, R_t)^{\zeta_t (1-b_t)} \mathrm{N}(y_t; 0, R_t/\beta_t)^{(1-\zeta_t)(1-b_t)} \mathrm{N}(x_t; \hat{x}_{t|t-1}, P_{t|t-1}) \times \mathrm{G}(\beta_t; \mu_t/2, \mu_t/2) \varphi_t^{\zeta_t} (1-\varphi_t)^{1-\zeta_t} (1-\gamma_t)^{b_t} \gamma_t^{1-b_t} \times \mathrm{Be}(\varphi_t; h_0, 1-h_0) \mathrm{Be}(\gamma_t; \hat{\eta}_{t|t-1}, \hat{\delta}_{t|t-1}) p(y_{1:t-1})$ (29)
Proposition 1.
Let $\theta = x_t$; by using Equation (29) in Equation (28), $q_a^{(s+1)}(x_t)$ can be updated as Gaussian, i.e.,
$q_a^{(s+1)}(x_t) = \mathrm{N}(x_t; \hat{x}_{t|t}^{(s+1)}, P_{t|t}^{(s+1)})$ (30)
where $q_a^{(s+1)}(\cdot)$ represents the approximate PDF in the $(s+1)$th iteration, while the mean vector $\hat{x}_{t|t}^{(s+1)}$ and the covariance matrix $P_{t|t}^{(s+1)}$ are updated in accordance with the traditional Kalman filter as follows.
$\hat{x}_{t|t}^{(s+1)} = \hat{x}_{t|t-1} + K_t^{(s+1)} (y_t - H_t \hat{x}_{t|t-1})$ (31)
$P_{t|t}^{(s+1)} = P_{t|t-1} - K_t^{(s+1)} H_t P_{t|t-1}$ (32)
$K_t^{(s+1)} = P_{t|t-1} H_t^T (H_t P_{t|t-1} H_t^T + \tilde{R}_t^{(s+1)})^{-1}$ (33)
where $K_t^{(s+1)}$ represents the Kalman gain matrix. The modified measurement noise covariance matrix at the $(s+1)$th iteration, $\tilde{R}_t^{(s+1)}$, is formulated as
$\tilde{R}_t^{(s+1)} = \dfrac{R_t}{\mathrm{E}^{(s)}[\zeta_t] \mathrm{E}^{(s)}[b_t] + \mathrm{E}^{(s)}[1-\zeta_t] \mathrm{E}^{(s)}[b_t] \mathrm{E}^{(s)}[\beta_t]}$ (34)
where $\mathrm{E}^{(s)}[\cdot]$ represents the mathematical expectation of a variable in the $s$th iteration.
Proof: see Appendix A.
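As a sketch of Proposition 1, the modified measurement update is only a few lines; Eq. (34) is read here as dividing $R_t$ by the combined precision weight, and the expectation values and matrices below are illustrative, not from the paper:

```python
import numpy as np

# Expectations from a hypothetical previous VB iteration (illustrative values).
E_zeta, E_b, E_beta = 0.7, 0.9, 0.2   # E[zeta_t], E[b_t], E[beta_t]

H = np.array([[1.0, 2.0]])            # measurement matrix
R = np.array([[150.0]])               # nominal noise covariance
P_pred = np.eye(2) * 4.0              # P_{t|t-1}
x_pred = np.array([1.0, -0.5])        # x_hat_{t|t-1}
y = np.array([0.3])

# Eq. (34): R_tilde = R / (E[zeta]E[b] + E[1-zeta]E[b]E[beta])
R_tilde = R / (E_zeta * E_b + (1 - E_zeta) * E_b * E_beta)

# Eqs. (31)-(33): standard KF measurement update with R_tilde in place of R
S = H @ P_pred @ H.T + R_tilde
K = P_pred @ H.T @ np.linalg.inv(S)
x_post = x_pred + K @ (y - H @ x_pred)
P_post = P_pred - K @ H @ P_pred
```

Note that $\tilde{R}_t$ inflates the nominal covariance when the measurement is likely lost or heavy-tailed, which automatically down-weights the Kalman gain.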
Proposition 2.
Let $\theta = \beta_t$; by using Equation (29) in Equation (28), $q_a^{(s+1)}(\beta_t)$ can be updated as Gamma-distributed, i.e.,
$q_a^{(s+1)}(\beta_t) = \mathrm{G}(\beta_t; \pi_t^{(s+1)}, \nu_t^{(s+1)})$ (35)
where the shape parameter $\pi_t^{(s+1)}$ and rate parameter $\nu_t^{(s+1)}$ are formulated as
$\pi_t^{(s+1)} = 0.5 (n \mathrm{E}^{(s+1)}[1-\zeta_t] + \mu_t)$ (36)
$\nu_t^{(s+1)} = 0.5 [\mathrm{tr}(G_t^{(s+1)} R_t^{-1}) + \mu_t]$ (37)
where $n$ represents the dimension of the measurement vector, $\mathrm{tr}(\cdot)$ represents the trace operation, and $G_t^{(s+1)}$ is defined as
$G_t^{(s+1)} = \mathrm{E}^{(s)}[1-\zeta_t] \{ \mathrm{E}^{(s)}[b_t] [H_t P_{t|t}^{(s+1)} H_t^T + (y_t - H_t \hat{x}_{t|t}^{(s+1)})(y_t - H_t \hat{x}_{t|t}^{(s+1)})^T] + \mathrm{E}^{(s)}[1-b_t] y_t y_t^T \}$ (38)
Proof: see Appendix B.
Proposition 3.
Let $\theta = b_t$; by using Equation (29) in Equation (28), $q_a^{(s+1)}(b_t)$ can be updated as a Bernoulli distribution. The posterior probabilities $p(b_t = 1)$ and $p(b_t = 0)$ of BL $b_t$ are given as
$p^{(s+1)}(b_t = 1) \propto \exp\{\mathrm{E}^{(s)}[\log(1-\gamma_t)] - 0.5 \mathrm{tr}(C_t^{(s+1)} R_t^{-1}) + 0.5 n \mathrm{E}^{(s)}[1-\zeta_t] \mathrm{E}^{(s)}[\log(\beta_t)]\}$ (39)
$p^{(s+1)}(b_t = 0) \propto \exp\{\mathrm{E}^{(s)}[\log(\gamma_t)] - 0.5 \mathrm{tr}(D_t^{(s+1)} R_t^{-1}) + 0.5 n \mathrm{E}^{(s)}[1-\zeta_t] \mathrm{E}^{(s)}[\log(\beta_t)]\}$ (40)
where the two probabilities share a common normalizing constant and the parameters $C_t^{(s+1)}$ and $D_t^{(s+1)}$ are, respectively, defined as
$C_t^{(s+1)} = \{\mathrm{E}^{(s)}[\zeta_t] + \mathrm{E}^{(s)}[\beta_t] \mathrm{E}^{(s)}[1-\zeta_t]\} [H_t P_{t|t}^{(s+1)} H_t^T + (y_t - H_t \hat{x}_{t|t}^{(s+1)})(y_t - H_t \hat{x}_{t|t}^{(s+1)})^T]$ (41)
$D_t^{(s+1)} = \{\mathrm{E}^{(s)}[\zeta_t] + \mathrm{E}^{(s)}[\beta_t] \mathrm{E}^{(s)}[1-\zeta_t]\} y_t y_t^T$ (42)
Proof: see Appendix C.
Proposition 4.
Let $\theta = \zeta_t$; by using Equation (29) in Equation (28), $q_a^{(s+1)}(\zeta_t)$ can also be updated as a Bernoulli distribution. The posterior probabilities $p(\zeta_t = 1)$ and $p(\zeta_t = 0)$ of BM $\zeta_t$ are given as
$p^{(s+1)}(\zeta_t = 1) \propto \exp\{\mathrm{E}^{(s)}[\log(\varphi_t)] - 0.5 \mathrm{tr}(V_t^{(s+1)} R_t^{-1})\}$ (43)
$p^{(s+1)}(\zeta_t = 0) \propto \exp\{\mathrm{E}^{(s)}[\log(1-\varphi_t)] - 0.5 \mathrm{tr}(W_t^{(s+1)} R_t^{-1}) + 0.5 n \mathrm{E}^{(s)}[\log(\beta_t)]\}$ (44)
where the two probabilities again share a common normalizing constant and the parameters $V_t^{(s+1)}$ and $W_t^{(s+1)}$ are, respectively, defined as
$V_t^{(s+1)} = \mathrm{E}^{(s)}[b_t] [H_t P_{t|t}^{(s+1)} H_t^T + (y_t - H_t \hat{x}_{t|t}^{(s+1)})(y_t - H_t \hat{x}_{t|t}^{(s+1)})^T] + \mathrm{E}^{(s)}[1-b_t] y_t y_t^T$ (45)
$W_t^{(s+1)} = \mathrm{E}^{(s)}[b_t] \mathrm{E}^{(s)}[\beta_t] [H_t P_{t|t}^{(s+1)} H_t^T + (y_t - H_t \hat{x}_{t|t}^{(s+1)})(y_t - H_t \hat{x}_{t|t}^{(s+1)})^T] + \mathrm{E}^{(s)}[1-b_t] \mathrm{E}^{(s)}[\beta_t] y_t y_t^T$ (46)
Proof: see Appendix D.
Proposition 5.
Let $\theta = \varphi_t$; by using Equation (29) in Equation (28), $q_a^{(s+1)}(\varphi_t)$ can be updated as a Beta distribution, i.e.,
$q_a^{(s+1)}(\varphi_t) = \mathrm{Be}(\varphi_t; h_t^{(s+1)}, d_t^{(s+1)})$ (47)
where the shape parameters $h_t^{(s+1)}$ and $d_t^{(s+1)}$ are formulated as
$h_t^{(s+1)} = \mathrm{E}^{(s+1)}[\zeta_t] + h_0$ (48)
$d_t^{(s+1)} = \mathrm{E}^{(s+1)}[1-\zeta_t] + 1 - h_0$ (49)
Proof: see Appendix E.
Proposition 6.
Let $\theta = \gamma_t$; by using Equation (29) in Equation (28), $q_a^{(s+1)}(\gamma_t)$ can also be updated as a Beta distribution, i.e.,
$q_a^{(s+1)}(\gamma_t) = \mathrm{Be}(\gamma_t; \hat{\eta}_t^{(s+1)}, \hat{\delta}_t^{(s+1)})$ (50)
where the shape parameters $\hat{\eta}_t^{(s+1)}$ and $\hat{\delta}_t^{(s+1)}$ are given as
$\hat{\eta}_t^{(s+1)} = \mathrm{E}^{(s+1)}[1-b_t] + \hat{\eta}_{t|t-1}$ (51)
$\hat{\delta}_t^{(s+1)} = \mathrm{E}^{(s+1)}[b_t] + \hat{\delta}_{t|t-1}$ (52)
Proof: see Appendix F.

3.4. Calculation of the Required Mathematical Expectations

The required mathematical expectations $\mathrm{E}^{(s+1)}[\beta_t]$, $\mathrm{E}^{(s+1)}[\log(\beta_t)]$, $\mathrm{E}^{(s+1)}[b_t]$, $\mathrm{E}^{(s+1)}[1-b_t]$, $\mathrm{E}^{(s+1)}[\zeta_t]$, $\mathrm{E}^{(s+1)}[1-\zeta_t]$, $\mathrm{E}^{(s+1)}[\log(\varphi_t)]$, $\mathrm{E}^{(s+1)}[\log(1-\varphi_t)]$, $\mathrm{E}^{(s+1)}[\log(\gamma_t)]$, and $\mathrm{E}^{(s+1)}[\log(1-\gamma_t)]$ in Section 3.3 are given, respectively, as follows:
$\mathrm{E}^{(s+1)}[\beta_t] = \pi_t^{(s+1)} / \nu_t^{(s+1)}$ (53)
$\mathrm{E}^{(s+1)}[\log(\beta_t)] = \Psi(\pi_t^{(s+1)}) - \log(\nu_t^{(s+1)})$ (54)
$\mathrm{E}^{(s+1)}[b_t] = \dfrac{p^{(s+1)}(b_t = 1)}{p^{(s+1)}(b_t = 1) + p^{(s+1)}(b_t = 0)}$ (55)
$\mathrm{E}^{(s+1)}[1-b_t] = 1 - \mathrm{E}^{(s+1)}[b_t]$ (56)
$\mathrm{E}^{(s+1)}[\zeta_t] = \dfrac{p^{(s+1)}(\zeta_t = 1)}{p^{(s+1)}(\zeta_t = 1) + p^{(s+1)}(\zeta_t = 0)}$ (57)
$\mathrm{E}^{(s+1)}[1-\zeta_t] = 1 - \mathrm{E}^{(s+1)}[\zeta_t]$ (58)
$\mathrm{E}^{(s+1)}[\log(\varphi_t)] = \Psi(h_t^{(s+1)}) - \Psi(h_t^{(s+1)} + d_t^{(s+1)})$ (59)
$\mathrm{E}^{(s+1)}[\log(1-\varphi_t)] = \Psi(d_t^{(s+1)}) - \Psi(h_t^{(s+1)} + d_t^{(s+1)})$ (60)
$\mathrm{E}^{(s+1)}[\log(\gamma_t)] = \Psi(\hat{\eta}_t^{(s+1)}) - \Psi(\hat{\eta}_t^{(s+1)} + \hat{\delta}_t^{(s+1)})$ (61)
$\mathrm{E}^{(s+1)}[\log(1-\gamma_t)] = \Psi(\hat{\delta}_t^{(s+1)}) - \Psi(\hat{\eta}_t^{(s+1)} + \hat{\delta}_t^{(s+1)})$ (62)
where Ψ ( · ) represents the digamma function [18].
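These expectations reduce to one-line formulas; a minimal sketch using SciPy's digamma, with illustrative parameter values:

```python
import numpy as np
from scipy.special import digamma

# Posterior shape/rate parameters from one hypothetical VB iteration.
pi_t, nu_t = 3.0, 2.0        # Gamma parameters of q(beta_t)
h_t, d_t = 1.5, 0.6          # Beta parameters of q(phi_t)
eta_t, delta_t = 4.0, 6.0    # Beta parameters of q(gamma_t)

E_beta = pi_t / nu_t                                   # mean of a Gamma(shape, rate)
E_log_beta = digamma(pi_t) - np.log(nu_t)              # E[log beta_t]
E_log_phi = digamma(h_t) - digamma(h_t + d_t)          # E[log phi_t] under Beta
E_log_1m_phi = digamma(d_t) - digamma(h_t + d_t)       # E[log(1 - phi_t)]
E_log_gamma = digamma(eta_t) - digamma(eta_t + delta_t)
E_log_1m_gamma = digamma(delta_t) - digamma(eta_t + delta_t)
```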
The presented variational Bayesian-based Kalman filter with UTVMLP and NSHTMN consists of Equations (18), (19) and (30)–(62). Table 1 describes the implementation of the proposed new KF.
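The per-time-step procedure summarized in Table 1 can be sketched as follows, combining the time update (18)–(19), the forgetting-factor propagation (23)–(24), and the fixed-point iteration over Propositions 1–6. This is an illustrative sketch under our own initialization choices and variable names, not the authors' MATLAB implementation:

```python
import numpy as np
from scipy.special import digamma

def vb_step(y, x_prev, P_prev, F, H, Q, R, eta, delta,
            mu=5.0, h0=0.85, rho=0.99, n_iter=10):
    """One time step of the proposed VB filter (a sketch of Propositions 1-6)."""
    n = y.shape[0]
    x_pred = F @ x_prev                        # Eq. (18)
    P_pred = F @ P_prev @ F.T + Q              # Eq. (19)
    eta, delta = rho * eta, rho * delta        # Eqs. (23)-(24)

    # Initial expectations taken at the prior means (our choice).
    E_zeta = h0
    E_b = delta / (eta + delta)                # p(b_t = 1) = 1 - E[gamma_t]
    E_beta, E_log_beta = 1.0, digamma(mu / 2) - np.log(mu / 2)
    E_log_phi = digamma(h0) - digamma(1.0)
    E_log_1mphi = digamma(1 - h0) - digamma(1.0)
    E_log_gam = digamma(eta) - digamma(eta + delta)
    E_log_1mgam = digamma(delta) - digamma(eta + delta)
    Rinv = np.linalg.inv(R)

    for _ in range(n_iter):
        # Proposition 1: Kalman update with modified covariance, Eq. (34)
        R_tilde = R / (E_zeta * E_b + (1 - E_zeta) * E_b * E_beta)
        K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R_tilde)
        x_post = x_pred + K @ (y - H @ x_pred)
        P_post = P_pred - K @ H @ P_pred
        A = H @ P_post @ H.T + np.outer(y - H @ x_post, y - H @ x_post)
        B = np.outer(y, y)
        # Proposition 2: Gamma update of q(beta_t), Eqs. (36)-(38)
        G = (1 - E_zeta) * (E_b * A + (1 - E_b) * B)
        pi_t = 0.5 * (n * (1 - E_zeta) + mu)
        nu_t = 0.5 * (np.trace(G @ Rinv) + mu)
        E_beta, E_log_beta = pi_t / nu_t, digamma(pi_t) - np.log(nu_t)
        # Proposition 3: Bernoulli update of q(b_t), Eqs. (39)-(42)
        scale = E_zeta + E_beta * (1 - E_zeta)
        lp1 = E_log_1mgam - 0.5 * np.trace(scale * A @ Rinv) \
              + 0.5 * n * (1 - E_zeta) * E_log_beta
        lp0 = E_log_gam - 0.5 * np.trace(scale * B @ Rinv) \
              + 0.5 * n * (1 - E_zeta) * E_log_beta
        E_b = 1.0 / (1.0 + np.exp(lp0 - lp1))
        # Proposition 4: Bernoulli update of q(zeta_t), Eqs. (43)-(46)
        V = E_b * A + (1 - E_b) * B
        lz1 = E_log_phi - 0.5 * np.trace(V @ Rinv)
        lz0 = E_log_1mphi - 0.5 * np.trace(E_beta * V @ Rinv) + 0.5 * n * E_log_beta
        E_zeta = 1.0 / (1.0 + np.exp(lz0 - lz1))
        # Proposition 5: Beta update of q(phi_t), Eqs. (48)-(49)
        h_t, d_t = E_zeta + h0, (1 - E_zeta) + 1 - h0
        E_log_phi = digamma(h_t) - digamma(h_t + d_t)
        E_log_1mphi = digamma(d_t) - digamma(h_t + d_t)
        # Proposition 6: Beta update of q(gamma_t), Eqs. (51)-(52)
        eta_post, delta_post = (1 - E_b) + eta, E_b + delta
        E_log_gam = digamma(eta_post) - digamma(eta_post + delta_post)
        E_log_1mgam = digamma(delta_post) - digamma(eta_post + delta_post)

    return x_post, P_post, eta_post, delta_post

# Toy invocation with illustrative matrices; y is close to H F x_prev,
# i.e. it looks like a received (non-lost) measurement.
F = np.array([[0.6, 0.4], [0.1, 0.9]])
H = np.array([[1.0, 2.0]])
Q, R = np.eye(2), np.array([[1.0]])
x_prev, P_prev = np.array([10.0, 5.0]), np.eye(2)
y = np.array([19.3])
x_post, P_post, eta_post, delta_post = vb_step(y, x_prev, P_prev, F, H, Q, R,
                                               eta=5.0, delta=5.0)
```

The iteration order of the six updates within one fixed-point sweep is one reasonable choice; the paper's Table 1 fixes the exact schedule and stopping threshold $\varsigma$.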

4. Simulations

To demonstrate the superiority of the presented filter in the scenario with UTVMLP and NSHTMN, a numerical example is simulated. The state and measurement equations of the stochastic system are, respectively, given as [7]
$x_t = \begin{bmatrix} 0.6 & 0.4 \\ 0.1 & 0.9 \end{bmatrix} x_{t-1} + e_{t-1}$ (63)
$y_t = b_t \begin{bmatrix} 1 & 2 \end{bmatrix} x_t + g_t$ (64)
where the Gaussian process noise $e_{t-1}$ and the NSHTMN $g_t$ are given as [12]
$e_{t-1} \sim \mathrm{N}(0, Q_t)$ (65)
$g_t \sim \mathrm{N}(0, R_t)$ for $t \in [1, 100]$ (Gaussian); $g_t \sim \mathrm{N}(0, R_t)$ w.p. 0.98 and $g_t \sim \mathrm{N}(0, 500 R_t)$ w.p. 0.02 for $t \in [101, 200]$ (slightly heavy-tailed); $g_t \sim \mathrm{N}(0, R_t)$ w.p. 0.95 and $g_t \sim \mathrm{N}(0, 500 R_t)$ w.p. 0.05 for $t \in [201, 300]$ (moderately heavy-tailed); $g_t \sim \mathrm{N}(0, R_t)$ for $t \in [301, 400]$ (Gaussian). (66)
where w.p. stands for "with probability". The true process noise covariance matrix $Q_t$ with parameter $M = 1$ and the nominal measurement noise covariance matrix $R_t$ with parameter $N = 150\ \mathrm{m}^2$ are set as
$Q_t = M \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}$ (67)
$R_t = N$ (68)
The real UTVMLP is set as
$\gamma_t = 0.1$ for $t \in [1, 100]$; $\gamma_t = 0.15$ for $t \in [101, 200]$; $\gamma_t = 0.3$ for $t \in [201, 300]$; $\gamma_t = 0.1$ for $t \in [301, 400]$. (69)
From Equations (66)–(69), it can be seen that the measurement noise and the UTVMLP are divided into four stages, as shown in Table 2. The remaining system parameters are as follows: the sampling interval is $k = 0.01\ \mathrm{s}$ and the total simulation time is $T = 400\ \mathrm{s}$. The proposed filter is compared with the typical Kalman filter (KF) [2]; the existing variational Bayesian-based adaptive KF with UTVMLP (VBAKF) [7]; the existing Gaussian-Student's t-mixture distribution-based KF (GSTKF) with Gaussian process noise [14]; and the existing IKF with known true measurement loss probability [8]. The parameters of the VBAKF are selected as $p_0 = 0.5$, $\alpha_0 = 5$, $\beta_0 = 5$, $\rho = 1 - e^{-5}$, and $N_m = 10$. The parameters of the GSTKF are selected as $v_t = 5$ and $e_0 = 0.85$. The parameters of the proposed filter are given as $\rho = 0.99$, $\mu = 5$, $h_0 = 0.85$, $N_I = 10$, $\eta_0 = 5$, $\delta_0 = 5$, and $\varsigma = 10^{-16}$. All filters are programmed in MATLAB R2018a and run on a computer with an Intel Core i5-6300HQ CPU at 2.30 GHz and 8 GB of RAM.
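The stage-wise noise model of Equation (66) can be generated as follows; this is a sketch, and the helper names and random seed are our own choices:

```python
import numpy as np

rng = np.random.default_rng(2)
R = 150.0   # nominal measurement noise variance, N = 150 m^2

def outlier_prob(t):
    """Per-stage outlier probability from Eq. (66) (1-indexed time in [1, 400])."""
    if 101 <= t <= 200:
        return 0.02    # slightly heavy-tailed stage
    if 201 <= t <= 300:
        return 0.05    # moderately heavy-tailed stage
    return 0.0         # Gaussian stages 1 and 4

def sample_noise(t):
    """Draw g_t: N(0, R) w.p. 1 - p, or the outlier component N(0, 500 R) w.p. p."""
    var = 500 * R if rng.random() < outlier_prob(t) else R
    return rng.normal(0.0, np.sqrt(var))

g = np.array([sample_noise(t) for t in range(1, 401)])
```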
To evaluate the system state estimation performance of all the algorithms, the root-mean square error (RMSE) and the averaged root-mean square error (AGRMSE) are utilized as performance indicators. The RMSE and AGRMSE of the system state are defined as
$\mathrm{RMSE_x} = \sqrt{\dfrac{1}{M_c} \sum_{r=1}^{M_c} \left( (x_t^r - \hat{x}_t^r)^2 + (y_t^r - \hat{y}_t^r)^2 \right)}$ (70)
$\mathrm{AGRMSE_x} = \sqrt{\dfrac{1}{M_c T} \sum_{t=1}^{T} \sum_{r=1}^{M_c} \left( (x_t^r - \hat{x}_t^r)^2 + (y_t^r - \hat{y}_t^r)^2 \right)}$ (71)
where $(x_t^r, y_t^r)$ and $(\hat{x}_t^r, \hat{y}_t^r)$ denote the true and estimated system states at the $r$th Monte Carlo run and discrete time $t$, respectively, and $M_c = 500$ represents the total number of Monte Carlo runs.
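The RMSE and AGRMSE above reduce to a few array operations; a minimal sketch, in which the function name and array layout are our own choices:

```python
import numpy as np

def rmse_agrmse(x_true, x_est):
    """x_true, x_est: (Mc, T, 2) arrays of true/estimated states over Mc runs.
    Returns the per-time-step RMSE (shape (T,)) and the scalar AGRMSE."""
    sq = ((x_true - x_est) ** 2).sum(axis=2)   # (x - x_hat)^2 + (y - y_hat)^2
    rmse = np.sqrt(sq.mean(axis=0))            # average over runs, per time step
    agrmse = np.sqrt(sq.mean())                # average over runs and time
    return rmse, agrmse

# Toy check: a constant unit error in both components gives RMSE = sqrt(2).
rmse, agrmse = rmse_agrmse(np.zeros((3, 5, 2)), np.ones((3, 5, 2)))
```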
Unlike the proposed algorithm and the VBAKF, the KF, IKF and GSTKF do not estimate the UTVMLP. Although the IKF can also address the filtering problem with measurement loss, it is based on the assumption that the measurement loss probability is known. Therefore, only the VBAKF and the proposed algorithm participate in the comparison of UTVMLP estimation performance.
Figure 1 shows the RMSEs of the proposed filter and the existing filters over 500 Monte Carlo runs. Additionally, the AGRMSEs and single-step run times (SSRTs) of the different filters are listed in Table 3. It can be seen from Figure 1 and Table 3 that, in the context of UTVMLP and NSHTMN, when the measurement noise is Gaussian and the loss probability is slight, as in stages 1 and 4, the estimation accuracy of the proposed filter is close to that of the IKF with the true loss probability, and the proposed algorithm outperforms the other algorithms. The proposed algorithm also performs better than the existing algorithms when the measurement noise is heavy-tailed and the measurement loss probability is larger, as in stages 2 and 3. In addition, the proposed algorithm has a longer SSRT and higher computational complexity than the existing filters, as can be observed from Table 3.
Figure 2 shows the curves of the true and estimated UTVMLPs of the VBAKF and the proposed filter over 500 Monte Carlo runs. Obviously, the NSHTMN has a great influence on the filtering performance of the VBAKF, and the proposed filter achieves better UTVMLP estimation accuracy than the VBAKF in the scenario of NSHTMN.
Figure 3 and Figure 4 show the RMSEs and the estimated UTVMLPs of the proposed filter with dof parameter $\mu = 3, 4, 5, 6$ over the 500 Monte Carlo runs, respectively. The corresponding SSRTs of the proposed filter with $\mu = 3, 4, 5, 6$ are 0.2991, 0.2983, 0.2993, and 0.2989. It can be seen that the proposed filter with different dof parameters performs better than current algorithms in the system state and UTVMLP estimations. Moreover, the dof parameter $\mu$ has little influence on the estimation accuracy and time complexity of the proposed algorithm, and the recommended value of $\mu$ is therefore set to 5.
Figure 5 and Figure 6 show the RMSEs and the estimated UTVMLPs of the proposed filter with forgetting factor $\rho = 0.93, 0.95, 0.97, 0.99$ over the 500 Monte Carlo runs, respectively. The corresponding SSRTs of the proposed filter with $\rho = 0.93, 0.95, 0.97, 0.99$ are all approximately equal to 0.2990. The proposed filter with $\rho = 0.99$ has the best performance in the system state and UTVMLP estimations, and the value of $\rho$ has little effect on the computational complexity. Therefore, the recommended value of $\rho$ is 0.99.
Figure 7 shows the AGRMSEs of the proposed filter and the current algorithms with iteration number $N_I = 1, 2, \ldots, 10$. We can see from Figure 7 that the proposed filter has a smaller AGRMSE than the existing filters when $N_I \geq 3$ and converges faster than the existing filters. However, Table 4 shows that the setting of $N_I$ has a substantial impact on the time consumption of the proposed filter, and the SSRT increases with $N_I$. Therefore, considering both time consumption and estimation accuracy, the recommended value of $N_I$ ranges from 4 to 10.

5. Conclusions

In this paper, a new variational Bayesian-based KF was presented to address the filtering problem with UTVMLP and NSHTMN. The system state vector, BM, BL, the intermediate random variables, the mixing probability, and the UTVMLP are simultaneously inferred by introducing the variational Bayesian technique. Simulation results illustrate that the proposed filter performs better than existing filters in the estimation of both the system state vector and the UTVMLP.

6. Future Work

Environmental factors in practical applications may be more complicated than those illustrated in this paper. Apart from non-stationary heavy-tailed measurement noise and unknown loss probability, the system process noise may also follow a non-stationary heavy-tailed distribution. In terms of measurement, random measurement delays may also appear. Therefore, in our future research, we will further consider non-stationary heavy-tailed process noise and random measurement delay based on the theoretical content of this paper. Additionally, we will design a non-linear filtering method with lower computational complexity and verify its effectiveness in real-world applications.

Author Contributions

Conceptualization, C.S.; methodology, C.S.; software, C.S. and H.S.; data curation, W.Z.; writing—original draft preparation, C.S.; writing—review and editing, Y.Y.; visualization, C.S. and Y.Y.; supervision, W.Z.; project administration, W.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research study was funded by the National Natural Science Foundation of China, grant number 61573113.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proof of Proposition 1

Utilizing Equation (29), log p ( Ξ , y 1 : t ) can be derived as
log p ( Ξ , y 1 : t ) = 0.5 ζ t b t ( y t H t x t ) T R t 1 ( y t H t x t ) 0.5 ( 1 ζ t ) b t β t ( y t H t x t ) T R t 1 ( y t H t x t ) 0.5 ζ t ( 1 b t ) y t T R t 1 y t 0.5 ( 1 ζ t ) ( 1 b t ) β t y t T R t 1 y t 0.5 ( x t x ^ t | t 1 ) T P t | t 1 1 × ( x t x ^ t | t 1 ) logG ( β t , μ t 2 , μ t 2 ) + ζ t log φ t + ( 1 φ t ) log ( 1 ζ t ) + ( 1 γ t ) log b t + γ t log ( 1 b t ) + logBe ( φ t ; h 0 , 1 h 0 ) + logBe ( γ t ; η ^ t | t 1 , δ ^ t | t 1 ) + c Ξ
Exploiting θ = x t in Equation (28) and utilizing (a) yields
log q a ( s + 1 ) ( x t ) = 0.5 E ( s ) [ ζ t ] E ( s ) [ b t ] ( y t H t x t ) T R t 1 ( y t H t x t ) 0.5 E ( s ) [ 1 ζ t ] E ( s ) [ b t ] × E ( s ) [ β t ] ( y t H t x t ) T R t 1 ( y t H t x t ) 0.5 ( x t x ^ t | t 1 ) T P t | t 1 1 ( x t x ^ t | t 1 ) + c x t
By exploiting Equation (34) in Equation (A2), the posterior PDF $q_a^{(s+1)}(x_t)$ is given by

$$
q_a^{(s+1)}(x_t) = \mathrm{N}\big(x_t; \hat{x}_{t|t}^{(s+1)}, P_{t|t}^{(s+1)}\big) \propto \mathrm{N}\big(x_t; \hat{x}_{t|t-1}, P_{t|t-1}\big)\,\mathrm{N}\big(y_t; H_t x_t, \tilde{R}_t^{(s+1)}\big)
\tag{A3}
$$
Based on Equation (A3), Equation (30) can be calculated and $q_a^{(s+1)}(x_t)$ is updated by utilizing the measurement update of the traditional Kalman filter.
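The measurement update invoked here is the standard Kalman filter update in which the nominal noise covariance is replaced by the modified covariance $\tilde{R}_t^{(s+1)}$. A minimal numerical sketch (the function and variable names are illustrative, not from the paper):

```python
import numpy as np

def kf_measurement_update(x_pred, P_pred, y, H, R_tilde):
    """Standard Kalman measurement update, with the nominal measurement-noise
    covariance replaced by a modified covariance R_tilde."""
    S = H @ P_pred @ H.T + R_tilde           # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_post = x_pred + K @ (y - H @ x_pred)   # updated state estimate
    P_post = P_pred - K @ H @ P_pred         # updated error covariance
    return x_post, P_post
```

Inflating R_tilde (e.g., by $1/\mathrm{E}[\beta_t]$) reduces the gain, so outlying measurements are down-weighted automatically.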

Appendix B. Proof of Proposition 2

Exploiting $\theta = \beta_t$ in Equation (28) and utilizing Equation (A1) yields

$$
\begin{aligned}
\log q_a^{(s+1)}(\beta_t) ={}& 0.5\big(n\,\mathrm{E}^{(s)}[1-\zeta_t] + \mu_t - 2\big)\log \beta_t - 0.5\,\beta_t\,\mathrm{E}^{(s)}[1-\zeta_t]\,\mathrm{E}^{(s)}[b_t]\,\mathrm{tr}\big(A_t^{(s+1)} R_t^{-1}\big) \\
&- 0.5\,\beta_t\,\mathrm{E}^{(s)}[1-\zeta_t]\,\mathrm{E}^{(s)}[1-b_t]\,\mathrm{tr}\big(B_t^{(s+1)} R_t^{-1}\big) - 0.5\,\mu_t \beta_t + c_{\beta_t} \\
={}& 0.5\big(n\,\mathrm{E}^{(s)}[1-\zeta_t] + \mu_t - 2\big)\log \beta_t - 0.5\,\beta_t\big[\mathrm{tr}\big(G_t^{(s+1)} R_t^{-1}\big) + \mu_t\big] + c_{\beta_t}
\end{aligned}
\tag{A4}
$$
where $A_t^{(s+1)}$ and $B_t^{(s+1)}$ are defined as

$$
A_t^{(s+1)} = \mathrm{E}^{(s+1)}\big[(y_t - H_t x_t)(y_t - H_t x_t)^T\big] = H_t P_{t|t}^{(s+1)} H_t^T + \big(y_t - H_t \hat{x}_{t|t}^{(s+1)}\big)\big(y_t - H_t \hat{x}_{t|t}^{(s+1)}\big)^T
\tag{A5}
$$

$$
B_t^{(s+1)} = \mathrm{E}^{(s+1)}\big[y_t\, y_t^T\big] = y_t\, y_t^T
\tag{A6}
$$
By exploiting Equations (36) and (37) in Equation (A4), we have

$$
\log q_a^{(s+1)}(\beta_t) = \big(\pi_t^{(s+1)} - 1\big)\log \beta_t - \nu_t^{(s+1)} \beta_t + c_{\beta_t}
\tag{A7}
$$

According to Equation (A7), we can obtain

$$
q_a^{(s+1)}(\beta_t) \propto \beta_t^{\,\pi_t^{(s+1)} - 1} \exp\big(-\nu_t^{(s+1)} \beta_t\big)
\tag{A8}
$$
Based on Equation (A8), Equation (41) can be obtained.
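Equation (A8) is the kernel of a Gamma distribution, so the expectations of $\beta_t$ required by the other updates are available in closed form: $\mathrm{E}[\beta_t] = \pi_t/\nu_t$ and $\mathrm{E}[\log \beta_t] = \psi(\pi_t) - \log \nu_t$, with $\psi$ the digamma function. A sketch assuming SciPy is available (the function name is illustrative):

```python
import numpy as np
from scipy.special import digamma

def gamma_posterior_moments(shape, rate):
    """Closed-form moments of a Gamma(shape, rate) posterior, i.e. the
    VB expectations E[beta_t] and E[log beta_t] implied by (A8)."""
    e_beta = shape / rate
    e_log_beta = digamma(shape) - np.log(rate)
    return e_beta, e_log_beta
```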

Appendix C. Proof of Proposition 3

Exploiting $\theta = b_t$ in Equation (28) and utilizing Equation (A1) yields

$$
\log q_a^{(s+1)}(b_t) = b_t\, \Pi_t^{a,(s+1)} + (1-b_t)\, \Pi_t^{b,(s+1)} + c_{b_t}
\tag{A9}
$$
where the intermediate variables $\Pi_t^{a,(s+1)}$ and $\Pi_t^{b,(s+1)}$ are defined as

$$
\Pi_t^{a,(s+1)} = \mathrm{E}^{(s)}[\log(1-\gamma_t)] - 0.5\,\mathrm{tr}\big(C_t^{(s+1)} R_t^{-1}\big) + 0.5\,n\,\mathrm{E}^{(s)}[1-\zeta_t]\,\mathrm{E}^{(s)}[\log \beta_t]
\tag{A10}
$$

$$
\Pi_t^{b,(s+1)} = \mathrm{E}^{(s)}[\log \gamma_t] - 0.5\,\mathrm{tr}\big(D_t^{(s+1)} R_t^{-1}\big) + 0.5\,n\,\mathrm{E}^{(s)}[1-\zeta_t]\,\mathrm{E}^{(s)}[\log \beta_t]
\tag{A11}
$$
By utilizing Equations (A9)–(A11), the posterior PMF $q_a^{(s+1)}(b_t)$ can be formulated as

$$
q_a^{(s+1)}(b_t) \propto \big[\exp\big(\Pi_t^{a,(s+1)}\big)\big]^{b_t}\, \big[\exp\big(\Pi_t^{b,(s+1)}\big)\big]^{1-b_t}
\tag{A12}
$$
Based on Equation (A12), $q_a^{(s+1)}(b_t)$ can be updated as a Bernoulli distribution.
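Numerically, the two weights $\exp(\Pi_t^{a,(s+1)})$ and $\exp(\Pi_t^{b,(s+1)})$ can overflow or underflow, since the $\Pi$ terms contain traces of quadratic forms; the standard log-sum-exp shift avoids this. A sketch with illustrative names:

```python
import numpy as np

def bernoulli_posterior(log_w1, log_w0):
    """Normalize the Bernoulli posterior of (A12) from its log-weights,
    e.g. log_w1 = Pi_a (for b_t = 1) and log_w0 = Pi_b (for b_t = 0)."""
    m = max(log_w1, log_w0)        # shift by the max for numerical stability
    w1 = np.exp(log_w1 - m)
    w0 = np.exp(log_w0 - m)
    p1 = w1 / (w1 + w0)            # p(b_t = 1)
    return p1, 1.0 - p1
```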

Appendix D. Proof of Proposition 4

Exploiting $\theta = \zeta_t$ in Equation (28) and utilizing Equation (A1) yields

$$
\log q_a^{(s+1)}(\zeta_t) = \zeta_t\, \Sigma_t^{a,(s+1)} + (1-\zeta_t)\, \Sigma_t^{b,(s+1)} + c_{\zeta_t}
\tag{A13}
$$
where the intermediate variables $\Sigma_t^{a,(s+1)}$ and $\Sigma_t^{b,(s+1)}$ are defined as

$$
\Sigma_t^{a,(s+1)} = \mathrm{E}^{(s)}[\log \varphi_t] - 0.5\,\mathrm{tr}\big(V_t^{(s+1)} R_t^{-1}\big)
\tag{A14}
$$

$$
\Sigma_t^{b,(s+1)} = \mathrm{E}^{(s)}[\log(1-\varphi_t)] - 0.5\,\mathrm{tr}\big(W_t^{(s+1)} R_t^{-1}\big) + 0.5\,n\,\mathrm{E}^{(s)}[\log \beta_t]
\tag{A15}
$$
By utilizing Equations (A13)–(A15), the posterior PMF $q_a^{(s+1)}(\zeta_t)$ can be formulated as

$$
q_a^{(s+1)}(\zeta_t) \propto \big[\exp\big(\Sigma_t^{a,(s+1)}\big)\big]^{\zeta_t}\, \big[\exp\big(\Sigma_t^{b,(s+1)}\big)\big]^{1-\zeta_t}
\tag{A16}
$$
Based on Equation (A16), $q_a^{(s+1)}(\zeta_t)$ can be updated as a Bernoulli distribution.

Appendix E. Proof of Proposition 5

Exploiting $\theta = \varphi_t$ in Equation (28) and utilizing Equation (A1) yields

$$
\log q_a^{(s+1)}(\varphi_t) = \big(h_0 + \mathrm{E}^{(s)}[\zeta_t] - 1\big)\log \varphi_t + \big(\mathrm{E}^{(s)}[1-\zeta_t] - h_0\big)\log(1-\varphi_t) + c_{\varphi_t}
\tag{A17}
$$
Substituting Equations (48) and (49) in Equation (A17) yields

$$
\log q_a^{(s+1)}(\varphi_t) = \big(h_t^{(s+1)} - 1\big)\log \varphi_t + \big(d_t^{(s+1)} - 1\big)\log(1-\varphi_t) + c_{\varphi_t}
\tag{A18}
$$

According to Equation (A18), we obtain

$$
q_a^{(s+1)}(\varphi_t) \propto \varphi_t^{\,h_t^{(s+1)} - 1}\,(1-\varphi_t)^{\,d_t^{(s+1)} - 1}
\tag{A19}
$$
Based on Equation (A19), Equation (47) is obtained.
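Equation (A19) is a Beta kernel, so (A17) identifies the posterior of $\varphi_t$ as $\mathrm{Be}\big(h_0 + \mathrm{E}[\zeta_t],\; 1 - h_0 + \mathrm{E}[1-\zeta_t]\big)$. A sketch of this update and its posterior mean (function name illustrative):

```python
def beta_posterior_phi(h0, e_zeta):
    """Beta posterior of phi_t read off from (A17): the exponent of
    log(phi) is (h0 + E[zeta]) - 1 and the exponent of log(1 - phi)
    is (1 - h0 + E[1 - zeta]) - 1."""
    a = h0 + e_zeta                 # first shape parameter
    b = 1.0 - h0 + (1.0 - e_zeta)   # second shape parameter
    return a, b, a / (a + b)        # shapes and posterior mean E[phi_t]
```

Since the prior pseudo-counts sum to one and a single indicator is absorbed, the two shape parameters always sum to two.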

Appendix F. Proof of Proposition 6

Exploiting $\theta = \gamma_t$ in Equation (28) and utilizing Equation (A1) yields

$$
\log q_a^{(s+1)}(\gamma_t) = \big(\hat{\eta}_{t|t-1} + \mathrm{E}^{(s+1)}[1-b_t] - 1\big)\log \gamma_t + \big(\hat{\delta}_{t|t-1} + \mathrm{E}^{(s+1)}[b_t] - 1\big)\log(1-\gamma_t) + c_{\gamma_t}
\tag{A20}
$$
Substituting Equations (51) and (52) in Equation (A20) yields

$$
\log q_a^{(s+1)}(\gamma_t) = \big(\hat{\eta}_t^{(s+1)} - 1\big)\log \gamma_t + \big(\hat{\delta}_t^{(s+1)} - 1\big)\log(1-\gamma_t) + c_{\gamma_t}
\tag{A21}
$$

According to Equation (A21), we obtain

$$
q_a^{(s+1)}(\gamma_t) \propto \gamma_t^{\,\hat{\eta}_t^{(s+1)} - 1}\,(1-\gamma_t)^{\,\hat{\delta}_t^{(s+1)} - 1}
\tag{A22}
$$
Based on Equation (A22), Equation (50) is obtained.

Figure 1. RMSE_x of different filters.
Figure 2. The true and estimated UTVMLPs.
Figure 3. RMSE_x of the proposed filter with μ = 3, 4, 5, 6.
Figure 4. The estimated UTVMLPs of the proposed filter with μ = 3, 4, 5, 6.
Figure 5. RMSE_x of the proposed filter with forgetting factor ρ = 0.93, 0.95, 0.97, and 0.99.
Figure 6. The estimated UTVMLPs of the proposed filter with ρ = 0.93, 0.95, 0.97, and 0.99.
Figure 7. AGRMSE_x of the proposed filter when the iteration number N_I = 1, 2, …, 10.
Table 1. The proposed variational Bayesian-based Kalman filter with UTVMLP and NSHTMN (one time step).
Inputs: x̂_{t−1|t−1}, P_{t−1|t−1}, Q_{t−1}, R_t, y_t, F_{t−1}, H_t, n, m, μ_t, h_0, η̂_{t−1}, δ̂_{t−1}, N_I, ς
Time update:
1. Obtain x̂_{t|t−1} and P_{t|t−1} utilizing Equations (18) and (19) (time update of the typical Kalman filter).
Variational measurement update:
2. Initialization: x̂_{t|t}^{(0)} = x̂_{t|t−1}, P_{t|t}^{(0)} = P_{t|t−1}, E^{(0)}[β_t] = 1, E^{(0)}[log β_t] = 0, E^{(0)}[b_t] = η̂_{t−1}/δ̂_{t−1}, E^{(0)}[1−b_t] = 1 − E^{(0)}[b_t], η̂_t^{(0)} = η̂_{t−1}, δ̂_t^{(0)} = δ̂_{t−1}, E^{(0)}[ζ_t] = 1, E^{(0)}[1−ζ_t] = 1 − E^{(0)}[ζ_t], E^{(0)}[log φ_t] = Ψ(h_0) − Ψ(1), E^{(0)}[log(1−φ_t)] = Ψ(1−h_0) − Ψ(1), E^{(0)}[log γ_t] = Ψ(η̂_t^{(0)}) − Ψ(η̂_t^{(0)} + δ̂_t^{(0)}), E^{(0)}[log(1−γ_t)] = Ψ(δ̂_t^{(0)}) − Ψ(η̂_t^{(0)} + δ̂_t^{(0)})
for s = 0 : N I 1 .
3. Update q a ( s + 1 ) ( x t ) by Equation (30).
4. Obtain x ^ t | t ( s + 1 ) , P t | t ( s + 1 ) , and R ˜ t ( s + 1 ) by utilizing Equations (31)–(34) (typical Kalman filter).
5. Update the Gamma-distributed q a ( s + 1 ) ( β t ) by Equation (35).
6. Obtain π t ( s + 1 ) , ν t ( s + 1 ) , and G t ( s + 1 ) by utilizing Equations (36)–(38).
7. Update the Bernoulli-distributed q a ( s + 1 ) ( b t ) .
8. Obtain p ( s + 1 ) ( b t = 1 ) , p ( s + 1 ) ( b t = 0 ) , C t ( s + 1 ) , and D t ( s + 1 ) by utilizing Equations (39)–(42).
9. Obtain E ( s + 1 ) [ b t ] and E ( s + 1 ) [ 1 b t ] by utilizing Equations (55) and (56).
10. Update the Bernoulli-distributed q a ( s + 1 ) ( ζ t ) .
11. Obtain p ( s + 1 ) ( ζ t = 1 ) , p ( s + 1 ) ( ζ t = 0 ) , V t ( s + 1 ) , and W t ( s + 1 ) by utilizing Equations (43)–(46).
12. Obtain E ( s + 1 ) [ ζ t ] and E ( s + 1 ) [ 1 ζ t ] by utilizing Equations (57) and (58).
13. Update the Beta-distributed q a ( s + 1 ) ( φ t ) by Equation (47).
14. Obtain h t ( s + 1 ) and d t ( s + 1 ) by utilizing Equations (48) and (49).
15. Obtain E ( s + 1 ) [ log ( φ t ) ] and E ( s + 1 ) [ log ( 1 φ t ) ] by utilizing Equations (59) and (60).
16. Update the Beta-distributed q a ( s + 1 ) ( γ t ) by Equation (50).
17. Obtain η ^ t ( s + 1 ) and δ ^ t ( s + 1 ) by utilizing Equations (51) and (52).
18. Obtain E ( s + 1 ) [ log ( γ t ) ] and E ( s + 1 ) [ log ( 1 γ t ) ] by utilizing Equations (61) and (62).
19. If ‖x̂_{t|t}^{(s+1)} − x̂_{t|t}^{(s)}‖ / ‖x̂_{t|t}^{(s)}‖ ≤ ς, terminate the iteration.
End for:
20. x ^ t | t = x ^ t | t ( s ) , P t | t = P t | t ( s ) , h t = h t ( s ) , d t = d t ( s ) , η t = η t ( s ) , δ t = δ t ( s )
Outputs: x ^ t | t , P t | t , h t , d t , η t , δ t , h t / ( h t + d t ) , η t / ( η t + δ t )
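The fixed-point structure of steps 3–19 can be sketched as a generic iteration loop. Here `update_once` is a placeholder for one pass of steps 3–18, and the stopping rule implements step 19; this is an illustrative skeleton, not the paper's implementation:

```python
import numpy as np

def vb_iterate(x_pred, P_pred, update_once, n_iter=10, tol=1e-6):
    """Variational fixed-point iteration for one time step, with the
    relative-change stopping rule of step 19 of Table 1."""
    x_post, P_post = x_pred.copy(), P_pred.copy()
    x_prev = x_pred.copy()
    for s in range(n_iter):
        x_post, P_post = update_once(x_post, P_post)
        # step 19: stop once the state estimate has effectively converged
        rel = np.linalg.norm(x_post - x_prev) / max(np.linalg.norm(x_prev), 1e-12)
        if rel <= tol:
            break
        x_prev = x_post.copy()
    return x_post, P_post
```

In practice a small N_I (see Table 4) already gives near-converged estimates, so the tolerance test mainly saves iterations in easy time steps.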
Table 2. The stages of measurement and the corresponding measurement noise and UTVMLP.

Measurement Stage | Measurement Noise | UTVMLP
Stage 1, time 1 s–100 s | g_t ~ N(0, R_t) (Gaussian) | 0.1 (slight loss)
Stage 2, time 101 s–200 s | g_t ~ N(0, R_t) w.p. 0.98; N(0, 500 R_t) w.p. 0.02 (slightly heavy-tailed) | 0.15 (slight loss)
Stage 3, time 201 s–300 s | g_t ~ N(0, R_t) w.p. 0.95; N(0, 500 R_t) w.p. 0.05 (moderately heavy-tailed) | 0.3 (moderate loss)
Stage 4, time 301 s–400 s | g_t ~ N(0, R_t) (Gaussian) | 0.1 (slight loss)
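The measurement sequence of Table 2 can be generated as follows. This is an illustrative sketch (names are not from the paper) in which a lost measurement contains noise only, consistent with the $y_t^T R_t^{-1} y_t$ terms in (A1):

```python
import numpy as np

def simulate_measurement(Hx, R, mix_prob, loss_prob, rng):
    """Draw one measurement under the Table 2 model: Gaussian noise,
    inflated to 500*R with probability mix_prob (heavy tail), and a
    measurement loss (noise-only observation) with probability loss_prob."""
    n = R.shape[0]
    cov = 500.0 * R if rng.random() < mix_prob else R   # mixture component
    g = rng.multivariate_normal(np.zeros(n), cov)       # noise draw
    lost = rng.random() < loss_prob
    return g if lost else Hx + g                        # lost => noise only
```

Stage 3, for example, corresponds to `mix_prob=0.05` and `loss_prob=0.3`.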
Table 3. AGRMSE_x and single-step running times (SSRT) of different filters.

Filters | KF | IKF | VBAKF | GSTKF | The Proposed Filter
AGRMSE_x in Stage 1 | 10.1336 | 4.4326 | 4.7705 | 8.7683 | 4.7704
AGRMSE_x in Stage 2 | 25.4009 | 11.1049 | 18.7603 | 14.4343 | 5.0660
AGRMSE_x in Stage 3 | 58.0561 | 17.5274 | 35.8194 | 28.0354 | 6.3044
AGRMSE_x in Stage 4 | 24.9129 | 4.4448 | 27.0252 | 9.5892 | 4.5081
AGRMSE_x in all stages | 29.6259 | 9.3774 | 21.5944 | 15.2068 | 5.1623
SSRT (ms) | 0.0276 | 0.0925 | 0.1166 | 0.1491 | 0.2989
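The AGRMSE_x entries can be reproduced from Monte Carlo state errors. The sketch below assumes AGRMSE is the per-step RMSE across runs, averaged over the steps of a stage; the paper's exact definition is not reproduced in this excerpt:

```python
import numpy as np

def agrmse(err):
    """err has shape (runs, steps, dim): Monte Carlo state-estimation
    errors within one stage. Compute the per-step RMSE across runs,
    then average those RMSEs over the stage."""
    rmse_t = np.sqrt(np.mean(np.sum(err ** 2, axis=-1), axis=0))
    return float(rmse_t.mean())
```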
Table 4. SSRT of the proposed filter with the iteration number N_I = 1, 2, …, 10.

Iteration Number N_I | 1 | 2 | 3 | 4 | 5
SSRT (ms) | 0.0327 | 0.0550 | 0.0894 | 0.1307 | 0.1570
Iteration Number N_I | 6 | 7 | 8 | 9 | 10
SSRT (ms) | 0.1883 | 0.2243 | 0.2451 | 0.2835 | 0.2989
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Shan, C.; Zhou, W.; Yang, Y.; Shan, H. A New Variational Bayesian-Based Kalman Filter with Unknown Time-Varying Measurement Loss Probability and Non-Stationary Heavy-Tailed Measurement Noise. Entropy 2021, 23, 1351. https://0-doi-org.brum.beds.ac.uk/10.3390/e23101351
