Article

Double Accelerated Convergence ZNN with Noise-Suppression for Handling Dynamic Matrix Inversion

Yongjun He, Bolin Liao, Lin Xiao, Luyang Han and Xiao Xiao
1 College of Information Science and Engineering, Jishou University, Jishou 416000, China
2 College of the Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, Hunan Normal University, Changsha 410081, China
* Author to whom correspondence should be addressed.
Submission received: 24 November 2021 / Revised: 17 December 2021 / Accepted: 19 December 2021 / Published: 24 December 2021

Abstract

Matrix inversion is commonly encountered in the field of mathematics. Therefore, many methods, including the zeroing neural network (ZNN), have been proposed to solve it. Although the conventional fixed-parameter ZNN (FPZNN) can successfully address the matrix inversion problem, it tends to focus on either convergence speed or robustness, but not both. To surmount this problem, a double accelerated convergence ZNN (DAZNN) with noise suppression and arbitrary-time convergence is proposed to settle the dynamic matrix inversion problem (DMIP). The double accelerated convergence of the DAZNN model is accomplished by specially designing exponential decay variable parameters and an exponential-type sign-bi-power activation function (AF). Additionally, two theoretical analyses verify the DAZNN model's arbitrary-time convergence and its robustness against additive bounded noise. A matrix inversion example is utilized to illustrate that the DAZNN model has better properties than conventional FPZNNs employing six other AFs when handling the DMIP. Lastly, a dynamic positioning example that employs the evolution formula of the DAZNN model verifies its applicability.

1. Introduction

As a fundamental mathematical issue, matrix inversion plays a crucial role in applied mathematics and engineering fields such as control applications [1], quaternions [2], MIMO systems [3,4], and robot kinematics [5,6,7]. Therefore, numerous numerical algorithms have been developed to address this issue. For example, the Cholesky decomposition algorithm [8] and the Newton iteration algorithm [9] were utilized to solve matrix inversion. However, as the dimensionality of matrix problems increases, these numerical algorithms are no longer able to handle such complex matrix problems rapidly and effectively [10]. To counteract the disadvantages of the above methods, neural networks with the capacity to process problems in parallel were introduced [11]. In addition, neural networks have recently become a research hotspot and have been applied to numerous areas, such as medical image denoising [12], the hydrogen economy [13], biometric verification [14], quadrotors [15], robots [16], synchronization problems [17], optimal control [18], virus forecasting [19], inflation prediction [20], and portfolio optimization and selection [21,22]. Considering the fact that gradient-based neural networks cannot effectively solve time-varying problems, the zeroing neural network (ZNN), a branch of the recurrent neural network (RNN), was proposed for computing the time-variant matrix inversion problem [23].
Nevertheless, as a consequence of the convergence speed limitations, conventional ZNNs with linear activation function (AF) may not be capable of solving large-scale applications online [24]. Consequently, a nonlinear sign-bi-power (SBP) AF was presented in order to shorten the convergence time of ZNN models [25]. ZNN models that use SBPAF for acceleration have been widely reported [26]. However, all the studies mentioned above fail to consider the interference of noise, yet noise is an inherent part of all practical applications [27]. As a result, many researchers proposed a novel class of ZNN models using an integral term to address issues that were affected by noise interference [28,29]. For instance, a modified ZNN model with implicit noise tolerance was proposed by Jin et al. in [30] for the solution of quadratic programming problems. A noise-tolerant ZNN model was investigated to calculate complex matrix inversion with noise [31]. For further research, a unified framework for ZNN with an integral term was investigated, and its superiority to the conventional ZNN model was verified [32].
Regrettably, the vast majority of existing ZNN models, including the aforementioned ones, employ fixed convergence parameters, even though they achieve fast convergence or noise tolerance. As a matter of fact, the convergence parameter generally varies with time in hardware systems [33]. In response to this issue, variable-parameter ZNNs/RNNs with fast convergence and strong robustness have been researched [34,35,36,37]. For example, a variable-gain RNN with fast convergence was proposed for dynamic quadratic programming [34]. Xiao et al. developed a novel varying-parameter ZNN (VPZNN) for handling matrix inversion in [35], which exhibits significantly improved convergence speed compared with the fixed-parameter ZNN (FPZNN). Further, a varying-parameter RNN with an exponential-gain time-varying term was considered for resolving matrix inversion [36]. Nevertheless, the variable parameters of the aforesaid VPZNN models tend to infinity, which is clearly unreasonable in hardware implementations. Thus, we design two exponential decay variable parameters in order to further accelerate the model's convergence rate. Besides, an exponential-type SBPAF (ETSBPAF) is designed to gain even better convergence performance. As such, the double accelerated convergence ZNN (DAZNN), characterized by noise suppression and arbitrary-time convergence, is proposed as a new model for dealing with the dynamic matrix inversion problem (DMIP). In addition, two theoretical analyses demonstrate the arbitrary-time convergence of the DAZNN model as well as its robustness when bounded noises are added. What is more, an illustrative example is employed to assess the validity of the theories, as well as the superiority of the DAZNN in comparison with FPZNNs utilizing other AFs (i.e., the linear AF, bipolar-sigmoid AF, tunable AF, sign-bi-power AF, predefined time AF, and improved predefined time AF). Lastly, an angle-of-arrival (AOA) dynamic positioning example that employs the evolution formula of the DAZNN model verifies its applicability.
Throughout the remainder of this paper, five sections are presented. The problem description and model designs are shown in Section 2. In Section 3, we analyze the arbitrary-time convergence and robustness of the proposed DAZNN model. An illustrative example is provided in Section 4. Besides, the AOA dynamic positioning simulation with sine noise is conducted in Section 5. Section 6 summarizes the entire work of this paper. The main contributions of this paper are summarized below.
  • As opposed to the ZNN generated by the original error function for the static matrix inversion, this work develops the DAZNN generated by a novel error function to solve dynamic matrix inversion;
  • Two new exponential decay variable parameters and a novel exponential-type SBPAF are incorporated into the DAZNN model in order to achieve double accelerated convergence and stronger noise suppression;
  • Two rigorous theoretical analyses are employed in order to demonstrate the arbitrary time convergence of the DAZNN model as well as its robustness under additive bounded noise;
  • The illustrative example confirms that the DAZNN model is superior to the fixed-parameter model activated by six other activation functions. Besides, the evolution formula of the DAZNN model is applied to AOA dynamic positioning with sine noise to further illustrate the model's applicability.

2. Problem Formulation and Models Design

In this section, the dynamic matrix inversion problem is first presented. Then, the design procedures of the fixed-parameter ZNN model and the proposed variable-parameter DAZNN model are introduced.

2.1. Problem Formulation

Consider a dynamic matrix inversion problem (DMIP):
$$A(t)X(t) = I,$$
where $A(t) \in \mathbb{R}^{m \times m}$ is a known non-singular and smooth time-varying coefficient matrix, $X(t) \in \mathbb{R}^{m \times m}$ is an unknown matrix, and $I \in \mathbb{R}^{m \times m}$ denotes the identity matrix.

2.2. Fixed-Parameter ZNN

Most existing research considers only the ZNN model generated by the original error function (that is, $E(t) = A(t)X(t) - I$), without considering the diversity of ZNN models. This diversity can provide more options for hardware implementation. Since a ZNN model is generated by an error function together with an evolution formula [39,40], different error functions yield different ZNN models based on [38]. Thus, to increase the variety of ZNN models, a novel error function is designed as
$$S(t) = A^2(t)X(t) - A(t).$$
Then, according to the noise-tolerant evolution formula [32], we have
$$\frac{dS(t)}{dt} = -\lambda_1 \Omega(S(t)) - \lambda_2 \Omega\!\left(S(t) + \lambda_1 \int_0^t \Omega(S(\tau))\,d\tau\right),$$
in which $\lambda_1, \lambda_2 > 0 \in \mathbb{R}$, and $\Omega(\cdot)$ denotes the odd, monotonically increasing activation function (AF) array, that is,
$$\Omega(S(t)) = \begin{bmatrix} \omega(s_{11}(t)) & \omega(s_{12}(t)) & \cdots & \omega(s_{1m}(t)) \\ \omega(s_{21}(t)) & \omega(s_{22}(t)) & \cdots & \omega(s_{2m}(t)) \\ \vdots & \vdots & \ddots & \vdots \\ \omega(s_{m1}(t)) & \omega(s_{m2}(t)) & \cdots & \omega(s_{mm}(t)) \end{bmatrix},$$
where $\omega(\cdot)$ is the element of $\Omega(\cdot)$. In this manner, we can obtain the fixed-parameter ZNN (FPZNN) model:
$$A^2(t)\dot{X}(t) = -A(t)\dot{A}(t)X(t) - \dot{A}(t)A(t)X(t) + \dot{A}(t) - \lambda_1 \Omega\!\left(A^2(t)X(t) - A(t)\right) - \lambda_2 \Omega\!\left(A^2(t)X(t) - A(t) + \lambda_1 \int_0^t \Omega\!\left(A^2(\tau)X(\tau) - A(\tau)\right)d\tau\right).$$
Listed below are some commonly used AFs that we compare with the novel AF.
(1) Linear AF (LAF) [41]:
$$\omega(z) = z.$$
(2) Bipolar-sigmoid AF (BSAF) [25]:
$$\omega(z) = \frac{1 + \exp(-r)}{1 - \exp(-r)} \cdot \frac{1 - \exp(-rz)}{1 + \exp(-rz)},$$
where $r > 2$.
(3) Tunable AF (TAF) [42]:
$$\omega(z) = k_1 |z|^q\,\mathrm{sign}(z) + \varpi_1 z,$$
where $0 < q < 1$ and $k_1, \varpi_1 > 0$.
(4) Sign-bi-power AF (SBPAF) [25]:
$$\omega(z) = \left(0.5|z|^q + 0.5|z|^{1/q}\right)\mathrm{sign}(z),$$
where $\mathrm{sign}(\cdot)$ is defined as
$$\mathrm{sign}(z) = \begin{cases} 1, & \text{if } z > 0, \\ 0, & \text{if } z = 0, \\ -1, & \text{if } z < 0. \end{cases}$$
(5) Predefined time AF (PTAF) [43]:
$$\omega(z) = \left(k_1|z|^q + k_2|z|^p\right)\mathrm{sign}(z) + k_3 z + k_4\,\mathrm{sign}(z),$$
where $p > 1$, $k_2 > 0$, and $k_3, k_4 \geq 0$.
(6) Improved predefined time AF (IPTAF) [10]:
$$\omega(z) = \begin{cases} k_1|z|^q\,\mathrm{sign}(z) + k_5 z, & \text{if } |z| \leq 1, \\ k_2|z|^p\,\mathrm{sign}(z) + k_5 z, & \text{if } |z| > 1, \end{cases}$$
where $k_1, k_2$ are the same as defined above and $k_5 > 0$.
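For concreteness, the following is a minimal NumPy sketch of the activation functions (5)-(10), applied elementwise to an error array; the function names and the default parameter values (which mirror the settings later used in Section 4) are our own illustrative choices, not part of the original work.

```python
import numpy as np

def laf(z):                                    # (5) linear AF
    return z

def bsaf(z, r=3.0):                            # (6) bipolar-sigmoid AF, r > 2
    return (1 + np.exp(-r)) / (1 - np.exp(-r)) * (1 - np.exp(-r * z)) / (1 + np.exp(-r * z))

def taf(z, q=0.2, k1=0.5, w1=10.0):            # (7) tunable AF
    return k1 * np.abs(z) ** q * np.sign(z) + w1 * z

def sbpaf(z, q=0.2):                           # (8) sign-bi-power AF
    return (0.5 * np.abs(z) ** q + 0.5 * np.abs(z) ** (1 / q)) * np.sign(z)

def ptaf(z, q=0.2, p=5.0, k1=0.5, k2=0.5, k3=0.2, k4=0.0):   # (9) predefined time AF
    return (k1 * np.abs(z) ** q + k2 * np.abs(z) ** p) * np.sign(z) + k3 * z + k4 * np.sign(z)

def iptaf(z, q=0.2, p=5.0, k1=0.5, k2=0.5, k5=0.2):          # (10) improved predefined time AF
    z = np.asarray(z, dtype=float)
    small = k1 * np.abs(z) ** q * np.sign(z) + k5 * z        # branch for |z| <= 1
    large = k2 * np.abs(z) ** p * np.sign(z) + k5 * z        # branch for |z| > 1
    return np.where(np.abs(z) <= 1, small, large)
```

Each of these maps would be applied entrywise to the error matrix S(t) inside model (4).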

2.3. DAZNN Model

While the FPZNN is a powerful tool for handling the DMIP, it is inferior to the variable-parameter ZNNs of [10,34,35] because of its fixed-parameter nature. Therefore, a double accelerated convergence ZNN (DAZNN) model is proposed to solve the DMIP.
Firstly, according to the novel variable parameters, the variable-parameter evolution formula can be depicted as [10,32]:
$$\frac{dS(t)}{dt} = -\varrho_1(t)\Omega_1(S(t)) - \varrho_2(t)\Omega_2\!\left(S(t) + \int_0^t \varrho_1(\tau)\Omega_1(S(\tau))\,d\tau\right),$$
where $\Omega_1(\cdot)$ and $\Omega_2(\cdot)$ represent AF arrays, and $\omega_1(\cdot)$ and $\omega_2(\cdot)$ are the elements of $\Omega_1(\cdot)$ and $\Omega_2(\cdot)$, respectively. Both $\omega_1(\cdot)$ and $\omega_2(\cdot)$ denote the exponential-type SBPAF (ETSBPAF):
$$\omega_1(z) = \omega_2(z) = \exp(\mu_1|z| + \mu_2)\left(0.5|z|^q + 0.5|z|^{1/q}\right)\mathrm{sign}(z),$$
with $\mu_1, \mu_2 > 0 \in \mathbb{R}$, and $\varrho_1(t)$ and $\varrho_2(t)$ are defined by
$$\varrho_1(t) = \lambda_1 \exp(\varphi_1\,\mathrm{arccot}(t)), \qquad \varrho_2(t) = \lambda_2 \exp(\varphi_2\,\mathrm{arccot}(t)),$$
with $\lambda_1, \lambda_2, \varphi_1, \varphi_2 > 0 \in \mathbb{R}$. Therefore, the DAZNN model is written as
$$A^2(t)\dot{X}(t) = -A(t)\dot{A}(t)X(t) - \dot{A}(t)A(t)X(t) + \dot{A}(t) - \varrho_1(t)\Omega_1\!\left(A^2(t)X(t) - A(t)\right) - \varrho_2(t)\Omega_2\!\left(A^2(t)X(t) - A(t) + \int_0^t \varrho_1(\tau)\Omega_1\!\left(A^2(\tau)X(\tau) - A(\tau)\right)d\tau\right).$$
In this subsection, by exploiting the noise-tolerant evolution design formula, a specific DAZNN model has been proposed for solving the DMIP (1).
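To make the evolution concrete, here is a hedged sketch of how the DAZNN dynamics (14) could be integrated numerically: the integral term is carried as an auxiliary state $V(t) = \int_0^t \varrho_1(\tau)\Omega_1(S(\tau))\,d\tau$, so that $X(t)$ and $V(t)$ form one augmented ODE state. All function and variable names (daznn_rhs, A_fun, dA_fun, rho, etsbpaf) are assumptions introduced for this sketch, not the authors' implementation.

```python
import numpy as np

def etsbpaf(z, q=0.2, mu1=1.0, mu2=0.5):
    # exponential-type sign-bi-power AF (12), applied elementwise
    return np.exp(mu1 * np.abs(z) + mu2) * (0.5 * np.abs(z) ** q + 0.5 * np.abs(z) ** (1 / q)) * np.sign(z)

def rho(t, lam, phi):
    # exponential decay variable parameter (13): lam * exp(phi * arccot(t))
    return lam * np.exp(phi * (np.pi / 2 - np.arctan(t)))

def daznn_rhs(t, y, A_fun, dA_fun, m, lam1=1.5, lam2=1.5, phi1=0.3, phi2=0.3):
    """Right-hand side of the DAZNN model (14), with the integral carried as a state."""
    X = y[:m * m].reshape(m, m)
    V = y[m * m:].reshape(m, m)                  # V(t) = int_0^t rho1(tau) * Omega1(S(tau)) dtau
    A, dA = A_fun(t), dA_fun(t)
    S = A @ A @ X - A                            # novel error function S(t) = A^2(t)X(t) - A(t)
    rhs = (-A @ dA @ X - dA @ A @ X + dA
           - rho(t, lam1, phi1) * etsbpaf(S)
           - rho(t, lam2, phi2) * etsbpaf(S + V))
    dX = np.linalg.solve(A @ A, rhs)             # solve A^2(t) * dX/dt = rhs for dX/dt
    dV = rho(t, lam1, phi1) * etsbpaf(S)
    return np.concatenate([dX.ravel(), dV.ravel()])
```

Any standard ODE integrator (e.g., scipy.integrate.solve_ivp) can then advance the stacked state; a usage sketch for the 2x2 example appears in Section 4.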

3. Theoretical Analysis

In this section, the arbitrary-time convergence and robustness of the DAZNN model (14) are theoretically analyzed. The corresponding results are given below.

3.1. Arbitrary Time Convergence

Theorem 1.
Given an invertible matrix $A(t)$, and starting from any initial value $X(0)$, the DAZNN model (14) with the ETSBPAF can converge to zero within the arbitrary time
$$t_c \leq \begin{cases} \dfrac{q(\theta_1+\theta_2)}{\theta_1\theta_2\exp(\mu_1)(1-q)} + \dfrac{\theta_1+\theta_2}{\theta_1\theta_2(1-q)}, & \text{if } H(0) > 1 \text{ and } L(t_{c1}) > 1, \\ \dfrac{q}{\theta_2\exp(\mu_1)(1-q)} + \dfrac{\theta_1+\theta_2}{\theta_1\theta_2(1-q)}, & \text{if } H(0) > 1 \text{ and } L(t_{c1}) \leq 1, \\ \dfrac{q}{\theta_1\exp(\mu_1)(1-q)} + \dfrac{\theta_1+\theta_2}{\theta_1\theta_2(1-q)}, & \text{if } H(0) \leq 1 \text{ and } L(t_{c1}) > 1, \\ \dfrac{\theta_1+\theta_2}{\theta_1\theta_2(1-q)}, & \text{if } H(0) \leq 1 \text{ and } L(t_{c1}) \leq 1, \end{cases}$$
in which $\theta_1 = 0.5\lambda_1\exp(\mu_2)$ and $\theta_2 = 0.5\lambda_2\exp(\mu_2)$.
Proof of Theorem 1.
The DAZNN model (14) can be converted into the evolution formula (11). Let $s_{ij}(t)$ denote the elements of $S(t)$; then the subsystem of the DAZNN model (14) can be directly analyzed as
$$\dot{s}_{ij}(t) = -\varrho_1(t)\omega_1(s_{ij}(t)) - \varrho_2(t)\omega_2\!\left(s_{ij}(t) + \int_0^t \varrho_1(\tau)\omega_1(s_{ij}(\tau))\,d\tau\right),$$
with $i, j = 1, 2, \ldots, m$. For the sake of the convergence analysis, an intermediate variable is introduced as
$$u_{ij}(t) = s_{ij}(t) + \int_0^t \varrho_1(\tau)\omega_1(s_{ij}(\tau))\,d\tau,$$
which implies that
$$\dot{u}_{ij}(t) = \dot{s}_{ij}(t) + \varrho_1(t)\omega_1(s_{ij}(t)) = -\varrho_2(t)\omega_2(u_{ij}(t)).$$
For this dynamic system, a Lyapunov function (LF) is constructed as $H(t) = |u_{ij}(t)|$. Note that $H(t)$ is positive definite and radially unbounded; then
$$\dot{H}(t) = \dot{u}_{ij}(t)\,\mathrm{sign}(u_{ij}(t)) = -\varrho_2(t)\omega_2(u_{ij}(t))\,\mathrm{sign}(u_{ij}(t)) = -0.5\lambda_2\exp(\varphi_2\,\mathrm{arccot}(t))\exp(\mu_1|u_{ij}(t)| + \mu_2)\left(|u_{ij}(t)|^q + |u_{ij}(t)|^{1/q}\right) \leq -0.5\lambda_2\exp(\mu_2)\exp(\mu_1|u_{ij}(t)|)\left(|u_{ij}(t)|^q + |u_{ij}(t)|^{1/q}\right) = -\theta_2\exp(\mu_1 H(t))\left(H^q(t) + H^{1/q}(t)\right),$$
in which $\theta_2 = 0.5\lambda_2\exp(\mu_2)$. It is evident that $\dot{H}(t)$ is negative definite, so the zero solution of $H(t)$ is globally asymptotically stable; that is, $H(t)$ decays to zero within time $t_{c1} = t_1 + t_2$. Without loss of generality, we consider the two cases $H(0) = |u_{ij}(0)| > 1$ and $H(0) = |u_{ij}(0)| \leq 1$.
Case one: $H(0) = |u_{ij}(0)| > 1$. In the first step, $H(0) > 1$ decays to $H(t_1) = 1$ after time $t_1$; in the second step, $H(t_1) = 1$ decays to $H(t_1 + t_2) = 0$ after a further time $t_2$.
Step (a): $H(0) > 1 \rightarrow H(t_1) = 1$. Equation (18) can be written as
$$\dot{H}(t) = -\theta_2\exp(\mu_1 H(t))\left(H^q(t) + H^{1/q}(t)\right) \leq -\theta_2\exp(\mu_1)H^{1/q}(t).$$
Inequality (19) can be expressed as
$$H^{-1/q}(t)\,dH \leq -\theta_2\exp(\mu_1)\,dt.$$
Integrating both sides of inequality (20) from 0 to $t_1$:
$$\int_{H(0)}^{H(t_1)} H^{-1/q}(t)\,dH \leq \int_0^{t_1} -\theta_2\exp(\mu_1)\,dt.$$
By virtue of $H(t_1) = 1$,
$$t_1 \leq \frac{q\left(1 - H^{1-1/q}(0)\right)}{\theta_2\exp(\mu_1)(1-q)} \leq \frac{q}{\theta_2\exp(\mu_1)(1-q)}.$$
Step (b): $|u_{ij}(t_1)| = 1 \rightarrow |u_{ij}(t_1+t_2)| = 0$. Equation (18) is written as
$$\dot{H}(t) = -\theta_2\exp(\mu_1 H(t))\left(H^q(t) + H^{1/q}(t)\right) \leq -\theta_2 H^q(t).$$
Integrating both sides of inequality (23) from $t_1$ to $t_1 + t_2$:
$$\int_{H(t_1)}^{H(t_1+t_2)} H^{-q}(t)\,dH \leq \int_{t_1}^{t_1+t_2} -\theta_2\,dt.$$
We can derive
$$t_2 \leq \frac{1}{\theta_2(1-q)}.$$
Then, we can get the convergence time $t_{c1}$ of the first process in case one:
$$t_{c1} = t_1 + t_2 \leq \frac{q}{\theta_2\exp(\mu_1)(1-q)} + \frac{1}{\theta_2(1-q)}.$$
At this point, $H(t)$ can converge to zero in case one.
Case two: $H(0) = |u_{ij}(0)| \leq 1$. Then, $H(0) \leq 1$ decays to $H(t_{21}) = 0$ after time $t_{21}$, i.e., $|u_{ij}(0)| \leq 1 \rightarrow |u_{ij}(t_{21})| = 0$. Equation (18) is written as
$$\dot{H}(t) = -\theta_2\exp(\mu_1 H(t))\left(H^q(t) + H^{1/q}(t)\right) \leq -\theta_2 H^q(t).$$
Integrating both sides of inequality (27) from 0 to $t_{21}$:
$$\int_{H(0)}^{H(t_{21})} H^{-q}(t)\,dH \leq \int_0^{t_{21}} -\theta_2\,dt.$$
We can derive
$$t_{21} \leq \frac{H^{1-q}(0)}{\theta_2(1-q)} \leq \frac{1}{\theta_2(1-q)}.$$
Then, we can get the convergence time $t_{c1}$ of the first process in case two:
$$t_{c1} = t_{21} \leq \frac{1}{\theta_2(1-q)}.$$
Thus, the convergence time $t_{c1}$ of the first process can be calculated:
$$t_{c1} \leq \begin{cases} \dfrac{q}{\theta_2\exp(\mu_1)(1-q)} + \dfrac{1}{\theta_2(1-q)}, & \text{if } H(0) = |u_{ij}(0)| > 1, \\ \dfrac{1}{\theta_2(1-q)}, & \text{if } H(0) = |u_{ij}(0)| \leq 1. \end{cases}$$
Once $u_{ij}(t)$ has converged to zero, i.e., $s_{ij}(t) + \int_0^t \varrho_1(\tau)\omega_1(s_{ij}(\tau))\,d\tau = 0$, the subsystem (15) reduces to
$$\dot{s}_{ij}(t) = -\varrho_1(t)\omega_1(s_{ij}(t)).$$
Devise a new LF $L(t)$ as
$$L(t) = |s_{ij}(t)|.$$
Further, the derivative of $L(t)$ can be obtained:
$$\dot{L}(t) = \dot{s}_{ij}(t)\,\mathrm{sign}(s_{ij}(t)) = -\varrho_1(t)\omega_1(s_{ij}(t))\,\mathrm{sign}(s_{ij}(t)) = -0.5\lambda_1\exp(\varphi_1\,\mathrm{arccot}(t))\exp(\mu_1|s_{ij}(t)| + \mu_2)\left(|s_{ij}(t)|^q + |s_{ij}(t)|^{1/q}\right) \leq -0.5\lambda_1\exp(\mu_2)\exp(\mu_1|s_{ij}(t)|)\left(|s_{ij}(t)|^q + |s_{ij}(t)|^{1/q}\right) = -\theta_1\exp(\mu_1 L(t))\left(L^q(t) + L^{1/q}(t)\right),$$
where $\theta_1 = 0.5\lambda_1\exp(\mu_2)$. Evidently, the zero solution of $L(t)$ is globally asymptotically stable. Analogously, consider the two cases $L(t_{c1}) = |s_{ij}(t_{c1})| > 1$ and $L(t_{c1}) = |s_{ij}(t_{c1})| \leq 1$.
Case one: $L(t_{c1}) = |s_{ij}(t_{c1})| > 1$. In the first step, $L(t_{c1}) > 1$ decreases to $L(t_{c1}+t_3) = 1$; in the second step, $L(t_{c1}+t_3) = 1$ decreases to $L(t_{c1}+t_3+t_4) = 0$. In a similar way, $t_3$ and $t_4$ can be calculated:
$$t_3 \leq \frac{q}{\theta_1\exp(\mu_1)(1-q)}, \qquad t_4 \leq \frac{1}{\theta_1(1-q)}.$$
Case two: $L(t_{c1}) = |s_{ij}(t_{c1})| \leq 1$. Then, $L(t_{c1}) \leq 1$ decays to $L(t_{c1}+t_{22}) = 0$ after time $t_{22}$. Similarly, we have
$$t_{22} \leq \frac{1}{\theta_1(1-q)}.$$
Furthermore, the convergence time $t_{c2}$ of the second process is calculated as
$$t_{c2} \leq \begin{cases} \dfrac{q}{\theta_1\exp(\mu_1)(1-q)} + \dfrac{1}{\theta_1(1-q)}, & \text{if } L(t_{c1}) = |s_{ij}(t_{c1})| > 1, \\ \dfrac{1}{\theta_1(1-q)}, & \text{if } L(t_{c1}) = |s_{ij}(t_{c1})| \leq 1. \end{cases}$$
Then, we can get the total convergence time $t_c$:
$$t_c \leq \begin{cases} \dfrac{q(\theta_1+\theta_2)}{\theta_1\theta_2\exp(\mu_1)(1-q)} + \dfrac{\theta_1+\theta_2}{\theta_1\theta_2(1-q)}, & \text{if } H(0) > 1 \text{ and } L(t_{c1}) > 1, \\ \dfrac{q}{\theta_2\exp(\mu_1)(1-q)} + \dfrac{\theta_1+\theta_2}{\theta_1\theta_2(1-q)}, & \text{if } H(0) > 1 \text{ and } L(t_{c1}) \leq 1, \\ \dfrac{q}{\theta_1\exp(\mu_1)(1-q)} + \dfrac{\theta_1+\theta_2}{\theta_1\theta_2(1-q)}, & \text{if } H(0) \leq 1 \text{ and } L(t_{c1}) > 1, \\ \dfrac{\theta_1+\theta_2}{\theta_1\theta_2(1-q)}, & \text{if } H(0) \leq 1 \text{ and } L(t_{c1}) \leq 1. \end{cases}$$
Note that this upper bound on $t_c$ is independent of the initial value $X(0)$. The proof is thus completed. □
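As a quick numerical illustration of Theorem 1 (not part of the original proof), the worst-case bound with $H(0) > 1$ and $L(t_{c1}) > 1$ can be evaluated for the parameter values later used in Section 4; the short script below is an assumption-laden sketch.

```python
import math

lam1 = lam2 = 1.5                                 # lambda_1 = lambda_2, as in Figure 2a,b
mu1, mu2, q = 1.0, 0.5, 0.2
th1 = 0.5 * lam1 * math.exp(mu2)                  # theta_1
th2 = 0.5 * lam2 * math.exp(mu2)                  # theta_2

t_c = (q * (th1 + th2) / (th1 * th2 * math.exp(mu1) * (1 - q))
       + (th1 + th2) / (th1 * th2 * (1 - q)))     # worst case: H(0) > 1 and L(t_c1) > 1
print(f"worst-case convergence bound t_c <= {t_c:.2f} s")   # about 2.2 s for these settings
```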

3.2. Robustness

Theorem 2.
Given a nonsingular matrix $A(t)$, and starting from any initial value $X(0)$, the DAZNN model (14) perturbed by additive bounded noise $\xi(t)$ (with $|\xi_{ij}(t)| \leq a$, $a$ a constant) can converge to zero or be bounded by
$$\lim_{t\to\infty}\|S(t)\|_F \leq \frac{2am}{\lambda_1\exp(\mu_2)},$$
where $m$ represents the dimension of the coefficient matrix.
Proof of Theorem 2.
According to the subsystem of the DAZNN model (14) with additive noise $\xi(t)$, it can be obtained that
$$\dot{s}_{ij}(t) = -\varrho_1(t)\omega_1(s_{ij}(t)) - \varrho_2(t)\omega_2\!\left(s_{ij}(t) + \int_0^t \varrho_1(\tau)\omega_1(s_{ij}(\tau))\,d\tau\right) + \xi_{ij}(t).$$
We then split the analysis into two parts. For the first part, set the same intermediate variable as in Equation (16), whose time derivative is $\dot{u}_{ij}(t) = \dot{s}_{ij}(t) + \varrho_1(t)\omega_1(s_{ij}(t))$; substituting $u_{ij}(t)$ and $\dot{u}_{ij}(t)$ into Equation (37), the first system can be acquired:
$$\dot{u}_{ij}(t) = -\varrho_2(t)\omega_2(u_{ij}(t)) + \xi_{ij}(t).$$
Introduce a Lyapunov function to analyze the first system (38):
$$G(t) = \frac{1}{2}u_{ij}^2(t).$$
Then
$$\dot{G}(t) = u_{ij}(t)\dot{u}_{ij}(t) = u_{ij}(t)\left(-\varrho_2(t)\omega_2(u_{ij}(t)) + \xi_{ij}(t)\right).$$
(a) If $\dot{G}(t) > 0$, $|u_{ij}(t)|$ will gradually increase, and $u_{ij}(t)\xi_{ij}(t) > 0$. In this case, $\dot{G}(t) > 0$, that is,
$$u_{ij}(t)\left(-\varrho_2(t)\omega_2(u_{ij}(t)) + \xi_{ij}(t)\right) > 0.$$
Hence, we get
$$|\xi_{ij}(t)| > |\varrho_2(t)\omega_2(u_{ij}(t))|.$$
Since the noise $\xi_{ij}(t)$ is bounded, it is easy to see that $|-\varrho_2(t)\omega_2(u_{ij}(t)) + \xi_{ij}(t)|$ decreases as $|u_{ij}(t)|$ increases. Moreover, $|u_{ij}(t)|$ stops increasing once $-\varrho_2(t)\omega_2(u_{ij}(t)) + \xi_{ij}(t) = 0$. This shows that the dynamics eventually enter a steady state, that is, $\dot{G}(t) = 0$ finally holds. It is then not hard to get
$$\dot{s}_{ij}(t) = -\varrho_1(t)\omega_1(s_{ij}(t)).$$
Besides, $s_{ij}(t) = 0$ is globally asymptotically stable from Theorem 1, namely
$$\lim_{t\to\infty}\|S(t)\|_F = 0.$$
(b) If $\dot{G}(t) < 0$, then $|u_{ij}(t)|$ will gradually decrease to zero. Then, the degraded form of the subsystem (33) is expressed as
$$\dot{s}_{ij}(t) = -\varrho_1(t)\omega_1(s_{ij}(t)) + \xi_{ij}(t).$$
(b1) $\dot{s}_{ij}(t) < 0$. At this time, $s_{ij}(t) = 0$ is globally asymptotically stable, that is,
$$\lim_{t\to\infty}\|S(t)\|_F = 0.$$
(b2) $\dot{s}_{ij}(t) > 0$. This situation is similar to situation (a) above. In the end, we get $-\varrho_1(t)\omega_1(s_{ij}(t)) + \xi_{ij}(t) = 0$. Obviously, as $t \to \infty$, $|s_{ij}(t)|$ is bounded by
$$0 \leq |s_{ij}(t)| \leq \omega_1^{-1}\!\left(\left|\frac{\xi_{ij}(t)}{\varrho_1(t)}\right|\right),$$
where $\omega_1^{-1}(\cdot)$ is the inverse function of $\omega_1(\cdot)$. Since $|\omega_1(z)| \geq |0.5\exp(\mu_2)z|$, it follows that $|\omega_1^{-1}(z)| \leq |2z/\exp(\mu_2)|$, and the bound on $|s_{ij}(t)|$ can be written as
$$0 \leq |s_{ij}(t)| \leq \left|\frac{2\xi_{ij}(t)}{\exp(\mu_2)\varrho_1(t)}\right| = \left|\frac{2\xi_{ij}(t)}{\lambda_1\exp(\mu_2)\exp(\varphi_1\,\mathrm{arccot}(t))}\right|.$$
Then, by virtue of $|\xi_{ij}(t)| \leq a$,
$$\lim_{t\to\infty}\|S(t)\|_F = \lim_{t\to\infty}\sqrt{\sum_{i=1}^m\sum_{j=1}^m |s_{ij}(t)|^2} \leq \lim_{t\to\infty}\sqrt{\sum_{i=1}^m\sum_{j=1}^m \left|\frac{2\xi_{ij}(t)}{\lambda_1\exp(\mu_2)\exp(\varphi_1\,\mathrm{arccot}(t))}\right|^2} \leq \frac{2am}{\lambda_1\exp(\mu_2)}.$$
(c) If $\dot{G}(t) = 0$, then either $u_{ij}(t) = 0$ or $-\varrho_2(t)\omega_2(u_{ij}(t)) + \xi_{ij}(t) = 0$. This case is similar to case (a) or case (b), so $\lim_{t\to\infty}\|S(t)\|_F$ can converge to zero or remain bounded.
In conclusion, under an unknown bounded-noise environment, the error of model (14) converges to zero or remains bounded. □
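As a rough numerical illustration (not from the paper), the steady-state bound of Theorem 2 can be evaluated for the harmonic-noise setting used later in Section 4.2 (noise amplitude a = 1, m = 2, lambda_1 = 2.5, mu_2 = 0.5), purely to show how loose or tight the bound is for concrete parameter choices.

```python
import math

a, m, lam1, mu2 = 1.0, 2, 2.5, 0.5               # |xi_ij(t)| <= a, coefficient-matrix dimension m
bound = 2 * a * m / (lam1 * math.exp(mu2))       # Theorem 2 steady-state bound on ||S(t)||_F
print(f"lim ||S(t)||_F <= {bound:.2f}")          # about 0.97 for these settings
```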

4. Illustrative Verification

In this section, the FPZNN models with different AFs and the DAZNN model (14) are applied to the online solution of the dynamic matrix inversion problem, and the comparison results are shown below.
Consider this time-varying matrix:
$$A(t) = \begin{bmatrix} \sin(3t) & \cos(3t) \\ -\cos(3t) & \sin(3t) \end{bmatrix},$$
and $A^*(t)$ denotes the theoretical solution of Equation (1); in other words, $A^*(t) = A^{-1}(t)$. Besides, in this simulation, some public parameters are: $k_1 = k_2 = 0.5$, $q = 1/p = 0.2$; the parameter of AF (6) is set as $r = 3$; the parameter of AF (7) is set as $\varpi_1 = 10$; the parameters of AF (9) are set as $k_3 = 0.2$, $k_4 = 0$; the parameter of AF (10) is set as $k_5 = 0.2$; the parameters of AF (12) are set as $\mu_1 = 1$, $\mu_2 = 0.5$; the variable parameters (13) are set as $\varphi_1 = \varphi_2 = 0.3$; and $X(0) \in [-1, 1]^{2\times 2}$.
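The following is a hedged end-to-end sketch of this 2x2 example, reusing the daznn_rhs, rho, and etsbpaf helpers sketched in Section 2.3. The sign placement in A(t) is reconstructed so that A(t) is a rotation-type matrix with A^{-1}(t) = A^T(t); all names and the random seed are illustrative choices, not from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

m = 2
A_fun  = lambda t: np.array([[ np.sin(3*t),  np.cos(3*t)],
                             [-np.cos(3*t),  np.sin(3*t)]])
dA_fun = lambda t: np.array([[ 3*np.cos(3*t), -3*np.sin(3*t)],
                             [ 3*np.sin(3*t),  3*np.cos(3*t)]])

rng = np.random.default_rng(0)
y0 = np.concatenate([rng.uniform(-1, 1, m * m), np.zeros(m * m)])   # stacked [X(0), V(0)]

sol = solve_ivp(daznn_rhs, (0.0, 5.0), y0, args=(A_fun, dA_fun, m),
                t_eval=np.linspace(0.0, 5.0, 11), max_step=1e-3)

# residual ||S(t)||_F = ||A^2(t)X(t) - A(t)||_F along the trajectory
for t, y in zip(sol.t, sol.y.T):
    X = y[:m * m].reshape(m, m)
    A = A_fun(t)
    print(f"t = {t:4.1f} s   ||S||_F = {np.linalg.norm(A @ A @ X - A):.2e}")
```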

4.1. Discussion of Convergence

As seen in Figure 1, the state solutions (i.e., $X_{11}(t)$, $X_{12}(t)$, $X_{21}(t)$, $X_{22}(t)$) and the theoretical solutions (i.e., $X_{11}^*(t)$, $X_{12}^*(t)$, $X_{21}^*(t)$, $X_{22}^*(t)$) are plotted together for comparison. According to the results, apart from the FPZNN models activated by AFs (5)-(7), the other FPZNN models with different AFs and the DAZNN model (14) can converge without noise. Moreover, it is clear that the model with AF (12) has the fastest convergence speed.
Secondly, Figure 2 comprehensively reveals the errors $\|S(t)\|_F$ of these ZNN models in the absence of noise. As illustrated in Figure 2a,b, it is evident that with no noise and $\lambda_1 = \lambda_2 = 1.5$, the errors of all of these models can drop to zero. However, the error accuracy of model (4) activated by AF (7) only reaches $5\times 10^{-2}$, while the other models reach higher accuracy. Aside from model (4) activated by AF (7), the convergence rates of these models rank as follows: model (14) activated by AF (12), model (4) activated by AF (9), model (4) activated by AF (10), model (4) activated by AF (8), model (4) activated by AF (6), and model (4) activated by AF (5). In addition, Figure 2c suggests that the convergence times of these ZNN models are reduced considerably with $\lambda_1 = \lambda_2 = 15$, which also verifies Theorem 1.
Finally, we control the variables in order to further demonstrate the specific impact of the two improvements (i.e., the variable parameters (13) and AF (12)) in the new model, as shown in Figure 3. Specifically, Figure 3a illustrates the model errors $\|S(t)\|_F$ for the FPZNN with various AFs. Figure 3b discloses the errors of the FPZNN model (4) activated by AF (12) and the DAZNN model (14). It is not difficult to discover from Figure 3a that the FPZNN model activated by AF (12) has the best convergence performance among the seven AFs used. This means that AF (12) has the best acceleration effect on the model. Moreover, when AF (12) is employed, the DAZNN model (14) performs better than the FPZNN model (4). Therefore, we can conclude that both AF (12) and the variable parameters (13) contribute to the convergence rate of the model.

4.2. Discussion of Robustness

In fact, in the circuit implementation of an RNN, additive noise is inevitable. Additionally, additive noise can cause the failure of the original algorithm or other undesirable effects. Therefore, robustness is a key indicator of a model's performance.
First, the dynamic characteristics of five kinds of noise are shown in Figure 4a. Specifically, the five types of noise are: constant noise $\xi_{ij}(t) = 0.8$, Gaussian noise $\xi_{ij}$ with zero mean and unit standard deviation, exponentially attenuated noise $\xi_{ij}(t) = \exp(-1.2t + 2)$, harmonic noise $\xi_{ij}(t) = \sin(4\pi t + \pi)$, and blended harmonic noise $\xi_{ij}(t) = \sin(10\pi t - \pi) + \sin(30\pi t + 3) + \sin(34\pi t - 4) + \sin(50\pi t + 7)$. Figure 4b reveals the errors $\|S(t)\|_F$ of these ZNN models under the noise $\xi_{ij}(t) = 0.8$. The error accuracy and convergence speed of all models display a certain degree of decline when they are subject to the disturbance of the constant noise $\xi_{ij}(t) = 0.8$. Compared with the other models, the DAZNN model (14) is the least affected and still has the highest convergence speed and accuracy.
As a means of further verifying the robustness of the DAZNN model (14), we tested the dynamic characteristics of its errors under four other bounded noises in Figure 5. Observing Figure 5a,b, it is found that in the case of Gaussian noise, the convergence speed of these models is not affected much, but the error accuracy is significantly reduced. On the contrary, in an exponentially attenuated noise environment, the error accuracy of these models is basically unchanged, but the convergence time increases significantly. In Figure 5c, in the case of harmonic noise $\xi_{ij}(t) = \sin(4\pi t + \pi)$ and $\lambda_1 = \lambda_2 = 2.5$, the DAZNN model (14) is not greatly affected, but the accuracy of the other models is greatly reduced. The result in Figure 5d is similar: in the case of blended harmonic noise $\xi_{ij}(t) = \sin(10\pi t - \pi) + \sin(30\pi t + 3) + \sin(34\pi t - 4) + \sin(50\pi t + 7)$ and $\lambda_1 = \lambda_2 = 8$, the DAZNN model (14) is not markedly affected. In general, the error accuracy and convergence speed of all models decrease to varying degrees, but relatively speaking, the DAZNN model (14) is the least affected. These results support our theoretical analysis that the DAZNN model (14) is robust to bounded noise.

4.3. Sensitivity of Initial Values

The convergence performance of many ZNN models is affected by the initial value, but this issue has rarely been addressed in related studies. Therefore, the sensitivity of the model to the initial value is worth discussing.
To demonstrate the impact of random starting values on these models' errors, an experiment is conducted. Four random initial values belonging to different intervals are tested, as shown in Figure 6. The error accuracy of four models, including the DAZNN model (14) and the FPZNN models with AFs (8)-(10), can reach $10^{-5}$ under the various $X(0)$. The convergence times of the four models under the different initial values are as follows: the DAZNN model (14) takes about 0.48 s, 0.58 s, 0.71 s, and 0.71 s; the FPZNN model with AF (9) about 1.45 s, 1.86 s, 2.35 s, and 2.35 s; the FPZNN model with AF (10) about 1.49 s, 2.05 s, 2.72 s, and 2.72 s; and the FPZNN model with AF (8) about 1.60 s, 2.15 s, 2.78 s, and 2.78 s. Relatively speaking, although the convergence time of the DAZNN model (14) also depends on $X(0)$, its dependence on the initial value is smaller. Besides, it is not difficult to see that the convergence times of the above four models have upper bounds that are independent of the initial value $X(0)$. It is worth noting that these models share a common characteristic: the activation functions used all contain a $|z|^q\,\mathrm{sign}(z)$ term, which shows to a certain extent that the power term is necessary for a predefined-time convergent activation function.
In summary, this example verifies the convergence and robustness performance of the DAZNN model (14) through multiple model comparisons when solving the dynamic matrix inversion problem (1).

4.4. High Dimensional Example Verification

Consider another $4\times 4$ time-variant Toeplitz matrix:
$$A(t) = \begin{bmatrix} 3+\sin(0.5t) & \cos(0.5t) & \cos(0.5t)/2 & \cos(0.5t)/3 \\ \cos(0.5t) & 3+\sin(0.5t) & \cos(0.5t) & \cos(0.5t)/2 \\ \cos(0.5t)/2 & \cos(0.5t) & 3+\sin(0.5t) & \cos(0.5t) \\ \cos(0.5t)/3 & \cos(0.5t)/2 & \cos(0.5t) & 3+\sin(0.5t) \end{bmatrix}.$$
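A short sketch of how this symmetric Toeplitz coefficient matrix could be assembled with scipy.linalg.toeplitz is given below; the function name A_toeplitz is an assumption for illustration only.

```python
import numpy as np
from scipy.linalg import toeplitz

def A_toeplitz(t):
    # first column of the symmetric Toeplitz matrix at time t
    c = np.array([3 + np.sin(0.5 * t), np.cos(0.5 * t),
                  np.cos(0.5 * t) / 2, np.cos(0.5 * t) / 3])
    return toeplitz(c)          # with only c given, toeplitz() builds the symmetric matrix

print(np.round(A_toeplitz(1.0), 3))
```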
Note that the performance of the first three models (i.e., the FPZNN with LAF (5), the FPZNN with BSAF (6), and the FPZNN with TAF (7)) is not good, so we only consider the other models for comparison in this example. Besides, the conditions of this example are set to be the same as before, except that $\lambda_1 = \lambda_2 = 3$.
Figure 7 shows the dynamic trajectories of the error norms $\|S(t)\|_F$ of these ZNN models with no noise or under bounded noises. In this case, the DAZNN model still shows excellent performance. It is known from Figure 7a that all models can converge without noise. From Figure 7b,e, when the noise is the constant noise $\xi_{ij}(t) = 0.8$ or the harmonic noise $\xi_{ij}(t) = \sin(4\pi t + \pi)$, the accuracy of the DAZNN model (14) is significantly higher than that of the other three models. Moreover, as shown in Figure 7c,d,f, under the interference of the other noises, although the accuracy of the DAZNN model (14) is similar to that of the other three models, its convergence speed is still much faster. In general, Section 4.4 fully demonstrates that the proposed DAZNN model has the best performance among these four models in a higher-dimensional example.

5. Application to Dynamic Positioning Algorithm

This section first briefly describes the dynamic positioning problem based on the angle of arrival (AOA). Then, to validate the efficacy of the DAZNN model, the design formula (11) and the design formula (3) are applied to the AOA positioning algorithm.

5.1. Problem Description

The angle of arrival (AOA) positioning method’s main principle is to calculate the angle of arrival between the target and the sensor node [44]. Taking the sensor node as the starting point and passing through the target node will form a ray. The point where the two rays intersect is the position of the target node.
Suppose the coordinates of the $n$ fixed sensor nodes are $(w_l, z_l)$, $l = 1, 2, \ldots, n$, represented by the matrix $B$; the incident angles of the sensor nodes are $\beta_l(t)$, represented by the vector $\beta(t)$; and, considering the two-dimensional situation, the target node position at time $t$ is denoted by the unknown vector $y(t) = [w(t), z(t)]^T$. The specific mathematical expressions are as follows:
$$B = \begin{bmatrix} w_1 & w_2 & w_3 & \cdots & w_n \\ z_1 & z_2 & z_3 & \cdots & z_n \end{bmatrix} \in \mathbb{R}^{2\times n}, \qquad \beta(t) = \begin{bmatrix} \beta_1(t) & \beta_2(t) & \cdots & \beta_n(t) \end{bmatrix}^T \in \mathbb{R}^n.$$
Hence, the incident angle $\beta_l(t)$ satisfies
$$\tan(\beta_l(t)) = \frac{z(t) - z_l}{w(t) - w_l}.$$
The following equation can be further derived:
$$\begin{bmatrix} z_1 - w_1\tan(\beta_1(t)) \\ z_2 - w_2\tan(\beta_2(t)) \\ \vdots \\ z_n - w_n\tan(\beta_n(t)) \end{bmatrix} = \begin{bmatrix} -\tan(\beta_1(t)) & 1 \\ -\tan(\beta_2(t)) & 1 \\ \vdots & \vdots \\ -\tan(\beta_n(t)) & 1 \end{bmatrix} \begin{bmatrix} w(t) \\ z(t) \end{bmatrix}.$$
Equation (44) can be converted to the following form:
$$Q(t) = P(t)y(t),$$
where $P(t) \in \mathbb{R}^{n\times 2}$ represents a smooth dynamic matrix with full column rank and $Q(t) \in \mathbb{R}^{n\times 1}$ denotes a smooth dynamic vector.
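A minimal sketch of assembling P(t) and Q(t) from the sensor matrix B and the incidence angles beta(t), following Equation (44), might look as follows; the helper name aoa_system is an assumption.

```python
import numpy as np

def aoa_system(B, beta):
    """Build P(t) and Q(t) of Q(t) = P(t) y(t) from sensors B (2 x n) and angles beta (n,)."""
    w, z = B[0], B[1]                              # sensor coordinates (w_l, z_l)
    tb = np.tan(beta)
    P = np.column_stack([-tb, np.ones_like(tb)])   # rows [-tan(beta_l), 1]
    Q = z - w * tb                                 # entries z_l - w_l * tan(beta_l)
    return P, Q
```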

5.2. Model Application

First, define an error function:
$$e(t) = P(t)y(t) - Q(t).$$
Multiplying both sides of Equation (46) by $P^T(t)$, we can get the new error:
$$\epsilon(t) = P^T(t)P(t)y(t) - P^T(t)Q(t) \in \mathbb{R}^{2\times 1}.$$
The derivative of Equation (47) can be obtained:
$$\dot{\epsilon}(t) = \dot{P}^T(t)P(t)y(t) + P^T(t)\dot{P}(t)y(t) + P^T(t)P(t)\dot{y}(t) - \dot{P}^T(t)Q(t) - P^T(t)\dot{Q}(t).$$
Based on the design formula (11), a variable-parameter dynamic positioning (VPDP) model is obtained:
$$P^T(t)P(t)\dot{y}(t) = -\dot{P}^T(t)P(t)y(t) - P^T(t)\dot{P}(t)y(t) + \dot{P}^T(t)Q(t) + P^T(t)\dot{Q}(t) + \eta(t) - \varrho_1(t)\Omega_1(\epsilon(t)) - \varrho_2(t)\Omega_2\!\left(\epsilon(t) + \int_0^t \varrho_1(\tau)\Omega_1(\epsilon(\tau))\,d\tau\right),$$
in which $\eta(t)$ denotes additive noise. Analogously, the design formula (3) with AF (5) and the design formula (3) with AF (10) are incorporated into AOA positioning, yielding two kinds of fixed-parameter dynamic positioning (FPDP) models to compare with the VPDP model:
$$P^T(t)P(t)\dot{y}(t) = -\dot{P}^T(t)P(t)y(t) - P^T(t)\dot{P}(t)y(t) + \dot{P}^T(t)Q(t) + P^T(t)\dot{Q}(t) - \lambda_1\Omega(\epsilon(t)) - \lambda_2\Omega\!\left(\epsilon(t) + \lambda_1\int_0^t \Omega(\epsilon(\tau))\,d\tau\right) + \eta(t), \quad \text{s.t. } \omega(\epsilon_l(t)) = \epsilon_l(t),$$
$$P^T(t)P(t)\dot{y}(t) = -\dot{P}^T(t)P(t)y(t) - P^T(t)\dot{P}(t)y(t) + \dot{P}^T(t)Q(t) + P^T(t)\dot{Q}(t) - \lambda_1\Omega(\epsilon(t)) - \lambda_2\Omega\!\left(\epsilon(t) + \lambda_1\int_0^t \Omega(\epsilon(\tau))\,d\tau\right) + \eta(t), \quad \text{s.t. } \omega(\epsilon_l(t)) = \begin{cases} k_1|\epsilon_l(t)|^q\,\mathrm{sign}(\epsilon_l(t)) + k_5\epsilon_l(t), & \text{if } |\epsilon_l(t)| \leq 1, \\ k_2|\epsilon_l(t)|^p\,\mathrm{sign}(\epsilon_l(t)) + k_5\epsilon_l(t), & \text{if } |\epsilon_l(t)| > 1. \end{cases}$$
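A hedged sketch of the VPDP right-hand side is shown below, with the integral term again kept as an auxiliary state and reusing the rho and etsbpaf helpers from the Section 2.3 sketch; vpdp_rhs, P_fun, dP_fun, Q_fun, dQ_fun, and eta_fun are illustrative names, not the authors' implementation.

```python
import numpy as np

def vpdp_rhs(t, y_aug, P_fun, dP_fun, Q_fun, dQ_fun, eta_fun,
             lam1=5.0, lam2=5.0, phi1=0.5, phi2=0.5):
    """Right-hand side of the VPDP model, with the integral term as an auxiliary state."""
    y, V = y_aug[:2], y_aug[2:]                    # position estimate y(t) and integral state
    P, dP = P_fun(t), dP_fun(t)
    Q, dQ = Q_fun(t), dQ_fun(t)
    eps = P.T @ P @ y - P.T @ Q                    # error epsilon(t) of Equation (47)
    rhs = (-dP.T @ P @ y - P.T @ dP @ y + dP.T @ Q + P.T @ dQ + eta_fun(t)
           - rho(t, lam1, phi1) * etsbpaf(eps)
           - rho(t, lam2, phi2) * etsbpaf(eps + V))
    dy = np.linalg.solve(P.T @ P, rhs)             # solve P^T(t)P(t) * dy/dt = rhs
    dV = rho(t, lam1, phi1) * etsbpaf(eps)
    return np.concatenate([dy, dV])
```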

5.3. Example 1

In this example, four sensor nodes are placed on the plane, and the sensor coordinates are
$$B = \begin{bmatrix} 9.6 & -9.6 & -9.6 & 9.6 \\ 9.6 & 9.6 & -9.6 & -9.6 \end{bmatrix},$$
i.e., the four sensors are located at the corners of the square $[-9.6, 9.6]^2$.
Besides, the specified target trajectory y * ( t ) = [ w * ( t ) , z * ( t ) ] T is
$$y^*(t) = \begin{bmatrix} 9\cos\!\left(2\pi\sin^2(\pi t/12) + \pi/6\right)\sin\!\left(4\pi\sin^2(\pi t/12) + \pi/3\right)\cos(\pi/6)\sin(\pi/3) + 2 \\ \sin\!\left(2\pi\sin^2(\pi t/12) + \pi/6\right)\sin\!\left(4\pi\sin^2(\pi t/12) + \pi/3\right)\sin(\pi/6)\sin(\pi/3) + 1 \end{bmatrix}.$$
Taking the initial position y ( 0 ) = [ 6 , 2 ] T , k 1 = k 2 = 0.5 , q = 1 / p = 0.2 ; the parameter of AF (10) is set as k 5 = 0.2 ; the parameters of AF (12) are set as μ 1 = 1 , μ 2 = 0.5 ; variable parameters (13) are set as λ 1 = λ 2 = 5 and φ 1 = φ 2 = 0.5 .
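One illustrative way to generate the incidence angles beta_l(t) for this example from the specified target trajectory, and then build the linear system via the aoa_system sketch of Section 5.1, is shown below. The sign pattern of B (four corners of a square of side 2 x 9.6) and all helper names, including the y_star callable, are assumptions for this sketch.

```python
import numpy as np

# sign pattern assumed: the four sensors at the corners of the square [-9.6, 9.6]^2
B = np.array([[ 9.6, -9.6, -9.6,  9.6],
              [ 9.6,  9.6, -9.6, -9.6]])

def angles(t, y_star):
    # incidence angles beta_l(t) measured from each sensor toward the target y*(t)
    w, z = y_star(t)
    return np.arctan2(z - B[1], w - B[0])

def system_at(t, y_star):
    # P(t), Q(t) of the AOA linear system, via the aoa_system sketch of Section 5.1
    return aoa_system(B, angles(t, y_star))
```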
Figure 8 shows the target trajectory and the actual trajectory obtained by the FPDP model with AF (5), the FPDP model with AF (10), the VPDP model, and the pseudo-inverse method under the sine noise $\eta_l(t) = \sin(4\pi t + \pi)$. Figure 9 shows the error $\|y(t) - y^*(t)\|_2$ of these models. As shown in Figure 8, the target trajectory and the actual trajectory basically coincide for all models except the pseudo-inverse method. In addition, Figure 9 reveals a significant gap in convergence time between these methods. To be more specific, apart from the pseudo-inverse method, which cannot converge, the convergence times of the FPDP model with AF (5), the FPDP model with AF (10), and the VPDP model are about 1.05 s, 0.6 s, and 0.15 s, respectively. Besides, the error upper bounds of the VPDP model, the FPDP model with AF (5), and the FPDP model with AF (10) are about $9.5\times 10^{-3}$, $9.5\times 10^{-3}$, and $9\times 10^{-2}$, respectively. This indicates that the VPDP model is superior to the other two FPDP models. To sum up, this demonstrates the effectiveness of the design formula (11) in realizing the plane positioning task.

5.4. Example 2

In this example, the four sensor nodes coordinates are the same as in Section 5.3. Besides, the specified target trajectory y * ( t ) = [ w * ( t ) , z * ( t ) ] T is
$$y^*(t) = \begin{bmatrix} 9\cos\!\left(2\pi\sin^2(\pi t/12) + \pi/6\right)\sin\!\left(8\pi\sin^2(\pi t/12) + 2\pi/3\right)\cos(\pi/6)\sin(2\pi/3) + 2 \\ \sin\!\left(2\pi\sin^2(\pi t/12) + \pi/6\right)\sin\!\left(8\pi\sin^2(\pi t/12) + 2\pi/3\right)\sin(\pi/6)\sin(2\pi/3) + 1 \end{bmatrix}.$$
Take the initial position y ( 0 ) = [ 8 , 4 ] T , k 1 = k 2 = 0.5 , q = 1 / p = 0.2 ; the parameter of AF (10) is set as k 5 = 0.2 ; the parameters of AF (12) are set as μ 1 = 0 , μ 2 = 0.5 ; variable parameters (13) are set as λ 1 = λ 2 = 5 and φ 1 = φ 2 = 0.5 .
Figure 10 shows the target trajectory and the actual trajectory obtained by the FPDP model with AF (5), the FPDP model with AF (10), the VPDP model, and the pseudo-inverse method under the sine noise $\eta_l(t) = \sin(4\pi t + \pi)$. Figure 11 shows the error $\|y(t) - y^*(t)\|_2$ of these models. As shown in Figure 10, the target trajectory and the actual trajectory basically coincide for all models except the pseudo-inverse method. Besides, Figure 11 reveals a significant gap in convergence time between these methods. Specifically, apart from the pseudo-inverse method, which cannot converge, the convergence times of the FPDP model with AF (5), the FPDP model with AF (10), and the VPDP model are about 1.82 s, 0.85 s, and 0.26 s, respectively. Furthermore, the error upper bounds of the VPDP model, the FPDP model with AF (5), and the FPDP model with AF (10) are the same as in Figure 9. This indicates that the VPDP model is better than the other two FPDP models, even for different target trajectory positioning tasks.

6. Conclusions

In this paper, the DAZNN model (14) with the exponential decay variable parameters (13) and the exponential-type SBPAF (12) is proposed to solve the dynamic matrix inversion problem. The ETSBPAF and the variable parameters (13) contribute positively to the convergence and robustness of the model, resulting in better performance compared with the other fixed-parameter models mentioned. Furthermore, Theorem 1 theoretically establishes the arbitrary-time convergence upper bound of the DAZNN model (14). When bounded additive noise is added, the robustness of the DAZNN model is theoretically demonstrated in Theorem 2. The results indicate that the DAZNN model performs better than the other models with various AFs in terms of convergence, robustness, and initial value sensitivity. Finally, the AOA dynamic positioning example that employs the evolution formula (11) of the DAZNN model verifies its applicability. A simplified model with high performance may be considered in future research. Additionally, by using drones for assistance in positioning, dynamic positioning tasks can be extended to three dimensions.

Author Contributions

Conceptualization, B.L. and Y.H.; methodology, Y.H. and B.L.; software, L.X., L.H. and X.X.; validation, Y.H. and B.L.; formal analysis, Y.H.; investigation, L.H. and X.X.; data curation, L.H. and X.X.; writing—original draft preparation, Y.H.; writing—review and editing, B.L. and L.X.; visualization, Y.H.; supervision, B.L. and L.X.; project administration, B.L.; funding acquisition, B.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China Grants 62066015, 61866013, 61976089 and 61966014; and the Natural Science Foundation of Hunan Province of China under grants 2020JJ4511, 2021JJ20005, 18A289, 2018TP1018 and 2018RS3065; and the Hunan Provincial Innovation Foundation For Postgraduate under grant CX20211042.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ZNN     zeroing neural network
FPZNN   fixed-parameter ZNN
DAZNN   double accelerated convergence ZNN
DMIP    dynamic matrix inversion problem
AF      activation function
RNN     recurrent neural network
SBP     sign-bi-power
VPZNN   varying-parameter ZNN
ETSBPAF exponential-type SBPAF
LAF     linear AF
BSAF    bipolar-sigmoid AF
TAF     tunable AF
SBPAF   sign-bi-power AF
PTAF    predefined time AF
IPTAF   improved predefined time AF
AOA     angle of arrival
VPDP    variable-parameter dynamic positioning
FPDP    fixed-parameter dynamic positioning

References

1. Stefanovski, J. Novel all-pass factorization, all solutions to rational matrix equation and control application. IEEE Trans. Autom. Control 2019, 65, 3176–3183.
2. Xiao, L.; Liu, S.; Wang, X.; He, Y.; Jia, L.; Xu, Y. Zeroing neural networks for dynamic quaternion-valued matrix inversion. IEEE Trans. Ind. Inform. 2021, 18, 1562–1571.
3. Quan, Z.; Liu, J. Efficient complex matrix inversion for MIMO OFDM systems. J. Commun. Netw. 2017, 19, 637–647.
4. Wang, Y.; Leib, H. Sphere decoding for MIMO systems with Newton iterative matrix inversion. IEEE Commun. Lett. 2013, 17, 389–392.
5. Guo, D.; Zhang, Y. Zhang neural network, Getz–Marsden dynamic system, and discrete-time algorithms for time-varying matrix inversion with application to robots' kinematic control. Neurocomputing 2012, 97, 22–32.
6. Jin, L.; Zhang, Y. G2-type SRMPC scheme for synchronous manipulation of two redundant robot arms. IEEE Trans. Cybern. 2014, 45, 153–164.
7. Chen, D.; Li, S.; Wu, Q.; Luo, X. Super-twisting ZNN for coordinated motion control of multiple robot manipulators with external disturbances suppression. Neurocomputing 2020, 371, 78–90.
8. Krishnamoorthy, A.; Menon, D. Matrix inversion using Cholesky decomposition. In Proceedings of the 2013 Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA), Poznan, Poland, 26–28 September 2013; pp. 70–72.
9. Tang, C.; Liu, C.; Yuan, L.; Xing, Z. High precision low complexity matrix inversion based on Newton iteration for data detection in the massive MIMO. IEEE Commun. Lett. 2016, 20, 490–493.
10. Xiao, L.; He, Y. A noise-suppression ZNN model with new variable parameter for dynamic Sylvester equation. IEEE Trans. Ind. Inform. 2021, 17, 7513–7522.
11. Stanimirović, P.S.; Živković, I.S.; Wei, Y. Recurrent neural network for computing the Drazin inverse. IEEE Trans. Neural Netw. Learn. Syst. 2015, 26, 2830–2843.
12. Elhoseny, M.; Shankar, K. Optimal bilateral filter and convolutional neural network based denoising method of medical image measurements. Measurement 2019, 143, 125–135.
13. Koyuncu, I.; Yilmaz, C.; Alcin, M.; Tuna, M. Design and implementation of hydrogen economy using artificial neural network on field programmable gate array. Int. J. Hydrogen Energy 2020, 45, 20709–20720.
14. Wrobel, K.; Doroz, R.; Porwik, P.; Naruniec, J.; Kowalski, M. Using a probabilistic neural network for lip-based biometric verification. Eng. Appl. Artif. Intell. 2017, 64, 112–127.
15. Yañez-Badillo, H.; Beltran-Carbajal, F.; Tapia-Olvera, R.; Favela-Contreras, A.; Sotelo, C.; Sotelo, D. Adaptive robust motion control of quadrotor systems using artificial neural networks and particle swarm optimization. Mathematics 2021, 9, 2367.
16. Khan, A.T.; Li, S.; Cao, X. Control framework for cooperative robots in smart home using bio-inspired neural network. Measurement 2021, 167, 108253.
17. Wang, S.; Zhang, H.; Zhang, W.; Zhang, H. Finite-time projective synchronization of Caputo type fractional complex-valued delayed neural networks. Mathematics 2021, 9, 1406.
18. Li, Y.; Liu, Y.; Tong, S. Observer-based neuro-adaptive optimized control of strict-feedback nonlinear systems with state constraints. IEEE Trans. Neural Netw. Learn. Syst. 2021.
19. Cogollo, M.R.; González-Parra, G.; Arenas, A.J. Modeling and forecasting cases of RSV using artificial neural networks. Mathematics 2021, 9, 2958.
20. Šestanović, T.; Arnerić, J. Can recurrent neural networks predict inflation in euro zone as good as professional forecasters? Mathematics 2021, 9, 2486.
21. Simos, T.E.; Mourtas, S.D.; Katsikis, V.N. Time-varying Black–Litterman portfolio optimization using a bio-inspired approach and neuronets. Appl. Soft Comput. 2021, 112, 107767.
22. Katsikis, V.N.; Mourtas, S.D.; Stanimirović, P.S.; Li, S.; Cao, X. Time-varying mean–variance portfolio selection problem solving via LVI-PDNN. Comput. Oper. Res. 2022, 138, 105582.
23. Xiao, L. A new design formula exploited for accelerating Zhang neural network and its application to time-varying matrix inversion. Theor. Comput. Sci. 2016, 647, 50–58.
24. Xiao, L. A nonlinearly activated neural dynamics and its finite-time solution to time-varying nonlinear equation. Neurocomputing 2016, 173, 1983–1988.
25. Li, S.; Chen, S.; Liu, B. Accelerating a recurrent neural network to finite-time convergence for solving time-varying Sylvester equation by using a sign-bi-power activation function. Neural Process. Lett. 2013, 37, 189–205.
26. Xiao, L. A finite-time convergent Zhang neural network and its application to real-time matrix square root finding. Neural Comput. Appl. 2019, 31, 793–800.
27. Yan, Z.; Zhong, S.; Lin, L.; Cui, Z. Adaptive Levenberg–Marquardt algorithm: A new optimization strategy for Levenberg–Marquardt neural networks. Mathematics 2021, 9, 2176.
28. Stanimirović, P.S.; Katsikis, V.N.; Li, S. Integration enhanced and noise tolerant ZNN for computing various expressions involving outer inverses. Neurocomputing 2019, 329, 129–143.
29. Li, W. Design and analysis of a novel finite-time convergent and noise-tolerant recurrent neural network for time-variant matrix inversion. IEEE Trans. Syst. Man Cybern. Syst. 2018, 50, 4362–4376.
30. Jin, L.; Zhang, Y.; Li, S.; Zhang, Y. Modified ZNN for time-varying quadratic programming with inherent tolerance to noises and its application to kinematic redundancy resolution of robot manipulators. IEEE Trans. Ind. Electron. 2016, 63, 6978–6988.
31. Xiao, L.; Zhang, Y.; Zuo, Q.; Dai, J.; Li, J.; Tang, W. A noise-tolerant zeroing neural network for time-dependent complex matrix inversion under various kinds of noises. IEEE Trans. Ind. Inform. 2019, 16, 3757–3766.
32. Hu, Z.; Xiao, L.; Dai, J.; Xu, Y.; Zuo, Q.; Liu, C. A unified predefined-time convergent and robust ZNN model for constrained quadratic programming. IEEE Trans. Ind. Inform. 2020, 17, 1998–2010.
33. Shen, L.; Wu, J.; Yang, S. Initial position estimation in SRM using bootstrap circuit without predefined inductance parameters. IEEE Trans. Power Electron. 2011, 26, 2449–2456.
34. Li, W.; Su, Z.; Tan, Z. A variable-gain finite-time convergent recurrent neural network for time-variant quadratic programming with unknown noises endured. IEEE Trans. Ind. Inform. 2019, 15, 5330–5340.
35. Xiao, L.; Zhang, Y.; Dai, J.; Zuo, Q.; Wang, S. Comprehensive analysis of a new varying parameter zeroing neural network for time varying matrix inversion. IEEE Trans. Ind. Inform. 2020, 17, 1604–1613.
36. Stanimirović, P.; Gerontitis, D.; Tzekis, P.; Behera, R.; Sahoo, J.K. Simulation of varying parameter recurrent neural network with application to matrix inversion. Math. Comput. Simul. 2021, 185, 614–628.
37. Li, X.; Li, S.; Xu, Z.; Zhou, X. A vary-parameter convergence-accelerated recurrent neural network for online solving dynamic matrix pseudoinverse and its robot application. Neural Process. Lett. 2021, 53, 1287–1304.
38. Zhang, Y.; Jiang, D.; Wang, J. A recurrent neural network for solving Sylvester equation with time-varying coefficients. IEEE Trans. Neural Netw. 2002, 13, 1053–1063.
39. Zhang, Y.; Qiu, B.; Jin, L.; Guo, D.; Yang, Z. Infinitely many Zhang functions resulting in various ZNN models for time-varying matrix inversion with link to Drazin inverse. Inf. Process. Lett. 2015, 115, 703–706.
40. Xiao, L.; Tan, H.; Jia, L.; Dai, J.; Zhang, Y. New error function designs for finite-time ZNN models with application to dynamic matrix inversion. Neurocomputing 2020, 402, 395–408.
41. Zhang, Y.; Chen, K. Comparison on Zhang neural network and gradient neural network for time-varying linear matrix equation AXB=C solving. In Proceedings of the 2008 IEEE International Conference on Industrial Technology, Chengdu, China, 21–24 April 2008; pp. 1–6.
42. Miao, P.; Shen, Y.; Xia, X. Finite time dual neural networks with a tunable activation function for solving quadratic programming problems and its application. Neurocomputing 2014, 143, 80–89.
43. Li, W.; Liao, B.; Xiao, L.; Lu, R. A recurrent neural network with predefined-time convergence and improved noise tolerance for dynamic matrix square root finding. Neurocomputing 2019, 337, 262–273.
44. Dai, J.; Li, Y.; Xiao, L.; Jia, L. Zeroing neural network for time-varying linear equations with application to dynamic positioning. IEEE Trans. Ind. Inform. 2021.
Figure 1. Trajectories of the solutions $X(t)$ of different models, of which the red line denotes the theoretical solution $X_{ij}^*(t)$ with no noise. (a) Theoretical solution $X_{11}^*(t)$ and state solutions $X_{11}(t)$. (b) Theoretical solution $X_{12}^*(t)$ and state solutions $X_{12}(t)$. (c) Theoretical solution $X_{21}^*(t)$ and state solutions $X_{21}(t)$. (d) Theoretical solution $X_{22}^*(t)$ and state solutions $X_{22}(t)$.
Figure 2. Dynamic characteristics of $\|S(t)\|_F$ for the FPZNN model (4) and the DAZNN model (14) with no noise and different $\lambda_1 = \lambda_2$. (a) With $\lambda_1 = \lambda_2 = 1.5$. (b) With $\lambda_1 = \lambda_2 = 1.5$. (c) With $\lambda_1 = \lambda_2 = 15$.
Figure 3. Dynamic characteristics of $\|S(t)\|_F$ for the FPZNN model (4) and the DAZNN model (14). (a) FPZNN model (4) with various AFs. (b) FPZNN model (4) with AF (12) and DAZNN model (14).
Figure 4. Profiles of different noises, and dynamic characteristics of $\|S(t)\|_F$ for the FPZNN model (4) and the DAZNN model (14) with noise. (a) Profiles of different noises. (b) With constant noise $\xi_{ij}(t) = 0.8$.
Figure 5. Dynamic characteristics of $\|S(t)\|_F$ for the FPZNN model (4) and the DAZNN model (14) with disparate bounded noises and different $\lambda_1, \lambda_2$. (a) Gaussian noise and $\lambda_1 = \lambda_2 = 1.5$. (b) Exponentially attenuated noise $\xi_{ij}(t) = \exp(-1.2t + 2)$ and $\lambda_1 = \lambda_2 = 1.5$. (c) Harmonic noise $\xi_{ij}(t) = \sin(4\pi t + \pi)$ and $\lambda_1 = \lambda_2 = 2.5$. (d) Blended harmonic noise $\xi_{ij}(t) = \sin(10\pi t - \pi) + \sin(30\pi t + 3) + \sin(34\pi t - 4) + \sin(50\pi t + 7)$ and $\lambda_1 = \lambda_2 = 8$.
Figure 6. Dynamic characteristics of $\|S(t)\|_F$ for the FPZNN model (4) and the DAZNN model (14) with different initial values $X(0)$. (a) $X(0) \in [-0.7, 0.7]^{2\times 2}$. (b) $X(0) \in [-4, 4]^{2\times 2}$. (c) $X(0) \in [-70, 70]^{2\times 2}$. (d) $X(0) \in [-400, 400]^{2\times 2}$.
Figure 7. Dynamic characteristics of $\|S(t)\|_F$ for the FPZNN model (4) and the DAZNN model (14) with no noise or disparate bounded noises. (a) No noise. (b) Constant noise $\xi_{ij}(t) = 0.8$. (c) Gaussian noise. (d) Exponentially attenuated noise $\xi_{ij}(t) = \exp(-1.2t + 2)$. (e) Harmonic noise $\xi_{ij}(t) = \sin(4\pi t + \pi)$. (f) Blended harmonic noise $\xi_{ij}(t) = \sin(10\pi t - \pi) + \sin(30\pi t + 3) + \sin(34\pi t - 4) + \sin(50\pi t + 7)$.
Figure 8. The target trajectories and actual trajectories of the two FPDP models, the VPDP model, and the pseudo-inverse method with noise $\eta_l(t) = \sin(4\pi t + \pi)$. (a) FPDP model with AF (5). (b) FPDP model with AF (10). (c) VPDP model with AF (12). (d) The pseudo-inverse method.
Figure 9. Dynamic characteristics of $\|y(t) - y^*(t)\|_2$ for the two FPDP models, the VPDP model, and the pseudo-inverse method with noise $\eta_l(t) = \sin(4\pi t + \pi)$. (a) FPDP model with AF (5). (b) FPDP model with AF (10). (c) VPDP model with AF (12). (d) The pseudo-inverse method.
Figure 10. The target trajectories and actual trajectories of the two FPDP models, the VPDP model, and the pseudo-inverse method with noise $\eta_l(t) = \sin(4\pi t + \pi)$. (a) FPDP model with AF (5). (b) FPDP model with AF (10). (c) VPDP model with AF (12). (d) The pseudo-inverse method.
Figure 11. Dynamic characteristics of $\|y(t) - y^*(t)\|_2$ for the two FPDP models, the VPDP model, and the pseudo-inverse method with noise $\eta_l(t) = \sin(4\pi t + \pi)$. (a) FPDP model with AF (5). (b) FPDP model with AF (10). (c) VPDP model with AF (12). (d) The pseudo-inverse method.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

