Article

A Projection Hestenes–Stiefel Method with Spectral Parameter for Nonlinear Monotone Equations and Signal Processing

1
Fixed Point Theory and Applications Research Group, Theoretical and Computational Science Center, Science Laboratory Building, Faculty of Science, King Mongkut’s University of Technology Thonburi, 126 Pracha-Uthit Road, Bang Mod, Thung Khru, Bangkok 10140, Thailand
2
Department of Mathematics, Faculty of Science, Gombe State University, Gombe 760214, Nigeria
3
Office of Science and Research, Yunnan University of Finance and Economics, Kunming 650221, Yunnan, China
4
Department of Mathematics, Faculty of Science, King Mongkut’s University of Technology Thonburi, 126 Pracha-Uthit Road, Bang Mod, Thung Khru, Bangkok 10140, Thailand
5
Department of Mathematical Sciences, Faculty of Physical Sciences, Bayero University, Kano 700241, Nigeria
*
Authors to whom correspondence should be addressed.
Math. Comput. Appl. 2020, 25(2), 27; https://0-doi-org.brum.beds.ac.uk/10.3390/mca25020027
Submission received: 27 March 2020 / Revised: 18 April 2020 / Accepted: 20 April 2020 / Published: 1 May 2020

Abstract:
A number of practical problems in science and engineering can be converted into a system of nonlinear equations, and it is therefore imperative to develop efficient methods for solving such equations. Due to their nice convergence properties and low storage requirements, conjugate gradient methods are considered among the most efficient for solving large-scale nonlinear equations. In this paper, a modified conjugate gradient method is proposed based on a projection technique and a suitable line search strategy. The proposed method is matrix-free, and its sequence of search directions satisfies the sufficient descent condition. Under the assumption that the underlying function is monotone and Lipschitzian continuous, the global convergence of the proposed method is established. The method is applied to solve some benchmark monotone nonlinear equations and is also extended to solve $\ell_1$-norm regularized problems to reconstruct a sparse signal in compressive sensing. Numerical comparison with some existing methods shows that the proposed method is competitive, efficient and promising.

1. Introduction

Let $\mathbb{R}^n$ be an $n$-dimensional Euclidean space with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$. Suppose that $D$ is a nonempty closed convex subset of $\mathbb{R}^n$. We denote by $\mathbb{R}^n_+$ the set $\{(x_1, x_2, \ldots, x_n)^T \in \mathbb{R}^n \mid x_i \ge 0,\ i = 1, 2, \ldots, n\}$. In this paper, we consider the problem of finding a point $\hat{x}$ in the set $D$ for which
$$F(\hat{x}) = 0. \tag{1}$$
It is interesting to note that nonlinear equations of the form (1) have various backgrounds and applications in science and engineering, such as the first-order necessary condition of unconstrained convex optimization problems [1], the $\ell_1$-norm problem arising from compressive sensing [2], chemical equilibrium systems and optimal power flow equations [3], etc.
The classical methods for solving (1) include Newton's method, quasi-Newton methods and inexact Newton methods (see Chapters 3 and 5 in [4] and Chapter 11 in [5]). Unfortunately, these methods are not suitable for solving large-scale problems because they require solving systems of linear equations using the Jacobian matrix or its approximation at each iteration. As a result, several matrix-free methods for solving nonlinear systems of equations have been proposed (see, e.g., [6,7,8,9,10] among others). On the other hand, Bellavia et al. [11] proposed an affine scaling trust-region iterative algorithm for bound-constrained systems of nonlinear equations. Their algorithm is based on the classical trust-region Newton method for unconstrained nonlinear equations, and it possesses global and fast local convergence properties. They provided preliminary numerical experiments to show the efficiency of the method. Based on a nonmonotone interior backtracking line search technique, Zhu [12] proposed an affine scaling trust-region method for nonlinear equality systems subject to bounds on the variables. Under some standard assumptions, he proved the global convergence as well as a fast rate of local convergence of the method, and reported some numerical experiments to demonstrate its effectiveness. In [13], Ahookhosh et al. presented a trust-region algorithm to solve symmetric nonlinear systems of equations. Their algorithm combines an effective adaptive trust-region radius with a nonmonotone strategy. They established the global convergence of the algorithm without the non-degeneracy assumption on the exact Jacobian, and presented some numerical comparisons to show its performance. For more details on trust-region methods for nonlinear equations, the reader may refer to the survey paper by Yuan [14].
Among the numerous methods for solving large-scale general unconstrained optimization problems, conjugate gradient (CG) methods are popular because they are simple to implement, require low storage and have good convergence properties. Based on this, quite a number of researchers have made efforts to extend some of the classical conjugate gradient methods to solve nonlinear monotone equations with convex constraints. For example, Xiao and Zhu [15] combined the conjugate gradient method of Hager and Zhang [16] and the projection technique of Solodov and Svaiter [17] to solve constrained systems of monotone nonlinear equations. Their numerical experiments show that the method works well, and its convergence analysis was established under some reasonable assumptions. The modified Perry conjugate gradient method [18] for general unconstrained optimization was extended by Dai et al. [19] to solve the unconstrained version of problem (1). The numerical performance of their method was compared with the methods in [20,21], and the results show that their method stands out. Based on the popular Broyden–Fletcher–Goldfarb–Shanno (BFGS) updating formula for general unconstrained optimization, Gao et al. [22] recently proposed two adaptive projection algorithms to solve systems of monotone nonlinear equations with convex constraints. The sequence of search directions generated by their algorithms satisfies the sufficient descent property, and under the assumption that the underlying function is Lipschitzian continuous, the convergence analysis of the methods was established. Recently, different interesting results on spectral and conjugate gradient methods for unconstrained minimization problems and nonlinear equations were presented in [23,24,25,26,27,28,29,30,31,32] and the references therein.
Inspired by the contributions discussed above, in this paper we extend a modified Hestenes–Stiefel-like method proposed by Amini et al. [33] to solve systems of monotone nonlinear equations with convex constraints. We build the proposed algorithm by modifying the search direction of Amini et al. [33] and combining it with the projection technique of Solodov and Svaiter [17]. We prove the global convergence of the algorithm using a more general line search strategy than those mostly utilized in the literature. An additional contribution of this paper is the application of the proposed algorithm to a signal processing problem arising in compressive sensing. We present numerical experiments to demonstrate the efficiency of the proposed method and establish its convergence analysis under standard assumptions. The rest of this paper is organized as follows. In Section 2, we present the algorithm and its global convergence. In Section 3, we present the numerical experiments, and we give conclusions in Section 4.

2. Proposed Algorithm for Monotone Equations and Its Convergence Analysis

Definition 1.
Let $x, y \in \mathbb{R}^n$. A function $F : \mathbb{R}^n \to \mathbb{R}^n$ is said to be
(i) 
monotone if
$$\langle F(x) - F(y), x - y\rangle \ge 0; \tag{2}$$
(ii) 
Lipschitzian continuous if there exists $L > 0$ such that
$$\|F(x) - F(y)\| \le L\|x - y\|. \tag{3}$$
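As a quick numerical illustration (our own, not from the paper), the mapping of Problem 4 in Appendix A, $F_i(x) = e^{x_i} - 1$, satisfies the monotonicity inequality (2) at any pair of sampled points, since each component function is increasing:

```python
import math
import random

def F(x):
    # Problem 4 from Appendix A: F_i(x) = exp(x_i) - 1, componentwise
    return [math.exp(xi) - 1.0 for xi in x]

def inner(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

random.seed(0)
for _ in range(1000):
    x = [random.uniform(-2.0, 2.0) for _ in range(5)]
    y = [random.uniform(-2.0, 2.0) for _ in range(5)]
    dF = [a - b for a, b in zip(F(x), F(y))]
    dx = [a - b for a, b in zip(x, y)]
    # Monotonicity (2): <F(x) - F(y), x - y> >= 0
    assert inner(dF, dx) >= 0.0
```

On any bounded region this $F$ is also Lipschitzian continuous (3), with $L$ bounded by the largest value of $e^{x_i}$ over the region.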
We begin by recalling the well-known Hestenes–Stiefel (HS) conjugate gradient method for solving the unconstrained optimization problem
$$\min\{f(x) : x \in \mathbb{R}^n\}, \tag{4}$$
where $f : \mathbb{R}^n \to \mathbb{R}$ is continuously differentiable and bounded from below. The HS CG method updates its sequence of iterates using the following recursive formula:
$$x_{k+1} = x_k + \alpha_k d_k, \tag{5}$$
where $\alpha_k > 0$ is a suitable stepsize. The search direction $d_k$ is defined by
$$d_k = \begin{cases} -F_k, & \text{if } k = 0,\\ -F_k + \beta_k^{HS} d_{k-1}, & \text{if } k > 0, \end{cases} \tag{6}$$
where the CG parameter is given by
$$\beta_k^{HS} = \frac{\langle F_k, y_{k-1}\rangle}{\langle y_{k-1}, d_{k-1}\rangle},$$
and $y_{k-1} = F_k - F_{k-1}$, with $F_k$ denoting the gradient of $f$ at $x_k$.
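To make the update (5)–(6) concrete, here is a minimal sketch (our own illustration, not the paper's code) of one HS direction computation; the gradient vectors play the role of $F_k$:

```python
def hs_direction(g_k, g_prev, d_prev):
    """Hestenes-Stiefel direction (6): d_k = -g_k + beta_HS * d_{k-1},
    where beta_HS = <g_k, y> / <y, d_{k-1}> and y = g_k - g_prev."""
    inner = lambda u, v: sum(a * b for a, b in zip(u, v))
    y = [a - b for a, b in zip(g_k, g_prev)]
    beta = inner(g_k, y) / inner(y, d_prev)  # assumes <y, d_{k-1}> != 0
    return [-a + beta * b for a, b in zip(g_k, d_prev)]

# One update for f(x) = (x1^2 + 10*x2^2)/2, whose gradient is g(x) = (x1, 10*x2):
g0, d0 = [1.0, 10.0], [-1.0, -10.0]   # d0 = -g0 at x0 = (1, 1)
g1 = [0.95, 5.0]                      # gradient after the step x1 = x0 + 0.05*d0
d1 = hs_direction(g1, g0, d0)
```

In this instance $d_1$ happens to be a descent direction ($\langle g_1, d_1\rangle < 0$), but as discussed below, HS does not guarantee this for general nonlinear functions.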
Definition 2.
Let $f : \mathbb{R}^n \to \mathbb{R}$ be differentiable at $x \in \mathbb{R}^n$ and let $d \in \mathbb{R}^n$ satisfy
$$\langle F, d\rangle < 0, \tag{7}$$
where $F$ denotes the gradient of $f$; then $d$ is called a descent direction of $f$ at $x$.
It is well known that (7) is a crucial property for the iterative method (5) to be globally convergent. However, the HS CG method does not guarantee (7), and it is therefore not globally convergent for general nonlinear functions.
The HS CG method has undergone various modifications aimed at improving its convergence properties and/or its numerical performance. Recently, Amini et al. [33] studied some modified HS CG methods for solving (4). Based on the work of Narushima et al. [34] and Dai and Kou [35], Amini et al. proposed a modified HS CG method with the search direction defined as follows:
$$d_k = \begin{cases} -F_k, & \text{if } k = 0 \text{ or } \bar{\beta}_k^{MHS} \le 0,\\ -F_k + \bar{\beta}_k^{MHS} d_{k-1}, & \text{if } k > 0, \end{cases} \tag{8}$$
where
$$\bar{\beta}_k^{MHS} = \frac{\langle F_k, y_{k-1}\rangle}{\langle y_{k-1}, d_{k-1}\rangle\,\theta_k} - \frac{\eta\|y_{k-1}\|^2}{\theta_k\,\langle y_{k-1}, d_{k-1}\rangle^2}\,\langle F_k, d_{k-1}\rangle, \tag{9}$$
and $\theta_k = 1 - \langle F_k, d_{k-1}\rangle^2/(\|F_k\|^2\|d_{k-1}\|^2)$. They showed that if $\langle y_{k-1}, d_{k-1}\rangle \ne 0$ and $\eta > 1/4$, then the search direction $d_k$ satisfies the sufficient descent condition (7). The natural question that comes to mind is: can we modify (9) in such a way that these two restrictions are removed while still retaining the nice properties associated with (8)?
In this paper, we provide an answer to the above question. Let $w_{k+1} = x_k + \alpha_k d_k$. We define the proposed search direction by $d_0 = -F(x_0)$ and
$$d_k = -v_k F(x_k) + \max\{\beta_k^{PMHS}, 0\}\, d_{k-1}, \quad k = 1, 2, \ldots, \tag{10}$$
where
$$\beta_k^{PMHS} = \frac{\langle F(x_k), d_{k-1}\rangle}{\|d_{k-1}\|^2} - \frac{\|\gamma_{k-1}\|^2}{\langle\gamma_{k-1}, d_{k-1}\rangle^2}\,\langle F(x_k), d_{k-1}\rangle, \tag{11}$$
$$v_k = \frac{\|s_{k-1}\|^2}{\langle\gamma_{k-1}, s_{k-1}\rangle}, \quad \gamma_{k-1} = F(w_k) - F(x_{k-1}) + a s_{k-1}, \quad s_{k-1} = w_k - x_{k-1}, \quad a > 0. \tag{12}$$
The spectral parameter $v_k$ is incorporated into the definition of the search direction to improve the numerical performance of the proposed algorithm. Next, we describe the projection operator, which is commonly used in iterative algorithms for solving problems such as the fixed point problem, the variational inequality problem, and so on. Let $x \in \mathbb{R}^n$ and define an operator $P_D : \mathbb{R}^n \to D$ by $P_D(x) = \arg\min\{\|x - y\| : y \in D\}$. The operator $P_D$ is called the projection onto the feasible set $D$, and it enjoys the nonexpansive property, that is, $\|P_D(x) - P_D(y)\| \le \|x - y\|$ for all $x, y \in \mathbb{R}^n$. If $y \in D$, then $P_D(y) = y$ and therefore we have
$$\|P_D(x) - y\| \le \|x - y\|. \tag{13}$$
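For the feasible sets used later (e.g., $D = \mathbb{R}^n_+$), $P_D$ is cheap to evaluate: it is componentwise clipping. A small sketch (our own) checking the property (13) on sample points:

```python
def project_nonneg(x):
    # P_D for D = R^n_+: componentwise max with zero
    return [max(xi, 0.0) for xi in x]

def norm(u):
    return sum(ui * ui for ui in u) ** 0.5

x = [1.5, -2.0, 0.3]
y = [0.5, 1.0, 2.0]          # y already lies in D, so P_D(y) = y
lhs = norm([a - b for a, b in zip(project_nonneg(x), y)])
rhs = norm([a - b for a, b in zip(x, y)])
assert lhs <= rhs            # property (13): ||P_D(x) - y|| <= ||x - y||
```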
We now state the steps of the proposed algorithm in Algorithm 1.
Algorithm 1: Modified Hestenes–Stiefel method with spectral parameter (HSS).
Input: Given $x_0 \in D$, $0 < \kappa \le 1$, $Tol > 0$, $a > 0$, $\sigma, \varrho \in (0, 1)$. Set $k = 0$.
  • Step 1: Compute $F(x_k)$. If $\|F(x_k)\| \le Tol$, stop; otherwise go to Step 2.
  • Step 2: Compute the search direction as follows. If $k = 0$, $d_k = -F(x_k)$.
    Else
    $$d_k = -v_k F(x_k) + \max\{\beta_k^{PMHS}, 0\}\, d_{k-1},$$
    $$\beta_k^{PMHS} = \frac{\langle F(x_k), d_{k-1}\rangle}{\|d_{k-1}\|^2} - \frac{\|\gamma_{k-1}\|^2}{\langle\gamma_{k-1}, d_{k-1}\rangle^2}\,\langle F(x_k), d_{k-1}\rangle,$$
    $$v_k = \frac{\|s_{k-1}\|^2}{\langle\gamma_{k-1}, s_{k-1}\rangle}, \quad \gamma_{k-1} = F(w_k) - F(x_{k-1}) + a s_{k-1}, \quad s_{k-1} = w_k - x_{k-1}.$$
  • Step 3: Determine the stepsize $\alpha_k = \kappa\varrho^i$, where $i$ is the smallest nonnegative integer such that
    $$-\langle F(x_k + \kappa\varrho^i d_k), d_k\rangle \ge \sigma\kappa\varrho^i\|d_k\|^2\,\|F(x_k + \kappa\varrho^i d_k)\|^{1/r}, \quad r \ge 1. \tag{14}$$
  • Step 4: Set $w_{k+1} = x_k + \alpha_k d_k$. If $w_{k+1} \in D$ and $\|F(w_{k+1})\| \le Tol$, stop. Else compute the next iterate using
    $$x_{k+1} = P_D\!\left[x_k - \frac{\langle F(w_{k+1}), x_k - w_{k+1}\rangle}{\|F(w_{k+1})\|^2}\,F(w_{k+1})\right].$$
  • Step 5: Set $k := k + 1$ and repeat the process from Step 1.
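The steps above can be sketched in code. The following is our own illustration only (not the authors' MATLAB implementation); for simplicity it omits the membership test $w_{k+1} \in D$ in Step 4, stopping at $w_{k+1}$ whenever its residual is small, and it uses the paper's parameter values:

```python
import math

def hss_solve(F, project, x0, kappa=1.0, sigma=0.01, rho=0.5, r=5, a=0.01,
              tol=1e-6, max_iter=1000):
    """Sketch of Algorithm 1 (HSS): derivative-free projection method with
    the spectral parameter v_k and CG parameter beta_k^PMHS of (11)-(12)."""
    inner = lambda u, v: sum(p * q for p, q in zip(u, v))
    norm = lambda u: math.sqrt(inner(u, u))
    x = list(x0)
    Fx = F(x)
    d = [-fi for fi in Fx]                      # Step 2 with k = 0
    x_prev = w_prev = None
    for k in range(max_iter):
        if norm(Fx) <= tol:                     # Step 1: stopping test
            return x
        if k > 0:
            s = [wi - xi for wi, xi in zip(w_prev, x_prev)]      # s_{k-1}
            gamma = [fw - fx + a * si                            # gamma_{k-1}
                     for fw, fx, si in zip(F(w_prev), F(x_prev), s)]
            v = inner(s, s) / inner(gamma, s)                    # spectral v_k
            beta = (inner(Fx, d) / inner(d, d)
                    - inner(gamma, gamma) / inner(gamma, d) ** 2 * inner(Fx, d))
            d = [-v * fi + max(beta, 0.0) * di for fi, di in zip(Fx, d)]
        alpha = kappa                           # Step 3: line search (14)
        while True:
            w = [xi + alpha * di for xi, di in zip(x, d)]
            Fw = F(w)
            if -inner(Fw, d) >= sigma * alpha * inner(d, d) * norm(Fw) ** (1.0 / r):
                break
            alpha *= rho
        if norm(Fw) <= tol:                     # Step 4 (membership test omitted)
            return w
        lam = inner(Fw, [xi - wi for xi, wi in zip(x, w)]) / inner(Fw, Fw)
        x_prev, w_prev = x, w
        x = project([xi - lam * fi for xi, fi in zip(x, Fw)])
        Fx = F(x)
    return x

# Usage: Problem 4 from Appendix A, F_i(x) = exp(x_i) - 1 on D = R^n_+
sol = hss_solve(lambda x: [math.exp(t) - 1.0 for t in x],
                lambda z: [max(t, 0.0) for t in z],
                [0.5] * 5)
```

Lemma 2 below guarantees that the backtracking loop in Step 3 terminates, so no extra safeguard is needed in exact arithmetic.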
Remark 1.
The line search defined by (14) is more general than those of [36,37]. When $r = 1$, the line search (14) reduces to the line search in [36], and when $r$ is sufficiently large, it reduces to the line search in [37].
Assumption 1.
Throughout this paper, we assume the following
(i) 
The solution set of problem (1) is nonempty.
(ii) 
The function $F : \mathbb{R}^n \to \mathbb{R}^n$ satisfies (2) and (3).
The following lemma shows that the proposed search direction is well-defined and satisfies (7) independently of the line search strategy.
Lemma 1.
The parameters $\beta_k^{PMHS}$ and $v_k$ defined by (11) and (12), respectively, are well-defined. In addition, for all $k \ge 0$, the search direction $d_k$ defined by (10) satisfies
$$\langle F(x_k), d_k\rangle \le -t\|F_k\|^2, \quad t > 0. \tag{16}$$
Proof of Lemma 1. 
From the monotonicity of $F$, we have $\langle F(w_k) - F(x_{k-1}), w_k - x_{k-1}\rangle \ge 0$. This implies that
$$\langle\gamma_{k-1}, s_{k-1}\rangle \ge a\|s_{k-1}\|^2 > 0. \tag{17}$$
By the definition of $s_{k-1}$ and $w_k$ in Steps 2 and 4 of Algorithm 1, we have $s_{k-1} = w_k - x_{k-1} = \alpha_{k-1} d_{k-1}$. Therefore, it holds that
$$\langle\gamma_{k-1}, d_{k-1}\rangle \ge a\alpha_{k-1}\|d_{k-1}\|^2 > 0. \tag{18}$$
By (3) we have
$$\langle\gamma_{k-1}, s_{k-1}\rangle = \langle F(w_k) - F(x_{k-1}), s_{k-1}\rangle + a\|s_{k-1}\|^2 \le (L+a)\|s_{k-1}\|^2. \tag{19}$$
So (17) and (19) give
$$\frac{1}{L+a} \le v_k \le \frac{1}{a}. \tag{20}$$
Hence $v_k$ is well-defined, and so is $\beta_k^{PMHS}$ by (18).
Now, taking the inner product of the search direction defined by (10) with $F_k$: for $k = 0$, it follows that $\langle F(x_k), d_k\rangle = -\|F(x_k)\|^2$, that is, $t = 1$. Suppose $k > 0$ and $\beta_k^{PMHS} \le 0$; then we get
$$\langle F(x_k), d_k\rangle = -v_k\|F(x_k)\|^2 \le -\frac{1}{L+a}\|F(x_k)\|^2.$$
Again, suppose $k > 0$ and $\beta_k^{PMHS} > 0$; we have
$$\begin{aligned}
\langle F(x_k), d_k\rangle &= -v_k\|F(x_k)\|^2 + \beta_k^{PMHS}\langle F(x_k), d_{k-1}\rangle\\
&= -v_k\|F(x_k)\|^2 + \frac{\langle F(x_k), d_{k-1}\rangle^2}{\|d_{k-1}\|^2} - \frac{\|\gamma_{k-1}\|^2}{\langle\gamma_{k-1}, d_{k-1}\rangle^2}\langle F(x_k), d_{k-1}\rangle^2\\
&\le -v_k\|F(x_k)\|^2 + \frac{\langle F(x_k), d_{k-1}\rangle^2}{\|d_{k-1}\|^2} - \frac{\|\gamma_{k-1}\|^2}{\|\gamma_{k-1}\|^2\|d_{k-1}\|^2}\langle F(x_k), d_{k-1}\rangle^2\\
&\le -\frac{1}{L+a}\|F(x_k)\|^2 + \frac{\langle F(x_k), d_{k-1}\rangle^2}{\|d_{k-1}\|^2} - \frac{\langle F(x_k), d_{k-1}\rangle^2}{\|d_{k-1}\|^2}\\
&= -\frac{1}{L+a}\|F(x_k)\|^2.
\end{aligned}$$
The two inequalities follow from the Cauchy–Schwarz inequality and (20), respectively. Hence, the desired result holds. □
Lemma 2.
Let the sequence of iterates $\{x_k\}$ and the search directions $\{d_k\}$ be generated by Algorithm 1. Then there always exists a stepsize $\alpha_k$ satisfying the line search defined by (14) for any $k \ge 0$.
Proof of Lemma 2. 
Suppose on the contrary that there exists some $k_0$ such that, for any $i = 0, 1, 2, \ldots$, the line search (14) does not hold, that is,
$$-\langle F(x_{k_0} + \kappa\varrho^i d_{k_0}), d_{k_0}\rangle < \sigma\kappa\varrho^i\|d_{k_0}\|^2\,\|F(x_{k_0} + \kappa\varrho^i d_{k_0})\|^{1/r}.$$
By the fact that $F$ is continuous, $\varrho \in (0, 1)$ and $\{d_k\}$ is bounded for all $k$ (see Lemma 3 below), letting $i \to \infty$ yields
$$-\langle F(x_{k_0}), d_{k_0}\rangle \le 0. \tag{22}$$
It is clear that the inequality (22) contradicts (16). Hence the line search (14) is well-defined. □
Lemma 3.
Let Assumption 1 hold and let $\hat{x}$ be a solution of problem (1). Suppose the sequences $\{w_k\}$, $\{x_k\}$ and $\{d_k\}$ are generated by Algorithm 1. Then the following hold:
(i) 
$\{x_k\}$ and $\{F(x_k)\}$ are bounded and $\lim_{k\to\infty}\|x_k - \hat{x}\|$ exists.
(ii) 
The sequence of search directions $\{d_k\}$ is bounded.
(iii) 
$\{w_k\}$ and $\{F(w_k)\}$ are bounded.
(iv) 
$\lim_{k\to\infty}\alpha_k\|d_k\| = 0$.
(v) 
$\lim_{k\to\infty}\|x_{k+1} - x_k\| = 0$.
Proof of Lemma 3. 
To show (i), let $k \ge 0$ and let $\hat{x}$ be a solution of problem (1); then $\langle F(\hat{x}), w_{k+1} - \hat{x}\rangle = 0$. Since $F$ is monotone, we have $\langle F(w_{k+1}), w_{k+1} - \hat{x}\rangle \ge \langle F(\hat{x}), w_{k+1} - \hat{x}\rangle = 0$. Therefore, it holds that
$$\langle F(w_{k+1}), x_k - \hat{x}\rangle = \langle F(w_{k+1}), x_k - w_{k+1}\rangle + \langle F(w_{k+1}), w_{k+1} - \hat{x}\rangle \ge \langle F(w_{k+1}), x_k - w_{k+1}\rangle. \tag{23}$$
From the property (13) we have
$$\left\|P_D\!\left[x_k - \frac{\langle F(w_{k+1}), x_k - w_{k+1}\rangle}{\|F(w_{k+1})\|^2}\,F(w_{k+1})\right] - \hat{x}\right\| \le \left\|x_k - \frac{\langle F(w_{k+1}), x_k - w_{k+1}\rangle}{\|F(w_{k+1})\|^2}\,F(w_{k+1}) - \hat{x}\right\|. \tag{24}$$
From the definition of $x_{k+1}$, we have
$$\begin{aligned}
\|x_{k+1} - \hat{x}\|^2 &\le \left\|x_k - \hat{x} - \frac{\langle F(w_{k+1}), x_k - w_{k+1}\rangle}{\|F(w_{k+1})\|^2}\,F(w_{k+1})\right\|^2\\
&= \|x_k - \hat{x}\|^2 - 2\,\frac{\langle F(w_{k+1}), x_k - w_{k+1}\rangle}{\|F(w_{k+1})\|^2}\,\langle F(w_{k+1}), x_k - \hat{x}\rangle + \frac{\langle F(w_{k+1}), x_k - w_{k+1}\rangle^2}{\|F(w_{k+1})\|^2}\\
&\le \|x_k - \hat{x}\|^2 - 2\,\frac{\langle F(w_{k+1}), x_k - w_{k+1}\rangle}{\|F(w_{k+1})\|^2}\,\langle F(w_{k+1}), x_k - w_{k+1}\rangle + \frac{\langle F(w_{k+1}), x_k - w_{k+1}\rangle^2}{\|F(w_{k+1})\|^2}\\
&= \|x_k - \hat{x}\|^2 - \frac{\langle F(w_{k+1}), x_k - w_{k+1}\rangle^2}{\|F(w_{k+1})\|^2}\\
&\le \|x_k - \hat{x}\|^2. 
\end{aligned}\tag{25}$$
The first and second inequalities follow from (24) and (23), respectively. The last inequality holds by dropping the negative term on the right-hand side. This implies that $\|x_k - \hat{x}\| \le \|x_0 - \hat{x}\|$ for all $k$, and therefore the sequence $\{x_k\}$ is bounded and $\lim_{k\to\infty}\|x_k - \hat{x}\|$ exists. Let $c_1 = L\|x_0 - \hat{x}\|$; since $F$ is Lipschitzian continuous and $\{\|x_k - \hat{x}\|\}$ is decreasing, then
$$\|F(x_k)\| = \|F(x_k) - F(\hat{x})\| \le L\|x_k - \hat{x}\| \le L\|x_0 - \hat{x}\| = c_1. \tag{26}$$
To show (ii), let $k = 0$; by the definition of the search direction (10) we have
$$\|d_0\| = \|F(x_0)\| \le c_1.$$
Suppose $k > 0$ and $\beta_k^{PMHS} \le 0$; then $\max\{\beta_k^{PMHS}, 0\} = 0$ and (10) reduces to
$$d_k = -v_k F(x_k), \quad k = 1, 2, \ldots$$
Therefore, combining with (20) and (26) yields
$$\|d_k\| = v_k\|F(x_k)\| \le \frac{c_1}{a}.$$
By the Lipschitzian continuity, we have
$$\|\gamma_{k-1}\| = \|F(w_k) - F(x_{k-1}) + a(w_k - x_{k-1})\| \le L\|w_k - x_{k-1}\| + a\|w_k - x_{k-1}\| = (L+a)\alpha_{k-1}\|d_{k-1}\|. \tag{29}$$
Again, suppose $k > 0$ and $\beta_k^{PMHS} > 0$; we have
$$\begin{aligned}
\|d_k\| &= \|-v_k F(x_k) + \beta_k^{PMHS} d_{k-1}\| \le v_k\|F(x_k)\| + |\beta_k^{PMHS}|\,\|d_{k-1}\|\\
&= v_k\|F(x_k)\| + \left|\frac{\langle F(x_k), d_{k-1}\rangle}{\|d_{k-1}\|^2} - \frac{\|\gamma_{k-1}\|^2}{\langle\gamma_{k-1}, d_{k-1}\rangle^2}\langle F(x_k), d_{k-1}\rangle\right|\|d_{k-1}\|\\
&\le v_k\|F(x_k)\| + \frac{\|F(x_k)\|\,\|d_{k-1}\|}{\|d_{k-1}\|^2}\,\|d_{k-1}\| + \frac{\|\gamma_{k-1}\|^2}{\langle\gamma_{k-1}, d_{k-1}\rangle^2}\,\|F(x_k)\|\,\|d_{k-1}\|^2\\
&\le \frac{1}{a}\|F(x_k)\| + \|F(x_k)\| + \frac{(L+a)^2\alpha_{k-1}^2\,\|F(x_k)\|\,\|d_{k-1}\|^4}{a^2\alpha_{k-1}^2\|d_{k-1}\|^4}\\
&= \left(\frac{1}{a} + 1 + \frac{(L+a)^2}{a^2}\right)\|F(x_k)\| \le \left(\frac{1}{a} + 1 + \frac{(L+a)^2}{a^2}\right)c_1.
\end{aligned}$$
The first and second inequalities follow from the triangle inequality and the Cauchy–Schwarz inequality, respectively. The third inequality follows from (18), (20) and (29), while the fourth follows from (26). If we let $c := \left(\frac{1}{a} + 1 + \frac{(L+a)^2}{a^2}\right)c_1$, then it holds that
$$\|d_k\| \le c, \quad \forall k \ge 0. \tag{30}$$
Next, we show (iii). By the boundedness of $\{x_k\}$ and (30), it follows from the definition of $w_k$ that $\{w_k\}$ is also bounded. By the Lipschitzian continuity of $F$, there exists some constant $c_2$ for which
$$\|F(w_k)\| \le c_2, \quad \forall k \ge 0. \tag{31}$$
To show (iv), from (25) we deduce
$$\frac{\langle F(w_{k+1}), \alpha_k d_k\rangle^2}{\|F(w_{k+1})\|^2} \le \|x_k - \hat{x}\|^2 - \|x_{k+1} - \hat{x}\|^2. \tag{32}$$
Since the stepsize $\alpha_k$ in Step 3 of Algorithm 1 satisfies $\alpha_k \le 1$ for all $k$, from (14) we have
$$\sigma^2\alpha_k^4\|d_k\|^4\,\|F(w_{k+1})\|^{2/r} \le \sigma^2\alpha_k^2\|d_k\|^4\,\|F(w_{k+1})\|^{2/r} \le \langle F(w_{k+1}), \alpha_k d_k\rangle^2. \tag{33}$$
Combining (32) and (33) gives
$$\sigma^2\alpha_k^4\|d_k\|^4\,\|F(w_{k+1})\|^{2/r} \le \|F(w_{k+1})\|^2\left(\|x_k - \hat{x}\|^2 - \|x_{k+1} - \hat{x}\|^2\right). \tag{34}$$
Dividing both sides of (34) by $\|F(w_{k+1})\|^{2/r}$ and using (31) gives
$$\sigma^2\alpha_k^4\|d_k\|^4 \le \|F(w_{k+1})\|^{2 - 2/r}\left(\|x_k - \hat{x}\|^2 - \|x_{k+1} - \hat{x}\|^2\right) \le c_2^{\,2 - 2/r}\left(\|x_k - \hat{x}\|^2 - \|x_{k+1} - \hat{x}\|^2\right).$$
Since $\lim_{k\to\infty}\|x_k - \hat{x}\|$ exists, taking limits on both sides gives
$$\sigma^2\lim_{k\to\infty}\alpha_k^4\|d_k\|^4 = 0.$$
Thus,
$$\lim_{k\to\infty}\alpha_k\|d_k\| = 0. \tag{37}$$
Finally, we show (v). Equation (37), together with the definition of $w_{k+1}$ in Step 4 of Algorithm 1, yields
$$\lim_{k\to\infty}\|w_{k+1} - x_k\| = 0.$$
By the property of the projection (13), we have
$$\begin{aligned}
\lim_{k\to\infty}\|x_{k+1} - x_k\| &= \lim_{k\to\infty}\left\|P_D\!\left[x_k - \frac{\langle F(w_{k+1}), x_k - w_{k+1}\rangle}{\|F(w_{k+1})\|^2}\,F(w_{k+1})\right] - x_k\right\|\\
&\le \lim_{k\to\infty}\left\|x_k - \frac{\langle F(w_{k+1}), x_k - w_{k+1}\rangle}{\|F(w_{k+1})\|^2}\,F(w_{k+1}) - x_k\right\|\\
&\le \lim_{k\to\infty}\|x_k - w_{k+1}\| = 0. \qquad \Box
\end{aligned}$$
Theorem 1.
Let $\{x_k\}$ be the sequence generated by Algorithm 1 and suppose Assumption 1 holds. Then:
(i) 
$\liminf_{k\to\infty}\|F(x_k)\| = 0$;
(ii) 
the sequence $\{x_k\}$ converges to a point $\hat{x}$ which satisfies $F(\hat{x}) = 0$.
Proof of Theorem 1. 
We prove (i) by contradiction. Suppose that
$$\liminf_{k\to\infty}\|F(x_k)\| > 0;$$
then we can find some positive constant, say $\vartheta$, such that
$$\|F(x_k)\| \ge \vartheta, \quad \forall k. \tag{40}$$
Applying the Cauchy–Schwarz inequality to (16) yields $t\|F(x_k)\| \le \|d_k\|$. This, together with (40), gives $\|d_k\| \ge t\vartheta$. Combining with (37), we obtain
$$\lim_{k\to\infty}\alpha_k = 0.$$
Since $F$ satisfies Lipschitzian continuity, it holds that
$$\|F(x_k + \varrho^{-1}\alpha_k d_k)\| = \|F(x_k + \varrho^{-1}\alpha_k d_k) - F(\hat{x})\| \le L\|x_k + \varrho^{-1}\alpha_k d_k - \hat{x}\| \le L\left(\|x_k - \hat{x}\| + \varrho^{-1}\alpha_k\|d_k\|\right) \le m, \tag{42}$$
where $m$ is a positive constant; the last inequality follows from the boundedness of $\{x_k\}$ and $\{d_k\}$ together with the definition of $\alpha_k$. If $\alpha_k \ne \kappa$, then since Algorithm 1 uses a backtracking process to compute $\alpha_k$ starting from $\kappa$, the trial stepsize $\varrho^{-1}\alpha_k$ does not satisfy (14), that is,
$$-\langle F(x_k + \varrho^{-1}\alpha_k d_k), d_k\rangle < \sigma\varrho^{-1}\alpha_k\|d_k\|^2\,\|F(x_k + \varrho^{-1}\alpha_k d_k)\|^{1/r} \le \sigma\varrho^{-1}\alpha_k c^2 m^{1/r} = M\alpha_k, \tag{43}$$
where $M := \sigma\varrho^{-1} c^2 m^{1/r}$; the second inequality follows from (30) and (42). Since $\{x_k\}$ and $\{d_k\}$ are bounded, there exist accumulation points $\hat{x}$ and $\hat{d}$ of $\{x_k\}$ and $\{d_k\}$, respectively, and infinite index sets $K' \subseteq K$ such that $\lim_{k\in K'} x_k = \hat{x}$ and $\lim_{k\in K'} d_k = \hat{d}$. Therefore, by the continuity of $F$ and taking the limit on both sides of inequality (43) for $k \in K'$, we obtain
$$-\langle F(\hat{x}), \hat{d}\rangle \le 0. \tag{44}$$
On the other hand, (16) and (40) give $-\langle F_k, d_k\rangle \ge t\vartheta^2$. Taking the limit for $k \in K'$ we obtain
$$-\langle F(\hat{x}), \hat{d}\rangle \ge t\vartheta^2 > 0. \tag{45}$$
The inequalities (44) and (45) yield a contradiction, and therefore (i) must hold.
Finally, we show that (ii) holds. Since $F$ is continuous and the sequence $\{x_k\}$ is bounded, by part (i) there is some accumulation point $\hat{x}$ of $\{x_k\}$ for which $F(\hat{x}) = 0$. By the boundedness of $\{x_k\}$, we can find a subsequence $\{x_{k_j}\}$ of $\{x_k\}$ for which $\lim_{j\to\infty}\|x_{k_j} - \hat{x}\| = 0$. Since $\lim_{k\to\infty}\|x_k - \hat{x}\|$ exists by (i) of Lemma 3, we conclude that $\lim_{k\to\infty}\|x_k - \hat{x}\| = 0$, and the proof is complete. □

3. Experiment on Monotone Equations and Application in Signal Processing

In this section, we demonstrate the numerical performance of Algorithm 1 (HSS) and its computational advantage by comparing it with some existing methods. We divide the experiments into two subsections: the first is devoted to solving some test problems, while the second discusses the application of the HSS algorithm to signal recovery. For the monotone nonlinear equations experiment, all solvers were coded in MATLAB R2019b and run on a PC with an Intel Core(TM) i5-8250U processor, 4 GB of RAM and a CPU at 1.60 GHz.

3.1. First Experiment on Monotone Equations

In this experiment, we applied the HSS algorithm to eleven test problems and compared the numerical results with some state-of-the-art methods, namely: the modified Hager and Zhang CG method denoted CGD (also known as CG_Descent) by Xiao and Zhu [15], the modified Dai–Yuan CG method denoted PDY by Liu and Feng [32] and the modified Fletcher–Reeves CG method denoted MFRM by Abubakar et al. [25]. Problems 1–10 are solved with dimensions $n = 1000, 5000, 10{,}000, 50{,}000, 100{,}000$ and Problem 11 with dimension $n = 4$ (see Appendix A). We used six different starting points, that is, $x_1 = (0.1, 0.1, \ldots, 0.1)^T$, $x_2 = (\frac{1}{2}, \frac{1}{2^2}, \frac{1}{2^3}, \ldots, \frac{1}{2^n})^T$, $x_3 = (2, 2, \ldots, 2)^T$, $x_4 = (1, \frac{1}{2}, \frac{1}{3}, \ldots, \frac{1}{n})^T$, $x_5 = (1 - \frac{1}{n}, 1 - \frac{2}{n}, 1 - \frac{3}{n}, \ldots, 0)^T$ and $x_6 = \mathrm{rand}(0, 1)$. This brings the total number of problem runs in this experiment to 306.
We implemented the HSS method using the parameters $\kappa = 1$, $\sigma = 0.01$, $\varrho = 0.5$, $r = 5$ and $a = 0.01$, while the parameters used for the CGD, PDY and MFRM methods come from [15], [32] and [25], respectively. We set the terminating criterion for the iteration process as $\|F(x_k)\| \le 10^{-6}$ or $\|F(w_k)\| \le 10^{-6}$, and declare failure (denoted by "-") whenever the number of iterations exceeds 1000 without the terminating criterion being satisfied.
We carried out the comparison based on ITER (number of iterations), FVAL (number of function evaluations) and TIME (CPU time in seconds), and the numerical results obtained by each solver are reported in Tables 1–11. We see from the NORM column (norm of the objective function) in Tables 1–11 that the proposed HSS algorithm obtained the solutions of all the test problems in every instance, while the other three methods (CGD, PDY and MFRM) failed on some problems in some instances. This means our proposed algorithm can serve as an alternative to some existing methods. The numerical results reported show that the proposed HSS algorithm recorded the lowest ITER, FVAL and TIME in most instances. We summarized all the information from Tables 1–11 in Figures 1–3 based on the performance profile of Dolan and Moré [38], which reports the percentage of wins by each solver. In terms of ITER and FVAL, we see from Figures 1 and 2 that the HSS algorithm performs well and won about 70% of the whole experiment compared to the existing methods. In addition, Figure 3 shows that the HSS algorithm is faster than the three methods compared with it. Therefore, we can regard the proposed HSS algorithm as more efficient than the CGD, PDY and MFRM methods with respect to the numerical experiments performed.
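The Dolan–Moré performance profile used for Figures 1–3 can be computed from a cost table as follows (a minimal sketch of the metric itself, not the paper's plotting code; the inputs `T` and `taus` are hypothetical):

```python
def performance_profile(T, taus):
    """Dolan-More profile. T[s][p] is the cost (ITER, FVAL or TIME) of
    solver s on problem p, with float('inf') marking a failure. Returns,
    for each solver, the fraction of problems solved within a factor
    tau of the best solver, for each tau in taus."""
    num_solvers, num_probs = len(T), len(T[0])
    best = [min(T[s][p] for s in range(num_solvers)) for p in range(num_probs)]
    ratios = [[T[s][p] / best[p] for p in range(num_probs)]
              for s in range(num_solvers)]
    return [[sum(rp <= tau for rp in ratios[s]) / num_probs for tau in taus]
            for s in range(num_solvers)]
```

At $\tau = 1$ the profile gives the fraction of problems on which a solver is the outright winner, which is how the "percentage win" figures are read.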

3.2. Second Experiment on Signal Processing

The major advantage of the proposed HSS algorithm is that it does not require knowledge of the derivative of the objective function, and it is therefore suitable for nonsmooth problems. As mentioned in the introduction, many applications can be converted into monotone nonlinear equations. In particular, we consider the reconstruction of an original signal, say $\hat{x} \in \mathbb{R}^n$, by minimizing an objective function that contains a linear least-squares error term and a sparsity-inducing $\ell_1$ regularization term:
$$\min_x \frac{1}{2}\|y - Ex\|_2^2 + \mu\|x\|_1, \tag{46}$$
where $x \in \mathbb{R}^n$, $y \in \mathbb{R}^k$ is an observation, $E \in \mathbb{R}^{k\times n}$ ($k \ll n$) is a linear operator and $\mu \ge 0$ is a regularization parameter. Problem (46) has received much attention, and different iterative methods for solving it have been proposed by many researchers (see [39,40,41,42,43,44,45]). One of the popular methods for solving (46) is the gradient projection method for sparse reconstruction (also known as GPSR) proposed by Figueiredo et al. [46]. Other gradient-based iterative algorithms include the coordinate gradient descent method [47] and accelerated gradient projection methods [48], among others. These methods enjoy good performance, even though they require knowledge of the gradient.
The derivative-free nature of our proposed algorithm makes it suitable for dealing with the nonsmooth problem (46). However, Algorithm 1 is designed to handle problems of the form (1), and therefore we need to rewrite problem (46) in that form. Fortunately, the work of Figueiredo et al. [46] shows that if we split $x = u - v$ with $u, v \ge 0$ and let $q = [u, v]^T$, then (46) can be translated into the following bound-constrained quadratic programming problem:
$$\min_q \frac{1}{2} q^T G q + c^T q, \quad \text{s.t.} \quad q \ge 0, \tag{47}$$
where
$$c = \mu e_{2n} + \begin{bmatrix} -b \\ b \end{bmatrix}, \quad b = E^T y, \quad G = \begin{bmatrix} E^T E & -E^T E \\ -E^T E & E^T E \end{bmatrix},$$
and $e_{2n}$ denotes the vector of ones in $\mathbb{R}^{2n}$.
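A small dense sketch (our own; it follows the sign conventions of the reconstruction above) assembling $G$ and $c$ from $E$, $y$ and $\mu$; note that $q^T G q = \|E(u - v)\|_2^2 \ge 0$, which is why $G$ is positive semi-definite:

```python
def build_qp(E, y, mu):
    """Assemble G = [[E^T E, -E^T E], [-E^T E, E^T E]] and
    c = mu * e_{2n} + [-b; b] with b = E^T y, as in (47).
    E is a k-by-n list of rows, y a length-k list."""
    k, n = len(E), len(E[0])
    EtE = [[sum(E[t][i] * E[t][j] for t in range(k)) for j in range(n)]
           for i in range(n)]
    b = [sum(E[t][i] * y[t] for t in range(k)) for i in range(n)]
    # Block sign pattern: +EtE on the diagonal blocks, -EtE off-diagonal
    G = [[EtE[i % n][j % n] * (1 if (i < n) == (j < n) else -1)
          for j in range(2 * n)] for i in range(2 * n)]
    c = [mu - bi for bi in b] + [mu + bi for bi in b]
    return G, c
```

In practice (and in the experiments below) $G$ is never formed explicitly; products with $G$ are computed from $E$ and $E^T$.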
It is not difficult to see that the matrix $G$ is positive semi-definite. On the other hand, Xiao et al. [2] solved problem (47) in a different way by writing it as the following linear variational inequality problem:
$$\text{Find } q \ge 0 \text{ such that } \langle Gq + c, q' - q\rangle \ge 0, \quad \forall q' \ge 0. \tag{48}$$
By taking the special structure of the feasible region of $q$ into consideration, they further showed that problem (48) is equivalent to the following linear complementarity problem:
$$\text{Find } q \text{ such that } q \ge 0, \quad Gq + c \ge 0 \quad \text{and} \quad \langle Gq + c, q\rangle = 0. \tag{49}$$
We can see that $q$ is a solution of problem (49) if and only if it satisfies
$$F(q) := \min\{q, Gq + c\} = 0. \tag{50}$$
In (50), the function $F$ is vector-valued and the "min" operator denotes the componentwise minimum of two vectors. Problem (50) is of the form of problem (1) and, interestingly, Lemma 3 of [49] and Lemma 2.2 of [2] show that the function $F$ satisfies Assumption 1 (ii), that is, Lipschitzian continuity and monotonicity. Hence, the proposed derivative-free HSS algorithm can be applied to solve it: at each iteration, the algorithm is applied to the resulting problem (50) without requiring any Jacobian matrix information.
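The residual map (50) is a one-liner; a sketch (our own) with a tiny complementarity problem whose solution is $q = (1, 0)^T$:

```python
def lcp_residual(G, c, q):
    """F(q) = min(q, Gq + c) componentwise, cf. (50);
    F(q) = 0 exactly when q solves the complementarity problem (49)."""
    Gq_c = [sum(gij * qj for gij, qj in zip(row, q)) + ci
            for row, ci in zip(G, c)]
    return [min(qi, gi) for qi, gi in zip(q, Gq_c)]

# q = (1, 0) solves: q >= 0, Gq + c = (0, 1) >= 0, and <Gq + c, q> = 0
assert lcp_residual([[2.0, 0.0], [0.0, 2.0]], [-2.0, 1.0], [1.0, 0.0]) == [0.0, 0.0]
```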
We compared the HSS algorithm with the modified self-adaptive CG method (MSCG) [50] based on their performance in restoring a length-$n$ sparse signal from $k$ observations. We used three metrics, namely ITER (number of iterations), CPU (CPU time in seconds) and the mean squared error (MSE), to evaluate the performance of each method. The MSE is commonly used to assess the quality of reconstruction and is calculated using the formula
$$\mathrm{MSE} = \frac{1}{n}\|x - \hat{x}\|^2,$$
where $x$ and $\hat{x}$ denote the original and restored signals, respectively. The measurement $y$ contains noise, $y = Gx + \omega$, where $\omega$ is Gaussian noise distributed as $N(0, 10^{-4})$ and $G$ is the Gaussian matrix generated by the MATLAB command randn(m,n). The size of the signal is selected as $n = 2^{15}$ and $m = 2^{13}$. The original signal contains $2^7$ randomly placed nonzero elements. For the signal recovery experiment, the algorithms were coded in MATLAB R2019b and run on a PC with an Intel(R) Core(TM) i7-10510U processor, 8.00 GB (7.80 GB usable) of RAM and a CPU at 1.80 GHz–2.30 GHz.
We started the iteration process from the measurement signal, i.e., $x_0 = G^T y$, and used $f(x) = \frac{1}{2}\|y - Gx\|_2^2 + \mu\|x\|_1$ as the merit function. We used the same parameters as in the first experiment for the HSS method, except for $a = 0.2$, while the parameters for MSCG come from [50]. We terminated the iteration process when the relative change of the objective function satisfied
$$\frac{|f(x_k) - f(x_{k-1})|}{|f(x_{k-1})|} < 10^{-5}.$$
In order to have a relatively fair assessment of both methods, we ran each code from the same initial point, using the same continuation technique on the parameter $\mu$, and observed the convergence behaviour of each method to obtain a solution of similar accuracy. We present the numerical results of the experiment for fifteen different noise samples in Table 12, where the average of each column is also computed. In addition, we present a graphical view of the original, disturbed and recovered signals in Figure 4 and Figure 5. From Table 12, we see that the HSS algorithm restored the disturbed signal with the lowest ITER values and also converged faster than MSCG based on the CPU time. Moreover, the quality of reconstruction by the HSS algorithm is slightly better than that of the MSCG method, since the MSE of the former is a little smaller than that of the latter.

4. Conclusions

A Hestenes–Stiefel-like derivative-free method with a spectral parameter for nonlinear monotone equations has been proposed based on a suitable line search strategy and a projection technique. The method is a modification of the conjugate gradient algorithm proposed by Amini et al. [33], and its convergence analysis was established under standard assumptions. Two types of experiments were presented to show the efficiency of the proposed method. The numerical results reported show that the proposed method outperformed three existing methods [15,25,32] and was able to recover disturbed signals in compressive sensing with better quality than the method in [50].

Author Contributions

Conceptualization, L.W. and A.M.A.; methodology, A.M.A. and W.W.; software, A.M.A. and H.M.; validation, A.M.A., L.W., P.K., H.M. and W.W.; formal analysis, A.M.A., L.W. and W.W.; investigation, A.M.A. and H.M.; resources, L.W. and P.K.; data curation, A.M.A. and H.M.; writing—original draft preparation, A.M.A.; writing—review and editing, A.M.A., L.W., P.K., H.M. and W.W.; visualization, P.K.; supervision, L.W. and P.K.; project administration, L.W. and P.K.; funding acquisition, P.K. and L.W. All authors have read and agreed to the published version of the manuscript.

Funding

The authors acknowledge the financial support provided by the Petchra Pra Jom Klao Doctoral Scholarship for Ph.D. program of King Mongkut’s University of Technology Thonburi (KMUTT) and Center of Excellence in Theoretical and Computational Science (TaCS-CoE), KMUTT. The first author was supported by the Petchra Pra Jom Klao Ph.D. Research Scholarship from King Mongkut’s University of Technology Thonburi (Contract No. 42/2560).

Acknowledgments

The authors would like to thank the anonymous referees for their useful comments. This project was supported by the Center of Excellence in Theoretical and Computational Science (TaCS-CoE) Center under the Computational and Applied Science for Smart Innovation research Cluster (CLASSIC), Faculty of Science, KMUTT. This work was done while the first author was visiting Yunnan University of Finance and Economics, Kunming, China.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. List of Test Problems

The following is the list of test problems used in our experiments, where F(x) = (f_1(x), f_2(x), …, f_n(x))^T.
Problem 1 [51]
f_1(x) = e^{x_1} − 1,
f_i(x) = e^{x_i} + x_{i−1} − 1, i = 2, 3, …, n,
where Ω = R_+^n.
Problem 2 [51]
f_i(x) = log(x_i + 1) − x_i / n, i = 1, 2, …, n,
where Ω = { x ∈ R^n : ∑_{i=1}^{n} x_i ≤ n, x_i > −1, i = 1, 2, …, n }.
Problem 3 [51]
f_i(x) = 2x_i − sin(|x_i|), i = 1, 2, …, n,
where Ω = R_+^n.
Problem 4 [15]
f_i(x) = e^{x_i} − 1, i = 1, 2, …, n,
where Ω = R_+^n.
Problem 5 [32]
f_1(x) = x_1 − e^{cos(h(x_1 + x_2))},
f_i(x) = x_i − e^{cos(h(x_{i−1} + x_i + x_{i+1}))}, i = 2, …, n − 1,
f_n(x) = x_n − e^{cos(h(x_{n−1} + x_n))},
where h = 1/(n + 1) and Ω = R_+^n.
Problem 6 [15]
f_i(x) = x_i − sin(|x_i − 1|), i = 1, 2, …, n,
where Ω = { x ∈ R^n : ∑_{i=1}^{n} x_i ≤ n, x_i ≥ −1, i = 1, 2, …, n }.
Problem 7 [52]
f_i(x) = e^{x_i} + (3/2) sin(2x_i) − 1, i = 1, 2, …, n,
where Ω = R_+^n.
Problem 8 [51]
f_i(x) = min( min(|x_i|, x_i^2), max(|x_i|, x_i^3) ), i = 1, 2, …, n,
where Ω = R_+^n.
Problem 9 [23]
f_1(x) = 2x_1 − x_2 + e^{x_1} − 1,
f_i(x) = −x_{i−1} + 2x_i − x_{i+1} + e^{x_i} − 1, i = 2, …, n − 1,
f_n(x) = −x_{n−1} + 2x_n + e^{x_n} − 1,
where Ω = R_+^n.
Problem 10 [53]
f_1(x) = (5/2)x_1 + x_2 − 1,
f_i(x) = x_{i−1} + (5/2)x_i + x_{i+1} − 1, i = 2, …, n − 1,
f_n(x) = x_{n−1} + (5/2)x_n − 1,
where Ω = R_+^n.
Problem 11 [51] F : R^4 → R^4 defined by
F(x) = A x + (x_1^3, x_2^3, 2x_3^3, 2x_4^3)^T + (−10, 1, −3, 0)^T,
where
A = ( 1 0 0 0; 0 1 −1 0; 0 1 1 0; 0 0 0 0 )
and Ω = { x ∈ R^4 : ∑_{i=1}^{4} x_i = 3, x_i ≥ 0, i = 1, 2, 3, 4 }.
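To make the appendix concrete, two of the simpler test maps (Problems 3 and 4) and the projection onto Ω = R_+^n are sketched below. This is a pure-Python illustration with our own (hypothetical) function names, not the authors' experimental code.

```python
import math

def F_problem3(x):
    """Problem 3: f_i(x) = 2*x_i - sin(|x_i|), componentwise."""
    return [2.0 * xi - math.sin(abs(xi)) for xi in x]

def F_problem4(x):
    """Problem 4: f_i(x) = exp(x_i) - 1, componentwise."""
    return [math.exp(xi) - 1.0 for xi in x]

def project_nonneg(x):
    """Projection onto Omega = R_+^n: componentwise max(x_i, 0)."""
    return [max(xi, 0.0) for xi in x]
```

Both maps vanish at the origin, which is the solution the solvers converge to from the starting points SP; the projection is what keeps every iterate inside the feasible set.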

References

  1. Iusem, A.N.; Solodov, M.V. Newton-type methods with generalized distances for constrained optimization. Optimization 1997, 41, 257–278.
  2. Xiao, Y.; Wang, Q.; Hu, Q. Non-smooth equations based methods for l1-norm problems with applications to compressed sensing. Nonlinear Anal. Theory Methods Appl. 2011, 74, 3570–3577.
  3. Ghaddar, B.; Marecek, J.; Mevissen, M. Optimal power flow as a polynomial optimization problem. IEEE Trans. Power Syst. 2016, 31, 539–546.
  4. Sun, W.; Yuan, Y.X. Optimization Theory and Methods: Nonlinear Programming; Springer Science & Business Media: New York, NY, USA, 2006.
  5. Nocedal, J.; Wright, S.J. Numerical Optimization; Springer Science: New York, NY, USA, 2006.
  6. La Cruz, W.; Martínez, J.; Raydan, M. Spectral residual method without gradient information for solving large-scale nonlinear systems of equations. Math. Comput. 2006, 75, 1429–1448.
  7. Leong, W.J.; Hassan, M.A.; Yusuf, M.W. A matrix-free quasi-Newton method for solving large-scale nonlinear systems. Comput. Math. Appl. 2011, 62, 2354–2363.
  8. Wan, Z.; Chen, Y.; Huang, S.; Feng, D. A modified nonmonotone BFGS algorithm for solving smooth nonlinear equations. Optim. Lett. 2014, 8, 1845–1860.
  9. Mohammad, H.; Waziri, M.Y. On Broyden-like update via some quadratures for solving nonlinear systems of equations. Turk. J. Math. 2015, 39, 335–345.
  10. Mohammad, H. Barzilai–Borwein-Like Method for Solving Large-Scale Nonlinear Systems of Equations. J. Niger. Math. Soc. 2017, 36, 71–83.
  11. Bellavia, S.; Macconi, M.; Morini, B. An affine scaling trust-region approach to bound-constrained nonlinear systems. Appl. Numer. Math. 2003, 44, 257–280.
  12. Zhu, D. An affine scaling trust-region algorithm with interior backtracking technique for solving bound-constrained nonlinear systems. J. Comput. Appl. Math. 2005, 184, 343–361.
  13. Ahookhosh, M.; Esmaeili, H.; Kimiaei, M. An effective trust-region-based approach for symmetric nonlinear systems. Int. J. Comput. Math. 2013, 90, 671–690.
  14. Yuan, Y.-X. Recent advances in trust region algorithms. Math. Program. 2015, 151, 249–281.
  15. Xiao, Y.; Zhu, H. A conjugate gradient method to solve convex constrained monotone equations with applications in compressive sensing. J. Math. Anal. Appl. 2013, 405, 310–319.
  16. Hager, W.W.; Zhang, H. A new conjugate gradient method with guaranteed descent and an efficient line search. SIAM J. Optim. 2005, 16, 170–192.
  17. Solodov, M.V.; Svaiter, B.F. A globally convergent inexact Newton method for systems of monotone equations. In Reformulation: Nonsmooth, Piecewise Smooth, Semismooth and Smoothing Methods; Springer: Boston, MA, USA, 1998; pp. 355–369.
  18. Livieris, I.E.; Pintelas, P. Globally convergent modified Perry’s conjugate gradient method. Appl. Math. Comput. 2012, 218, 9197–9207.
  19. Dai, Z.; Chen, X.; Wen, F. A modified Perry’s conjugate gradient method-based derivative-free method for solving large-scale nonlinear monotone equations. Appl. Math. Comput. 2015, 270, 378–386.
  20. Li, Q.; Li, D.H. A class of derivative-free methods for large-scale nonlinear monotone equations. IMA J. Numer. Anal. 2011, 31, 1625–1635.
  21. Yan, Q.-R.; Peng, X.-Z.; Li, D.-H. A globally convergent derivative-free method for solving large-scale nonlinear monotone equations. J. Comput. Appl. Math. 2010, 234, 649–657.
  22. Gao, P.; He, C.; Liu, Y. An adaptive family of projection methods for constrained monotone nonlinear equations with applications. Appl. Math. Comput. 2019, 359, 1–16.
  23. Muhammed, A.A.; Kumam, P.; Abubakar, A.B.; Wakili, A.; Pakkaranang, N. A new hybrid spectral gradient projection method for monotone system of nonlinear equations with convex constraints. Thai J. Math. 2018. Available online: http://thaijmath.in.cmu.ac.th/index.php/thaijmath/article/view/3376 (accessed on 21 April 2020).
  24. Liu, J.K.; Li, S.J. A projection method for convex constrained monotone nonlinear equations with applications. Comput. Math. Appl. 2015, 70, 2442–2453.
  25. Abubakar, A.B.; Kumam, P.; Mohammad, H.; Awwal, A.M.; Sitthithakerngkiet, K. A Modified Fletcher–Reeves Conjugate Gradient Method for Monotone Nonlinear Equations with Some Applications. Mathematics 2019, 7, 745.
  26. Awwal, A.M.; Kumam, P.; Abubakar, A.B. Spectral modified Polak–Ribiére–Polyak projection conjugate gradient method for solving monotone systems of nonlinear equations. Appl. Math. Comput. 2019, 362, 124514.
  27. Abubakar, A.B.; Kumam, P.; Awwal, A.M. An inexact conjugate gradient method for symmetric nonlinear equations. Comput. Math. Methods 2019, 6, e1065.
  28. Mohammad, H.; Abubakar, A.B. A descent derivative-free algorithm for nonlinear monotone equations with convex constraints. RAIRO Oper. Res. 2020, 54, 489–505.
  29. Dai, Y.-H.; Huang, Y.; Liu, X.-W. A family of spectral gradient methods for optimization. Comput. Optim. Appl. 2019, 74, 43–65.
  30. Huang, S.; Wan, Z. A new nonmonotone spectral residual method for nonsmooth nonlinear equations. J. Comput. Appl. Math. 2017, 313, 82–101.
  31. Zheng, L.; Yang, L.; Liang, Y. A conjugate gradient projection method for solving equations with convex constraints. J. Comput. Appl. Math. 2020, 375, 112781.
  32. Liu, J.; Feng, Y. A derivative-free iterative method for nonlinear monotone equations with convex constraints. Numer. Algorithms 2019, 82, 245–262.
  33. Amini, K.; Faramarzi, P.; Pirfalah, N. A modified Hestenes–Stiefel conjugate gradient method with an optimal property. Optim. Methods Softw. 2019, 34, 770–782.
  34. Narushima, Y.; Yabe, H.; Ford, J.A. A three-term conjugate gradient method with sufficient descent property for unconstrained optimization. SIAM J. Optim. 2011, 21, 212–230.
  35. Dai, Y.-H.; Kou, C.-X. A nonlinear conjugate gradient algorithm with an optimal property and an improved Wolfe line search. SIAM J. Optim. 2013, 23, 296–320.
  36. Cheng, W. A PRP type method for systems of monotone equations. Math. Comput. Model. 2009, 50, 15–20.
  37. Zhang, L.; Zhou, W. Spectral gradient projection method for solving nonlinear monotone equations. J. Comput. Appl. Math. 2006, 196, 478–484.
  38. Dolan, E.D.; Moré, J.J. Benchmarking optimization software with performance profiles. Math. Program. 2002, 91, 201–213.
  39. Daubechies, I.; Defrise, M.; De Mol, C. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pure Appl. Math. 2004, 57, 1413–1457.
  40. Figueiredo, M.A.T.; Nowak, R. An EM algorithm for wavelet-based image restoration. IEEE Trans. Image Process. 2003, 12, 906–916.
  41. De Mol, C.; Defrise, M. A note on wavelet-based inversion algorithms. Contemp. Math. 2002, 313, 85–96.
  42. Hale, E.T.; Yin, W.; Zhang, Y. A Fixed-Point Continuation Method for ℓ1-Regularized Minimization with Applications to Compressed Sensing; CAAM Technical Report TR07-07; Rice University: Houston, TX, USA, 2007.
  43. Yang, J.; Yin, W.; Zhang, Y.; Wang, Y. A fast algorithm for edge-preserving variational multichannel image restoration. SIAM J. Imaging Sci. 2009, 2, 569–592.
  44. Yang, J.; Zhang, Y.; Yin, W. An efficient TVL1 algorithm for deblurring multichannel images corrupted by impulsive noise. SIAM J. Sci. Comput. 2009, 31, 2842–2865.
  45. Xiao, Y.-H.; Jin, Z.-F. An alternating direction method for linear-constrained matrix nuclear norm minimization. Numer. Linear Algebra Appl. 2012, 19, 541–554.
  46. Figueiredo, M.A.T.; Nowak, R.D.; Wright, S.J. Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems. IEEE J. Sel. Top. Signal Process. 2007, 1, 586–597.
  47. Yun, S.; Toh, K.-C. A coordinate gradient descent method for ℓ1-regularized convex minimization. Comput. Optim. Appl. 2011, 48, 273–307.
  48. Loris, I.; Bertero, M.; De Mol, C.; Zanella, R.; Zanni, L. Accelerating gradient projection methods for ℓ1-constrained signal recovery by steplength selection rules. Appl. Comput. Harmon. Anal. 2009, 27, 247–254.
  49. Pang, J.S. Inexact Newton methods for the nonlinear complementarity problem. Math. Program. 1986, 36, 54–71.
  50. Abubakar, A.B.; Kumam, P.; Awwal, A.M.; Thounthong, P. A Modified Self-Adaptive Conjugate Gradient Method for Solving Convex Constrained Monotone Nonlinear Equations for Signal Recovery Problems. Mathematics 2019, 7, 693.
  51. Awwal, A.M.; Kumam, P.; Abubakar, A.B. A modified conjugate gradient method for monotone nonlinear equations with convex constraints. Appl. Numer. Math. 2019, 145, 507–520.
  52. Gao, P.; He, C. An efficient three-term conjugate gradient method for nonlinear monotone equations with convex constraints. Calcolo 2018, 55, 53.
  53. Abubakar, A.B.; Kumam, P.; Awwal, A.M. A descent Dai–Liao projection method for convex constrained nonlinear monotone equations with applications. Thai J. Math. 2018. Available online: http://thaijmath.in.cmu.ac.th/index.php/thaijmath/article/view/3372 (accessed on 21 April 2020).
Figure 1. Dolan and Moré performance profile with respect to number of iterations.
Figure 2. Dolan and Moré performance profile with respect to number of function evaluations.
Figure 3. Dolan and Moré performance profile with respect to CPU time.
Figure 4. From top to bottom: the original signal, the measurement, the recovered signal by HSS and the recovered signal by MSCG.
Figure 5. Comparison results of HSS and MSCG. The x-axis represents the number of iterations (top left and bottom left) and the CPU time in seconds (top right and bottom right); the y-axis represents the mean squared error (MSE) (top left and top right) and the objective function values (bottom left and bottom right).
Table 1. Test results obtained by the four solvers for Problem 1.
Problem 1: HSS, CGD, PDY, MFRM
DIM, SP, then ITER, FVAL, TIME, NORM reported for each of the four solvers in turn
1000x15120.44541.2 × 10 8 41840.0296659.97 × 10 7 36740.02913.49 × 10 7 47960.0498234.58 × 10 7
x225510.0307729.12 × 10 7 571160.0204458.17 × 10 7 42850.0150798.84 × 10 7 651310.0548949.19 × 10 7
x38170.0058679.42 × 10 9 501020.0117758.56 × 10 7 46940.0172042.21 × 10 8 40820.0326475.07 × 10 7
x424500.0108837.2 × 10 7 581180.0154819.51 × 10 7 40820.0184373.16 × 10 7 44900.0377265.45 × 10 7
x57150.008251.49 × 10 8 511040.0128978.46 × 10 7 511040.0192569.89 × 10 7 641300.0722067.35 × 10 7
x67150.0213862.37 × 10 8 41840.0112118.86 × 10 7 46940.0187997.01 × 10 7 801620.0724486.36 × 10 8
5000x15110.0249341.79 × 10 7 40820.0403488.34 × 10 7 27560.0349971.3 × 10 8 35720.198116.59 × 10 7
x225510.0435359.12 × 10 7 571160.042538.17 × 10 7 42850.083158.84 × 10 7 651310.29739.19 × 10 7
x37160.0633392.09 × 10 8 48980.056128.76 × 10 7 28580.0670634.05 × 10 7 511040.199575.68 × 10 7
x424500.0450377.2 × 10 7 581180.0836549.51 × 10 7 36740.0669753.44 × 10 7 571160.24156.8 × 10 7
x57150.0168243.17 × 10 8 491000.0690988.61 × 10 7 511040.0938077.98 × 10 7 511040.428786.81 × 10 7
x67150.0130233.37 × 10 8 38780.0617288.02 × 10 7 49990.0837438.02 × 10 7 881780.791696.7 × 10 7
10000x15110.0253221.82 × 10 7 39800.0878348.97 × 10 7 29600.109882.14 × 10 8 37760.31353.44 × 10 7
x225510.0925619.12 × 10 7 571160.138388.17 × 10 7 42850.130598.84 × 10 7 651310.580469.19 × 10 7
x37160.0219962.96 × 10 8 47960.106679.17 × 10 7 31640.113671.18 × 10 7 931881.43962.81 × 10 7
x424500.0770327.2 × 10 7 581180.144769.51 × 10 7 41840.155321.1 × 10 7 48980.385828.67 × 10 7
x57150.0209354.45 × 10 8 48980.105448.97 × 10 7 47950.160125.62 × 10 7 971962.81349.86 × 10 7
x67150.023724.53 × 10 8 42860.0908518.97 × 10 7 541090.277445.59 × 10 7 1142303.21432.62 × 10 7
50000x15110.128613.17 × 10 7 38780.337468.43 × 10 7 28580.444728.48 × 10 8 47961.81093.45 × 10 7
x225510.408959.12 × 10 7 571160.43128.17 × 10 7 42850.584678.84 × 10 7 651312.23629.19 × 10 7
x37160.0897726.61 × 10 8 45920.380089.78 × 10 7 44900.762632.61 × 10 7 491003.7096.73 × 10 7
x424500.498317.2 × 10 7 581180.497679.51 × 10 7 45920.645482.84 × 10 7 511041.61058.18 × 10 7
x57150.086959.9 × 10 8 46940.401039.31 × 10 7 511040.725556.27 × 10 7 20340827.19119.8 × 10 7
x67150.0810759.63 × 10 8 41840.377149.08 × 10 7 44900.632766.26 × 10 7 17334817.37456.02 × 10 7
100000x15110.18974.34 × 10 7 38780.681247.72 × 10 7 21440.784097.7 × 10 7 40822.38717.69 × 10 7
x225511.12649.12 × 10 7 571161.0928.17 × 10 7 42851.24098.84 × 10 7 651313.72929.19 × 10 7
x37160.195329.34 × 10 8 45920.768748.38 × 10 7 37761.53894.43 × 10 8 10721616.15369.91 × 10 7
x424500.696047.2 × 10 7 581181.10089.51 × 10 7 32661.03877.38 × 10 7 541103.06357.87 × 10 7
x57150.135711.4 × 10 7 45920.819939.9 × 10 7 511043.01367.08 × 10 7 451904155.57528.9 × 10 7
x67150.158491.38 × 10 7 41840.832329.94 × 10 7 491002.7417.08 × 10 7 18236653.90135.58 × 10 7
Table 2. Test results obtained by the four solvers for Problem 2.
Problem 2: HSS, CGD, PDY, MFRM
DIM, SP, then ITER, FVAL, TIME, NORM reported for each of the four solvers in turn
1000x14100.0655114.79 × 10 8 541100.0515378.99 × 10 7 260.0040495.17 × 10 7 260.004285.17 × 10 7
x27160.0049443.08 × 10 7 551120.0207588.77 × 10 7 18380.0145244.14 × 10 7 601220.0307798.53 × 10 8
x38180.0074085.76 × 10 8 801620.0244118.64 × 10 7 5120.0059141.74 × 10 8 5120.0024811.74 × 10 8
x48180.0047865.33 × 10 8 601220.0308949.09 × 10 7 20420.0143788.06 × 10 7 861740.0417376.46 × 10 7
x58180.0061472.74 × 10 7 711440.0325928.41 × 10 7 27560.0259851.81 × 10 7 961940.0447082.04 × 10 7
x68180.0071742.78 × 10 7 711440.0339338.54 × 10 7 25520.0482522.21 × 10 7 891800.0517548.75 × 10 8
5000x14100.0131441.06 × 10 7 581180.084238.02 × 10 7 260.0113781.75 × 10 7 260.004461.75 × 10 7
x27160.0180963.13 × 10 7 551120.107698.7 × 10 7 30620.190221.54 × 10 7 731480.156766.86 × 10 7
x38180.0184381.28 × 10 7 831680.120049.71 × 10 7 5120.0119732.36 × 10 9 5120.0096052.36 × 10 9
x48180.0221495.4 × 10 8 601220.120489 × 10 7 18380.0940735.88 × 10 8 611240.130658.76 × 10 7
x58180.0348739.51 × 10 7 741500.126299.49 × 10 7 22460.0465548.16 × 10 7 771560.200591.08 × 10 7
x68180.0356239.55 × 10 7 741500.135649.58 × 10 7 22460.0793877.45 × 10 7 871760.23464.59 × 10 7
10000x14100.0154671.5 × 10 7 591200.156819.04 × 10 7 260.0126051.21 × 10 7 260.0150881.21 × 10 7
x27160.0301363.13 × 10 7 551120.151558.69 × 10 7 28580.528171.05 × 10 7 721460.333879.65 × 10 7
x38180.0308591.82 × 10 7 851720.527628.77 × 10 7 5120.108383.62 × 10 9 5120.0181151.24 × 10 9
x48180.0621295.41 × 10 8 601220.364798.99 × 10 7 15320.0521875.56 × 10 7 781580.34051.51 × 10 7
x59200.0642091.33 × 10 8 761540.206148.58 × 10 7 25520.0893069 × 10 7 1012040.533945.47 × 10 7
x69200.0674111.33 × 10 8 761540.34438.58 × 10 7 31640.128648.79 × 10 7 731480.448656.23 × 10 7
50000x14100.0687333.34 × 10 7 631280.76668.26 × 10 7 260.033216.32 × 10 8 260.0316816.32 × 10 8
x27160.208613.14 × 10 7 551121.29948.68 × 10 7 21440.309536.18 × 10 10 491000.997312.56 × 10 7
x38180.241234.06 × 10 7 891801.11968.02 × 10 7 6140.257419.31 × 10 9 5120.0611254.01 × 10 10
x48180.407215.42 × 10 8 601221.62538.98 × 10 7 15320.429822.59 × 10 7 611241.23897.49 × 10 7
x59200.256662.98 × 10 8 791601.22689.81 × 10 7 23480.349811.82 × 10 7 1082181.93827.03 × 10 7
x69200.14282.98 × 10 8 791601.91219.81 × 10 7 24500.409737.86 × 10 7 1112242.37377.48 × 10 7
100000x14100.144324.73 × 10 7 641302.86469.34 × 10 7 260.0612325.4 × 10 8 260.0589235.4 × 10 8
x27160.185633.14 × 10 7 551121.60578.68 × 10 7 29600.867966.52 × 10 8 711442.4829.83 × 10 7
x38180.260775.74 × 10 7 901822.62089.07 × 10 7 7160.294911.1 × 10 9 5120.118392.71 × 10 10
x48180.307285.42 × 10 8 601221.89688.98 × 10 7 15320.647252.34 × 10 7 611242.75444.46 × 10 7
x59200.293644.22 × 10 8 811642.48828.87 × 10 7 20420.952543.99 × 10 7 861743.81818.38 × 10 7
x69200.303764.21 × 10 8 811642.16768.87 × 10 7 21440.673737.01 × 10 8 1302624.7415.23 × 10 7
Table 3. Test results obtained by the four solvers for Problem 3.
Problem 3: HSS, CGD, PDY, MFRM
DIM, SP, then ITER, FVAL, TIME, NORM reported for each of the four solvers in turn
1000x14100.020311.97 × 10 8 671360.0333749.14 × 10 7 21440.0165947.52 × 10 7 6140.0033082.51 × 10 20
x24100.0043643.86 × 10 8 591200.0143329.74 × 10 7 19400.0209925.27 × 10 7 561140.0287851.95 × 10 7
x35120.0091664.62 × 10 7 791600.0208389.14 × 10 7 24500.0289468.19 × 10 7 7160.0038575.28 × 10 7
x44100.0033874.2 × 10 7 631280.0153098.43 × 10 7 20420.0235075.35 × 10 7 591190.0343869.47 × 10 7
x55120.0074514.72 × 10 8 751520.0185228.3 × 10 7 23480.0287489.58 × 10 7 621260.0361432.58 × 10 7
x65120.0031564.83 × 10 8 751520.0184018.24 × 10 7 23480.019979.53 × 10 7 851720.0411962.93 × 10 7
5000x14100.006344.4 × 10 8 711440.0568798.37 × 10 7 22460.0321422.34 × 10 22 6140.0136856.96 × 10 7
x24100.0077623.86 × 10 8 591200.0644479.74 × 10 7 19400.0455925.27 × 10 7 561140.13891.95 × 10 7
x36140.0369291.02 × 10 8 831680.0970788.37 × 10 7 26540.0439529.45 × 10 7 8180.0174210
x44100.0134074.21 × 10 7 631280.0663948.43 × 10 7 20420.0511695.35 × 10 7 47960.149424.76 × 10 7
x55120.0102951.06 × 10 7 781580.114869.51 × 10 7 25520.0504115.36 × 10 7 841700.238015.46 × 10 7
x65120.0166841.07 × 10 7 781580.153259.49 × 10 7 25520.0372975.39 × 10 7 581170.228685.38 × 10 7
10000x14100.0226996.23 × 10 8 721460.167959.47 × 10 7 22460.0603963.31 × 10 22 6140.0203647.28 × 10 20
x24100.0141393.86 × 10 8 591200.149019.74 × 10 7 19400.182715.27 × 10 7 561140.34391.95 × 10 7
x36140.0192241.45 × 10 8 841700.696389.47 × 10 7 27560.658266.68 × 10 7 8180.0309733.47 × 10 20
x44100.0119954.21 × 10 7 631280.330748.43 × 10 7 20420.0764165.35 × 10 7 36740.253763.81 × 10 7
x55120.0151411.49 × 10 7 801620.43288.61 × 10 7 25520.100347.58 × 10 7 721460.506422.51 × 10 7
x65120.0228581.49 × 10 7 801620.226928.63 × 10 7 25520.161077.59 × 10 7 771560.495671.88 × 10 7
50000x14100.0526511.39 × 10 7 761541.21238.67 × 10 7 24501.17956.65 × 10 7 7160.176038.43 × 10 20
x24100.0571563.86 × 10 8 591201.04149.74 × 10 7 19400.358565.27 × 10 7 561141.3951.95 × 10 7
x36140.0637953.24 × 10 8 881781.54488.67 × 10 7 27560.411632.37 × 10 20 7160.108834.33 × 10 18
x44100.046994.21 × 10 7 631280.800968.43 × 10 7 20420.29295.35 × 10 7 42861.09359.35 × 10 7
x55120.172353.34 × 10 7 831681.05499.86 × 10 7 26540.466068.48 × 10 7 831682.03173.14 × 10 7
x65120.0610833.34 × 10 7 831681.31789.87 × 10 7 26540.883051.54 × 10 22 861742.07824.3 × 10 7
100000x14100.136291.97 × 10 7 771561.5079.81 × 10 7 19400.848673.35 × 10 20 7160.183.11 × 10 7
x24100.110023.86 × 10 8 591201.17629.74 × 10 7 19400.490615.27 × 10 7 561142.38471.95 × 10 7
x36140.171074.58 × 10 8 891802.36319.81 × 10 7 34701.58755.62 × 10 7 7160.204175.04 × 10 18
x44100.170334.21 × 10 7 631282.01718.43 × 10 7 20420.452785.35 × 10 7 531082.38372.24 × 10 7
x55120.274334.73 × 10 7 851722.40788.92 × 10 7 27560.785599.24 × 10 7 801623.29273.65 × 10 7
x65120.427434.72 × 10 7 851722.16168.94 × 10 7 29600.697635.07 × 10 7 861743.06912.38 × 10 7
Table 4. Test results obtained by the four solvers for Problem 4.
Problem 4: HSS, CGD, PDY, MFRM
DIM, SP, then ITER, FVAL, TIME, NORM reported for each of the four solvers in turn
1000x1130.0224890130.0477190130.0072450130.0009320
x2130.0021720130.0024290130.00375701092190.0294839.28 × 10 7
x3130.0016340130.002620130.0028750130.0022450
x4380.0056284.47 × 10 8 130.0024405120.0076677.88 × 10 9 130.002350
x55120.0040216.49 × 10 7 130.003671029590.0198149.22 × 10 7 130.0014770
x65120.0124387.19 × 10 7 130.001087024490.0128857.25 × 10 7 130.001340
5000x1130.0059650130.0030180130.0022840130.0022720
x2130.0044590130.0022050130.00303301092190.138519.28 × 10 7
x3130.0022320130.0066970130.0032550130.0078350
x4380.0074854.15 × 10 8 130.00250405120.0085256.9 × 10 9 130.0033380
x56140.0150531.46 × 10 8 130.002627025520.025486.94 × 10 7 130.0037990
x66140.0102081.48 × 10 8 130.002331026530.0329456.87 × 10 7 130.0041360
10000x1130.0048320130.0137940130.0036710130.003630
x2130.0037930130.003650130.00400101092190.366489.28 × 10 7
x3130.0049740130.0040410130.0054930130.013890
x4380.0105924.11 × 10 8 130.00937505120.0266026.79 × 10 9 130.0085080
x56140.0217742.07 × 10 8 130.009331027550.050586.22 × 10 7 130.0061770
x66140.0138762.05 × 10 8 130.006686023480.0481028.31 × 10 7 130.007210
50000x1130.0215240130.0178930130.0211820130.0136170
x2130.054730130.0212530130.01071301092191.1489.28 × 10 7
x3130.0211190130.0135690130.0187470130.042270
x4380.0733594.07 × 10 8 130.00953205120.062476.69 × 10 9 130.0171470
x56140.0638644.64 × 10 8 130.0113560130.0263350130.0203160
x66140.103044.7 × 10 8 130.0089210130.027590130.0224850
100000x1130.0354240130.0164410130.0349870130.0244020
x2130.0279290130.033530130.03691401092192.22399.28 × 10 7
x3130.0397060130.0170340130.119070130.0822340
x4380.0835964.07 × 10 8 130.01752905120.0835316.68 × 10 9 130.046280
x56140.253596.57 × 10 8 130.0163060130.0339980130.0390020
x66140.14166.95 × 10 8 130.0314490130.0249310130.0412110
Table 5. Test results obtained by the four solvers for Problem 5.
Problem 5: HSS, CGD, PDY, MFRM
DIM, SP, then ITER, FVAL, TIME, NORM reported for each of the four solvers in turn
1000x14100.0597513.96 × 10 7 821660.0498378.42 × 10 7 26540.0234366.16 × 10 7 681380.0778113.54 × 10 7
x24100.0046654.11 × 10 7 821660.0366948.73 × 10 7 26540.0187916.39 × 10 7 741500.0626044.38 × 10 7
x34100.0033021.09 × 10 7 761540.0589038.81 × 10 7 24500.0149536.76 × 10 7 641300.053195.6 × 10 7
x44100.0048254.1 × 10 7 821660.0368688.71 × 10 7 26540.0135226.38 × 10 7 781580.0600245.4 × 10 7
x54100.0038493.39 × 10 7 811640.0319568.99 × 10 7 26540.0189615.26 × 10 7 751520.0673819.61 × 10 7
x64100.003153.39 × 10 7 811640.0774369.01 × 10 7 26540.0142815.27 × 10 7 541100.0600387.5 × 10 7
5000x14100.0126548.89 × 10 7 851720.155769.65 × 10 7 27560.0817966.9 × 10 7 541100.244528.65 × 10 7
x24100.0114539.23 × 10 7 861740.44078.01 × 10 7 27560.0687747.16 × 10 7 581180.353628.98 × 10 7
x34100.0116822.44 × 10 7 801620.334848.08 × 10 7 25520.0670687.57 × 10 7 43880.255347.14 × 10 7
x44100.0139089.23 × 10 7 861740.243788.01 × 10 7 27560.0881277.16 × 10 7 631280.309157.22 × 10 7
x54100.0118497.6 × 10 7 851720.156418.24 × 10 7 27560.0838185.89 × 10 7 551120.282147.38 × 10 7
x64100.0135197.61 × 10 7 851720.218648.24 × 10 7 27560.179615.9 × 10 7 871760.339129.84 × 10 7
10000x15120.0493446.35 × 10 7 871760.323998.73 × 10 7 28580.338597.32 × 10 7 511040.620717.26 × 10 7
x25120.0542546.59 × 10 7 871760.567319.06 × 10 7 29600.286345.7 × 10 7 491000.491798.68 × 10 7
x34100.0250273.45 × 10 7 811640.819269.14 × 10 7 26540.135395.35 × 10 7 581180.534579.29 × 10 7
x45120.0371076.59 × 10 7 871760.334299.06 × 10 7 29600.184075.69 × 10 7 48980.585128.06 × 10 7
x55120.0297315.43 × 10 7 861740.315979.33 × 10 7 28580.181796.25 × 10 7 781580.70646.91 × 10 7
x65120.0306795.41 × 10 7 861740.341239.33 × 10 7 28580.132666.26 × 10 7 591200.609425.85 × 10 7
50000x16140.134387.17 × 10 7 901821.95811 × 10 6 32661.33550621262.08536.8 × 10 7
x26140.146767.45 × 10 7 911841.48148.3 × 10 7 33680.714510591201.95649.73 × 10 7
x34100.0831757.72 × 10 7 851721.57018.37 × 10 7 26540.69494043881.51664.88 × 10 7
x46140.141687.45 × 10 7 911841.89018.3 × 10 7 33681.1252046941.78684.16 × 10 7
x56140.206316.13 × 10 7 901821.24328.54 × 10 7 31640.601080671361.75096.22 × 10 7
x66140.153716.13 × 10 7 901822.0318.54 × 10 7 31640.96708048981.41237.24 × 10 7
100000x17160.365365.12 × 10 7 921863.42239.05 × 10 7 34702.0212037762.23228.4 × 10 7
x27160.416085.32 × 10 7 921863.41779.39 × 10 7 35722.38690661344.39389.15 × 10 7
x35120.267515.51 × 10 7 861743.17569.47 × 10 7 26541.3934034702.85437.66 × 10 7
x47160.361655.32 × 10 7 921862.93379.39 × 10 7 35721.9980571162.94969.55 × 10 7
x56140.411288.67 × 10 7 911842.35459.66 × 10 7 33682.17470711443.54776.17 × 10 7
x66140.273288.67 × 10 7 911842.34149.66 × 10 7 33682.20560591203.26396.57 × 10 7
Table 6. Test results obtained by the four solvers for Problem 6.
Problem 6: HSS, CGD, PDY, MFRM
DIM, SP, then ITER, FVAL, TIME, NORM reported for each of the four solvers in turn
1000x15120.0302483.21 × 10 8 36740.0323529.48 × 10 7 6140.0066582.09 × 10 7 380.003723.24 × 10 7
x26140.0042534.69 × 10 7 37760.0103117.65 × 10 7 34700.0189386.1 × 10 7 861740.0682927.13 × 10 7
x35120.0040812.21 × 10 7 36740.0104839.07 × 10 7 6140.0050916.55 × 10 7 4100.0046456.55 × 10 8
x410220.0140771.03 × 10 8 37760.0101887.56 × 10 7 33680.0130955.47 × 10 7 631280.07788.47 × 10 7
x56140.0065382.76 × 10 7 36740.0102696.45 × 10 7 48980.0184948.73 × 10 7 801620.102963.08 × 10 7
x66140.0059772.91 × 10 7 36740.0121786.54 × 10 7 521060.028323.6 × 10 7 881780.120788.96 × 10 7
5000x15120.0240247.18 × 10 8 38780.0444928.3 × 10 7 6140.0142414.68 × 10 7 380.0186037.25 × 10 7
x26140.0186346.56 × 10 7 39800.0597816.7 × 10 7 35720.0844095.59 × 10 7 691400.397686.38 × 10 7
x35120.0174534.95 × 10 7 38780.069797.94 × 10 7 7160.028629.35 × 10 8 4100.0177631.46 × 10 7
x48180.0265075.07 × 10 7 39800.0539076.68 × 10 7 36740.153476.96 × 10 7 711440.471279.1 × 10 7
x56140.0191056.22 × 10 7 37760.0631829.02 × 10 7 34700.134527.37 × 10 7 851720.663793.47 × 10 7
x66140.0149686.44 × 10 7 37760.0519719.06 × 10 7 48980.224969.43 × 10 7 831680.553299.71 × 10 7
10000x15120.0232481.02 × 10 7 39800.0986557.34 × 10 7 6140.02896.62 × 10 7 4100.0443465.12 × 10 9
x26140.0273237.49 × 10 7 39800.106919.48 × 10 7 16340.0589028.16 × 10 7 48980.554987.33 × 10 7
x35120.021357 × 10 7 39800.102127.02 × 10 7 7160.0295551.36 × 10 7 4100.0351272.07 × 10 7
x48180.0301694.59 × 10 7 39800.221129.46 × 10 7 35720.15975.62 × 10 7 831680.873436.35 × 10 7
x56140.0282488.8 × 10 7 38780.329037.98 × 10 7 40820.136936.2 × 10 7 831680.804127.74 × 10 7
x66140.0260417.92 × 10 7 38780.23858.04 × 10 7 42860.26047.67 × 10 7 791600.894367.84 × 10 7
50000x15120.159622.27 × 10 7 41840.410186.42 × 10 7 7160.167549.46 × 10 8 4100.140451.15 × 10 8
x26140.129.08 × 10 7 41840.480698.3 × 10 7 28580.679667.57 × 10 7 841702.94662.31 × 10 7
x36140.252838.32 × 10 9 40820.555249.82 × 10 7 8180.11352.69 × 10 7 4100.130514.63 × 10 7
x48180.113634.12 × 10 7 41840.736878.29 × 10 7 26540.348481.44 × 10 7 1052123.78426.72 × 10 7
x57160.11641.05 × 10 8 40820.413446.98 × 10 7 37760.532497.23 × 10 7 861743.01835.09 × 10 7
x67160.111971.04 × 10 8 40820.470396.99 × 10 7 501021.43897.13 × 10 7 881782.91973.2 × 10 7
100000x15120.30683.21 × 10 7 41841.4179.08 × 10 7 7160.187728.58 × 10 7 4100.217461.62 × 10 8
x26140.568869.63 × 10 7 42860.812917.34 × 10 7 35721.04287.61 × 10 7 ----
x36140.17541.18 × 10 8 41840.882658.69 × 10 7 8180.366033.81 × 10 7 4100.185376.55 × 10 7
x48180.26284.34 × 10 7 42861.2857.34 × 10 7 29601.03245.25 × 10 7 921865.95977.9 × 10 7
x57160.217321.48 × 10 8 40820.693989.87 × 10 7 35721.12864.43 × 10 7 1162346.0969.91 × 10 8
x67160.288341.52 × 10 8 40821.40879.87 × 10 7 501021.90376.08 × 10 7 871764.76629.35 × 10 7
Table 7. Test results obtained by the four solvers for Problem 7.
Problem 7: HSS, CGD, PDY, MFRM
DIM, SP, then ITER, FVAL, TIME, NORM reported for each of the four solvers in turn
1000x14100.0121831.68 × 10 8 1513040.186919.17 × 10 7 16340.0149578.17 × 10 7 6140.0123229.16 × 10 8
x25110.0047851.65 × 10 7 1332680.0560329.21 × 10 7 15310.0111097.97 × 10 7 39790.0651698.34 × 10 7
x3130.0015470250.004122018380.0130798.51 × 10 7 130.0024910
x47150.0044767.97 × 10 8 1402820.0465279.19 × 10 7 17360.0119094.98 × 10 7 2735476.08352.24 × 10 10
x510210.0123942.24 × 10 9 1643300.0560949.22 × 10 7 19400.0188644.77 × 10 7 ----
x610210.0100666.38 × 10 7 1643300.0673069.42 × 10 7 19400.0147434.64 × 10 7 3587174.66587.11 × 10 7
5000x14100.117893.76 × 10 8 1583180.215189.8 × 10 7 17360.0733956.85 × 10 7 6140.0428782.05 × 10 7
x25110.0150991.65 × 10 7 1332680.175889.21 × 10 7 15310.0390817.97 × 10 7 39790.257958.34 × 10 7
x36140.0373961.09 × 10 7 250.00964020420.0600588.59 × 10 7 130.0066580
x47150.0321927.96 × 10 8 1402820.598139.19 × 10 7 17360.0685994.99 × 10 7 22945982.10770
x510210.0208277.11 × 10 7 1713440.462599.87 × 10 7 20420.0680944.02 × 10 7 17234521.1120
x612250.0283978.18 × 10 9 1713440.249549.82 × 10 7 20420.116064.01 × 10 7 21142392.84250
10000x14100.0170555.32 × 10 8 1623260.456749.1 × 10 7 17360.190219.69 × 10 7 6140.113612.9 × 10 7
x25110.019811.65 × 10 7 1332680.564739.21 × 10 7 15310.163517.97 × 10 7 39790.547268.34 × 10 7
x36140.0646271.55 × 10 7 250.016405021440.228514.56 × 10 7 130.0160890
x47150.0258517.96 × 10 8 1402820.83449.19 × 10 7 17360.0859034.99 × 10 7 8501701193.65070
x511230.329586.94 × 10 9 1753520.498589.16 × 10 7 20420.100455.69 × 10 7 ----
x610210.0764883.74 × 10 7 1753520.52239.16 × 10 7 20420.103895.68 × 10 7 821656.88050
50000x14100.0784961.19 × 10 7 1693402.50249.73 × 10 7 18380.60568.13 × 10 7 6140.377636.48 × 10 7
x25110.0993971.65 × 10 7 1332682.11599.21 × 10 7 15310.683637.97 × 10 7 39792.57188.34 × 10 7
x36140.151183.46 × 10 7 250.026231023480.610799.93 × 10 7 130.0582230
x47150.103667.96 × 10 8 1402822.17229.19 × 10 7 17360.391424.99 × 10 7 479517.97215.55 × 10 9
x511230.352293.92 × 10 7 1823662.33229.79 × 10 7 20420.628617.76 × 10 7 ----
x612250.186081.01 × 10 8 1823662.48559.81 × 10 7 20420.927.75 × 10 7 ----
100000x14100.207291.68 × 10 7 1733484.64029.03 × 10 7 19400.753354.31 × 10 7 ----
x25110.146821.65 × 10 7 1332682.94359.21 × 10 7 15310.89337.97 × 10 7 ----
x36140.197414.89 × 10 7 250.045153027561.78794.72 × 10 7 ----
x47150.209397.96 × 10 8 1402823.4529.19 × 10 7 17360.927054.99 × 10 7 ----
x512250.609326.89 × 10 9 1863745.02859.09 × 10 7 21441.234.11 × 10 7 ----
x613270.35191.84 × 10 8 1863744.66659.08 × 10 7 21440.934684.11 × 10 7 ----
Table 8. Test results obtained by the four solvers for Problem 8.
DIM SP | HSS ITER/FVAL/TIME/NORM | CGD ITER/FVAL/TIME/NORM | PDY ITER/FVAL/TIME/NORM | MFRM ITER/FVAL/TIME/NORM
1000 x1 | 66/134/0.24751/9.78e-7 | - | - | -
x2 | 27/56/0.034158/9.5e-7 | 808/1618/0.31885/9.99e-7 | - | -
x3 | 1/2/0.003785/0 | 2/5/0.001531/0 | 1/2/0.001223/0 | 1/2/0.006734/0
x4 | 67/136/0.049339/9.88e-7 | - | - | 67/135/0.035797/5.66e-7
x5 | 73/148/0.0461/9.68e-7 | - | - | 64/129/0.065585/9.67e-7
x6 | 72/146/0.04761/9.93e-7 | - | - | 63/127/0.035655/9.71e-7
5000 x1 | 94/190/0.54224/9.91e-7 | - | - | -
x2 | 27/56/0.062239/9.5e-7 | 808/1618/1.2269/9.99e-7 | - | -
x3 | 1/2/0.002594/0 | 2/5/0.012859/0 | 1/2/0.005997/0 | 1/2/0.00257/0
x4 | 80/162/0.38038/9.91e-7 | - | - | 73/147/0.18682/9.29e-7
x5 | 101/204/0.43745/9.83e-7 | - | - | 108/217/0.30825/9.98e-7
x6 | 101/204/0.31897/9.88e-7 | - | - | 108/217/0.28176/9.91e-7
10000 x1 | 110/222/0.75291/9.94e-7 | - | - | -
x2 | 27/56/0.1057/9.5e-7 | 808/1618/3.0397/9.99e-7 | - | -
x3 | 1/2/0.003731/0 | 2/5/0.009001/0 | 1/2/0.007766/0 | 1/2/0.003949/0
x4 | 83/168/0.4733/9.88e-7 | - | - | 74/149/0.4244/9.69e-7
x5 | 117/236/0.76712/9.86e-7 | - | - | 152/305/0.70872/9.97e-7
x6 | 117/236/0.82138/9.85e-7 | - | - | 153/307/0.702/9.95e-7
50000 x1 | 160/322/4.4256/9.95e-7 | - | - | -
x2 | 27/56/0.52296/9.5e-7 | 808/1618/9.7793/9.99e-7 | - | -
x3 | 1/2/0.016082/0 | 2/5/0.069327/0 | 1/2/0.019848/0 | 1/2/0.010272/0
x4 | 84/170/2.3099/9.99e-7 | - | - | 75/151/1.7006/9.31e-7
x5 | 166/334/4.1716/9.99e-7 | - | - | 396/793/8.1566/9.99e-7
x6 | 166/334/4.2682/9.99e-7 | - | - | 396/793/6.9611/9.99e-7
100000 x1 | 189/380/8.5193/9.9e-7 | - | - | -
x2 | 27/56/0.84001/9.5e-7 | 808/1618/17.381/9.99e-7 | - | -
x3 | 1/2/0.033166/0 | 2/5/0.12617/0 | 1/2/0.01928/0 | 1/2/0.029227/0
x4 | 85/172/6.6857/9.79e-7 | - | - | 75/151/3.7352/9.32e-7
x5 | 195/392/11.7601/9.93e-7 | - | - | 623/1247/21.2654/1e-6
x6 | 195/392/7.9429/9.93e-7 | - | - | 624/1249/21.5608/9.99e-7
Table 9. Test results obtained by the four solvers for Problem 9.
DIM SP | HSS ITER/FVAL/TIME/NORM | CGD ITER/FVAL/TIME/NORM | PDY ITER/FVAL/TIME/NORM | MFRM ITER/FVAL/TIME/NORM
1000 x1 | 61/124/0.066702/9.54e-7 | 67/136/0.099921/7.96e-7 | 139/280/0.076923/9.73e-7 | 317/636/0.71829/9.8e-7
x2 | 31/64/0.018303/8.25e-7 | 35/72/0.043891/8.99e-7 | 187/376/0.09677/8.84e-7 | 245/492/0.52225/9.87e-7
x3 | 52/106/0.029299/7.74e-7 | 92/186/0.10058/9.64e-7 | 227/456/0.12142/9.23e-7 | 291/584/0.6824/9.91e-7
x4 | 36/74/0.017934/9.87e-7 | 51/104/0.018526/8.96e-7 | 218/438/0.11408/9.53e-7 | 282/566/0.94291/9.88e-7
x5 | 49/100/0.023288/9.36e-7 | 98/198/0.038489/9.43e-7 | 233/468/0.22177/9.99e-7 | 320/642/1.3394/9.96e-7
x6 | 52/106/0.029525/8.02e-7 | 139/280/0.080835/9.97e-7 | 188/378/0.11787/9.81e-7 | -
5000 x1 | 63/128/0.32998/8.55e-7 | 69/140/0.089325/9.55e-7 | 188/378/0.57537/8.86e-7 | 295/592/3.8478/9.96e-7
x2 | 31/64/0.18052/8.25e-7 | 35/72/0.059832/8.99e-7 | 187/376/1.2573/8.84e-7 | 245/492/2.4301/9.87e-7
x3 | 66/134/0.26308/8.96e-7 | 84/170/0.13572/9.76e-7 | 277/556/0.8517/9.68e-7 | 306/614/3.3586/9.83e-7
x4 | 36/74/0.1387/9.89e-7 | 51/104/0.09048/8.94e-7 | 218/438/1.1212/9.56e-7 | 282/566/2.8587/9.88e-7
x5 | 60/122/0.35995/7.44e-7 | 87/176/0.5147/8.48e-7 | 181/364/0.76997/9.64e-7 | 385/771/4.8374/9.86e-7
x6 | 54/110/0.22668/8.29e-7 | 115/232/0.48916/8.87e-7 | 199/400/0.66965/9.62e-7 | -
10000 x1 | 65/132/0.57593/9.3e-7 | 71/144/0.2565/9.49e-7 | 193/388/2.0299/9.37e-7 | 291/584/6.5063/9.97e-7
x2 | 31/64/0.19486/8.25e-7 | 35/72/0.11428/8.99e-7 | 187/376/1.1546/8.84e-7 | 245/492/5.7929/9.87e-7
x3 | 71/144/0.58288/7.63e-7 | 83/168/0.30524/9.96e-7 | 220/442/2.048/9.39e-7 | 313/628/9.0926/9.72e-7
x4 | 36/74/0.25617/9.89e-7 | 51/104/0.18279/8.94e-7 | 218/438/2.1529/9.56e-7 | 282/566/7.5672/9.88e-7
x5 | 64/130/0.44209/8.52e-7 | 88/178/0.6975/8.93e-7 | 232/466/1.4547/9.66e-7 | 385/771/13.336/9.98e-7
x6 | 55/112/0.31903/8.87e-7 | 121/244/0.69565/9.14e-7 | 202/406/1.9326/9.15e-7 | -
50000 x1 | 56/114/1.443/9.12e-7 | 75/152/1.0305/9.7e-7 | 190/382/4.94/9.44e-7 | 292/586/22.2723/9.85e-7
x2 | 31/64/0.6458/8.25e-7 | 35/72/0.77561/8.99e-7 | 187/376/4.2758/8.84e-7 | 245/492/18.3378/9.87e-7
x3 | 77/156/2.3404/9.2e-7 | 83/168/1.4405/9.09e-7 | 233/468/6.0204/9.9e-7 | 313/628/31.4896/9.86e-7
x4 | 36/74/0.76958/9.89e-7 | 51/104/0.9568/8.94e-7 | 218/438/5.508/9.56e-7 | 282/566/21.0092/9.88e-7
x5 | 77/156/1.839/7.95e-7 | 81/164/1.4263/8.13e-7 | 238/478/6.0323/9.93e-7 | 477/956/66.1866/3.76e-7
x6 | 59/120/1.4313/6.21e-7 | 130/262/2.69/9.73e-7 | 207/416/5.3511/9.34e-7 | -
100000 x1 | 56/114/2.7481/9.28e-7 | 77/156/2.7769/8.94e-7 | 197/396/9.2178/9.98e-7 | 290/582/44.5019/9.93e-7
x2 | 31/64/1.4145/8.25e-7 | 35/72/0.97134/8.99e-7 | 187/376/8.4259/8.84e-7 | 245/492/36.7053/9.87e-7
x3 | 92/186/5.4545/8.29e-7 | 85/172/3.0767/9.13e-7 | 184/370/8.5041/9.73e-7 | 357/716/118.6675/9.98e-7
x4 | 36/74/1.654/9.89e-7 | 51/104/2.0814/8.94e-7 | 218/438/10.4207/9.56e-7 | 282/566/43.7452/9.88e-7
x5 | 69/140/4.4259/8.51e-7 | 83/168/2.9639/8.07e-7 | 257/516/12.9142/9.96e-7 | 331/664/111.4667/9.84e-7
x6 | 59/120/2.7119/8.79e-7 | 141/284/5.8394/8.87e-7 | 206/414/10.0669/9.33e-7 | -
Table 10. Test results obtained by the four solvers for Problem 10.
DIM SP | HSS ITER/FVAL/TIME/NORM | CGD ITER/FVAL/TIME/NORM | PDY ITER/FVAL/TIME/NORM | MFRM ITER/FVAL/TIME/NORM
1000 x1 | 72/146/0.046063/9.69e-7 | 38/78/0.13552/5.47e-7 | 225/452/0.09038/9.68e-7 | 41/84/0.063262/8.97e-7
x2 | 69/140/0.026/9.32e-7 | 37/76/0.024512/6.72e-7 | 205/412/0.080801/9.84e-7 | 46/94/0.064366/8.96e-7
x3 | 77/156/0.027549/9.89e-7 | 40/82/0.012066/8.38e-7 | 152/306/0.060964/9.82e-7 | 47/96/0.065376/7.43e-7
x4 | 63/128/0.02025/9.03e-7 | 42/86/0.013116/6.97e-7 | 226/454/0.089011/9.59e-7 | 39/80/0.05465/8.98e-7
x5 | 73/148/0.024453/9.01e-7 | 37/76/0.016001/9.83e-7 | 204/410/0.082852/9.77e-7 | 38/78/0.054682/6.94e-7
x6 | 93/188/0.029533/9.47e-7 | 45/92/0.037255/9.99e-7 | 328/658/0.14417/9.93e-7 | 47/96/0.1194/8.09e-7
5000 x1 | 69/140/0.18051/8.25e-7 | 37/76/0.072643/7.15e-7 | 208/418/0.38838/9.98e-7 | 37/76/0.2563/7.23e-7
x2 | 75/152/0.13023/8.3e-7 | 37/76/0.044212/4.24e-7 | 208/418/0.65663/9.66e-7 | 38/78/0.2476/9e-7
x3 | 83/168/0.12786/9.58e-7 | 46/94/0.079937/8.06e-7 | 157/316/0.84669/9.77e-7 | 44/90/0.29073/9.45e-7
x4 | 66/134/0.14343/8.4e-7 | 31/64/0.027742/5.17e-7 | 202/406/0.41872/9.74e-7 | 43/88/0.30094/4.26e-7
x5 | 76/154/0.11437/9.54e-7 | 41/84/0.058961/4.94e-7 | 205/412/0.39337/9.48e-7 | 47/96/0.50356/8.33e-7
x6 | 98/198/0.16855/8.42e-7 | 50/102/0.070835/7.33e-7 | 351/704/1.3398/9.62e-7 | 50/102/0.43595/3.95e-7
10000 x1 | 74/150/0.31335/8.07e-7 | 41/84/0.11908/3.11e-7 | 219/440/1.3276/9.71e-7 | 41/84/0.85407/4.69e-7
x2 | 77/156/0.33062/9.84e-7 | 42/86/0.14335/7.17e-7 | 200/402/1.8449/9.98e-7 | 47/96/0.98298/6.67e-7
x3 | 73/148/0.2738/9.79e-7 | 39/80/0.12366/6.46e-7 | 148/298/0.71134/9.58e-7 | 45/92/0.89528/8.1e-7
x4 | 77/156/0.28779/9.88e-7 | 39/80/0.12037/4.26e-7 | 222/446/1.9026/9.6e-7 | 41/84/1.0111/9.98e-7
x5 | 79/160/0.30745/9.61e-7 | 35/72/0.10851/9.48e-7 | 197/396/0.93803/9.86e-7 | 36/74/0.78185/9.23e-7
x6 | 101/204/0.44826/9.82e-7 | 51/104/0.14368/6.93e-7 | 348/698/2.515/9.97e-7 | 42/86/0.88774/5.76e-7
50000 x1 | 80/162/1.7087/7.83e-7 | 41/84/0.49621/4.59e-7 | 204/410/5.1793/9.76e-7 | 40/82/3.1302/4.85e-7
x2 | 81/164/2.2268/9.19e-7 | 41/84/0.46078/2.73e-7 | 201/404/4.2643/9.87e-7 | 46/94/3.4281/6.69e-7
x3 | 76/154/1.742/9.9e-7 | 42/86/0.56419/9.12e-7 | 152/306/2.9933/9.9e-7 | 49/100/3.3687/7.99e-7
x4 | 83/168/1.3065/7.66e-7 | 37/76/0.57785/6.02e-7 | 198/398/4.3892/9.96e-7 | 40/82/2.4788/4.09e-7
x5 | 81/164/1.4513/8.87e-7 | 40/82/0.54615/9.17e-7 | 196/394/3.7983/9.62e-7 | 50/102/3.0168/8.14e-7
x6 | 106/214/2.0765/9.35e-7 | 50/102/0.6377/7.26e-7 | 364/730/7.3788/9.52e-7 | 49/100/2.9609/6.42e-7
100000 x1 | 80/162/3.7128/9.06e-7 | 36/74/1.2562/6.72e-7 | 210/422/8.4059/9.9e-7 | 47/96/6.3303/4.09e-7
x2 | 81/164/2.8053/9.51e-7 | 42/86/1.3892/5.03e-7 | 215/432/7.851/1e-6 | 40/82/5.602/6.53e-7
x3 | 85/172/3.2804/9.1e-7 | 46/94/1.406/3.58e-7 | 155/312/6.2393/9.74e-7 | 48/98/6.8649/7.43e-7
x4 | 83/168/3.2079/9.94e-7 | 43/88/1.207/9.09e-7 | 211/424/8.7274/9.6e-7 | 58/118/7.2979/6.17e-7
x5 | 81/164/3.1215/8.28e-7 | 42/86/1.1724/4.39e-7 | 212/426/8.7216/9.92e-7 | 49/100/6.5793/9.17e-7
x6 | 109/220/3.7274/9.75e-7 | 54/110/1.4392/7e-7 | 373/748/13.9937/9.54e-7 | 50/102/6.8552/4.66e-7
Table 11. Test results obtained by the four solvers for Problem 11.
SP | HSS ITER/FVAL/TIME/NORM | CGD ITER/FVAL/TIME/NORM | PDY ITER/FVAL/TIME/NORM | MFRM ITER/FVAL/TIME/NORM
x1 | 69/139/0.071482/9.81e-7 | 243/487/0.060688/9.56e-7 | 57/115/0.033502/8.4e-7 | 48/97/0.059039/8.64e-7
x2 | 63/127/0.009891/9.41e-7 | 228/457/0.025727/9.77e-7 | 52/105/0.018045/8.24e-7 | 57/115/0.054443/8.39e-7
x3 | 70/141/0.011627/9.32e-7 | 246/493/0.028826/9.75e-7 | 59/119/0.014909/8.08e-7 | 51/103/0.04544/7.37e-7
x4 | 68/137/0.010198/9.27e-7 | 235/471/0.02545/9.49e-7 | 57/115/0.015413/8.21e-7 | 61/123/0.052551/9.35e-7
x5 | 42/85/0.007375/7.69e-7 | 193/387/0.021762/9.83e-7 | 39/79/0.013396/9.28e-7 | 50/101/0.047834/9.42e-7
x6 | 68/137/0.008927/8.96e-7 | 250/501/0.026878/9.68e-7 | 56/113/0.014667/8.47e-7 | 61/123/0.050977/7.61e-7
Table 12. Numerical results of HSS and modified self-adaptive conjugate gradient (MSCG) methods with the initial points randomly generated.
S/No. | HSS MSE/ITER/CPU | MSCG MSE/ITER/CPU
1 | 2.23e-6/79/178.11 | 4.61e-6/143/309.23
2 | 3.44e-6/82/198.38 | 6.46e-6/150/359.03
3 | 2.84e-6/71/175.06 | 7.43e-6/163/386.78
4 | 4.71e-6/72/160.22 | 1.49e-5/139/305.39
5 | 1.99e-6/76/184.17 | 4.07e-6/144/341.41
6 | 1.68e-6/83/197.36 | 3.12e-6/148/342.94
7 | 3.92e-6/65/158.09 | 6.43e-6/162/396.00
8 | 2.23e-6/79/173.63 | 4.43e-6/145/314.28
9 | 1.59e-6/71/170.02 | 3.94e-6/152/365.58
10 | 5.72e-6/65/124.77 | 9.63e-6/160/311.38
11 | 3.71e-6/67/152.91 | 4.44e-6/141/306.33
12 | 2.09e-6/70/148.05 | 3.58e-6/145/307.59
13 | 2.39e-6/72/155.38 | 4.93e-6/146/287.72
14 | 1.96e-6/72/148.69 | 5.41e-6/151/308.14
15 | 2.36e-6/79/177.47 | 4.61e-6/143/318.89
Average | 2.86e-6/73.53/166.82 | 5.87e-6/148.80/330.71
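The MSE column in Table 12 measures how faithfully each method recovers the original sparse signal: the mean squared error between the true signal and its reconstruction. A minimal sketch of this metric is given below; the function name and example vectors are illustrative, not taken from the paper's code.

```python
import numpy as np

def mse(x_true, x_rec):
    """Mean squared error between the original signal and its reconstruction."""
    x_true = np.asarray(x_true, dtype=float)
    x_rec = np.asarray(x_rec, dtype=float)
    return float(np.mean((x_true - x_rec) ** 2))

# A sparse "signal" and an imperfect reconstruction for illustration.
x = np.array([0.0, 1.5, 0.0, -2.0])
x_hat = np.array([0.0, 1.4, 0.1, -2.0])
print(mse(x, x_hat))
```

Smaller values indicate better recovery, so the averaged MSE of 2.86e-6 for HSS versus 5.87e-6 for MSCG favors the proposed method.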

Awwal, A.M.; Wang, L.; Kumam, P.; Mohammad, H.; Watthayu, W. A Projection Hestenes–Stiefel Method with Spectral Parameter for Nonlinear Monotone Equations and Signal Processing. Math. Comput. Appl. 2020, 25, 27. https://0-doi-org.brum.beds.ac.uk/10.3390/mca25020027

