Two Efficient Derivative-Free Iterative Methods for Solving Nonlinear Systems

School of Mathematics and Physics, Bohai University, Jinzhou 121013, China
* Author to whom correspondence should be addressed.
Submission received: 16 October 2015 / Revised: 26 January 2016 / Accepted: 27 January 2016 / Published: 1 February 2016
(This article belongs to the Special Issue Numerical Algorithms for Solving Nonlinear Equations and Systems)

Abstract: In this work, two multi-step derivative-free iterative methods are presented for solving systems of nonlinear equations. The new methods have high computational efficiency and low computational cost. The order of convergence of the new methods is proved by a development of an inverse first-order divided difference operator. Their computational efficiency is compared with that of existing methods, and numerical experiments support the theoretical results. The experimental results show that the new methods remarkably reduce the computing time in high-precision computing.

1. Introduction

Finding the solutions of systems of nonlinear equations $F(x) = 0$ is an important problem with wide applications in science and engineering, where $F: D \subseteq \mathbb{R}^m \to \mathbb{R}^m$ and $D$ is an open convex domain in $\mathbb{R}^m$. Many efficient methods have been proposed for solving systems of nonlinear equations; see, for example, [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18] and the references therein. The best known method is the Steffensen method [1,2], which is given by
$$y^{(k)} = \psi_1(x^{(k)}, w^{(k)}) = x^{(k)} - [w^{(k)}, x^{(k)}; F]^{-1} F(x^{(k)}) \qquad (1)$$
where $w^{(k)} = x^{(k)} + F(x^{(k)})$, $[w^{(k)}, x^{(k)}; F]^{-1}$ is the inverse of $[w^{(k)}, x^{(k)}; F]$, and $[w^{(k)}, x^{(k)}; F]$ is a first-order divided difference of $F$ on $D$. Equation (1) does not require any derivative of the system $F$ during the iteration.
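As an illustration, the Steffensen iteration (1) can be sketched in a few lines of Python. This is a minimal double-precision sketch (the experiments in Section 4 use 2048-digit Maple arithmetic), with the divided difference formed componentwise as in Section 3; the helper names are ours.

```python
import numpy as np

def divided_difference(F, y, x):
    # Componentwise first-order divided difference [y, x; F]: column j
    # differences F in its j-th argument only (see Section 3).
    m = len(x)
    M = np.empty((m, m))
    for j in range(m):
        upper = np.concatenate((y[:j + 1], x[j + 1:]))
        lower = np.concatenate((y[:j], x[j:]))
        M[:, j] = (F(upper) - F(lower)) / (y[j] - x[j])
    return M

def steffensen(F, x, tol=1e-10, kmax=100):
    # Steffensen's method (1): x_{k+1} = x_k - [w_k, x_k; F]^{-1} F(x_k)
    # with w_k = x_k + F(x_k); no derivative of F is ever evaluated.
    for _ in range(kmax):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        w = x + Fx
        x = x - np.linalg.solve(divided_difference(F, w, x), Fx)
    return x
```

For instance, on Example 1 of Section 4 this sketch converges from $x^{(0)} = (0.5, 0.5)^T$ to the solution near the origin.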
To reduce the computational time and improve the efficiency index of the Steffensen method, many modified high-order methods have been proposed in the literature; see [3,4,5,6,7,8,9,10,11,12,13,14] and the references therein. Liu et al. [3] obtained a fourth-order derivative-free method for solving systems of nonlinear equations, which can be written as
$$\begin{aligned} y^{(k)} &= \psi_1(x^{(k)}, w^{(k)}) \\ x^{(k+1)} &= \psi_2(x^{(k)}, w^{(k)}, y^{(k)}) = y^{(k)} - [y^{(k)}, x^{(k)}; F]^{-1}\left([y^{(k)}, x^{(k)}; F] - [y^{(k)}, w^{(k)}; F] + [w^{(k)}, x^{(k)}; F]\right)[y^{(k)}, x^{(k)}; F]^{-1} F(y^{(k)}) \end{aligned} \qquad (2)$$
where w ( k ) = x ( k ) + F ( x ( k ) ) . Grau-Sánchez et al. [4,5] developed some efficient derivative-free methods. One of the methods is the following sixth-order method
$$\begin{aligned} y^{(k)} &= x^{(k)} - [w^{(k)}, s^{(k)}; F]^{-1} F(x^{(k)}) \\ z^{(k)} &= y^{(k)} - \left(2[x^{(k)}, y^{(k)}; F] - [w^{(k)}, s^{(k)}; F]\right)^{-1} F(y^{(k)}) \\ x^{(k+1)} &= \psi_3(w^{(k)}, s^{(k)}, x^{(k)}, y^{(k)}, z^{(k)}) = z^{(k)} - \left(2[x^{(k)}, y^{(k)}; F] - [w^{(k)}, s^{(k)}; F]\right)^{-1} F(z^{(k)}) \end{aligned} \qquad (3)$$
where $w^{(k)} = x^{(k)} + F(x^{(k)})$ and $s^{(k)} = x^{(k)} - F(x^{(k)})$. It should be noted that Equations (2) and (3) each require two LU decompositions per iteration. Some derivative-free methods are also discussed by Ezquerro et al. in [6] and by Wang et al. in [7,8]. The above multi-step derivative-free iterative methods can save computing time in high-precision computing. Therefore, it is meaningful to study multi-step derivative-free iterative methods.
It is well known that we can improve the efficiency index of an iterative method, and reduce the computational time of the iterative process, by reducing the computational cost per iteration. There are many ways to do so; in this paper, we reduce the computational cost by reducing the number of LU (lower-upper) decompositions per iteration. Two new derivative-free iterative methods for solving systems of nonlinear equations are proposed in Section 2, where we also prove their local convergence order. The feature of the new methods is that the LU decomposition is computed only once per iteration. Section 3 compares the efficiency of the different methods by the computational efficiency index [10]. Section 4 illustrates the convergence behavior of our methods through numerical examples. Section 5 is a short conclusion.

2. The New Methods and Analysis of Convergence

Using the central divided difference $[x^{(k)} + F(x^{(k)}),\, x^{(k)} - F(x^{(k)}); F]$, we propose the following iterative scheme
$$\begin{aligned} y^{(k)} &= x^{(k)} - [w^{(k)}, s^{(k)}; F]^{-1} F(x^{(k)}) \\ x^{(k+1)} &= \psi_4(x^{(k)}, w^{(k)}, s^{(k)}, y^{(k)}) = y^{(k)} - \mu_1 F(y^{(k)}) \end{aligned} \qquad (4)$$
where $\mu_1 = \left(3I - 2[w^{(k)}, s^{(k)}; F]^{-1}[y^{(k)}, x^{(k)}; F]\right)[w^{(k)}, s^{(k)}; F]^{-1}$, $w^{(k)} = x^{(k)} + F(x^{(k)})$, $s^{(k)} = x^{(k)} - F(x^{(k)})$, and $I$ is the identity matrix. Furthermore, if we define $z^{(k)} = \psi_4(x^{(k)}, w^{(k)}, s^{(k)}, y^{(k)})$, then the order of convergence of the following method is six:
$$x^{(k+1)} = \psi_5(x^{(k)}, w^{(k)}, s^{(k)}, y^{(k)}, z^{(k)}) = z^{(k)} - \mu_1 F(z^{(k)}) \qquad (5)$$
Compared with Equation (4), Equation (5) requires one additional function evaluation $F(z^{(k)})$. In order to simplify the calculation, the new Equation (4) can be written as
$$\begin{aligned} [w^{(k)}, s^{(k)}; F]\,\gamma^{(k)} &= F(x^{(k)}), \qquad y^{(k)} = x^{(k)} - \gamma^{(k)} \\ [w^{(k)}, s^{(k)}; F]\,\delta_1^{(k)} &= F(y^{(k)}) \\ \delta_2^{(k)} &= [y^{(k)}, x^{(k)}; F]\,\delta_1^{(k)} \\ [w^{(k)}, s^{(k)}; F]\,\delta_3^{(k)} &= \delta_2^{(k)} \\ x^{(k+1)} &= y^{(k)} - 3\delta_1^{(k)} + 2\delta_3^{(k)} \end{aligned} \qquad (6)$$
A similar strategy can be used for Equation (5). For Equations (4) and (5), we have the following convergence analysis.
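Scheme (6) maps directly onto code: the LU factorization of $[w^{(k)}, s^{(k)}; F]$ is computed once and then reused for all three linear solves of the step. The sketch below (double precision, not the paper's 2048-digit Maple setting; helper names are ours, and the componentwise divided difference of Section 3 is assumed) illustrates the fourth-order method (4):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def divided_difference(F, y, x):
    # componentwise first-order divided difference [y, x; F] (see Section 3)
    m = len(x)
    M = np.empty((m, m))
    for j in range(m):
        upper = np.concatenate((y[:j + 1], x[j + 1:]))
        lower = np.concatenate((y[:j], x[j:]))
        M[:, j] = (F(upper) - F(lower)) / (y[j] - x[j])
    return M

def psi4(F, x, tol=1e-10, kmax=50):
    # Fourth-order method (4), arranged as in scheme (6): the LU decomposition
    # of B = [w, s; F] is computed once per iteration and reused three times.
    for _ in range(kmax):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        w, s = x + Fx, x - Fx
        B = lu_factor(divided_difference(F, w, s))  # the single LU per step
        y = x - lu_solve(B, Fx)                     # B gamma = F(x)
        d1 = lu_solve(B, F(y))                      # B delta_1 = F(y)
        d2 = divided_difference(F, y, x) @ d1       # delta_2 = [y, x; F] delta_1
        d3 = lu_solve(B, d2)                        # B delta_3 = delta_2
        x = y - 3*d1 + 2*d3                         # x_{k+1} = y - 3 delta_1 + 2 delta_3
    return x
```

On Example 1 of Section 4 the sketch converges from $x^{(0)} = (0.5, 0.5)^T$; applying the same $\mu_1$-correction once more to $z^{(k)}$, as in Equation (5), would give $\psi_5$ at the cost of one extra function evaluation.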
Theorem 1. Let $\alpha \in \mathbb{R}^m$ be a solution of the system $F(x) = 0$ and let $F: D \subseteq \mathbb{R}^m \to \mathbb{R}^m$ be sufficiently differentiable in an open neighborhood $D$ of α. Then, for an initial approximation sufficiently close to α, the convergence order of iterative Equation (4) is four, with the error equation
$$\varepsilon = (4A_2^2 - A_3 - A_3 F'(\alpha)^2) E e^2 + A_2 E^2 + O(e^5) \qquad (7)$$
where $e = x^{(k)} - \alpha$ and $E = y^{(k)} - \alpha$. Iterative Equation (5) is of sixth-order convergence and satisfies the error equation
$$e_{n+1} = 2A_2 E \varepsilon - (A_3 + A_3 F'(\alpha)^2 - 4A_2^2) e^2 \varepsilon + O(e^7) \qquad (8)$$
where $e_{n+1} = x^{(k+1)} - \alpha$.
Proof. The first-order divided difference operator of $F$, as a mapping $[\cdot,\cdot; F]: D \times D \subseteq \mathbb{R}^m \times \mathbb{R}^m \to L(\mathbb{R}^m)$ (see [5,10,11]), is given by
$$[x + h, x; F] = \int_0^1 F'(x + t h)\, dt, \quad (x, h) \in \mathbb{R}^m \times \mathbb{R}^m \qquad (9)$$
Expanding $F'(x + th)$ in a Taylor series at the point $x$ and integrating, we obtain
$$\int_0^1 F'(x + t h)\, dt = F'(x) + \tfrac{1}{2} F''(x) h + \tfrac{1}{6} F'''(x) h^2 + O(h^3) \qquad (10)$$
Developing $F(x^{(k)})$ in a neighborhood of α and assuming that $\Gamma = [F'(\alpha)]^{-1}$ exists, we have
$$F(x^{(k)}) = F'(\alpha)\left[e + A_2 e^2 + A_3 e^3 + A_4 e^4 + A_5 e^5 + O(e^6)\right] \qquad (11)$$
where $A_i = \frac{1}{i!} \Gamma F^{(i)}(\alpha) \in L_i(\mathbb{R}^m, \mathbb{R}^m)$. The derivatives of $F(x^{(k)})$ can be written as
$$F'(x^{(k)}) = F'(\alpha)\left[I + 2A_2 e + 3A_3 e^2 + 4A_4 e^3 + 5A_5 e^4 + O(e^5)\right] \qquad (12)$$
$$F''(x^{(k)}) = F'(\alpha)\left[2A_2 + 6A_3 e + 12A_4 e^2 + 20A_5 e^3 + O(e^4)\right] \qquad (13)$$
$$F'''(x^{(k)}) = F'(\alpha)\left[6A_3 + 24A_4 e + 60A_5 e^2 + O(e^3)\right] \qquad (14)$$
Setting $y = x + h$ and $E = y - \alpha$, we have $h = E - e$. Substituting the expressions (12)-(14) into Equation (10), we get
$$[x^{(k)}, y^{(k)}; F] = F'(\alpha)\left(I + A_2(E + e) + A_3(E^2 + E e + e^2) + O(e^5)\right) \qquad (15)$$
Noting that $w^{(k)} - \alpha = e + F(x^{(k)})$ and $s^{(k)} - \alpha = e - F(x^{(k)})$, we replace $E$ by $e + F(x^{(k)})$ and $e$ by $e - F(x^{(k)})$ in Equation (15) and obtain
$$[w^{(k)}, s^{(k)}; F] = F'(\alpha)\left(I + 2A_2 e + (3A_3 + A_3 F'(\alpha)^2) e^2 + O(e^3)\right) = F'(\alpha) D(e) + O(e^3) \qquad (16)$$
where $D(e) = I + 2A_2 e + (3A_3 + A_3 F'(\alpha)^2) e^2$ and $I$ is the identity matrix. Using Equation (16), we find
$$[w^{(k)}, s^{(k)}; F]^{-1} = D(e)^{-1} \Gamma + O(e^3) \qquad (17)$$
Then, we write the inverse of $D(e)$ in the form (see [12,13])
$$D(e)^{-1} = I + X_2 e + X_3 e^2 + O(e^3) \qquad (18)$$
such that $X_2$ and $X_3$ verify
$$D(e) D(e)^{-1} = D(e)^{-1} D(e) = I \qquad (19)$$
Solving the system Equation (19), we obtain
$$X_2 = -2A_2 \qquad (20)$$
$$X_3 = 4A_2^2 - (3A_3 + A_3 F'(\alpha)^2) \qquad (21)$$
then,
$$[w^{(k)}, s^{(k)}; F]^{-1} = \left(I - 2A_2 e + \left(4A_2^2 - (3A_3 + A_3 F'(\alpha)^2)\right) e^2 + O(e^3)\right) \Gamma \qquad (22)$$
$$E = y^{(k)} - \alpha = e - [w^{(k)}, s^{(k)}; F]^{-1} F(x^{(k)}) = A_2 e^2 + O(e^3) \qquad (23)$$
Similar to Equation (11), we have
$$F(y^{(k)}) = F'(\alpha)\left[E + A_2 E^2 + O(E^3)\right] \qquad (24)$$
From Equations (15) and (22)–(24), we get
$$\mu_1 = \left(3I - 2[w^{(k)}, s^{(k)}; F]^{-1}[y^{(k)}, x^{(k)}; F]\right)[w^{(k)}, s^{(k)}; F]^{-1} = \left(I - 2A_2 E + (A_3 + A_3 F'(\alpha)^2 - 4A_2^2) e^2\right) \Gamma \qquad (25)$$
Taking into account Equations (4), (24) and (25), we obtain
$$\begin{aligned} \varepsilon &= \psi_4(x^{(k)}, w^{(k)}, s^{(k)}, y^{(k)}) - \alpha = E - \mu_1 F(y^{(k)}) \\ &= E - \left(I - 2A_2 E + (A_3 + A_3 F'(\alpha)^2 - 4A_2^2) e^2\right)\left(E + A_2 E^2 + O(E^3)\right) \\ &= (4A_2^2 - A_3 - A_3 F'(\alpha)^2) E e^2 + A_2 E^2 + O(e^5) \end{aligned} \qquad (26)$$
This means that Equation (4) is of fourth-order convergence.
Therefore, from Equations (5) and (24)–(26), we obtain the error equation:
$$\begin{aligned} e_{n+1} &= x^{(k+1)} - \alpha = \varepsilon - \mu_1 F(z^{(k)}) \\ &= \varepsilon - \left(I - 2A_2 E + (A_3 + A_3 F'(\alpha)^2 - 4A_2^2) e^2\right)\left(\varepsilon + O(\varepsilon^2)\right) \\ &= 2A_2 E \varepsilon - (A_3 + A_3 F'(\alpha)^2 - 4A_2^2) e^2 \varepsilon + O(e^7) \end{aligned} \qquad (27)$$
This means that Equation (5) is of sixth-order convergence. ☐

3. Computational Efficiency

The classical efficiency index $E = \rho^{1/c}$ (see [9]) is the most used index, but not the only one. We find that iterative methods with the same classical efficiency index can have different properties in actual applications. The reason is that the number of functional evaluations is not the only factor that influences the efficiency of an iterative method. The numbers of matrix products, scalar products, LU decompositions, and resolutions of triangular linear systems also play an important role in evaluating the real efficiency of an iterative method. In this paper, the computational efficiency index (CEI) [10] is used to compare the efficiency of the iterative methods. Some discussions on the CEI can be found in [4,5,6,7]. The CEI of the iterative methods $\psi_i$ ($i = 1, 2, \ldots, 5$) is given by
$$CEI_i(\mu, m) = \rho_i^{1/C_i(\mu, m)}, \quad i = 1, 2, 3, 4, 5 \qquad (28)$$
where $\rho_i$ is the order of convergence of the method and $C_i(\mu, m)$ is its computational cost, given by
$$C_i(\mu, m) = a_i(m)\mu + p_i(m) \qquad (29)$$
where $a_i(m)$ denotes the number of evaluations of scalar functions used in the evaluations of $F$ and $[x, y; F]$, and $p_i(m)$ represents the operational cost per iteration. To express the value of Equation (29) in terms of products, a ratio $\mu > 0$ between products (and divisions) and evaluations of functions is required; see [5,10]. We must add $m$ products for the multiplication of a vector by a scalar and $m^2$ products for a matrix-vector multiplication. To compute an inverse linear operator, we need $(m^3 - m)/3$ products and divisions in the LU decomposition and $m^2$ products and divisions for solving the two triangular linear systems. To compute a first-order divided difference, we need $m(m-1)$ scalar functional evaluations and $m^2$ quotients. The first-order divided difference $[y, x; F]$ of $F$ is given by
$$[y, x; F]_{ij} = \frac{F_i(y_1, \ldots, y_{j-1}, y_j, x_{j+1}, \ldots, x_m) - F_i(y_1, \ldots, y_{j-1}, x_j, x_{j+1}, \ldots, x_m)}{y_j - x_j}$$
where $1 \le i, j \le m$, $x = (x_1, \ldots, x_m)$ and $y = (y_1, \ldots, y_m)$ (see [9]). Based on Equations (28) and (29), Table 1 shows the computational cost of the different methods.
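A useful sanity check on this componentwise definition is the secant relation $[y, x; F](y - x) = F(y) - F(x)$, which follows because the column differences telescope. A small sketch (the quadratic test function is ours, purely illustrative):

```python
import numpy as np

def divided_difference(F, y, x):
    # [y, x; F] formed column by column, exactly as in the definition above:
    # column j differs from column j-1 only in the j-th argument.
    m = len(x)
    M = np.empty((m, m))
    for j in range(m):
        upper = np.concatenate((y[:j + 1], x[j + 1:]))
        lower = np.concatenate((y[:j], x[j:]))
        M[:, j] = (F(upper) - F(lower)) / (y[j] - x[j])
    return M

F = lambda v: np.array([v[0]**2 + v[1], np.sin(v[0]) + v[1]**2])
x = np.array([0.3, 0.7])
y = np.array([0.4, 0.5])
M = divided_difference(F, y, x)
print(np.allclose(M @ (y - x), F(y) - F(x)))  # True: the secant relation holds
```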
Table 1. Computational cost of the iterative methods.

| Methods | ρ | a(m) | p(m) | C(μ, m) |
|---|---|---|---|---|
| ψ1 | 2 | m(m+1) | (m^3 - m)/3 + 2m^2 | C1 = m(m+1)μ + (m^3 - m)/3 + 2m^2 |
| ψ2 | 4 | 3m^2 | 2(m^3 - m)/3 + 7m^2 | C2 = 3m^2 μ + 2(m^3 - m)/3 + 7m^2 |
| ψ3 | 6 | m(2m+3) | 2(m^3 - m)/3 + 6m^2 | C3 = m(2m+3)μ + 2(m^3 - m)/3 + 6m^2 |
| ψ4 | 4 | 2m(m+1) | (m^3 - m)/3 + 6m^2 + 2m | C4 = 2m(m+1)μ + (m^3 - m)/3 + 6m^2 + 2m |
| ψ5 | 6 | m(2m+3) | (m^3 - m)/3 + 9m^2 + 4m | C5 = m(2m+3)μ + (m^3 - m)/3 + 9m^2 + 4m |
From Table 1, we can see that our methods $\psi_4$ and $\psi_5$ need fewer LU decompositions than the methods $\psi_2$ and $\psi_3$. The computational costs of the fourth-order methods satisfy
$$C_4 < C_2 \quad \text{for } m \ge 2 \qquad (30)$$
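The cost expressions of Table 1 are easy to evaluate programmatically. As a consistency check, with $(m, \mu) = (199, 1)$, the values used for Example 3 in Section 4, they reproduce the $C$ column of Table 5:

```python
# Computational costs C_i(mu, m) from Table 1.
def C1(mu, m): return m*(m + 1)*mu + (m**3 - m)/3 + 2*m**2
def C2(mu, m): return 3*m**2*mu + 2*(m**3 - m)/3 + 7*m**2
def C3(mu, m): return m*(2*m + 3)*mu + 2*(m**3 - m)/3 + 6*m**2
def C4(mu, m): return 2*m*(m + 1)*mu + (m**3 - m)/3 + 6*m**2 + 2*m
def C5(mu, m): return m*(2*m + 3)*mu + (m**3 - m)/3 + 9*m**2 + 4*m

def CEI(rho, C):
    # computational efficiency index, Equation (28)
    return rho**(1.0/C)

# (m, mu) = (199, 1), as in Example 3, reproduces the C column of Table 5:
print([C(1, 199) for C in (C1, C2, C3, C4, C5)])
# [2745802.0, 5649610.0, 5571005.0, 2944404.0, 3063804.0]
```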
We use the following expressions [10] to compare the CEI of different methods
$$R_{i,j} = \frac{\ln CEI_i}{\ln CEI_j} = \frac{\ln(\rho_i)\, C_j(\mu, m)}{\ln(\rho_j)\, C_i(\mu, m)}, \quad i, j = 1, 2, \ldots, 5 \qquad (31)$$
For $R_{i,j} > 1$, the iterative method $\psi_i$ is more efficient than $\psi_j$.
Using the C E I of the iterative methods, we obtain the following theorem:
Theorem 2. 1. For the fourth-order methods, we have $CEI_4 > CEI_2$ for all $m \ge 2$ and $\mu > 0$.
2. For the sixth-order methods, we have $CEI_5 > CEI_3$ for all $m \ge 11$ and $\mu > 0$.
Proof. 1. From Table 1, we note that the methods $\psi_2$ and $\psi_4$ have the same order $\rho_2 = \rho_4 = 4$. Based on Equations (29) and (30), we get that $CEI_4 > CEI_2$ for all $m \ge 2$ and $\mu > 0$.
2. The methods $\psi_3$ and $\psi_5$ have the same order and the same number of functional evaluations. The relation between $\psi_5$ and $\psi_3$ is given by
$$R_{5,3} = \frac{\ln(\rho_5)\, C_3(\mu, m)}{\ln(\rho_3)\, C_5(\mu, m)} = \frac{m(2m+3)\mu + 2(m^3 - m)/3 + 6m^2}{m(2m+3)\mu + (m^3 - m)/3 + 9m^2 + 4m} \qquad (32)$$
Subtracting the denominator from the numerator of Equation (32), we have
$$\frac{1}{3} m (m^2 - 9m - 13) \qquad (33)$$
Equation (33) is positive for $m > 10.2662$. Thus, we obtain that $CEI_5 > CEI_3$ for all $m \ge 11$ and $\mu > 0$. ☐
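The threshold can also be checked numerically: for every μ, the difference $C_3 - C_5$ equals the cubic (33), which changes sign between $m = 10$ and $m = 11$.

```python
# C3(mu, m) - C5(mu, m) = m*(m**2 - 9*m - 13)/3 for every mu, so the sign of
# this cubic decides which of the two sixth-order methods is cheaper.
diff = lambda m: m*(m**2 - 9*m - 13)/3
print(diff(10), diff(11))  # -10.0 33.0
```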
Then, we compare the CEI of iterative methods with different convergence orders in the following theorem:
Theorem 3. We have: 1. $CEI_5 > CEI_4$ for all $m \ge 2$ and $\mu > \dfrac{m^2 \ln(2/3) + 18m \ln(4/3) + 17\ln 2 - 5\ln 3}{6\left(m \ln(3/2) + \ln(3/4)\right)}$.
2. $CEI_4 > CEI_1$ for all $m \ge 8$ and $\mu > 0$.
Proof. 1. From expression (31) and Table 1, we get the following relation between $\psi_4$ and $\psi_5$:
$$R_{5,4} = \frac{\ln(\rho_5)\, C_4(\mu, m)}{\ln(\rho_4)\, C_5(\mu, m)} = \frac{\ln 6}{\ln 4} \cdot \frac{2m(m+1)\mu + (m^3 - m)/3 + 6m^2 + 2m}{m(2m+3)\mu + (m^3 - m)/3 + 9m^2 + 4m} \qquad (34)$$
We consider the boundary $R_{5,4} = 1$, which is given by the following equation:
$$\mu = H_{5,4}(m) = \frac{m^2 \ln(2/3) + 18m \ln(4/3) + 17\ln 2 - 5\ln 3}{6\left(m \ln(3/2) + \ln(3/4)\right)} \qquad (35)$$
with $CEI_5 > CEI_4$ above it (see Figure 1). The boundary curve (35) cuts the axes at the points $(m, \mu) = (13.888, 0)$ and $(2, 4.7859)$. Thus, we get that $CEI_5 > CEI_4$, since $R_{5,4} > 1$, for all $m \ge 2$ and $\mu > H_{5,4}(m)$.
2. The relation between ψ 1 and ψ 4 is given by
$$R_{4,1} = \frac{\ln(\rho_4)\, C_1(\mu, m)}{\ln(\rho_1)\, C_4(\mu, m)} = \frac{\ln 4}{\ln 2} \cdot \frac{m(m+1)\mu + (m^3 - m)/3 + 2m^2}{2m(m+1)\mu + (m^3 - m)/3 + 6m^2 + 2m} \qquad (36)$$
Subtracting the denominator from the numerator of Equation (36), we have
$$\frac{1}{3} m (m^2 - 6m - 7) \qquad (37)$$
Equation (37) is positive for $m > 7$. Thus, we obtain that $CEI_4 > CEI_1$ for all $m \ge 8$ and $\mu > 0$. ☐
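Both intercepts of the boundary curve (35) quoted in the proof can be verified with a short script:

```python
from math import log

def H54(m):
    # boundary mu = H_{5,4}(m) of Equation (35); CEI5 > CEI4 above this curve
    num = m**2*log(2/3) + 18*m*log(4/3) + 17*log(2) - 5*log(3)
    den = 6*(m*log(3/2) + log(3/4))
    return num/den

print(round(H54(2), 3))         # 4.786, the mu-intercept at m = 2
print(abs(H54(13.888)) < 1e-3)  # True: the m-axis intercept
```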
Figure 1. The boundary function $H_{5,4}$ in the $(m, \mu)$ plane.

4. Numerical Examples

In this section, we compare the performance of the related methods by numerical experiments. The experiments were carried out in the Maple 14 computer algebra system with 2048 digits. The computer specifications are: Microsoft Windows 7, Intel(R) Core(TM) i3-2350M CPU, 1.79 GHz, with 2 GB of RAM.
According to Equation (29), the factor μ is obtained by expressing the cost of the evaluation of elementary functions in terms of products [15]. Table 2 gives an estimation of the cost of the elementary functions in amounts of equivalent products, where the running time of one product is measured in milliseconds.
Table 2. Estimation of the computational cost of elementary functions computed with Maple 14 on an Intel(R) Core(TM) i3-2350M CPU, 1.79 GHz (32-bit machine), Microsoft Windows 7 Professional, where x = √3 - 1 and y = √5.

| Digits | x·y | x/y | √x | exp(x) | ln(x) | sin(x) | cos(x) | arctan(x) |
|---|---|---|---|---|---|---|---|---|
| 2048 | 0.109 ms | 1 | 5 | 53 | 12 | 112 | 110 | 95 |
Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8 show the following information for the methods $\psi_i$ ($i = 1, 2, \ldots, 5$): the number of iterations $k$ needed to converge to the solution, the norm of the function $F(x^{(k)})$ and the value of the stopping factor at the last step, the computational cost $C$, the computational time Time(s), the computational efficiency index CEI and the computational order of convergence ρ. Using the command time() in Maple 14, we can obtain the computational time of the different methods. The computational order of convergence ρ is defined by [16]:
$$\rho \approx \frac{\ln\left(\|x^{(k+1)} - x^{(k)}\| / \|x^{(k)} - x^{(k-1)}\|\right)}{\ln\left(\|x^{(k)} - x^{(k-1)}\| / \|x^{(k-1)} - x^{(k-2)}\|\right)} \qquad (38)$$
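Equation (38) only needs the last four iterates. A small helper, here fed a synthetic quadratically convergent error sequence rather than data from the tables:

```python
import numpy as np

def computational_order(xs):
    # rho from Equation (38), estimated from the last four iterates xs[-4:]
    d = [np.linalg.norm(np.atleast_1d(np.subtract(a, b)))
         for a, b in zip(xs[-3:], xs[-4:-1])]
    return np.log(d[2]/d[1]) / np.log(d[1]/d[0])

# errors obeying e_{k+1} = e_k**2 mimic a second-order method, so rho is near 2
print(computational_order([1e-1, 1e-2, 1e-4, 1e-8]))  # close to 2
```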
The following problems are chosen for numerical tests:
Example 1. Consider the following system:
$$\begin{aligned} x_1 + e^{x_1} - \cos(x_2) &= 0 \\ 3x_1 - x_2 - \sin(x_2) &= 0 \end{aligned}$$
where $(m, \mu) = (2, (53 + 110 + 112 + 1)/2) = (2, 138)$ are the values used in Equation (29). $x^{(0)} = (0.5, 0.5)^T$ is the initial point and $\alpha \approx (0, 0)^T$ is the solution of Example 1. $\|x^{(k)} - x^{(k-1)}\| < 10^{-100}$ is the stopping criterion.
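As an independent sanity check on the quoted solution (this is not the paper's experiment), SciPy's general-purpose solver finds the same root from the same initial point:

```python
import numpy as np
from scipy.optimize import fsolve

def F(x):
    # Example 1: x1 + exp(x1) - cos(x2) = 0,  3 x1 - x2 - sin(x2) = 0
    return [x[0] + np.exp(x[0]) - np.cos(x[1]),
            3*x[0] - x[1] - np.sin(x[1])]

root = fsolve(F, [0.5, 0.5])  # same initial point x^(0) = (0.5, 0.5)^T
print(root)                   # approximately (0, 0), i.e. alpha
```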
The results shown in Table 3 confirm the first assertion of Theorem 2 and the first assertion of Theorem 3 for $m = 2$; namely, $CEI_5 > CEI_4$ for $\mu > 4.7859$. The new sixth-order method $\psi_5$ needs the least time for finding the numerical solution. In Table 3, 'nc' denotes that the method does not converge.
Table 3. Performance of methods for Example 1.

| Method | k | ‖x^(k) - x^(k-1)‖ | ‖F(x^(k))‖ | ρ | C | CEI | Time(s) |
|---|---|---|---|---|---|---|---|
| ψ1 | 13 | 1.792e-161 | 3.748e-322 | 2.00000 | 838 | 1.0008275 | 1.127 |
| ψ2 | nc | | | | | | |
| ψ3 | 4 | 3.558e-743 | 8.245e-496 | 6.00314 | 1960 | 1.0009148 | 0.780 |
| ψ4 | 5 | 4.086e-211 | 5.330e-421 | 4.00015 | 1686 | 1.0008226 | 0.836 |
| ψ5 | 4 | 6.240e-164 | 2.389e-489 | 6.00420 | 1978 | 1.0009063 | 0.546 |
Example 2. The second system is defined by [11]:
$$\begin{aligned} x_2 + x_3 - e^{-x_1} &= 0 \\ x_1 + x_3 - e^{-x_2} &= 0 \\ x_1 + x_2 - e^{-x_3} &= 0 \end{aligned}$$
where $(m, \mu) = (3, (53 + 53)/3) = (3, 35.3)$. The initial point is $x^{(0)} = (0.5, 0.5, 0.5)^T$ and $\|x^{(k)} - x^{(k-1)}\| < 10^{-200}$ is the stopping criterion. The solution is $\alpha \approx (0.3517337, 0.3517337, 0.3517337)^T$.
The results shown in Table 4 confirm the first assertion of Theorem 2 and the first assertion of Theorem 3 for $m = 3$; namely, $CEI_4 > CEI_2$ and $CEI_5 > CEI_4$ for $\mu > 4.7859$. Table 4 shows that the sixth-order method $\psi_5$ is the most efficient iterative method in both computational time and CEI.
Table 4. Performance of methods for Example 2.

| Method | k | ‖x^(k) - x^(k-1)‖ | ‖F(x^(k))‖ | ρ | C | CEI | Time(s) |
|---|---|---|---|---|---|---|---|
| ψ1 | 9 | 2.136e-302 | 5.945e-604 | 2 | 449.6 | 1.00154289 | 0.514 |
| ψ2 | 5 | 2.439e-675 | 2.703e-1350 | 4 | 1032.1 | 1.00134408 | 0.592 |
| ψ3 | 4 | 1.414e-1080 | 7.020e-1620 | 6 | 1023.1 | 1.00175284 | 0.561 |
| ψ4 | 5 | 4.123e-699 | 9.73957e-1397 | 4 | 915.2 | 1.00151589 | 0.561 |
| ψ5 | 4 | 9.097e-550 | 8.57708e-1647 | 6 | 1054.1 | 1.00170125 | 0.483 |
Example 3. Now, consider the following large-scale nonlinear system [17]:
$$\begin{aligned} x_i x_{i+1} - 1 &= 0, \quad 1 \le i \le m - 1 \\ x_m x_1 - 1 &= 0 \end{aligned}$$
The initial vector is $x^{(0)} = (1.5, 1.5, \ldots, 1.5)^T$ for the solution $\alpha = (1, 1, \ldots, 1)^T$. The stopping criterion is $\|x^{(k)} - x^{(k-1)}\| < 10^{-100}$.
Table 5. Performance of methods for Example 3, where (m, μ) = (199, 1).

| Method | k | ‖x^(k) - x^(k-1)‖ | ‖F(x^(k))‖ | ρ | C | CEI | Time(s) |
|---|---|---|---|---|---|---|---|
| ψ1 | 10 | 4.993e-150 | 7.480e-299 | 2.00000 | 2,745,802 | 1.000000252 | 95.940 |
| ψ2 | 5 | 3.013e-212 | 1.210e-423 | 4.00000 | 5,649,610 | 1.000000245 | 126.438 |
| ψ3 | 4 | 3.922e-556 | 2.197e-833 | 5.99998 | 5,571,005 | 1.000000322 | 77.111 |
| ψ4 | 5 | 1.404e-269 | 9.850e-538 | 4.00000 | 2,944,404 | 1.000000471 | 81.042 |
| ψ5 | 4 | 5.298e-208 | 2.231e-621 | 5.99976 | 3,063,804 | 1.000000585 | 64.818 |
Table 6. The computational time (in seconds) for Example 3 by the methods.

| Method | ψ1 | ψ2 | ψ3 | ψ4 | ψ5 |
|---|---|---|---|---|---|
| m = 99 | 20.982 | 29.499 | 16.848 | 19.219 | 15.459 |
| m = 199 | 95.940 | 126.438 | 77.111 | 81.042 | 64.818 |
| m = 299 | 254.234 | 328.896 | 207.340 | 199.930 | 156.094 |

Application in Integral Equations

The Chandrasekhar integral equation [18] comes from radiative transfer theory and is given by
$$F(P, c) = 0, \quad P: [0, 1] \to \mathbb{R}$$
with the operator F and parameter c as
$$F(P, c)(u) = P(u) - \left(1 - \frac{c}{2} \int_0^1 \frac{u\, P(v)}{u + v}\, dv\right)^{-1}$$
We approximate the integral by the composite midpoint rule:
$$\int_0^1 f(t)\, dt \approx \frac{1}{m} \sum_{j=1}^m f(t_j)$$
where $t_j = (j - 1/2)/m$ for $1 \le j \le m$. The resulting discrete problem is
$$F_i(u, c) = u_i - \left(1 - \frac{c}{2m} \sum_{j=1}^m \frac{t_i u_j}{t_i + t_j}\right)^{-1}, \quad 1 \le i \le m$$
The initial vector is $x^{(0)} = (1.5, 1.5, \ldots, 1.5)^T$ and $c = 0.9$. $\|F(x^{(k)})\| < 10^{-200}$ is the stopping criterion for this problem. Table 7 and Table 8 show the numerical results for this problem.
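The discrete problem is easy to assemble from the midpoint nodes. The sketch below builds $F(u, c)$ and, as a plain double-precision sanity check (not the 2048-digit Maple experiment), solves it with SciPy's general-purpose solver from the same initial vector:

```python
import numpy as np
from scipy.optimize import fsolve

def chandrasekhar_F(x, c=0.9):
    # Discretized Chandrasekhar H-equation via the composite midpoint rule.
    m = len(x)
    t = (np.arange(1, m + 1) - 0.5) / m         # nodes t_j = (j - 1/2)/m
    K = t[:, None] / (t[:, None] + t[None, :])  # kernel t_i / (t_i + t_j)
    return x - 1.0/(1.0 - (c/(2*m)) * (K @ x))

x0 = np.full(30, 1.5)                 # x^(0) = (1.5, ..., 1.5)^T, m = 30
sol = fsolve(chandrasekhar_F, x0)
print(np.linalg.norm(chandrasekhar_F(sol)))  # residual near machine precision
```

The computed values stay above 1, as expected for the H-function on (0, 1].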
Table 7. The computational time (in seconds) for solving the Chandrasekhar integral equation.

| Method | ψ1 | ψ2 | ψ3 | ψ4 | ψ5 |
|---|---|---|---|---|---|
| m = 30 | 88.468 | 207.200 | 87.937 | 102.055 | 70.309 |
| m = 60 | 422.388 | 904.602 | 435.929 | 488.969 | 400.345 |
Table 8. The number of iterations for solving the Chandrasekhar integral equation.

| Method | ψ1 | ψ2 | ψ3 | ψ4 | ψ5 |
|---|---|---|---|---|---|
| m = 30 | 8 | 6 | 4 | 5 | 4 |
| m = 60 | 8 | 6 | 4 | 5 | 4 |
The results shown in Table 5 confirm the assertions of Theorem 2 and Theorem 3 for $m = 199$; namely, $CEI_4 > CEI_2$, $CEI_5 > CEI_3$, and $CEI_4 > CEI_1$. From Table 6, we remark that the computational time of our fourth-order method $\psi_4$ is less than that of the sixth-order method $\psi_3$ for $m = 299$. Table 5, Table 6 and Table 7 show that, as the size of the nonlinear system grows, our new methods $\psi_4$ and $\psi_5$ remarkably reduce the computational time.
The numerical results shown in Tables 3-8 are in concordance with the theory developed in this paper. The new methods require fewer iterations to obtain higher accuracy in contrast to the other methods. Most importantly, our methods have a higher CEI and lower computational time than the other methods in this paper. The sixth-order method $\psi_5$ is the most efficient iterative method in both CEI and computational time.

5. Conclusions

In this paper, two high-order iterative methods for solving systems of nonlinear equations are obtained. The new methods are derivative-free. The order of convergence of the new methods is proved by a development of an inverse first-order divided difference operator. Moreover, the computational efficiency index for systems of nonlinear equations is used to compare the efficiency of different methods. Numerical experiments show that our methods remarkably reduce the computational time for solving large systems of nonlinear equations. The main reason is that the LU decomposition in our methods is computed only once per iteration. We conclude that, in order to obtain an efficient iterative method, we should comprehensively consider the number of functional evaluations, the convergence order and the operational cost of the iterative method.

Acknowledgments

This project was supported by the National Natural Science Foundation of China (Nos. 11371081, 11547005 and 61572082), the PhD Start-up Fund of Liaoning Province of China (Nos. 20141137, 20141139 and 201501196), the Liaoning BaiQianWan Talents Program (No. 2013921055) and the Educational Commission Foundation of Liaoning Province of China (Nos. L2014443 and L2015012).

Author Contributions

Xiaofeng Wang conceived and designed the experiments; Xiaofeng Wang and Xiaodong Fan wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ortega, J.M.; Rheinbolt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
  2. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
  3. Liu, Z.; Zheng, Q.; Zhao, P. A variant of Steffensen’s method of fourth-order convergence and its applications. Appl. Math. Comput. 2010, 216, 1978–1983. [Google Scholar] [CrossRef]
  4. Grau-Sánchez, M.; Noguera, M.; Amat, S. On the approximation of derivatives using divided difference operators preserving the local convergence order of iterative methods. J. Comput. Appl. Math. 2013, 237, 363–372. [Google Scholar] [CrossRef]
  5. Grau-Sánchez, M.; Grau, À.; Noguera, M. Frozen divided difference scheme for solving systems of nonlinear equations. J. Comput. Appl. Math. 2011, 235, 1739–1743. [Google Scholar] [CrossRef]
  6. Ezquerro, J.A.; Hernández, M.A.; Romero, N. Solving nonlinear integral equations of Fredholm type with high order iterative methods. J. Comput. Appl. Math. 2011, 236, 1449–1463. [Google Scholar] [CrossRef]
  7. Wang, X.; Zhang, T.; Qian, W.; Teng, M. Seventh-order derivative-free iterative method for solving nonlinear systems. Numer. Algor. 2015, 70, 545–558. [Google Scholar] [CrossRef]
  8. Wang, X.; Zhang, T. A family of Steffensen type methods with seventh-order convergence. Numer. Algor. 2013, 62, 429–444. [Google Scholar] [CrossRef]
  9. Ostrowski, A.M. Solutions of Equations and Systems of Equations; Academic Press: New York, NY, USA; London, UK, 1966. [Google Scholar]
  10. Ezquerro, J.A.; Grau, À.; Grau-Sánchez, M.; Hernández, M.A.; Noguera, M. Analysing the efficiency of some modifications of the secant method. Comput. Math. Appl. 2012, 64, 2066–2073. [Google Scholar] [CrossRef]
  11. Grau-Sánchez, M.; Grau, À.; Noguera, M. Ostrowski type methods for solving systems of nonlinear equations. Appl. Math. Comput. 2011, 218, 2377–2385. [Google Scholar] [CrossRef]
  12. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. A modified Newton-Jarratt's composition. Numer. Algor. 2010, 55, 87–99. [Google Scholar] [CrossRef]
  13. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. Efficient high-order methods based on golden ratio for nonlinear systems. Appl. Math. Comput. 2011, 217, 4548–4556. [Google Scholar] [CrossRef]
  14. Potra, F.A.; Pták, V. Nondiscrete Induction and Iterative Processes; Pitman Publishing: Boston, MA, USA, 1984. [Google Scholar]
  15. Fousse, L.; Hanrot, G.; Lefèvre, V.; Pélissier, P.; Zimmermann, P. MPFR: A multiple-precision binary floating-point library with correct rounding. ACM Trans. Math. Softw. 2007, 33, 15–16. [Google Scholar] [CrossRef]
  16. Cordero, A.; Torregrosa, J.R. Variants of Newton’s method for functions of several variables. App. Math. Comput. 2006, 183, 199–208. [Google Scholar] [CrossRef]
  17. Cordero, A.; Torregrosa, J.R.; Vassileva, M.P. Increasing the order of convergence of iterative schemes for solving nonlinear systems. J. Comput. Appl. Math. 2013, 252, 86–94. [Google Scholar] [CrossRef]
  18. Kelley, C.T. Solution of the Chandrasekhar H-equation by Newton’s method. J. Math. Phys. 1980, 21, 1625–1628. [Google Scholar] [CrossRef]

Wang, X.; Fan, X. Two Efficient Derivative-Free Iterative Methods for Solving Nonlinear Systems. Algorithms 2016, 9, 14. https://0-doi-org.brum.beds.ac.uk/10.3390/a9010014
