Local Convergence of an Optimal Eighth Order Method under Weak Conditions

Ioannis K. Argyros 1, Ramandeep Behl 2 and S.S. Motsa 2

1 Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
2 School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Private Bag X01, Scottsville, Pietermaritzburg 3209, South Africa
* Author to whom correspondence should be addressed.
Algorithms 2015, 8(3), 645-655; https://0-doi-org.brum.beds.ac.uk/10.3390/a8030645
Submission received: 9 June 2015 / Revised: 31 July 2015 / Accepted: 5 August 2015 / Published: 19 August 2015
(This article belongs to the Special Issue Numerical Algorithms for Solving Nonlinear Equations and Systems)

Abstract:
We study the local convergence of an eighth order Newton-like method to approximate a locally-unique solution of a nonlinear equation. Earlier studies, such as Chen et al. (2015), show convergence under hypotheses on the seventh derivative or even higher, although only the first derivative and the divided difference appear in these methods. The convergence in this study is shown under hypotheses only on the first derivative. Hence, the applicability of the method is expanded. Finally, numerical examples are also provided to show that our results apply to solve equations in cases where the earlier studies cannot apply.
MSC Classification:
65D10; 65D99; 65G99

1. Introduction

In this study, we are concerned with the problem of approximating a locally-unique solution $x^*$ of the equation:
$$F(x) = 0, \tag{1}$$
where $F$ is a differentiable function defined on a convex subset $D$ of $S$ with values in $S$, where $S$ is $\mathbb{R}$ or $\mathbb{C}$.
Many problems from applied sciences, including engineering, can be solved by means of finding the solutions of equations in a form like Equation (1) using mathematical modeling [2,3,4,5,6,7]. Except in special cases, the solutions of these equations cannot be found in closed form. This is the main reason why the most commonly-used solution methods are iterative. The convergence analysis of iterative methods is usually divided into two categories: semi-local and local convergence analysis. Semi-local convergence analysis uses information around an initial point to give criteria ensuring the convergence of the iteration procedure. A very important problem in the study of iterative procedures is the radius of convergence. In general, the radius of convergence is small. Therefore, it is important to enlarge the radius of convergence. Another important problem is to find more precise error estimates on the distances $|x_n - x^*|$.
The most popular method for approximating a simple solution $x^*$ of Equation (1) is undoubtedly Newton's method, which is given by:
$$x_{n+1} = x_n - F'(x_n)^{-1} F(x_n), \quad \text{for each } n = 0, 1, 2, \ldots, \tag{2}$$
provided that $F'$ does not vanish in $D$ [2,13]. To obtain a higher order of convergence, many methods have been proposed [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41]. We study the local convergence of the three-step method defined for each $n = 0, 1, 2, \ldots$ by:
$$\begin{aligned} y_n &= x_n - F'(x_n)^{-1} F(x_n),\\ z_n &= y_n - \frac{F(x_n) + \beta F(y_n)}{F(x_n) + (\beta - 2) F(y_n)}\, F'(x_n)^{-1} F(y_n),\\ x_{n+1} &= z_n - A_n^{-1} F(z_n), \end{aligned} \tag{3}$$
where $x_0$ is an initial point, $\beta \in S$ and:
$$A_n = 2[x_n, z_n; F] - 2[x_n, y_n; F] + [z_n, y_n; F] + (y_n - z_n)[y_n, x_n, x_n; F],$$
$$[x_n, y_n; F] = \frac{F(x_n) - F(y_n)}{x_n - y_n} \quad \text{and} \quad [y_n, x_n, x_n; F] = \frac{[x_n, y_n; F] - F'(x_n)}{y_n - x_n}.$$
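To make the structure of Method (3) concrete, the following scalar Python sketch implements the three sub-steps and the divided differences exactly as written above. The function name, the default $\beta$, the tolerance and the stopping rule are our illustrative choices, not part of the original paper, and guarding the divided differences against exact convergence is omitted for brevity.

```python
import math

def newton_like8(F, dF, x0, beta=0.0, tol=1e-12, max_iter=20):
    """A sketch of the three-step method (3) for scalar equations F(x) = 0."""
    dd = lambda a, b: (F(a) - F(b)) / (a - b)   # divided difference [a, b; F]
    x = x0
    for n in range(max_iter):
        Fx = F(x)
        if abs(Fx) < tol:
            return x, n
        dFx = dF(x)
        y = x - Fx / dFx                                                 # first sub-step
        Fy = F(y)
        z = y - (Fx + beta * Fy) / (Fx + (beta - 2.0) * Fy) * Fy / dFx   # second sub-step
        dd2 = (dd(x, y) - dFx) / (y - x)                                 # [y, x, x; F]
        A = 2.0 * dd(x, z) - 2.0 * dd(x, y) + dd(z, y) + (y - z) * dd2
        x = z - F(z) / A                                                 # third sub-step
    return x, max_iter

# For instance, for F(x) = sin(x) (Example 1 of Section 3) with x0 = 0.1:
root, iters = newton_like8(math.sin, math.cos, 0.1, beta=2.0)
```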
The eighth order of convergence for Method (3) was established in [1], when $\beta \in S$, using Taylor expansions and hypotheses reaching up to the eighth derivative of $F$, although only the first derivatives and the divided difference appear in these methods. This method is also optimal in the sense of Traub, with efficiency index $8^{1/4}$ [4]. The advantages of Method (3) over other competing methods were also shown in [1]. However, the hypotheses on higher order derivatives limit the applicability of these methods. As a motivational example, define function $F$ on $X = Y = \mathbb{R}$, $D = \left[-\frac{1}{2}, \frac{5}{2}\right]$ by:
$$F(x) = \begin{cases} x^3 \ln x^2 + x^5 - x^4, & x \neq 0,\\ 0, & x = 0. \end{cases}$$
Then, we have that:
$$F'(x) = 3x^2 \ln x^2 + 5x^4 - 4x^3 + 2x^2,$$
$$F''(x) = 6x \ln x^2 + 20x^3 - 12x^2 + 10x$$
and:
$$F'''(x) = 6 \ln x^2 + 60x^2 - 24x + 22.$$
Then, obviously, function $F'''$ is unbounded on $D$. Hence, the results in [1] cannot apply to show the convergence of Method (3) or its special cases requiring hypotheses on the third derivative of function $F$ or higher. Notice that, in particular, there is a plethora of iterative methods for approximating solutions of nonlinear equations [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41]. These results show that if the initial point $x_0$ is sufficiently close to the solution $x^*$, then the sequence $\{x_n\}$ converges to $x^*$. However, how close to the solution $x^*$ should the initial guess $x_0$ be? These local results give no information on the radius of the convergence ball for the corresponding method. We address this question for Method (3) in Section 2; the same technique can be used for other methods.
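A quick numerical check (ours, for illustration only) confirms that $F'''$ blows up near the origin, since $6 \ln x^2 = 12 \ln |x| \to -\infty$ as $x \to 0$:

```python
import math

def d3F(x):
    # third derivative of F(x) = x^3 ln x^2 + x^5 - x^4, valid for x != 0
    return 6.0 * math.log(x * x) + 60.0 * x * x - 24.0 * x + 22.0

for x in (1e-1, 1e-3, 1e-6, 1e-9):
    print(f"F'''({x:g}) = {d3F(x):.3f}")  # magnitude grows without bound as x -> 0
```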
In the present study, we analyze the local convergence of Method (3) using hypotheses only on the first derivative of function $F$. We also provide the radius of the convergence ball, computable error bounds on the distances involved and a uniqueness-of-the-solution result using Lipschitz constants. Such results were not given in [1] or the earlier related studies [8,9,10,11,12]. This way, we expand the applicability of Method (3).
The rest of the paper is organized as follows: We present the local convergence analysis of Method (3) in Section 2. Numerical examples are given in the concluding Section 3.

2. Local Convergence

In this section, we present the local convergence analysis of Method (3). Let $L_0 > 0$, $L > 0$, $M \geq 1$, $L_1 > 0$, $L_2 > 0$ and $\beta \in S$. It is convenient for the local convergence analysis that follows to introduce some functions and parameters. Define functions $g_1$, $p$ and $h_p$ on the interval $[0, \frac{1}{L_0})$ by:
$$g_1(t) = \frac{L t}{2 (1 - L_0 t)}, \qquad p(t) = \frac{1}{2} L_0 t + 2 M |\beta - 2|\, g_1(t), \qquad h_p(t) = p(t) - 1,$$
and parameter $r_1$ by:
$$r_1 = \frac{2}{2 L_0 + L}.$$
We have that $h_p(0) = -1 < 0$ and $h_p(t) \to \infty$ as $t \to \left(\frac{1}{L_0}\right)^-$. It follows from the intermediate value theorem that function $h_p$ has zeros in the interval $(0, \frac{1}{L_0})$. Denote by $r_p$ the smallest such zero. Moreover, define functions $g_2$ and $h_2$ on the interval $[0, r_p)$ by:
$$g_2(t) = \left( 1 + \frac{M^2 \left( 1 + |\beta|\, g_1(t) \right)}{(1 - p(t))(1 - L_0 t)} \right) g_1(t)$$
and:
$$h_2(t) = g_2(t) - 1.$$
Then, we get $h_2(0) = -1 < 0$ and $h_2(t) \to \infty$ as $t \to r_p^-$. Denote by $r_2$ the smallest zero of function $h_2$ on the interval $(0, r_p)$. Furthermore, define functions $q$ and $h_q$ on the interval $[0, r_p)$ by:
$$q(t) = \left[ 4 L_1 + (3 L_1 + L_2)\, g_1(t) + (3 L_1 + L_2)\, g_2(t) \right] t$$
and:
$$h_q(t) = q(t) - 1.$$
We have that $h_q(0) = -1 < 0$ and $h_q(t) \to \infty$ as $t \to r_p^-$. Denote by $r_q$ the smallest zero of function $h_q$ on the interval $(0, r_p)$. Finally, define functions $g_3$ and $h_3$ on the interval $[0, r_q)$ by:
$$g_3(t) = \left( 1 + \frac{M}{1 - q(t)} \right) g_2(t)$$
and:
$$h_3(t) = g_3(t) - 1.$$
We get that $h_3(0) = -1 < 0$ and $h_3(t) \to \infty$ as $t \to r_q^-$. Denote by $r_3$ the smallest zero of function $h_3$ on the interval $(0, r_q)$. Set:
$$r = \min\{r_1, r_3\}. \tag{4}$$
Then, we have that:
$$0 < r \leq r_1, \tag{5}$$
and for each $t \in [0, r)$:
$$0 \leq g_1(t) < 1, \tag{6}$$
$$0 \leq p(t) < 1, \tag{7}$$
$$0 \leq g_2(t) < 1, \tag{8}$$
$$0 \leq q(t) < 1 \tag{9}$$
and:
$$0 \leq g_3(t) < 1. \tag{10}$$
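To see how this machinery produces a concrete radius, here is a small Python sketch (ours, not from the paper) that computes $r_1$, $r_3$ and $r = \min\{r_1, r_3\}$ by bisection, exploiting the fact that each of the "h" functions above is increasing and changes sign at most once on its interval. The helper names and the numerical safeguards are our assumptions.

```python
def smallest_zero(h, a, b, tol=1e-12):
    """Bisection for the zero of an increasing function h with h(a) < 0."""
    while b - a > tol:
        m = 0.5 * (a + b)
        a, b = (m, b) if h(m) < 0.0 else (a, m)
    return 0.5 * (a + b)

def convergence_radius(L0, L, M, L1, L2, beta):
    g1 = lambda t: L * t / (2.0 * (1.0 - L0 * t))
    p = lambda t: 0.5 * L0 * t + 2.0 * M * abs(beta - 2.0) * g1(t)
    r1 = 2.0 / (2.0 * L0 + L)
    rp = smallest_zero(lambda t: p(t) - 1.0, 0.0, (1.0 - 1e-12) / L0)
    g2 = lambda t: (1.0 + M ** 2 * (1.0 + abs(beta) * g1(t))
                    / ((1.0 - p(t)) * (1.0 - L0 * t))) * g1(t)
    q = lambda t: (4.0 * L1 + (3.0 * L1 + L2) * (g1(t) + g2(t))) * t
    rq = smallest_zero(lambda t: q(t) - 1.0, 0.0, (1.0 - 1e-12) * rp)
    g3 = lambda t: (1.0 + M / (1.0 - q(t))) * g2(t)
    r3 = smallest_zero(lambda t: g3(t) - 1.0, 0.0, (1.0 - 1e-12) * rq)
    return r1, r3, min(r1, r3)
```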
Let $U(\gamma, \rho)$ and $\bar{U}(\gamma, \rho)$ stand, respectively, for the open and closed balls in $S$ with center $\gamma \in S$ and of radius $\rho > 0$. Next, we present the local convergence analysis of Method (3) using the preceding notation.
Theorem 1. 
Let $F : D \subseteq S \to S$ be a differentiable function. Let $[\cdot, \cdot\,; F] : D \times D \to L(S)$ be a divided difference of order one. Suppose that there exist $x^* \in D$, $L_0 > 0$, $L > 0$, $M \geq 1$, $L_1 \geq 0$, $L_2 \geq 0$, $\beta \in S$, such that for all $x, y \in D$:
$$F(x^*) = 0, \quad F'(x^*) \neq 0, \tag{11}$$
$$|F'(x^*)^{-1} (F'(x) - F'(x^*))| \leq L_0 |x - x^*|, \tag{12}$$
$$|F'(x^*)^{-1} (F'(x) - F'(y))| \leq L |x - y|, \tag{13}$$
$$|F'(x^*)^{-1} F'(x)| \leq M, \tag{14}$$
$$|F'(x^*)^{-1} ([x, y; F] - F'(x^*))| \leq L_1 \left( |x - x^*| + |y - x^*| \right), \tag{15}$$
$$|F'(x^*)^{-1} ([x, y; F] - F'(x))| \leq L_2 |x - y| \tag{16}$$
and:
$$\bar{U}(x^*, r) \subseteq D, \tag{17}$$
where the radius $r$ is defined by Equation (4). Then, the sequence $\{x_n\}$ generated for $x_0 \in U(x^*, r) \setminus \{x^*\}$ by Method (3) is well defined, remains in $U(x^*, r)$ for each $n = 0, 1, 2, \ldots$ and converges to $x^*$. Moreover, the following estimates hold:
$$|y_n - x^*| \leq g_1(|x_n - x^*|)\, |x_n - x^*| \leq |x_n - x^*| < r, \tag{18}$$
$$|z_n - x^*| \leq g_2(|x_n - x^*|)\, |x_n - x^*| < |x_n - x^*| \tag{19}$$
and:
$$|x_{n+1} - x^*| \leq g_3(|x_n - x^*|)\, |x_n - x^*| < |x_n - x^*|, \tag{20}$$
where the "g" functions are defined previously. Furthermore, for $T \in [r, \frac{2}{L_0})$, the limit point $x^*$ is the only solution of equation $F(x) = 0$ in $\bar{U}(x^*, T) \cap D$.
Proof. 
We shall show estimates Equations (18)–(20) using mathematical induction. By the hypothesis $x_0 \in U(x^*, r) \setminus \{x^*\}$ and Equations (4) and (12), we get:
$$|F'(x^*)^{-1} (F'(x_0) - F'(x^*))| \leq L_0 |x_0 - x^*| < L_0 r < 1. \tag{21}$$
It follows from Equation (21) and the Banach lemma on invertible operators [2,3,14] that $F'(x_0) \neq 0$ and:
$$|F'(x_0)^{-1} F'(x^*)| \leq \frac{1}{1 - L_0 |x_0 - x^*|}. \tag{22}$$
Hence, $y_0$ is well defined by the first sub-step of Method (3) for $n = 0$. Then, we have by Equations (4), (5), (11), (13) and (22) that:
$$\begin{aligned} |y_0 - x^*| &= |x_0 - x^* - F'(x_0)^{-1} F(x_0)|\\ &\leq |F'(x_0)^{-1} F'(x^*)| \left| \int_0^1 F'(x^*)^{-1} \left[ F'(x^* + \theta(x_0 - x^*)) - F'(x_0) \right] (x_0 - x^*)\, d\theta \right|\\ &\leq \frac{L |x_0 - x^*|^2}{2 (1 - L_0 |x_0 - x^*|)} = g_1(|x_0 - x^*|)\, |x_0 - x^*| < |x_0 - x^*| < r, \end{aligned} \tag{23}$$
which shows Equation (18) for $n = 0$ and $y_0 \in U(x^*, r)$. We can write by Equation (11) that:
$$F(x_0) = F(x_0) - F(x^*) = \int_0^1 F'(x^* + \theta(x_0 - x^*)) (x_0 - x^*)\, d\theta. \tag{24}$$
Notice that $|x^* + \theta(x_0 - x^*) - x^*| = \theta |x_0 - x^*| < r$; hence, $x^* + \theta(x_0 - x^*) \in U(x^*, r)$. Then, by Equations (14) and (24), we obtain that:
$$|F'(x^*)^{-1} F(x_0)| = \left| \int_0^1 F'(x^*)^{-1} F'(x^* + \theta(x_0 - x^*)) (x_0 - x^*)\, d\theta \right| \leq M |x_0 - x^*|. \tag{25}$$
We also get that:
$$|F'(x^*)^{-1} F(y_0)| \leq M |y_0 - x^*| \leq M g_1(|x_0 - x^*|)\, |x_0 - x^*|. \tag{26}$$
Next, we shall show that $F(x_0) + (\beta - 2) F(y_0) \neq 0$. We have by Equations (4), (6), (11), (12), (22) and (26) that:
$$\begin{aligned} &\left| (F'(x^*)(x_0 - x^*))^{-1} \left( F(x_0) - F(x^*) - F'(x^*)(x_0 - x^*) + (\beta - 2) F(y_0) \right) \right|\\ &\quad\leq |x_0 - x^*|^{-1} \left[ \left| F'(x^*)^{-1} \left( F(x_0) - F(x^*) - F'(x^*)(x_0 - x^*) \right) \right| + |\beta - 2| \left| F'(x^*)^{-1} F(y_0) \right| \right]\\ &\quad\leq |x_0 - x^*|^{-1} \left[ \frac{L_0}{2} |x_0 - x^*|^2 + M |\beta - 2|\, |y_0 - x^*| \right]\\ &\quad\leq \frac{1}{2} L_0 |x_0 - x^*| + 2 M |\beta - 2|\, g_1(|x_0 - x^*|) = p(|x_0 - x^*|) < p(r) < 1. \end{aligned} \tag{27}$$
Hence, we have that:
$$\left| \left( F(x_0) + (\beta - 2) F(y_0) \right)^{-1} F'(x^*) \right| \leq \frac{1}{|x_0 - x^*| \left( 1 - p(|x_0 - x^*|) \right)}. \tag{28}$$
Hence, $z_0$ is well defined by the second sub-step of Method (3) for $n = 0$. Then, using Equations (4), (7), (17), (23)–(26) and (28), we get in turn that:
$$\begin{aligned} |z_0 - x^*| &\leq |y_0 - x^*| + \left| \left( F(x_0) + (\beta - 2) F(y_0) \right)^{-1} F'(x^*) \right| \left| F'(x^*)^{-1} \left( F(x_0) + \beta F(y_0) \right) \right|\\ &\qquad \times |F'(x_0)^{-1} F'(x^*)|\, |F'(x^*)^{-1} F(y_0)|\\ &\leq |y_0 - x^*| + \frac{M^2 \left( |x_0 - x^*| + |\beta|\, |y_0 - x^*| \right) |y_0 - x^*|}{|x_0 - x^*| \left( 1 - p(|x_0 - x^*|) \right) \left( 1 - L_0 |x_0 - x^*| \right)}\\ &\leq \left( 1 + \frac{M^2 \left( 1 + |\beta|\, g_1(|x_0 - x^*|) \right)}{\left( 1 - p(|x_0 - x^*|) \right) \left( 1 - L_0 |x_0 - x^*| \right)} \right) g_1(|x_0 - x^*|)\, |x_0 - x^*|\\ &= g_2(|x_0 - x^*|)\, |x_0 - x^*| < |x_0 - x^*| < r, \end{aligned} \tag{29}$$
which shows Equation (19) for $n = 0$ and $z_0 \in U(x^*, r)$. We must show that $A_0 \neq 0$. Notice that we can write:
$$\begin{aligned} A_0 - F'(x^*) &= 2 \left( [x_0, z_0; F] - F'(x^*) \right) - 2 \left( [x_0, y_0; F] - F'(x^*) \right) + \left( [z_0, y_0; F] - F'(x^*) \right)\\ &\quad + \left( (y_0 - x^*) + (x^* - z_0) \right) \frac{[x_0, y_0; F] - F'(x_0)}{y_0 - x_0}. \end{aligned} \tag{30}$$
Using Equations (4), (9), (15), (16), (23), (29) and (30), we get:
$$\begin{aligned} |F'(x^*)^{-1} (A_0 - F'(x^*))| &\leq 2 L_1 \left( |x_0 - x^*| + |z_0 - x^*| \right) + 2 L_1 \left( |x_0 - x^*| + |y_0 - x^*| \right)\\ &\quad + L_1 \left( |z_0 - x^*| + |y_0 - x^*| \right) + L_2 \left( |y_0 - x^*| + |z_0 - x^*| \right)\\ &\leq 4 L_1 |x_0 - x^*| + (3 L_1 + L_2) |y_0 - x^*| + (3 L_1 + L_2) |z_0 - x^*|\\ &\leq \left[ 4 L_1 + (3 L_1 + L_2)\, g_1(|x_0 - x^*|) + (3 L_1 + L_2)\, g_2(|x_0 - x^*|) \right] |x_0 - x^*|\\ &= q(|x_0 - x^*|) < q(r) < 1. \end{aligned} \tag{31}$$
Hence, we get:
$$|A_0^{-1} F'(x^*)| \leq \frac{1}{1 - q(|x_0 - x^*|)}. \tag{32}$$
It follows that $x_1$ is well defined by the third sub-step of Method (3) for $n = 0$. Then, it follows from Equations (4), (10), (22), (25) (for $x_0 = z_0$), (29) and (32) that:
$$\begin{aligned} |x_1 - x^*| &\leq |z_0 - x^*| + |A_0^{-1} F'(x^*)|\, |F'(x^*)^{-1} F(z_0)| \leq |z_0 - x^*| + \frac{M |z_0 - x^*|}{1 - q(|x_0 - x^*|)}\\ &= \left( 1 + \frac{M}{1 - q(|x_0 - x^*|)} \right) |z_0 - x^*| \leq \left( 1 + \frac{M}{1 - q(|x_0 - x^*|)} \right) g_2(|x_0 - x^*|)\, |x_0 - x^*|\\ &= g_3(|x_0 - x^*|)\, |x_0 - x^*| < |x_0 - x^*| < r, \end{aligned} \tag{33}$$
which shows Equation (20) for $n = 0$ and $x_1 \in U(x^*, r)$. By simply replacing $x_0$, $y_0$, $z_0$, $x_1$ by $x_k$, $y_k$, $z_k$, $x_{k+1}$ in the preceding estimates, we arrive at Equations (18)–(20). Using the estimate $|x_{k+1} - x^*| < |x_k - x^*| < r$, we deduce that $\lim_{k \to \infty} x_k = x^*$ and $x_{k+1} \in U(x^*, r)$. Finally, to show the uniqueness part, let $Q = \int_0^1 F'(y^* + \theta(x^* - y^*))\, d\theta$ for some $y^* \in \bar{U}(x^*, T)$ with $F(y^*) = 0$.
Using Equation (12), we get that:
$$|F'(x^*)^{-1} (Q - F'(x^*))| \leq \int_0^1 L_0 |y^* + \theta(x^* - y^*) - x^*|\, d\theta \leq L_0 \int_0^1 (1 - \theta)\, |y^* - x^*|\, d\theta \leq \frac{L_0}{2} T < 1. \tag{34}$$
It follows from Equation (34) that $Q$ is invertible. Then, in view of the identity $0 = F(x^*) - F(y^*) = Q(x^* - y^*)$, we conclude that $x^* = y^*$. □
Remark 1. 
(a)
In view of Equation (12) and the estimate:
$$|F'(x^*)^{-1} F'(x)| = |F'(x^*)^{-1} (F'(x) - F'(x^*)) + I| \leq 1 + |F'(x^*)^{-1} (F'(x) - F'(x^*))| \leq 1 + L_0 |x - x^*|,$$
condition Equation (14) can be dropped, and $M$ can be replaced by:
$$M(t) = 1 + L_0 t,$$
or by $M(t) = M = 2$, since $t \in [0, \frac{1}{L_0})$.
(b)
The results obtained here can be used for operators $F$ satisfying the autonomous differential equation [2,3] of the form:
$$F'(x) = P(F(x)),$$
where $P$ is a known continuous operator. Since $F'(x^*) = P(F(x^*)) = P(0)$, we can apply the results without actually knowing the solution $x^*$. Let, as an example, $F(x) = e^x - 1$. Then, we can choose $P(x) = x + 1$.
(c)
The radius $r_1$ was shown in [2,3] to be the convergence radius for Newton's method Equation (2) under conditions Equations (11)–(13). It follows from Equation (4) and the definition of $r_1$ that the convergence radius $r$ of Method (3) cannot be larger than the convergence radius $r_1$ of the second order Newton's method (2). As already noted, $r_1$ is at least as large as the convergence ball given by Rheinboldt [15]:
$$r_R = \frac{2}{3L}.$$
In particular, for $L_0 < L$, we have that:
$$r_R < r_1$$
and:
$$\frac{r_R}{r_1} \to \frac{1}{3} \quad \text{as} \quad \frac{L_0}{L} \to 0.$$
That is, our convergence ball $r_1$ is at most three times larger than Rheinboldt's. The same value for $r_R$ was given by Traub [4].
(d)
It is worth noticing that Method (3) does not change if we use the conditions of Theorem 1 instead of the stronger conditions given in [1]. Moreover, for the error bounds in practice, we can use the computational order of convergence (COC) [16]:
$$\xi = \frac{\ln \left( |x_{n+2} - x^*| / |x_{n+1} - x^*| \right)}{\ln \left( |x_{n+1} - x^*| / |x_n - x^*| \right)}, \quad \text{for each } n = 0, 1, 2, \ldots,$$
or the approximate computational order of convergence (ACOC) [16]:
$$\xi^* = \frac{\ln \left( |x_{n+2} - x_{n+1}| / |x_{n+1} - x_n| \right)}{\ln \left( |x_{n+1} - x_n| / |x_n - x_{n-1}| \right)}, \quad \text{for each } n = 1, 2, \ldots.$$
This way, we obtain, in practice, the order of convergence in a way that avoids the bounds involving estimates higher than the first Fréchet derivative.
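In practice, ACOC can be estimated directly from the last four iterates; a minimal helper (our sketch, under the obvious assumption that at least four consecutive iterates are available) is:

```python
import math

def acoc(xs):
    """Approximate computational order of convergence from a list of iterates."""
    x0, x1, x2, x3 = xs[-4:]  # x_{n-1}, x_n, x_{n+1}, x_{n+2}
    return (math.log(abs(x3 - x2) / abs(x2 - x1))
            / math.log(abs(x2 - x1) / abs(x1 - x0)))
```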

3. Numerical Examples and Applications

We present numerical examples in this section.
Example 1. 
Let $S = \mathbb{R}$, $D = [-1, 1]$, $x^* = 0$, and define function $F$ on $D$ by:
$$F(x) = \sin x.$$
Then, we get $L_0 = L = M = 1$ and $L_1 = L_2 = \frac{1}{2}$. Then, by the definition of $r_1$ and $r_3$, we obtain:
$$r_1 = 0.666667, \qquad r_3 = 0.186589,$$
and as a consequence:
$$r = 0.186589.$$
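For illustration, the `convergence_radius` sketch from Section 2 reproduces $r_1$ here exactly; note that the examples do not state the choice of $\beta$, so taking $\beta = 2$ is our assumption, and we only expect $r_3$ to come out of the reported order of magnitude:

```python
# Example 1 constants: L0 = L = M = 1, L1 = L2 = 1/2; beta = 2 is assumed.
r1, r3, r = convergence_radius(1.0, 1.0, 1.0, 0.5, 0.5, beta=2.0)
print(r1, r3, r)  # r1 = 2/3 = 0.666667; r3 comes out near 0.19
```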
Example 2. 
Let $S = \mathbb{R}$, $D = [-1, 1]$, $x^* = 0$, and define function $F$ on $D$ by:
$$F(x) = e^x - 1.$$
Then, we get $L_0 = e - 1$, $L = e$, $L_1 = \frac{e - 1}{2}$, $L_2 = \frac{e}{2}$ and $M = 2$. Then, by the definition of $r_1$ and $r_3$, we obtain:
$$r_1 = 0.324947, \qquad r_3 = 0.032978,$$
and as a consequence:
$$r = 0.032978.$$
Example 3. 
Returning to the motivational example in the Introduction, we have $L_0 = L = 146.6629073$, $L_1 = L_2 = \frac{L_0}{2}$ and $M = 2$. Then, by the definition of $r_1$ and $r_3$, we obtain:
$$r_1 = 0.0045456, \qquad r_3 = 0.000553,$$
and as a consequence:
$$r = 0.000553.$$

Author Contributions

The contributions of all of the authors have been similar. All of them have worked together to develop the present manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chen, Y.; Wang, Y.; Tan, D. A family of three-step iterative methods with eighth-order convergence for nonlinear equations. Appl. Math. Comput. 2015. [Google Scholar] [CrossRef]
  2. Argyros, I.K. Convergence and Application of Newton-Type Iterations; Springer: New York, NY, USA, 2008. [Google Scholar]
  3. Argyros, I.K.; Hilout, S. Numerical Methods in Nonlinear Analysis; World Scientific Publishing Company: Hackensack, NJ, USA, 2013. [Google Scholar]
  4. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall Series in Automatic Computation: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
  5. Magreñán, Á.A. Different anomalies in a Jarratt family of iterative root–finding methods. Appl. Math. Comput. 2014, 233, 29–38. [Google Scholar] [CrossRef]
  6. Magreñán, Á.A. A new tool to study real dynamics: The convergence plane. Appl. Math. Comput. 2014, 248, 215–224. [Google Scholar] [CrossRef]
  7. Petković, M.S.; Neta, B.; Petković, L.; Džunić, J. Multipoint Methods for Solving Nonlinear Equations; Elsevier: Waltham, MA, USA, 2013. [Google Scholar]
  8. Herceg, D.; Herceg, D.J. Means based modifications of Newton’s method for solving nonlinear equations. Appl. Math. Comput. 2013, 219, 6126–6133. [Google Scholar] [CrossRef]
  9. Herceg, D.; Herceg, D. Third-order modifications of Newton’s method based on Stolarsky and Gini means. J. Comput. Appl. Math. 2013, 245, 53–61. [Google Scholar] [CrossRef]
  10. Homeier, H.H.H. On Newton-type methods with cubic convergence. J. Comput. Appl. Math. 2005, 176, 425–432. [Google Scholar] [CrossRef]
  11. Ozban, A.Y. Some new variants of Newton’s method. Appl. Math. Lett. 2004, 17, 677–682. [Google Scholar] [CrossRef]
  12. Weerakoon, S.; Fernando, T.G.I. A variant of Newton’s method with accelerated third order convergence. Appl. Math. Lett. 2000, 13, 87–93. [Google Scholar] [CrossRef]
  13. Amat, S.; Hernández, M.A.; Romero, N. A modified Chebyshev’s iterative method with at least sixth order of convergence. Appl. Math. Comput. 2008, 206, 164–174. [Google Scholar] [CrossRef]
  14. Li, D.; Liu, P.; Kou, J. An improvement of the Chebyshev–Halley methods free from second derivative. Appl. Math. Comput. 2014, 235, 221–225. [Google Scholar] [CrossRef]
  15. Rheinboldt, W.C. An Adaptive Continuation Process for Solving Systems of Nonlinear Equations. Available online: https://eudml.org/doc/208686 (accessed on 11 August 2015).
  16. Grau-Sánchez, M.; Noguera, M.; Gutiérrez, J.M. On some computational orders of convergence. Appl. Math. Lett. 2010, 23, 472–478. [Google Scholar] [CrossRef]
  17. Amat, S.; Busquier, S.; Plaza, S. Dynamics of the King and Jarratt iterations. Aequationes Math. 2005, 69, 212–223. [Google Scholar] [CrossRef]
  18. Amat, S.; Busquier, S.; Plaza, S. Chaotic dynamics of a third-order Newton-type method. J. Math. Anal. Appl. 2010, 366, 24–32. [Google Scholar] [CrossRef]
  19. Candela, V.; Marquina, A. Recurrence relations for rational cubic methods I: The Halley method. Computing 1990, 44, 169–184. [Google Scholar] [CrossRef]
  20. Chicharro, F.; Cordero, A.; Torregrosa, J.R. Drawing dynamical and parameters planes of iterative families and methods. Sci. World J. 2013, 2013. [Google Scholar] [CrossRef] [PubMed]
  21. Chun, C. Some improvements of Jarratt’s method with sixth-order convergence. Appl. Math. Comput. 2007, 190, 1432–1437. [Google Scholar] [CrossRef]
  22. Cordero, A.; García-Maimó, J.; Torregrosa, J.R.; Vassileva, M.P.; Vindel, P. Chaos in King’s iterative family. Appl. Math. Lett. 2013, 26, 842–848. [Google Scholar] [CrossRef]
  23. Cordero, A.; Torregrosa, J.R.; Vindel, P. Dynamics of a family of Chebyshev-Halley type methods. Appl. Math. Comput. 2013, 219, 8568–8583. [Google Scholar] [CrossRef]
  24. Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar] [CrossRef]
  25. Ezquerro, J.A.; Hernández, J.A. New iterations of R-order four with reduced computational cost. BIT Numer. Math. 2009, 49, 325–342. [Google Scholar] [CrossRef]
  26. Ezquerro, J.A.; Hernández, M.A. On the R-order of the Halley method. J. Math. Anal. Appl. 2005, 303, 591–601. [Google Scholar] [CrossRef]
  27. Gutiérrez, J.A.; Hernández, M.A. Recurrence relations for the super-Halley method. Comput. Math. Appl. 1998, 36, 1–8. [Google Scholar] [CrossRef]
  28. Hernández, M.A. Chebyshev’s approximation algorithms and applications. Comput. Math. Appl. 2001, 41, 433–455. [Google Scholar] [CrossRef]
  29. Hernández, M.A.; Salanova, M.A. Sufficient conditions for semilocal convergence of a fourth order multipoint iterative method for solving equations in Banach spaces. Southwest J. Pure Appl. Math. 1999, 1, 29–40. [Google Scholar]
  30. Herceg, D.; Herceg, D. Sixth-order modifications of Newton’s method based on Stolarsky and Gini means. J. Comput. Appl. Math. 2014, 267, 244–253. [Google Scholar] [CrossRef]
  31. Jarratt, P. Some fourth order multipoint methods for solving equations. Math. Comput. 1966, 20, 434–437. [Google Scholar] [CrossRef]
  32. Kou, J.; Wang, X. Semilocal convergence of a modified multi-point Jarratt method in Banach spaces under general continuity conditions. Numer. Algor. 2012, 60, 369–390. [Google Scholar]
  33. Kou, J. On Chebyshev–Halley methods with sixth-order convergence for solving non-linear equations. Appl. Math. Comput. 2007, 190, 126–131. [Google Scholar] [CrossRef]
  34. Kou, J.; Li, Y.; Wang, X. Third-order modification of Newton’s method. J. Comput. Appl. Math. 2007, 205, 1–5. [Google Scholar]
  35. Lukić, T.; Ralević, N.M. Geometric mean Newton’s method for simple and multiple roots. Appl. Math. Lett. 2008, 21, 30–36. [Google Scholar] [CrossRef]
  36. Neta, B. A sixth order family of methods for nonlinear equations. Int. J. Comput. Math. 1979, 7, 157–161. [Google Scholar] [CrossRef]
  37. Parhi, S.K.; Gupta, D.K. Recurrence relations for a Newton-like method in Banach spaces. J. Comput. Appl. Math. 2007, 206, 873–887. [Google Scholar]
  38. Parhi, S.K.; Gupta, D.K. A sixth order method for nonlinear equations. Appl. Math. Comput. 2008, 203, 50–55. [Google Scholar] [CrossRef]
  39. Ren, H.; Wu, Q.; Bi, W. New variants of Jarratt’s method with sixth-order convergence. Numer. Algor. 2009, 52, 585–603. [Google Scholar] [CrossRef]
  40. Wang, X.; Kou, J.; Gu, C. Semilocal convergence of a sixth-order Jarratt method in Banach spaces. Numer. Algor. 2011, 57, 441–456. [Google Scholar] [CrossRef]
  41. Zhou, X. A class of Newton’s methods with third-order convergence. Appl. Math. Lett. 2007, 20, 1026–1030. [Google Scholar] [CrossRef]
