Article

Some Improvements to a Third Order Variant of Newton’s Method from Simpson’s Rule

by
Diyashvir Kreetee Rajiv Babajee
African Network for Policy Research & Actions for Sustainability (ANPRAS), Midlands, Curepipe 52501, Mauritius
Algorithms 2015, 8(3), 552-561; https://0-doi-org.brum.beds.ac.uk/10.3390/a8030552
Submission received: 11 May 2015 / Revised: 20 July 2015 / Accepted: 21 July 2015 / Published: 29 July 2015
(This article belongs to the Special Issue Numerical Algorithms for Solving Nonlinear Equations and Systems)

Abstract

In this paper, we present three improvements to a three-point third order variant of Newton's method derived from the Simpson rule. The first is a fifth order method using the same number of functional evaluations as the third order method, the second is a four-point 10th order method and the last is a five-point 20th order method. From a computational point of view, our methods require four evaluations (one function and three first derivatives) to reach fifth order, five evaluations (two functions and three derivatives) to reach 10th order and six evaluations (three functions and three derivatives) to reach 20th order. Hence, these methods have efficiency indexes of 1.495, 1.585 and 1.648, respectively, which are better than the efficiency index of 1.316 of the third order method. We test the methods through some numerical experiments, which show that the 20th order method is very efficient.

1. Introduction

Newton's method has remained one of the best root-finding methods for solving the nonlinear scalar equation $f(x) = 0$. In the last 15 years, many higher order variants of Newton's method have been developed. One of them is a third order variant developed by Hasanov et al. [1] by approximating the integral in the Newton theorem by Simpson's formula. This method is a three-point method requiring 1 function and 3 first derivative evaluations and has an efficiency index of $3^{1/4} = 1.316$, which is lower than the $2^{1/2} = 1.414$ of the one-point Newton method. Recently, the order of many variants of Newton's method has been improved, keeping the same number of functional evaluations, by means of weight functions (see [2,3,4,5,6,7] and the references therein).
In this work, we improve the order of the three-point variant from three to five using a weight function. Using polynomial interpolation, we then develop four-point 10th order and five-point 20th order methods. Finally, we test the efficiency of the methods through numerical experiments.

2. Developments of the Methods

Let $x_{n+1} = \psi(x_n)$ define an Iterative Function (I.F.).
Definition 1. [8] If the sequence $\{x_n\}$ tends to a limit $x^*$ in such a way that
$$\lim_{n \to \infty} \frac{x_{n+1} - x^*}{(x_n - x^*)^p} = C$$
for some $p \geq 1$, then the order of convergence of the sequence is said to be p, and C is known as the asymptotic error constant. If $p = 1$, $p = 2$ or $p = 3$, the convergence is said to be linear, quadratic or cubic, respectively.
Let $e_n = x_n - x^*$; then the relation
$$e_{n+1} = C e_n^p + O\!\left(e_n^{p+1}\right) = O\!\left(e_n^p\right)$$
is called the error equation. The value of p is called the order of convergence of the method.
Definition 2. [9] The Efficiency Index is given by
$$EI = p^{1/d},$$
where d is the total number of new function evaluations (the values of f and its derivatives) per iteration.
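For instance, for the methods discussed in this paper the definition gives
$$EI_{2ndNR} = 2^{1/2} = 1.414, \quad EI_{3rd3pVS} = 3^{1/4} = 1.316, \quad EI_{5th3pVS} = 5^{1/4} = 1.495, \quad EI_{10th4pVS} = 10^{1/5} = 1.585, \quad EI_{20th5pVS} = 20^{1/6} = 1.648.$$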
Let $x_{n+1}$ be determined by new information at $x_n, \phi_1(x_n), \ldots, \phi_i(x_n)$, $i \geq 1$.
No old information is reused. Thus,
$$x_{n+1} = \psi\!\left(x_n, \phi_1(x_n), \ldots, \phi_i(x_n)\right).$$
Then ψ is called a multipoint I.F. without memory.
Kung-Traub Conjecture [10]. Let ψ be an I.F. without memory with d evaluations. Then
$$p(\psi) \leq p_{opt} = 2^{d-1},$$
where $p_{opt}$ is the maximum order.
The second order Newton (also called Newton-Raphson) I.F. (2ndNR) is given by
$$\psi_{2ndNR}(x) = x - u(x), \quad u(x) = \frac{f(x)}{f'(x)}.$$
The 2ndNR I.F. is a one-point I.F. with two function evaluations and it satisfies the Kung-Traub conjecture with $d = 2$.
Thus, $EI_{2ndNR} = 2^{1/2} = 1.414$.
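For reference in the sketches that follow, a minimal Python implementation of the 2ndNR iteration might look as below (the function names, tolerance and iteration cap are illustrative choices, not taken from the paper; the author's experiments were carried out in MATLAB):

```python
def newton_2ndNR(f, fprime, x0, tol=1e-12, max_iter=100):
    """Classical second-order Newton iteration x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for n in range(1, max_iter + 1):
        x_new = x - f(x) / fprime(x)   # Newton correction u(x) = f(x)/f'(x)
        if abs(x_new - x) < tol:       # stop when successive iterates agree
            return x_new, n
        x = x_new
    return x, max_iter

# Example with f2(x) = x^3 + 4x^2 - 10 from Section 4, starting at x0 = 1.6
root, iters = newton_2ndNR(lambda x: x**3 + 4*x**2 - 10,
                           lambda x: 3*x**2 + 8*x, 1.6)
```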
The Newton I.F. can be constructed from its local linear model of the function f(x), which is the tangent drawn to the function f(x) at the current point x. The local linear model at x is
$$L(x) = f(x) + f'(x)(\psi - x).$$
This local linear model can be interpreted from the viewpoint of the Newton Theorem:
$$f(\psi) = f(x) + INT, \quad \text{where } INT = \int_x^{\psi} f'(\upsilon)\, d\upsilon.$$
Weerakoon and Fernando [11] showed that if the integral $INT$ is approximated by the rectangle rule, $INT \approx f'(x)(\psi - x)$, the Newton I.F. is obtained by setting $L(x) = 0$. Hasanov et al. [1] obtained a new linear model
$$L_1(x) = f(x) + \frac{1}{6}\left[f'(x) + 4 f'\!\left(\frac{x + \psi}{2}\right) + f'(\psi)\right](\psi - x)$$
by approximating $INT$ by Simpson's formula: $INT \approx \frac{1}{6}\left[f'(x) + 4 f'\!\left(\frac{x + \psi}{2}\right) + f'(\psi)\right](\psi - x)$.
Solving the new linear model, they obtained the implicit I.F.
$$\psi(x) = x - \frac{6 f(x)}{f'(x) + 4 f'\!\left(\frac{x + \psi(x)}{2}\right) + f'(\psi(x))}.$$
Using the Newton I.F. to estimate $\psi(x)$ in the first derivatives by $\psi_{2ndNR}(x)$, they obtained the 3rd3pVS I.F.:
$$\psi_{3rd3pVS}(x) = x - \frac{f(x)}{D(x)}, \quad D(x) = \frac{1}{6}\left[f'(x) + 4 f'\!\left(\frac{x + \psi_{2ndNR}(x)}{2}\right) + f'(\psi_{2ndNR}(x))\right].$$
The 3rd3pVS I.F. is a special case of the Frontini-Sormani family of third order I.F.s from quadrature rules [12]. It was extended to systems of equations in [13]. However, this I.F. is considered inefficient: according to the Kung-Traub conjecture, the optimal order of an I.F. with $d = 4$ is eight. In fact, we can have optimal three-point eighth order I.F.s with 4 function evaluations or with 3 function and 1 first derivative evaluations (see [5,14] and the references therein).
However, we can achieve only a maximum of sixth order with I.F.s using 2 function and 2 first derivative evaluations. The question we now pose is: what is the maximum order we can achieve with I.F.s using 1 function and 3 first derivative evaluations?
Let us define $\tau = \frac{f'(\psi_{2ndNR}(x))}{f'(x)}$. We improve the 3rd3pVS I.F. using a weight function to obtain a three-point fifth order I.F. (5th3pVS):
$$\psi_{5th3pVS}(x) = x - \frac{f(x)}{D(x)} \times H(\tau), \quad H(\tau) = 1 + \frac{1}{4}(\tau - 1)^2 - \frac{3}{8}(\tau - 1)^3.$$
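To make the construction concrete, the following Python sketch implements one 3rd3pVS step and the weighted 5th3pVS step (illustrative code only, not the author's MATLAB implementation; in double precision the high-order behaviour is visible for at most one or two iterations before rounding dominates):

```python
def step_5th3pVS(f, fprime, x):
    """One step of the 5th3pVS I.F.; dropping the weight H(tau) gives the 3rd3pVS step."""
    y = x - f(x) / fprime(x)                                   # psi_2ndNR(x)
    D = (fprime(x) + 4 * fprime((x + y) / 2) + fprime(y)) / 6  # Simpson average D(x)
    tau = fprime(y) / fprime(x)
    H = 1 + (tau - 1)**2 / 4 - 3 * (tau - 1)**3 / 8            # weight function H(tau)
    return x - f(x) / D * H

# Example: two steps on f2(x) = x^3 + 4x^2 - 10 from x0 = 1.6
x = 1.6
for _ in range(2):
    x = step_5th3pVS(lambda t: t**3 + 4*t**2 - 10, lambda t: 3*t**2 + 8*t, x)
```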
It is remarkable that with the same number of functional evaluations we have improved the efficiency index from $EI_{3rd3pVS} = 1.316$ to $EI_{5th3pVS} = 5^{1/4} = 1.495$. However, the maximum order we could achieve with I.F.s using 1 function and 3 first derivative evaluations is five. Babajee [2] developed a technique to improve the order of old methods.
Theorem 3 (Babajee's theorem for improving the order of old methods). [2] Let a sufficiently smooth function $f : D \subset \mathbb{R} \to \mathbb{R}$ have a simple root $x^*$ in the open interval D. Let $\psi_{old}(x)$ be an Iteration Function (I.F.) of order p. Then the I.F. defined as $\psi_{new}(x) = \psi_{old}(x) - G \times f(\psi_{old}(x))$ is of local convergence order $p + q$ if G is a function satisfying the error equation
$$G = \frac{1}{f'(x^*)}\left[1 + C_G e^q + O\!\left(e^{q+1}\right)\right],$$
where $C_G$ is a constant.
Suppose that the error equation of the old I.F. is given by
$$e_{old} = \psi_{old}(x) - x^* = C_{old}\, e^p + \ldots$$
Then, the error equation of the new I.F. is given by
$$e_{new} = \psi_{new}(x) - x^* = -C_G C_{old}\, e^{p+q} - c_2 C_{old}^2\, e^{2p} + \ldots,$$
where $c_j = \frac{f^{(j)}(x^*)}{j!\, f'(x^*)}$, $j = 1, 2, 3, \ldots$
Usually, G is a weight function or an approximation to $\frac{1}{f'(\psi_{old}(x))}$ obtained from polynomial interpolation.
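As a simple illustration (not an example from the paper): take $\psi_{old} = \psi_{2ndNR}$, so $p = 2$, and the frozen-derivative choice $G = 1/f'(x)$. Since
$$\frac{1}{f'(x)} = \frac{1}{f'(x^*)}\left[1 - 2c_2 e + O\!\left(e^2\right)\right],$$
this G satisfies the hypothesis with $q = 1$, and the theorem recovers the well-known result that the two-step method $\psi_{new}(x) = \psi_{2ndNR}(x) - f(\psi_{2ndNR}(x))/f'(x)$ has local order $p + q = 3$.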
Using Babajee's theorem and applying the Newton I.F., we obtain a 10th order I.F. (10thVS):
$$\psi_{10thVS}(x) = \psi_{5th3pVS}(x) - \frac{f(\psi_{5th3pVS}(x))}{f'(\psi_{5th3pVS}(x))}.$$
However, this would require two more functional evaluations. So we estimate $f'(\psi_{5th3pVS}(x))$ by the derivative of the following polynomial:
$$q_1(t) = f(x) + f'(x)(t - x) + a_1(t - x)^2 + a_2(t - x)^3 + a_3(t - x)^4,$$
which satisfies the following conditions:
$$q_1(\psi_{5th3pVS}(x)) = f(\psi_{5th3pVS}(x)), \quad q_1'(\psi_{2ndNR}(x)) = f'(\psi_{2ndNR}(x)), \quad q_1'\!\left(\frac{x + \psi_{2ndNR}(x)}{2}\right) = f'\!\left(\frac{x + \psi_{2ndNR}(x)}{2}\right).$$
Let
$$A_1 = \psi_{5th3pVS}(x) - x, \quad A_2 = \psi_{2ndNR}(x) - x, \quad A_3 = \frac{x + \psi_{2ndNR}(x)}{2} - x,$$
$$B_1 = f[\psi_{5th3pVS}(x), x, x], \quad B_2 = f'[\psi_{2ndNR}(x), x], \quad B_3 = f'\!\left[\frac{x + \psi_{2ndNR}(x)}{2}, x\right],$$
where we define the divided differences
$$f[y, x] = \frac{f(y) - f(x)}{y - x}, \quad f'[y, x] = \frac{f'(y) - f'(x)}{y - x}, \quad f[y, x, x] = \frac{f[y, x] - f'(x)}{y - x}.$$
These interpolation conditions reduce, after using the divided differences, to a system of 3 linear equations in matrix form:
$$\begin{pmatrix} 1 & A_1 & A_1^2 \\ 2 & 3A_2 & 4A_2^2 \\ 2 & 3A_3 & 4A_3^2 \end{pmatrix} \begin{pmatrix} a_1 \\ a_2 \\ a_3 \end{pmatrix} = \begin{pmatrix} B_1 \\ B_2 \\ B_3 \end{pmatrix},$$
whose solutions are given by
$$a_1 = \frac{1}{2}\,\frac{-3A_1^2A_3B_2 + 3A_1^2A_2B_3 + 4A_1A_3^2B_2 - 4A_1A_2^2B_3 - 12A_2A_3^2B_1 + 12A_2^2A_3B_1}{-4A_1A_2^2 + 3A_1^2A_2 + 4A_1A_3^2 - 3A_1^2A_3 - 6A_2A_3^2 + 6A_2^2A_3},$$
$$a_2 = \frac{A_1^2B_2 - A_1^2B_3 - 2A_3^2B_2 - 4A_2^2B_1 + 2A_2^2B_3 + 4A_3^2B_1}{-4A_1A_2^2 + 3A_1^2A_2 + 4A_1A_3^2 - 3A_1^2A_3 - 6A_2A_3^2 + 6A_2^2A_3},$$
$$a_3 = -\frac{1}{2}\,\frac{2A_1B_2 - 2A_1B_3 + 6A_3B_1 - 3A_3B_2 - 6A_2B_1 + 3A_2B_3}{-4A_1A_2^2 + 3A_1^2A_2 + 4A_1A_3^2 - 3A_1^2A_3 - 6A_2A_3^2 + 6A_2^2A_3}.$$
So, $q_1'(\psi_{5th3pVS}(x)) = f'(x) + 2a_1(\psi_{5th3pVS}(x) - x) + 3a_2(\psi_{5th3pVS}(x) - x)^2 + 4a_3(\psi_{5th3pVS}(x) - x)^3$.
Using Babajee's theorem with $p = q = 5$, we can obtain a four-point 10th-order I.F. (10th4pVS):
$$\psi_{10th4pVS}(x) = \psi_{5th3pVS}(x) - G_1 \times f(\psi_{5th3pVS}(x)), \quad G_1 = \frac{1}{q_1'(\psi_{5th3pVS}(x))}.$$
The efficiency index has now increased to $EI_{10th4pVS} = 10^{1/5} = 1.585$, with 2 function and 3 first derivative evaluations. We point out that the optimal order of a four-point I.F. with 5 functional evaluations is 16, but this can be achieved with either 5 function evaluations or 4 function and 1 first derivative evaluations (see [5,14] and the references therein).
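Rather than coding the closed-form expressions for $a_1$, $a_2$ and $a_3$, the $3\times 3$ system above can simply be solved numerically. The sketch below (illustrative only; it assumes step_5th3pVS from the earlier sketch and uses double precision) forms $G_1$ and takes one 10th4pVS step; for clarity it re-evaluates f and f' rather than reusing values already computed:

```python
import numpy as np

def step_10th4pVS(f, fprime, x):
    """One 10th4pVS step: a 5th3pVS step corrected with 1/q1'(psi_5th3pVS)."""
    y = x - f(x) / fprime(x)            # psi_2ndNR(x)
    m = (x + y) / 2                     # midpoint node
    z = step_5th3pVS(f, fprime, x)      # psi_5th3pVS(x), defined in the earlier sketch
    A1, A2, A3 = z - x, y - x, m - x
    B1 = (f(z) - f(x) - fprime(x) * A1) / A1**2   # f[z, x, x]
    B2 = (fprime(y) - fprime(x)) / A2             # f'[y, x]
    B3 = (fprime(m) - fprime(x)) / A3             # f'[m, x]
    M = np.array([[1.0, A1, A1**2],
                  [2.0, 3 * A2, 4 * A2**2],
                  [2.0, 3 * A3, 4 * A3**2]])
    a1, a2, a3 = np.linalg.solve(M, np.array([B1, B2, B3]))
    q1p = fprime(x) + 2 * a1 * A1 + 3 * a2 * A1**2 + 4 * a3 * A1**3  # q1'(psi_5th3pVS)
    return z - f(z) / q1p
```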
We use a similar approach to estimate $f'(\psi_{10th4pVS}(x))$ by the derivative of the following polynomial:
$$q_2(t) = f(x) + f'(x)(t - x) + b_1(t - x)^2 + b_2(t - x)^3 + b_3(t - x)^4 + b_4(t - x)^5,$$
which satisfies the following conditions:
$$q_2(\psi_{5th3pVS}(x)) = f(\psi_{5th3pVS}(x)), \quad q_2(\psi_{10th4pVS}(x)) = f(\psi_{10th4pVS}(x)),$$
$$q_2'(\psi_{2ndNR}(x)) = f'(\psi_{2ndNR}(x)), \quad q_2'\!\left(\frac{x + \psi_{2ndNR}(x)}{2}\right) = f'\!\left(\frac{x + \psi_{2ndNR}(x)}{2}\right).$$
Furthermore, let $A_4 = \psi_{10th4pVS}(x) - x$ and $B_4 = f[\psi_{10th4pVS}(x), x, x]$. These conditions reduce, after using the divided differences, to a system of 4 linear equations in matrix form:
$$\begin{pmatrix} 1 & A_1 & A_1^2 & A_1^3 \\ 2 & 3A_2 & 4A_2^2 & 5A_2^3 \\ 2 & 3A_3 & 4A_3^2 & 5A_3^3 \\ 1 & A_4 & A_4^2 & A_4^3 \end{pmatrix} \begin{pmatrix} b_1 \\ b_2 \\ b_3 \\ b_4 \end{pmatrix} = \begin{pmatrix} B_1 \\ B_2 \\ B_3 \\ B_4 \end{pmatrix},$$
whose solutions are given by
$$\begin{aligned} b ={} & -10A_1A_4^2A_2^3 + 10A_1A_4^2A_3^3 - 10A_1^2A_4A_3^3 + 15A_1^2A_2A_3^3 - 6A_1^2A_2A_4^3 - 8A_2^2A_4A_1^3 + 20A_2^2A_4A_3^3 \\ & - 20A_2^2A_1A_3^3 + 8A_2^2A_1A_4^3 + 12A_2^2A_3A_1^3 - 12A_2^2A_3A_4^3 + 6A_4^2A_2A_1^3 - 15A_4^2A_2A_3^3 + 10A_1^2A_4A_2^3 \\ & + 6A_1^2A_3A_4^3 - 15A_1^2A_3A_2^3 + 8A_3^2A_4A_1^3 + 12A_3^2A_2A_4^3 - 20A_3^2A_4A_2^3 - 8A_3^2A_1A_4^3 + 20A_3^2A_1A_2^3 \\ & - 12A_3^2A_2A_1^3 - 6A_4^2A_3A_1^3 + 15A_4^2A_3A_2^3, \end{aligned}$$
$$\begin{aligned} b_2 ={} & \frac{1}{b}\big(4A_3^2A_4^3B_2 - 8A_3^2A_4^3B_1 - 5A_3^3A_4^2B_2 + 10A_3^3A_4^2B_1 - 4A_2^2A_4^3B_3 + 8A_2^2A_4^3B_1 + 20A_2^2A_3^3B_4 \\ & - 20A_2^2A_3^3B_1 + 5A_2^3A_4^2B_3 - 10A_2^3A_4^2B_1 - 20A_2^3A_3^2B_4 + 20A_2^3A_3^2B_1 + 2A_1^2A_4^3B_3 - 2A_1^2A_4^3B_2 \\ & - 10A_1^2A_3^3B_4 + 5A_1^2A_3^3B_2 + 10A_1^2A_2^3B_4 - 5A_1^2A_2^3B_3 - 2A_1^3A_4^2B_3 + 2A_1^3A_4^2B_2 + 8A_1^3A_3^2B_4 \\ & - 4A_1^3A_3^2B_2 - 8A_1^3A_2^2B_4 + 4A_1^3A_2^2B_3\big), \end{aligned}$$
$$\begin{aligned} b_3 ={} & \frac{1}{b}\big(-3A_3A_4^3B_2 + 6A_3A_4^3B_1 + 5A_3^3A_4B_2 - 10A_3^3A_4B_1 + 3A_2A_4^3B_3 - 6A_2A_4^3B_1 - 15A_2A_3^3B_4 \\ & + 15A_2A_3^3B_1 - 5A_2^3A_4B_3 + 10A_2^3A_4B_1 + 15A_2^3A_3B_4 - 15A_2^3A_3B_1 - 2A_1A_4^3B_3 + 2A_1A_4^3B_2 \\ & + 10A_1A_3^3B_4 - 5A_1A_3^3B_2 - 10A_1A_2^3B_4 + 5A_1A_2^3B_3 + 2A_1^3A_4B_3 - 2A_1^3A_4B_2 - 6A_1^3A_3B_4 \\ & + 3A_1^3A_3B_2 + 6A_1^3A_2B_4 - 3A_1^3A_2B_3\big), \end{aligned}$$
$$\begin{aligned} b_4 ={} & \frac{1}{b}\big(-3A_4^2A_2B_3 - 8A_2^2A_4B_1 - 2A_1^2A_4B_3 - 3A_1^2A_3B_2 + 8A_2^2A_1B_4 + 4A_2^2A_4B_3 - 4A_2^2A_1B_3 \\ & - 6A_1^2A_2B_4 + 12A_2^2A_3B_1 + 2A_1^2A_4B_2 + 2A_1A_4^2B_3 + 3A_1^2A_2B_3 - 12A_2^2A_3B_4 + 6A_4^2A_2B_1 \\ & - 2A_1A_4^2B_2 + 6A_1^2A_3B_4 + 8A_3^2A_4B_1 - 4A_3^2A_4B_2 + 12A_3^2A_2B_4 + 4A_1A_3^2B_2 - 8A_3^2A_1B_4 \\ & - 12A_2A_3^2B_1 - 6A_4^2A_3B_1 + 3A_4^2A_3B_2\big). \end{aligned}$$
The remaining coefficient follows from the first row of the system, $b_1 = B_1 - A_1 b_2 - A_1^2 b_3 - A_1^3 b_4$.
So,
$$q_2'(\psi_{10th4pVS}(x)) = f'(x) + 2b_1(\psi_{10th4pVS}(x) - x) + 3b_2(\psi_{10th4pVS}(x) - x)^2 + 4b_3(\psi_{10th4pVS}(x) - x)^3 + 5b_4(\psi_{10th4pVS}(x) - x)^4.$$
Furthermore, using Babajee's theorem with $p = q = 10$, we can obtain a five-point 20th-order I.F. (20th5pVS):
$$\psi_{20th5pVS}(x) = \psi_{10th4pVS}(x) - G_2 \times f(\psi_{10th4pVS}(x)), \quad G_2 = \frac{1}{q_2'(\psi_{10th4pVS}(x))}.$$
The efficiency index has now increased to $EI_{20th5pVS} = 20^{1/6} = 1.648$, with 3 function and 3 first derivative evaluations.
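The 20th5pVS step follows the same pattern with the extra node $\psi_{10th4pVS}(x)$, and again the $4\times 4$ system is easiest to solve numerically. A sketch under the same assumptions as above (it reuses step_5th3pVS, step_10th4pVS and numpy; in double precision the benefit of 20th order is quickly limited by machine accuracy):

```python
def step_20th5pVS(f, fprime, x):
    """One 20th5pVS step built on top of the 5th3pVS and 10th4pVS steps."""
    y = x - f(x) / fprime(x)            # psi_2ndNR(x)
    m = (x + y) / 2
    z = step_5th3pVS(f, fprime, x)      # psi_5th3pVS(x)
    w = step_10th4pVS(f, fprime, x)     # psi_10th4pVS(x)
    A1, A2, A3, A4 = z - x, y - x, m - x, w - x
    B1 = (f(z) - f(x) - fprime(x) * A1) / A1**2   # f[z, x, x]
    B2 = (fprime(y) - fprime(x)) / A2             # f'[y, x]
    B3 = (fprime(m) - fprime(x)) / A3             # f'[m, x]
    B4 = (f(w) - f(x) - fprime(x) * A4) / A4**2   # f[w, x, x]
    M = np.array([[1.0, A1, A1**2, A1**3],
                  [2.0, 3 * A2, 4 * A2**2, 5 * A2**3],
                  [2.0, 3 * A3, 4 * A3**2, 5 * A3**3],
                  [1.0, A4, A4**2, A4**3]])
    b1, b2, b3, b4 = np.linalg.solve(M, np.array([B1, B2, B3, B4]))
    q2p = (fprime(x) + 2 * b1 * A4 + 3 * b2 * A4**2
           + 4 * b3 * A4**3 + 5 * b4 * A4**4)     # q2'(psi_10th4pVS)
    return w - f(w) / q2p
```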

3. Convergence Analysis

In this section, we give the convergence analysis of the three I.F.s 5th3pVS, 10th4pVS and 20th5pVS.
Theorem 4. Let a sufficiently smooth function $f : D \subset \mathbb{R} \to \mathbb{R}$ have a simple root $x^*$ in the open interval D, which contains $x_0$ as an initial approximation to $x^*$. Then the 5th3pVS I.F. is of local fifth-order convergence, the 10th4pVS I.F. is of local 10th-order convergence and the 20th5pVS I.F. is of local 20th-order convergence.
Proof. Let $e_n = x - x^*$.
Using Taylor series and symbolic software such as Maple, we have
$$f(x) = f'(x^*)\left[e_n + c_2 e_n^2 + c_3 e_n^3 + c_4 e_n^4 + \ldots\right]$$
and
$$f'(x) = f'(x^*)\left[1 + 2c_2 e_n + 3c_3 e_n^2 + 4c_4 e_n^3 + \ldots\right],$$
so that
$$u(x) = e_n - c_2 e_n^2 + 2(c_2^2 - c_3)e_n^3 + (7c_2c_3 - 4c_2^3 - 3c_4)e_n^4 + \ldots$$
Now
$$\psi_{2ndNR}(x) = x^* + c_2 e_n^2 + (2c_3 - 2c_2^2)e_n^3 + (3c_4 - 7c_2c_3 + 4c_2^3)e_n^4 + \ldots,$$
so that
$$\tau = \frac{f'(\psi_{2ndNR}(x))}{f'(x)} = 1 - 2c_2 e_n + (6c_2^2 - 3c_3)e_n^2 + (16c_2c_3 - 16c_2^3 - 4c_4)e_n^3 + O\!\left(e_n^4\right),$$
$$H(\tau) = 1 + c_2^2 e_n^2 + (3c_2c_3 - 3c_2^3)e_n^3 + O\!\left(e_n^4\right)$$
and
$$D(x) = f'(x^*)\left[1 + c_2 e_n + (c_2^2 + c_3)e_n^2 + (3c_2c_3 - 2c_2^3 + c_4)e_n^3 + O\!\left(e_n^4\right)\right].$$
Substituting the expansions of $f(x)$, $H(\tau)$ and $D(x)$ into the 5th3pVS I.F., we obtain, after simplifications,
$$\psi_{5th3pVS}(x) - x^* = C_{5th3pVS}\, e_n^5 + O\!\left(e_n^6\right), \quad C_{5th3pVS} = -\frac{1}{4}c_3^2 - \frac{1}{2}c_3c_2^2 + 9c_2^4.$$
Now, using Maple, we have
$$G_1 = \frac{1}{f'(x^*)}\left[1 + C_{G_1} e_n^5 + O\!\left(e_n^6\right)\right], \quad C_{G_1} = -18c_2^5 + c_3c_2^3 + \frac{1}{2}c_2c_3^2.$$
Using Babajee's theorem with $p = q = 5$, we have
$$\psi_{10th4pVS}(x) - x^* = \left(-C_{G_1}C_{5th3pVS} - c_2 C_{5th3pVS}^2\right)e_n^{10} + O\!\left(e_n^{11}\right) = C_{10th4pVS}\, e_n^{10} + O\!\left(e_n^{11}\right),$$
$$C_{10th4pVS} = -\frac{17}{4}c_3^2c_2^5 - 9c_3c_2^7 + \frac{1}{4}c_3^3c_2^3 + 81c_2^9 + \frac{1}{16}c_2c_3^4.$$
Using Maple,
$$G_2 = \frac{1}{f'(x^*)}\left[1 + C_{G_2} e_n^{10} + O\!\left(e_n^{11}\right)\right], \quad C_{G_2} = -162c_2^{10} + 18c_3c_2^8 + \frac{17}{2}c_3^2c_2^6 - \frac{1}{2}c_3^3c_2^4 - \frac{1}{8}c_3^4c_2^2.$$
Using Babajee's theorem with $p = q = 10$, we have
$$\psi_{20th5pVS}(x) - x^* = \left(-C_{G_2}C_{10th4pVS} - c_2 C_{10th4pVS}^2\right)e_n^{20} + O\!\left(e_n^{21}\right) = C_{20th5pVS}\, e_n^{20} + O\!\left(e_n^{21}\right),$$
$$C_{20th5pVS} = 6561c_2^{19} - 1458c_3c_2^{17} - \frac{1215}{2}c_3^2c_2^{15} + 117c_3^3c_2^{13} + \frac{379}{16}c_3^4c_2^{11} - \frac{13}{4}c_3^5c_2^9 - \frac{15}{32}c_3^6c_2^7 + \frac{1}{32}c_3^7c_2^5 + \frac{1}{256}c_3^8c_2^3.$$

4. Numerical Examples

In this section, we give numerical results on some test functions to compare the efficiency of the proposed methods (5th3pVS, 10th4pVS and 20th5pVS) with the 3rd3pVS I.F. and Newton's method (2ndNR). Numerical computations have been carried out in MATLAB working with 1000 significant digits. We use the stopping criterion $E_n = |x_n - x_{n-1}| < \epsilon$ for the iterative process, with $\epsilon = 10^{-100}$, which is well within this working precision. Let N be the number of iterations required for convergence. The test functions and their simple zeros are given below:
$$f_1(x) = \sin(2\cos x) - 1 - x^2 + e^{\sin(x^3)}, \quad x^* = -0.7848959876612125352\ldots$$
$$f_2(x) = x^3 + 4x^2 - 10, \quad x^* = 1.3652300134140968457\ldots$$
$$f_3(x) = (x + 2)e^x - 1, \quad x^* = -0.4428544010023885831\ldots$$
$$f_4(x) = x - \cos x, \quad x^* = 0.6417143708728826583\ldots$$
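The experiments reported in Table 1 were run in MATLAB with 1000-digit arithmetic; an analogous driver can be sketched in Python with mpmath (illustrative only, with names of our own choosing; the higher-order steps would also need their small linear systems solved in multiprecision, e.g. with mpmath.lu_solve, rather than with numpy):

```python
from mpmath import mp, mpf

mp.dps = 1000  # work with 1000 significant digits, as in the paper

def run(step, f, fprime, x0, eps=mpf('1e-100'), max_iter=100):
    """Iterate x_{n+1} = step(f, f', x_n) until E_n = |x_n - x_{n-1}| < eps."""
    x_prev = mpf(x0)
    x = step(f, fprime, x_prev)
    n = 1
    while abs(x - x_prev) >= eps and n < max_iter:
        x_prev, x = x, step(f, fprime, x)
        n += 1
    return x, n, abs(x - x_prev)

# Example: f2(x) = x^3 + 4x^2 - 10 with x0 = 1.6, using the 5th3pVS step sketched earlier
f2 = lambda t: t**3 + 4 * t**2 - 10
f2p = lambda t: 3 * t**2 + 8 * t
root, N, E_N = run(step_5th3pVS, f2, f2p, mpf('1.6'))
```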
Table 1 shows the corresponding results for $f_1(x)$ to $f_4(x)$. It can be seen that the 20th5pVS I.F. converges in the fewest iterations, and in most cases with the smallest error $E_N$, for the functions and starting points considered. This I.F. takes at most half the number of iterations of the 2ndNR I.F. to converge. The number of iterations and the error are smaller when we choose a starting point close to the root.
Table 1. Results for the 5th3pVS, 10th4pVS and 20th5pVS Iterative Functions (I.F.s) for $f_1(x)$–$f_4(x)$, along with the 2ndNR and 3rd3pVS I.F.s.

| f(x)     | x_0  | 2ndNR N | 2ndNR E_N | 3rd3pVS N | 3rd3pVS E_N | 5th3pVS N | 5th3pVS E_N | 10th4pVS N | 10th4pVS E_N | 20th5pVS N | 20th5pVS E_N |
|----------|------|---------|-----------|-----------|-------------|-----------|-------------|------------|--------------|------------|--------------|
| $f_1(x)$ | -0.7 | 7 | 1.09e-074 | 5 | 2.58e-094 | 4 | 1.04e-111 | 3 | 6.96e-077 | 3 | 3.31e-299 |
| $f_1(x)$ | -1   | 7 | 9.07e-061 | 5 | 1.65e-076 | 4 | 9.04e-095 | 3 | 8.96e-068 | 3 | 5.99e-249 |
| $f_2(x)$ | 1.6  | 7 | 7.80e-063 | 5 | 2.07e-079 | 4 | 5.53e-097 | 3 | 2.53e-080 | 3 | 9.88e-324 |
| $f_2(x)$ | 2.5  | 8 | 3.21e-053 | 6 | 1.29e-100 | 5 | 1.01e-173 | 4 | 2.57e-302 | 3 | 1.024e-125 |
| $f_3(x)$ | -0.3 | 7 | 7.80e-066 | 5 | 3.63e-083 | 4 | 1.62e-102 | 3 | 4.90e-071 | 3 | 1.18e-282 |
| $f_3(x)$ | -0.8 | 8 | 4.34e-073 | 6 | 6.08e-138 | 5 | 2.24e-144 | 4 | 7.38e-190 | 3 | 3.88e-092 |
| $f_4(x)$ | 0.7  | 6 | 1.41e-067 | 4 | 5.59e-057 | 4 | 9.12e-222 | 3 | 1.08e-108 | 3 | 0 |
| $f_4(x)$ | 2    | 7 | 1.52e-062 | 5 | 8.42e-087 | 4 | 1.38e-069 | 4 | 9.78e-238 | 3 | 3.70e-123 |

5. Conclusion

In this work, we have developed three-point fifth order, four-point 10th order and five-point 20th order methods using weight functions and polynomial interpolation. Our proposed methods require only four evaluations per iterative step to obtain fifth order, five evaluations per iterative step to obtain 10th order and six evaluations per iterative step to obtain 20th order. We have thus increased the order of convergence to five, 10 and 20, compared with the third order method suggested in [1], with efficiency indexes EI = 1.495, EI = 1.585 and EI = 1.648, respectively. Our proposed methods are also better than Newton's method in terms of efficiency index (EI = 1.4142). Numerical results show that the five-point 20th order method is the most efficient.

Acknowledgments

The author is thankful to the anonymous reviewers for their valuable comments to improve the readability of the paper.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Hasanov, V.I.; Ivanov, I.G.; Nedzhibov, G.H. A new modification of Newton's method. Appl. Math. Eng. 2002, 27, 278–286. [Google Scholar]
  2. Babajee, D.K.R. Several improvements of the 2-point third order midpoint iterative method using weight functions. Appl. Math. Comput. 2012, 218, 7958–7966. [Google Scholar] [CrossRef]
  3. Babajee, D.K.R. On a two-parameter Chebyshev-Halley-like family of optimal two-point fourth order methods free from second derivatives. Afr. Mat. 2014. [Google Scholar] [CrossRef]
  4. Babajee, D.K.R.; Jaunky, V.C. Applications of higher-order optimal Newton Secant iterative methods in ocean acidification and investigation of long-run implications of CO2 emissions on alkalinity of seawater. ISRN Appl. Math. 2013. [Google Scholar] [CrossRef]
  5. Babajee, D.K.R.; Thukral, R. On a 4-point sixteenth-order King family of iterative methods for solving nonlinear equations. Int. J. Math. Math. Sci. 2012. [Google Scholar] [CrossRef]
  6. Sharifi, M.; Babajee, D.K.R.; Soleymani, F. Finding solution of nonlinear equations by a class of optimal methods. Comput. Math. Appl. 2012, 63, 764–774. [Google Scholar] [CrossRef]
  7. Soleymani, F.; Khratti, S.K.; Karimi Vanani, S. Two new classes of optimal Jarratt-type fourth-order methods. Appl. Math. Lett. 2011, 25, 847–853. [Google Scholar] [CrossRef]
  8. Wait, R. The Numerical Solution of Algebraic Equations; John Wiley & Sons: New York, NY, USA, 1979. [Google Scholar]
  9. Ostrowski, A.M. Solutions of Equations and System of Equations; Academic Press: San Diego, CA, USA, 1960. [Google Scholar]
  10. Kung, H.T.; Traub, J.F. Optimal order of one-point and multipoint iteration. J. Assoc. Comput. Mach. 1974, 21, 643–651. [Google Scholar] [CrossRef]
  11. Weerakoon, S.; Fernando, T.G.I. A variant of Newton's method with accelerated third order convergence. Appl. Math. Lett. 2000, 13, 87–93. [Google Scholar] [CrossRef]
  12. Frontini, M.; Sormani, E. Some variants of Newton's method with third-order convergence. Appl. Math. Comput. 2003, 140, 419–426. [Google Scholar] [CrossRef]
  13. Cordero, A.; Torregrosa, J.R. Variants of Newton's method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar] [CrossRef]
  14. Cordero, A.; Hueso, J.L.; Martinez, E.; Torregrosa, J.R. Generating optimal derivative free iterative methods for nonlinear equations by using polynomial interpolation. Appl. Math. Comput. 2013, 57, 1950–1956. [Google Scholar] [CrossRef]
