
A Methodology for Obtaining the Different Convergence Orders of Numerical Method under Weaker Conditions

Ioannis K. Argyros 1,*, Samundra Regmi 2, Stepan Shakhno 3 and Halyna Yarmola 4

1 Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
2 Department of Mathematics, University of Houston, Houston, TX 77204, USA
3 Department of Theory of Optimal Processes, Ivan Franko National University of Lviv, Universytetska Str. 1, 79000 Lviv, Ukraine
4 Department of Computational Mathematics, Ivan Franko National University of Lviv, Universytetska Str. 1, 79000 Lviv, Ukraine
* Author to whom correspondence should be addressed.
Submission received: 22 July 2022 / Revised: 9 August 2022 / Accepted: 10 August 2022 / Published: 14 August 2022
(This article belongs to the Special Issue Numerical Methods for Solving Nonlinear Equations)

Abstract: A process for solving an algebraic equation was presented by Newton in 1669 and later by Raphson in 1690. This technique, known as Newton's method or the Newton–Raphson method, remains a popular technique for solving nonlinear equations in abstract spaces. The objective of this article is to update developments in the convergence of this method. In particular, it is shown that the Kantorovich theory for solving nonlinear equations using Newton's method can be replaced by a finer one, with no additional and even weaker conditions. Moreover, convergence of order two is proven under these conditions, and the new convergence ratio is at least as small as the earlier one. The same methodology can be used to extend the applicability of other numerical methods. Numerical experiments complement this study.
MSC:
49M15; 47H17; 65G99; 65H10; 65N12; 58C15

1. Introduction

Let $U$, $V$ be given Banach spaces, and let $L(U,V)$ stand for the space of all continuous linear operators mapping $U$ into $V$. Consider a Fréchet-differentiable operator $L: D \subset U \to V$ and its corresponding nonlinear equation
$$L(x) = 0, \qquad (1)$$
with $D$ denoting a nonempty open set. The task of determining a solution $x^* \in D$ is very challenging but important, since applications from numerous computational disciplines reduce to the form (1) [1,2]. The analytic form of $x^*$ is rarely attainable. That is why mainly numerical methods are used, generating approximations to the solution $x^*$. Most of them are based on Newton's method [3,4,5,6,7]. Moreover, authors have developed efficient high-order and multi-step algorithms with derivatives [8,9,10,11,12,13] and divided differences [14,15,16,17,18].
Among these processes, the most widely used is Newton's method and its variants. In particular, Newton's Method (NM) is defined as
$$x_0 \in D, \qquad x_{n+1} = x_n - L'(x_n)^{-1}L(x_n), \qquad n = 0, 1, 2, \ldots. \qquad (2)$$
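To make the iteration concrete, the following is a minimal sketch of NM (2) for a system in $\mathbb{R}^m$; the callables `L_map` and `dL` and the tolerance settings are assumptions of this illustration, not part of the theory above.

```python
import numpy as np

# Minimal sketch of Newton's method (2): x_{n+1} = x_n - L'(x_n)^{-1} L(x_n).
# `L_map` (the operator) and `dL` (its Frechet derivative / Jacobian) are
# hypothetical user-supplied callables.
def newton(L_map, dL, x0, tol=1e-12, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(dL(x), L_map(x))  # solve L'(x) step = L(x)
        x = x - step
        if np.linalg.norm(step) <= tol:          # stop when steps stagnate
            break
    return x
```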
There exists a plethora of results related to the study of NM [3,5,6,7,19,20,21]. These papers are based on the theory inaugurated by Kantorovich and its variants [21]. Basically, the conditions (K) are used in non-affine or affine invariant form. Suppose:

(K1) there exist a point $x_0 \in D$ and a parameter $s \ge 0$ such that $L'(x_0)^{-1} \in L(V,U)$ and
$$\|L'(x_0)^{-1}L(x_0)\| \le s,$$
(K2) there exists a parameter $M_1 > 0$ such that the Lipschitz condition
$$\|L'(x_0)^{-1}(L'(w_1)-L'(w_2))\| \le M_1\|w_1-w_2\|$$
holds for all $w_1 \in D$ and $w_2 \in D$,
(K3)
$$s \le \frac{1}{2M_1}$$
and
(K4) $B[x_0,\rho] \subset D$, where the parameter $\rho > 0$ is given later.

Denote $B[x_0,r] := \{x \in U : \|x - x_0\| \le r\}$ for $r > 0$. Set $\rho = r_1 = \frac{1-\sqrt{1-2M_1 s}}{M_1}$.
There are many variants of Kantorovich’s convergence result for NM. One of these results follows [4,7,20].
Theorem 1.
Under the conditions (K) with $\rho = r_1$, NM is contained in $B(x_0,r_1)$, converges to a solution $x^* \in B[x_0,r_1]$ of Equation (1), and
$$\|x_{n+1}-x_n\| \le u_{n+1}-u_n.$$
Moreover, the convergence is linear if $s = \frac{1}{2M_1}$ and quadratic if $s < \frac{1}{2M_1}$. Furthermore, the solution is unique in $B[x_0,r_1]$ in the first case and in $B(x_0,r_2)$ in the second case, where $r_2 = \frac{1+\sqrt{1-2M_1 s}}{M_1}$ and the scalar sequence $\{u_n\}$ is given by
$$u_0 = 0, \quad u_1 = s, \quad u_{n+1} = u_n + \frac{M_1(u_n-u_{n-1})^2}{2(1-M_1u_n)}.$$
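The majorizing sequence $\{u_n\}$ is directly computable; the following sketch evaluates it for given $s$ and $M_1$ (the sample values are illustrative only).

```python
# Kantorovich majorizing sequence of Theorem 1:
# u_0 = 0, u_1 = s, u_{n+1} = u_n + M1 (u_n - u_{n-1})^2 / (2 (1 - M1 u_n)).
def kantorovich_sequence(s, M1, n_terms):
    u = [0.0, s]
    for _ in range(n_terms - 2):
        u.append(u[-1] + M1 * (u[-1] - u[-2]) ** 2 / (2.0 * (1.0 - M1 * u[-1])))
    return u

# Illustrative quadratic case (2 M1 s = 0.8 < 1): the sequence approaches
# r_1 = (1 - sqrt(1 - 2 M1 s)) / M1 ~ 0.2764.
print([round(u, 4) for u in kantorovich_sequence(0.2, 2.0, 7)])
```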
A plethora of studies have used conditions (K) [3,4,5,19,21,22,23].
Example 1.
Consider the cubic polynomial
$$c(x) = x^3 - a$$
for $D = B(x_0, 1-a)$ and a parameter $a \in \left(0, \frac12\right)$. Select the initial point $x_0 = 1$. Conditions (K) give $s = \frac{1-a}{3}$ and $M_1 = 2(2-a)$. It follows that the estimate
$$\frac{1-a}{3} > \frac{1}{4(2-a)}$$
holds for all $a \in \left(0, \frac12\right)$. That is, condition (K3) is not satisfied. Therefore, convergence is not assured by this theorem. However, NM may converge. Hence, clearly, there is a need to improve the results based on the conditions (K).
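A quick numerical confirmation of this failure, sampling $a$ over $(0, \frac12)$:

```python
# Example 1: s = (1 - a)/3 exceeds 1/(2 M1) = 1/(4 (2 - a)) on all of (0, 1/2),
# so the Kantorovich condition (K3) fails for every admissible a.
for a in (0.01, 0.1, 0.25, 0.4, 0.49):
    s, bound = (1.0 - a) / 3.0, 1.0 / (4.0 * (2.0 - a))
    print(f"a = {a:4.2f}: s = {s:.4f} > 1/(2 M1) = {bound:.4f} -> (K3) fails")
```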
Looking at the crucial sufficient convergence condition (K3), at (K4) and at the majorizing sequence given by Kantorovich in the preceding Theorem 1, one sees that if the Lipschitz constant $M_1$ is replaced by a smaller one, say $L > 0$, then the convergence domain is extended, the error distances $\|x_{n+1}-x_n\|$, $\|x_n-x^*\|$ become tighter, and the location of the solution becomes more accurate. This replacement also leads to fewer Newton iterates to reach a certain predecided accuracy (see the numerical Section). That is why, with the new methodology, a new domain is obtained inside $D$ that also contains the Newton iterates. Then, $L$ can replace $M_1$ in Theorem 1 to obtain the aforementioned extensions and benefits.
In this paper, several avenues are presented for achieving this goal. The idea is to replace the Lipschitz parameter $M_1$ by smaller ones.
(K5) Consider the center Lipschitz condition
$$\|L'(x_0)^{-1}(L'(w_1)-L'(x_0))\| \le M_0\|w_1-x_0\| \quad \text{for all } w_1 \in D,$$
the set $D_0 = B\!\left[x_0, \frac{1}{M_0}\right] \cap D$ and the Lipschitz-2 condition
(K6)
$$\|L'(x_0)^{-1}(L'(w_1)-L'(w_2))\| \le M\|w_1-w_2\| \quad \text{for all } w_1, w_2 \in D_0.$$
These Lipschitz parameters are related as
$$M_0 \le M_1,$$
$$M \le M_1,$$
since
$$D_0 \subseteq D.$$
Notice also that the parameters $M_0$ and $M$ are specializations of the parameter $M_1$: $M_1 = M_1(D)$, $M_0 = M_0(D)$, but $M = M(D_0)$. Therefore, no additional work is required to find $M_0$ and $M$ (see also [22,23]). Moreover, the ratio $\frac{M_0}{M_1}$ can be arbitrarily small. Indeed,
Example 2.
Define the scalar function
$$F(t) = b_0 t + b_1 + b_2 \sin\!\left(e^{b_3 t}\right)$$
for $t_0 = 0$, where $b_j$, $j = 0, 1, 2, 3$, are real parameters. It follows by this definition that, for $b_3$ sufficiently large and $b_2$ sufficiently small, $\frac{M_0}{M_1}$ can be arbitrarily small, i.e., $\frac{M_0}{M_1} \to 0$.
Then, clearly, there can be a significant extension if the parameters $M_1$ and $M_0$, or $M$ and $M_0$, replace $M_1$ in condition (K3). In this direction, the following replacements were presented in a series of papers [19,22,23], respectively:
$$(N_2) \qquad s \le \frac{1}{q_2},$$
$$(N_3) \qquad s \le \frac{1}{q_3},$$
and
$$(N_4) \qquad s \le \frac{1}{q_4},$$
where $q_1 = 2M_1$, $q_2 = M_1 + M_0$, $q_3 = \frac14\left(4M_0 + M_1 + \sqrt{M_1^2 + 8M_1M_0}\right)$ and $q_4 = \frac14\left(4M_0 + \sqrt{M_0M_1 + 8M_0^2} + \sqrt{M_0M_1}\right)$. These items are related as follows:
$$q_4 \le q_3 \le q_2 \le q_1,$$
$$(N_2) \Rightarrow (N_3) \Rightarrow (N_4),$$
and, as $\frac{M_0}{M_1} \to 0$,
$$\frac{q_2}{q_1} \to \frac12, \qquad \frac{q_3}{q_1} \to \frac14, \qquad \frac{q_4}{q_3} \to 0$$
and
$$\frac{q_4}{q_2} \to 0.$$
The preceding limits indicate by at most how many times one condition improves the other. These are the extensions given in the aforementioned references. However, it turns out that the parameter $L$ can replace $M_1$ in these papers (see Section 3). Denote by $\tilde N$, $\tilde q$ the corresponding items. It follows that
$$\frac{\tilde q_1}{q_1} = \frac{M}{M_1} \to 0, \qquad \frac{\tilde q_2}{q_2} \to 0, \qquad \frac{\tilde q_3}{q_3} \to 0$$
for $\frac{M_0}{M_1} \to 0$ and $\frac{M}{M_1} \to 0$. Hence, the new results also extend the ones in the aforementioned references. Other extensions involve tighter majorizing sequences for NM (see Section 2) and an improved uniqueness report for the solution $x^*$ (Section 3). The applications appear in Section 4, followed by conclusions in Section 5.
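The parameters $q_1$–$q_4$ are directly computable. The sketch below, using the formulas as reconstructed above, shows how the weaker conditions open up as $\frac{M_0}{M_1} \to 0$:

```python
import math

# Sufficient-convergence parameters q1..q4; the smaller q_i, the weaker the
# corresponding condition s <= 1/q_i.
def q_params(M0, M1):
    q1 = 2.0 * M1
    q2 = M0 + M1
    q3 = 0.25 * (4.0 * M0 + M1 + math.sqrt(M1**2 + 8.0 * M0 * M1))
    q4 = 0.25 * (4.0 * M0 + math.sqrt(M0 * M1 + 8.0 * M0**2)
                 + math.sqrt(M0 * M1))
    return q1, q2, q3, q4

for M0 in (1.0, 0.1, 0.01):          # M1 fixed at 1, M0/M1 -> 0
    print([round(q, 4) for q in q_params(M0, 1.0)])
```

For $M_0 = M_1$ all four parameters coincide with $2M_1$; as $\frac{M_0}{M_1}$ shrinks, $q_4$ becomes arbitrarily smaller than $q_1$, reflecting the limits displayed above.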

2. Majorizations

Let $K_0$, $M_0$, $K$, $M$ be given positive parameters and let $s$ be a positive variable. The real sequence $\{t_n\}$ defined by $t_0 = 0$, $t_1 = s$, $t_2 = t_1 + \frac{K(t_1-t_0)^2}{2(1-K_0t_1)}$ and, for $n = 1, 2, \ldots$,
$$t_{n+2} = t_{n+1} + \frac{M(t_{n+1}-t_n)^2}{2(1-M_0t_{n+1})} \qquad (6)$$
plays an important role in the study of NM; we adopt the notation $t_n(s) = t_n$, $n = 1, 2, \ldots$. That is why some convergence results for it are listed next in this study.
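A minimal sketch for evaluating (6), assuming the reconstruction above (first step with $K_0$, $K$; tail with $M_0$, $M$):

```python
# Majorizing sequence (6): t_0 = 0, t_1 = s,
# t_2 = t_1 + K (t_1 - t_0)^2 / (2 (1 - K0 t_1)), and for n >= 1
# t_{n+2} = t_{n+1} + M (t_{n+1} - t_n)^2 / (2 (1 - M0 t_{n+1})).
def t_sequence(s, K0, K, M0, M, n_terms):
    t = [0.0, s]
    t.append(t[1] + K * (t[1] - t[0]) ** 2 / (2.0 * (1.0 - K0 * t[1])))
    for _ in range(n_terms - 3):
        t.append(t[-1] + M * (t[-1] - t[-2]) ** 2 / (2.0 * (1.0 - M0 * t[-1])))
    return t
```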
Lemma 1.
Suppose that the conditions
$$K_0 t_1 < 1 \quad and \quad t_{n+1} < \frac{1}{M_0} \qquad (7)$$
hold for all $n = 1, 2, \ldots$. Then, the following assertions hold:
$$t_n < t_{n+1} < \frac{1}{M_0} \qquad (8)$$
and there exists $t^* \in \left[s, \frac{1}{M_0}\right]$ such that $\lim_{n\to\infty} t_n = t^*$.
Proof. 
The definition of the sequence $\{t_n\}$ and condition (7) imply (8). Moreover, the increasing sequence $\{t_n\}$ has $\frac{1}{M_0}$ as an upper bound. Hence, it is convergent to its (unique) least upper bound $t^*$. □
Next, stronger convergence criteria are presented. However, these criteria are easier to verify than the conditions of Lemma 1. Define the parameter $\delta$ by
$$\delta = \frac{2M}{M + \sqrt{M^2 + 8M_0M}}.$$
This parameter plays a role in the following results.
Case: K 0 = M 0 and K = M .
Part (i) of the next auxiliary result relates to the Lemma in [19].
Lemma 2.
Suppose that the condition
$$s \le \frac{1}{2M_2}$$
holds, where
$$M_2 = \frac14\left(M + 4M_0 + \sqrt{M^2 + 8M_0M}\right).$$
Then, the following assertions hold:
(i)
Estimates
$$t_{n+1} - t_n \le \delta(t_n - t_{n-1})$$
and
$$t_{n+1} \le \frac{1-\delta^{n+1}}{1-\delta}\,s < \frac{s}{1-\delta} \qquad (13)$$
hold. Moreover, the conclusions of Lemma 1 are true for the sequence $\{t_n\}$. The sequence $\{t_n\}$ converges linearly to $t^* \in \left(0, \frac{s}{1-\delta}\right]$. Furthermore, if for some $\mu > 0$
$$s < \frac{\mu}{(1+\mu)M_2}, \qquad (14)$$
then the following assertions hold:
(ii)
$$t_{n+1} - t_n \le \frac{M(1+\mu)}{2}(t_n - t_{n-1})^2 \qquad (15)$$
and
$$t_{n+1} - t_n \le \frac{1}{\alpha}(\alpha s)^{2^n}, \qquad (16)$$
where $\alpha = \frac{M(1+\mu)}{2}$, and the conclusions of Lemma 1 are true for the sequence $\{t_n\}$. The sequence $\{t_n\}$ converges quadratically to $t^*$.
Proof. 
(i)
It is given in [19].
(ii)
Notice that condition (14) implies (11) by the choice of the parameter $\mu$. Assertion (15) holds if the estimate
$$0 < \frac{M}{2(1-M_0t_{n+1})} \le \frac{M}{2}(1+\mu) \qquad (17)$$
is true. This estimate is true for $n = 1$, since it is equivalent to $M_0 s \le \frac{\mu}{1+\mu}$. But this is true by $M_0 \le 2M_2$, condition (11) and the inequality $\frac{\mu M_0}{(1+\mu)2M_2} \le \frac{\mu}{1+\mu}$. Then, in view of estimate (13), estimate (17) certainly holds provided that
$$(1+\mu)M_0(1+\delta+\dots+\delta^{n+1})s - \mu \le 0. \qquad (18)$$
This estimate motivates the introduction of recurrent polynomials $p_n$, which are defined by
$$p_n(t) = (1+\mu)M_0(1+t+\dots+t^{n+1})s - \mu, \qquad (19)$$
$t \in [0,1)$. In view of the polynomial $p_n$, assertion (18) holds if
$$p_n(t) \le 0 \ \ at \ \ t = \delta. \qquad (20)$$
The polynomials $p_n$ are connected:
$$p_{n+1}(t) - p_n(t) = (1+\mu)M_0 t^{n+2}s > 0,$$
so
$$p_n(t) < p_{n+1}(t) \quad \text{for all } t \in [0,1).$$
Define the function $p: [0,1) \to \mathbb{R}$ by
$$p(t) = \lim_{n\to\infty} p_n(t).$$
It follows by definition (19) that
$$p(t) = \frac{(1+\mu)M_0 s}{1-t} - \mu.$$
Hence, assertion (20) holds if
$$p(t) \le 0 \ \ at \ \ t = \delta,$$
or equivalently
$$M_0 s \le \frac{\mu}{1+\mu}\,\frac{\sqrt{M^2+8M_0M} - M}{\sqrt{M^2+8M_0M} + M},$$
which can be rewritten as condition (14). Therefore, the induction for assertion (17) is completed. That is, assertion (15) holds by the definition of the sequence $\{t_n\}$. It follows from (15) that
$$\alpha(t_{n+1}-t_n) \le \alpha^2(t_n-t_{n-1})^2 = \left(\alpha(t_n-t_{n-1})\right)^2 \le \left(\alpha(t_{n-1}-t_{n-2})\right)^{2^2} \le \dots \le \left(\alpha(t_1-t_0)\right)^{2^n},$$
so
$$t_{n+1}-t_n \le \alpha^{1+2+2^2+\dots+2^{n-1}}s^{2^n} = \frac{1}{\alpha}(\alpha s)^{2^n}.$$
Notice also that $M < 2M_2$; hence (14) gives $\alpha s \le \frac{M\mu}{2M_2} < \mu$. □
Remark 1.
(1)
The technique of recurrent polynomials in part (i) is used to produce the convergence condition (11) and a closed-form upper bound on the sequence $\{t_n\}$ (see estimate (13)) other than $\frac{1}{M_0}$ and $t^*$ (which is not given in closed form). This way we also established the linear convergence of the sequence $\{t_n\}$. By considering condition (14), but still being able to use estimate (13), we establish the quadratic convergence of the sequence $\{t_n\}$ in part (ii) of Lemma 2.
(2)
If $\mu = 1$, then (14) is the strict version of condition (10).
(3)
The sequence $\{t_n\}$ is tighter than the Kantorovich sequence $\{u_n\}$, since $M_0 \le M_1$ and $M \le M_1$. Concerning the ratio of convergence $\alpha s$: this is also smaller than $r = \frac{2M_1s}{\left(1+\sqrt{1-2M_1s}\right)^2}$ given in the Kantorovich Theorem [19]. Indeed, by these definitions, $\alpha s < r$ provided that $\mu \in (0, \mu_1)$, where $\mu_1 = \frac{4M_1}{M\left(1+\sqrt{1-2M_1s}\right)^2} - 1$. Notice that
$$\left(1+\sqrt{1-2M_1s}\right)^2 < (1+1)^2 = 4 \le \frac{4M_1}{M},$$
so $\mu_1 > 0$.
Part (i) of the next auxiliary result relates to a Lemma in [23]. The case $M_0 = M$ has been studied above. So, in the next Lemma, we assume $M_0 \ne M$ in part (ii).
Lemma 3.
Suppose that the condition
$$s \le \frac{1}{2M_3} \qquad (25)$$
holds, where
$$M_3 = \frac18\left(4M_0 + \sqrt{M_0M + 8M_0^2} + \sqrt{M_0M}\right).$$
Then, the following assertions hold:
(i)
$$t_{n+1} - t_n \le \delta(t_n - t_{n-1}) \le \delta^{n-1}\,\frac{M_0 s^2}{2(1-M_0 s)}$$
and
$$t_{n+2} \le s + \frac{1-\delta^{n+1}}{1-\delta}(t_2 - t_1) < t^{**} = s + \frac{t_2 - t_1}{1-\delta}, \qquad n = 1, 2, \ldots. \qquad (27)$$
Moreover, the conclusions of Lemma 1 are true for the sequence $\{t_n\}$. The sequence $\{t_n\}$ converges linearly to $t^* \in (0, t^{**}]$. Define the parameters $h_0$, $\bar M_3$ by
$$h_0 = \frac{2\left(\sqrt{M_0M} + \sqrt{8M_0^2 + M_0M}\right)}{M\left(\sqrt{M_0M} + \sqrt{8M_0^2 + M_0M} + 4M_0\right)}, \qquad \bar M_3 = \frac{1}{2h_0},$$
$$\gamma = 1 + \mu, \qquad \beta = \frac{\mu}{1+\mu}, \qquad d = \frac{1}{2(1-\delta)}$$
and
$$\mu = \frac{M_0}{2M_3 - M_0}.$$
(ii)
Suppose that
$$M_0 < M \le \frac{M_0}{\theta} \qquad (28)$$
and (25) hold, where $\theta \approx 0.6478$ is the smallest positive solution of the scalar equation $2z^4 + z - 1 = 0$. Then, the conclusions of Lemma 2 also hold for the sequence $\{t_n\}$. The sequence converges quadratically to $t^* \in (0, t^{**}]$.
(iii)
Suppose that
$$M \ge \frac{1}{\theta}M_0 \quad and \quad s < \frac{1}{2\bar M_3} \qquad (29)$$
hold. Then, the conclusions of Lemma 2 are true for the sequence $\{t_n\}$. The sequence $\{t_n\}$ converges quadratically to $t^* \in (0, t^{**}]$.
(iv)
Suppose that $M_0 > M$ and (25) hold. Then, $\bar M_3 \le M_3$ and the conclusions of Lemma 2 are true for the sequence $\{t_n\}$. The sequence $\{t_n\}$ converges quadratically to $t^* \in (0, t^{**}]$.
Proof. 
(i)
It is given in Lemma 2.1 in [23].
(ii)
As in Lemma 2, but using estimate (27) instead of (13), it must be shown that
$$\frac{M}{2(1-M_0t_{n+1})} \le \frac{M\gamma}{2}. \qquad (30)$$
It suffices that
$$\gamma M_0\left(s + \frac{1-\delta^n}{1-\delta}(t_2 - t_1)\right) + 1 - \gamma \le 0$$
or
$$p_n(t) \le 0 \ \ at \ \ t = \delta,$$
where
$$p_n(t) = \gamma M_0(1+t+\dots+t^{n-1})(t_2-t_1) + \gamma M_0 s + 1 - \gamma.$$
Notice that
$$p_{n+1}(t) - p_n(t) = \gamma M_0 t^n(t_2 - t_1) > 0.$$
Define the function $p: [0,1) \to \mathbb{R}$ by
$$p(t) = \lim_{n\to\infty} p_n(t).$$
It follows that
$$p(t) = \frac{\gamma M_0(t_2-t_1)}{1-t} + \gamma M_0 s + 1 - \gamma.$$
So, (30) holds provided that
$$p(t) \le 0 \ \ at \ \ t = \delta.$$
By the definition of the parameters $\gamma$, $d$, $\beta$ and for $M_0 s = x$, this holds if
$$\frac{x^2}{2(1-x)(1-\delta)} + x \le \beta$$
or
$$(d-1)x^2 + (1+\beta)x - \beta \le 0$$
or
$$x \le \frac{-(1+\beta) + \sqrt{(1-\beta)^2 + 4\beta d}}{2(d-1)}$$
or
$$s \le \frac{-(1+\beta) + \sqrt{(1-\beta)^2 + 4\beta d}}{2(d-1)M_0}. \qquad (31)$$
Claim. The right-hand side of assertion (31) equals $\frac{1}{2M_3}$. Indeed, this is true if
$$-(1+\beta) + \sqrt{(1-\beta)^2 + 4\beta d} = \frac{2M_0(d-1)}{2M_3}$$
or
$$1 + \beta + \frac{2M_0(d-1)}{2M_3} = \sqrt{(1-\beta)^2 + 4\beta d}$$
or, by squaring both sides,
$$1 + \beta^2 + 2\beta + \frac{4M_0^2(d-1)^2}{4M_3^2} + 2(1+\beta)\,\frac{2M_0(d-1)}{2M_3} = 1 + \beta^2 - 2\beta + 4\beta d$$
or
$$\beta\left(1 - \frac{M_0(1-d)}{2M_3} - d\right) = \frac{M_0(1-d)}{2M_3}\left(1 - \frac{M_0}{2M_3}\right)$$
or
$$\beta\left(1 - \frac{M_0}{2M_3}\right)(1-d) = \left(1 - \frac{M_0}{2M_3}\right)(1-d)\,\frac{M_0}{2M_3}$$
or
$$\beta = \frac{M_0}{2M_3}$$
or
$$\frac{\mu}{1+\mu} = \frac{M_0}{2M_3}$$
or
$$\mu = \frac{M_0}{2M_3 - M_0},$$
which is true. Notice also that
$$2M_3 - M_0 = \frac14\left(4M_0 + \sqrt{M_0M} + \sqrt{M_0M + 8M_0^2}\right) - M_0 = \frac14\left(\sqrt{M_0M} + \sqrt{M_0M + 8M_0^2}\right) > 0$$
and $2M_3 - 2M_0 > 0$, since $2M_3 - 2M_0 = \frac{\sqrt{M_0M} + \sqrt{M_0M + 8M_0^2} - 4M_0}{4}$, $M_0 < \sqrt{M_0M}$ and $3M_0 < \sqrt{M_0M + 8M_0^2}$ (by condition (28)). Thus, $\mu \in (0,1)$. It remains to show that
$$\alpha s = \frac{M}{2}(1+\mu)s < 1$$
or, by the choice of $\mu$,
$$\frac{M}{2}\left(1 + \frac{M_0}{2M_3 - M_0}\right)s < 1$$
or
$$s < \frac{1}{2\bar M_3}. \qquad (33)$$
Claim. $\bar M_3 \le M_3$. By the definitions of the parameters $M_3$ and $\bar M_3$, it must be shown that
$$\frac{M\left(\sqrt{M_0M} + \sqrt{M_0M + 8M_0^2} + 4M_0\right)}{2\left(\sqrt{M_0M} + \sqrt{M_0M + 8M_0^2}\right)} \le \frac{\sqrt{M_0M} + \sqrt{M_0M + 8M_0^2} + 4M_0}{4}$$
or, for $y = \frac{M_0}{M}$,
$$2 - \sqrt{y} \le \sqrt{y + 8y^2}. \qquad (34)$$
By (28), $2 - \sqrt{y} > 0$, so estimate (34) holds if $2y^2 + \sqrt{y} - 1 \ge 0$, or
$$2z^4 + z - 1 \ge 0 \quad for \ \ z = \sqrt{y}.$$
However, the last inequality holds by (28). The claim is justified. So, estimate (33) holds by (25) and this claim.
(iii)
It follows from the proof in part (ii). However, this time $M_3 \le \bar M_3$ follows from (29). Notice also that, according to part (ii), condition (25) implies (29). Moreover, according to part (iii), condition (29) implies (25).
(iv)
As in case (ii), estimate (34) must be satisfied. If $M_0 \ge 4M$, then estimate (34) holds, since $2 - \sqrt{y} \le 0$. If $M < M_0 < 4M$, then again $M_0 > \theta M$, so estimate (34), or equivalently $2z^4 + z - 1 > 0$, holds. □
Comments similar to those in Remark 1 can follow for Lemma 3.
Case: The parameters $K_0$ and $K$ are not both equal to $M_0$.
It is convenient to define the parameter $\delta_0$ by
$$\delta_0 = \frac{K(t_2 - t_1)}{2(1 - K_0t_2)}$$
and the quadratic polynomial $\varphi$ by
$$\varphi(t) = \left(MK + 2\delta M_0(K - 2K_0)\right)t^2 + 4\delta(M_0 + K_0)t - 4\delta.$$
The discriminant $\Delta$ of the polynomial $\varphi$ can be written as
$$\Delta = 16\delta\left(\delta(M_0 - K_0)^2 + (M + 2\delta M_0)K\right) > 0.$$
It follows that the positive root $\frac{1}{h_1}$ given by the quadratic formula can be written as
$$\frac{1}{h_1} = \frac{2\delta}{\delta(M_0 + K_0) + \sqrt{\left(\delta(M_0 + K_0)\right)^2 + \delta\left(MK + 2\delta M_0(K - 2K_0)\right)}}.$$
Denote by $\frac{1}{h_2}$ the unique positive zero of the equation
$$M_0(K - 2K_0)t^2 + 2M_0t - 1 = 0.$$
This root can be written as
$$\frac{1}{h_2} = \frac{1}{M_0 + \sqrt{M_0^2 + M_0(K - 2K_0)}}.$$
Define the parameter $M_4$ by
$$\frac{1}{M_4} = \min\left\{\frac{1}{h_1}, \frac{1}{h_2}\right\}. \qquad (35)$$
Part (i) of the next auxiliary result relates to Lemma 2.1 in [22].
Lemma 4.
Suppose that
$$s \le \frac{1}{2M_4} \qquad (36)$$
holds, where the parameter $M_4$ is given by Formula (35). Then, the following assertions hold:
(i)
Estimates
$$t_{n+2} - t_{n+1} \le \delta_0\,\delta^{n-1}\,\frac{Ks^2}{2(1 - K_0s)}$$
and
$$t_{n+2} \le s + \left(1 + \delta_0\,\frac{1-\delta^n}{1-\delta}\right)(t_2 - t_1) \le \bar t = s + \left(1 + \frac{\delta_0}{1-\delta}\right)(t_2 - t_1).$$
Moreover, the conclusions of Lemma 2 are true for the sequence $\{t_n\}$. The sequence $\{t_n\}$ converges linearly to $t^* \in (0, \bar t\,]$.
(ii)
Suppose that
$$M_0\left(\frac{\delta_0(t_2 - t_1)}{1-\delta} + s\right) \le \beta, \qquad (37)$$
$$s < \frac{2}{(1+\mu)M} \qquad (38)$$
and (36) hold for some $\mu > 0$. Then, the conclusions of Lemma 3 are true for the sequence $\{t_n\}$. The sequence $\{t_n\}$ converges quadratically to $t^* \in (0, \bar t\,]$.
Proof. 
(i)
It is given in Lemma 2.1 in [22].
(ii)
Define the polynomial $p_n$ by
$$p_n(t) = \gamma M_0\delta_0(1+t+\dots+t^{n-1})(t_2-t_1) + \gamma M_0 s + 1 - \gamma.$$
By this definition, it follows that
$$p_{n+1}(t) - p_n(t) = \gamma M_0\delta_0(t_2-t_1)t^n > 0.$$
As in the proof of Lemma 3 (ii), the estimate
$$\frac{M}{2(1-M_0t_{n+1})} \le \frac{M}{2}\gamma \qquad (39)$$
holds provided that
$$p_n(t) \le 0 \ \ at \ \ t = \delta.$$
Define the function $p: [0,1) \to \mathbb{R}$ by
$$p(t) = \lim_{n\to\infty} p_n(t).$$
It follows by the definition of the function $p$ and the polynomial $p_n$ that
$$p(t) = \frac{\gamma M_0\delta_0(t_2-t_1)}{1-t} + \gamma M_0 s + 1 - \gamma.$$
Hence, estimate (39) holds provided that
$$p(t) \le 0 \ \ at \ \ t = \delta.$$
However, this assertion holds by condition (37). Moreover, the definition of $\alpha$ and condition (38) of Lemma 4 imply that
$$\alpha s = \frac{M}{2}(1+\mu)s < 1.$$
Hence, the sequence $\{t_n\}$ converges quadratically to $t^*$. □
Remark 2.
Conditions (36)–(38) can be condensed, and a specific choice for $\mu$ can be given, as follows. Define the function $f: \left[0, \frac{1}{K_0}\right) \to \mathbb{R}$ by
$$f(t) = 1 - M_0\left(\frac{\delta_0(t)\left(t_2(t) - t_1(t)\right)}{1-\delta} + t\right).$$
It follows by this definition that
$$f(0) = 1 > 0, \qquad f(t) \to -\infty \ \ as \ \ t \to \left(\frac{1}{K_0}\right)^-.$$
Denote by $\mu_2$ the smallest solution of the equation $f(t) = 0$ in $\left(0, \frac{1}{K_0}\right)$. Then, by choosing $\mu = \mu_2$, condition (37) holds as an equality. It follows that if we solve the first condition in (37) for $s$, then conditions (36)–(38) can be condensed as
$$s \le s_1 = \min\left\{\frac{1}{2M_4}, \frac{2}{(2+\mu_2)M}\right\}. \qquad (40)$$
If $s_1 = \frac{2}{(2+\mu_2)M}$, then condition (40) should hold as a strict inequality to show quadratic convergence.

3. Semi-Local Convergence

The sequence $\{t_n\}$ given by (6) was shown to be majorizing for $\{x_n\}$ and tighter than $\{u_n\}$ under the conditions of the Lemmas in [19,22,23], respectively. These correspond to part (i) of Lemmas 2, 3 and 4. However, by asking the initial approximation $s$ to be bounded above by a slightly larger bound, the quadratic order of convergence is recovered. Hence, the preceding Lemmas can replace the older ones, respectively, in the semi-local proofs for NM in these references. The parameters $K_0$ and $K$ are connected to $x_0$ and $L$ as follows:
(K7) there exists a parameter $K_0 > 0$ such that, for $x_1 = x_0 - L'(x_0)^{-1}L(x_0)$,
$$\|L'(x_0)^{-1}(L'(x_1) - L'(x_0))\| \le K_0\|x_1 - x_0\|,$$
(K8) there exists a parameter $K > 0$ such that, for all $x, y \in D_0$,
$$\left\|\int_0^1 L'(x_0)^{-1}\left(L'(x + \xi(y-x)) - L'(x)\right)d\xi\right\| \le \frac{K}{2}\|y - x\|.$$
Note that $K_0 \le M_0$ and $K \le M$. The convergence criteria in Lemmas 1, 3 and 4 do not necessarily imply each other in each case. That is why we do not rely only on Lemma 4 to show the semi-local convergence of NM. Consider the following four sets of conditions:
  • (A1): (K1), (K4), (K5), (K6) and the conditions of Lemma 1 hold for ρ = t*, or
  • (A2): (K1), (K4), (K5), (K6) and the conditions of Lemma 2 hold with ρ = t*, or
  • (A3): (K1), (K4), (K5), (K6) and the conditions of Lemma 3 hold with ρ = t*, or
  • (A4): (K1), (K4), (K5), (K6) and the conditions of Lemma 4 hold with ρ = t*.
The upper bounds of the limit point given in the Lemmas in closed form can replace ρ in condition (K4). The proofs of the semi-local convergence of NM are omitted, since they are given in the aforementioned references [19,20,22,23], with the exception of the quadratic convergence established in part (ii) of the presented Lemmas.
Theorem 2.
Suppose that any of the conditions $A_i$, $i = 1, 2, 3, 4$, hold. Then, the sequence $\{x_n\}$ generated by NM is well defined in $B[x_0, \rho]$, remains in $B[x_0, \rho]$ for all $n = 0, 1, 2, \ldots$ and converges to a solution $x^* \in B[x_0, \rho]$ of the equation $L(x) = 0$. Moreover, the following assertions hold for all $n = 0, 1, 2, \ldots$:
$$\|x_{n+1} - x_n\| \le t_{n+1} - t_n$$
and
$$\|x^* - x_n\| \le t^* - t_n.$$
A uniqueness result for the solution is given next. Notice, however, that we do not use all of the conditions $A_i$.
Proposition 1.
Suppose: there exists a solution $x^* \in B(x_0, \rho_0)$ of the equation $L(x) = 0$ for some $\rho_0 > 0$; condition (K5) holds, and there exists $\rho_1 \ge \rho_0$ such that
$$\frac{M_0}{2}(\rho_0 + \rho_1) < 1. \qquad (41)$$
Set $D_1 = D \cap B[x_0, \rho_1]$. Then, the only solution of the equation $L(x) = 0$ in the set $D_1$ is $x^*$.
Proof. 
Let $\tilde x^* \in D_1$ be a solution of the equation $L(x) = 0$. Define the linear operator $J = \int_0^1 L'(x^* + \tau(\tilde x^* - x^*))\,d\tau$. Then, using (K5) and (41),
$$\|L'(x_0)^{-1}(L'(x_0) - J)\| \le M_0\int_0^1\left((1-\tau)\|x_0 - x^*\| + \tau\|x_0 - \tilde x^*\|\right)d\tau \le \frac{M_0}{2}(\rho_0 + \rho_1) < 1.$$
Therefore, $\tilde x^* = x^*$ is implied by the invertibility of $J$ and the identity
$$J(\tilde x^* - x^*) = L(\tilde x^*) - L(x^*) = 0. \ \square$$
If the conditions of Theorem 2 hold, set $\rho_0 = \rho$.

4. Numerical Experiments

Two experiments are presented in this Section.
Example 3.
Recall Example 1 (with $L(x) = c(x)$). The parameters are $s = \frac{1-a}{3}$, $K_0 = \frac{a+5}{3}$, $M_0 = 3-a$, $M_1 = 2(2-a)$. It also follows that $D_0 = B(1, 1-a) \cap B\!\left[1, \frac{1}{M_0}\right] = B\!\left[1, \frac{1}{M_0}\right]$, so $K = M = 2\left(1 + \frac{1}{3-a}\right)$. Denote by $T_i$, $i = 1, 2, 3, 4$, the sets of values of $a$ for which the conditions (K3), $(N_2)$–$(N_4)$ are satisfied, respectively. Then, by solving these inequalities for $a$: $T_1 = \emptyset$, $T_2 = [0.4648, 0.5)$, $T_3 = [0.4503, 0.5)$ and $T_4 = [0.4272, 0.5)$.
The domain can be further extended. Choose $a = 0.4$; then $\frac{1}{M_0} = 0.3846$. Table 1 shows that the conditions of Lemma 1 are satisfied, since $K_0t_1 < 1$ and $M_0t_{n+1} < 1$ for all $n = 1, 2, \ldots$.
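The entries of Table 1 can be reproduced with the t_sequence sketch from Section 2:

```python
# Example 3 with a = 0.4: reproduces Table 1 (0.2000, 0.2865, 0.3272, ...).
a = 0.4
s, K0, M0 = (1 - a) / 3, (a + 5) / 3, 3 - a
K = M = 2 * (1 + 1 / (3 - a))
t = t_sequence(s, K0, K, M0, M, n_terms=9)
print([round(v, 4) for v in t[1:]])                   # -> t* ~ 0.3456
assert K0 * t[1] < 1 and all(M0 * v < 1 for v in t)   # Lemma 1 conditions
```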
Example 4.
Let $U = V = \mathbb{R}^3$, $D = B(x_0, 0.5)$ and
$$L(x) = \left(e^{x_1} - 1,\ x_2^3 + x_2,\ x_3\right)^T.$$
The equation $L(x) = 0$ has the solution $x^* = (0, 0, 0)^T$ and $L'(x) = \mathrm{diag}\left(e^{x_1},\ 3x_2^2 + 1,\ 1\right)$.
Let $x_0 = (0.1, 0.1, 0.1)^T$. Then, $s = \|L'(x_0)^{-1}L(x_0)\| \approx 0.1569$,
$$M_0 = \max\left\{\frac{e^{0.6}}{e^{-0.4}},\ \frac{3(0.6 + 0.1)}{1.03}\right\} \approx 2.7183,$$
$$M_1 = \max\left\{\frac{e^{0.6}}{e^{-0.4}},\ \frac{3(0.6 + 0.6)}{1.03}\right\} \approx 3.49513.$$
It also follows that $\frac{1}{M_0} \approx 0.3679$, $D_0 = D \cap B\!\left[x_0, \frac{1}{M_0}\right] = B[x_0, 0.3679]$ and
$$K_0 = \max\left\{\frac{e^{p_1}}{e^{-0.4}},\ \frac{3(p_2 + 0.1)}{1.03}\right\} \approx 2.3819,$$
$$M = K = \max\left\{\frac{e^{p_1}}{e^{-0.4}},\ \frac{6p_1}{1.03}\right\} \approx 2.7255,$$
where $p_1 = 0.1 + \frac{1}{M_0} \approx 0.4679$ and $p_2 \approx 0.0019$.
Notice that $M_0 < M_1$ and $M < M_1$. The Kantorovich convergence condition (K3) is not fulfilled, since $2M_1s \approx 1.0968 > 1$. Hence, the convergence of NM is not assured by the Kantorovich criterion. However, the new conditions $(N_2)$–$(N_4)$ are fulfilled, since $q_2s \approx 0.9749 < 1$, $q_3s \approx 0.9320 < 1$ and $q_4s \approx 0.8723 < 1$.
Table 2 shows that the conditions of Lemma 1 are fulfilled, since $K_0t_1 < 1$ and $M_0t_{n+1} < 1$ for all $n = 1, 2, \ldots$.
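These numerical claims are easy to re-check with the q_params sketch from Section 1:

```python
# Example 4: (K3) fails while (N2)-(N4) hold for the stated constants.
s, M0, M1 = 0.1569, 2.7183, 3.49513
q1, q2, q3, q4 = q_params(M0, M1)
print(round(q1 * s, 4))                                      # ~1.0968 > 1
print(round(q2 * s, 4), round(q3 * s, 4), round(q4 * s, 4))  # all < 1
```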
Example 5.
Let $U = V = C[0,1]$ be the space of continuous real functions defined on the interval $[0,1]$, equipped with the max-norm. Set $D = B[x_0, 3]$ and define the operator $L$ on $D$ as
$$L(v)(v_1) = v(v_1) - y(v_1) - \int_0^1 N(v_1, t)\,v^3(t)\,dt, \quad v \in C[0,1],\ v_1 \in [0,1], \qquad (43)$$
where $y \in C[0,1]$ is given, and $N$ is a kernel given by Green's function as
$$N(v_1, t) = \begin{cases}(1 - v_1)t, & t \le v_1,\\ v_1(1 - t), & v_1 \le t.\end{cases} \qquad (44)$$
By applying this definition, the derivative of $L$ is
$$[L'(v)(z)](v_1) = z(v_1) - 3\int_0^1 N(v_1, t)\,v^2(t)\,z(t)\,dt \qquad (45)$$
for all $z \in C[0,1]$, $v_1 \in [0,1]$. Pick $x_0(v_1) = y(v_1) = 1$. It then follows from (43)–(45) that $L'(x_0)^{-1} \in L(V, U)$,
$$\|I - L'(x_0)\| < 0.375, \qquad \|L'(x_0)^{-1}\| \le 1.6,$$
$$s = 0.2, \qquad M_0 = 2.4, \qquad M_1 = 3.6,$$
and $D_0 = B(x_0, 3) \cap B[x_0, 0.4167] = B[x_0, 0.4167]$, so $M = 1.5$. Notice that $M_0 < M_1$ and $M < M_1$. Choose $K_0 = K = M_0$. The Kantorovich convergence condition (K3) is not fulfilled, since $2M_1s = 1.44 > 1$. Hence, the convergence of NM is not assured by the Kantorovich criterion. However, the new condition (36) is fulfilled, since $2M_4s = 0.6 < 1$.
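A hedged sketch of Example 5 in discretized form: the uniform grid, its size, the trapezoidal weights and the stopping rule are assumptions of this illustration; the kernel and derivative follow (44) and (45).

```python
import numpy as np

# Discretize v(s) - 1 - int_0^1 N(s,t) v(t)^3 dt = 0 and apply NM (2).
m = 101
grid = np.linspace(0.0, 1.0, m)
w = np.full(m, 1.0 / (m - 1)); w[0] = w[-1] = 0.5 / (m - 1)  # trapezoid rule
S, T = np.meshgrid(grid, grid, indexing="ij")
N_ker = np.where(T <= S, (1.0 - S) * T, S * (1.0 - T))       # Green's kernel (44)

v = np.ones(m)                                   # x0 = y = 1
for _ in range(10):
    F = v - 1.0 - (N_ker * w) @ v**3             # L(v) on the grid, per (43)
    J = np.eye(m) - 3.0 * (N_ker * w) * v**2     # derivative (45), discretized
    step = np.linalg.solve(J, F)
    v -= step
    if np.linalg.norm(step, np.inf) < 1e-12:
        break
print(v.max())   # the solution stays close to the initial guess 1
```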
Example 6.
Let $U = V = \mathbb{R}$, $D = (-1, 1)$ and
$$L(x) = e^x + 2x - 1.$$
The equation $L(x) = 0$ has the solution $x^* = 0$. The parameters are $s = \left|\frac{e^{x_0} + 2x_0 - 1}{e^{x_0} + 2}\right|$, $M_0 = M_1 = e$, $K_0 = K = M = e^{x_0 + \frac1e}$ and
$$D_0 = (-1, 1) \cap \left(x_0 - \frac1e,\ x_0 + \frac1e\right) = \left(x_0 - \frac1e,\ x_0 + \frac1e\right).$$
Let us choose $x_0 = 0.15$. Then, $s \approx 0.1461$. Conditions (K3) and $(N_2)$ are fulfilled. The majorizing sequences $\{t_n\}$ from (6) and $\{u_n\}$ from Theorem 1 are:
$$\{t_n\} = \{0, 0.1461, 0.1698, 0.1707, 0.1707, 0.1707, 0.1707\},$$
$$\{u_n\} = \{0, 0.1461, 0.1942, 0.2008, 0.2009, 0.2009, 0.2009, 0.2009\}.$$
Table 3 reports the error bounds. Notice that the new error bounds are tighter than the ones in Theorem 1.
Let us choose $x_0 = 0.2$. Then, $s \approx 0.1929$. In this case, condition (K3) does not hold, but $(N_2)$ holds. The majorizing sequence $\{t_n\}$ from (6) is:
$$\{t_n\} = \{0, 0.1929, 0.2427, 0.2491, 0.2492, 0.2492, 0.2492, 0.2492\}.$$
Table 4 shows the error bounds from Theorem 2.
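The sequences above (and the first columns of Tables 3 and 4) can be regenerated with the earlier sketches:

```python
import math

# Example 6: L(x) = exp(x) + 2x - 1 with x0 = 0.15; compares the Newton
# iterates with the majorizing sequences {t_n} from (6) and {u_n} of Theorem 1.
x0 = 0.15
L_ = lambda x: math.exp(x) + 2.0 * x - 1.0
dL = lambda x: math.exp(x) + 2.0
s = abs(L_(x0) / dL(x0))                      # ~0.1461
M0 = M1 = math.e
K0 = K = M = math.exp(x0 + 1.0 / math.e)
print([round(v, 4) for v in t_sequence(s, K0, K, M0, M, 7)])  # Section 2 sketch
print([round(v, 4) for v in kantorovich_sequence(s, M1, 8)])  # Section 1 sketch
x = x0
for _ in range(4):                            # Newton iterates -> x* = 0
    x -= L_(x) / dL(x)
print(abs(x))
```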

5. Conclusions

We developed a comparison between results on the semi-local convergence of NM. There exists an extensive literature on the convergence analysis of NM. Most convergence results are based on recurrent relations, where the Lipschitz conditions are given in affine or non-affine invariant forms. The new methodology uses recurrent functions instead. The idea is to construct a domain, included in the one used before, which also contains the Newton iterates. This is important, since the new results do not require additional conditions. This way, the new sufficient convergence conditions are weaker in the Lipschitz case, since they rely on smaller constants. Other benefits include tighter error bounds and a more precise determination of the uniqueness region for the solution. The new constants are special cases of the earlier ones. The methodology is very general, making it suitable for extending the usage of other numerical methods under Hölder or more generalized majorant conditions. This will be the topic of our future work.

Author Contributions

Conceptualization I.K.A.; Methodology I.K.A.; Investigation S.R., I.K.A., S.S. and H.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Appell, J.; DePascale, E.; Lysenko, J.V.; Zabrejko, P.P. New results on Newton–Kantorovich approximations with applications to nonlinear integral equations. Numer. Funct. Anal. Optim. 1997, 18, 1–17.
2. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice Hall: Hoboken, NJ, USA, 1964.
3. Ezquerro, J.A.; Hernández-Verón, M.A. Newton's Method: An Updated Approach of Kantorovich's Theory; Frontiers in Mathematics; Birkhäuser/Springer: Cham, Switzerland, 2017.
4. Kantorovich, L.V.; Akilov, G.P. Functional Analysis; Pergamon Press: Oxford, UK, 1982.
5. Potra, F.A.; Pták, V. Nondiscrete Induction and Iterative Processes; Research Notes in Mathematics; Pitman (Advanced Publishing Program): Boston, MA, USA, 1984; Volume 103.
6. Verma, R. New Trends in Fractional Programming; Nova Science Publisher: New York, NY, USA, 2019.
7. Yamamoto, T. Historical developments in convergence analysis for Newton's and Newton-like methods. J. Comput. Appl. Math. 2000, 124, 1–23.
8. Zhanlav, T.; Chun, C.; Otgondorj, K.H.; Ulziibayar, V. High order iterations for systems of nonlinear equations. Int. J. Comput. Math. 2020, 97, 1704–1724.
9. Sharma, J.R.; Guha, R.K. Simple yet efficient Newton-like method for systems of nonlinear equations. Calcolo 2016, 53, 451–473.
10. Grau-Sánchez, M.; Grau, A.; Noguera, M. Ostrowski type methods for solving systems of nonlinear equations. Appl. Math. Comput. 2011, 218, 2377–2385.
11. Homeier, H.H.H. A modified Newton method with cubic convergence: The multivariate case. J. Comput. Appl. Math. 2004, 169, 161–169.
12. Kou, J.; Wang, X.; Li, Y. Some eighth-order root-finding three-step methods. Commun. Nonlinear Sci. Numer. Simul. 2010, 15, 536–544.
13. Nashed, M.Z.; Chen, X. Convergence of Newton-like methods for singular operator equations using outer inverses. Numer. Math. 1993, 66, 235–257.
14. Wang, X. An Ostrowski-type method with memory using a novel self-accelerating parameter. J. Comput. Appl. Math. 2018, 330, 710–720.
15. Moccari, M.; Lotfi, T. On a two-step optimal Steffensen-type method: Relaxed local and semi-local convergence analysis and dynamical stability. J. Math. Anal. Appl. 2018, 468, 240–269.
16. Sharma, J.R.; Arora, H. Efficient derivative-free numerical methods for solving systems of nonlinear equations. Comput. Appl. Math. 2016, 35, 269–284.
17. Noor, M.A.; Waseem, M. Some iterative methods for solving a system of nonlinear equations. Comput. Math. Appl. 2009, 57, 101–106.
18. Shakhno, S.M. On a two-step iterative process under generalized Lipschitz conditions for first-order divided differences. J. Math. Sci. 2010, 168, 576–584.
19. Argyros, I.K. On the Newton–Kantorovich hypothesis for solving equations. J. Comput. Appl. Math. 2004, 169, 315–332.
20. Argyros, I.K. Unified convergence criteria for iterative Banach space valued methods with applications. Mathematics 2021, 9, 1942.
21. Proinov, P.D. New general convergence theory for iterative processes and its applications to Newton–Kantorovich type theorems. J. Complex. 2010, 26, 3–42.
22. Argyros, I.K.; Hilout, S. On an improved convergence analysis of Newton's scheme. Appl. Math. Comput. 2013, 225, 372–386.
23. Argyros, I.K.; Hilout, S. Weaker conditions for the convergence of Newton's scheme. J. Complex. 2012, 28, 364–387.
Table 1. Sequence (6) for Example 1.

n      1        2        3        4        5        6        7        8
t_n    0.2000   0.2865   0.3272   0.3425   0.3455   0.3456   0.3456   0.3456
Table 2. Sequence (6) for Example 4.

n      1        2        3        4        5        6
t_n    0.1569   0.2154   0.2266   0.2271   0.2271   0.2271
Table 3. Results for x_0 = 0.15 for Example 6.

n    |x_{n+1} - x_n|    |t_{n+1} - t_n|    |u_{n+1} - u_n|
0    1.4607 × 10^-1     1.4607 × 10^-1     1.4607 × 10^-1
1    3.9321 × 10^-3     2.3721 × 10^-2     4.8092 × 10^-2
2    2.5837 × 10^-6     8.7693 × 10^-4     6.6568 × 10^-3
3    1.1126 × 10^-12    1.2039 × 10^-6     1.3262 × 10^-4
4    0                  2.2688 × 10^-12    5.2681 × 10^-8
Table 4. Results for x_0 = 0.2 for Example 6.

n    |x_{n+1} - x_n|    |t_{n+1} - t_n|
0    1.929 × 10^-1      1.929 × 10^-1
1    7.0934 × 10^-3     4.9769 × 10^-2
2    8.4258 × 10^-6     6.4204 × 10^-3
3    1.1832 × 10^-11    1.1263 × 10^-4
4    0                  3.4690 × 10^-8