Article

Generalized Three-Step Numerical Methods for Solving Equations in Banach Spaces

by Michael I. Argyros 1, Ioannis K. Argyros 2,*, Samundra Regmi 3,* and Santhosh George 4

1 Department of Computer Science, University of Oklahoma, Norman, OK 73019, USA
2 Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
3 Department of Mathematics, University of Houston, Houston, TX 77204, USA
4 Department of Mathematical and Computational Sciences, National Institute of Technology Karnataka, Mangaluru 575 025, India
* Authors to whom correspondence should be addressed.
Submission received: 8 July 2022 / Revised: 24 July 2022 / Accepted: 26 July 2022 / Published: 27 July 2022
(This article belongs to the Special Issue Numerical Methods for Solving Nonlinear Equations)

Abstract: In this article, we propose a new methodology to construct and study generalized three-step numerical methods for solving nonlinear equations in Banach spaces. These methods are very general and include other methods already in the literature as special cases. The convergence analysis of the specialized methods has been given by assuming the existence of high-order derivatives that do not appear in these methods. Therefore, these constraints limit the applicability of the methods to equations involving operators that are sufficiently many times differentiable, although the methods may converge even when these derivatives do not exist. Moreover, the convergence is shown under a different set of conditions for each method. Motivated by optimization considerations and the above concerns, we present a unified convergence analysis for the generalized numerical methods relying on conditions involving only the operators appearing in the method. This is the novelty of the article. Special cases and examples are presented to conclude this article.
MSC: 41A25; 47H17; 49M15; 65G99; 65J15

1. Introduction

A plethora of applications from diverse disciplines of computational sciences are converted, using mathematical modeling, to nonlinear equations of the form

F(x) = 0 (1)

[1,2,3,4]. The nonlinear operator F is defined on an open and convex subset Ω of a Banach space X, with values in X. A solution of the equation is denoted by x*. Numerical methods are mainly used to find x*, since the analytic form of the solution x* can be obtained only in special cases.
Researchers, as well as practitioners, have proposed numerous numerical methods under a different set of convergence conditions using high-order derivatives, which are not present in the methods.
Let us consider an example.
Example 1.
Define the function F on X = [−0.5, 1.5] by

F(t) = t³ ln t² + t⁵ − t⁴ for t ≠ 0, and F(0) = 0.

Clearly, the point t* = 1 solves the equation F(t) = 0. It follows that

F′′′(t) = 6 ln t² + 60 t² − 24 t + 22.

Then, the function F does not have a bounded third derivative on X, since the term ln t² is unbounded as t → 0.
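To see this numerically, the sketch below evaluates the third derivative above near t = 0; the ln t² term drives it to −∞ (an illustration only, not part of the analysis):

```python
import math

def third_derivative(t):
    # F'''(t) = 6 ln t^2 + 60 t^2 - 24 t + 22 for t != 0
    return 6.0 * math.log(t * t) + 60.0 * t * t - 24.0 * t + 22.0

# the logarithmic term dominates near t = 0, so F''' is unbounded on X
for t in [1e-1, 1e-4, 1e-8, 1e-16]:
    print(f"t = {t:.0e}:  F'''(t) = {third_derivative(t):.2f}")
```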
Hence, many high-convergence-order methods cannot be applied to show convergence, although they may converge. In order to address these concerns, we propose a unified approach for dealing with the convergence of these numerical methods that takes into account only the operators appearing in them. Hence, the usage of these methods becomes possible under weaker conditions.
Let x₀ ∈ Ω be a starting point. Define the generalized numerical method, for n = 0, 1, 2, …, by

y_n = a_n = a(x_n)
z_n = b_n = b(x_n, y_n) (2)
x_{n+1} = c_n = c(x_n, y_n, z_n),

where a : Ω → X, b : Ω × Ω → X and c : Ω × Ω × Ω → X are given operators chosen so that lim_{n→∞} x_n = x*.
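In code, (2) is simply a fixed-point loop over three user-supplied maps. The Python sketch below is an illustration with hypothetical scalar choices of a, b and c (a Newton predictor followed by two frozen-derivative corrections); it is not a method from the paper:

```python
def three_step_method(a, b, c, x0, tol=1e-12, max_iter=50):
    """Generic method (2): y = a(x), z = b(x, y), x_next = c(x, y, z)."""
    x = x0
    for _ in range(max_iter):
        y = a(x)
        z = b(x, y)
        x_next = c(x, y, z)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# illustrative scalar choices: solve F(x) = x^2 - 2 with F'(x) = 2x
F = lambda x: x * x - 2.0
dF = lambda x: 2.0 * x
a = lambda x: x - F(x) / dF(x)            # Newton predictor
b = lambda x, y: y - F(y) / dF(x)         # correction with frozen derivative
c = lambda x, y, z: z - F(z) / dF(x)      # second frozen-derivative correction

root = three_step_method(a, b, c, x0=1.5)
```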
The specialization of (2) is

y_n = x_n + α_n F(x_n)
z_n = u_n + β_n F(x_n) + γ_n F(y_n) (3)
x_{n+1} = v_n + δ_n F(x_n) + ϵ_n F(y_n) + θ_n F(z_n),

where u_n = x_n or u_n = y_n; v_n = x_n or v_n = y_n or v_n = z_n; and α_n, β_n, γ_n, δ_n, ϵ_n, θ_n are linear operators depending on x_n, (x_n, y_n) and (x_n, y_n, z_n), respectively. By choosing some of these linear operators equal to the zero operator O in (3), we obtain the methods studied in [5]. Moreover, if X = R^k, then we obtain the methods studied in [6,7]. In particular, the methods in [5] are of the special form
y_n = x_n − O_{1,n}^{−1} F(x_n)
z_n = y_n − O_{2,n}^{−1} F(y_n) (4)
x_{n+1} = z_n − O_{3,n}^{−1} F(z_n),

y_n = x_n − s F′(x_n)^{−1} F(x_n)
z_n = x_n − O_{4,n} F(x_n) (5)
x_{n+1} = z_n − O_{5,n} F(z_n),

whereas the methods in [7,8] are of the form

y_n = x_n − F′(x_n)^{−1} F(x_n)
z_n = y_n − O_{6,n} F′(x_n)^{−1} F(y_n) (6)
x_{n+1} = z_n − O_{7,n} F′(x_n)^{−1} F(z_n),

where s ∈ R is a given parameter, and the O_{k,n}, k = 1, 2, …, 7, are linear operators. In particular, these operators must have a special form to obtain the fourth, seventh or eighth order of convergence.
Further specifications of the operators “O” lead to well-studied methods, a few of which are listed below (other choices can be found in [6,7,9,10]):
  • Newton method (second order) [1,4,11,12]:
    y_n = x_n − F′(x_n)^{−1} F(x_n).
  • Jarratt method (second order) [13]:
    y_n = x_n − (2/3) F′(x_n)^{−1} F(x_n).
  • Traub-type method (fifth order) [14]:
    y_n = x_n − F′(x_n)^{−1} F(x_n)
    z_n = x_n − F′(x_n)^{−1} F(y_n)
    x_{n+1} = x_n − F′(x_n)^{−1} F(z_n).
  • Homeier method (third order) [15]:
    y_n = x_n − (1/2) F′(x_n)^{−1} F(x_n)
    x_{n+1} = y_n − F′(x_n)^{−1} F(y_n).
  • Cordero–Torregrosa method (third order) [2]:
    y_n = x_n − F′(x_n)^{−1} F(x_n)
    x_{n+1} = x_n − 6[F′(x_n) + 4F′((x_n + y_n)/2) + F′(y_n)]^{−1} F(x_n)
    or
    y_n = x_n − F′(x_n)^{−1} F(x_n)
    x_{n+1} = x_n − 2[2F′((3x_n + y_n)/4) − F′((x_n + y_n)/2) + 2F′((x_n + 3y_n)/4)]^{−1} F(x_n).
  • Noor–Waseem method (third order) [3]:
    y_n = x_n − F′(x_n)^{−1} F(x_n)
    x_{n+1} = x_n − 4[3F′((2x_n + y_n)/3) + F′(y_n)]^{−1} F(x_n).
  • Xiao–Yin method (third order) [16]:
    y_n = x_n − F′(x_n)^{−1} F(x_n)
    x_{n+1} = x_n − (2/3)[(3F′(y_n) − F′(x_n))^{−1} + F′(x_n)^{−1}] F(x_n).
  • Cordero–Torregrosa method (fifth order) [2]:
    y_n = x_n − (2/3) F′(x_n)^{−1} F(x_n)
    z_n = x_n − (1/2)(3F′(y_n) − F′(x_n))^{−1}(3F′(y_n) + F′(x_n)) F′(x_n)^{−1} F(x_n)
    x_{n+1} = z_n − ((1/2)F′(y_n) + (1/2)F′(x_n))^{−1} F(z_n)
    or
    y_n = x_n − F′(x_n)^{−1} F(x_n)
    z_n = x_n − 2(F′(y_n) + F′(x_n))^{−1} F(x_n)
    x_{n+1} = z_n − F′(y_n)^{−1} F(z_n).
  • Sharma–Arora method (fifth order) [17,18]:
    y_n = x_n − F′(x_n)^{−1} F(x_n)
    x_{n+1} = x_n − (2F′(y_n)^{−1} − F′(x_n)^{−1}) F(x_n).
  • Xiao–Yin method (fifth order) [16]:
    y_n = x_n − (2/3) F′(x_n)^{−1} F(x_n)
    z_n = x_n − (1/4)(3F′(y_n)^{−1} + F′(x_n)^{−1}) F(x_n)
    x_{n+1} = x_n − (1/3)(3F′(y_n) − F′(x_n))^{−1} F(x_n).
  • Traub-type method (second order) [14]:
    y_n = x_n − [w_n, x_n; F]^{−1} F(x_n), w_n = x_n + d F(x_n),
    where [·, ·; F] : Ω × Ω → L(X, X) is a divided difference of order one.
  • Moccari–Lotfi method (fourth order) [19]:
    y_n = x_n − [x_n, w_n; F]^{−1} F(x_n)
    x_{n+1} = y_n − ([y_n, w_n; F] + [y_n, x_n; F] − [x_n, w_n; F])^{−1} F(y_n).
  • Wang–Zhang method (seventh order) [8,16,20]:
    y_n = x_n − [w_n, x_n; F]^{−1} F(x_n)
    z_n = M₈(x_n, y_n)
    x_{n+1} = z_n − ([z_n, x_n; F] + [z_n, y_n; F] − [y_n, x_n; F])^{−1} F(z_n),
    where M₈ is any fourth-order Steffensen-type iteration method.
  • Sharma–Arora method (seventh order) [17]:
    y_n = x_n − [w_n, x_n; F]^{−1} F(x_n)
    z_n = y_n − (3I − [w_n, x_n; F]^{−1}([y_n, x_n; F] + [y_n, w_n; F]))[w_n, x_n; F]^{−1} F(y_n)
    x_{n+1} = z_n − [z_n, y_n; F]^{−1}([w_n, x_n; F] + [y_n, x_n; F] − [z_n, x_n; F])[w_n, x_n; F]^{−1} F(z_n).
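As a concrete instance of the list above, here is a scalar Python transcription of the Noor–Waseem step (in one dimension, the operator inverses become divisions; this is an illustration, not the authors' implementation):

```python
def noor_waseem_step(F, dF, x):
    # y_n = x_n - F'(x_n)^{-1} F(x_n)
    y = x - F(x) / dF(x)
    # x_{n+1} = x_n - 4 [3 F'((2 x_n + y_n)/3) + F'(y_n)]^{-1} F(x_n)
    return x - 4.0 * F(x) / (3.0 * dF((2.0 * x + y) / 3.0) + dF(y))

# example: solve x^3 - 2 = 0
F = lambda x: x ** 3 - 2.0
dF = lambda x: 3.0 * x ** 2

x = 1.5
for _ in range(10):
    x = noor_waseem_step(F, dF, x)
```

For a linear operator F the bracket collapses to 4F′, so the step reduces to Newton's; this exactness of the underlying quadrature is what the convergence analyses exploit.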
The local, as well as the semi-local, convergence of methods (4) and (5) was presented in [17] using hypotheses relating only to the operators appearing in these methods. However, the local convergence analysis of method (6) requires derivatives or divided differences of order higher than two, which do not appear in method (6). These high-order derivatives restrict the applicability of method (6) to equations whose operator F has high-order derivatives, although method (6) may converge (see Example 1).
Similar restrictions exist for the convergence of the aforementioned methods of order three or above.
It is also worth noticing that the fifth-convergence-order method by Sharma [18]

y_n = x_n − F′(x_n)^{−1} F(x_n)
z_n = y_n − 5 F′(x_n)^{−1} F(y_n)
x_{n+1} = y_n − (1/5)[9 F′(x_n)^{−1} F(y_n) + F′(x_n)^{−1} F(z_n)]

cannot be handled with the analyses given previously [5,6,7] for method (4), method (5), or method (6).
Based on all of the above, clearly, it is important to study the convergence of method (2) and its specialization method (3) with the approach employed for method (4) or (5). This way, the resulting unified convergence criteria can apply to their specialized methods listed or not listed previously. Hence, this is the motivation as well as the novelty of the article.
There are two important types of convergence: the semi-local and the local. The semi-local uses information involving the initial point to provide criteria, assuring the convergence of the numerical method, while the local one is based on the information about the solution to find the radii of the convergence balls.
The local convergence results are vital, although the solution is unknown in general since the convergence order of the numerical method can be found. This kind of result also demonstrates the degree of difficulty in selecting starting points. There are cases when the radius of convergence of the numerical method can be determined without the knowledge of the solution.
As an example, let X = R. Suppose that the function F satisfies an autonomous differential equation [5,21] of the form

H(F(t)) = F′(t),

where H is a continuous function. Notice that H(F(t*)) = F′(t*), i.e., F′(t*) = H(0). In the case of F(t) = e^t − 1, we can choose H(t) = t + 1 (see also the numerical section).
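A small Python illustration of this: with F(t) = e^t − 1 and H(t) = t + 1, a Newton-type iteration can evaluate the derivative through the residual alone, F′(t) = H(F(t)), without knowing the solution t* = 0:

```python
import math

F = lambda t: math.exp(t) - 1.0
H = lambda s: s + 1.0            # H(F(t)) = F'(t), since F'(t) = e^t = F(t) + 1

t = 0.5
for _ in range(25):
    t = t - F(t) / H(F(t))       # Newton step using only residual values
```

In particular, F′(t*) = H(0) = 1 is available for the local analysis even though t* is unknown.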
Moreover, the local results can apply to projection numerical methods, such as Arnoldi’s, the generalized minimum residual numerical method (GMRES), the generalized conjugate numerical method (GCS) for combined Newton/finite projection numerical methods, and in relation to the mesh independence principle to develop the cheapest and most efficient mesh refinement techniques [1,5,11,21].
In this article, we introduce a majorant sequence and use our idea of recurrent functions to extend the applicability of the numerical method (2). Our analysis includes error bounds and results on the uniqueness of x * based on computable Lipschitz constants not given before in [5,13,21,22,23,24] and in other similar studies using the Taylor series. This idea is very general. Hence, it applies also to other numerical methods [10,14,22,25].
The convergence analysis of method (2) and method (3) is given in Section 2. Moreover, special choices of the operators appearing in the method are discussed in Section 3 and Section 4. Concluding remarks, open problems, and future work complete this article.

2. Convergence Analysis of Method

The local convergence analysis is followed by the semi-local one. Let S = [0, ∞) and S₀ = [0, ρ₀) for some ρ₀ > 0. Let h₁ : S₀ → R, h₂ : S₀ × S₀ → R and h₃ : S₀ × S₀ × S₀ → R be continuous functions that are nondecreasing in each variable.
Suppose that the equations

h_i(t) − 1 = 0, i = 1, 2, 3 (24)

have smallest solutions ρ_i ∈ S ∖ {0}. The parameter ρ defined by

ρ = min{ρ_i} (25)

shall be shown to be a radius of convergence for method (2). Let S₁ = [0, ρ). It follows from the definition of the radius ρ that for all t ∈ S₁

0 ≤ h_i(t) < 1. (26)
The notation U ( x , ς ) denotes an open ball with center x X and of radius ς > 0 . By U [ x , ς ] , we denote the closure of U ( x , ς ) .
The following conditions are used in the local convergence analysis of the method (2).
Suppose the following:
(H1)
Equation F ( x ) = 0 has a solution x * Ω .
(H2)
‖a(x) − x*‖ ≤ h₁(‖x − x*‖) ‖x − x*‖,
‖b(x, y) − x*‖ ≤ h₂(‖x − x*‖, ‖y − x*‖) ‖x − x*‖
and
‖c(x, y, z) − x*‖ ≤ h₃(‖x − x*‖, ‖y − x*‖, ‖z − x*‖) ‖x − x*‖
for all x, y, z ∈ Ω₀ = Ω ∩ U(x*, ρ₀).
(H3)
Equations (24) have smallest solutions ρ_i ∈ S₀ ∖ {0};
(H4)
U[x*, ρ] ⊂ Ω, where the radius ρ is given by Formula (25).
Next, the main local convergence analysis is presented for method (2).
Theorem 1.
Suppose that the conditions (H1)–(H4) hold and x₀ ∈ U(x*, ρ) ∖ {x*}. Then, the sequence {x_n} generated by method (2) is well defined and converges to x*. Moreover, the following estimates hold for all n = 0, 1, 2, …:

‖y_n − x*‖ ≤ h₁(‖x_n − x*‖) ‖x_n − x*‖ ≤ ‖x_n − x*‖ < ρ, (27)
‖z_n − x*‖ ≤ h₂(‖x_n − x*‖, ‖y_n − x*‖) ‖x_n − x*‖ ≤ ‖x_n − x*‖ (28)
and
‖x_{n+1} − x*‖ ≤ h₃(‖x_n − x*‖, ‖y_n − x*‖, ‖z_n − x*‖) ‖x_n − x*‖ ≤ ‖x_n − x*‖. (29)
Proof. 
Let x₀ ∈ U(x*, ρ) ∖ {x*}. Then, it follows from the condition (H1), the definition of ρ, (26) (for i = 1) and the first substep of method (2) for n = 0 that

‖y₀ − x*‖ ≤ h₁(‖x₀ − x*‖) ‖x₀ − x*‖ ≤ ‖x₀ − x*‖ < ρ,

showing estimate (27) for n = 0 and that the iterate y₀ ∈ U(x*, ρ). Similarly,

‖z₀ − x*‖ ≤ h₂(‖x₀ − x*‖, ‖y₀ − x*‖) ‖x₀ − x*‖ ≤ h₂(‖x₀ − x*‖, ‖x₀ − x*‖) ‖x₀ − x*‖ ≤ ‖x₀ − x*‖

and

‖x₁ − x*‖ ≤ h₃(‖x₀ − x*‖, ‖y₀ − x*‖, ‖z₀ − x*‖) ‖x₀ − x*‖ ≤ h₃(‖x₀ − x*‖, ‖x₀ − x*‖, ‖x₀ − x*‖) ‖x₀ − x*‖ ≤ ‖x₀ − x*‖,

showing estimates (28) and (29), respectively, and that the iterates z₀, x₁ ∈ U(x*, ρ). By simply replacing x₀, y₀, z₀, x₁ by x_k, y_k, z_k, x_{k+1} in the preceding calculations, the induction for estimates (27)–(29) is completed. Then, from the estimate

‖x_{k+1} − x*‖ ≤ d ‖x_k − x*‖ < ρ,

where

d = h₃(‖x₀ − x*‖, ‖x₀ − x*‖, ‖x₀ − x*‖) ∈ [0, 1),

we conclude x_{k+1} ∈ U[x*, ρ] and lim_{k→∞} x_k = x*. □
Remark 1.
It follows from the proof of Theorem 1 that y, z can be chosen in particular as y_n = a(x_n) and z_n = b(x_n, y_n). Thus, the condition (H2) need only hold for x, a(x), b(x, y) ∈ Ω₀ and not for all x, y, z ∈ Ω₀. Clearly, in this case, the resulting functions h̄_i are at least as tight as the functions h_i, leading to a radius of convergence ρ̄ at least as large as ρ (see the numerical section).
Concerning the semi-local convergence of method (2), let us introduce scalar sequences {t_n}, {s_n} and {u_n} defined for t₀ = 0, s₀ = η ≥ 0, with the rest of the iterates depending on the operators a, b, c and F (see how in the next section). These sequences shall be shown to be majorizing for method (2). However, first, a convergence result for these sequences is needed.
Lemma 1.
Suppose that for all n = 0, 1, 2, …

t_n ≤ s_n ≤ u_n ≤ t_{n+1} (33)

and

t_n ≤ λ (34)

for some λ ≥ 0. Then, the sequence {t_n} is convergent to its unique least upper bound t* ∈ [0, λ].
Proof. 
It follows from conditions (33) and (34) that the sequence {t_n} is nondecreasing and bounded from above by λ, and as such, it converges to t*. □
Theorem 2.
Suppose the following:
(H5) Iterates {x_n}, {y_n}, {z_n} generated by method (2) exist, belong in U(x₀, t*) and satisfy the conditions of Lemma 1 for all n = 0, 1, 2, …;
(H6) ‖a(x_n) − x_n‖ ≤ s_n − t_n,
‖b(x_n, y_n) − y_n‖ ≤ u_n − s_n
and
‖c(x_n, y_n, z_n) − z_n‖ ≤ t_{n+1} − u_n
for all n = 0, 1, 2, …; and
(H7) U[x₀, t*] ⊂ Ω.
Then, there exists x* ∈ U[x₀, t*] such that lim_{n→∞} x_n = x*.
Proof. 
It follows by condition (H5) that the sequence {t_n} is Cauchy, being convergent. Thus, by condition (H6), the sequence {x_n} is also Cauchy in the Banach space X, and as such, it converges to some x* ∈ U[x₀, t*] (since U[x₀, t*] is a closed set). □
Remark 2.
(i) Additional conditions are needed to show F ( x * ) = 0 . The same is true for the results on the uniqueness of the solution.
(ii) The limit point t* is not given in closed form. So, it can be replaced by λ in Theorem 2.

3. Special Cases I

The iterates of method (3) are assumed to exist, and the operator F is assumed to have a divided difference of order one.
Local Convergence
Three possibilities are presented for the local case, based on different estimates for the determination of the functions h_i. It follows by method (3) that
(P1)

y_n − x* = x_n − x* + α_n F(x_n) = (I + α_n [x_n, x*; F])(x_n − x*),

z_n − x* = (I + γ_n [y_n, x*; F])(y_n − x*) + β_n [x_n, x*; F](x_n − x*)
         = [(I + γ_n [y_n, x*; F])(I + α_n [x_n, x*; F]) + β_n [x_n, x*; F]](x_n − x*)

and

x_{n+1} − x* = (I + θ_n [z_n, x*; F])(z_n − x*) + δ_n [x_n, x*; F](x_n − x*) + ϵ_n [y_n, x*; F](y_n − x*)
             = [(I + θ_n [z_n, x*; F])((I + γ_n [y_n, x*; F])(I + α_n [x_n, x*; F]) + β_n [x_n, x*; F]) + δ_n [x_n, x*; F] + ϵ_n [y_n, x*; F](I + α_n [x_n, x*; F])](x_n − x*).

Hence, the functions h_i are selected to satisfy, for all x_n, y_n, z_n ∈ Ω,

‖I + α_n [x_n, x*; F]‖ ≤ h₁(‖x_n − x*‖),
‖(I + γ_n [y_n, x*; F])(I + α_n [x_n, x*; F]) + β_n [x_n, x*; F]‖ ≤ h₂(‖x_n − x*‖, ‖y_n − x*‖)
and
‖(I + θ_n [z_n, x*; F])((I + γ_n [y_n, x*; F])(I + α_n [x_n, x*; F]) + β_n [x_n, x*; F]) + δ_n [x_n, x*; F] + ϵ_n [y_n, x*; F](I + α_n [x_n, x*; F])‖ ≤ h₃(‖x_n − x*‖, ‖y_n − x*‖, ‖z_n − x*‖).
A practical non-discrete choice for the function h₁ is given by

‖I + α(x)[x, x*; F]‖ ≤ h₁(‖x − x*‖) for all x ∈ Ω.

Another choice is given by

h₁(t) = sup_{x ∈ Ω, ‖x − x*‖ ≤ t} ‖I + α(x)[x, x*; F]‖.

The choices of the functions h₂ and h₃ can follow similarly.
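In the scalar case, the sup-based choice can be approximated by sampling. The sketch below (Python) is purely illustrative: it takes α(x) = −F′(x)^{−1}, so that [x, x*; F] reduces to the scalar divided difference (F(x) − F(x*))/(x − x*):

```python
def h1_estimate(F, dF, x_star, t, samples=2001):
    # h1(t) ~ sup over 0 < |x - x*| <= t of | 1 - F'(x)^{-1} [x, x*; F] |
    best = 0.0
    for k in range(samples):
        x = x_star - t + 2.0 * t * k / (samples - 1)
        if x == x_star:
            continue
        dd = (F(x) - F(x_star)) / (x - x_star)   # divided difference of order one
        best = max(best, abs(1.0 - dd / dF(x)))
    return best

# example: F(x) = x^2, x* = 1; analytically h1(t) = t / (2 (1 - t)) for t < 1
est = h1_estimate(lambda x: x * x, lambda x: 2.0 * x, 1.0, 0.5)
```

At t = 0.5 the analytic value is 0.5, attained at the left endpoint of the sampling interval.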
(P2)
Let M^i : Ω → L(X, X) be given operator-valued mappings. By M_n^i we denote M^i(x_n), n = 0, 1, 2, ….
Then, it follows from method (3) that

y_n − x* = x_n − x* − M_n^1 F(x_n) + (α_n + M_n^1) F(x_n)
         = ((I − M_n^1 [x_n, x*; F]) + (α_n + M_n^1)[x_n, x*; F])(x_n − x*),
z_n − x* = ((I − M_n^2 [y_n, x*; F]) + (γ_n + M_n^2)[y_n, x*; F])(y_n − x*)

and

x_{n+1} − x* = ((I − M_n^3 [z_n, x*; F]) + (θ_n + M_n^3)[z_n, x*; F])(z_n − x*).

Thus, the functions h_i must satisfy

‖I + α_n‖ ≤ h₁(‖x_n − x*‖),
‖(I + γ_n)(I + α_n)‖ ≤ h₂(‖x_n − x*‖, ‖y_n − x*‖)
and
‖(I + θ_n)(I + γ_n)(I + α_n)‖ ≤ h₃(‖x_n − x*‖, ‖y_n − x*‖, ‖z_n − x*‖).

Clearly, the function h₁ can be chosen again as in case (P1). The functions h₂ and h₃ can be defined similarly.
(P3)
Assume that there exists a function φ₀ : [0, ∞) → R, continuous and non-decreasing, such that

‖F′(x*)^{−1}(F′(x) − F′(x*))‖ ≤ φ₀(‖x − x*‖) for all x ∈ Ω.

Then, we can write

F(x_n) = F(x_n) − F(x*) = ∫₀¹ F′(x* + θ(x_n − x*)) dθ (x_n − x*),

leading to

‖F′(x*)^{−1} F(x_n)‖ ≤ (1 + ∫₀¹ φ₀(θ ‖x_n − x*‖) dθ) ‖x_n − x*‖.

Then, by method (3) we obtain, in turn, that

y_n − x* = [I + α_n F′(x*) F′(x*)^{−1} (∫₀¹ F′(x* + θ(x_n − x*)) dθ − F′(x*) + F′(x*))](x_n − x*),

so the function h₁ must satisfy

‖I + α_n ∫₀¹ F′(x* + θ(x_n − x*)) dθ‖ ≤ h₁(‖x_n − x*‖)

or

h₁(t) = sup_{‖x − x*‖ ≤ t, x ∈ Ω} ‖I + α(x) ∫₀¹ F′(x* + θ(x − x*)) dθ‖

or

‖I + α_n F′(x*)‖ (1 + ∫₀¹ φ₀(θ ‖x_n − x*‖) dθ) ≤ h₁(‖x_n − x*‖)

or

h₁(t) = sup_{‖x − x*‖ ≤ t, x ∈ Ω} ‖I + α(x) F′(x*)‖ (1 + ∫₀¹ φ₀(θ t) dθ).

Similarly, for the other two steps, we obtain with the last choice

‖z_n − x*‖ ≤ ‖I + γ_n F′(x*)‖ (1 + ∫₀¹ φ₀(θ ‖y_n − x*‖) dθ) ‖y_n − x*‖ + ‖β_n F′(x*)‖ (1 + ∫₀¹ φ₀(θ ‖x_n − x*‖) dθ) ‖x_n − x*‖

and

‖x_{n+1} − x*‖ ≤ ‖I + θ_n F′(x*)‖ (1 + ∫₀¹ φ₀(θ ‖z_n − x*‖) dθ) ‖z_n − x*‖ + ‖δ_n F′(x*)‖ (1 + ∫₀¹ φ₀(θ ‖x_n − x*‖) dθ) ‖x_n − x*‖ + ‖ϵ_n F′(x*)‖ (1 + ∫₀¹ φ₀(θ ‖y_n − x*‖) dθ) ‖y_n − x*‖.

Thus, the function h₂ satisfies

‖I + γ_n F′(x*)‖ (1 + ∫₀¹ φ₀(θ ‖y_n − x*‖) dθ) ‖y_n − x*‖ + ‖β_n F′(x*)‖ (1 + ∫₀¹ φ₀(θ ‖x_n − x*‖) dθ) ≤ h₂(‖x_n − x*‖, ‖y_n − x*‖)

or

h₂(s, t) = sup_{‖x − x*‖ ≤ s, ‖y − x*‖ ≤ t} [‖I + γ(x) F′(x*)‖ (1 + ∫₀¹ φ₀(θ t) dθ) t + ‖β(x) F′(x*)‖ (1 + ∫₀¹ φ₀(θ s) dθ)].

Finally, concerning the choice of the function h₃, by the third substep of method (3),

‖x_{n+1} − x*‖ ≤ ‖I + θ_n F′(x*)‖ (1 + ∫₀¹ φ₀(θ ‖z_n − x*‖) dθ) ‖z_n − x*‖ + ‖δ_n F′(x*)‖ (1 + ∫₀¹ φ₀(θ ‖x_n − x*‖) dθ) ‖x_n − x*‖ + ‖ϵ_n F′(x*)‖ (1 + ∫₀¹ φ₀(θ ‖y_n − x*‖) dθ) ‖y_n − x*‖,

so the function h₃ must satisfy

‖I + θ_n F′(x*)‖ (1 + ∫₀¹ φ₀(θ ‖z_n − x*‖) dθ) h₂(‖x_n − x*‖, ‖y_n − x*‖) + ‖δ_n F′(x*)‖ (1 + ∫₀¹ φ₀(θ ‖x_n − x*‖) dθ) + ‖ϵ_n F′(x*)‖ (1 + ∫₀¹ φ₀(θ ‖y_n − x*‖) dθ) h₁(‖x_n − x*‖) ≤ h₃(‖x_n − x*‖, ‖y_n − x*‖, ‖z_n − x*‖)

or

h₃(s, t, u) = sup_{‖x − x*‖ ≤ s, ‖y − x*‖ ≤ t, ‖z − x*‖ ≤ u} μ(x, s, t, u),

where

μ(x, s, t, u) = ‖I + θ(x) F′(x*)‖ (1 + ∫₀¹ φ₀(θ u) dθ) h₂(s, t) + ‖δ(x) F′(x*)‖ (1 + ∫₀¹ φ₀(θ s) dθ) + ‖ϵ(x) F′(x*)‖ (1 + ∫₀¹ φ₀(θ t) dθ) h₁(s).
The functions h 2 and h 3 can also be defined with the other two choices as those of function h 1 given previously.
Semi-Local Convergence
Concerning this case, instead of the conditions of Theorem 2 (see (H6)), we can have, for method (3),

‖α_n F(x_n)‖ ≤ s_n − t_n,
‖β_n F(x_n) + γ_n F(y_n)‖ ≤ u_n − s_n
and
‖δ_n F(x_n) + ϵ_n F(y_n) + θ_n F(z_n)‖ ≤ t_{n+1} − u_n for all n = 0, 1, 2, ….

Notice that under these choices,

‖y_n − x_n‖ ≤ s_n − t_n,
‖z_n − y_n‖ ≤ u_n − s_n
and
‖x_{n+1} − z_n‖ ≤ t_{n+1} − u_n.

Then, the conclusions of Theorem 2 hold for method (3). Even more specialized choices of the linear operators appearing in these methods, as well as of the functions h_i, can be found in the Introduction, in the next section, or in [1,2,11,21] and the references therein.

4. Special Cases II

The section contains even more specialized cases of method (2) and method (3). In particular, we study the local and semi-local convergence first of method (22) and second of method (20). Notice that to obtain method (22), we set in method (3)

α_n = −F′(x_n)^{−1}, u_n = y_n, β_n = O, γ_n = −5 F′(x_n)^{−1}, v_n = y_n, δ_n = O, ϵ_n = −(9/5) F′(x_n)^{−1} and θ_n = −(1/5) F′(x_n)^{−1}.

Moreover, for method (20), we let

α_n = −[x_n, w_n; F]^{−1}, u_n = y_n, β_n = O, z_n = x_{n+1}, γ_n = −([y_n, w_n; F] + [y_n, x_n; F] − [x_n, w_n; F])^{−1}, δ_n = ϵ_n = θ_n = O

and v_n = z_n.

5. Local Convergence of Method

The local convergence analysis of method (23) utilizes some scalar functions and parameters. Let S = [0, ∞).
Suppose the following:
(i)
There exists a function w₀ : S → R, continuous and non-decreasing, such that the equation

w₀(t) − 1 = 0

has a smallest solution ρ₀ ∈ S ∖ {0}. Let S₀ = [0, ρ₀).
(ii)
There exists a function w : S₀ → R, continuous and non-decreasing, such that the equation

h₁(t) − 1 = 0

has a smallest solution ρ₁ ∈ S₀ ∖ {0}, where the function h₁ : S₀ → R is defined by

h₁(t) = [∫₀¹ w((1 − θ) t) dθ] / (1 − w₀(t)).

(iii)
The equation

w₀(h₁(t) t) − 1 = 0

has a smallest solution ρ̄₁ ∈ S₀ ∖ {0}. Let

ρ̿₀ = min{ρ₀, ρ̄₁}

and S̃₁ = [0, ρ̿₀).
(iv)
The equation

h₂(t) − 1 = 0

has a smallest solution ρ₂ ∈ S̃₁ ∖ {0}, where the function h₂ : S̃₁ → R is defined as

h₂(t) = [∫₀¹ w((1 − θ) h₁(t) t) dθ] / (1 − w₀(h₁(t) t)) + [w((1 + h₁(t)) t)(1 + ∫₀¹ w₀(θ h₁(t) t) dθ)] / [(1 − w₀(t))(1 − w₀(h₁(t) t))] + [4 (1 + ∫₀¹ w₀(θ h₁(t) t) dθ) / (1 − w₀(t))] h₁(t).
(v)
The equation

h₃(t) − 1 = 0

has a smallest solution ρ₃ ∈ S̃₁ ∖ {0}, where the function h₃ : S̃₁ → R is defined by

h₃(t) = h₁(t) + (1/5) [9 (1 + ∫₀¹ w₀(θ h₁(t) t) dθ) h₁(t) + (1 + ∫₀¹ w₀(θ h₂(t) t) dθ) h₂(t)] / (1 − w₀(t)).
The parameter ρ defined by

ρ = min{ρ_j}, j = 1, 2, 3, (37)

is proven to be a radius of convergence for method (23) in Theorem 3. Let S₁ = [0, ρ). Then, it follows by these definitions that for all t ∈ S₁

0 ≤ w₀(t) < 1,
0 ≤ w₀(h₁(t) t) < 1
and
0 ≤ h_i(t) < 1. (40)
The conditions required are as follows:
(C1) The equation F(x) = 0 has a simple solution x* ∈ Ω.
(C2) ‖F′(x*)^{−1}(F′(x) − F′(x*))‖ ≤ w₀(‖x − x*‖) for all x ∈ Ω.
Set Ω₁ = U(x*, ρ₀) ∩ Ω.
(C3) ‖F′(x*)^{−1}(F′(y) − F′(x))‖ ≤ w(‖y − x‖) for all x, y ∈ Ω₁
and
(C4) U[x*, ρ] ⊂ Ω.
Next, the main local convergence result follows for method (23).
Theorem 3.
Suppose that conditions (C1)–(C4) hold and x₀ ∈ U(x*, ρ) ∖ {x*}. Then, the sequence {x_n} generated by method (23) is well defined in U(x*, ρ), remains in U(x*, ρ) for all n = 0, 1, 2, … and is convergent to x*. Moreover, the following assertions hold:

‖y_n − x*‖ ≤ h₁(‖x_n − x*‖) ‖x_n − x*‖ ≤ ‖x_n − x*‖ < ρ, (41)
‖z_n − x*‖ ≤ h₂(‖x_n − x*‖) ‖x_n − x*‖ ≤ ‖x_n − x*‖ (42)
and
‖x_{n+1} − x*‖ ≤ h₃(‖x_n − x*‖) ‖x_n − x*‖ ≤ ‖x_n − x*‖, (43)

where the functions h_i are defined previously and the radius ρ is given by Formula (37).
Proof. 
Let u ∈ U(x*, ρ) ∖ {x*}. By using conditions (C1), (C2) and (37), we have that

‖F′(x*)^{−1}(F′(u) − F′(x*))‖ ≤ w₀(‖u − x*‖) ≤ w₀(ρ) < 1. (44)

It follows by (44) and the Banach lemma on invertible operators [11,15] that F′(u)^{−1} ∈ L(X, X) and

‖F′(u)^{−1} F′(x*)‖ ≤ 1 / (1 − w₀(‖u − x*‖)). (45)

If u = x₀, then the iterate y₀ is well defined by the first substep of method (23), and we can write

y₀ − x* = x₀ − x* − F′(x₀)^{−1} F(x₀) = −F′(x₀)^{−1} ∫₀¹ (F′(x* + θ(x₀ − x*)) − F′(x₀)) dθ (x₀ − x*). (46)

In view of (C1)–(C3), (45) (for u = x₀), (40) (for i = 1) and (46), we obtain, in turn, that

‖y₀ − x*‖ ≤ [∫₀¹ w((1 − θ) ‖x₀ − x*‖) dθ ‖x₀ − x*‖] / (1 − w₀(‖x₀ − x*‖)) ≤ h₁(‖x₀ − x*‖) ‖x₀ − x*‖ ≤ ‖x₀ − x*‖ < ρ. (47)

Thus, the iterate y₀ ∈ U(x*, ρ) and (41) holds for n = 0. The iterate z₀ is well defined by the second substep of method (23), so we can write

z₀ − x* = y₀ − x* − 5 F′(x₀)^{−1} F(y₀)
        = y₀ − x* − F′(y₀)^{−1} F(y₀) + F′(y₀)^{−1}(F′(x₀) − F′(y₀)) F′(x₀)^{−1} F(y₀) − 4 F′(x₀)^{−1} F(y₀).

Notice that the linear operator F′(y₀)^{−1} exists by (45) (for u = y₀). It follows by (37), (40) (for i = 2), (C3) and (45) (for u = x₀, y₀), in turn, that

‖z₀ − x*‖ ≤ [ [∫₀¹ w((1 − θ) ‖y₀ − x*‖) dθ] / (1 − w₀(‖y₀ − x*‖)) + [w(‖y₀ − x₀‖)(1 + ∫₀¹ w₀(θ ‖y₀ − x*‖) dθ)] / [(1 − w₀(‖x₀ − x*‖))(1 − w₀(‖y₀ − x*‖))] + [4 (1 + ∫₀¹ w₀(θ ‖y₀ − x*‖) dθ)] / (1 − w₀(‖x₀ − x*‖)) ] ‖y₀ − x*‖ ≤ h₂(‖x₀ − x*‖) ‖x₀ − x*‖ ≤ ‖x₀ − x*‖.

Thus, the iterate z₀ ∈ U(x*, ρ) and (42) holds for n = 0, where we also used (C1) and (C2) to obtain the estimate

‖F′(x*)^{−1} F(y₀)‖ = ‖F′(x*)^{−1} [∫₀¹ F′(x* + θ(y₀ − x*)) dθ − F′(x*) + F′(x*)](y₀ − x*)‖ ≤ (1 + ∫₀¹ w₀(θ ‖y₀ − x*‖) dθ) ‖y₀ − x*‖.

Moreover, the iterate x₁ is well defined by the third substep of method (23), so we can write

x₁ − x* = y₀ − x* − (1/5) F′(x₀)^{−1} (9 F(y₀) + F(z₀)),

leading to

‖x₁ − x*‖ ≤ ‖y₀ − x*‖ + (1/5) [9 (1 + ∫₀¹ w₀(θ ‖y₀ − x*‖) dθ) ‖y₀ − x*‖ + (1 + ∫₀¹ w₀(θ ‖z₀ − x*‖) dθ) ‖z₀ − x*‖] / (1 − w₀(‖x₀ − x*‖)) ≤ h₃(‖x₀ − x*‖) ‖x₀ − x*‖ ≤ ‖x₀ − x*‖ < ρ.

Therefore, the iterate x₁ ∈ U(x*, ρ) and (43) holds for n = 0.
Replace x₀, y₀, z₀, x₁ by x_m, y_m, z_m, x_{m+1}, m = 0, 1, 2, …, in the preceding calculations to complete the induction for the estimates (41)–(43). Then, from the estimate

‖x_{m+1} − x*‖ ≤ d ‖x_m − x*‖ < ρ,

where d = h₃(‖x₀ − x*‖) ∈ [0, 1), we obtain that x_{m+1} ∈ U(x*, ρ) and lim_{m→∞} x_m = x*. □
The uniqueness of the solution result for method (23) follows.
Proposition 1.
Suppose the following:
(i)
The equation F(x) = 0 has a simple solution x* ∈ U(x*, r) ∩ Ω for some r > 0.
(ii)
Condition (C2) holds.
(iii)
There exists r₁ ≥ r such that

∫₀¹ w₀(θ r₁) dθ < 1. (52)

Set Ω₂ = U[x*, r₁] ∩ Ω. Then, the only solution of the equation F(x) = 0 in the set Ω₂ is x*.
Proof. 
Let y* ∈ Ω₂ be such that F(y*) = 0. Define the linear operator J = ∫₀¹ F′(x* + θ(y* − x*)) dθ. It then follows by (ii) and (52) that

‖F′(x*)^{−1}(J − F′(x*))‖ ≤ ∫₀¹ w₀(θ ‖y* − x*‖) dθ ≤ ∫₀¹ w₀(θ r₁) dθ < 1.

Hence, we deduce x* = y* by the invertibility of J and the estimate J(x* − y*) = F(x*) − F(y*) = 0. □
Remark 3.
Under all conditions of Theorem 3, we can set ρ = r .
Example 2.
Consider the motion system

F₁′(v₁) = e^{v₁}, F₂′(v₂) = (e − 1) v₂ + 1, F₃′(v₃) = 1

with F₁(0) = F₂(0) = F₃(0) = 0. Let F = (F₁, F₂, F₃)^{tr}. Let X = R³, Ω = U[0, 1] and x* = (0, 0, 0)^{tr}. Define the function F on Ω, for v = (v₁, v₂, v₃)^{tr}, as

F(v) = (e^{v₁} − 1, ((e − 1)/2) v₂² + v₂, v₃)^{tr}.

Using this definition, we obtain the derivative

F′(v) =
[ e^{v₁}    0                0
  0         (e − 1) v₂ + 1   0
  0         0                1 ].

Hence, F′(x*) = I. Moreover, for N = (n_{j,i}) ∈ R³ˣ³, the norm is

‖N‖ = max_{1 ≤ j ≤ 3} Σ_{i=1}^{3} |n_{j,i}|.

Conditions (C1)–(C3) are verified for w₀(t) = (e − 1) t and w(t) = 2(1 + 1/(e − 1)) t. Then, the radii are

ρ₁ = 0.3030, ρ₂ = 0.1033 = ρ and ρ₃ = 0.1461.
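The radius ρ₁ can be reproduced numerically from the definition of h₁ in Section 5; since w₀ and w are linear here, the integral is available in closed form. A Python sketch (ρ₂ and ρ₃ follow the same pattern with h₂ and h₃):

```python
import math

# Example 2 data: w0(t) = (e - 1) t and w(t) = 2 (1 + 1/(e - 1)) t
w0 = lambda t: (math.e - 1.0) * t
w = lambda t: 2.0 * (1.0 + 1.0 / (math.e - 1.0)) * t

def h1(t):
    # h1(t) = [ int_0^1 w((1 - theta) t) dtheta ] / (1 - w0(t)) = (w(t)/2) / (1 - w0(t))
    return 0.5 * w(t) / (1.0 - w0(t))

def smallest_root(g, t_max, steps=10000):
    # smallest positive solution of g(t) = 1 by scan plus bisection
    prev = 0.0
    for k in range(1, steps + 1):
        t = t_max * k / steps
        if g(t) >= 1.0:
            lo, hi = prev, t
            for _ in range(80):
                mid = 0.5 * (lo + hi)
                lo, hi = (mid, hi) if g(mid) < 1.0 else (lo, mid)
            return 0.5 * (lo + hi)
        prev = t
    return None

rho0 = 1.0 / (math.e - 1.0)          # smallest root of w0(t) = 1
rho1 = smallest_root(h1, 0.999 * rho0)
print(f"rho_1 = {rho1:.4f}")
```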
Example 3.
If X = C[0, 1] is equipped with the max-norm and Ω = U[0, 1], consider G : Ω → X given as

G(λ)(x) = λ(x) − 6 ∫₀¹ x τ λ(τ)³ dτ.

We obtain the derivative

G′(λ)(ξ)(x) = ξ(x) − 18 ∫₀¹ x τ λ(τ)² ξ(τ) dτ for each ξ ∈ Ω.

Clearly, x* = 0 and the conditions (C1)–(C3) hold for w₀(t) = 9t and w(t) = 18t. Then, the radii are

ρ₁ = 0.0556, ρ₂ = 0.0089 = ρ and ρ₃ = 0.0206.

6. Semi-Local Convergence of Method

As in the local case, we use some functions and parameters for method (23).
Suppose that there exists a function v₀ : S → R, continuous and non-decreasing, such that the equation

v₀(t) − 1 = 0

has a smallest solution τ₀ ∈ S ∖ {0}. Consider a function v : S₀ → R that is continuous and non-decreasing. Define the scalar sequences, for η ≥ 0 and n = 0, 1, 2, …, by

t₀ = 0, s₀ = η,
u_n = s_n + [5 ∫₀¹ v(θ(s_n − t_n)) dθ (s_n − t_n)] / (1 − v₀(t_n)),
t_{n+1} = u_n + [ (1/5)(1 + ∫₀¹ v₀(u_n + θ(u_n − s_n)) dθ)(u_n − s_n) + 3 ∫₀¹ v(θ(s_n − t_n)) dθ (s_n − t_n) ] / (1 − v₀(t_n)),
s_{n+1} = t_{n+1} + [ ∫₀¹ v(θ(t_{n+1} − t_n)) dθ (t_{n+1} − t_n) + (1 + ∫₀¹ v₀(θ t_n) dθ)(t_{n+1} − s_n) ] / (1 − v₀(t_{n+1})). (54)
This sequence is proven to be majorizing for method (23) in Theorem 4. However, first, we provide a general convergence result for sequence (54).
Lemma 2.
Suppose that for all n = 0, 1, 2, …

v₀(t_n) < 1 (55)

and there exists τ ∈ [0, τ₀) such that

t_n ≤ τ. (56)

Then, the sequence {t_n} converges to some t* ∈ [0, τ].
Proof. 
It follows by (54)–(56) that the sequence {t_n} is non-decreasing and bounded from above by τ. Hence, it converges to its unique least upper bound t*. □
Next, the operator F is related to the scalar functions.
Suppose the following:
(h1)
There exist x₀ ∈ Ω and η ≥ 0 such that F′(x₀)^{−1} ∈ L(X, X) and ‖F′(x₀)^{−1} F(x₀)‖ ≤ η.
(h2)
‖F′(x₀)^{−1}(F′(x) − F′(x₀))‖ ≤ v₀(‖x − x₀‖) for all x ∈ Ω.
Set Ω₃ = Ω ∩ U(x₀, τ₀).
(h3)
‖F′(x₀)^{−1}(F′(y) − F′(x))‖ ≤ v(‖y − x‖) for all x, y ∈ Ω₃.
(h4)
The conditions of Lemma 2 hold
and
(h5)
U[x₀, t*] ⊂ Ω.
We present the semi-local convergence result for the method (23).
Theorem 4.
Suppose that conditions (h1)–(h5) hold. Then, the sequence {x_n} given by method (23) is well defined, remains in U[x₀, t*] and converges to a solution x* ∈ U[x₀, t*] of the equation F(x) = 0. Moreover, the following assertions hold:

‖y_n − x_n‖ ≤ s_n − t_n, (57)
‖z_n − y_n‖ ≤ u_n − s_n (58)
and
‖x_{n+1} − z_n‖ ≤ t_{n+1} − u_n. (59)
Proof. 
Mathematical induction is utilized to show estimates (57)–(59). Using (h1) and method (23) for n = 0
y 0 x 0 = F ( x 0 ) 1 F ( x 0 ) η = s 0 t 0 t * .
Thus, the iterate y 0 U [ x 0 , t * ] and (57) holds for n = 0 .
Let u U [ x 0 , t * ] . Then, as in Theorem 3, we get
F ( u ) 1 F ( x 0 ) 1 1 v 0 ( u x 0 ) .
Hence, if we set u = x 0 , iterates y 0 , z 0 and x 1 are well defined by method (23) for n = 0 . Suppose iterates x k , y k , z k , x k + 1 also exist for all integer values k smaller than n . Then, we have the estimates
z n y n = 5 F ( x n ) 1 F ( y n ) 5 0 1 v ( θ y n x n ) d θ y n x n 1 v 0 ( x n x 0 ) 5 0 1 v ( θ s n t n ) ) d θ ( s n t n ) 1 v 0 ( t n ) = u n s n ,
x n + 1 z n = 1 5 F ( x n ) 1 ( F ( y n ) F ( z n ) ) + 3 F ( x n ) 1 F ( y n ) 1 1 v 0 ( x n x 0 ) [ ( 1 + 1 5 0 1 v 0 ( z n x 0 + θ z n y n ) d θ ) y n x n + 3 0 1 v ( θ y n x n d θ y n x n ] t n + 1 u n
and
y n + 1 x n + 1 = F ( x n + 1 ) 1 F ( x n + 1 ) F ( x n + 1 ) 1 F ( x 0 ) F ( x 0 ) 1 F ( x n + 1 ) 1 1 v 0 ( x n + 1 x 0 ) [ 0 1 v ( θ x n + 1 x n ) d θ x n + 1 x n + ( 1 + 0 1 v 0 ( θ x n x 0 ) d θ ) x n + 1 y n ] s n + 1 t n + 1 ,
where we also used
F ( y n ) = F ( y n ) F ( x n ) F ( x n ) ( y n x n ) = 0 1 [ F ( x n + θ ( y n x n ) ) d θ F ( x n ) ] ( y n x n ) ,
so
F ( x 0 ) 1 F ( y n ) 0 1 v ( θ y n x n ) d θ y n x n
and
F ( x n + 1 ) = F ( x n + 1 ) F ( x n ) F ( x n ) ( y n x n ) F ( x n ) ( x n + 1 x n ) + F ( x n ) ( x n + 1 x n ) = F ( x n + 1 ) F ( x n ) F ( x n ) ( x n + 1 x n ) + F ( x n ) ( x n + 1 y n ) ,
so
F ( x 0 ) 1 F ( x n + 1 ) 0 1 v ( θ x n + 1 x n ) d θ x n + 1 x n + ( 1 + v 0 ( x n x 0 ) ) x n + 1 y n 0 1 v ( θ ( t n + 1 t n ) ) d θ ( t n + 1 t n ) + ( 1 + v 0 ( t n ) ) ( t n + 1 s n ) , z n x 0 z n y n + y n x 0 u n s n + s n t 0 t *
and
x n + 1 x 0 x n + 1 z n + z n x 0 t n + 1 u n + u n t 0 t * .
Hence, the sequence { t n } is majorizing for method (23) and the iterates { x n } , { y n } , { z n } belong to U [ x 0 , t * ] . The sequence { x n } is complete in the Banach space X and as such, it converges to some x * ∈ U [ x 0 , t * ] . By using the continuity of F and letting n → ∞ in (61), we deduce F ( x * ) = 0 .
Proposition 2.
Suppose:
(i)
There exists a solution x * U ( x 0 , ρ 2 ) of equation F ( x ) = 0 for some ρ 2 > 0 .
(ii)
Condition (h2) holds.
(iii)
There exists ρ 3 ρ 2 such that
0 1 v 0 ( ( 1 θ ) ρ 2 + θ ρ 3 ) d θ < 1 .
Set Ω 4 = Ω U [ x 0 , ρ 3 ] . Then, x * is the only solution of equation F ( x ) = 0 in the region Ω 4 .
Proof. 
Let y * Ω 4 with F ( y * ) = 0 . Define the linear operator Q = 0 1 F ( x * + θ ( y * x * ) ) d θ . Then, by (h2) and (62), we obtain in turn that
F ( x 0 ) 1 ( Q F ( x 0 ) ) 0 1 v 0 ( ( 1 θ ) x 0 y * + θ x 0 x * ) d θ 0 1 v 0 ( ( 1 θ ) ρ 2 + θ ρ 3 ) d ρ < 1 .
Thus, x * = y * .
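The uniqueness condition (62) is easy to verify for concrete majorants. For a linear function v 0 ( t ) = a t , the integral has the closed form a ( ρ 2 + ρ 3 ) / 2 , so (62) reduces to a ( ρ 2 + ρ 3 ) < 2 . The following minimal Python sketch (the values of a , ρ 2 and ρ 3 are arbitrary sample data, not taken from this article) confirms the reduction by midpoint-rule quadrature:

```python
# Hypothetical linear majorant v0(t) = a*t with sample data.
a, rho2, rho3 = 2.0, 0.2, 0.25

n = 100_000
h = 1.0 / n
# midpoint rule for \int_0^1 v0((1 - s)*rho2 + s*rho3) ds
integral = h * sum(a * ((1 - (i + 0.5) * h) * rho2 + (i + 0.5) * h * rho3)
                   for i in range(n))
closed_form = a * (rho2 + rho3) / 2    # exact value for linear v0

assert abs(integral - closed_form) < 1e-9
assert closed_form < 1                 # condition (62) holds for this sample data
```

The midpoint rule is exact for affine integrands, so the quadrature only checks the algebra of the closed form.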
The next two examples show how to choose the functions v 0 , v , and the parameter η .
Example 4.
Set X = R . Let us consider a scalar function F defined on the set Ω = U [ x 0 , 1 μ ] for μ ( 0 , 1 ) by
F ( x ) = x 3 μ .
Choose x 0 = 1 . Then, the conditions (h1)–(h3) are verified for η = 1 μ 3 , v 0 ( t ) = ( 3 μ ) t and v ( t ) = 2 ( 1 + 1 3 μ ) t .
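The choices in Example 4 can be sanity-checked numerically. The sketch below (Python; μ = 0.5 is an arbitrary sample value) samples points of Ω = U [ 1 , 1 − μ ] and verifies the center-Lipschitz condition (h2) with v 0 ( t ) = ( 3 − μ ) t , together with η = ( 1 − μ ) / 3 :

```python
import random

mu = 0.5                        # sample value in (0, 1)
x0 = 1.0
Fp = lambda x: 3 * x ** 2       # F'(x) for F(x) = x**3 - mu
eta = abs(x0 ** 3 - mu) / Fp(x0)
assert abs(eta - (1 - mu) / 3) < 1e-15   # eta = (1 - mu)/3 as in Example 4

v0 = lambda t: (3 - mu) * t     # center-Lipschitz majorant from Example 4
random.seed(0)
for _ in range(10_000):
    x = x0 + random.uniform(-(1 - mu), 1 - mu)   # x in Omega = U[1, 1 - mu]
    lhs = abs(Fp(x) - Fp(x0)) / abs(Fp(x0))      # |F'(x0)^{-1}(F'(x) - F'(x0))|
    assert lhs <= v0(abs(x - x0)) + 1e-12
```

Here the check succeeds because | x + 1 | ≤ 3 − μ on Ω , which is exactly how the constant in v 0 arises.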
Example 5.
Consider X = C [ 0 , 1 ] and Ω = U [ 0 , 1 ] . Then the problem [5]
Ξ ( 0 ) = 0 , Ξ ( 1 ) = 1 ,
Ξ = Ξ ι Ξ 2
can also be written as an integral equation of the form
Ξ ( q 2 ) = q 2 + 0 1 Θ ( q 2 , q 1 ) ( Ξ 3 ( q 1 ) + ι Ξ 2 ( q 1 ) ) d q 1
where ι is a constant and Θ ( q 2 , q 1 ) is the Green’s function
Θ ( q 2 , q 1 ) = q 1 ( 1 q 2 ) , q 1 q 2 q 2 ( 1 q 1 ) , q 2 < q 1 .
Consider F : Ω X as
[ F ( x ) ] ( q 2 ) = x ( q 2 ) q 2 0 1 Θ ( q 2 , q 1 ) ( x 3 ( q 1 ) + ι x 2 ( q 1 ) ) d q 1 .
Choose Ξ 0 ( q 2 ) = q 2 and Ω = U ( Ξ 0 , ϵ 0 ) . Then, clearly U ( Ξ 0 , ϵ 0 ) ⊂ U ( 0 , ϵ 0 + 1 ) , since ‖ Ξ 0 ‖ = 1 . If 2 ι < 5 , then conditions (h1)–(h3) are satisfied for
w 0 ( t ) = 2 ι + 3 ρ 0 + 6 8 t , w ( t ) = ι + 6 ρ 0 + 3 4 t .
Hence, w 0 ( t ) ≤ w ( t ) .
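The constants in Example 5 stem from bounds on the Green's function; in particular, ∫ 0 1 Θ ( q 2 , q 1 ) d q 1 = q 2 ( 1 − q 2 ) / 2 ≤ 1 / 8 , which is consistent with the denominator 8 appearing in w 0 . A small Python sketch (the midpoint rule and the grid size are incidental choices of ours) verifies this identity numerically:

```python
def theta(q2, q1):
    # Green's function from Example 5
    return q1 * (1 - q2) if q1 <= q2 else q2 * (1 - q1)

def row_integral(q2, n=4000):
    # midpoint rule for \int_0^1 theta(q2, q1) dq1
    h = 1.0 / n
    return h * sum(theta(q2, (i + 0.5) * h) for i in range(n))

for q2 in (0.1, 0.25, 0.5, 0.75, 0.9):
    exact = q2 * (1 - q2) / 2          # closed form of the row integral
    assert abs(row_integral(q2) - exact) < 1e-6
    assert exact <= 1 / 8 + 1e-15      # maximum 1/8 is attained at q2 = 1/2
```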

7. Local Convergence of Method (20)

The local analysis relies on certain parameters and real functions. Let L 0 , L and α be positive parameters. Set T 1 = [ 0 , 1 ( 2 + α ) L 0 ] provided that ( 2 + α ) L 0 < 1 .
Define the function h 1 : T 1 R by
h 1 ( t ) = ( 1 + α ) L t 1 ( 2 + α ) L 0 t .
Notice that parameter ρ
ρ = 1 ( 1 + α ) L + ( 2 + α ) L 0
is the only solution of equation
h 1 ( t ) 1 = 0
in the set T 1 .
Define the parameter ρ 0 by
ρ 0 = 1 ( 2 + α ) ( L 0 + L ) .
Notice that ρ 0 < ρ . Set T 0 = [ 0 , ρ 0 ] .
Define the function h 2 : T 0 R by
h 2 ( t ) = ( 2 + 2 α + h 1 ( t ) ) L h 1 ( t ) t 1 ( 2 + α ) ( L 0 + L ) t .
The equation
h 2 ( t ) 1 = 0
has a smallest solution ρ ∈ T 0 ∖ { 0 } by the intermediate value theorem, since h 2 ( 0 ) − 1 = − 1 and h 2 ( t ) → ∞ as t → ρ 0 − . It shall be shown that ρ is a radius of convergence for method (20). It follows by these definitions that for all t ∈ T 0
0 ( L 0 + L ) ( 2 + α ) t < 1
0 h 1 ( t ) < 1
and
0 h 2 ( t ) < 1 .
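The convergence radius, the smallest root of h 2 ( t ) − 1 = 0 , can be computed by bisection on ( 0 , ρ 0 ) , since h 2 vanishes at 0 , increases on this interval, and blows up as t → ρ 0 − . In the Python sketch below, only the formulas for h 1 , h 2 , ρ and ρ 0 come from the text; the parameter values L 0 , L , α are hypothetical samples:

```python
L0, L, alpha = 1.0, 1.5, 0.5    # hypothetical sample parameters

rho = 1 / ((1 + alpha) * L + (2 + alpha) * L0)   # root of h1(t) - 1 = 0
rho0 = 1 / ((2 + alpha) * (L0 + L))              # right end of T0

def h1(t):
    return (1 + alpha) * L * t / (1 - (2 + alpha) * L0 * t)

def h2(t):
    return (2 + 2 * alpha + h1(t)) * L * h1(t) * t / (1 - (2 + alpha) * (L0 + L) * t)

# bisection for the smallest solution of h2(t) = 1 in (0, rho0)
a, b = 0.0, rho0 * (1 - 1e-12)
for _ in range(200):
    m = (a + b) / 2
    if h2(m) < 1:
        a = m
    else:
        b = m
radius = a

assert abs(h1(rho) - 1) < 1e-9           # rho solves h1(t) = 1
assert rho0 < rho                        # as noted in the text
assert 0 < radius < rho0                 # convergence radius lies inside T0
assert h2(radius * 0.99) < 1 < h2(radius * 1.01)
```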
The following conditions are used:
(C1)
There exists a solution x * Ω of equation F ( x ) = 0 such that F ( x * ) 1 L ( X , X ) .
(C2)
There exist positive parameters L 0 and α such that v , z Ω
F ( x * ) 1 ( [ v , z ; F ] F ( x * ) ) L 0 ( v x * + z x * )
and
F ( x ) α x x * .
Set Ω 1 = U ( x * , ρ ) Ω .
(C3)
There exists a constant L > 0 such that for all x , y , v , z ∈ Ω 1
F ( x * ) 1 ( [ x , y ; F ] [ v , z ; F ] ) L ( x v + y z )
and
(C4)
U [ x * , ρ ] ⊂ Ω .
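Conditions (C2) and (C3) involve the divided difference [ x , y ; F ] , which for a scalar F is ( F ( x ) − F ( y ) ) / ( x − y ) . As a hedged illustration (the choice F ( x ) = x 3 − μ , the radius r and the derived constant L below are our own sample assumptions, not taken from this article), condition (C3) can be probed by random sampling:

```python
import random

mu = 8.0
xs = mu ** (1 / 3)                 # x* = 2, simple solution of x**3 - mu = 0
r = 0.5                            # sample radius around x*
dd = lambda u, v: u * u + u * v + v * v   # [u, v; F] for F(x) = x**3 - mu
Fp = 3 * xs * xs                   # F'(x*)
L = (xs + r) / (xs * xs)           # candidate Lipschitz constant for (C3)

random.seed(1)
for _ in range(10_000):
    x, y, v, z = (xs + random.uniform(-r, r) for _ in range(4))
    lhs = abs(dd(x, y) - dd(v, z)) / Fp
    assert lhs <= L * (abs(x - v) + abs(y - z)) + 1e-12
```

The constant L = ( x * + r ) / x * 2 follows from the elementary bound | [ x , y ; F ] − [ v , z ; F ] | ≤ 3 ( x * + r ) ( | x − v | + | y − z | ) on the sampled ball.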
Next, the local convergence of method (20) is presented using the preceding terminology and conditions.
Theorem 5.
Under conditions (C1)–(C4), further suppose that x 0 U ( x * , ρ ) . Then, the sequence { x n } generated by method (20) is well defined in U ( x * , ρ ) , stays in U ( x * , ρ ) n = 0 , 1 , 2 , and is convergent to x * so that
y n x * h 1 ( x n x * ) x n x * x n x * < ρ
and
x n + 1 x * h 2 ( x n x * ) x n x * x n x * ,
where the functions h 1 , h 2 and the radius ρ are defined previously.
Proof. 
It follows by method (20), (C1), (C2) and x 0 U ( x * , ρ ) in turn that
F ( x * ) 1 ( A 0 F ( x * ) ) = F ( x * ) 1 ( [ x 0 , x 0 + F ( x 0 ) ; F ] F ( x * ) ) L 0 ( 2 x 0 x * + F ( x 0 ) F ( x * ) ) L 0 ( 2 + α ) x 0 x * < L 0 ( 2 + α ) ρ .
It follows by (68) and the Banach lemma on invertible operators [24] that A 0 1 L ( X , X ) and
A 0 1 F ( x * ) 1 1 ( 2 + α ) L 0 x 0 x * .
Hence, the iterate y 0 exists by the first substep of method (20) for n = 0 . It follows from the first substep of method (20), (C2) and (C3), that
y 0 x * x 0 x * A 0 1 F ( x 0 ) A 0 1 F ( x * ) F ( x * ) 1 ( A 0 ( F ( x 0 ) F ( x * ) ) ) ( x 0 x * ) A 0 1 F ( x * ) F ( x * ) 1 ( A 0 ( F ( x 0 ) F ( x * ) ) ) x 0 x * L ( x 0 x * + F ( x 0 ) F ( x * ) ) 1 L 0 ( 2 + α ) x 0 x * h 1 ( x 0 x * ) x 0 x * x 0 x * < ρ .
Thus, the iterate y 0 U ( x * , ρ ) and (66) holds for n = 0 . Similarly, by the second substep of method (20), we have
F ( x * ) 1 ( B 0 F ( x * ) ) = F ( x * ) 1 ( [ y 0 , w 0 ; F ] [ y 0 , x 0 ; F ] [ x 0 , w 0 ; F ] [ x * , x * ; F ] ) L y 0 w 0 + L 0 ( y 0 x * + w 0 x * ) L ( y 0 x * + w 0 x * ) + L 0 ( y 0 x * + w 0 x * ) ( L + L 0 ) ( 2 + α ) ρ < ( L + L 0 ) ( 2 + α ) ρ 0 = 1 .
Hence, B 0 1 L ( X , X ) and
B 0 1 F ( x * ) 1 1 ( L + L 0 ) ( 2 + α ) x 0 x * .
Thus, the iterate x 1 exists by the second sub-step of method (20). Then, as in (70) we obtain in turn that
x 1 x * y 0 x * B 0 1 F ( y 0 ) B 0 1 F ( x * ) F ( x * ) 1 ( B 0 ( F ( y 0 ) F ( x * ) ) ) y 0 x * F ( x * ) 1 ( [ y 0 , w 0 ; F ] + [ y 0 , x 0 ; F ] [ x 0 , w 0 ; F ] [ y 0 , x * ; F ] ) 1 ( L + L 0 ) ( 2 + α ) x 0 x * y 0 x * L ( 2 + 2 α + h 1 ( x 0 x * ) ) x 0 x * 1 ( L + L 0 ) ( 2 + α ) x 0 x * h 1 ( x 0 x * ) x 0 x * h 2 ( x 0 x * ) x 0 x * x 0 x * < ρ .
Therefore, the iterate x 1 U ( x * , ρ ) and (67) holds for n = 0 .
Simply replace x 0 , y 0 , x 1 by x m , y m , x m + 1 , m = 0 , 1 , 2 , in the preceding calculations to complete the induction for (66) and (67). The convergence then follows from the estimate
x m + 1 x * μ x m x * < ρ ,
where μ = h 2 ( x 0 x * ) ∈ [ 0 , 1 ) , which gives x m + 1 ∈ U ( x * , ρ ) and lim m → ∞ x m = x * .
Concerning the uniqueness of the solution x * (not given in [9]), we provide the result.
Proposition 3.
Suppose:
(i)
The point x * is a simple solution of equation F ( x ) = 0 in U ( x * , r ) ⊂ Ω for some r > 0 .
(ii)
There exists a positive parameter L 1 such that for all y ∈ Ω
F ( x * ) 1 ( [ x * , y ; F ] F ( x * ) ) L 1 y x *
(iii)
There exists r 1 r such that
L 1 r 1 < 1 .
Set Ω 2 = U [ x * , r 1 ] Ω . Then, x * is the only solution of equation F ( x ) = 0 in the set Ω 2 .
Proof. 
Set P = [ x * , y * ; F ] for some y * ∈ Ω 2 with F ( y * ) = 0 . It follows by (i), (75) and (76) that
F ( x * ) 1 ( P F ( x * ) ) L 1 y * x * < 1 .
Thus, we conclude x * = y * by the invertibility of P and the identity P ( x * y * ) = F ( x * ) F ( y * ) = 0 .
Remark 4.
(i) Notice that not all conditions of Theorem 5 are used in Proposition 3. If they were, then we could set r 1 = ρ .
(ii) By the definition of set Ω 1 we have
Ω 1 Ω .
Therefore, the parameter
L L 2 ,
where L 2 is the corresponding Lipschitz constant in [1,3,9,19] appearing in the condition x , y , z Ω
F ( x * ) 1 ( [ x , y ; F ] [ v , z ; F ] ) L 2 ( x v + y z ) .
Thus, the radius of convergence R 0 in [1,7,8,20] uses L 2 instead of L . That is, by (78),
R 0 ρ .
Examples where (77), (78) and (80) are strict can be found in [2,5,11,12,13,15,21,22,23,24].

8. Majorizing Sequences for Method (20)

Let K 0 and K be given positive parameters with K 0 ≤ K , let δ ∈ [ 0 , 1 ) and η ≥ 0 , and set T = [ 0 , 1 ) . Consider the recurrent polynomials defined on the interval T for n = 1 , 2 , by
f n ( 1 ) ( t ) = K t 2 n η + K t 2 n 1 η + 2 K 0 ( 1 + t + + t 2 n + 1 ) η + K 0 ( t 2 n + 1 + 2 t 2 n ) t 2 n + 1 η + δ 1 , f n ( 2 ) ( t ) = K t 2 n + 1 η + K ( t 2 n + 1 + 2 t 2 n ) t 2 n η + 2 K 0 ( 1 + t + + t 2 n + 2 ) η + δ 1 , g n ( 1 ) ( t ) = K t 3 + K t 2 K t K + 2 K 0 ( t 3 + t 4 ) + K 0 ( t 2 n + 3 + 2 t n + 2 ) t 4 η K 0 ( t 2 n + 1 + 2 t 2 n ) t 2 η , g n ( 2 ) ( t ) = K t 3 + K ( t 3 + 2 t 2 ) t 2 n + 2 η + 2 K 0 ( t 3 + t 4 ) K t K ( t + 2 ) t 2 n η , h n + 1 ( 1 ) ( t ) = g n + 1 ( 1 ) ( t ) g n ( 1 ) ( t ) , h n + 1 ( 2 ) ( t ) = g n + 1 ( 2 ) ( t ) g n ( 2 ) ( t ) ,
and polynomials
g ( 1 ) ( t ) = g 1 ( t ) = K t 3 + K t 2 K t K + 2 K 0 ( t 3 + t 4 ) ,
g ( 2 ) ( t ) = g 2 ( t ) = K t 3 + 2 K 0 ( t 3 + t 4 ) K t = g 3 ( t ) t
and
g ( t ) = ( t 1 ) 2 ( t 5 + 4 t 4 + 6 t 3 + 6 t 2 + 5 t + 2 ) .
Then, the following auxiliary result connecting these polynomials can be shown.
Lemma 3.
The following assertions hold:
f n + 1 ( 1 ) ( t ) = f n ( 1 ) ( t ) + g n ( 1 ) ( t ) t 2 n 1 η ,
f n + 1 ( 2 ) ( t ) = f n ( 2 ) ( t ) + g n ( 2 ) ( t ) t 2 n η ,
h n + 1 ( 1 ) ( t ) = g ( t ) K 0 t 2 n + 2 η ,
h n + 1 ( 2 ) ( t ) = g ( t ) K t 2 n η ,
polynomials g 1 and g 2 have smallest zeros in the interval T ∖ { 0 } denoted by ξ 1 and ξ 2 , respectively,
h n + 1 ( 1 ) ( t ) 0 t [ 0 , ξ 1 )
and
h n + 1 ( 2 ) ( t ) 0 t [ 0 , ξ 2 ) .
Moreover, define functions on the interval T by
g ( 1 ) ( t ) = lim n g n ( 1 ) ( t )
and
g ( 2 ) ( t ) = lim n g n ( 2 ) ( t ) .
Then,
g ( 1 ) ( t ) = g 1 ( t ) t [ 0 , ξ 1 ) ,
g ( 2 ) ( t ) = g 2 ( t ) t [ 0 , ξ 2 ) ,
f n + 1 ( 1 ) ( t ) f n ( 1 ) ( t ) + g 1 ( t ) t 2 n 1 η t [ 0 , ξ 1 ) ,
f n + 1 ( 2 ) ( t ) f n ( 2 ) ( t ) + g 2 ( t ) t 2 n η t [ 0 , ξ 2 ) ,
f n + 1 ( 1 ) ( ξ 1 ) f n ( 1 ) ( ξ 1 ) ,
and
f n + 1 ( 2 ) ( ξ 2 ) f n ( 2 ) ( ξ 2 ) .
Proof. 
Assertions (81)–(84) hold by the definition of these functions and basic algebra. By the intermediate value theorem, polynomials g 1 and g 3 have zeros in the interval T ∖ { 0 } , since g 1 ( 0 ) = − K , g 1 ( 1 ) = 4 K 0 , g 3 ( 0 ) = − K and g 3 ( 1 ) = 4 K 0 . Then, assertions (85) and (86) follow by the definition of these polynomials and the zeros ξ 1 and ξ 2 . Next, assertions (91) and (94) also follow from (87), (88) and the definition of these polynomials. □
The preceding result is connected to the scalar sequence defined n = 0 , 1 , 2 , by t 0 = 0 , s 0 = η ,
t 1 = s 0 + K ( η + δ ) η 1 K 0 ( 2 η + δ ) , s n + 1 = t n + 1 + K ( t n + 1 t n + s n t n ) ( t n + 1 s n ) 1 K 0 ( 2 t n + 1 + γ n + δ ) t n + 2 = s n + 1 + K ( s n + 1 t n + 1 + γ n ) ( s n + 1 t n + 1 ) 1 K 0 ( 2 s n + 1 + δ ) ,
where γ n = K ( t n + 1 t n + s n t n ) ( t n + 1 s n ) and δ ≥ γ 0 .
Moreover, define parameters ξ 1 = K ( s 1 t 1 + γ 0 ) 1 K 0 ( 2 s 1 + δ ) , ξ 2 = K ( t 1 + s 0 ) 1 K 0 ( 2 t 1 + γ 0 + δ ) and a = max { ξ 1 , ξ 2 } .
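The sequence (95) is straightforward to iterate numerically. The Python sketch below (the values of K 0 , K , δ , η are hypothetical sample data of ours, chosen so that K 0 δ < 1 and δ ≥ γ 0 ; only the recurrence itself comes from (95)) generates t n , s n , γ n and checks monotonicity together with the bound ( 1 − K 0 δ ) / ( 2 K 0 ) that appears later in Lemma 5:

```python
K0, K, delta, eta = 0.4, 0.5, 0.1, 0.3   # hypothetical sample data, K0*delta < 1

t = [0.0]
s = [eta]
t.append(s[0] + K * (eta + delta) * eta / (1 - K0 * (2 * eta + delta)))
gammas = []
for n in range(40):
    # gamma_n = K*(t_{n+1}-t_n + s_n-t_n)*(t_{n+1}-s_n), the numerator of s_{n+1}-t_{n+1}
    g = K * (t[n + 1] - t[n] + s[n] - t[n]) * (t[n + 1] - s[n])
    gammas.append(g)
    s.append(t[n + 1] + g / (1 - K0 * (2 * t[n + 1] + g + delta)))
    t.append(s[n + 1] + K * (s[n + 1] - t[n + 1] + g) * (s[n + 1] - t[n + 1])
             / (1 - K0 * (2 * s[n + 1] + delta)))

merged = [x for pair in zip(t, s) for x in pair]   # t0, s0, t1, s1, ...
assert all(u <= w + 1e-15 for u, w in zip(merged, merged[1:]))   # non-decreasing
assert t[-1] <= (1 - K0 * delta) / (2 * K0)        # bound t1** used in Lemma 5
assert t[-1] - t[-2] < 1e-12                       # numerical convergence to t*
assert gammas[0] <= delta                          # delta >= gamma_0 for this data
```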
Then, the first convergence result for sequence { t n } follows.
Lemma 4.
Suppose
K η 1 , 0 < ξ 1 , 0 < ξ 2 , a < ξ < 1 ,
f 1 ( 1 ) ( ξ 1 ) 0
and
f 2 ( 1 ) ( ξ 2 ) 0 .
Then, scalar sequence { t n } is non-decreasing, bounded from above by t * * = η 1 ξ , and converges to its unique least upper bound t * [ 0 , t * * ] . Moreover, the following error bounds hold
0 < t n + 1 s n ξ ( s n t n ) ξ 2 n + 1 η ,
0 < s n t n ξ ( t n s n 1 ) ξ 2 n η
and
γ n + 1 γ n γ 0 .
Proof. 
Assertions (99)–(101) hold if we show using induction that
0 < K ( t n + 1 t n + s n t n ) 1 K 0 ( 2 t n + 1 + γ n + δ ) ξ 1 ,
0 < K ( s n + 1 t n + 1 + γ n ) 1 K 0 ( 2 s n + 1 + δ ) ξ 2 ,
and
t n s n t n + 1 .
By the definition of t 1 , we obtain
t 1 s 0 = K ( η + δ ) η 1 K 0 ( 2 η + δ ) > 0 ,
so s 0 < t 1 , and (103) holds for n = 0 . Suppose assertions (101)–(103) hold for each m = 0 , 1 , 2 , 3 , , n . By (99) and (100) we have
s m t m + ξ 2 m η s m 1 + ξ 2 m 1 η + ξ 2 m η η + ξ η + + ξ 2 m η = 1 ξ 2 m + 1 1 ξ η t * *
and
t m + 1 s m + ξ 2 m + 1 η t m + ξ 2 m + 1 η + ξ 2 m η η + ξ η + + ξ 2 m + 1 η = 1 ξ 2 m + 2 1 ξ η t * * .
By the induction hypotheses sequences { t m } , { s m } are increasing. Evidently, estimate (101) holds if
K ξ 2 m + 1 η + K ξ 2 m η + 2 K 0 ξ 1 ξ 2 m + 2 1 ξ η + K 0 ξ δ + ξ γ m K 0 ξ 0
or
f m ( 1 ) ( t ) ≤ 0 at t = ξ 1 ,
where γ m K ( ξ 2 m + 1 + 2 ξ 2 m ) ξ 2 m + 1 η 2 . By (91), (93), and (98) estimate (107) holds.
Similarly, assertion (103) holds if
K ξ 2 m + 2 η + K 2 ( ξ 2 m + 1 η + 2 ξ 2 m η ) ξ 2 m + 1 η + 2 ξ K 0 ( 1 + ξ + + ξ 2 m + 2 ) η + δ ξ ξ 0
or
f m ( 2 ) ( t ) ≤ 0 at t = ξ 2 .
By (92) and (94), assertion (108) holds. Hence, (100) and (103) also hold. Notice that γ n can be written as γ n = K ( E n + E n 1 ) E n 2 , where E n = t n + 1 t n > 0 , E n 1 = s n t n , and E n 2 = t n + 1 s n > 0 . Hence, we get
E n + 1 E n = t n + 2 2 t n + 1 + t n ξ 2 n ( ξ 2 1 ) ( ξ + 1 ) η < 0 ,
E n + 1 1 E n 1 = s n + 1 t n + 1 ( s n t n ) ξ 2 n ( ξ 2 1 ) η < 0 ,
and
E n + 1 2 E n 2 = t n + 2 s n + 1 ( t n + 1 s n ) ξ 2 n + 1 ( ξ 2 1 ) η < 0 ,
so
γ n + 1 γ n γ 0 .
It follows that sequence { t n } is non-decreasing, bounded from above by t * * . Thus, it converges to t * .
Next, a second convergence result for sequence (95) is presented, whose sufficient criteria are weaker but more difficult to verify than those of Lemma 4.
Lemma 5.
Suppose
K 0 δ < 1 ,
K 0 ( 2 t n + 1 + γ n + δ ) < 1 ,
and
K 0 ( 2 s n + 1 + δ ) < 1
hold. Then, sequence { t n } is increasing and bounded from above by t 1 * * = 1 K 0 δ 2 K 0 , so it converges to its unique least upper bound t 1 * [ 0 , t 1 * * ] .
Proof. 
It follows from the definition of sequence (95), and conditions (109)–(111). □

9. Semi-Local Convergence of Method (20)

The conditions (C) shall be used in the semi-local convergence analysis of method (20).
Suppose
(C1)
There exist x 0 Ω , η 0 , δ [ 0 , 1 ) such that A 0 1 L ( X , X ) , A 0 1 F ( x 0 ) η , and F ( x 0 ) δ .
(C2)
There exists K 0 > 0 such that for all u , v Ω
A 0 1 ( [ u , v ; F ] A 0 ) K 0 ( u x 0 + v w 0 ) .
Set Ω 0 = U ( x 0 , 1 K 0 δ 2 K 0 ) Ω for K 0 δ < 1 .
(C3)
There exists K > 0 such that for all u , v , u ¯ , v ¯ Ω 0
A 0 1 ( [ u , v ; F ] [ u ¯ , v ¯ ; F ] ) K ( u u ¯ + v v ¯ ) .
(C4)
U [ x 0 , ρ + δ ] ⊂ Ω , where ρ = t * + γ 0 or t * * if the conditions of Lemma 4 hold, and ρ = t 1 * + γ 0 or t 1 * * if the conditions of Lemma 5 hold.
Remark 5.
The results in [19] are given in the non-affine form. The benefits of using affine invariant results over non-affine are well-known [1,5,11,21]. In particular, they assumed A 0 1 β and
(C3)′ [ x , y ; F ] [ x ¯ , y ¯ ; F ] K ¯ ( x x ¯ + y y ¯ ) holds for all x , y , x ¯ y ¯ Ω . By the definition of the set Ω 0 , we get
Ω 0 Ω ,
so
K 0 β K ¯
and
K β K ¯ .
Hence, K can replace β K ¯ in the results in [19]. Notice also that using (C3)′ they estimated
B n + 1 1 A 0 1 1 β K ¯ ( 2 s ¯ n + 1 + δ )
and
A 0 1 ( A n + 1 A 0 ) 1 1 β K ¯ ( t ¯ n + 1 t ¯ 0 + γ ¯ n + δ ) ,
where { t ¯ n } , { s ¯ n } are defined for n = 0 , 1 , 2 , by t ¯ 0 = 0 , s ¯ 0 = η ,
t ¯ 1 = s ¯ 0 + β K ¯ ( η + δ ) η 1 β K ¯ ( 2 s ¯ 0 + δ ) , s ¯ n + 1 = t ¯ n + 1 + β γ ¯ 1 β K ¯ ( 2 t ¯ n + 1 + γ ¯ n + δ ) t ¯ n + 2 = s ¯ n + 1 + β K ¯ ( s ¯ n + 1 t ¯ n + 1 + γ ¯ n ) ( s ¯ n + 1 t ¯ n + 1 ) 1 β K ¯ ( 2 s ¯ n + 1 + δ ) ,
where γ ¯ n = K ¯ ( t ¯ n + 1 t ¯ n + s ¯ n t ¯ n ) ( t ¯ n + 1 s ¯ n ) and δ ≥ γ ¯ 0 . However, using the weaker condition (C2) we obtain, respectively,
B n + 1 1 A 0 1 1 K 0 ( 2 s n + 1 + δ )
and
A 0 1 ( A n + 1 A 0 ) 1 1 K 0 ( t n + 1 t 0 + γ n + δ )
which are tighter estimates than (115) and (116), respectively. Hence, K 0 and K can replace β K ¯ , and (118), (119) can replace (115), (116), respectively, in the proof of Theorem 3 in [19]. Examples where (112)–(114) are strict can be found in [1,5,11,21]. Simple induction shows that
0 < s n t n s ¯ n t ¯ n
0 < t n + 1 s n t ¯ n + 1 s ¯ n
and
t * t ¯ * = lim n t ¯ n .
These estimates justify the claims made in the introduction of this work. The local results in [19] can also be extended using our technique.
Next, we present the semi-local convergence result for the method (20).
Theorem 6.
Suppose that conditions (C) hold. Then, iteration { x n } generated by method (20) exists in U [ x 0 , t * ] , remains in U [ x 0 , t * ] and lim n x n = x * U [ x 0 , t * ] with F ( x * ) = 0 , so that
x n x * t * t n .
Proof. 
It follows from the discussion in Remark 5 preceding this theorem. □
Next, we present the uniqueness of the solution result, where conditions (C) are not necessarily utilized.
Proposition 4.
Suppose the following:
(i)
There exists a simple solution x * ∈ U ( x 0 , r ) ⊂ Ω of equation F ( x ) = 0 for some r > 0 .
(ii)
Condition (C2) holds
and
(iii)
There exists r * r such that K 0 ( r + r * + δ ) < 1 .
Set Ω 1 = U ( x 0 , 1 K 0 ( δ + r ) K 0 ) Ω . Then, the element x * is the only solution of equation F ( x ) = 0 in the region Ω 1 .
Proof. 
Let z * Ω 1 with F ( z * ) = 0 . Define Q = [ x * , z * ; F ] . Then, in view of (ii) and (iii),
A 0 1 ( Q A 0 ) K 0 ( x * x 0 + z * w 0 ) K 0 ( r + r * + δ ) < 1 .
Therefore, we conclude z * = x * is a consequence of the invertibility of Q and the identity Q ( x * z * ) = F ( x * ) F ( z * ) = 0 .
Remark 6.
(i) Notice that r can be chosen to be t * .
(ii) The results can be extended further as follows. Replace
(C3)″ A 0 1 ( [ u , v ; F ] [ u ¯ , v ¯ ; F ] ) K ˜ ( u u ¯ + v v ¯ ) , u , u ¯ Ω 0 , v = u A ( u ) 1 F ( u ) and v ¯ = A ( u ¯ ) 1 F ( u ¯ ) . Then, we have
(iii) K ˜ K .
Another way is to define the set Ω 2 = U ( x 1 , 1 K 0 ( δ + γ 0 ) 2 K 0 η ) provided that K 0 ( δ + γ 0 ) < 1 , and to suppose that Ω 2 ⊂ Ω . Then, Ω 2 ⊂ Ω 0 , and condition (C3)″ can be imposed on Ω 2 , say, with constant K ˜ 0 . Then, we have that
K ˜ 0 K
also holds. Hence, the tighter constants K ˜ or K ˜ 0 can replace K in Theorem 6.

10. Conclusions

A convergence analysis was developed for generalized three-step numerical methods. The advantages of the new approach include weaker convergence criteria and a uniform set of conditions utilizing only information on the operators appearing in these methods, in contrast to earlier works on special cases, where the existence of high-order derivatives is assumed to prove convergence. The methodology is very general and does not depend on the particular method. That is why it can be applied to multi-step and other numerical methods, which shall be the topic of future work.
A weak point of this methodology is that the computation of the majorant functions “h” at this level of generality is hard in general. Notice that this is not the case for the special cases of method (2) or method (3) (see, for example, Examples 4 and 5). As far as we know, there is no other methodology that can be compared to the one introduced in this article for handling the semi-local or the local convergence of method (2) or method (3) at this level of generality.

Author Contributions

Conceptualization, M.I.A., I.K.A., S.R. and S.G.; methodology, M.I.A., I.K.A., S.R. and S.G.; software, M.I.A., I.K.A., S.R. and S.G.; validation, M.I.A., I.K.A., S.R. and S.G.; formal analysis, M.I.A., I.K.A., S.R. and S.G.; investigation, M.I.A., I.K.A., S.R. and S.G.; resources, M.I.A., I.K.A., S.R. and S.G.; data curation, M.I.A., I.K.A., S.R. and S.G.; writing—original draft preparation, M.I.A., I.K.A., S.R. and S.G.; writing—review and editing, M.I.A., I.K.A., S.R. and S.G.; visualization, M.I.A., I.K.A., S.R. and S.G.; supervision, M.I.A., I.K.A., S.R. and S.G.; project administration, M.I.A., I.K.A., S.R. and S.G.; funding acquisition, M.I.A., I.K.A., S.R. and S.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Appell, J.; DePascale, E.; Lysenko, J.V.; Zabrejko, P.P. New results on Newton-Kantorovich approximations with applications to nonlinear integral equations. Numer. Funct. Anal. Optim. 1997, 18, 1–17.
2. Ezquerro, J.A.; Hernandez, M.A. Newton’s Method: An Updated Approach of Kantorovich’s Theory; Birkhäuser: Cham, Switzerland, 2018.
3. Proinov, P.D. New general convergence theory for iterative processes and its applications to Newton-Kantorovich type theorems. J. Complex. 2010, 26, 3–42.
4. Regmi, S.; Argyros, I.K.; George, S.; Argyros, C. Numerical Processes for Approximating Solutions of Nonlinear Equations. Axioms 2022, 11, 307.
5. Argyros, I.K. The Theory and Applications of Iteration Methods, 2nd ed.; Engineering Series; CRC Press: Boca Raton, FL, USA; Taylor and Francis Group: Abingdon, UK, 2022.
6. Zhanlav, K.H.; Otgondorj, K.H.; Sauul, L. A unified approach to the construction of higher-order derivative-free iterative methods for solving systems of nonlinear equations. Int. J. Comput. Math. 2021.
7. Zhanlav, T.; Chun, C.; Otgondorj, K.H.; Ulziibayar, V. High order iterations for systems of nonlinear equations. Int. J. Comput. Math. 2020, 97, 1704–1724.
8. Wang, X. An Ostrowski-type method with memory using a novel self-accelerating parameter. J. Comput. Appl. Math. 2018, 330, 710–720.
9. Moccari, M.; Lotfi, T. On a two-step optimal Steffensen-type method: Relaxed local and semi-local convergence analysis and dynamical stability. J. Math. Anal. Appl. 2018, 468, 240–269.
10. Shakhno, S.M.; Gnatyshyn, O.P. On an iterative method of order 1.839… for solving nonlinear least squares problems. Appl. Math. Comput. 2005, 161, 253–264.
11. Argyros, I.K. Unified Convergence Criteria for Iterative Banach Space Valued Methods with Applications. Mathematics 2021, 9, 1942.
12. Potra, F.-A.; Pták, V. Nondiscrete Induction and Iterative Processes; Pitman Publishing: Boston, MA, USA, 1984.
13. Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698.
14. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice Hall: Hoboken, NJ, USA, 1964.
15. Kantorovich, L.V.; Akilov, G.P. Functional Analysis; Pergamon Press: Oxford, UK, 1982.
16. Xiao, X.; Yin, H. Achieving higher order of convergence for solving systems of nonlinear equations. Appl. Math. Comput. 2017, 311, 251–261.
17. Sharma, J.R.; Arora, H. Efficient derivative-free numerical methods for solving systems of nonlinear equations. Comput. Appl. Math. 2016, 35, 269–284.
18. Sharma, J.R.; Guha, R.K. Simple yet efficient Newton-like method for systems of nonlinear equations. Calcolo 2016, 53, 451–473.
19. Noor, M.A.; Waseem, M. Some iterative methods for solving a system of nonlinear equations. Comput. Math. Appl. 2009, 57, 101–106.
20. Wang, X.; Zhang, T. A family of Steffensen type methods with seventh-order convergence. Numer. Algor. 2013, 62, 429–444.
21. Argyros, I.K.; Magréñan, A.A. A Contemporary Study of Iterative Methods; Elsevier: Amsterdam, The Netherlands; Academic Press: New York, NY, USA, 2018.
22. Grau-Sanchez, M.; Grau, A.; Noguera, M. Ostrowski type methods for solving systems of nonlinear equations. Appl. Math. Comput. 2011, 218, 2377–2385.
23. Homeier, H.H.H. A modified Newton method with cubic convergence: The multivariate case. J. Comput. Appl. Math. 2004, 169, 161–169.
24. Kou, J.; Wang, X.; Li, Y. Some eighth-order root-finding three-step methods. Commun. Nonlinear Sci. Numer. Simul. 2010, 15, 536–544.
25. Verma, R. New Trends in Fractional Programming; Nova Science Publishers: New York, NY, USA, 2019.
Argyros, M.I.; Argyros, I.K.; Regmi, S.; George, S. Generalized Three-Step Numerical Methods for Solving Equations in Banach Spaces. Mathematics 2022, 10, 2621. https://0-doi-org.brum.beds.ac.uk/10.3390/math10152621
