Article

Extrapolation Method for Non-Linear Weakly Singular Volterra Integral Equation with Time Delay †

1 School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 611731, China
2 School of Mathematics, Chengdu Normal University, Chengdu 611130, China
* Author to whom correspondence should be addressed.
This work was supported by the Program of Chengdu Normal University (CS18ZDZ02).
Submission received: 15 June 2021 / Revised: 26 July 2021 / Accepted: 3 August 2021 / Published: 5 August 2021
(This article belongs to the Special Issue Numerical Methods for Solving Nonlinear Equations)

Abstract: This paper proposes an extrapolation method to solve a class of non-linear weakly singular kernel Volterra integral equations with vanishing delay. After the existence and uniqueness of the solution to the original equation are proved, we combine an improved trapezoidal quadrature formula with an interpolation technique to obtain an approximate equation, and then we improve the accuracy of the approximate solution using Richardson extrapolation, on the basis of the asymptotic error expansion. Simultaneously, an a posteriori error estimate for the method is derived. Some illustrative examples demonstrating the efficiency of the method are given.

1. Introduction

Delay functional equations are often encountered in biological processes, such as population growth and the spread of an epidemic with immigration into the population [1,2]; a time delay can cause the population to fluctuate. More generally, some complicated dynamical systems are also modeled by delay integral equations, since the delay argument can cause a stable equilibrium to become unstable. The motivation for our work is twofold. The first source is the first-kind delay Volterra integral equation (VIE) of the form [3]
$$\int_{qt}^{t} k(t,s)\,y(s)\,ds = f(t), \qquad t \in I := [0,T],$$
which was discussed and transformed into the equivalent second-kind form
$$k(t,t)\,y(t) - q\,k(t,qt)\,y(qt) + \int_{qt}^{t} \frac{\partial k(t,s)}{\partial t}\,y(s)\,ds = f'(t);$$
if $k(t,t) \neq 0$ for $t \in I$, the normal form was given by
$$y(t) = f(t) + y(qt) + \int_{0}^{t} K_1(t,s)\,y(s)\,ds + \int_{0}^{qt} K(t,s)\,y(s)\,ds, \qquad t \in I.$$
Some research [4,5,6] has addressed equations of the following form:
$$y(t) = f(t) + \int_{0}^{t} K_1(t,s)\,y(s)\,ds + \int_{0}^{qt} K(t,s)\,y(s)\,ds, \qquad t \in I.$$
Another source of motivation comes from the weakly singular delay VIE [7,8,9]
$$y(t) = f(t) + \int_{0}^{qt} \frac{K(t,s)}{(qt-s)^{\lambda}}\,G(s,y(s))\,ds, \qquad t \in [0,1],$$
where $\lambda \in (0,1)$, $K(t,s)$ is smooth, and $G(s,y(s))$ is a smooth non-linear function. However, the case where two integral terms are present has not yet been investigated: the first term is a weakly singular Volterra integral, while the second not only has a weak singularity at the left endpoint but also has a delayed upper limit, which makes it challenging to compute. The aim of this paper is to fill this gap.
With theoretical and computational advances, some numerical methods for delay differential equations [10,11,12,13], delay integral equations [14], delay integral–differential equations [15,16,17,18], and fractional differential equations with time delay [19,20,21,22] have been investigated widely. Here, we consider the following non-linear weakly singular kernel VIE with vanishing delay
$$y(t) = f(t) + \int_{0}^{t} s^{\lambda}\,k_1(t,s;y(s))\,ds + \int_{0}^{\theta(t)} s^{\mu}\,k_2(t,s;y(s))\,ds, \qquad t \in I, \tag{1}$$
where $\theta(t) := qt$, $q \in (0,1)$, $\lambda,\mu \in (-1,0)$; $f(t)$, $k_1(t,s;y(s))$, and $k_2(t,s;y(s))$ are $r$ ($r \ge 1$, $r \in \mathbb{N}$) times continuously differentiable on $I$, $D \times \mathbb{R}$, and $D_\theta \times \mathbb{R}$, respectively, with $D := \{(t,s) : 0 \le s \le t \le T\}$ and $D_\theta := \{(t,s) : 0 \le s \le \theta(t) \le \theta(T),\ t \in I\}$. Additionally, the $k_i(t,s;y(s))$ ($i=1,2$) satisfy Lipschitz conditions with respect to $y(s)$ on their respective domains; that is, for fixed $s$ and $t$, there are two positive constants $L_j$ ($j=1,2$), independent of $s$ and $t$, such that
$$|k_j(t,s;y(s)) - k_j(t,s;v(s))| \le L_j\,|y(s) - v(s)|. \tag{2}$$
Then, Equation (1) possesses a unique solution (see Theorem 1). In this paper, we consider the case where the solution is smooth.
Some numerical investigations of delay VIEs have been conducted, such as discontinuous Galerkin methods [23], collocation methods [24,25,26], an iterative numerical method [27], and a least squares approximation method [28]. In [29], an $hp$-version of the pseudo-spectral method was analyzed, based on the variational form of a non-linear VIE with vanishing variable delays; the algorithm increases the accuracy by refining the mesh and/or increasing the polynomial degree. Mokhtary et al. [7] used a well-conditioned Jacobi spectral Galerkin method for a VIE with weakly singular kernels and proportional delay, solving sparse upper-triangular non-linear algebraic systems. In [8], the Chebyshev spectral-collocation method was investigated for the numerical solution of a class of weakly singular VIEs with proportional delay; an error analysis showed that the approximation attains spectral accuracy. Zhang et al. [9] used variable transformations to change the weakly singular VIE with pantograph delays into new equations defined on $[-1,1]$, which were then treated with Jacobi orthogonal polynomials.
The extrapolation method has been used extensively [30,31]. We apply the extrapolation method to the solution of the non-linear weakly singular kernel VIE with proportional delay. We prove the existence of the solution to the original equation using an iterative method, while uniqueness is demonstrated by the Gronwall integral inequality. We obtain the approximate equation using a quadrature method based on the improved trapezoidal quadrature formula, combined with the floor technique and an interpolation technique. Then, we solve the approximate equation through an iterative method. The existence of the approximate solution is validated by analyzing the convergence of the iterative sequence, while uniqueness is shown using a discrete Gronwall inequality. In addition, we provide a convergence analysis of the approximate solution and obtain the asymptotic expansion of the error. Based on this asymptotic expansion, the Richardson extrapolation method is applied to enhance the numerical accuracy of the approximate solution. Furthermore, we obtain an a posteriori error estimate for the method. Numerical experiments effectively support the theoretical analysis, and all the calculations can be easily implemented.
This paper is organized as follows: In Section 2, the existence and uniqueness of the solution for (1) are proven. The numerical algorithm is introduced in Section 3. In Section 4, we prove the existence and uniqueness of the approximate solution. In Section 5, we provide the convergence analysis of the approximate solution. In Section 6, we obtain the asymptotic expansion of error, the corresponding extrapolation technique is used for achieving high precision, and a posterior error estimate is derived. Numerical examples are described in Section 7. Finally, we outline the conclusions of the paper in Section 8.

2. Existence and Uniqueness of Solution of the Original Equation

In this section, we discuss the existence and uniqueness of the solution of the original equation. Two cases, $0 \le t \le T \le 1$ and $1 < t \le T$, are discussed in what follows.
Lemma 1
([32]). Let $y(t)$ and $g(t)$ be non-negative integrable functions on $[0,T]$ and $A \ge 0$, satisfying
$$y(t) \le A + \int_{0}^{t} g(s)\,y(s)\,ds;$$
then, for all $0 \le t \le T$,
$$y(t) \le A\,e^{\int_{0}^{t} g(s)\,ds}.$$
Theorem 1.
Assume that $f(t)$, $k_1(t,s;y(s))$, and $k_2(t,s;y(s))$ are $r$ ($r \ge 1$, $r \in \mathbb{N}$) times continuously differentiable on $I$, $D \times \mathbb{R}$, and $D_\theta \times \mathbb{R}$, respectively, and that the $k_i(t,s;y(s))$ ($i=1,2$) satisfy the Lipschitz conditions (2). Then, Equation (1) has a unique solution.
Proof. 
We first construct the sequence $\{y_n(t),\ n \in \mathbb{N}\}$ as follows:
$$y_0(t) = f(t), \qquad y_n(t) = f(t) + \int_{0}^{t} s^{\lambda}\,k_1(t,s;y_{n-1}(s))\,ds + \int_{0}^{qt} s^{\mu}\,k_2(t,s;y_{n-1}(s))\,ds.$$
Let $b = \max_{0 \le t \le T} |y_1(t) - y_0(t)|$, $L = \max\{L_1, L_2\}$, and $\gamma = \min\{\lambda, \mu\}$.
• Case I. For $0 \le s \le t \le T \le 1$, we proceed by mathematical induction. When $n = 1$,
$$\begin{aligned}
|y_2(t) - y_1(t)| &= \Big| \int_{0}^{t} s^{\lambda}\big(k_1(t,s;y_1(s)) - k_1(t,s;y_0(s))\big)\,ds + \int_{0}^{qt} s^{\mu}\big(k_2(t,s;y_1(s)) - k_2(t,s;y_0(s))\big)\,ds \Big| \\
&\le \int_{0}^{t} s^{\lambda}\,|k_1(t,s;y_1(s)) - k_1(t,s;y_0(s))|\,ds + \int_{0}^{qt} s^{\mu}\,|k_2(t,s;y_1(s)) - k_2(t,s;y_0(s))|\,ds \\
&\le \int_{0}^{t} L_1 s^{\lambda}\,|y_1(s) - y_0(s)|\,ds + \int_{0}^{t} L_2 s^{\mu}\,|y_1(s) - y_0(s)|\,ds \\
&\le \int_{0}^{t} \big(L b\,s^{\lambda} + L b\,s^{\mu}\big)\,ds \le 2Lb \int_{0}^{t} s^{\gamma}\,ds = \frac{2Lb\,t^{\gamma+1}}{\gamma+1}.
\end{aligned}$$
Suppose that the following holds when $n = k$:
$$|y_k(t) - y_{k-1}(t)| \le \frac{b\,(2L)^{k-1}}{(k-1)!\,(\gamma+1)^{k-1}}\,t^{(k-1)(\gamma+1)}. \tag{4}$$
Let $n = k+1$; then,
$$\begin{aligned}
|y_{k+1}(t) - y_k(t)| &\le \int_{0}^{t} s^{\lambda}\,|k_1(t,s;y_k(s)) - k_1(t,s;y_{k-1}(s))|\,ds + \int_{0}^{qt} s^{\mu}\,|k_2(t,s;y_k(s)) - k_2(t,s;y_{k-1}(s))|\,ds \\
&\le \int_{0}^{t} L_1 s^{\lambda}\,|y_k(s) - y_{k-1}(s)|\,ds + \int_{0}^{t} L_2 s^{\mu}\,|y_k(s) - y_{k-1}(s)|\,ds \\
&\le 2L \int_{0}^{t} s^{\gamma}\,|y_k(s) - y_{k-1}(s)|\,ds \le \frac{b\,(2L)^{k}}{k!\,(\gamma+1)^{k}}\,t^{k(\gamma+1)},
\end{aligned}$$
that is, the recurrence holds for $n = k+1$, so inequality (4) is established for all $n$. Next, we prove that $\{y_n(t)\}$ is a Cauchy sequence:
$$\begin{aligned}
|y_n(t) - y_{n+m}(t)| &\le |y_{n+1}(t) - y_n(t)| + |y_{n+2}(t) - y_{n+1}(t)| + \cdots + |y_{n+m}(t) - y_{n+m-1}(t)| \\
&\le \frac{b\,(2L)^{n}}{n!\,(\gamma+1)^{n}}\,t^{n(\gamma+1)} + \cdots + \frac{b\,(2L)^{n+m-1}}{(n+m-1)!\,(\gamma+1)^{n+m-1}}\,t^{(n+m-1)(\gamma+1)} \\
&\le b \sum_{i=n}^{n+m-1} \Big(\frac{2L}{\gamma+1}\Big)^{i}\,T^{i(\gamma+1)}\,\frac{1}{i!}.
\end{aligned}$$
The series $\sum_{i=0}^{\infty} \big(\frac{2L}{\gamma+1}\big)^{i}\,T^{i(\gamma+1)}\,\frac{1}{i!}$ converges, so $\{y_n\}_{n \in \mathbb{N}}$ is a Cauchy sequence and converges uniformly to a limit $y(t)$. Thus, $y(t)$ is a solution to Equation (1), and existence is proved.
• Case II. For $1 < s \le t \le T$, the process is similar. Let $\tilde\gamma = \max\{\lambda, \mu\}$; when $n = 1$,
$$|y_2(t) - y_1(t)| \le \frac{2Lb\,t^{\tilde\gamma+1}}{\tilde\gamma+1}.$$
Suppose that the following holds when $n = k$:
$$|y_k(t) - y_{k-1}(t)| \le \frac{b\,(2L)^{k-1}}{(k-1)!\,(\tilde\gamma+1)^{k-1}}\,t^{(k-1)(\tilde\gamma+1)}. \tag{6}$$
Let $n = k+1$. Then, we have
$$|y_{k+1}(t) - y_k(t)| \le \frac{b\,(2L)^{k}}{k!\,(\tilde\gamma+1)^{k}}\,t^{k(\tilde\gamma+1)},$$
i.e., the recurrence holds for $n = k+1$, so inequality (6) is established. For the sequence $\{y_n(t)\}$,
$$|y_n(t) - y_{n+m}(t)| \le b \sum_{i=n}^{n+m-1} \Big(\frac{2L}{\tilde\gamma+1}\Big)^{i}\,T^{i(\tilde\gamma+1)}\,\frac{1}{i!}.$$
Since the series $\sum_{i=0}^{\infty} \big(\frac{2L}{\tilde\gamma+1}\big)^{i}\,T^{i(\tilde\gamma+1)}\,\frac{1}{i!}$ converges, $\{y_n\}_{n \in \mathbb{N}}$ is a Cauchy sequence and converges uniformly to $y(t)$; thus, $y(t)$ is a solution to Equation (1), and existence is proved.
Now, we prove that the solution to Equation (1) is unique. Let $y(t)$ and $v(t)$ be two solutions to Equation (1), and denote the difference between them by $w(t) = |y(t) - v(t)|$. We obtain
$$\begin{aligned}
w(t) &= \Big| \int_{0}^{t} s^{\lambda}\big(k_1(t,s;y(s)) - k_1(t,s;v(s))\big)\,ds + \int_{0}^{qt} s^{\mu}\big(k_2(t,s;y(s)) - k_2(t,s;v(s))\big)\,ds \Big| \\
&\le \int_{0}^{t} s^{\lambda}\,|k_1(t,s;y(s)) - k_1(t,s;v(s))|\,ds + \int_{0}^{qt} s^{\mu}\,|k_2(t,s;y(s)) - k_2(t,s;v(s))|\,ds \\
&\le \int_{0}^{t} L_1 s^{\lambda}\,w(s)\,ds + \int_{0}^{qt} L_2 s^{\mu}\,w(s)\,ds \le \int_{0}^{t} \big(L s^{\lambda} + L s^{\mu}\big)\,w(s)\,ds.
\end{aligned}$$
Let $g(s) = L s^{\lambda} + L s^{\mu}$; then $g(s)$ is a non-negative integrable function. According to Lemma 1 (with $A = 0$), we obtain $w(t) = 0$, i.e., $y(t) = v(t)$, so the solution to Equation (1) is unique. □
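The Picard construction used in this proof can be sketched numerically. The following is a minimal Python illustration on a toy instance of Equation (1) with $\lambda = \mu = -1/2$, $q = 1/2$, and kernels $k_1 = k_2 = y(s)/2$, where $f$ is chosen so that the exact solution is $y(t) \equiv 1$; the kernel and data choices are assumptions for illustration only. The weakly singular integrals are evaluated after the substitution $s = u^2$, which removes the singularity:

```python
import numpy as np

q, T, M = 0.5, 1.0, 2000
U = np.linspace(0.0, np.sqrt(T), M + 1)   # u-grid for the substitution s = u^2
S = U ** 2                                # s-grid on which iterates are stored

def f(t):
    # forcing chosen so that y(t) = 1 solves the toy equation (an assumption)
    return 1.0 - np.sqrt(t) - np.sqrt(q * t)

def picard_step(y_vals):
    """One Picard iterate on the s-grid:
    y_new(t) = f(t) + int_0^t s^{-1/2} y/2 ds + int_0^{qt} s^{-1/2} y/2 ds,
    with s = u^2 turning both integrands into smooth functions of u."""
    g = y_vals                                   # g(u) = y(u^2) on the u-grid
    cum = np.concatenate(([0.0],
          np.cumsum((g[1:] + g[:-1]) / 2.0 * np.diff(U))))
    F = lambda x: np.interp(x, U, cum)           # cumulative integral up to u = x
    return f(S) + F(np.sqrt(S)) + F(np.sqrt(q * S))

y = f(S)                                         # y_0 = f
for _ in range(40):                              # Picard iterates y_1, ..., y_40
    y = picard_step(y)

print(np.max(np.abs(y - 1.0)))                   # distance to the exact solution
```

As the factorial bound in the proof suggests, the iterates settle quickly; the remaining error is limited by the quadrature resolution, not by the iteration.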

3. The Numerical Algorithm

In this section, we first provide some essential lemmas that are useful for the derivation of the approximate equation. Next, the discrete form of Equation (1) is obtained by combining an improved trapezoidal quadrature formula with linear interpolation. Finally, we solve the approximate equation using an iterative method. The process does not require computing any integrals exactly; hence, the method can be implemented easily.

3.1. Some Lemmas

Lemma 2
([32]). Let $u \in C^{3}(0,1)$ and $z = \beta x + (1-\beta)y$ with $\beta \in [0,1]$, $x, y \in [0,T]$. Then,
$$u(z) = \beta u(x) + (1-\beta)u(y) - \frac{\beta(1-\beta)}{2}(x-y)^{2}\,u''(z) + O\big((x-y)^{3}\big).$$
Proof. 
The Taylor expansion of the function $u(x)$ at the point $z$ is
$$u(x) = u\big(\beta x + (1-\beta)y + (1-\beta)(x-y)\big) = u\big(z + (1-\beta)(x-y)\big) = u(z) + (1-\beta)(x-y)\,u'(z) + \frac{(1-\beta)^{2}}{2}(x-y)^{2}\,u''(z) + O\big((x-y)^{3}\big). \tag{8}$$
Similarly, the Taylor expansion of $u(y)$ at $z$ is
$$u(y) = u\big(z - \beta(x-y)\big) = u(z) - \beta(x-y)\,u'(z) + \frac{\beta^{2}}{2}(x-y)^{2}\,u''(z) + O\big((x-y)^{3}\big). \tag{9}$$
Multiplying (8) by $\beta$ and (9) by $1-\beta$ and adding completes the proof. □
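Lemma 2 can be checked numerically: holding $z$ fixed and writing $d = x - y$, the defect of the corrected interpolation formula should shrink like $d^3$, i.e., by a factor of about $2^3 = 8$ when $d$ is halved. A small Python check with $u = \exp$ (an arbitrary smooth test function, chosen only for this illustration):

```python
import math

beta, c = 0.3, 0.5            # interpolation weight and the fixed point z = c
u, u2 = math.exp, math.exp    # test function u and its second derivative

def defect(d):
    # x, y chosen so that z = beta*x + (1-beta)*y = c exactly
    x, y = c + (1 - beta) * d, c - beta * d
    approx = beta * u(x) + (1 - beta) * u(y) \
             - beta * (1 - beta) / 2 * (x - y) ** 2 * u2(c)
    return u(c) - approx      # remainder, O((x-y)^3) by Lemma 2

r = defect(0.02) / defect(0.01)
print(r)                      # close to 2^3 = 8, confirming third-order accuracy
```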
Lemma 3
([33,34]). Let $g(t) \in C^{2\tilde r}[a,b]$ ($\tilde r \ge 1$, $\tilde r \in \mathbb{N}$), $G(t) = (b-t)^{\lambda} g(t)$, $h = (b-a)/N$, and $t_k = a + kh$ for $k = 0, \dots, N$. For the integral $\int_{a}^{b} G(t)\,dt$, the modified trapezoidal integration rule
$$T_N(G) = \frac{h}{2}\,G(t_0) + h \sum_{k=1}^{N-1} G(t_k) + \zeta(\lambda)\,g(b)\,h^{1+\lambda}$$
has the asymptotic error expansion
$$E_N(G) = \sum_{j=1}^{\tilde r - 1} \frac{B_{2j}}{(2j)!}\,G^{(2j-1)}(a)\,h^{2j} + \sum_{j=1}^{2\tilde r - 1} \frac{(-1)^{j}\,\zeta(\lambda - j)\,g^{(j)}(b)}{j!}\,h^{j+\lambda+1} + O(h^{2\tilde r}),$$
where $-1 < \lambda < 0$, $\zeta$ is the Riemann zeta function, and the $B_{2j}$ are the Bernoulli numbers.

3.2. The Approximation Process

In this subsection, we describe the numerical method used to find the approximate solution to Equation (1). Let $y(t)$ have continuous derivatives up to order 3 on $I$, and let $f(t)$, $k_1(t,s;y(s))$, and $k_2(t,s;y(s))$ be four times continuously differentiable on $I$, $D \times \mathbb{R}$, and $D_\theta \times \mathbb{R}$, respectively. Let $y(t_i)$ and $y_i$ denote the exact and approximate solutions at $t = t_i$, respectively. We divide $I = [0,T]$ into $N$ subintervals with uniform step size $h = T/N$, $t_i = ih$, $i = 0, 1, \dots, N$. Setting $t = t_i$ in Equation (1),
$$\begin{aligned}
y(t_i) &= f(t_i) + \int_{0}^{t_i} s^{\lambda}\,k_1(t_i,s;y(s))\,ds + \int_{0}^{qt_i} s^{\mu}\,k_2(t_i,s;y(s))\,ds \\
&= f(t_i) + \int_{0}^{t_i} s^{\lambda}\,k_1(t_i,s;y(s))\,ds + \int_{0}^{t_{[qi]}} s^{\mu}\,k_2(t_i,s;y(s))\,ds + \int_{t_{[qi]}}^{qt_i} s^{\mu}\,k_2(t_i,s;y(s))\,ds \\
&= f(t_i) + I_1 + I_2 + I_3, \tag{12}
\end{aligned}$$
where $[qi]$ denotes the greatest integer not exceeding $qi$. According to Lemma 3, we have
$$I_1 = \int_{0}^{t_i} s^{\lambda}\,k_1(t_i,s;y(s))\,ds \approx \zeta(\lambda)\,k_1(t_i,t_0;y(t_0))\,h^{1+\lambda} + h \sum_{k=1}^{i-1} t_k^{\lambda}\,k_1(t_i,t_k;y(t_k)) + \frac{h}{2}\,t_i^{\lambda}\,k_1(t_i,t_i;y(t_i)).$$
For I 2 and I 3 , there are two cases.
• Case I. If $[qi] = 0$, then
$$I_2 = 0; \qquad I_3 = \int_{0}^{qt_i} s^{\mu}\,k_2(t_i,s;y(s))\,ds \approx \zeta(\mu)\,(qt_i)^{1+\mu}\,k_2(t_i,t_0;y(t_0)) + \frac{qt_i}{2}\,(qt_i)^{\mu}\,k_2(t_i,qt_i;y(qt_i)).$$
• Case II. If $[qi] \ge 1$, we obtain
$$I_2 \approx \begin{cases} \zeta(\mu)\,h^{1+\mu}\,k_2(t_i,t_0;y(t_0)) + \dfrac{h}{2}\,t_1^{\mu}\,k_2(t_i,t_1;y(t_1)), & [qi] = 1, \\[2mm] \zeta(\mu)\,h^{1+\mu}\,k_2(t_i,t_0;y(t_0)) + h \displaystyle\sum_{k=1}^{[qi]-1} t_k^{\mu}\,k_2(t_i,t_k;y(t_k)) + \dfrac{h}{2}\,t_{[qi]}^{\mu}\,k_2(t_i,t_{[qi]};y(t_{[qi]})), & [qi] > 1, \end{cases}$$
and the remaining piece is approximated by the trapezoidal rule:
$$I_3 \approx \frac{qt_i - t_{[qi]}}{2}\Big( t_{[qi]}^{\mu}\,k_2(t_i,t_{[qi]};y(t_{[qi]})) + (qt_i)^{\mu}\,k_2(t_i,qt_i;y(qt_i)) \Big). \tag{15}$$
The value $y(qt_i)$ can be represented by linear interpolation between the adjacent nodal values $y(t_{[qi]})$ and $y(t_{[qi]+1})$. For the node $t_i = ih$, $i = 0, 1, \dots, N$, since $[qi] \le qi \le [qi]+1$, we obtain $t_{[qi]} \le qt_i \le t_{[qi]+1}$; according to Lemma 2, there exists $\beta_i \in [0,1]$ such that $qt_i = \beta_i t_{[qi]} + (1-\beta_i) t_{[qi]+1}$. The value $\beta_i = 1 + [qi] - qi$ can be calculated easily. Then, the approximate expression for $y(qt_i)$ is
$$y(qt_i) \approx \beta_i\,y(t_{[qi]}) + (1-\beta_i)\,y(t_{[qi]+1}).$$
Then, (15) can be written as
$$I_3 \approx \frac{qt_i - t_{[qi]}}{2}\Big( t_{[qi]}^{\mu}\,k_2(t_i,t_{[qi]};y(t_{[qi]})) + (qt_i)^{\mu}\,k_2\big(t_i,qt_i;\beta_i y(t_{[qi]}) + (1-\beta_i) y(t_{[qi]+1})\big) \Big).$$
The approximation equations are as follows:
• Case I. When $[qi] = 0$,
$$\begin{aligned}
y_0 &= f(t_0); \\
y_i &\approx f(t_i) + \zeta(\lambda)\,k_1(t_i,t_0;y_0)\,h^{1+\lambda} + h \sum_{k=1}^{i-1} t_k^{\lambda}\,k_1(t_i,t_k;y_k) + \frac{h}{2}\,t_i^{\lambda}\,k_1(t_i,t_i;y_i) \\
&\quad + \zeta(\mu)\,(qt_i)^{1+\mu}\,k_2(t_i,t_0;y_0) + \frac{qt_i}{2}\,(qt_i)^{\mu}\,k_2\big(t_i,qt_i;\beta_i y_{[qi]} + (1-\beta_i) y_{[qi]+1}\big).
\end{aligned}$$
• Case II. When $[qi] \ge 1$,
$$\begin{aligned}
y_0 &= f(t_0); \\
y_i &\approx f(t_i) + \zeta(\lambda)\,k_1(t_i,t_0;y_0)\,h^{1+\lambda} + h \sum_{k=1}^{i-1} t_k^{\lambda}\,k_1(t_i,t_k;y_k) + \frac{h}{2}\,t_i^{\lambda}\,k_1(t_i,t_i;y_i) \\
&\quad + \zeta(\mu)\,h^{1+\mu}\,k_2(t_i,t_0;y_0) + \delta_i + \frac{h}{2}\,t_{[qi]}^{\mu}\,k_2(t_i,t_{[qi]};y_{[qi]}) \\
&\quad + \frac{qt_i - t_{[qi]}}{2}\Big( t_{[qi]}^{\mu}\,k_2(t_i,t_{[qi]};y_{[qi]}) + (qt_i)^{\mu}\,k_2\big(t_i,qt_i;\beta_i y_{[qi]} + (1-\beta_i) y_{[qi]+1}\big) \Big),
\end{aligned}$$
where
$$\delta_i = \begin{cases} 0, & [qi] = 1, \\ h \displaystyle\sum_{k=1}^{[qi]-1} t_k^{\mu}\,k_2(t_i,t_k;y_k), & [qi] \ge 2. \end{cases}$$
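The delay-node bookkeeping used above, i.e., the floor index $[qi]$, the weight $\beta_i = 1 + [qi] - qi$, and the linear interpolation of the delayed value, can be sketched as a small helper; the values of $q$, $h$, and $i$ below are illustrative assumptions:

```python
import math

def delay_interpolation(i, q, h, y):
    """Return ([qi], beta_i, interpolated value of y at q*t_i).

    y holds the nodal values y_0, ..., y_i on the uniform grid t_k = k*h;
    the delay point q*t_i lies in [t_[qi], t_[qi]+1] and is interpolated
    linearly as beta_i*y_[qi] + (1 - beta_i)*y_[qi]+1 (cf. Lemma 2).
    """
    qi = math.floor(q * i)        # [qi]: greatest integer not exceeding q*i
    beta = 1 + qi - q * i         # beta_i in [0, 1]
    approx = beta * y[qi] + (1 - beta) * y[qi + 1]
    return qi, beta, approx

# check on y(t) = t: linear interpolation reproduces q*t_i exactly
q, h, i = 0.4, 0.1, 7
y = [k * h for k in range(i + 1)]
qi, beta, val = delay_interpolation(i, q, h, y)
print(qi, round(beta, 12), round(val, 12))   # 2 0.2 0.28 (= q * t_7)
```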

3.3. Iterative Scheme

Now, the approximate equation can be solved by an iterative algorithm.
  • Iterative algorithm
Step 1. 
Take a sufficiently small $\epsilon > 0$ and set $\tilde y_0 = f(t_0)$, $i := 1$.
Step 2. 
Let $y_i^{0} = \tilde y_{i-1}$ and $m := 0$; then we compute $y_i^{m+1}$ ($i \le N$) as follows:
• Case I. When $[qi] = 0$,
$$\begin{aligned}
y_0 &= f(t_0); \\
y_i^{m+1} &\approx f(t_i) + \zeta(\lambda)\,k_1(t_i,t_0;\tilde y_0)\,h^{1+\lambda} + h \sum_{k=1}^{i-1} t_k^{\lambda}\,k_1(t_i,t_k;\tilde y_k) + \frac{h}{2}\,t_i^{\lambda}\,k_1(t_i,t_i;y_i^{m}) \\
&\quad + \zeta(\mu)\,(qt_i)^{1+\mu}\,k_2(t_i,t_0;\tilde y_0) + \frac{qt_i}{2}\,(qt_i)^{\mu}\,k_2\big(t_i,qt_i;\beta_i \tilde y_{[qi]} + (1-\beta_i) y_{[qi]+1}^{m}\big).
\end{aligned}$$
• Case II. When $[qi] \ge 1$,
$$\begin{aligned}
y_0 &= f(t_0); \\
y_i^{m+1} &\approx f(t_i) + \zeta(\lambda)\,k_1(t_i,t_0;\tilde y_0)\,h^{1+\lambda} + h \sum_{k=1}^{i-1} t_k^{\lambda}\,k_1(t_i,t_k;\tilde y_k) + \frac{h}{2}\,t_i^{\lambda}\,k_1(t_i,t_i;y_i^{m}) \\
&\quad + \zeta(\mu)\,h^{1+\mu}\,k_2(t_i,t_0;\tilde y_0) + \tilde\delta_i + \frac{h}{2}\,t_{[qi]}^{\mu}\,k_2(t_i,t_{[qi]};\tilde y_{[qi]}) \\
&\quad + \frac{qt_i - t_{[qi]}}{2}\Big( t_{[qi]}^{\mu}\,k_2(t_i,t_{[qi]};\tilde y_{[qi]}) + (qt_i)^{\mu}\,k_2\big(t_i,qt_i;\beta_i \tilde y_{[qi]} + (1-\beta_i) y_{[qi]+1}^{m}\big) \Big),
\end{aligned}$$
where
$$\tilde\delta_i = \begin{cases} 0, & [qi] = 1, \\ h \displaystyle\sum_{k=1}^{[qi]-1} t_k^{\mu}\,k_2(t_i,t_k;\tilde y_k), & [qi] \ge 2. \end{cases}$$
Step 3. 
If $|y_i^{m+1} - y_i^{m}| \le \epsilon$, then set $\tilde y_i := y_i^{m+1}$ and $i := i+1$, and return to Step 2. Otherwise, set $m := m+1$ and return to Step 2.
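The node-by-node structure of Steps 1–3 can be sketched on a simplified discretization. The sketch below is not the zeta-corrected scheme above: it replaces the quadrature by a product-trapezoidal rule with exact weights for $s^{-1/2}$, and it assumes toy kernels $k_1 = k_2 = y/2$ with $\lambda = \mu = -1/2$, $q = 0.5$, and $f$ chosen so that $y(t) \equiv 1$ is exact. It illustrates only the stopping rule $|y_i^{m+1} - y_i^{m}| \le \epsilon$ and the reuse of settled values $\tilde y_k$:

```python
import math

q, T, N, eps = 0.5, 1.0, 64, 1e-12
h = T / N
t = [k * h for k in range(N + 1)]

def f(tt):
    # forcing chosen so that y(t) = 1 solves the toy equation (an assumption)
    return 1.0 - math.sqrt(tt) - math.sqrt(q * tt)

def w(a, b):
    # exact weight int_a^b s^{-1/2} ds = 2(sqrt(b) - sqrt(a))
    return 2.0 * (math.sqrt(b) - math.sqrt(a))

def rhs(i, y, yi):
    """Right-hand side at node i: settled values y[0..i-1], current iterate yi."""
    vals = y[:i] + [yi]
    # product-trapezoidal rule for int_0^{t_i} s^{-1/2} (y/2) ds
    I1 = sum(w(t[k], t[k + 1]) * (vals[k] + vals[k + 1]) / 4.0 for k in range(i))
    # same rule for int_0^{q t_i} s^{-1/2} (y/2) ds, split at t_[qi]
    qt, qi = q * t[i], math.floor(q * i)
    beta = 1 + qi - q * i
    I2 = sum(w(t[k], t[k + 1]) * (vals[k] + vals[k + 1]) / 4.0 for k in range(qi))
    y_qt = beta * vals[qi] + (1 - beta) * vals[min(qi + 1, i)]  # delayed value
    I2 += w(t[qi], qt) * (vals[qi] + y_qt) / 4.0
    return f(t[i]) + I1 + I2

y = [f(t[0])]                       # Step 1: y_0 = f(t_0)
for i in range(1, N + 1):           # Step 2: sweep the nodes
    cur = y[-1]                     # start the iteration from the previous node
    while True:                     # Step 3: stop once |y^{m+1} - y^m| <= eps
        nxt = rhs(i, y, cur)
        if abs(nxt - cur) <= eps:
            break
        cur = nxt
    y.append(nxt)

err = max(abs(v - 1.0) for v in y)
print(err)                          # error at the grid nodes
```

Since the iteration at each node is a strong contraction for this step size, only a handful of inner iterations are needed per node.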
Remark 1.
In Section 3.2, we considered the regularity of k i ( t , s ; y ( s ) ) ( i = 1 , 2 ) only up to r ˜ = 2 in Lemma 3, since the desired accuracy has been obtained, and it is sufficient for the subsequent convergence analysis and extrapolation algorithm.

4. Existence and Uniqueness of the Solution to the Approximate Equation

In this section, we investigate the existence and uniqueness of the solution to the approximate equation. We first introduce the following discrete Gronwall inequality.
Lemma 4
([35,36]). Suppose that the non-negative sequence $\{w_n\}$, $n = 0, \dots, N$, satisfies
$$w_n \le h \sum_{k=1}^{n-1} B_k w_k + A, \qquad 0 \le n \le N,$$
where $A$ and $B_k$, $k = 1, \dots, N$, are non-negative constants and $h = 1/N$. When $h \max_{0 \le k \le N} B_k \le \frac{1}{2}$, we have
$$\max_{0 \le n \le N} w_n \le A \exp\Big( 2h \sum_{k=1}^{N} B_k \Big).$$
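A quick numerical sanity check of Lemma 4: build a synthetic sequence satisfying the hypothesis with equality (the constants $A$ and $B_k$ below are arbitrary illustrative choices) and compare it against the exponential bound:

```python
import math

N = 200
h = 1.0 / N
A = 3.0
B = [1.5] * (N + 1)          # constant B_k; h * max B_k = 0.0075 <= 1/2

# build w_n satisfying the hypothesis with equality:
# w_n = A + h * sum_{k=1}^{n-1} B_k * w_k
w = [A]
for n in range(1, N + 1):
    w.append(A + h * sum(B[k] * w[k] for k in range(1, n)))

bound = A * math.exp(2 * h * sum(B[1:]))
print(max(w) <= bound)       # True: the Gronwall bound holds
```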
Theorem 2.
Let $f(t)$, $k_1(t,s;y(s))$, and $k_2(t,s;y(s))$ be four times continuously differentiable on $I$, $D \times \mathbb{R}$, and $D_\theta \times \mathbb{R}$, respectively. Additionally, assume that $y(t)$ has continuous derivatives up to order 3 on $I$ and that the $k_i(t,s;y(s))$ ($i=1,2$) satisfy the Lipschitz conditions (2). If $h$ is sufficiently small, then the solution to Equation (21) exists and is unique.
Proof. 
We discuss the existence of the approximate solution under two cases.
• Case I. When $[qi] = 0$,
$$|y_i^{m+1} - y_i^{m}| = \Big| \frac{h}{2}\,t_i^{\lambda}\big(k_1(t_i,t_i;y_i^{m}) - k_1(t_i,t_i;y_i^{m-1})\big) \Big| \le \frac{L_1 h}{2}\,t_i^{\lambda}\,|y_i^{m} - y_i^{m-1}|.$$
When $h$ is sufficiently small that $\frac{L_1 h}{2} t_i^{\lambda} \le \frac{1}{2}$, we have $|y_i^{m+1} - y_i^{m}| \le \frac{1}{2}|y_i^{m} - y_i^{m-1}|$. Therefore, the iterative algorithm converges, and its limit is the solution to the approximation equation; existence is proved for $[qi] = 0$.
Now, we prove uniqueness. Suppose $y_i$ and $x_i$ are both solutions to Equation (20), and denote the absolute differences as $w_i = |y_i - x_i|$. We have
$$\begin{aligned}
w_0 &= 0, \\
w_i &\le \zeta(\lambda)\,|k_1(t_i,t_0;y_0) - k_1(t_i,t_0;x_0)|\,h^{1+\lambda} + h \sum_{k=1}^{i-1} t_k^{\lambda}\,|k_1(t_i,t_k;y_k) - k_1(t_i,t_k;x_k)| \\
&\quad + \frac{h}{2}\,t_i^{\lambda}\,|k_1(t_i,t_i;y_i) - k_1(t_i,t_i;x_i)| + \zeta(\mu)\,(qt_i)^{1+\mu}\,|k_2(t_i,t_0;y_0) - k_2(t_i,t_0;x_0)| \\
&\quad + \frac{qt_i}{2}\,(qt_i)^{\mu}\,\big|k_2\big(t_i,qt_i;\beta_i y_{[qi]} + (1-\beta_i) y_{[qi]+1}\big) - k_2\big(t_i,qt_i;\beta_i x_{[qi]} + (1-\beta_i) x_{[qi]+1}\big)\big| \\
&\le L_1 h \sum_{k=1}^{i-1} t_k^{\lambda}\,w_k + \frac{L_1 h}{2}\,t_i^{\lambda}\,w_i + \frac{L_2\,qt_i}{2}\,(qt_i)^{\mu}\,\big(\beta_i w_{[qi]} + (1-\beta_i) w_{[qi]+1}\big) \\
&\le L h \sum_{k=1}^{i-1} t_k^{\lambda}\,w_k + \frac{L h}{2}\,t_i^{\lambda}\,w_i + \frac{L\,qt_i}{2}\,(qt_i)^{\mu}\,(1-\beta_i)\,w_1,
\end{aligned}$$
where $L = \max\{L_1, L_2\}$. When $h$ is sufficiently small that $\frac{L_1 h}{2} t_i^{\lambda} \le \frac{1}{2}$, we have
$$w_i \le 2Lh \sum_{k=1}^{i-1} t_k^{\lambda}\,w_k + L\,qt_i\,(qt_i)^{\mu}\,(1-\beta_i)\,w_1 \le \big[ 2Lh\,t_1^{\lambda} + Lh\,(qt_i)^{\mu}(1-\beta_i) \big] w_1 + 2Lh \sum_{k=2}^{i-1} t_k^{\lambda}\,w_k = h \sum_{k=1}^{i-1} B_k w_k,$$
where
$$B_k = \begin{cases} 2L\,t_1^{\lambda} + L\,(qt_i)^{\mu}(1-\beta_i), & k = 1, \\ 2L\,t_k^{\lambda}, & k = 2, \dots, i-1. \end{cases}$$
According to Lemma 4 with $A = 0$, we have $w_i = 0$, i.e., $y_i = x_i$; the solution of Equation (20) is unique.
• Case II. For $[qi] \ge 1$, we consider the following situations.
(1) The first situation is $[qi] + 1 = i$, namely, when $i \le \frac{1}{1-q}$; we have
$$\begin{aligned}
|y_i^{m+1} - y_i^{m}| &\le \Big| \frac{h}{2}\,t_i^{\lambda}\big(k_1(t_i,t_i;y_i^{m}) - k_1(t_i,t_i;y_i^{m-1})\big) \Big| \\
&\quad + \frac{qt_i - t_{[qi]}}{2}\,(qt_i)^{\mu}\,\big| k_2\big(t_i,qt_i;\beta_i \tilde y_{[qi]} + (1-\beta_i) y_{[qi]+1}^{m}\big) - k_2\big(t_i,qt_i;\beta_i \tilde y_{[qi]} + (1-\beta_i) y_{[qi]+1}^{m-1}\big) \big| \\
&\le \frac{L_1 h}{2}\,t_i^{\lambda}\,|y_i^{m} - y_i^{m-1}| + \frac{L_2\,(qt_i - t_{[qi]})}{2}\,(qt_i)^{\mu}\,(1-\beta_i)\,|y_{[qi]+1}^{m} - y_{[qi]+1}^{m-1}| \\
&\le \frac{L h}{2}\big( t_i^{\lambda} + (qt_i)^{\mu}(1-\beta_i) \big)\,|y_i^{m} - y_i^{m-1}|.
\end{aligned}$$
Let the step size $h$ be small enough that $\frac{Lh}{2}\big( t_i^{\lambda} + (qt_i)^{\mu}(1-\beta_i) \big) \le \frac{1}{2}$. Then, $|y_i^{m+1} - y_i^{m}| \le \frac{1}{2}|y_i^{m} - y_i^{m-1}|$ holds.
(2) The second situation is $[qi] + 1 < i$, namely, when $i > \frac{1}{1-q}$; we obtain
$$|y_i^{m+1} - y_i^{m}| = \Big| \frac{h}{2}\,t_i^{\lambda}\big(k_1(t_i,t_i;y_i^{m}) - k_1(t_i,t_i;y_i^{m-1})\big) \Big| \le \frac{L_1 h}{2}\,t_i^{\lambda}\,|y_i^{m} - y_i^{m-1}| \le \frac{L h}{2}\,t_i^{\lambda}\,|y_i^{m} - y_i^{m-1}|.$$
Let $\frac{Lh}{2} t_i^{\lambda} \le \frac{1}{2}$ for a sufficiently small $h$; then $|y_i^{m+1} - y_i^{m}| \le \frac{1}{2}|y_i^{m} - y_i^{m-1}|$ holds.
The above two situations show that the iterative algorithm is convergent and that the limit is the solution to Equation (21).
Next, we prove that the solution to Equation (21) is unique. Suppose $y_i$ and $\tilde x_i$ are both solutions to Equation (21), and denote the differences as $\tilde w_i = |y_i - \tilde x_i|$, $i = 1, \dots, N$. Then, we have
$$\begin{aligned}
\tilde w_0 &= 0; \\
\tilde w_i &\le \zeta(\lambda)\,|k_1(t_i,t_0;y_0) - k_1(t_i,t_0;\tilde x_0)|\,h^{1+\lambda} + h \sum_{k=1}^{i-1} t_k^{\lambda}\,|k_1(t_i,t_k;y_k) - k_1(t_i,t_k;\tilde x_k)| \\
&\quad + \frac{h}{2}\,t_i^{\lambda}\,|k_1(t_i,t_i;y_i) - k_1(t_i,t_i;\tilde x_i)| + \zeta(\mu)\,h^{1+\mu}\,|k_2(t_i,t_0;y_0) - k_2(t_i,t_0;\tilde x_0)| \\
&\quad + h \sum_{k=1}^{[qi]-1} t_k^{\mu}\,|k_2(t_i,t_k;y_k) - k_2(t_i,t_k;\tilde x_k)| + \frac{h}{2}\,t_{[qi]}^{\mu}\,|k_2(t_i,t_{[qi]};y_{[qi]}) - k_2(t_i,t_{[qi]};\tilde x_{[qi]})| \\
&\quad + \frac{qt_i - t_{[qi]}}{2}\Big( t_{[qi]}^{\mu}\,|k_2(t_i,t_{[qi]};y_{[qi]}) - k_2(t_i,t_{[qi]};\tilde x_{[qi]})| \\
&\qquad + (qt_i)^{\mu}\,\big|k_2\big(t_i,qt_i;\beta_i y_{[qi]} + (1-\beta_i) y_{[qi]+1}\big) - k_2\big(t_i,qt_i;\beta_i \tilde x_{[qi]} + (1-\beta_i) \tilde x_{[qi]+1}\big)\big| \Big) \\
&\le h \sum_{k=1}^{i-1} t_k^{\lambda} L_1 \tilde w_k + \frac{h}{2}\,t_i^{\lambda} L_1 \tilde w_i + h \sum_{k=1}^{[qi]-1} t_k^{\mu} L_2 \tilde w_k + \frac{h}{2}\,t_{[qi]}^{\mu} L_2 \tilde w_{[qi]} \\
&\quad + \frac{qt_i - t_{[qi]}}{2}\Big( t_{[qi]}^{\mu} L_2 \tilde w_{[qi]} + (qt_i)^{\mu} L_2 \big(\beta_i \tilde w_{[qi]} + (1-\beta_i)\tilde w_{[qi]+1}\big) \Big) \\
&\le Lh \sum_{k=1}^{i-1} t_k^{\gamma} \tilde w_k + \frac{Lh}{2}\,t_i^{\gamma} \tilde w_i + Lh \sum_{k=1}^{[qi]-1} t_k^{\gamma} \tilde w_k + \frac{Lh}{2}\,t_{[qi]}^{\gamma} \tilde w_{[qi]} \\
&\quad + \frac{Lh}{2}\Big( t_{[qi]}^{\gamma} \tilde w_{[qi]} + t_{[qi]}^{\gamma}\big(\beta_i \tilde w_{[qi]} + (1-\beta_i)\tilde w_{[qi]+1}\big) \Big). \tag{24}
\end{aligned}$$
(1) The first situation is $[qi] + 1 = i$ (i.e., when $i \le \frac{1}{1-q}$). Then, (24) entails
$$\begin{aligned}
\tilde w_i &\le Lh \sum_{k=1}^{[qi]-1} t_k^{\gamma} \tilde w_k + Lh\,t_{[qi]}^{\gamma} \tilde w_{[qi]} + \frac{Lh}{2}\,t_i^{\gamma} \tilde w_i + Lh \sum_{k=1}^{[qi]-1} t_k^{\gamma} \tilde w_k + \frac{Lh}{2}\,t_{[qi]}^{\gamma} \tilde w_{[qi]} \\
&\quad + \frac{Lh}{2}\Big( t_{[qi]}^{\gamma} \tilde w_{[qi]} + t_{[qi]}^{\gamma}\big(\beta_i \tilde w_{[qi]} + (1-\beta_i)\tilde w_{[qi]+1}\big) \Big) \\
&= 2Lh \sum_{k=1}^{[qi]-1} t_k^{\gamma} \tilde w_k + \Big( 2Lh\,t_{[qi]}^{\gamma} + \frac{Lh}{2}\,\beta_i t_{[qi]}^{\gamma} \Big) \tilde w_{[qi]} + \Big( \frac{Lh}{2}\,t_i^{\gamma} + \frac{Lh}{2}\,t_{[qi]}^{\gamma}(1-\beta_i) \Big) \tilde w_{[qi]+1}.
\end{aligned}$$
By letting $h$ be so small that $\frac{Lh}{2}\,t_i^{\gamma} + \frac{Lh}{2}\,t_{[qi]}^{\gamma}(1-\beta_i) \le \frac{1}{2}$, we can easily derive
$$\tilde w_i \le 4Lh \sum_{k=1}^{[qi]-1} t_k^{\gamma} \tilde w_k + \big( 4Lh\,t_{[qi]}^{\gamma} + Lh\,\beta_i t_{[qi]}^{\gamma} \big) \tilde w_{[qi]} = h \sum_{k=1}^{i-1} B_k \tilde w_k,$$
where
$$B_k = \begin{cases} 4L\,t_k^{\gamma}, & k = 1, \dots, [qi]-1, \\ 4L\,t_{[qi]}^{\gamma} + L\,\beta_i t_{[qi]}^{\gamma}, & k = [qi]. \end{cases}$$
According to Lemma 4 with $A = 0$, we have $\tilde w_i = 0$, and the solution of Equation (21) is unique.
(2) The second situation is $[qi] + 1 < i$ (i.e., when $i > \frac{1}{1-q}$). Then, (24) implies
$$\begin{aligned}
\tilde w_i &\le Lh \sum_{k=1}^{[qi]-1} t_k^{\gamma} \tilde w_k + Lh\,t_{[qi]}^{\gamma} \tilde w_{[qi]} + Lh\,t_{[qi]+1}^{\gamma} \tilde w_{[qi]+1} + Lh \sum_{k=[qi]+2}^{i-1} t_k^{\gamma} \tilde w_k + \frac{Lh}{2}\,t_i^{\gamma} \tilde w_i \\
&\quad + Lh \sum_{k=1}^{[qi]-1} t_k^{\gamma} \tilde w_k + \frac{Lh}{2}\,t_{[qi]}^{\gamma} \tilde w_{[qi]} + \frac{Lh}{2}\Big( t_{[qi]}^{\gamma} \tilde w_{[qi]} + t_{[qi]}^{\gamma}\big(\beta_i \tilde w_{[qi]} + (1-\beta_i)\tilde w_{[qi]+1}\big) \Big) \\
&= 2Lh \sum_{k=1}^{[qi]-1} t_k^{\gamma} \tilde w_k + \Big( 2Lh\,t_{[qi]}^{\gamma} + \frac{Lh}{2}\,\beta_i t_{[qi]}^{\gamma} \Big) \tilde w_{[qi]} + \Big( Lh\,t_{[qi]+1}^{\gamma} + \frac{Lh}{2}\,t_{[qi]}^{\gamma}(1-\beta_i) \Big) \tilde w_{[qi]+1} \\
&\quad + Lh \sum_{k=[qi]+2}^{i-1} t_k^{\gamma} \tilde w_k + \frac{Lh}{2}\,t_i^{\gamma} \tilde w_i.
\end{aligned}$$
Letting $h$ be so small that $\frac{Lh}{2} t_i^{\gamma} \le \frac{1}{2}$, then
$$\tilde w_i \le 4Lh \sum_{k=1}^{[qi]-1} t_k^{\gamma} \tilde w_k + \big( 4Lh\,t_{[qi]}^{\gamma} + Lh\,\beta_i t_{[qi]}^{\gamma} \big) \tilde w_{[qi]} + \big( 2Lh\,t_{[qi]+1}^{\gamma} + Lh\,t_{[qi]}^{\gamma}(1-\beta_i) \big) \tilde w_{[qi]+1} + 2Lh \sum_{k=[qi]+2}^{i-1} t_k^{\gamma} \tilde w_k = h \sum_{k=1}^{i-1} \tilde B_k \tilde w_k,$$
where
$$\tilde B_k = \begin{cases} 4L\,t_k^{\gamma}, & k = 1, \dots, [qi]-1, \\ 4L\,t_{[qi]}^{\gamma} + L\,\beta_i t_{[qi]}^{\gamma}, & k = [qi], \\ 2L\,t_{[qi]+1}^{\gamma} + L\,t_{[qi]}^{\gamma}(1-\beta_i), & k = [qi]+1, \\ 2L\,t_k^{\gamma}, & k = [qi]+2, \dots, i-1. \end{cases}$$
According to Lemma 4 with $A = 0$, we have $\tilde w_i = 0$, i.e., $y_i = \tilde x_i$; the solution of Equation (21) is unique. Combining the above situations completes the proof of Theorem 2. □

5. Convergence Analysis

In this section, we discuss the errors introduced when deriving the discrete equations via the quadrature formula and interpolation technique, as well as the errors introduced by solving the discrete equations with the iterative algorithm. According to the quadrature rule, Equation (12) can be expressed as
$$\begin{aligned}
y(t_0) &= f(t_0), \\
y(t_i) &= f(t_i) + \zeta(\lambda)\,k_1(t_i,t_0;y(t_0))\,h^{1+\lambda} + h \sum_{k=1}^{i-1} t_k^{\lambda}\,k_1(t_i,t_k;y(t_k)) + \frac{h}{2}\,t_i^{\lambda}\,k_1(t_i,t_i;y(t_i)) + E_{1,i} \\
&\quad + \zeta(\mu)\,h^{1+\mu}\,k_2(t_i,t_0;y(t_0)) + h \sum_{k=1}^{[qi]-1} t_k^{\mu}\,k_2(t_i,t_k;y(t_k)) + \frac{h}{2}\,t_{[qi]}^{\mu}\,k_2(t_i,t_{[qi]};y(t_{[qi]})) + E_{2,i} \\
&\quad + \frac{qt_i - t_{[qi]}}{2}\Big( t_{[qi]}^{\mu}\,k_2(t_i,t_{[qi]};y(t_{[qi]})) + (qt_i)^{\mu}\,k_2\big(t_i,qt_i;\beta_i y(t_{[qi]}) + (1-\beta_i) y(t_{[qi]+1})\big) \Big) + E_{3,i}. \tag{27}
\end{aligned}$$
From Lemmas 2 and 3, the remainders are
$$E_{1,i} = \big[k_1(t_i,s;y(s))\big]'\big|_{s=0}\,\zeta(\lambda-1)\,h^{2+\lambda} + \frac{\big[k_1(t_i,s;y(s))\big]''\big|_{s=0}}{2!}\,\zeta(\lambda-2)\,h^{3+\lambda} + O(h^{4+\lambda}) = T_1(t_i)\,h^{2+\lambda} + O(h^{3+\lambda}),$$
$$E_{2,i} = \big[k_2(t_i,s;y(s))\big]'\big|_{s=0}\,\zeta(\mu-1)\,h^{2+\mu} + \frac{\big[k_2(t_i,s;y(s))\big]''\big|_{s=0}}{2!}\,\zeta(\mu-2)\,h^{3+\mu} + O(h^{4+\mu}) = T_2(t_i)\,h^{2+\mu} + O(h^{3+\mu}),$$
$$\begin{aligned}
E_{3,i} &= \frac{\beta_i(1-\beta_i)}{2}\,h^{2}\,y''(qt_i)\,(qt_i)^{\mu}\,\frac{\partial k_2}{\partial y}\big(t_i,qt_i;\beta_i y(t_{[qi]}) + (1-\beta_i) y(t_{[qi]+1})\big) \\
&\quad + \frac{(qt_i - t_{[qi]})^{2}}{12} \int_{t_{[qi]}}^{qt_i} \frac{\partial^{2}}{\partial s^{2}}\big( k_2(t_i,s;y(s))\,s^{\mu} \big)\,ds + O(h^{3}) = T_3(t_i)\,h^{2} + O(h^{3}),
\end{aligned}$$
where
$$\begin{aligned}
T_1(t_i) &= \big[k_1(t_i,s;y(s))\big]'\big|_{s=0}\,\zeta(\lambda-1), \qquad T_2(t_i) = \big[k_2(t_i,s;y(s))\big]'\big|_{s=0}\,\zeta(\mu-1), \\
T_3(t_i) &= \frac{\beta_i(1-\beta_i)}{2}\,y''(qt_i)\,(qt_i)^{\mu}\,\frac{\partial k_2}{\partial y}\big(t_i,qt_i;\beta_i y(t_{[qi]}) + (1-\beta_i) y(t_{[qi]+1})\big) + \frac{1}{12} \int_{t_{[qi]}}^{qt_i} \frac{\partial^{2}}{\partial s^{2}}\big( k_2(t_i,s;y(s))\,s^{\mu} \big)\,ds.
\end{aligned}$$
In order to investigate the error between the exact solution and the approximate solution of Equation (1), we first give the following theorem.
Theorem 3.
Under the conditions of Theorem 2, let $y(t_i)$ be the exact solution of Equation (1) at $t = t_i$ and $y_i$ the solution of the discrete Equation (19) at $t_i$. If $h$ is sufficiently small, then the absolute error, denoted by $e_{1,i} = |y(t_i) - y_i|$, has the estimate
$$\max_{1 \le i \le N} |e_{1,i}| = O(h^{2+\gamma}).$$
Proof. 
Subtracting (19) from (27),
$$\begin{aligned}
|e_{1,0}| &= 0, \\
|e_{1,i}| &\le \zeta(\lambda)\,|k_1(t_i,t_0;y(t_0)) - k_1(t_i,t_0;y_0)|\,h^{1+\lambda} + h \sum_{k=1}^{i-1} t_k^{\lambda}\,|k_1(t_i,t_k;y(t_k)) - k_1(t_i,t_k;y_k)| \\
&\quad + \frac{h}{2}\,t_i^{\lambda}\,|k_1(t_i,t_i;y(t_i)) - k_1(t_i,t_i;y_i)| + \zeta(\mu)\,h^{1+\mu}\,|k_2(t_i,t_0;y(t_0)) - k_2(t_i,t_0;y_0)| \\
&\quad + h \sum_{k=1}^{[qi]-1} t_k^{\mu}\,|k_2(t_i,t_k;y(t_k)) - k_2(t_i,t_k;y_k)| + \frac{h}{2}\,t_{[qi]}^{\mu}\,|k_2(t_i,t_{[qi]};y(t_{[qi]})) - k_2(t_i,t_{[qi]};y_{[qi]})| \\
&\quad + \frac{qt_i - t_{[qi]}}{2}\Big( t_{[qi]}^{\mu}\,|k_2(t_i,t_{[qi]};y(t_{[qi]})) - k_2(t_i,t_{[qi]};y_{[qi]})| \\
&\qquad + (qt_i)^{\mu}\,\big|k_2\big(t_i,qt_i;\beta_i y(t_{[qi]}) + (1-\beta_i) y(t_{[qi]+1})\big) - k_2\big(t_i,qt_i;\beta_i y_{[qi]} + (1-\beta_i) y_{[qi]+1}\big)\big| \Big) \\
&\quad + T_1(t_i)\,h^{2+\lambda} + T_2(t_i)\,h^{2+\mu} + T_3(t_i)\,h^{2} + O(h^{3+\gamma}) \\
&\le h \sum_{k=1}^{i-1} t_k^{\lambda} L_1 |e_{1,k}| + \frac{h}{2}\,t_i^{\lambda} L_1 |e_{1,i}| + h \sum_{k=1}^{[qi]-1} t_k^{\mu} L_2 |e_{1,k}| + \frac{h}{2}\,t_{[qi]}^{\mu} L_2 |e_{1,[qi]}| \\
&\quad + \frac{qt_i - t_{[qi]}}{2}\Big( t_{[qi]}^{\mu} L_2 |e_{1,[qi]}| + (qt_i)^{\mu} L_2 \big|\beta_i e_{1,[qi]} + (1-\beta_i) e_{1,[qi]+1}\big| \Big) \\
&\quad + T_1(t_i)\,h^{2+\lambda} + T_2(t_i)\,h^{2+\mu} + T_3(t_i)\,h^{2} + O(h^{3+\gamma}).
\end{aligned}$$
Letting $h$ be so small that $\frac{h}{2}\,t_i^{\lambda} L_1 \le \frac{1}{2}$, it is easy to derive
$$|e_{1,i}| \le A + h \sum_{j=1}^{i-1} B_j |e_{1,j}|, \qquad 0 \le i \le N,$$
where
$$A = 2\,\big| T_1(t_i)\,h^{2+\lambda} + T_2(t_i)\,h^{2+\mu} + T_3(t_i)\,h^{2} \big| + O(h^{3+\gamma}) = O(h^{2+\gamma}).$$
By Lemma 4, we have
$$\max_{1 \le i \le N} |e_{1,i}| = O(h^{2+\gamma}).$$
The proof is complete. □
Next, we evaluate the error arising from the iterative process.
Theorem 4.
Under the conditions of Theorem 2, let $y_i$ be the solution of Equation (19) and $\tilde y_i$ the approximate solution of Equation (1) defined by (21). Denote the absolute error by $e_{2,i} = |y_i - \tilde y_i|$. If $h$ is sufficiently small, then there exist two positive constants $C_1$ and $C_2$, independent of $h = T/N$, such that
$$e_{2,i} \le \begin{cases} C_1\,h\,\epsilon, & [qi] + 1 = i, \\ C_2\,h\,\epsilon, & [qi] + 1 \ne i. \end{cases}$$
Proof. 
Subtracting (21) from (19), we have $e_{2,0} = 0$. We consider two cases.
(1) The first case is $[qi] + 1 = i$ (i.e., when $i \le \frac{1}{1-q}$). Then, we have
$$\begin{aligned}
e_{2,i} &\le h \sum_{k=1}^{i-1} t_k^{\lambda} L_1 e_{2,k} + \frac{h}{2}\,t_i^{\lambda} L_1 \epsilon + h \sum_{k=1}^{[qi]-1} t_k^{\mu} L_2 e_{2,k} + \frac{h}{2}\,t_{[qi]}^{\mu} L_2 e_{2,[qi]} \\
&\quad + \frac{qt_i - t_{[qi]}}{2}\Big( t_{[qi]}^{\mu} L_2 e_{2,[qi]} + (qt_i)^{\mu} L_2 \big(\beta_i e_{2,[qi]} + (1-\beta_i)\epsilon\big) \Big) \\
&\le h \sum_{k=1}^{i-1} t_k^{\lambda} L e_{2,k} + h \sum_{k=1}^{[qi]-1} t_k^{\mu} L e_{2,k} + \frac{h}{2}\,t_{[qi]}^{\mu} L e_{2,[qi]} + \frac{h}{2}\big( t_{[qi]}^{\mu} L + (qt_i)^{\mu} \beta_i L \big) e_{2,[qi]} \\
&\quad + \Big( \frac{h}{2}\,(qt_i)^{\mu} L (1-\beta_i) + \frac{h}{2}\,t_i^{\lambda} L \Big)\epsilon \\
&= h \sum_{k=1}^{i-1} B_k e_{2,k} + \Big( \frac{1}{2}\,(qt_i)^{\mu} L (1-\beta_i) + \frac{1}{2}\,t_i^{\lambda} L \Big) h\,\epsilon.
\end{aligned}$$
According to Lemma 4, we have $e_{2,i} \le C_1\,h\,\epsilon$.
(2)
The second case is [ q i ] + 1 ≠ i (i.e., when i > 1 / ( 1 − q ) ). Then, we obtain
e 2 , i ≤ h ∑ k = 1 i − 1 t k λ L 1 e 2 , k + ( h / 2 ) t i λ L 1 ϵ + h ∑ k = 1 [ q i ] − 1 t k μ L 2 e 2 , k + ( h / 2 ) t [ q i ] μ L 2 e 2 , [ q i ] + ( ( q t i − t [ q i ] ) / 2 ) ( t [ q i ] μ L 2 e 2 , [ q i ] + ( q t i ) μ L 2 ( β i e 2 , [ q i ] + ( 1 − β i ) e 2 , [ q i ] + 1 ) ) ≤ h ∑ k = 1 i − 1 t k λ L e 2 , k + ( h / 2 ) t i λ L ϵ + h ∑ k = 1 [ q i ] − 1 t k μ L e 2 , k + ( h / 2 ) t [ q i ] μ L e 2 , [ q i ] + ( h / 2 ) ( t [ q i ] μ L e 2 , [ q i ] + ( q t i ) μ L ( β i e 2 , [ q i ] + ( 1 − β i ) e 2 , [ q i ] + 1 ) ) = h ∑ k = 1 i − 1 B k e 2 , k + ( 1 / 2 ) t i λ L h ϵ .
According to Lemma 4, we have e 2 , i ≤ C 2 h ϵ . □
Theorem 5.
Under the conditions of Theorem 2, let y ( t i ) be the exact solution of Equation (1) and y ˜ i the approximate solution of Equation (1) at t = t i . Then we have
| y ( t i ) − y ˜ i | ≤ C 1 h ϵ + O ( h 2 + γ ) , [ q i ] + 1 = i , C 2 h ϵ + O ( h 2 + γ ) , [ q i ] + 1 ≠ i .
Proof. 
By Theorems 3 and 4, the absolute error between y ( t i ) and y ˜ i has the expression
| y ( t i ) − y ˜ i | = | y ( t i ) − y i + y i − y ˜ i | ≤ | y ( t i ) − y i | + | y i − y ˜ i | .
We obtain the conclusion of Theorem 5. □

6. Extrapolation Method

In this section, we first describe the asymptotic error expansion and then present an extrapolation technique for achieving high precision. Finally, an a posteriori error estimate is derived.
Theorem 6.
Let f ( t ) , k 1 ( t , s ; y ( s ) ) , and k 2 ( t , s ; y ( s ) ) be four times continuously differentiable on I , D × R , and D θ × R , respectively. Additionally, assume y ( t ) has continuous derivatives up to order 3 on I and that k i ( t , s ; y ( s ) ) ( i = 1 , 2 ) satisfy the Lipschitz conditions (2). Then there exist functions W ^ i ( t ) ( i = 1 , 2 , 3 ) , independent of h , such that the following asymptotic expansion holds:
y i = y ( t i ) + W ^ 1 ( t i ) h 2 + λ + W ^ 2 ( t i ) h 2 + μ + W ^ 3 ( t i ) h 2 + O ( h 3 + γ ) , − 1 < λ < 0 , − 1 < μ ≤ 0 .
Proof. 
Assume that { W k ^ ( t ) , k = 1 , 2 , 3 } satisfy the auxiliary delay equations
W k ^ ( t ) = W k ( t ) + ∫ 0 t s λ k 1 ( t , s ; y ( s ) ) W k ^ ( s ) d s + ∫ 0 q t s μ k 2 ( t , s ; y ( s ) ) W k ^ ( s ) d s ,
and W k ^ ( t i ) , i = 1 , , N satisfy the approximation equations
W k ^ ( t i ) = ζ ( λ ) h 1 + λ k 1 ( t i , t 0 ; y ( t 0 ) ) W k ^ ( t 0 ) + h ∑ k = 1 i − 1 t k λ k 1 ( t i , t k ; y ( t k ) ) W k ^ ( t k ) + ( h / 2 ) t i λ k 1 ( t i , t i ; y ( t i ) ) W k ^ ( t i ) + ζ ( μ ) h 1 + μ k 2 ( t i , t 0 ; y ( t 0 ) ) W k ^ ( t 0 ) + h ∑ k = 1 [ q i ] − 1 t k μ k 2 ( t i , t k ; y ( t k ) ) W k ^ ( t k ) + ( h / 2 ) t [ q i ] μ k 2 ( t i , t [ q i ] ; y ( t [ q i ] ) ) W k ^ ( t [ q i ] ) + ( ( q t i − t [ q i ] ) / 2 ) ( t [ q i ] μ k 2 ( t i , t [ q i ] ; y ( t [ q i ] ) ) W k ^ ( t [ q i ] ) + ( q t i ) μ k 2 ( t i , q t i ; β i y ( t [ q i ] ) W k ^ ( t [ q i ] ) + ( 1 − β i ) y ( t [ q i ] + 1 ) W k ^ ( t [ q i ] + 1 ) ) ) + W k ( t i ) .
The analysis procedure is similar to the proof of Theorem 3. We obtain
max 1 ≤ i ≤ N | W k ^ ( t i ) − W ( t i ) | ≤ L h 2 + γ .
Let
E i = e i − ( W 1 ( t i ) h 2 + λ + W 2 ( t i ) h 2 + μ + W 3 ( t i ) h 2 ) .
Then, we obtain
E i = ζ ( λ ) h 1 + λ k 1 ( t i , t 0 ; y ( t 0 ) ) E 0 + h ∑ k = 1 i − 1 t k λ k 1 ( t i , t k ; y ( t k ) ) E k + ( h / 2 ) t i λ k 1 ( t i , t i ; y ( t i ) ) E i + ζ ( μ ) h 1 + μ k 2 ( t i , t 0 ; y ( t 0 ) ) E 0 + h ∑ k = 1 [ q i ] − 1 t k μ k 2 ( t i , t k ; y ( t k ) ) E k + ( h / 2 ) t [ q i ] μ k 2 ( t i , t [ q i ] ; y ( t [ q i ] ) ) E [ q i ] + ( ( q t i − t [ q i ] ) / 2 ) ( t [ q i ] μ k 2 ( t i , t [ q i ] ; y ( t [ q i ] ) ) E [ q i ] + ( q t i ) μ k 2 ( t i , q t i ; β i y ( t [ q i ] ) E [ q i ] + ( 1 − β i ) y ( t [ q i ] + 1 ) E [ q i ] + 1 ) ) + O ( h 3 + γ ) .
According to Lemma 4, there exists a constant d such that
max 1 ≤ i ≤ N | E i | ≤ d h 3 + γ .
The asymptotic expansion is
y ˜ i = y ( t i ) + W ^ 1 ( t i ) h 2 + λ + W ^ 2 ( t i ) h 2 + μ + W ^ 3 ( t i ) h 2 + O ( h 3 + γ ) .
Based on Theorem 6, we can apply the Richardson extrapolation method to achieve higher accuracy.
  • Extrapolation algorithm
Step 1. 
Assume γ = min ( λ , μ ) = λ , and halve the step length to obtain
y ˜ i h / 2 = y ( t i ) + W ^ 1 ( t i ) ( h / 2 ) 2 + λ + W ^ 2 ( t i ) ( h / 2 ) 2 + μ + W ^ 3 ( t i ) ( h / 2 ) 2 + O ( ( h / 2 ) 3 + λ ) .
Then, the term W ^ 1 ( t i ) h 2 + λ can be removed.
y ˜ i 1 , h = ( 2 2 + λ y ˜ i h / 2 − y ˜ i h ) / ( 2 2 + λ − 1 ) = y ( t i ) + W ^ 2 ( t i ) h 2 + μ + W ^ 3 ( t i ) h 2 + O ( h 3 + λ ) .
Step 2. 
To eliminate W ^ 2 ( t i ) h 2 + μ , we apply Richardson h 2 + μ extrapolation:
y ˜ i 1 , h / 2 = y ( t i ) + W ^ 2 ( t i ) ( h / 2 ) 2 + μ + W ^ 3 ( t i ) ( h / 2 ) 2 + O ( ( h / 2 ) 3 + λ ) .
Combining (35) and (36), we have
y ˜ i 2 , h = ( 2 2 + μ y ˜ i 1 , h / 2 − y ˜ i 1 , h ) / ( 2 2 + μ − 1 ) = y ( t i ) + W ^ 3 ( t i ) h 2 + O ( h 3 + λ ) .
The a posteriori asymptotic error estimate is
| y ˜ i h / 2 − y ( t i ) | = | ( 2 2 + λ y ˜ i h / 2 − y ˜ i h ) / ( 2 2 + λ − 1 ) − y ( t i ) + ( y ˜ i h − y ˜ i h / 2 ) / ( 2 2 + λ − 1 ) | ≤ | ( 2 2 + λ y ˜ i h / 2 − y ˜ i h ) / ( 2 2 + λ − 1 ) − y ( t i ) | + | ( y ˜ i h − y ˜ i h / 2 ) / ( 2 2 + λ − 1 ) | = | y ˜ i 1 , h − y ( t i ) | + | ( y ˜ i h − y ˜ i h / 2 ) / ( 2 2 + λ − 1 ) | = | ( y ˜ i h − y ˜ i h / 2 ) / ( 2 2 + λ − 1 ) | + O ( h 2 + μ ) .
Thus, the error | y ˜ i h / 2 − y ( t i ) | is dominated by the computable quantity | ( y ˜ i h − y ˜ i h / 2 ) / ( 2 2 + λ − 1 ) | , which is important for constructing adaptive algorithms.
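Both extrapolation steps and the a posteriori estimate can be exercised on a synthetic approximation that has, by construction, exactly the expansion of Theorem 6. This is a sketch, not the paper's solver: the exact value 1 and the coefficients 0.7, −0.3, 0.2 are arbitrary choices of ours.

```python
import math

lam, mu = -0.5, -0.25      # singularity exponents, gamma = lam < mu < 0
y_exact = 1.0

def y_h(h):
    # synthetic approximation: y(t_i) + W1*h^(2+lam) + W2*h^(2+mu) + W3*h^2
    return y_exact + 0.7 * h**(2 + lam) - 0.3 * h**(2 + mu) + 0.2 * h**2

def richardson(coarse, fine, p):
    # eliminate the h^p error term from values computed with steps h and h/2
    return (2**p * fine - coarse) / (2**p - 1)

h = 0.1
# Step 1: remove the h^(2+lam) term
y1      = richardson(y_h(h),     y_h(h / 2), 2 + lam)
y1_fine = richardson(y_h(h / 2), y_h(h / 4), 2 + lam)
# Step 2: remove the h^(2+mu) term, leaving an O(h^2) error
y2 = richardson(y1, y1_fine, 2 + mu)

# a posteriori estimate of the h/2 error from two computable values
est    = abs((y_h(h) - y_h(h / 2)) / (2**(2 + lam) - 1))
err_h2 = abs(y_h(h / 2) - y_exact)

assert abs(y2 - y_exact) < abs(y1 - y_exact) < abs(y_h(h) - y_exact)
assert abs(est - err_h2) / err_h2 < 0.1   # the estimate tracks the true error
```

Each Richardson step reuses values already computed on the finer grids, so the extra accuracy costs only a few arithmetic operations per node.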

7. Numerical Experiments

In this section, we illustrate the performance and accuracy of the quadrature method using the improved trapezoidal formula. For ease of notation, we define
E h = | y ( t i ) − y ˜ i h | , E k , i = | y ( t i ) − y ˜ i k , h | ( k = 1 , 2 ) , Rate = log 2 ( E h / E h / 2 ) ,
where y ˜ i h is the approximate solution of Equation (1), y ˜ i k , h is the approximate solution after the k-th extrapolation, and E k , i is the absolute error between the exact solution and the approximate solution after the k-th extrapolation at t = t i . The procedure was implemented in MATLAB.
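The paper's computations were done in MATLAB; as a language-neutral illustration, the following Python sketch shows the endpoint correction that underlies the improved trapezoidal rule for a weakly singular integrand. By Navot's generalized Euler-Maclaurin formula [34], subtracting a term ζ(−λ) h^(1+λ) g(0) (our sign convention; the hardcoded ζ(1/2) value and the test function g(s) = 1 + s are our choices) raises the accuracy of the trapezoidal sum for ∫_0^1 s^λ g(s) ds from O(h^(1+λ)) to O(h^(2+λ)).

```python
import math

ZETA_HALF = -1.4603545088095868  # Riemann zeta(1/2), needed for lam = -1/2

def singular_trapezoid(g, lam, n, corrected=True):
    """Trapezoid-type rule for int_0^1 s^lam g(s) ds with -1 < lam < 0.

    The integrand is singular at s = 0, so the left endpoint is dropped and,
    if corrected, replaced by Navot's term -zeta(-lam) * h^(1+lam) * g(0).
    """
    h = 1.0 / n
    Q = h * sum((j * h) ** lam * g(j * h) for j in range(1, n)) + 0.5 * h * g(1.0)
    if corrected:
        Q -= ZETA_HALF * h ** (1 + lam) * g(0.0)   # assumes lam = -1/2
    return Q

g = lambda s: 1.0 + s
exact = 8.0 / 3.0        # int_0^1 s^(-1/2) * (1 + s) ds = 2 + 2/3

e_plain = abs(singular_trapezoid(g, -0.5, 256, corrected=False) - exact)
e_corr  = abs(singular_trapezoid(g, -0.5, 256) - exact)
e_corr2 = abs(singular_trapezoid(g, -0.5, 512) - exact)
rate = math.log2(e_corr / e_corr2)

assert e_corr < 1e-2 * e_plain    # correction removes the O(h^(1/2)) error
assert abs(rate - 1.5) < 0.1      # remaining error is O(h^(3/2))
```

The same correction is applied at s = 0 in both integrals of the scheme, with exponent λ for the non-delay part and μ for the delay part.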
Example 1.
Consider the following equation
y ( t ) = f ( t ) − ∫ 0 t s λ sin ( y ( s ) ) d s + ∫ 0 q t ( t + s ) sin ( y ( s ) ) d s , t ∈ [ 0 , T ] ,
with T = 1 , λ = − 1 / 2 , and q = 0.95 . The exact solution is given by y ( t ) = t , and f ( t ) is determined by the exact solution.
Applying the algorithm with N = 2^4 , 2^5 , 2^6 , 2^7 , and 2^8 , the numerical results at t = 0.4 are presented in Table 1; the corresponding CPU times are 0.34, 0.55, 0.98, 1.62, and 3.01 s, respectively. By comparing E h and E 1 , i , we can observe that the accuracy was improved and the extrapolation algorithm was effective. In the third column, the Rate values show that the convergence order is consistent with the theoretical analysis.
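The Rate column can be reproduced directly from the E h column of Table 1. The quick check below (values copied from the table) confirms that the observed order is close to the theoretical 2 + γ = 1.5:

```python
import math

# E_h column of Table 1 for N = 2^4, ..., 2^8
E = [3.32e-4, 1.18e-4, 4.01e-5, 1.35e-5, 4.51e-6]
rates = [math.log2(E[i] / E[i + 1]) for i in range(len(E) - 1)]
print([round(r, 2) for r in rates])   # close to the tabulated 1.50, 1.55, 1.57, 1.58
assert all(abs(r - t) < 0.05 for r, t in zip(rates, [1.50, 1.55, 1.57, 1.58]))
```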
Example 2.
Consider the following equation
y ( t ) = f ( t ) − ∫ 0 t s λ ( t 2 + s ) ( y ( s ) ) 2 d s + ∫ 0 q t s μ sin ( y ( s ) ) d s , t ∈ [ 0 , T ] ,
where T = 1 , λ = μ = − 1 / 2 , q = 0.8 , and the analytical solution is y ( t ) = t . Then, f ( t ) is determined by the exact solution.
By applying the numerical method for N = 2^4 , 2^5 , 2^6 , 2^7 , and 2^8 , the obtained results at t = 0.2 are shown in Table 2. By comparing E h and E 1 , i , we can observe that the accuracy was improved, confirming that the extrapolation algorithm is effective. The results verify the theoretical convergence order O ( h 1.5 ) .
Example 3.
We consider the following equation
y ( t ) = f ( t ) − ∫ 0 t s λ ( t + s ) sin ( y ( s ) ) d s + ∫ 0 q t s μ ( t + s ) ( y ( s ) ) 2 d s , t ∈ [ 0 , T ] ,
where T = 1 , λ = − 1 / 3 , μ = − 1 / 4 , q = 0.9 , and the analytical solution is y ( t ) = t . Then, f ( t ) is determined by the exact solution.
By applying the numerical method for N = 2^4 , 2^5 , 2^6 , 2^7 , and 2^8 , the obtained results at t = 0.4 are shown in Table 3. As λ is not equal to μ , we first applied the Richardson h 2 + λ extrapolation and then the Richardson h 2 + μ extrapolation. Comparing E h , E 1 , i , and E 2 , i , the results verify the theoretical analysis, and we can see that extrapolation improved the accuracy dramatically. For N = 8, 16, 32, 64, and 128, the CPU times are 1.43, 2.41, 3.99, 17.46, and 21.36 s, respectively. The exact solution and the approximation for N = 2^3 are plotted in Figure 1.
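For Example 3, γ = min(λ, μ) = −1/3, so the expected order of E h is 2 + γ ≈ 1.67. The rates computed from the E h column of Table 3 (values copied from the table; small deviations from the tabulated rates come from the rounding of the error values) approach this value as N grows:

```python
import math

# E_h column of Table 3 for N = 2^4, ..., 2^8
E = [9.36e-5, 3.23e-5, 1.06e-5, 3.41e-6, 1.10e-6]
rates = [math.log2(E[i] / E[i + 1]) for i in range(len(E) - 1)]
order = 2 - 1 / 3            # theoretical order 2 + gamma with gamma = -1/3
assert all(1.4 < r < 1.8 for r in rates)
assert abs(rates[-1] - order) < 0.08   # rates approach 2 + gamma
```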

8. Conclusions

In this paper, using an improved trapezoidal quadrature formula and linear interpolation, we obtained an approximate equation for non-linear Volterra integral equations with vanishing delay and weakly singular kernels. The approximate solutions were obtained by an iterative algorithm and possess a high accuracy order O ( h 2 + γ ) . Additionally, we analyzed the existence and uniqueness of both the exact and approximate solutions. The significance of this work is that it demonstrates the efficiency and reliability of Richardson extrapolation for this class of equations: the computed results possess high accuracy with low computational complexity and agree well with the theoretical analysis. In future work, we plan to apply this method to two-dimensional delay integral equations.

Author Contributions

Conceptualization, J.H. and L.Z.; methodology, J.H. and L.Z.; validation, J.H. and H.L.; writing—review and editing, L.Z. and Y.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Program of Chengdu Normal University, grant number CS18ZDZ02.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the editor and referees for their careful comments and fruitful suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Avaji, M.; Hafshejani, J.; Dehcheshmeh, S.; Ghahfarokhi, D. Solution of delay Volterra integral equations using the Variational iteration method. J. Appl. Sci. 2012, 12, 196–200. [Google Scholar] [CrossRef]
  2. Williams, L.R.; Leggett, R.W. Nonzero Solutions of Nonlinear Integral Equations Modeling Infectious Disease. SIAM J. Math. Anal. 1982, 13, 121. [Google Scholar] [CrossRef]
  3. Volterra, V. On some questions of the inversion of definite integrals. (Sopra alcune questioni di inversione di integrali definiti). Ann. Mat. Pura Appl. 1897, 25, 139–178. [Google Scholar] [CrossRef]
  4. Yang, K.; Zhang, R. Analysis of continuous collocation solutions for a kind of Volterra functional integral equations with proportional delay. J. Comput. Appl. Math. 2011, 236, 743–752. [Google Scholar] [CrossRef] [Green Version]
  5. Xie, H.; Zhang, R.; Brunner, H. Collocation methods for general Volterra functional integral equations with vanishing delays. SIAM J. Sci. Comput. 2011, 33, 3303–3332. [Google Scholar] [CrossRef]
  6. Ming, W.; Huang, C.; Zhao, L. Optimal superconvergence results for Volterra functional integral equations with proportional vanishing delays. Appl. Math. Comput. 2018, 320, 292–301. [Google Scholar] [CrossRef]
  7. Mokhtary, P.; Moghaddam, B.P.; Lopes, A.M.; Machado, J.A.T. A computational approach for the non-smooth solution of non-linear weakly singular Volterra integral equation with proportional delay. Numer. Algorithms 2019, 83, 987–1006. [Google Scholar] [CrossRef]
  8. Gu, Z.; Chen, Y. Chebyshev spectral-collocation method for a class of weakly singular Volterra integral equations with proportional delay. J. Numer. Math. 2014, 22, 311–342. [Google Scholar] [CrossRef]
  9. Zhang, R.; Zhu, B.; Xie, H. Spectral methods for weakly singular Volterra integral equations with pantograph delays. Front. Math. China 2014, 8, 281–299. [Google Scholar] [CrossRef]
  10. Brunner, H. Collocation Methods for Volterra Integral and Related Functional Differential Equations; Cambridge University Press: Cambridge, UK, 2004. [Google Scholar]
  11. Zhang, G.; Song, M. Impulsive continuous Runge–Kutta methods for impulsive delay differential equations. Appl. Math. Comput. 2019, 341, 160–173. [Google Scholar] [CrossRef]
  12. Shu, H.; Xu, W.; Wang, X.; Wu, J. Complex dynamics in a delay differential equation with two delays in tick growth with diapause. J. Differ. Equ. 2020, 269, 10937–10963. [Google Scholar] [CrossRef]
  13. Fang, J.; Zhan, R. High order explicit exponential Runge–Kutta methods for semilinear delay differential equations. J. Comput. Appl. Math. 2021, 388, 113279. [Google Scholar] [CrossRef]
  14. Song, H.; Xiao, Y.; Chen, M. Collocation methods for third-kind Volterra integral equations with proportional delays. Appl. Math. Comput. 2021, 388, 125509. [Google Scholar]
  15. Brunner, H. The numerical solution of weakly singular Volterra functional integro-differential equations with variable delays. Commun. Pure Appl. Anal. 2006, 5, 261–276. [Google Scholar] [CrossRef]
  16. Huang, C. Stability of linear multistep methods for delay integro-differential equations. Comput. Math. Appl. 2008, 55, 2830–2838. [Google Scholar] [CrossRef] [Green Version]
  17. Sheng, C.; Wang, Z.; Guo, B. An hp-spectral collocation method for nonlinear Volterra functional integro-differential equations with delays. Appl. Numer. Math. 2016, 105, 1–24. [Google Scholar] [CrossRef]
  18. Abdi, A.; Berrut, J.P.; Hosseini, S.A. The Linear Barycentric Rational Method for a Class of Delay Volterra Integro-Differential Equations. J. Sci. Comput. 2018, 75, 1757–1775. [Google Scholar] [CrossRef] [Green Version]
  19. Yaghoobi, S.; Moghaddam, B.P.; Ivaz, K. An efficient cubic spline approximation for variable-order fractional differential equations with time delay. Nonlinear Dyn. 2017, 87, 815–826. [Google Scholar] [CrossRef]
  20. Rahimkhani, P.; Ordokhani, Y.; Babolian, E. A new operational matrix based on Bernoulli wavelets for solving fractional delay differential equations. Numer. Algorithms 2017, 74, 223–245. [Google Scholar] [CrossRef]
  21. Rakhshan, S.A.; Effati, S. A generalized Legendre–Gauss collocation method for solving nonlinear fractional differential equations with time varying delays. Appl. Numer. Math. 2019, 146, 342–360. [Google Scholar] [CrossRef]
  22. Zuniga-Aguilar, C.J.; Gomez-Aguilar, J.F.; Escobar-Jimenez, R.F.; Romero-Ugalde, H.M. A novel method to solve variable-order fractional delay differential equations based in lagrange interpolations. Chaos Solitons Fractals 2019, 126, 266–282. [Google Scholar] [CrossRef]
  23. Xu, X.; Huang, Q. Superconvergence of discontinuous Galerkin methods for nonlinear delay differential equations with vanishing delay. J. Comput. Appl. Math. 2019, 348, 314–327. [Google Scholar] [CrossRef]
  24. Bellour, A.; Bousselsal, M. A Taylor collocation method for solving delay integral equations. Numer. Algorithms 2014, 65, 843–857. [Google Scholar] [CrossRef]
  25. Darania, P.; Sotoudehmaram, F. Numerical analysis of a high order method for nonlinear delay integral equations. J. Comput. Appl. Math. 2020, 374, 112738. [Google Scholar] [CrossRef]
  26. Khasi, M.; Ghoreishi, F.; Hadizadeh, M. Numerical analysis of a high order method for state-dependent delay integral equations. Numer. Algorithms 2014, 66, 177–201. [Google Scholar] [CrossRef]
  27. Bica, A.M.; Popescu, C. Numerical solutions of the nonlinear fuzzy Hammerstein-Volterra delay integral equations. Inf. Sci. 2013, 223, 236–255. [Google Scholar] [CrossRef]
  28. Mosleh, M.; Otadi, M. Least squares approximation method for the solution of Hammerstein-Volterra delay integral equations. Appl. Math. Comput. 2015, 258, 105–110. [Google Scholar] [CrossRef]
  29. Zhang, X. A new strategy for the numerical solution of nonlinear Volterra integral equations with vanishing delays. Appl. Math. Comput. 2020, 365, 124608. [Google Scholar]
  30. Lima, P.; Diogo, T. Numerical solution of a nonuniquely solvable Volterra integral equation using extrapolation. J. Comput. Appl. Math. 2002, 140, 537–557. [Google Scholar] [CrossRef] [Green Version]
  31. Sidi, A. Practical Extrapolation Methods; Cambridge University Press: Cambridge, UK, 2003. [Google Scholar]
  32. Tao, L.; Jin, H. High Precision Algorithm for Integral Equations; China Science Press: Beijing, China, 2013. [Google Scholar]
  33. Tao, L.; Yong, H. Extrapolation method for solving weakly singular nonlinear Volterra integral equations of the second kind. J. Math. Anal. Appl. 2006, 324, 225–237. [Google Scholar] [CrossRef] [Green Version]
  34. Navot, J. A Further Extension of the Euler-Maclaurin Summation Formula. J. Math. Phys. 1962, 41, 155–163. [Google Scholar] [CrossRef]
  35. Tao, L.; Yong, H. A generalization of Gronwall inequality and its application to weakly singular Volterra integral equation of the second kind. J. Math. Anal. Appl. 2003, 282, 56–62. [Google Scholar] [CrossRef] [Green Version]
  36. Qin, Y. Integral and Discrete Inequalities and Their Applications; Springer International Publishing: Berlin/Heidelberg, Germany, 2016. [Google Scholar]
Figure 1. The absolute errors and the approximations when N = 2^3.
Table 1. Numerical results at t = 0.4 of Example 1.
N | E_h | Rate | E_1,i | Posteriori Errors
2^4 | 3.32 × 10^−4 | – | – | –
2^5 | 1.18 × 10^−4 | 2^1.50 | 4.70 × 10^−6 | 1.17 × 10^−4
2^6 | 4.01 × 10^−5 | 2^1.55 | 2.04 × 10^−6 | 4.21 × 10^−5
2^7 | 1.35 × 10^−5 | 2^1.57 | 9.90 × 10^−7 | 1.46 × 10^−5
2^8 | 4.51 × 10^−6 | 2^1.58 | 4.21 × 10^−7 | 4.92 × 10^−6
Table 2. Numerical results at t = 0.2 of Example 2.
N | E_h | Rate | E_1,i | Posteriori Errors
2^4 | 6.57 × 10^−4 | – | – | –
2^5 | 2.30 × 10^−4 | 2^1.51 | 3.13 × 10^−6 | 2.33 × 10^−4
2^6 | 8.03 × 10^−5 | 2^1.52 | 1.58 × 10^−6 | 8.19 × 10^−5
2^7 | 2.81 × 10^−5 | 2^1.52 | 5.38 × 10^−7 | 2.86 × 10^−5
2^8 | 9.82 × 10^−6 | 2^1.52 | 1.56 × 10^−7 | 9.97 × 10^−6
Table 3. Numerical results at t = 0.4 of Example 3.
N | E_h | Rate | E_1,i | E_2,i | Posteriori Errors
2^4 | 9.36 × 10^−5 | – | – | – | –
2^5 | 3.23 × 10^−5 | 2^1.53 | 4.12 × 10^−6 | – | 2.82 × 10^−5
2^6 | 1.06 × 10^−5 | 2^1.61 | 6.00 × 10^−7 | 8.89 × 10^−7 | 9.99 × 10^−6
2^7 | 3.41 × 10^−6 | 2^1.63 | 1.17 × 10^−7 | 8.70 × 10^−8 | 3.30 × 10^−6
2^8 | 1.10 × 10^−6 | 2^1.65 | 2.31 × 10^−8 | 1.68 × 10^−8 | 1.07 × 10^−6
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Zhang, L.; Huang, J.; Li, H.; Wang, Y. Extrapolation Method for Non-Linear Weakly Singular Volterra Integral Equation with Time Delay. Mathematics 2021, 9, 1856. https://0-doi-org.brum.beds.ac.uk/10.3390/math9161856
