Article

A Weak Convergence Self-Adaptive Method for Solving Pseudomonotone Equilibrium Problems in a Real Hilbert Space

by Pasakorn Yordsorn 1, Poom Kumam 1,2,3,*, Habib ur Rehman 1 and Abdulkarim Hassan Ibrahim 1
1 KMUTT Fixed Point Research Laboratory, KMUTT-Fixed Point Theory and Applications Research Group, Department of Mathematics, Faculty of Science, King Mongkut’s University of Technology Thonburi (KMUTT), 126 Pracha-Uthit Road, Bang Mod, Thrung Khru, Bangkok 10140, Thailand
2 Center of Excellence in Theoretical and Computational Science (TaCS-CoE), Science Laboratory Building, King Mongkut’s University of Technology Thonburi (KMUTT), 126 Pracha-Uthit Road, Bang Mod, Thrung Khru, Bangkok 10140, Thailand
3 Department of Medical Research, China Medical University Hospital, China Medical University, Taichung 40402, Taiwan
* Author to whom correspondence should be addressed.
Submission received: 19 April 2020 / Revised: 5 July 2020 / Accepted: 10 July 2020 / Published: 16 July 2020

Abstract: In this paper, we present a modification of the extragradient method for solving pseudomonotone equilibrium problems involving a Lipschitz-type condition in a real Hilbert space. The method uses an inertial effect and a stepsize formula that is updated at each iteration based on previous iterates. The key advantage of the algorithm is that it requires neither prior knowledge of the Lipschitz-type constants nor any line search procedure. A weak convergence theorem for the proposed method is established under mild conditions on the cost bifunction. Numerical experiments are presented to illustrate the computational performance of the method and to compare it with related methods.

1. Introduction

An equilibrium problem (EP) includes various mathematical problems as special cases, e.g., variational inequality problems (VIP), optimization problems, fixed point problems, complementarity problems, Nash equilibria of non-cooperative games, saddle point problems and vector minimization problems (see [1,2,3] for more details). To the best of our knowledge, the term “equilibrium problem” was introduced in 1992 by Muu and Oettli [2] and further developed by Blum and Oettli [1]. The equilibrium problem is also known as the Ky Fan inequality [4]. It is interesting and useful to develop new iterative schemes and to study their convergence. Many methods have been established to find the solution of an equilibrium problem (1) in finite- and infinite-dimensional spaces. These include projection methods [5,6,7,8], proximal point methods [9,10], extragradient methods with or without line searches [11,12,13,14,15,16,17,18], methods using an inertial effect [19,20,21,22] and other methods in [23,24,25,26,27,28].
The proximal point method (PPM) is one of the well-established methods for equilibrium problems. Martinet [29] initially developed this strategy for monotone variational inequality problems, and it was later extended to monotone operators by Rockafellar [30]. Moudafi [9] presented a proximal point method for solving monotone equilibrium problems. Konnov [31] proposed another modification of the proximal point method, under weaker assumptions, for equilibrium problems involving a monotone bifunction. Normally, the proximal point method applies to monotone equilibrium problems; each regularized sub-problem is then strongly monotone, so its unique solution exists. Another well-known technique is the auxiliary problem principle, i.e., formulating a new equivalent problem that is usually simpler and easier to solve than the original problem. This idea was initiated by Cohen [32] for solving optimization problems and was thereafter applied to variational inequality problems [33]. In addition, Mastroeni [34] extended the auxiliary problem principle to equilibrium problems involving a strongly monotone bifunction.
Next, we consider the well-known two-step extragradient method [35]. Tran et al. [36] and Flåm et al. [10] used the auxiliary problem principle to extend the extragradient method of [35] to monotone equilibrium problems. The first drawback of the extragradient method is that it requires computing two projections onto the feasible set C to obtain the next iterate. Thus, when the projection onto a constraint set C is difficult to compute, it is hard to solve two minimal distance problems per iteration, a fact that can affect the performance and applicability of the method. To overcome this, Censor et al. introduced the subgradient extragradient method [37], wherein the second projection of the extragradient method is replaced by a projection onto a specific half-space that can be computed easily and efficiently. Moreover, an iterative sequence generated by extragradient-like methods typically requires a stepsize that depends on the Lipschitz-type constants of the cost bifunction. The need for prior information about such constants restricts the implementation of these methods, because the Lipschitz-type constants are normally unknown or hard to compute.
In this paper, we introduce a new algorithm to solve the pseudomonotone equilibrium problem (1) under the Lipschitz-type condition (2) on a bifunction in a real Hilbert space. The algorithm can be viewed as a combination of Algorithm 1 in [16] with inertial effects. The stepsize used in the proposed scheme is not fixed but is updated on the basis of previous iterations using an explicit formula. Under certain mild conditions, a weak convergence theorem is established which ensures that the sequence of iterates converges to a solution of the equilibrium problem. We also examine the computational performance of the new method on various test problems in comparison with Algorithm 1 in [16].
The rest of this paper is arranged as follows: Section 2 provides some definitions and necessary results that will be needed throughout this article; Section 3 contains our algorithm and the weak convergence theorem; Section 4 presents numerical experiments showing the computational performance of the proposed method on different test problems.

2. Preliminaries

Let C be a nonempty, closed and convex subset of a real Hilbert space H. The notations ⟨·,·⟩ and ∥·∥ denote the inner product and the induced norm on H, respectively. The notation EP(f, C) stands for the solution set of the equilibrium problem over C, and q represents an arbitrary element of EP(f, C).
Definition 1 ([1]). Let C be a nonempty, closed and convex subset of H and f : H × H → ℝ be a bifunction such that f(x, x) = 0 for all x ∈ C. The equilibrium problem for the bifunction f on C is formulated as follows:
Find q ∈ C such that f(q, y) ≥ 0, ∀ y ∈ C. (1)
Definition 2 ([38]). The metric projection P_C(x) of x ∈ H onto a closed, convex subset C of H is defined as follows:
P_C(x) = arg min_{y∈C} ∥y − x∥.
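For simple constraint sets, the arg min in Definition 2 has a closed form. A short illustrative sketch in Python (the function names are ours, not from the paper): the projection onto a box is coordinate-wise clipping, and the projection onto a half-space T = {y : ⟨a, y⟩ ≤ β} has the standard explicit formula used later for the set T_n.

```python
# Metric projections onto two simple convex sets, where P_C(x) = arg min_{y in C} ||y - x||
# admits a closed form (illustrative sketch; not part of the paper's algorithm itself).

def project_box(x, lo, hi):
    """P_C(x) for the box C = {y : lo[i] <= y[i] <= hi[i]}: clip each coordinate."""
    return [min(max(xi, l), h) for xi, l, h in zip(x, lo, hi)]

def project_halfspace(x, a, beta):
    """P_T(x) for the half-space T = {y : <a, y> <= beta}."""
    s = sum(ai * xi for ai, xi in zip(a, x))
    if s <= beta:              # x already lies in T
        return list(x)
    t = (s - beta) / sum(ai * ai for ai in a)
    return [xi - t * ai for xi, ai in zip(x, a)]
```

For instance, projecting the point (7, −1) onto the box [−5, 5]² clips the first coordinate to 5 and leaves the second unchanged.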
Now, we recall classical concepts of monotonicity of nonlinear operators (see [39,40]).
Definition 3.
An operator F : H → H is said to be
(1) strongly monotone on C if there exists γ > 0 such that ⟨F(x) − F(y), x − y⟩ ≥ γ∥x − y∥², ∀ x, y ∈ C;
(2) monotone on C if ⟨F(x) − F(y), x − y⟩ ≥ 0, ∀ x, y ∈ C;
(3) strongly pseudomonotone on C if there exists γ > 0 such that ⟨F(x), y − x⟩ ≥ 0 ⟹ ⟨F(y), y − x⟩ ≥ γ∥x − y∥², ∀ x, y ∈ C;
(4) pseudomonotone on C if ⟨F(x), y − x⟩ ≥ 0 ⟹ ⟨F(y), y − x⟩ ≥ 0, ∀ x, y ∈ C;
(5) L-Lipschitz continuous on C if there exists a constant L > 0 such that ∥F(x) − F(y)∥ ≤ L∥x − y∥, ∀ x, y ∈ C.
Now, we consider the corresponding notions of monotonicity for bifunctions (see [1,41]).
Definition 4.
A bifunction f : H × H → ℝ is said to be, for γ > 0,
(1) strongly monotone on C if f(x, y) + f(y, x) ≤ −γ∥x − y∥², ∀ x, y ∈ C;
(2) monotone on C if f(x, y) + f(y, x) ≤ 0, ∀ x, y ∈ C;
(3) strongly pseudomonotone on C if f(x, y) ≥ 0 ⟹ f(y, x) ≤ −γ∥x − y∥², ∀ x, y ∈ C;
(4) pseudomonotone on C if f(x, y) ≥ 0 ⟹ f(y, x) ≤ 0, ∀ x, y ∈ C.
Remark 1.
From the above definitions, the following implications hold:
(1) ⟹ (2) ⟹ (4) and (1) ⟹ (3) ⟹ (4).
A bifunction f : H × H → ℝ satisfies the Lipschitz-type condition [34] on C if there exist c₁, c₂ > 0 such that
f(x, z) ≤ f(x, y) + f(y, z) + c₁∥x − y∥² + c₂∥y − z∥², ∀ x, y, z ∈ C. (2)
The normal cone of C at x ∈ C is defined by
N_C(x) = {w ∈ H : ⟨w, y − x⟩ ≤ 0, ∀ y ∈ C}.
Let g : C → ℝ be a convex function. The subdifferential of g at x ∈ C is defined as follows:
∂g(x) = {w ∈ H : g(y) − g(x) ≥ ⟨w, y − x⟩, ∀ y ∈ C}.
Lemma 1 ([42]). Let g : C → ℝ be a convex, subdifferentiable and lower semi-continuous function on C, where C is a nonempty, convex and closed subset of a Hilbert space H. An element x ∈ C is a minimizer of g if and only if 0 ∈ ∂g(x) + N_C(x), where ∂g(x) and N_C(x) denote the subdifferential of g at x and the normal cone of C at x, respectively.
Lemma 2 ([43]). For every x, y ∈ H and η ∈ ℝ, the following relation holds:
∥ηx + (1 − η)y∥² = η∥x∥² + (1 − η)∥y∥² − η(1 − η)∥x − y∥².
Lemma 3 ([44]). Let {a_n}, {b_n} and {c_n} be sequences in [0, +∞) such that
a_{n+1} ≤ a_n + b_n(a_n − a_{n−1}) + c_n, ∀ n ≥ 1, with Σ_{n=1}^{+∞} c_n < +∞,
and let b > 0 be such that 0 ≤ b_n ≤ b < 1 for all n ∈ ℕ. Then the following hold:
(i) Σ_{n=1}^{+∞} [a_n − a_{n−1}]₊ < +∞, where [s]₊ := max{s, 0};
(ii) lim_{n→+∞} a_n = a* ∈ [0, +∞).
Lemma 4 ([45]). Let {x_n} be a sequence in H and C ⊂ H such that the following conditions hold:
(i) for each x ∈ C, lim_{n→∞} ∥x_n − x∥ exists;
(ii) every sequentially weak cluster point of {x_n} lies in C.
Then {x_n} converges weakly to a point in C.

3. Main Results

In this section, we present an inertial method for solving pseudomonotone equilibrium problems involving a Lipschitz-type condition on the bifunction. The method is described in detail below.
Assumption 1.
Let the bifunction f : H × H → ℝ satisfy the following conditions:
(A1) f(y, y) = 0 for all y ∈ C, and f is pseudomonotone on C;
(A2) f satisfies the Lipschitz-type condition (2) on H with positive constants c₁ and c₂;
(A3) lim sup_{n→∞} f(x_n, y) ≤ f(x*, y) for every y ∈ C and every {x_n} ⊂ C satisfying x_n ⇀ x*;
(A4) f(x, ·) is convex and subdifferentiable on C for each fixed x ∈ H.
Remark 2.
Note that T_n represents a half-space and that C ⊂ T_n. Indeed, from the value of y_n, we have
y_n = arg min_{y∈C} {ξ_n f(w_n, y) + ½∥w_n − y∥²}.
Using Lemma 1, we obtain
0 ∈ ∂₂{ξ_n f(w_n, y) + ½∥w_n − y∥²}(y_n) + N_C(y_n),
where ∂₂f(x, ·) denotes the subdifferential of f(x, ·). Thus, there exist υ_n ∈ ∂₂f(w_n, y_n) and ω̄ ∈ N_C(y_n) such that
ξ_n υ_n + y_n − w_n + ω̄ = 0.
Thus, we have
⟨w_n − y_n, y − y_n⟩ = ξ_n⟨υ_n, y − y_n⟩ + ⟨ω̄, y − y_n⟩, ∀ y ∈ C.
Since ω̄ ∈ N_C(y_n), we have ⟨ω̄, y − y_n⟩ ≤ 0 for all y ∈ C. Therefore,
⟨w_n − y_n, y − y_n⟩ ≤ ξ_n⟨υ_n, y − y_n⟩, ∀ y ∈ C.
The above implies that
⟨w_n − ξ_n υ_n − y_n, y − y_n⟩ ≤ 0, ∀ y ∈ C.
This shows that C ⊂ T_n for every n ∈ ℕ.
Lemma 5.
If y_n = w_n in Algorithm 1, then w_n ∈ EP(f, C).
Algorithm 1 Inertial Explicit Extragradient Algorithm for Pseudomonotone EP
  • Initialization: Choose x_{−1}, x₀ ∈ H, ξ₀ > 0 and a nondecreasing sequence 0 ≤ θ_n ≤ θ < √5 − 2. Set
    w₀ = x₀ + θ₀(x₀ − x_{−1}).
  • Iterative steps: Assume ξ_n > 0 and x_{n−1}, x_n ∈ H are known for n ≥ 0.
  • Step 1: Compute
    y_n = arg min_{y∈C} {ξ_n f(w_n, y) + ½∥w_n − y∥²},
    where w_n = x_n + θ_n(x_n − x_{n−1}). If y_n = w_n, then stop: w_n is a solution of the equilibrium problem. Otherwise, go to Step 2.
  • Step 2: First construct the half-space
    T_n = {z ∈ H : ⟨w_n − ξ_n υ_n − y_n, z − y_n⟩ ≤ 0},
    where υ_n ∈ ∂₂f(w_n, y_n). Compute
    z_n = arg min_{y∈T_n} {ξ_n f(y_n, y) + ½∥w_n − y∥²}.
  • Step 3: Take β_n ∈ (0, 1] and compute
    x_{n+1} = (1 − β_n)w_n + β_n z_n.
  • Step 4: Take μ > 0 (as in Theorem 1), set d_n = f(w_n, z_n) − f(w_n, y_n) − f(y_n, z_n) and update
    ξ_{n+1} = min{ξ_n, μ(∥w_n − y_n∥² + ∥z_n − y_n∥²)/(2d_n)} if d_n > 0, and ξ_{n+1} = ξ_n otherwise.
    Set n := n + 1 and go back to Step 1.
Proof.
By the definition of y_n and Lemma 1, we have
0 ∈ ∂₂{ξ_n f(w_n, y) + ½∥w_n − y∥²}(y_n) + N_C(y_n).
Thus, there exist υ_n ∈ ∂₂f(w_n, y_n) and ω̄ ∈ N_C(y_n) such that
ξ_n υ_n + y_n − w_n + ω̄ = 0.
Thus, we have
⟨w_n − y_n, y − y_n⟩ = ξ_n⟨υ_n, y − y_n⟩ + ⟨ω̄, y − y_n⟩, ∀ y ∈ C.
Since w_n = y_n and ω̄ ∈ N_C(y_n), this implies that
ξ_n⟨υ_n, y − y_n⟩ ≥ 0. (3)
Since υ_n ∈ ∂₂f(w_n, y_n), the definition of the subdifferential gives
f(w_n, y) − f(w_n, y_n) ≥ ⟨υ_n, y − y_n⟩, ∀ y ∈ H. (4)
Combining expressions (3) and (4) with ξ_n > 0, we get
f(w_n, y) − f(w_n, y_n) ≥ 0.
Since y_n = w_n, condition (A1) implies that f(w_n, y) ≥ 0 for all y ∈ C. □
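The stepsize rule of Step 4 in Algorithm 1 is fully explicit and can be transcribed directly. A minimal sketch in Python (the bifunction f is supplied by the caller as a function of two vectors; all names are illustrative):

```python
def next_stepsize(xi_n, mu, f, w_n, y_n, z_n):
    """Step 4 of Algorithm 1:
    d = f(w_n, z_n) - f(w_n, y_n) - f(y_n, z_n);
    xi_{n+1} = min{xi_n, mu(||w_n - y_n||^2 + ||z_n - y_n||^2) / (2d)} if d > 0,
    and xi_{n+1} = xi_n otherwise."""
    sq = lambda u, v: sum((ui - vi) ** 2 for ui, vi in zip(u, v))  # ||u - v||^2
    d = f(w_n, z_n) - f(w_n, y_n) - f(y_n, z_n)
    if d > 0:
        return min(xi_n, mu * (sq(w_n, y_n) + sq(z_n, y_n)) / (2.0 * d))
    return xi_n
```

Since the update only takes a minimum with the previous value, the sequence {ξ_n} is nonincreasing, and no knowledge of the Lipschitz-type constants c₁, c₂ is required.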
Lemma 6.
Let f : H × H → ℝ be a bifunction satisfying conditions (A1)–(A4). Then, for q ∈ EP(f, C), we have
∥z_n − q∥² ≤ ∥w_n − q∥² − (1 − μξ_n/ξ_{n+1})∥w_n − y_n∥² − (1 − μξ_n/ξ_{n+1})∥z_n − y_n∥². (5)
Proof.
From the definition of z_n and Lemma 1, we have
0 ∈ ∂₂{ξ_n f(y_n, y) + ½∥w_n − y∥²}(z_n) + N_{T_n}(z_n).
Thus, there exist ω ∈ ∂₂f(y_n, z_n) and ω̄ ∈ N_{T_n}(z_n) such that
ξ_n ω + z_n − w_n + ω̄ = 0.
Therefore, we have
⟨w_n − z_n, y − z_n⟩ = ξ_n⟨ω, y − z_n⟩ + ⟨ω̄, y − z_n⟩, ∀ y ∈ T_n.
Since ω̄ ∈ N_{T_n}(z_n), we have ⟨ω̄, y − z_n⟩ ≤ 0 for all y ∈ T_n. This implies that
ξ_n⟨ω, y − z_n⟩ ≥ ⟨w_n − z_n, y − z_n⟩, ∀ y ∈ T_n. (6)
Since ω ∈ ∂₂f(y_n, z_n), we have
f(y_n, y) − f(y_n, z_n) ≥ ⟨ω, y − z_n⟩. (7)
Combining expressions (6) and (7), we obtain
ξ_n f(y_n, y) − ξ_n f(y_n, z_n) ≥ ⟨w_n − z_n, y − z_n⟩, ∀ y ∈ T_n. (8)
Substituting y = q in (8), we get
ξ_n f(y_n, q) − ξ_n f(y_n, z_n) ≥ ⟨w_n − z_n, q − z_n⟩. (9)
Since q ∈ EP(f, C), we have f(q, y_n) ≥ 0, which implies f(y_n, q) ≤ 0 due to the pseudomonotonicity of the bifunction f. Thus, from (9) we get
⟨w_n − z_n, z_n − q⟩ ≥ ξ_n f(y_n, z_n). (10)
From the definition of ξ_{n+1}, we get
f(w_n, z_n) − f(w_n, y_n) − f(y_n, z_n) ≤ μ(∥w_n − y_n∥² + ∥z_n − y_n∥²)/(2ξ_{n+1}),
which, after multiplying both sides by ξ_n > 0, gives
ξ_n f(y_n, z_n) ≥ ξ_n{f(w_n, z_n) − f(w_n, y_n)} − (μξ_n/(2ξ_{n+1}))∥w_n − y_n∥² − (μξ_n/(2ξ_{n+1}))∥z_n − y_n∥². (11)
From expressions (10) and (11), we obtain
⟨w_n − z_n, z_n − q⟩ ≥ ξ_n{f(w_n, z_n) − f(w_n, y_n)} − (μξ_n/(2ξ_{n+1}))∥w_n − y_n∥² − (μξ_n/(2ξ_{n+1}))∥z_n − y_n∥². (12)
Since z_n ∈ T_n, the definition of T_n gives
⟨w_n − ξ_n υ_n − y_n, z_n − y_n⟩ ≤ 0,
which implies that
⟨w_n − y_n, z_n − y_n⟩ ≤ ξ_n⟨υ_n, z_n − y_n⟩. (13)
Since υ_n ∈ ∂₂f(w_n, y_n), the definition of the subdifferential gives
f(w_n, y) − f(w_n, y_n) ≥ ⟨υ_n, y − y_n⟩, ∀ y ∈ H.
Substituting y = z_n in the above expression, we have
f(w_n, z_n) − f(w_n, y_n) ≥ ⟨υ_n, z_n − y_n⟩. (14)
From (13) and (14), we obtain
ξ_n{f(w_n, z_n) − f(w_n, y_n)} ≥ ⟨w_n − y_n, z_n − y_n⟩. (15)
Thus, expressions (12) and (15) imply
⟨w_n − z_n, z_n − q⟩ ≥ ⟨w_n − y_n, z_n − y_n⟩ − (μξ_n/(2ξ_{n+1}))∥w_n − y_n∥² − (μξ_n/(2ξ_{n+1}))∥z_n − y_n∥². (16)
We also have the following facts:
2⟨w_n − z_n, z_n − q⟩ = ∥w_n − q∥² − ∥z_n − w_n∥² − ∥z_n − q∥²,
2⟨y_n − w_n, y_n − z_n⟩ = ∥w_n − y_n∥² + ∥z_n − y_n∥² − ∥w_n − z_n∥².
From the above two facts and expression (16), we obtain
∥z_n − q∥² ≤ ∥w_n − q∥² − (1 − μξ_n/ξ_{n+1})∥w_n − y_n∥² − (1 − μξ_n/ξ_{n+1})∥z_n − y_n∥². □
Theorem 1.
The sequences {w_n}, {y_n} and {x_n} generated by Algorithm 1 converge weakly to a solution q of the problem (EP), provided that
0 < μ < (1/2 − 2θ − (1/2)θ²)/(1/2 − θ + (1/2)θ²) and 0 ≤ θ < √5 − 2.
Proof.
From Lemma 6, we have
∥z_n − q∥² ≤ ∥w_n − q∥² − (1 − μξ_n/ξ_{n+1})∥w_n − y_n∥² − (1 − μξ_n/ξ_{n+1})∥z_n − y_n∥² ≤ ∥w_n − q∥² − ½(1 − μξ_n/ξ_{n+1})∥z_n − w_n∥². (17)
From the definition of x_{n+1} and Lemma 2, we obtain
∥x_{n+1} − q∥² ≤ (1 − β_n)∥w_n − q∥² + β_n∥z_n − q∥². (18)
From expressions (17) and (18), we have
∥x_{n+1} − q∥² ≤ (1 − β_n)∥w_n − q∥² + β_n∥w_n − q∥² − (β_n/2)(1 − μξ_n/ξ_{n+1})∥z_n − w_n∥² = ∥w_n − q∥² − (1/(2β_n))(1 − μξ_n/ξ_{n+1})∥x_{n+1} − w_n∥² ≤ ∥w_n − q∥² − ½(1 − μξ_n/ξ_{n+1})∥x_{n+1} − w_n∥². (19)
The identity x_{n+1} − w_n = β_n(z_n − w_n), which follows from the definition of x_{n+1}, was used to obtain the last two relations. From the value of w_n, we obtain
∥w_n − q∥² = ∥x_n + θ_n(x_n − x_{n−1}) − q∥² = ∥(1 + θ_n)(x_n − q) − θ_n(x_{n−1} − q)∥² = (1 + θ_n)∥x_n − q∥² − θ_n∥x_{n−1} − q∥² + θ_n(1 + θ_n)∥x_n − x_{n−1}∥². (20)
By the definition of w_n and the Cauchy–Schwarz inequality, we get
∥x_{n+1} − w_n∥² = ∥x_{n+1} − x_n − θ_n(x_n − x_{n−1})∥²
= ∥x_{n+1} − x_n∥² + θ_n²∥x_n − x_{n−1}∥² − 2θ_n⟨x_{n+1} − x_n, x_n − x_{n−1}⟩
≥ ∥x_{n+1} − x_n∥² + θ_n²∥x_n − x_{n−1}∥² − 2θ_n∥x_{n+1} − x_n∥∥x_n − x_{n−1}∥
≥ ∥x_{n+1} − x_n∥² + θ_n²∥x_n − x_{n−1}∥² − θ_n∥x_{n+1} − x_n∥² − θ_n∥x_n − x_{n−1}∥²
= (1 − θ_n)∥x_{n+1} − x_n∥² + (θ_n² − θ_n)∥x_n − x_{n−1}∥². (21)
Combining expressions (19), (20) and (21), we obtain
∥x_{n+1} − q∥² ≤ (1 + θ_n)∥x_n − q∥² − θ_n∥x_{n−1} − q∥² + θ_n(1 + θ_n)∥x_n − x_{n−1}∥² − ϱ_n(1 − θ_n)∥x_{n+1} − x_n∥² − ϱ_n(θ_n² − θ_n)∥x_n − x_{n−1}∥² = (1 + θ_n)∥x_n − q∥² − θ_n∥x_{n−1} − q∥² − Q_n∥x_{n+1} − x_n∥² + R_n∥x_n − x_{n−1}∥², (22)
where ϱ_n := ½(1 − μξ_n/ξ_{n+1}), Q_n := ϱ_n(1 − θ_n) ≥ 0 and R_n := θ_n(1 + θ_n) − ϱ_n(θ_n² − θ_n).
Next, we set
Ψ_n := ∥x_n − q∥² − θ_n∥x_{n−1} − q∥² + R_n∥x_n − x_{n−1}∥² (23)
and compute
Ψ_{n+1} − Ψ_n = ∥x_{n+1} − q∥² − θ_{n+1}∥x_n − q∥² + R_{n+1}∥x_{n+1} − x_n∥² − ∥x_n − q∥² + θ_n∥x_{n−1} − q∥² − R_n∥x_n − x_{n−1}∥² ≤ ∥x_{n+1} − q∥² − (1 + θ_n)∥x_n − q∥² + θ_n∥x_{n−1} − q∥² + R_{n+1}∥x_{n+1} − x_n∥² − R_n∥x_n − x_{n−1}∥², (24)
where the inequality uses that {θ_n} is nondecreasing. Combining expressions (22) and (24), we obtain
Ψ_{n+1} − Ψ_n ≤ −Q_n∥x_{n+1} − x_n∥² + R_{n+1}∥x_{n+1} − x_n∥² = −(Q_n − R_{n+1})∥x_{n+1} − x_n∥². (25)
Next, we compute
Q_n − R_{n+1} = ϱ_n(1 − θ_n) − θ_{n+1}(1 + θ_{n+1}) + ϱ_{n+1}(θ_{n+1}² − θ_{n+1})
≥ ϱ_n(1 − θ_{n+1}) − θ_{n+1}(1 + θ_{n+1}) + ϱ_n(θ_{n+1}² − θ_{n+1})
= ϱ_n(1 − θ_{n+1})² − θ_{n+1}(1 + θ_{n+1})
≥ ϱ_n(1 − θ)² − θ(1 + θ)
= (ϱ_n − θ − θ²) + ϱ_nθ² − 2ϱ_nθ
= 1/2 − 2θ − (1/2)θ² − μ[ξ_n/(2ξ_{n+1}) − (ξ_n/ξ_{n+1})θ + (ξ_n/(2ξ_{n+1}))θ²]. (26)
By hypothesis,
0 < μ < (1/2 − 2θ − (1/2)θ²)/(1/2 − θ + (1/2)θ²) and 0 ≤ θ < √5 − 2.
Since ξ_n/ξ_{n+1} → 1, for any
ε ∈ (0, 1/2 − 2θ − (1/2)θ² − μ[1/2 − θ + (1/2)θ²])
there exists n₀ ≥ 1 such that expression (26) implies
Q_n − R_{n+1} ≥ ε, ∀ n ≥ n₀. (27)
Combining expressions (25) and (27) implies that, for all n ≥ n₀,
Ψ_{n+1} − Ψ_n ≤ −(Q_n − R_{n+1})∥x_{n+1} − x_n∥² ≤ −ε∥x_{n+1} − x_n∥² ≤ 0. (28)
Thus, the sequence {Ψ_n} is nonincreasing for n ≥ n₀. From the definition of Ψ_{n+1}, we have
Ψ_{n+1} = ∥x_{n+1} − q∥² − θ_{n+1}∥x_n − q∥² + R_{n+1}∥x_{n+1} − x_n∥² ≥ −θ_{n+1}∥x_n − q∥². (29)
From the definition of Ψ_n, we have
Ψ_n = ∥x_n − q∥² − θ_n∥x_{n−1} − q∥² + R_n∥x_n − x_{n−1}∥² ≥ ∥x_n − q∥² − θ_n∥x_{n−1} − q∥². (30)
Expression (30) implies that, for n ≥ n₀,
∥x_n − q∥² ≤ Ψ_n + θ_n∥x_{n−1} − q∥² ≤ Ψ_{n₀} + θ∥x_{n−1} − q∥² ≤ ⋯ ≤ Ψ_{n₀}(θ^{n−n₀−1} + ⋯ + 1) + θ^{n−n₀}∥x_{n₀} − q∥² ≤ Ψ_{n₀}/(1 − θ) + θ^{n−n₀}∥x_{n₀} − q∥². (31)
Combining expressions (29) and (31), we obtain
−Ψ_{n+1} ≤ θ_{n+1}∥x_n − q∥² ≤ θ∥x_n − q∥² ≤ θΨ_{n₀}/(1 − θ) + θ^{n−n₀+1}∥x_{n₀} − q∥². (32)
It follows from (28) and (32) that
ε Σ_{n=n₀}^{k} ∥x_{n+1} − x_n∥² ≤ Ψ_{n₀} − Ψ_{k+1} ≤ Ψ_{n₀} + θΨ_{n₀}/(1 − θ) + θ^{k−n₀+1}∥x_{n₀} − q∥² ≤ Ψ_{n₀}/(1 − θ) + ∥x_{n₀} − q∥². (33)
Letting k → ∞ in (33) gives Σ_{n=1}^{∞} ∥x_{n+1} − x_n∥² < +∞, which implies that
∥x_{n+1} − x_n∥ → 0 as n → ∞. (34)
Since ∥x_{n+1} − w_n∥ ≤ ∥x_{n+1} − x_n∥ + θ_n∥x_n − x_{n−1}∥, expression (34) yields
∥x_{n+1} − w_n∥ → 0 as n → ∞. (35)
Moreover, applying Lemma 3 to expression (22), together with Σ_{n=1}^{∞} ∥x_{n+1} − x_n∥² < +∞, implies that
lim_{n→∞} ∥x_n − q∥² = b. (36)
Thus, expressions (20), (34) and (36) imply that
lim_{n→∞} ∥w_n − q∥² = b. (37)
We also have
0 ≤ ∥x_n − w_n∥ ≤ ∥x_n − x_{n+1}∥ + ∥x_{n+1} − w_n∥ → 0 as n → ∞. (38)
Next, we show that lim_{n→∞} ∥y_n − q∥² = b. Using Lemma 6, for n ≥ n₀ we have
(1 − μξ_n/ξ_{n+1})∥w_n − y_n∥² ≤ ∥w_n − q∥² − ∥x_{n+1} − q∥² ≤ (∥w_n − q∥ + ∥x_{n+1} − q∥)(∥w_n − q∥ − ∥x_{n+1} − q∥) ≤ (∥w_n − q∥ + ∥x_{n+1} − q∥)∥x_{n+1} − w_n∥ → 0 as n → ∞, (39)
and hence ∥w_n − y_n∥ → 0 (similarly, ∥z_n − y_n∥ → 0). Further,
0 ≤ ∥x_n − y_n∥ ≤ ∥x_n − w_n∥ + ∥w_n − y_n∥ → 0 as n → ∞. (40)
Combining expressions (34), (36) and (40), we obtain
∥x_{n+1} − y_n∥ → 0 as n → ∞ and lim_{n→∞} ∥y_n − q∥² = b. (41)
The above discussion implies that the sequences {x_n}, {w_n} and {y_n} are bounded and that, for every q ∈ EP(f, C), the limits of ∥w_n − q∥, ∥x_n − q∥ and ∥z_n − q∥ exist. Next, we show that every sequentially weak limit point of the sequence {x_n} belongs to EP(f, C). To this end, let z be a sequentially weak limit point of {x_n}, i.e., there is a subsequence {x_{n_k}} of {x_n} converging weakly to z. Then {y_{n_k}} also converges weakly to z ∈ C, due to ∥x_n − y_n∥ → 0. It remains to prove that z ∈ EP(f, C). By (8), the definition of ξ_{n+1} and (15), we have
ξ_{n_k} f(y_{n_k}, y) ≥ ξ_{n_k} f(y_{n_k}, z_{n_k}) + ⟨w_{n_k} − z_{n_k}, y − z_{n_k}⟩
≥ ξ_{n_k}{f(w_{n_k}, z_{n_k}) − f(w_{n_k}, y_{n_k})} − (μξ_{n_k}/(2ξ_{n_k+1}))∥w_{n_k} − y_{n_k}∥² − (μξ_{n_k}/(2ξ_{n_k+1}))∥y_{n_k} − z_{n_k}∥² + ⟨w_{n_k} − z_{n_k}, y − z_{n_k}⟩
≥ ⟨w_{n_k} − y_{n_k}, z_{n_k} − y_{n_k}⟩ − (μξ_{n_k}/(2ξ_{n_k+1}))∥w_{n_k} − y_{n_k}∥² − (μξ_{n_k}/(2ξ_{n_k+1}))∥y_{n_k} − z_{n_k}∥² + ⟨w_{n_k} − z_{n_k}, y − z_{n_k}⟩, (42)
where y is an arbitrary element of T_n. It follows from (35), (39), (40) and the boundedness of {x_n} that the right-hand side of the last inequality tends to zero. Using ξ_{n_k} > 0, condition (A3) and y_{n_k} ⇀ z, we have
0 ≤ lim sup_{k→∞} f(y_{n_k}, y) ≤ f(z, y), ∀ y ∈ T_n. (43)
Since C ⊂ T_n and z ∈ C, this implies f(z, y) ≥ 0 for all y ∈ C, showing that z ∈ EP(f, C). Thus, Lemma 4 guarantees that {w_n}, {x_n} and {y_n} converge weakly to q as n → ∞. ☐
Note: Consider the cost bifunction
f(x, y) := ⟨F(x), y − x⟩, ∀ x, y ∈ C, (44)
where F : C → H is an operator. Then, problem (1) translates into the variational inequality problem, and the Lipschitz constant of F is related to the Lipschitz-type constants through L = 2c₁ = 2c₂. From the value of y_n in Algorithm 1 and expression (44), we have
y_n = arg min_{y∈C} {ξ_n f(w_n, y) + ½∥w_n − y∥²}
= arg min_{y∈C} {ξ_n⟨F(w_n), y − w_n⟩ + ½∥w_n − y∥² + (ξ_n²/2)∥F(w_n)∥² − (ξ_n²/2)∥F(w_n)∥²}
= arg min_{y∈C} {½∥y − (w_n − ξ_n F(w_n))∥²}
= P_C(w_n − ξ_n F(w_n)). (45)
The value of z_n translates into
z_n = P_{T_n}(w_n − ξ_n F(y_n)).
Suppose that F satisfies the following conditions:
(F1) F is Lipschitz continuous on C, i.e., there exists a constant L > 0 such that ∥F(x) − F(y)∥ ≤ L∥x − y∥, ∀ x, y ∈ C;
(F2) the solution set VI(F, C) is nonempty and F is a pseudomonotone operator on C;
(F3) lim sup_{n→∞} ⟨F(u_n), y − u_n⟩ ≤ ⟨F(u*), y − u*⟩ for every y ∈ C and every {u_n} ⊂ C satisfying u_n ⇀ u*.
Corollary 1.
Suppose that F : C → H satisfies conditions (F1)–(F3), and let {w_n}, {x_n} and {y_n} be the sequences generated in the following way:
(i) Choose x_{−1}, x₀ ∈ H and a nondecreasing sequence 0 ≤ θ_n ≤ θ < √5 − 2. Set
w₀ = x₀ + θ₀(x₀ − x_{−1}).
(ii) Given ξ_n > 0 and x_{n−1}, x_n ∈ H for n ≥ 0, compute
w_n = x_n + θ_n(x_n − x_{n−1}),
y_n = P_C(w_n − ξ_n F(w_n)),
z_n = P_{T_n}(w_n − ξ_n F(y_n)),
x_{n+1} = (1 − β_n)w_n + β_n z_n,
where β_n ∈ (0, 1] and T_n = {z ∈ H : ⟨w_n − ξ_n F(w_n) − y_n, z − y_n⟩ ≤ 0}.
(iii) Moreover, assume that
0 < μ < (1/2 − 2θ − (1/2)θ²)/(1/2 − θ + (1/2)θ²) and 0 ≤ θ < √5 − 2,
and update the stepsize ξ_{n+1} as follows:
ξ_{n+1} = min{ξ_n, μ(∥w_n − y_n∥² + ∥z_n − y_n∥²)/(2⟨F(w_n) − F(y_n), z_n − y_n⟩)} if ⟨F(w_n) − F(y_n), z_n − y_n⟩ > 0, and ξ_{n+1} = ξ_n otherwise.
Then the sequences {w_n}, {x_n} and {y_n} converge weakly to a solution q of VI(F, C).
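For feasible sets with an easy projection, the scheme in Corollary 1 can be implemented directly. The following is a minimal self-contained sketch (our own, not the authors' code) for a box constraint set, with P_C computed by clipping and P_{T_n} by the standard closed-form half-space projection; when w_n − ξ_nF(w_n) − y_n = 0 the set T_n is the whole space and no projection is needed:

```python
def solve_vi(F, project_C, x0, xi0=0.5, mu=0.22, theta=0.2, beta=0.85, iters=300):
    """Inertial subgradient-extragradient iteration of Corollary 1 for VI(F, C)."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    x_prev, x, xi = list(x0), list(x0), xi0           # x_{-1} = x_0
    for _ in range(iters):
        w = [xn + theta * (xn - xp) for xn, xp in zip(x, x_prev)]
        Fw = F(w)
        y = project_C([wi - xi * fi for wi, fi in zip(w, Fw)])   # y = P_C(w - xi F(w))
        Fy = F(y)
        # z = P_{T}(w - xi F(y)),  T = {z : <a, z - y> <= 0},  a = w - xi F(w) - y
        a = [wi - xi * fi - yi for wi, fi, yi in zip(w, Fw, y)]
        v = [wi - xi * fi for wi, fi in zip(w, Fy)]
        s = dot(a, [vi - yi for vi, yi in zip(v, y)])
        na = dot(a, a)
        z = v if (na < 1e-16 or s <= 0) else [vi - (s / na) * ai for vi, ai in zip(v, a)]
        x_prev, x = x, [(1 - beta) * wi + beta * zi for wi, zi in zip(w, z)]
        # stepsize update with d = <F(w) - F(y), z - y>
        d = dot([p - r for p, r in zip(Fw, Fy)], [zi - yi for zi, yi in zip(z, y)])
        if d > 0:
            num = sum((wi - yi) ** 2 for wi, yi in zip(w, y)) \
                + sum((zi - yi) ** 2 for zi, yi in zip(z, y))
            xi = min(xi, mu * num / (2.0 * d))
    return x
```

As an illustrative run (our test problem, not from the paper): with F(x) = x, C = [−5, 5]² and the parameter choices μ = 0.22, θ_n = 0.20, β_n = 0.85 used in Section 4, the unique solution of VI(F, C) is the origin, and the iterates shrink toward it geometrically.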

4. Numerical Experiments

For the numerical experiments, we analyzed four examples in different dimensions to examine the effectiveness of the proposed method. We used the Matlab quadprog routine (https://www.mathworks.com/help/optim/ug/quadprog.html) to solve the optimization subproblems. All computations were performed in Matlab R2018b on a PC with an Intel(R) Core(TM) i5-6200 CPU @ 2.30 GHz and 8.00 GB RAM.
Example 1.
Assume that there are n companies producing the same commodity; this model is an extension of the Nash–Cournot oligopolistic equilibrium model [36]. Let x be a vector whose entry x_i indicates the quantity of the commodity produced by company i. The price function takes the form P_i(S) = ϕ_i − ψ_iS, where ϕ_i > 0, ψ_i > 0 and S = Σ_{i=1}^{n} x_i is the total quantity produced by all companies. The profit of firm i is F_i(x) = P_i(S)x_i − t_i(x_i), where t_i(x_i) is the tax and fee for producing x_i. Suppose that C_i = [x_i^min, x_i^max] is the strategy set of firm i, so that the strategy set of the whole model takes the form C := C₁ × C₂ × ⋯ × C_n. Each company seeks to attain its maximum profit, regarding the production of all the other companies as an input parameter. The methodology commonly used for this type of model is based on the well-known Nash equilibrium concept. A point x* ∈ C = C₁ × C₂ × ⋯ × C_n is an equilibrium point of the model if
F_i(x*) ≥ F_i(x*[x_i]), ∀ x_i ∈ C_i, i = 1, 2, …, n,
where x*[x_i] denotes the vector obtained from x* by replacing x_i* with x_i. Finally, setting f(x, y) := φ(x, y) − φ(x, x) with φ(x, y) := −Σ_{i=1}^{n} F_i(x[y_i]), the problem of finding a Nash equilibrium point of the model can be formulated as follows:
Find x* ∈ C such that f(x*, y) ≥ 0, ∀ y ∈ C.
Following [36,46], the bifunction f : C × C → ℝ can be written in the form
f(x, y) = ⟨Px + Qy + q, y − x⟩,
where C ⊂ ℝⁿ is defined as
C := {x ∈ ℝⁿ : −5 ≤ x_i ≤ 5}.
The matrices P, Q are taken at random (choose two random diagonal matrices A₁ and A₂ with entries from [0, 2] and [−2, 0], respectively; two random orthogonal matrices B₁ and B₂ then generate a positive semidefinite matrix M₁ = B₁A₁B₁ᵀ and a negative semidefinite matrix M₂ = B₂A₂B₂ᵀ; finally, set Q = M₁ + M₁ᵀ, S = M₂ + M₂ᵀ and P = Q − S), and the entries of the vector q are taken from the interval [−n, n] (for more details see [36,47]). The starting points are x_{−1} = x₀ = (1, 1, …, 1)ᵀ ∈ ℝⁿ and μ = 0.22, θ_n = 0.20 with β_n = 0.85. Figure 1, Figure 2, Figure 3 and Figure 4 and Table 1 demonstrate the numerical performance of Algorithm 1 (InExEgA) in comparison with Algorithm 1 (ExEgA) in [16] relative to the number of iterations.
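The random data of Example 1 can be generated exactly as described; a sketch (assuming numpy, with random orthogonal matrices obtained from a QR factorization), together with a quick check that P − Q = −S is positive semidefinite, which is what makes f monotone:

```python
import numpy as np

def make_example1_data(n, rng=np.random.default_rng(0)):
    """Generate P, Q, q as in Example 1:
    M1 = B1 A1 B1^T (PSD), M2 = B2 A2 B2^T (NSD),
    Q = M1 + M1^T, S = M2 + M2^T, P = Q - S, q in [-n, n]^n."""
    A1 = np.diag(rng.uniform(0.0, 2.0, n))    # diagonal, entries in [0, 2]
    A2 = np.diag(rng.uniform(-2.0, 0.0, n))   # diagonal, entries in [-2, 0]
    B1 = np.linalg.qr(rng.standard_normal((n, n)))[0]  # random orthogonal
    B2 = np.linalg.qr(rng.standard_normal((n, n)))[0]
    M1 = B1 @ A1 @ B1.T                       # positive semidefinite
    M2 = B2 @ A2 @ B2.T                       # negative semidefinite
    Q, S = M1 + M1.T, M2 + M2.T
    P = Q - S
    q = rng.uniform(-n, n, n)
    return P, Q, q
```

With f(x, y) = ⟨Px + Qy + q, y − x⟩, one has f(x, y) + f(y, x) = −⟨(P − Q)(x − y), x − y⟩ ≤ 0, so the construction indeed yields a monotone bifunction.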
Example 2.
Consider the following fractional programming problem, as in [48]:
min g(x) = (xᵀQx + aᵀx + a₀)/(bᵀx + b₀)
subject to x ∈ C := {x ∈ ℝ⁴ : bᵀx + b₀ > 0}, where a₀ = 2, b₀ = 4 and
Q = (5 1 2 0; 1 5 1 3; 2 1 3 0; 0 3 0 5), a = (1, 2, 2, 1)ᵀ, b = (2, 1, 1, 0)ᵀ.
It is easy to verify that Q is symmetric and positive definite on ℝ⁴ and consequently g is pseudoconvex on C. We minimize g over C := {x ∈ ℝ⁴ : 1 ≤ x_i ≤ 10, i = 1, 2, 3, 4} using both algorithms with the bifunction f(x, y) = ⟨G(x), y − x⟩ from C × C to ℝ with unknown Lipschitz constants, where
G(x) := ∇g(x) = [(bᵀx + b₀)(2Qx + a) − b(xᵀQx + aᵀx + a₀)]/(bᵀx + b₀)².
This problem has a unique solution x* = (1, 1, 1, 1)ᵀ ∈ C. The initial points are x_{−1} = x₀ = (1, 1, 1, 1)ᵀ ∈ ℝ⁴ and μ = 0.22, θ_n = 0.20, β_n = 0.85, ξ₀ = 0.5. Figure 5 and Figure 6 demonstrate the numerical performance of Algorithm 1 (InExEgA) in comparison with Algorithm 1 (ExEgA) in [16] in terms of number of iterations and elapsed time, respectively.
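As a sanity check on the quotient-rule formula for G = ∇g, one can compare it against central finite differences at an arbitrary test point. A sketch (the data below follows our reconstruction of the example and is only illustrative; the check itself is valid for any choice of Q, a, b, a₀, b₀):

```python
# Fractional objective g(x) = (x^T Q x + a^T x + a0) / (b^T x + b0)
# and its gradient G(x) by the quotient rule, verified by finite differences.
Q = [[5.0, 1.0, 2.0, 0.0],
     [1.0, 5.0, 1.0, 3.0],
     [2.0, 1.0, 3.0, 0.0],
     [0.0, 3.0, 0.0, 5.0]]
a, b, a0, b0 = [1.0, 2.0, 2.0, 1.0], [2.0, 1.0, 1.0, 0.0], 2.0, 4.0

dot = lambda u, v: sum(x * y for x, y in zip(u, v))
matvec = lambda M, v: [dot(row, v) for row in M]

def g(x):
    return (dot(x, matvec(Q, x)) + dot(a, x) + a0) / (dot(b, x) + b0)

def G(x):
    # G(x) = [(b^T x + b0)(2Qx + a) - b (x^T Q x + a^T x + a0)] / (b^T x + b0)^2
    den = dot(b, x) + b0
    num = dot(x, matvec(Q, x)) + dot(a, x) + a0
    grad_num = [2.0 * t + ai for t, ai in zip(matvec(Q, x), a)]  # grad of numerator
    return [(den * gn - bi * num) / den ** 2 for gn, bi in zip(grad_num, b)]

def G_fd(x, h=1e-6):
    """Central finite-difference approximation of the gradient of g."""
    out = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        out.append((g(xp) - g(xm)) / (2.0 * h))
    return out
```

The analytic gradient and the finite-difference gradient agree to high accuracy at interior points of C, which confirms the formula (note that ∇(xᵀQx) = 2Qx because Q is symmetric).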
Example 3.
Let the bifunction f be defined on the convex set C by
f(x, y) = ⟨(BBᵀ + S + D)x, y − x⟩,
where B is an m × m matrix, S is an m × m skew-symmetric matrix and D is an m × m diagonal matrix whose diagonal entries are nonnegative. The feasible set C ⊂ ℝᵐ is closed and convex, defined as
C = {x ∈ ℝᵐ : Ax ≤ b},
where A is an l × m matrix and b is a nonnegative vector. The bifunction f is clearly monotone and its Lipschitz-type constants are c₁ = c₂ = ∥BBᵀ + S + D∥/2. The initial points are x_{−1} = x₀ = (1, 1, …, 1)ᵀ ∈ ℝᵐ and μ = 0.22, θ_n = 0.20 with β_n = 0.85. Figure 7, Figure 8, Figure 9, Figure 10, Figure 11 and Figure 12 and Table 2 demonstrate the numerical performance of Algorithm 1 in comparison with Algorithm 1 in [16] in terms of number of iterations and elapsed time in seconds.
Example 4.
Consider a bifunction f : C × C → ℝ defined by
f(x, y) = ⟨F(x), y − x⟩,
with unknown Lipschitz constants c₁, c₂, where C = {x ∈ ℝ⁴ : 1 ≤ x_i ≤ 5, i = 1, 2, 3, 4} and
F(x) = (x₁ + x₂ + x₃ + x₄ − 4x₂x₃x₄, x₁ + x₂ + x₃ + x₄ − 4x₁x₃x₄, x₁ + x₂ + x₃ + x₄ − 4x₁x₂x₄, x₁ + x₂ + x₃ + x₄ − 4x₁x₂x₃)ᵀ.
The initial values are x_{−1} = x₀ = (1, 1, 1, 1)ᵀ ∈ ℝ⁴ and μ = 0.22, θ_n = 0.20 with β_n = 0.85. Figure 13 and Figure 14 demonstrate the numerical performance of Algorithm 1 in comparison with Algorithm 1 in [16].

5. Conclusions

This paper proposed an inertial extragradient-like method for solving equilibrium problems with a pseudomonotone bifunction satisfying a Lipschitz-type condition. A stepsize rule was presented that does not depend on knowledge of the Lipschitz-type constants. Several numerical experiments were reported to demonstrate the numerical behavior of our method and to compare it with other methods. These experiments indicate that inertial effects generally improve the performance of the iterative sequences in this setting.

Author Contributions

Conceptualization, P.Y., H.u.R. and P.K.; Methodology, P.Y., H.u.R., P.K. and A.H.I.; Investigation, H.u.R., P.K. and A.H.I.; Writing–original draft preparation, P.Y., H.u.R., P.K. and A.H.I.; Writing–review and editing, P.Y., H.u.R. and P.K.; Visualization, P.Y., P.K. and A.H.I.; Software, H.u.R., P.K. and A.H.I.; Supervision, P.K.; Funding Acquisition, P.K. All authors have read and agreed to the published version of the manuscript.

Funding

This project was supported by the Center of Excellence in Theoretical and Computational Science (TaCS-CoE), Faculty of Science, KMUTT. The first author was financially supported by the Research Professional Development Project under the scholarship of Rajabhat Rajanagarindra University (RRU). The third and the fourth authors were supported by the “Petchra Pra Jom Klao Ph.D. Research Scholarship” from King Mongkut’s University of Technology Thonburi (Grant Nos. 16/2561 and 16/2018, respectively).

Acknowledgments

The authors have received support from the “Petchra Pra Jom Klao Ph.D. Research Scholarship from King Mongkut’s University of Technology Thonburi”. We are very grateful to the editor and the anonymous referees for their valuable and useful comments, which helped improve the quality of this work.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Blum, E.; Oettli, W. From optimization and variational inequalities to equilibrium problems. Math. Student 1994, 63, 123–145.
  2. Muu, L.D.; Oettli, W. Convergence of an adaptive penalty scheme for finding constrained equilibria. Nonlinear Anal. Theory Methods Appl. 1992, 18, 1159–1166.
  3. Facchinei, F.; Pang, J.S. Finite-Dimensional Variational Inequalities and Complementarity Problems; Springer Science & Business Media: Berlin, Germany, 2007.
  4. Fan, K. A Minimax Inequality and Applications, Inequalities III; Shisha, O., Ed.; Academic Press: New York, NY, USA, 1972.
  5. Hieu, D.V. Projected subgradient algorithms on systems of equilibrium problems. Optim. Lett. 2017, 12, 551–566.
  6. Scheimberg, S.; Santos, P. A relaxed projection method for finite-dimensional equilibrium problems. Optimization 2011, 60, 1193–1208.
  7. Muu, L.D.; Quoc, T.D. Regularization Algorithms for Solving Monotone Ky Fan Inequalities with Application to a Nash-Cournot Equilibrium Model. J. Optim. Theory Appl. 2009, 142, 185–204.
  8. ur Rehman, H.; Kumam, P.; Argyros, I.K.; Deebani, W.; Kumam, W. Inertial Extra-Gradient Method for Solving a Family of Strongly Pseudomonotone Equilibrium Problems in Real Hilbert Spaces with Application in Variational Inequality Problem. Symmetry 2020, 12, 503.
  9. Moudafi, A. Proximal point algorithm extended to equilibrium problems. J. Nat. Geometry 1999, 15, 91–100.
  10. Flåm, S.D.; Antipin, A.S. Equilibrium programming using proximal-like algorithms. Math. Program. 1996, 78, 29–41.
  11. Quoc, T.D.; Anh, P.N.; Muu, L.D. Dual extragradient algorithms extended to equilibrium problems. J. Glob. Optim. 2011, 52, 139–159.
  12. Lyashko, S.I.; Semenov, V.V. A New Two-Step Proximal Algorithm of Solving the Problem of Equilibrium Programming. In Optimization and Its Applications in Control and Data Sciences; Springer International Publishing: Berlin, Germany, 2016; pp. 315–325.
  13. ur Rehman, H.; Kumam, P.; Cho, Y.J.; Yordsorn, P. Weak convergence of explicit extragradient algorithms for solving equilibrium problems. J. Inequalities Appl. 2019, 2019.
  14. Anh, P.N.; Hai, T.N.; Tuan, P.M. On ergodic algorithms for equilibrium problems. J. Glob. Optim. 2015, 64, 179–195.
  15. ur Rehman, H.; Kumam, P.; Argyros, I.K.; Shutaywi, M.; Shah, Z. Optimization Based Methods for Solving the Equilibrium Problems with Applications in Variational Inequality Problems and Solution of Nash Equilibrium Models. Mathematics 2020, 8, 822.
  16. Hieu, D.V.; Quy, P.K.; Vy, L.V. Explicit iterative algorithms for solving equilibrium problems. Calcolo 2019, 56.
  17. Hieu, D.V. New extragradient method for a class of equilibrium problems in Hilbert spaces. Appl. Anal. 2017, 97, 811–824.
  18. ur Rehman, H.; Kumam, P.; Cho, Y.J.; Suleiman, Y.I.; Kumam, W. Modified Popov’s explicit iterative algorithms for solving pseudomonotone equilibrium problems. Optim. Methods Softw. 2020, 1–32.
  19. ur Rehman, H.; Kumam, P.; Abubakar, A.B.; Cho, Y.J. The extragradient algorithm with inertial effects extended to equilibrium problems. Comput. Appl. Math. 2020, 39.
  20. ur Rehman, H.; Kumam, P.; Kumam, W.; Shutaywi, M.; Jirakitpuwapat, W. The Inertial Sub-Gradient Extra-Gradient Method for a Class of Pseudo-Monotone Equilibrium Problems. Symmetry 2020, 12, 463.
  21. Hieu, D.V. Convergence analysis of a new algorithm for strongly pseudomonotone equilibrium problems. Numer. Algorithms 2017, 77, 983–1001.
  22. Hieu, D.V.; Strodiot, J.J. Strong convergence theorems for equilibrium problems and fixed point problems in Banach spaces. J. Fixed Point Theory Appl. 2018, 20.
  23. ur Rehman, H.; Kumam, P.; Argyros, I.K.; Alreshidi, N.A.; Kumam, W.; Jirakitpuwapat, W. A Self-Adaptive Extra-Gradient Methods for a Family of Pseudomonotone Equilibrium Programming with Application in Different Classes of Variational Inequality Problems. Symmetry 2020, 12, 523.
  24. Hieu, D.V.; Gibali, A. Strong convergence of inertial algorithms for solving equilibrium problems. Optim. Lett. 2019.
  25. Abubakar, J.; Kumam, P.; ur Rehman, H.; Ibrahim, A.H. Inertial Iterative Schemes with Variable Step Sizes for Variational Inequality Problem Involving Pseudomonotone Operator. Mathematics 2020, 8, 609.
  26. Abubakar, J.; Sombut, K.; ur Rehman, H.; Ibrahim, A.H. An Accelerated Subgradient Extragradient Algorithm for Strongly Pseudomonotone Variational Inequality Problems. Thai J. Math. 2019, 18, 166–187.
  27. ur Rehman, H.; Kumam, P.; Shutaywi, M.; Alreshidi, N.A.; Kumam, W. Inertial Optimization Based Two-Step Methods for Solving Equilibrium Problems with Applications in Variational Inequality Problems and Growth Control Equilibrium Models. Energies 2020, 13, 3292.
  28. Iusem, A.N.; Sosa, W. On the proximal point method for equilibrium problems in Hilbert spaces. Optimization 2010, 59, 1259–1274.
  29. Martinet, B. Brève communication. Régularisation d’inéquations variationnelles par approximations successives. Revue Française D’inf. Rech. Opérationnelle. Série Rouge 1970, 4, 154–158.
  30. Rockafellar, R.T. Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14, 877–898.
  31. Konnov, I. Application of the Proximal Point Method to Nonmonotone Equilibrium Problems. J. Optim. Theory Appl. 2003, 119, 317–333.
  32. Cohen, G. Auxiliary problem principle and decomposition of optimization problems. J. Optim. Theory Appl. 1980, 32, 277–305.
  33. Cohen, G. Auxiliary problem principle extended to variational inequalities. J. Optim. Theory Appl. 1988, 59, 325–333.
  34. Mastroeni, G. On Auxiliary Principle for Equilibrium Problems. In Nonconvex Optimization and Its Applications; Springer: Berlin, Germany, 2003; pp. 289–298.
  35. Korpelevich, G. The extragradient method for finding saddle points and other problems. Matecon 1976, 12, 747–756.
  36. Quoc, T.D.; Muu, L.D.; Nguyen, V.H. Extragradient algorithms extended to equilibrium problems. Optimization 2008, 57, 749–776.
  37. Censor, Y.; Gibali, A.; Reich, S. The Subgradient Extragradient Method for Solving Variational Inequalities in Hilbert Space. J. Optim. Theory Appl. 2010, 148, 318–335.
  38. Goebel, K.; Reich, S. Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. 1984. Available online: https://www.researchgate.net/publication/248772020_Uniform_Convexity_Hyperbolic_Geometry_and_Nonexpansive_Mappings (accessed on 10 June 2020).
  39. Minty, G.J. Monotone (nonlinear) operators in Hilbert space. Duke Math. J. 1962, 29, 341–346.
  40. Karamardian, S.; Schaible, S. Seven kinds of monotone maps. J. Optim. Theory Appl. 1990, 66, 37–46.
  41. Bianchi, M.; Schaible, S. Generalized monotone bifunctions and equilibrium problems. J. Optim. Theory Appl. 1996, 90, 31–43.
  42. van Tiel, J. Convex Analysis: An Introductory Text, 1st ed.; Wiley: New York, NY, USA, 1984.
  43. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd ed.; CMS Books in Mathematics; Springer: Berlin, Germany, 2017.
  44. Alvarez, F.; Attouch, H. An Inertial Proximal Method for Maximal Monotone Operators via Discretization of a Nonlinear Oscillator with Damping. Set-Valued Anal. 2001, 9, 3–11.
  45. Opial, Z. Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Amer. Math. Soc. 1967, 73, 591–598.
  46. Hieu, D.V. Parallel extragradient-proximal methods for split equilibrium problems. Math. Model. Anal. 2016, 21, 478–501.
  47. ur Rehman, H.; Pakkaranang, N.; Hussain, A.; Wairojjana, N. A modified extra-gradient method for a family of strongly pseudomonotone equilibrium problems in real Hilbert spaces. J. Math. Comput. Sci. 2020, 22, 38–48.
  48. Hu, X.; Wang, J. Solving Pseudomonotone Variational Inequalities and Pseudoconvex Optimization Problems Using the Projection Neural Network. IEEE Trans. Neural Netw. 2006, 17, 1487–1499.
Figure 1. Comparison of Algorithm 1 with Algorithm 1 in [16] in terms of iterations when n = 5.
Figure 2. Comparison of Algorithm 1 with Algorithm 1 in [16] in terms of iterations when n = 10.
Figure 3. Comparison of Algorithm 1 with Algorithm 1 in [16] in terms of iterations when n = 20.
Figure 4. Comparison of Algorithm 1 with Algorithm 1 in [16] in terms of iterations when n = 50.
Figure 5. Comparison of Algorithm 1 with Algorithm 1 in [16]; the numbers of iterations are 204 and 175, respectively.
Figure 6. Comparison of Algorithm 1 with Algorithm 1 in [16]; the elapsed times are 2.9404 and 0.9194 s, respectively.
Figure 7. Comparison of Algorithm 1 with Algorithm 1 in [16] when m = 5.
Figure 8. Comparison of Algorithm 1 with Algorithm 1 in [16] when m = 5.
Figure 9. Comparison of Algorithm 1 with Algorithm 1 in [16] when m = 10.
Figure 10. Comparison of Algorithm 1 with Algorithm 1 in [16] when m = 10.
Figure 11. Comparison of Algorithm 1 with Algorithm 1 in [16] when m = 20.
Figure 12. Comparison of Algorithm 1 with Algorithm 1 in [16] when m = 20.
Figure 13. Comparison of Algorithm 1 with Algorithm 1 in [16]; the numbers of iterations are 108 and 49, respectively.
Figure 14. Comparison of Algorithm 1 with Algorithm 1 in [16]; the elapsed times are 1.3294 and 0.1701 s, respectively.
Table 1. Example 1: The numerical results for Figure 1, Figure 2, Figure 3 and Figure 4.

                        ExEgA            InExEgA
 n    ξ0    TOL       Iter.  CPU(s)    Iter.  CPU(s)
 5    0.1   10^-12     89    0.8725     71    0.5018
 5    0.2   10^-12     54    0.4895     40    0.2080
10    0.1   10^-12    106    1.1755     79    0.6562
10    0.2   10^-12     73    0.8509     58    0.4067
20    0.1   10^-12    122    1.5985     98    0.9870
20    0.2   10^-12     91    1.0032     76    0.7689
50    0.1   10^-12    162    2.4567    134    1.4567
50    0.2   10^-12    107    1.4356     98    1.0067
Table 2. Example 3: The numerical results for Figure 7, Figure 8, Figure 9, Figure 10, Figure 11 and Figure 12.

                       ExEgA             InExEgA
 m    ξ0    TOL      Iter.  CPU(s)     Iter.  CPU(s)
 5    0.5   10^-4     205    7.2979     131    1.8722
10    0.5   10^-4     544   23.0891     262    4.6898
20    0.5   10^-4    1562   65.7050    1084   23.1766

Yordsorn, P.; Kumam, P.; Rehman, H.u.; Hassan Ibrahim, A. A Weak Convergence Self-Adaptive Method for Solving Pseudomonotone Equilibrium Problems in a Real Hilbert Space. Mathematics 2020, 8, 1165. https://0-doi-org.brum.beds.ac.uk/10.3390/math8071165

