Article

Inertial Iterative Self-Adaptive Step Size Extragradient-Like Method for Solving Equilibrium Problems in Real Hilbert Space with Applications

1
Program in Applied Statistics, Department of Mathematics and Computer Science, Faculty of Science and Technology, Rajamangala University of Technology Thanyaburi, Thanyaburi, Pathumthani 12110, Thailand
2
Faculty of Science and Technology, Rajamangala University of Technology Phra Nakhon (RMUTP), 1381 Pracharat 1 Road, Wongsawang, Bang Sue, Bangkok 10800, Thailand
*
Author to whom correspondence should be addressed.
Submission received: 2 September 2020 / Revised: 19 October 2020 / Accepted: 22 October 2020 / Published: 31 October 2020
(This article belongs to the Special Issue Numerical Analysis and Computational Mathematics)

Abstract

A number of problems arising in mathematical programming, such as minimization problems, variational inequality problems and fixed point problems, can be written as equilibrium problems. Most of the schemes used to solve such problems are iterative, and for that reason we introduce in this paper a modified iterative method for solving equilibrium problems in a real Hilbert space. This method can be seen as a modification of the algorithm in the paper "A new two-step proximal algorithm of solving the problem of equilibrium programming" by Lyashko et al. (Optimization and its Applications in Control and Data Sciences, Springer, pp. 315–325, 2016). A weak convergence result is proven under mild conditions on the cost bifunction. We also apply our results to solve variational inequality problems. A detailed numerical study of the Nash–Cournot electricity equilibrium model and other test problems is provided to verify the convergence result and the performance of the method.

1. Introduction

An equilibrium problem (EP) is a general concept that unifies several mathematical problems, such as variational inequality problems, minimization problems, complementarity problems, fixed point problems, non-cooperative Nash equilibrium games, saddle point problems and scalar and vector minimization problems (see, e.g., [1,2,3]). The particular form of the equilibrium problem was first established in 1992 by Muu and Oettli [4] and was further elaborated by Blum and Oettli [1]. We consider the equilibrium problem in the formulation introduced by Blum and Oettli in [1]. Let $C$ be a non-empty, closed and convex subset of a real Hilbert space $H$ and let $f : H \times H \to \mathbb{R}$ be a bifunction with $f(v, v) = 0$ for each $v \in C$. The equilibrium problem for $f$ on the set $C$ is defined in the following way:
$$\text{Find } p \in C \text{ such that } f(p, v) \geq 0, \text{ for all } v \in C. \qquad (1)$$
Many methods have been established over the past years to solve the equilibrium problem in Hilbert spaces [5,6,7,8,9,10,11,12,13,14,15], including inertial methods [11,16,17,18] and others in [18,19,20,21,22,23,24]. In particular, Tran et al. [8] introduced an iterative scheme in which a sequence $\{u_n\}$ is generated in the following way:
$$u_0 \in C, \quad v_n = \arg\min\Big\{\lambda f(u_n, y) + \frac{1}{2}\|u_n - y\|^2 : y \in C\Big\}, \quad u_{n+1} = \arg\min\Big\{\lambda f(v_n, y) + \frac{1}{2}\|u_n - y\|^2 : y \in C\Big\}, \qquad (2)$$
where $0 < \lambda < \min\big\{\frac{1}{2c_1}, \frac{1}{2c_2}\big\}$ and $c_1, c_2$ are the Lipschitz-type constants of $f$. In 2016, Lyashko et al. [25] introduced an improvement of the method (2) to solve the equilibrium problem, in which the sequence $\{u_n\}$ is generated in the following way:
$$u_0, v_0 \in C, \quad u_{n+1} = \arg\min\Big\{\lambda f(v_n, y) + \frac{1}{2}\|u_n - y\|^2 : y \in C\Big\}, \quad v_{n+1} = \arg\min\Big\{\lambda f(v_n, y) + \frac{1}{2}\|u_{n+1} - y\|^2 : y \in C\Big\}, \qquad (3)$$
where $0 < \lambda < \frac{1}{2c_2 + 4c_1}$ and $c_1, c_2$ are the Lipschitz-type constants.
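For intuition about how schemes (2) and (3) are carried out in practice, note that when the bifunction has the special form $f(u, v) = \langle F(u), v - u \rangle$ for an operator $F$ (the variational inequality case treated in Section 4), each minimization subproblem reduces to a metric projection, since $\arg\min_{y \in C}\{\lambda\langle c, y\rangle + \frac{1}{2}\|u - y\|^2\} = P_C(u - \lambda c)$. The following Python sketch of scheme (3) is given only under this assumption; the names `F`, `proj_C` and the box projection in the usage comment are illustrative and not part of the original papers.

```python
import numpy as np

def lyashko_two_step(F, proj_C, u0, v0, lam, max_iter=1000, tol=1e-8):
    """Sketch of the two-step scheme (3) for f(u, v) = <F(u), v - u>.

    In that case  argmin_y { lam*f(v, y) + 0.5*||u - y||^2 : y in C }
    equals  P_C(u - lam*F(v)),  so both steps become projections.
    """
    u, v = np.asarray(u0, dtype=float), np.asarray(v0, dtype=float)
    for _ in range(max_iter):
        u_next = proj_C(u - lam * F(v))        # u_{n+1}
        v_next = proj_C(u_next - lam * F(v))   # v_{n+1}
        if np.linalg.norm(u_next - u) < tol:   # error term D_n = ||u_{n+1} - u_n||
            return u_next
        u, v = u_next, v_next
    return u

# Usage with an illustrative box constraint C = [0, 80]^6:
# proj_C = lambda x: np.clip(x, 0.0, 80.0)
# u_star = lyashko_two_step(F, proj_C, u0, v0, lam=0.01)
```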
In this paper, we consider the extragradient method (3) and provide an improvement of it by using the inertial scheme [26]; we also improve the step size rule used in its second step. The step size is not fixed in our case, but is updated by an explicit formula that uses previously computed bifunction values. A weak convergence theorem for the suggested technique is established under specific conditions on the bifunction. We also show how our results apply to variational inequality problems. A few formulations of variational inequality problems are discussed, and several computational examples in finite- and infinite-dimensional spaces are presented to demonstrate the applicability of the proposed results.
In this study, we consider the equilibrium problem under the following assumptions:
( f 1 )
A bifunction $f : H \times H \to \mathbb{R}$ is said to be pseudomonotone (see [1,27]) on $C$ if
$$f(v_1, v_2) \geq 0 \;\Longrightarrow\; f(v_2, v_1) \leq 0, \quad \text{for all } v_1, v_2 \in C.$$
( f 2 )
A bifunction $f : H \times H \to \mathbb{R}$ is said to be Lipschitz-type continuous [28] on $C$ if there exist $c_1, c_2 > 0$ such that
$$f(v_1, v_3) \leq f(v_1, v_2) + f(v_2, v_3) + c_1\|v_1 - v_2\|^2 + c_2\|v_2 - v_3\|^2, \quad \text{for all } v_1, v_2, v_3 \in C.$$
( f 3 )
$\limsup_{n \to +\infty} f(v_n, z) \leq f(v^*, z)$ for each $z \in C$ and each $\{v_n\} \subset C$ satisfying $v_n \rightharpoonup v^*$;
( f 4 )
$f(u, \cdot)$ is convex and subdifferentiable on $H$ for each $u \in H$.
The rest of this paper will be organized as follows: In Section 2, we give a few definitions and important lemmas to be used in this paper. Section 3 includes the main algorithm involving pseudomonotone bifunction and provides a weak convergence theorem. Section 4 describes some applications in the variational inequality problems. Section 5 sets out the numerical studies to demonstrate the algorithmic efficiency.

2. Preliminaries

In this section, some important lemmas and basic definitions are provided. Moreover, $EP(f, C)$ denotes the solution set of the equilibrium problem on the set $C$, and $p$ denotes an arbitrary element of $EP(f, C)$.
The metric projection $P_C(u)$ of $u$ onto a closed, convex subset $C$ of $H$ is defined by
$$P_C(u) = \arg\min_{v \in C} \|v - u\|.$$
Lemma 1
([29]). Let $P_C : H \to C$ be the metric projection from $H$ onto $C$. Then
(i)
For all $u \in C$, $v \in H$,
$$\|u - P_C(v)\|^2 + \|P_C(v) - v\|^2 \leq \|u - v\|^2.$$
(ii)
w = P C ( u ) if and only if
$$\langle u - w, v - w \rangle \leq 0, \quad \text{for all } v \in C.$$
Lemma 2
([29]). For all $u, v \in H$ and $\ell \in \mathbb{R}$, the following relation holds:
$$\|\ell u + (1 - \ell)v\|^2 = \ell\|u\|^2 + (1 - \ell)\|v\|^2 - \ell(1 - \ell)\|u - v\|^2.$$
Let $g : C \to \mathbb{R}$ be a convex function. The subdifferential of $g$ at $u \in C$ is defined by
$$\partial g(u) = \{ w \in H : g(v) - g(u) \geq \langle w, v - u \rangle, \ \text{for all } v \in C \}.$$
Since $f(u, \cdot)$ is convex and subdifferentiable on $H$ for each fixed $u \in H$, the subdifferential of $f(u, \cdot)$ at $x \in H$ is defined by
$$\partial_2 f(u, \cdot)(x) = \partial_2 f(u, x) = \{ z \in H : f(u, v) - f(u, x) \geq \langle z, v - x \rangle, \ \text{for all } v \in H \}.$$
The normal cone of $C$ at $u \in C$ is defined by
$$N_C(u) = \{ w \in H : \langle w, v - u \rangle \leq 0, \ \text{for all } v \in C \}.$$
Lemma 3.
([30]). Assume that $C$ is a nonempty, closed and convex subset of a real Hilbert space $H$ and let $h : C \to \mathbb{R}$ be a convex, lower semi-continuous and subdifferentiable function on $C$. Then, $u \in C$ is a minimizer of $h$ if and only if $0 \in \partial h(u) + N_C(u)$, where $\partial h(u)$ and $N_C(u)$ denote the subdifferential of $h$ at $u$ and the normal cone of $C$ at $u$, respectively.
Lemma 4
([31]). Let $\{a_n\}$, $\{b_n\}$ and $\{c_n\}$ be non-negative real sequences such that
$$a_{n+1} \leq a_n + b_n(a_n - a_{n-1}) + c_n, \quad \text{for all } n \geq 1, \ \text{with } \sum_{n=1}^{+\infty} c_n < +\infty,$$
and let $b > 0$ be such that $0 \leq b_n \leq b < 1$ for all $n \in \mathbb{N}$. Then, the following statements are true:
(i)
$\sum_{n=1}^{+\infty}[a_n - a_{n-1}]_+ < +\infty$, where $[s]_+ := \max\{s, 0\}$;
(ii)
$\lim_{n \to +\infty} a_n = a^* \in [0, +\infty)$.
Lemma 5
([32]). Let $\{a_n\}$ be a sequence in $H$ and $C \subset H$ such that the following conditions hold:
(i)
for each $a \in C$, $\lim_{n \to +\infty}\|a_n - a\|$ exists;
(ii)
each weak sequential cluster point of $\{a_n\}$ belongs to the set $C$.
Then, { a n } weakly converges to an element in C .

3. Main Results

In this section, we present our main algorithm and provide a weak convergence theorem for our proposed method. The detailed method is given below.
Remark 1.
By Expression (5), we obtain
$$\lambda_{n+1}\big[f(v_{n-1}, u_{n+1}) - f(v_{n-1}, v_n) - c_1\|v_{n-1} - v_n\|^2 - c_2\|v_n - u_{n+1}\|^2\big] \leq \mu f(v_n, u_{n+1}).$$
Lemma 6.
Let { u n } be a sequence generated by Algorithm 1. Then, the following inequality holds.
$$\mu\lambda_n f(v_n, y) - \mu\lambda_n f(v_n, u_{n+1}) \geq \langle \rho_n - u_{n+1}, y - u_{n+1} \rangle, \quad \text{for all } y \in C.$$
Algorithm 1 Modified Popov's subgradient extragradient-like iterative scheme.
  Step 1:
Choose $u_{-1}, v_{-1}, u_0, v_0 \in H$, a non-decreasing sequence $\{\theta_n\}$ with $0 \leq \theta_n \leq \theta < \frac{1}{3}$, $\lambda_0 > 0$, $0 < \sigma < \min\Big\{ \frac{1 - 3\theta}{(1-\theta)^2 + 4c_1(\theta + 2)}, \ \frac{1}{2c_2 + 4c_1(1+\theta)} \Big\}$ and $\mu \in (0, \sigma)$.
  Step 2:
Evaluate
$$u_{n+1} = \arg\min\Big\{ \mu\lambda_n f(v_n, y) + \frac{1}{2}\|\rho_n - y\|^2 : y \in C \Big\},$$
where $\rho_n = u_n + \theta_n(u_n - u_{n-1})$.
  Step 3:
Update the step size as follows:
$$\lambda_{n+1} = \begin{cases} \min\Big\{ \sigma, \ \dfrac{\mu f(v_n, u_{n+1})}{f(v_{n-1}, u_{n+1}) - f(v_{n-1}, v_n) - c_1\|v_{n-1} - v_n\|^2 - c_2\|u_{n+1} - v_n\|^2 + 1} \Big\}, & \text{if } \dfrac{\mu f(v_n, u_{n+1})}{f(v_{n-1}, u_{n+1}) - f(v_{n-1}, v_n) - c_1\|v_{n-1} - v_n\|^2 - c_2\|u_{n+1} - v_n\|^2 + 1} > 0, \\ \lambda_0, & \text{otherwise}. \end{cases}$$
  Step 4:
Evaluate
$$v_{n+1} = \arg\min\Big\{ \lambda_{n+1} f(v_n, y) + \frac{1}{2}\|\rho_{n+1} - y\|^2 : y \in C \Big\},$$
where $\rho_{n+1} = u_{n+1} + \theta_{n+1}(u_{n+1} - u_n)$. If $u_{n+1} = v_n = \rho_n$ or $\rho_{n+1} = v_{n+1} = v_n$, then stop. Otherwise, set $n := n + 1$ and go back to Step 2.
Proof. 
By the use of Lemma 3, we get
$$0 \in \partial_2\Big\{ \mu\lambda_n f(v_n, y) + \frac{1}{2}\|\rho_n - y\|^2 \Big\}(u_{n+1}) + N_C(u_{n+1}).$$
Thus, there exist $\omega \in \partial_2 f(v_n, u_{n+1})$ and $\bar{\omega} \in N_C(u_{n+1})$ such that
$$\mu\lambda_n\omega + u_{n+1} - \rho_n + \bar{\omega} = 0.$$
Therefore, we obtain
$$\langle \rho_n - u_{n+1}, y - u_{n+1} \rangle = \mu\lambda_n\langle \omega, y - u_{n+1} \rangle + \langle \bar{\omega}, y - u_{n+1} \rangle, \quad \text{for all } y \in C.$$
Since $\bar{\omega} \in N_C(u_{n+1})$, we have $\langle \bar{\omega}, y - u_{n+1} \rangle \leq 0$ for each $y \in C$. This implies that
$$\mu\lambda_n\langle \omega, y - u_{n+1} \rangle \geq \langle \rho_n - u_{n+1}, y - u_{n+1} \rangle, \quad \text{for all } y \in C. \qquad (6)$$
Since $\omega \in \partial_2 f(v_n, u_{n+1})$, we have
$$f(v_n, y) - f(v_n, u_{n+1}) \geq \langle \omega, y - u_{n+1} \rangle, \quad \text{for all } y \in H. \qquad (7)$$
Combining Expressions (6) and (7), we obtain
$$\mu\lambda_n f(v_n, y) - \mu\lambda_n f(v_n, u_{n+1}) \geq \langle \rho_n - u_{n+1}, y - u_{n+1} \rangle, \quad \text{for all } y \in C. \qquad \Box$$
Lemma 7.
Let { v n } be a sequence generated by Algorithm 1. Then, the following inequality holds.
$$\lambda_{n+1} f(v_n, y) - \lambda_{n+1} f(v_n, v_{n+1}) \geq \langle \rho_{n+1} - v_{n+1}, y - v_{n+1} \rangle, \quad \text{for all } y \in C.$$
Proof. 
The proof is analogous to that of Lemma 6. □
Lemma 8.
If $u_{n+1} = v_n = \rho_n$ and $\rho_{n+1} = v_{n+1} = v_n$ in Algorithm 1, then $v_n \in EP(f, C)$.
Proof. 
The proof of this can easily be seen from Lemmas 6 and 7. □
Lemma 9.
Let $f : H \times H \to \mathbb{R}$ be a bifunction satisfying the conditions (f1)–(f4). Then, for each $p \in EP(f, C)$, we have
$$\|u_{n+1} - p\|^2 \leq \|\rho_n - p\|^2 - (1 - \lambda_{n+1})\|u_{n+1} - \rho_n\|^2 + 4c_1\lambda_{n+1}\lambda_n\|\rho_n - v_{n-1}\|^2 - \lambda_{n+1}(1 - 4c_1\lambda_n)\|\rho_n - v_n\|^2 - \lambda_{n+1}(1 - 2c_2\lambda_n)\|u_{n+1} - v_n\|^2.$$
Proof. 
By Lemma 6, we obtain
$$\mu\lambda_n f(v_n, p) - \mu\lambda_n f(v_n, u_{n+1}) \geq \langle \rho_n - u_{n+1}, p - u_{n+1} \rangle. \qquad (8)$$
Since $p \in EP(f, C)$, we have $f(p, v_n) \geq 0$, and the condition (f1) implies that $f(v_n, p) \leq 0$. From (8), we have
$$\langle \rho_n - u_{n+1}, u_{n+1} - p \rangle \geq \mu\lambda_n f(v_n, u_{n+1}). \qquad (9)$$
From Expression (4), we obtain
$$\mu f(v_n, u_{n+1}) \geq \lambda_{n+1}\big[ f(v_{n-1}, u_{n+1}) - f(v_{n-1}, v_n) - c_1\|v_{n-1} - v_n\|^2 - c_2\|v_n - u_{n+1}\|^2 \big]. \qquad (10)$$
Combining Expressions (9) and (10) implies that
$$\langle \rho_n - u_{n+1}, u_{n+1} - p \rangle \geq \lambda_{n+1}\big[ \lambda_n f(v_{n-1}, u_{n+1}) - \lambda_n f(v_{n-1}, v_n) - c_1\lambda_n\|v_{n-1} - v_n\|^2 - c_2\lambda_n\|u_{n+1} - v_n\|^2 \big]. \qquad (11)$$
By Lemma 7 (applied with $n$ replaced by $n-1$ and $y = u_{n+1}$), we have
$$\lambda_n f(v_{n-1}, u_{n+1}) - \lambda_n f(v_{n-1}, v_n) \geq \langle \rho_n - v_n, u_{n+1} - v_n \rangle. \qquad (12)$$
Thus, combining (11) and (12), we get
$$\langle \rho_n - u_{n+1}, u_{n+1} - p \rangle \geq \lambda_{n+1}\big[ \langle \rho_n - v_n, u_{n+1} - v_n \rangle - c_1\lambda_n\|v_{n-1} - v_n\|^2 - c_2\lambda_n\|u_{n+1} - v_n\|^2 \big]. \qquad (13)$$
We also have the following identities:
$$2\langle \rho_n - u_{n+1}, u_{n+1} - p \rangle = \|\rho_n - p\|^2 - \|u_{n+1} - \rho_n\|^2 - \|u_{n+1} - p\|^2,$$
$$2\langle \rho_n - v_n, u_{n+1} - v_n \rangle = \|\rho_n - v_n\|^2 + \|u_{n+1} - v_n\|^2 - \|\rho_n - u_{n+1}\|^2.$$
From the above identities and (13), we have
$$\|u_{n+1} - p\|^2 \leq \|\rho_n - p\|^2 - (1 - \lambda_{n+1})\|u_{n+1} - \rho_n\|^2 - \lambda_{n+1}(1 - 2c_2\lambda_n)\|u_{n+1} - v_n\|^2 - \lambda_{n+1}\|\rho_n - v_n\|^2 + 2c_1\lambda_n\lambda_{n+1}\|v_{n-1} - v_n\|^2.$$
We also have
$$\|v_{n-1} - v_n\|^2 \leq \big( \|v_{n-1} - \rho_n\| + \|\rho_n - v_n\| \big)^2 \leq 2\|v_{n-1} - \rho_n\|^2 + 2\|\rho_n - v_n\|^2.$$
Finally, we get
$$\|u_{n+1} - p\|^2 \leq \|\rho_n - p\|^2 - (1 - \lambda_{n+1})\|u_{n+1} - \rho_n\|^2 + 4c_1\lambda_n\lambda_{n+1}\|\rho_n - v_{n-1}\|^2 - \lambda_{n+1}(1 - 4c_1\lambda_n)\|\rho_n - v_n\|^2 - \lambda_{n+1}(1 - 2c_2\lambda_n)\|u_{n+1} - v_n\|^2. \qquad \Box$$
Theorem 1.
Assume that $f : H \times H \to \mathbb{R}$ satisfies the conditions (f1)–(f4). Then, the sequences $\{\rho_n\}$, $\{u_n\}$ and $\{v_n\}$ generated by Algorithm 1 converge weakly to some $p \in EP(f, C)$.
Proof. 
By Lemma 9, we obtain
$$\|u_{n+1} - p\|^2 \leq \|\rho_n - p\|^2 - (1 - \lambda_{n+1})\|u_{n+1} - \rho_n\|^2 + 4c_1\lambda_n\lambda_{n+1}\|\rho_n - v_{n-1}\|^2 - \lambda_{n+1}(1 - 4c_1\lambda_n)\|\rho_n - v_n\|^2 - \lambda_{n+1}(1 - 2c_2\lambda_n)\|u_{n+1} - v_n\|^2. \qquad (14)$$
By the definition of $\rho_n$ in Algorithm 1, we have
$$\|\rho_n - v_{n-1}\|^2 = \|u_n + \theta_n(u_n - u_{n-1}) - v_{n-1}\|^2 = \|(1 + \theta_n)(u_n - v_{n-1}) - \theta_n(u_{n-1} - v_{n-1})\|^2 = (1 + \theta_n)\|u_n - v_{n-1}\|^2 - \theta_n\|u_{n-1} - v_{n-1}\|^2 + \theta_n(1 + \theta_n)\|u_n - u_{n-1}\|^2 \leq (1 + \theta)\|u_n - v_{n-1}\|^2 + \theta(1 + \theta)\|u_n - u_{n-1}\|^2. \qquad (15)$$
Adding the term $4c_1\sigma\lambda_{n+1}(1 + \theta)\|u_{n+1} - v_n\|^2$ to both sides of (14) and using (15), for $n \geq 1$ we have
$$\begin{aligned}
\|u_{n+1} - p\|^2 &+ 4c_1\sigma\lambda_{n+1}(1+\theta)\|u_{n+1} - v_n\|^2 \\
&\leq \|\rho_n - p\|^2 - (1-\sigma)\|u_{n+1} - \rho_n\|^2 + 4c_1\sigma\lambda_{n+1}(1+\theta)\|u_{n+1} - v_n\|^2 \\
&\quad + 4c_1\sigma\lambda_n\big[(1+\theta)\|u_n - v_{n-1}\|^2 + \theta(1+\theta)\|u_n - u_{n-1}\|^2\big] \\
&\quad - \lambda_{n+1}(1 - 4c_1\sigma)\|\rho_n - v_n\|^2 - \lambda_{n+1}(1 - 2c_2\sigma)\|u_{n+1} - v_n\|^2 \qquad (16)\\
&\leq \|\rho_n - p\|^2 - (1-\sigma)\|u_{n+1} - \rho_n\|^2 + 4c_1\sigma\lambda_n(1+\theta)\|u_n - v_{n-1}\|^2 + 4c_1\sigma(\theta+2)\|u_n - u_{n-1}\|^2 \\
&\quad - \lambda_{n+1}(1 - 4c_1\sigma)\|\rho_n - v_n\|^2 - \lambda_{n+1}\big(1 - 2c_2\sigma - 4c_1\sigma(1+\theta)\big)\|u_{n+1} - v_n\|^2 \qquad (17)\\
&\leq \|\rho_n - p\|^2 - (1-\sigma)\|u_{n+1} - \rho_n\|^2 + 4c_1\sigma\lambda_n(1+\theta)\|u_n - v_{n-1}\|^2 + 4c_1\sigma(\theta+2)\|u_n - u_{n-1}\|^2 \\
&\quad - \frac{\lambda_{n+1}}{2}\big(1 - 2c_2\sigma - 4c_1\sigma(1+\theta)\big)\big[2\|u_{n+1} - v_n\|^2 + 2\|\rho_n - v_n\|^2\big] \qquad (18)\\
&\leq \|\rho_n - p\|^2 - (1-\sigma)\|u_{n+1} - \rho_n\|^2 + 4c_1\sigma\lambda_n(1+\theta)\|u_n - v_{n-1}\|^2 + 4c_1\sigma(\theta+2)\|u_n - u_{n-1}\|^2 \\
&\quad - \frac{\lambda_{n+1}}{2}\big(1 - 2c_2\sigma - 4c_1\sigma(1+\theta)\big)\|u_{n+1} - \rho_n\|^2. \qquad (19)
\end{aligned}$$
Since $0 < \lambda_n \leq \sigma < \frac{1}{2c_2 + 4c_1(1+\theta)}$, the last inequality turns into
$$\|u_{n+1} - p\|^2 + 4c_1\sigma\lambda_{n+1}(1+\theta)\|u_{n+1} - v_n\|^2 \leq \|\rho_n - p\|^2 - (1-\sigma)\|u_{n+1} - \rho_n\|^2 + 4c_1\sigma\lambda_n(1+\theta)\|u_n - v_{n-1}\|^2 + 4c_1\sigma(\theta+2)\|u_n - u_{n-1}\|^2. \qquad (20)$$
From the definition of $\rho_n$, we have
$$\|\rho_n - p\|^2 = \|u_n + \theta_n(u_n - u_{n-1}) - p\|^2 = \|(1 + \theta_n)(u_n - p) - \theta_n(u_{n-1} - p)\|^2 = (1 + \theta_n)\|u_n - p\|^2 - \theta_n\|u_{n-1} - p\|^2 + \theta_n(1 + \theta_n)\|u_n - u_{n-1}\|^2. \qquad (21)$$
From the definition of $\rho_n$, we also obtain
$$\begin{aligned}
\|u_{n+1} - \rho_n\|^2 &= \|u_{n+1} - u_n - \theta_n(u_n - u_{n-1})\|^2 \\
&= \|u_{n+1} - u_n\|^2 + \theta_n^2\|u_n - u_{n-1}\|^2 - 2\theta_n\langle u_{n+1} - u_n, u_n - u_{n-1} \rangle \qquad (22)\\
&\geq \|u_{n+1} - u_n\|^2 + \theta_n^2\|u_n - u_{n-1}\|^2 - 2\theta_n\|u_{n+1} - u_n\|\,\|u_n - u_{n-1}\| \\
&\geq \|u_{n+1} - u_n\|^2 + \theta_n^2\|u_n - u_{n-1}\|^2 - \theta_n\|u_{n+1} - u_n\|^2 - \theta_n\|u_n - u_{n-1}\|^2 \qquad (23)\\
&= (1 - \theta_n)\|u_{n+1} - u_n\|^2 + (\theta_n^2 - \theta_n)\|u_n - u_{n-1}\|^2.
\end{aligned}$$
Combining Expressions (20), (21) and (23), we have
$$\begin{aligned}
\|u_{n+1} - p\|^2 &+ 4c_1\sigma\lambda_{n+1}(1+\theta)\|u_{n+1} - v_n\|^2 \\
&\leq (1 + \theta_n)\|u_n - p\|^2 - \theta_n\|u_{n-1} - p\|^2 + \theta_n(1 + \theta_n)\|u_n - u_{n-1}\|^2 \\
&\quad - (1-\sigma)\big[(1 - \theta_n)\|u_{n+1} - u_n\|^2 + (\theta_n^2 - \theta_n)\|u_n - u_{n-1}\|^2\big] \\
&\quad + 4c_1\sigma\lambda_n(1+\theta)\|u_n - v_{n-1}\|^2 + 4c_1\sigma(\theta+2)\|u_n - u_{n-1}\|^2 \qquad (24)\\
&\leq (1 + \theta_n)\|u_n - p\|^2 - \theta_n\|u_{n-1} - p\|^2 + 4c_1\sigma\lambda_n(1+\theta)\|u_n - v_{n-1}\|^2 \\
&\quad + \big[\theta(1+\theta) - (1-\sigma)(\theta_n^2 - \theta_n) + 4c_1\sigma(\theta+2)\big]\|u_n - u_{n-1}\|^2 - (1-\sigma)(1 - \theta_n)\|u_{n+1} - u_n\|^2 \qquad (25)\\
&\leq (1 + \theta_n)\|u_n - p\|^2 - \theta_n\|u_{n-1} - p\|^2 + 4c_1\sigma\lambda_n(1+\theta)\|u_n - v_{n-1}\|^2 + r_n\|u_n - u_{n-1}\|^2 - q_n\|u_{n+1} - u_n\|^2, \qquad (26)
\end{aligned}$$
where
$$r_n = \theta(1+\theta) - (1-\sigma)(\theta_n^2 - \theta_n) + 4c_1\sigma(\theta+2);$$
$$q_n = (1-\sigma)(1 - \theta_n).$$
Set
$$\Gamma_n = \Psi_n + r_n\|u_n - u_{n-1}\|^2,$$
where $\Psi_n = \|u_n - p\|^2 - \theta_n\|u_{n-1} - p\|^2 + 4c_1\sigma\lambda_n(1+\theta)\|u_n - v_{n-1}\|^2$. Next, (26) implies that
$$\begin{aligned}
\Gamma_{n+1} - \Gamma_n &= \|u_{n+1} - p\|^2 - \theta_{n+1}\|u_n - p\|^2 + 4c_1\sigma\lambda_{n+1}(1+\theta)\|u_{n+1} - v_n\|^2 + r_{n+1}\|u_{n+1} - u_n\|^2 \\
&\quad - \|u_n - p\|^2 + \theta_n\|u_{n-1} - p\|^2 - 4c_1\sigma\lambda_n(1+\theta)\|u_n - v_{n-1}\|^2 - r_n\|u_n - u_{n-1}\|^2 \\
&\leq \|u_{n+1} - p\|^2 - (1 + \theta_n)\|u_n - p\|^2 + \theta_n\|u_{n-1} - p\|^2 + 4c_1\sigma\lambda_{n+1}(1+\theta)\|u_{n+1} - v_n\|^2 \\
&\quad + r_{n+1}\|u_{n+1} - u_n\|^2 - 4c_1\sigma\lambda_n(1+\theta)\|u_n - v_{n-1}\|^2 - r_n\|u_n - u_{n-1}\|^2 \\
&\leq -(q_n - r_{n+1})\|u_{n+1} - u_n\|^2. \qquad (27)
\end{aligned}$$
Next, we compute
$$\begin{aligned}
(q_n - r_{n+1}) &= (1-\sigma)(1 - \theta_n) - \theta(1+\theta) + (1-\sigma)(\theta_{n+1}^2 - \theta_{n+1}) - 4c_1\sigma(\theta+2) \\
&\geq (1-\sigma)(1-\theta)^2 - \theta(1+\theta) - 4c_1\sigma(\theta+2) \\
&= (1-\theta)^2 - \theta(1+\theta) - \sigma(1-\theta)^2 - 4c_1\sigma(\theta+2) \\
&= 1 - 3\theta - \sigma\big[(1-\theta)^2 + 4c_1(\theta+2)\big] \geq 0. \qquad (28)
\end{aligned}$$
By (27) and (28), there exists $\delta \geq 0$ (in fact, $\delta = 1 - 3\theta - \sigma[(1-\theta)^2 + 4c_1(\theta+2)] > 0$ by the choice of $\sigma$ in Step 1) such that
$$\Gamma_{n+1} - \Gamma_n \leq -(q_n - r_{n+1})\|u_{n+1} - u_n\|^2 \leq -\delta\|u_{n+1} - u_n\|^2 \leq 0. \qquad (29)$$
The relation (29) implies that the sequence $\{\Gamma_n\}$ is non-increasing. From the definition of $\Gamma_{n+1}$, we have
$$\Gamma_{n+1} = \|u_{n+1} - p\|^2 - \theta_{n+1}\|u_n - p\|^2 + r_{n+1}\|u_{n+1} - u_n\|^2 + 4c_1\sigma\lambda_{n+1}(1+\theta)\|u_{n+1} - v_n\|^2 \geq -\theta_{n+1}\|u_n - p\|^2. \qquad (30)$$
By the definition of $\Gamma_n$, we have
$$\|u_n - p\|^2 \leq \Gamma_n + \theta_n\|u_{n-1} - p\|^2 \leq \Gamma_1 + \theta\|u_{n-1} - p\|^2 \leq \cdots \leq \Gamma_1(\theta^{n-1} + \cdots + 1) + \theta^n\|u_0 - p\|^2 \leq \frac{\Gamma_1}{1-\theta} + \theta^n\|u_0 - p\|^2. \qquad (31)$$
From Expressions (30) and (31), we obtain
$$-\Gamma_{n+1} \leq \theta_{n+1}\|u_n - p\|^2 \leq \theta\|u_n - p\|^2 \leq \frac{\theta\,\Gamma_1}{1-\theta} + \theta^{n+1}\|u_0 - p\|^2. \qquad (32)$$
It follows from (29) and (32) that
$$\delta\sum_{n=1}^{k}\|u_{n+1} - u_n\|^2 \leq \Gamma_1 - \Gamma_{k+1} \leq \Gamma_1 + \frac{\theta\,\Gamma_1}{1-\theta} + \theta^{k+1}\|u_0 - p\|^2 \leq \frac{\Gamma_1}{1-\theta} + \|u_0 - p\|^2. \qquad (33)$$
Letting $k \to +\infty$ in (33), we obtain
$$\sum_{n=1}^{+\infty}\|u_{n+1} - u_n\|^2 < +\infty, \quad \text{which implies that} \quad \lim_{n \to +\infty}\|u_{n+1} - u_n\| = 0. \qquad (34)$$
From Expression (22) together with (34), we obtain
$$\|u_{n+1} - \rho_n\| \to 0 \quad \text{as } n \to +\infty. \qquad (35)$$
From (32), we also have
$$-\Psi_{n+1} \leq \frac{\theta\,\Gamma_1}{1-\theta} + \theta^{n+1}\|u_0 - p\|^2 + r_{n+1}\|u_{n+1} - u_n\|^2.$$
From Expression (18) and using (21), we have
$$\lambda_{n+1}\big(1 - 2c_2\sigma - 4c_1\sigma(1+\theta)\big)\big[\|u_{n+1} - v_n\|^2 + \|\rho_n - v_n\|^2\big] \leq \Psi_n - \Psi_{n+1} + (1+\theta)\|u_n - u_{n-1}\|^2 + 4c_1\sigma(1+\theta)\|u_n - u_{n-1}\|^2.$$
Fix $k \in \mathbb{N}$ and apply the above expression for $n = 1, 2, \ldots, k$. Summing up, we obtain
$$\begin{aligned}
\lambda_{n+1}\big(1 - 2c_2\sigma - 4c_1\sigma(1+\theta)\big)\sum_{n=1}^{k}\big[\|u_{n+1} - v_n\|^2 + \|\rho_n - v_n\|^2\big] &\leq \Psi_0 - \Psi_{k+1} + (1+\theta)\sum_{n=1}^{k}\|u_n - u_{n-1}\|^2 + 4c_1\sigma(1+\theta)\sum_{n=1}^{k}\|u_n - u_{n-1}\|^2 \\
&\leq \Psi_0 + \frac{\theta\,\Gamma_1}{1-\theta} + \theta^{k+1}\|u_0 - p\|^2 + r_{k+1}\|u_{k+1} - u_k\|^2 \\
&\quad + (1+\theta)\sum_{n=1}^{k}\|u_n - u_{n-1}\|^2 + 4c_1\sigma(1+\theta)\sum_{n=1}^{k}\|u_n - u_{n-1}\|^2.
\end{aligned}$$
Letting $k \to +\infty$ and using (34), we obtain
$$\sum_{n=1}^{+\infty}\|u_{n+1} - v_n\|^2 < +\infty \quad \text{and} \quad \sum_{n=1}^{+\infty}\|\rho_n - v_n\|^2 < +\infty. \qquad (39)$$
Moreover, we obtain
$$\lim_{n \to +\infty}\|u_{n+1} - v_n\| = \lim_{n \to +\infty}\|\rho_n - v_n\| = 0. \qquad (40)$$
By the triangle inequality, we get
$$\lim_{n \to +\infty}\|u_n - v_n\| = \lim_{n \to +\infty}\|u_n - \rho_n\| = \lim_{n \to +\infty}\|v_{n-1} - v_n\| = 0. \qquad (41)$$
It follows from the relation (24) that
$$\|u_{n+1} - p\|^2 \leq (1 + \theta_n)\|u_n - p\|^2 - \theta_n\|u_{n-1} - p\|^2 + (1+\theta)\|u_n - u_{n-1}\|^2 + 4c_1\sigma(1+\theta)\|u_n - v_{n-1}\|^2 + 4c_1\sigma(\theta+2)\|u_n - u_{n-1}\|^2,$$
which, together with (34), (39) and Lemma 4, implies that the limits of $\|u_n - p\|$, $\|\rho_n - p\|$ and $\|v_n - p\|$ exist for every $p \in EP(f, C)$. Hence, $\{u_n\}$, $\{\rho_n\}$ and $\{v_n\}$ are bounded sequences. Let $z$ be an arbitrary weak sequential cluster point of $\{u_n\}$, i.e., there is a subsequence $\{u_{n_k}\}$ with $u_{n_k} \rightharpoonup z$; by (41), we also have $v_{n_k} \rightharpoonup z$. Our aim is to prove that $z \in EP(f, C)$. By Lemma 6 together with Expressions (10) and (12), we write
$$\begin{aligned}
\mu\lambda_{n_k} f(v_{n_k}, y) &\geq \mu\lambda_{n_k} f(v_{n_k}, u_{n_k+1}) + \langle \rho_{n_k} - u_{n_k+1}, y - u_{n_k+1} \rangle \\
&\geq \lambda_{n_k}\lambda_{n_k+1} f(v_{n_k-1}, u_{n_k+1}) - \lambda_{n_k}\lambda_{n_k+1} f(v_{n_k-1}, v_{n_k}) - c_1\lambda_{n_k}\lambda_{n_k+1}\|v_{n_k-1} - v_{n_k}\|^2 \\
&\quad - c_2\lambda_{n_k}\lambda_{n_k+1}\|v_{n_k} - u_{n_k+1}\|^2 + \langle \rho_{n_k} - u_{n_k+1}, y - u_{n_k+1} \rangle \\
&\geq \lambda_{n_k+1}\langle \rho_{n_k} - v_{n_k}, u_{n_k+1} - v_{n_k} \rangle - c_1\lambda_{n_k}\lambda_{n_k+1}\|v_{n_k-1} - v_{n_k}\|^2 - c_2\lambda_{n_k}\lambda_{n_k+1}\|v_{n_k} - u_{n_k+1}\|^2 + \langle \rho_{n_k} - u_{n_k+1}, y - u_{n_k+1} \rangle,
\end{aligned}$$
where $y \in C$. From (35), (40), (41) and the boundedness of $\{u_n\}$, the right-hand side tends to zero as $k \to +\infty$. Since $\mu, \lambda_{n_k} > 0$ and $v_{n_k} \rightharpoonup z$, by the condition (f3) we have
$$0 \leq \limsup_{k \to +\infty} f(v_{n_k}, y) \leq f(z, y), \quad \text{for all } y \in C.$$
Thus, $z \in C$ and $f(z, y) \geq 0$ for all $y \in C$, which proves that $z \in EP(f, C)$. By Lemma 5, the sequence $\{u_n\}$ converges weakly to an element $p \in EP(f, C)$. □
If $\theta_n = 0$ in Algorithm 1, we obtain a version of the extragradient method of Lyashko et al. [25] with an improved step size.
Corollary 1.
Let $f : H \times H \to \mathbb{R}$ satisfy the conditions (f1)–(f4) and let the sequences $\{u_n\}$ and $\{v_n\}$ be generated in the following way:
(i)
Given $u_0, v_{-1}, v_0 \in H$, $0 < \sigma < \min\big\{1, \frac{1}{2c_2 + 4c_1}\big\}$, $\mu \in (0, \sigma)$ and $\lambda_0 > 0$.
(ii)
Compute
$$u_{n+1} = \arg\min_{y \in C}\Big\{ \mu\lambda_n f(v_n, y) + \frac{1}{2}\|u_n - y\|^2 \Big\}, \qquad v_{n+1} = \arg\min_{y \in C}\Big\{ \lambda_{n+1} f(v_n, y) + \frac{1}{2}\|u_{n+1} - y\|^2 \Big\},$$
where
$$\lambda_{n+1} = \min\Big\{ \sigma, \ \frac{\mu f(v_n, u_{n+1})}{f(v_{n-1}, u_{n+1}) - f(v_{n-1}, v_n) - c_1\|v_{n-1} - v_n\|^2 - c_2\|u_{n+1} - v_n\|^2 + 1} \Big\}.$$
Then, the sequences $\{u_n\}$ and $\{v_n\}$ converge weakly to some $p \in EP(f, C)$.
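As a computational note, the step size rule above depends only on already-computed bifunction values and distances, so it costs essentially nothing per iteration. A minimal Python sketch of the update, with illustrative argument names, is as follows (the `+ 1` in the denominator prevents division by zero, as in Step 3 of Algorithm 1):

```python
def update_step_size(sigma, mu, lambda_0, f_vn_un1, f_vn1_un1, f_vn1_vn,
                     c1, c2, dist_vv, dist_uv):
    """Self-adaptive step size of Algorithm 1 / Corollary 1 (sketch).

    f_vn_un1  = f(v_n, u_{n+1}),   f_vn1_un1 = f(v_{n-1}, u_{n+1}),
    f_vn1_vn  = f(v_{n-1}, v_n),
    dist_vv   = ||v_{n-1} - v_n||^2,   dist_uv = ||u_{n+1} - v_n||^2.
    Returns lambda_{n+1}.
    """
    denom = f_vn1_un1 - f_vn1_vn - c1 * dist_vv - c2 * dist_uv + 1.0
    ratio = mu * f_vn_un1 / denom
    # Algorithm 1 keeps lambda_0 whenever the ratio is not positive.
    return min(sigma, ratio) if ratio > 0 else lambda_0
```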

4. Applications

Now, we consider applications of Theorem 1 to variational inequality problems involving a pseudomonotone and Lipschitz continuous operator. The variational inequality problem is defined in the following way:
Find $p^* \in C$ such that $\langle F(p^*), v - p^* \rangle \geq 0$, for all $v \in C$.
We assume that $F$ satisfies the following conditions:
( F 1 )
The solution set $VI(F, C)$ is non-empty and $F$ is pseudomonotone on $C$, i.e.,
$$\langle F(u), v - u \rangle \geq 0 \;\Longrightarrow\; \langle F(v), u - v \rangle \leq 0, \quad \text{for all } u, v \in C;$$
( F 2 )
$F$ is $L$-Lipschitz continuous on $C$, i.e., there exists a constant $L > 0$ such that
$$\|F(u) - F(v)\| \leq L\|u - v\|, \quad \text{for all } u, v \in C;$$
( F 3 )
$\limsup_{n \to +\infty} \langle F(u_n), v - u_n \rangle \leq \langle F(p^*), v - p^* \rangle$ for every $v \in C$ and every $\{u_n\} \subset C$ satisfying $u_n \rightharpoonup p^*$.
Corollary 2.
Assume that $F : C \to H$ satisfies the conditions (F1)–(F3) and let $\{\rho_n\}$, $\{u_n\}$ and $\{v_n\}$ be sequences generated in the following way:
(i)
Choose $u_{-1}, v_{-1}, u_0, v_0 \in H$, a non-decreasing sequence $\{\theta_n\}$ with $0 \leq \theta_n \leq \theta < \frac{1}{3}$, $\lambda_0 > 0$, $0 < \sigma < \min\Big\{ \frac{1 - 3\theta}{(1-\theta)^2 + 2L(\theta + 2)}, \ \frac{1}{3L + 2L\theta} \Big\}$ and $\mu \in (0, \sigma)$.
(ii)
Compute
$$u_{n+1} = P_C\big(\rho_n - \mu\lambda_n F(v_n)\big), \quad \text{where } \rho_n = u_n + \theta_n(u_n - u_{n-1}),$$
$$v_{n+1} = P_C\big(\rho_{n+1} - \lambda_{n+1} F(v_n)\big), \quad \text{where } \rho_{n+1} = u_{n+1} + \theta_{n+1}(u_{n+1} - u_n),$$
while
$$\lambda_{n+1} = \min\Big\{ \sigma, \ \frac{\mu\langle F(v_n), u_{n+1} - v_n \rangle}{\langle F(v_{n-1}), u_{n+1} - v_n \rangle - \frac{L}{2}\|v_{n-1} - v_n\|^2 - \frac{L}{2}\|u_{n+1} - v_n\|^2 + 1} \Big\}.$$
Then, the sequences $\{\rho_n\}$, $\{u_n\}$ and $\{v_n\}$ converge weakly to a solution $p$ of the variational inequality problem.
Corollary 3.
Assume that $F : C \to H$ satisfies the conditions (F1)–(F3) and let $\{u_n\}$ and $\{v_n\}$ be sequences generated in the following way:
(i)
Choose $v_{-1}, u_0, v_0 \in H$, $0 < \sigma < \min\big\{1, \frac{1}{3L}\big\}$ and $\lambda_0 > 0$.
(ii)
Compute
$$u_{n+1} = P_C\big(u_n - \mu\lambda_n F(v_n)\big), \qquad v_{n+1} = P_C\big(u_{n+1} - \lambda_{n+1} F(v_n)\big),$$
while
$$\lambda_{n+1} = \min\Big\{ \sigma, \ \frac{\mu\langle F(v_n), u_{n+1} - v_n \rangle}{\langle F(v_{n-1}), u_{n+1} - v_n \rangle - \frac{L}{2}\|v_{n-1} - v_n\|^2 - \frac{L}{2}\|u_{n+1} - v_n\|^2 + 1} \Big\}.$$
Then, the sequences $\{u_n\}$ and $\{v_n\}$ converge weakly to a solution $p$ of the variational inequality problem.
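A minimal Python sketch of the projection scheme of Corollary 3 is given below; it assumes a user-supplied operator `F` and projection `proj_C`, uses the inner-product form of the step size rule above, and includes the positivity safeguard of Step 3 of Algorithm 1 (the function and argument names are illustrative).

```python
import numpy as np

def adaptive_two_step_vi(F, proj_C, u0, v0, v_prev, L, sigma, mu, lambda_0,
                         max_iter=1000, tol=1e-8):
    """Sketch of the method in Corollary 3 for the VI <F(p), v - p> >= 0 on C."""
    u, v, v_m1 = (np.asarray(x, dtype=float) for x in (u0, v0, v_prev))
    lam = lambda_0
    for _ in range(max_iter):
        u_next = proj_C(u - mu * lam * F(v))
        # Self-adaptive step size (c1 = c2 = L/2 for an L-Lipschitz operator).
        denom = (F(v_m1) @ (u_next - v)
                 - 0.5 * L * np.linalg.norm(v_m1 - v) ** 2
                 - 0.5 * L * np.linalg.norm(u_next - v) ** 2 + 1.0)
        ratio = mu * (F(v) @ (u_next - v)) / denom
        lam_next = min(sigma, ratio) if ratio > 0 else lambda_0
        v_next = proj_C(u_next - lam_next * F(v))
        if np.linalg.norm(u_next - u) < tol:   # stopping test on D_n
            return u_next
        u, v_m1, v, lam = u_next, v, v_next, lam_next
    return u
```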

5. Computational Illustration

Numerical results are discussed in this section to show the efficiency of the suggested method. We compare the method of Lyashko et al. [25] (L.EgA) with our proposed algorithm (Algo.1), and we use the error term $D_n = \|u_{n+1} - u_n\|$.
Example 1.
Consider the Nash–Cournot equilibrium model of electricity markets as in [7]. In this problem there are three electricity-generating firms $i$ ($i = 1, 2, 3$). Firms 1, 2 and 3 own the generating units $I_1 = \{1\}$, $I_2 = \{2, 3\}$ and $I_3 = \{4, 5, 6\}$, respectively. Let $u_j$ denote the power generated by unit $j$, for $j = 1, \ldots, 6$. Suppose that the price $p$ of electricity is given by $p = 378.4 - 2\sum_{j=1}^{6} u_j$. The production cost of unit $j$ is
$$c_j(u_j) := \max\big\{ c_j^0(u_j), \ c_j^1(u_j) \big\},$$
where $c_j^0(u_j) := \frac{\alpha_j}{2}u_j^2 + \beta_j u_j + \gamma_j$ and $c_j^1(u_j) := \bar{\alpha}_j u_j + \frac{\bar{\beta}_j}{\bar{\beta}_j + 1}\,\bar{\gamma}_j^{-1/\bar{\beta}_j}\,u_j^{(\bar{\beta}_j + 1)/\bar{\beta}_j}$. The values of $\alpha_j, \beta_j, \gamma_j, \bar{\alpha}_j, \bar{\beta}_j$ and $\bar{\gamma}_j$ are provided in Table 1. The profit of firm $i$ is
$$f_i(u) := p\sum_{j \in I_i} u_j - \sum_{j \in I_i} c_j(u_j) = \Big( 378.4 - 2\sum_{l=1}^{6} u_l \Big)\sum_{j \in I_i} u_j - \sum_{j \in I_i} c_j(u_j),$$
where $u = (u_1, \ldots, u_6)^T$ belongs to the set $C := \{ u \in \mathbb{R}^6 : u_j^{\min} \leq u_j \leq u_j^{\max} \}$, with $u_j^{\min}$ and $u_j^{\max}$ given in Table 2. Define the equilibrium bifunction $f$ in the following way:
$$f(u, v) := \sum_{i=1}^{3}\big( \phi_i(u, u) - \phi_i(u, v) \big),$$
where
$$\phi_i(u, v) := \Big( 378.4 - 2\Big( \sum_{j \notin I_i} u_j + \sum_{j \in I_i} v_j \Big) \Big)\sum_{j \in I_i} v_j - \sum_{j \in I_i} c_j(v_j).$$
This model of electricity markets can be viewed as an equilibrium problem
$$\text{Find } u^* \in C \text{ such that } f(u^*, v) \geq 0, \text{ for all } v \in C.$$
Numerical results are shown in Figure 1, Figure 2, Figure 3 and Figure 4 and Table 3. For these numerical experiments we take $u_{-1} = v_{-1} = u_0 = v_0 = (48, 48, 30, 27, 18, 24)^T$ and $\lambda = 0.01$, $\sigma = 0.026$, $\mu = 0.024$, $\theta_n = 0.20$, $\lambda_0 = 0.1$.
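For completeness, the bifunction of this example can be assembled directly from the formulas above. The following Python sketch uses the cost and payoff expressions as reconstructed here (they should be checked against [7]); the parameter arrays correspond to the columns of Table 1.

```python
import numpy as np

# Columns of Table 1: alpha_j, beta_j, gamma_j, abar_j, bbar_j, gbar_j
alpha = np.array([0.04, 0.035, 0.125, 0.0116, 0.05, 0.05])
beta  = np.array([2.0, 1.75, 1.0, 3.25, 3.0, 3.0])
gamma = np.zeros(6)
abar  = np.array([2.0, 1.75, 1.0, 3.25, 3.0, 3.0])
bbar  = np.ones(6)
gbar  = np.array([25.0, 28.5714, 8.0, 86.2069, 20.0, 20.0])
units = [np.array([0]), np.array([1, 2]), np.array([3, 4, 5])]  # I_1, I_2, I_3 (0-based)

def cost(q):
    """c_j(q_j) = max{ c_j^0(q_j), c_j^1(q_j) } componentwise."""
    c0 = 0.5 * alpha * q ** 2 + beta * q + gamma
    c1 = abar * q + bbar / (bbar + 1.0) * gbar ** (-1.0 / bbar) * q ** ((bbar + 1.0) / bbar)
    return np.maximum(c0, c1)

def phi(i, u, v):
    """Payoff phi_i(u, v): the price uses u outside I_i and v inside I_i."""
    inside = units[i]
    outside = np.setdiff1d(np.arange(6), inside)
    price = 378.4 - 2.0 * (u[outside].sum() + v[inside].sum())
    return price * v[inside].sum() - cost(v)[inside].sum()

def f(u, v):
    """Equilibrium bifunction f(u, v) = sum_i [ phi_i(u, u) - phi_i(u, v) ]."""
    return sum(phi(i, u, u) - phi(i, u, v) for i in range(3))
```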
Example 2.
Consider the cost bifunction $f$ defined by
$$f(u, v) = \big\langle (AA^T + B + C)u, \ v - u \big\rangle$$
on the convex set $C = \{ u \in \mathbb{R}^n : Du \leq d \}$, where $D$ is a $100 \times n$ matrix and $d$ is a non-negative vector. In the above definition, $A$ is an $n \times n$ matrix, $B$ is an $n \times n$ skew-symmetric matrix and $C$ is an $n \times n$ diagonal matrix with non-negative diagonal entries. The matrices are generated as $A = \mathrm{rand}(n)$, $K = \mathrm{rand}(n)$, $B = 0.5K - 0.5K^T$ and $C = \mathrm{diag}(\mathrm{rand}(n,1))$. The bifunction $f$ is monotone with Lipschitz-type constants $c_1 = c_2 = \frac{1}{2}\|AA^T + B + C\|$. Numerical results are presented in Figure 5, Figure 6, Figure 7 and Figure 8 and Table 4. For these numerical experiments we take $u_{-1} = v_{-1} = u_0 = v_0 = (1, 1, \ldots, 1)^T$ and $\lambda = \frac{1}{10c_1}$, $\sigma = \frac{1}{8c_1}$, $\mu = \frac{1}{8.2c_1}$, $\theta_n = \frac{1}{5}$, $\lambda_0 = \frac{1}{4c_1}$.
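A sketch of how this random test problem can be generated in Python, mirroring the MATLAB-style description above, is given below (the seed, the number of constraints `m` and the right-hand side `d` are illustrative choices):

```python
import numpy as np

def make_test_problem(n, m=100, seed=0):
    """Random monotone equilibrium problem f(u, v) = <(A A^T + B + C) u, v - u>."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, n))
    K = rng.random((n, n))
    B = 0.5 * K - 0.5 * K.T                  # skew-symmetric matrix
    Cdiag = np.diag(rng.random(n))           # non-negative diagonal matrix
    M = A @ A.T + B + Cdiag
    c1 = c2 = 0.5 * np.linalg.norm(M, 2)     # Lipschitz-type constants
    D = rng.random((m, n))                   # constraint set {u : D u <= d}
    d = rng.random(m)
    f = lambda u, v: (M @ u) @ (v - u)       # the cost bifunction
    return f, M, c1, c2, D, d
```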
Example 3.
Assume that $F : \mathbb{R}^2 \to \mathbb{R}^2$ is defined by
$$F(u) = \begin{pmatrix} 0.5\,u_1 u_2^2 - 2u_2 - 10^{7} \\ -4u_1 - 0.1\,u_2^2 - 10^{7} \end{pmatrix}$$
with $C = \{ u \in \mathbb{R}^2 : (u_1 - 2)^2 + (u_2 - 2)^2 \leq 1 \}$. It is not hard to check that $F$ is Lipschitz continuous with $L = 5$ and pseudomonotone. The step size is $\lambda = 10^{-6}$ for Lyashko et al. [25], and $\lambda_0 = 0.1$, $\sigma = 0.129$, $\theta_n = 0.20$ and $\mu = 0.119$ for Algorithm 1. Computational results are shown in Table 5 and in Figure 9, Figure 10, Figure 11 and Figure 12.
Example 4.
Let $F : \mathbb{R}^2 \to \mathbb{R}^2$ be defined by
$$F(u) = \begin{pmatrix} \big(u_1^2 + (u_2 - 1)^2\big)(1 + u_2) \\ -u_1^3 - u_1(u_2 - 1)^2 \end{pmatrix}$$
and $C = \{ u \in \mathbb{R}^2 : (u_1 - 2)^2 + (u_2 - 2)^2 \leq 1 \}$. Here, $F$ is not monotone but is pseudomonotone on $C$, and it is $L$-Lipschitz continuous with $L = 5$ (see, e.g., [33]). The step size is $\lambda = 10^{-2}$ for Lyashko et al. [25], and $\lambda_0 = 0.01$, $\sigma = 0.129$, $\theta_n = 0.15$ and $\mu = 0.119$ for Algorithm 1. The computational results are reported in Table 6 and in Figure 13, Figure 14 and Figure 15.
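Both Examples 3 and 4 use the ball $C = \{u \in \mathbb{R}^2 : (u_1 - 2)^2 + (u_2 - 2)^2 \leq 1\}$, whose metric projection has a simple closed form. A short Python sketch follows; the operator of Example 4, as written above, is included for illustration, and the usage comment reuses the hypothetical `adaptive_two_step_vi` sketch from Section 4.

```python
import numpy as np

CENTER, RADIUS = np.array([2.0, 2.0]), 1.0

def proj_ball(x, center=CENTER, radius=RADIUS):
    """Metric projection onto the closed ball {u : ||u - center|| <= radius}."""
    d = np.asarray(x, dtype=float) - center
    dist = np.linalg.norm(d)
    return np.asarray(x, dtype=float) if dist <= radius else center + radius * d / dist

def F_example4(u):
    """Operator of Example 4: pseudomonotone but not monotone on C."""
    return np.array([(u[0] ** 2 + (u[1] - 1.0) ** 2) * (1.0 + u[1]),
                     -u[0] ** 3 - u[0] * (u[1] - 1.0) ** 2])

# Example usage with the Corollary 3 sketch from Section 4:
# u_star = adaptive_two_step_vi(F_example4, proj_ball, u0=[10, 10], v0=[10, 10],
#                               v_prev=[10, 10], L=5.0, sigma=0.129, mu=0.119,
#                               lambda_0=0.01)
```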

6. Conclusions

We have established an extragradient-like method to solve pseudomonotone equilibrium problems in real Hilbert spaces. The main feature of the proposed method is that the iterative sequence is combined with an explicit step size formula that is updated at each iteration on the basis of the previous iterations. Numerical experiments were presented to compare the algorithm's efficiency with that of other methods. These numerical investigations indicate that inertial effects generally improve the effectiveness of the iterative sequence in this context.

Author Contributions

Conceptualization, W.K. and K.M.; methodology, W.K. and K.M.; writing–original draft preparation, W.K. and K.M.; writing–review and editing, W.K. and K.M.; software, W.K. and K.M.; supervision, W.K.; project administration and funding acquisition, K.M. All authors have read and agreed to the published version of the manuscript.

Funding

This project was supported by Rajamangala University of Technology Phra Nakhon (RMUTP).

Acknowledgments

The first author would like to thank the Rajamangala University of Technology Thanyaburi (RMUTT) (Grant No. NSF62D0604). The second author would like to thank the Rajamangala University of Technology Phra Nakhon (RMUTP).

Conflicts of Interest

The authors declare that they have no conflict of interest.

References

1. Blum, E.; Oettli, W. From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63, 123–145.
2. Facchinei, F.; Pang, J.S. Finite-Dimensional Variational Inequalities and Complementarity Problems; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2007.
3. Konnov, I. Equilibrium Models and Variational Inequalities; Elsevier: Amsterdam, The Netherlands, 2007; Volume 210.
4. Muu, L.D.; Oettli, W. Convergence of an adaptive penalty scheme for finding constrained equilibria. Nonlinear Anal. Theory Methods Appl. 1992, 18, 1159–1166.
5. Combettes, P.L.; Hirstoaga, S.A. Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 2005, 6, 117–136.
6. Flåm, S.D.; Antipin, A.S. Equilibrium programming using proximal-like algorithms. Math. Program. 1996, 78, 29–41.
7. Quoc, T.D.; Anh, P.N.; Muu, L.D. Dual extragradient algorithms extended to equilibrium problems. J. Glob. Optim. 2012, 52, 139–159.
8. Quoc Tran, D.; Le Dung, M.; Nguyen, V.H. Extragradient algorithms extended to equilibrium problems. Optimization 2008, 57, 749–776.
9. Santos, P.; Scheimberg, S. An inexact subgradient algorithm for equilibrium problems. Comput. Appl. Math. 2011, 30, 91–107.
10. Takahashi, S.; Takahashi, W. Viscosity approximation methods for equilibrium problems and fixed point problems in Hilbert spaces. J. Math. Anal. Appl. 2007, 331, 506–515.
11. Ur Rehman, H.; Kumam, P.; Kumam, W.; Shutaywi, M.; Jirakitpuwapat, W. The Inertial Sub-Gradient Extra-Gradient Method for a Class of Pseudo-Monotone Equilibrium Problems. Symmetry 2020, 12, 463.
12. Ur Rehman, H.; Kumam, P.; Argyros, I.K.; Alreshidi, N.A.; Kumam, W.; Jirakitpuwapat, W. A Self-Adaptive Extra-Gradient Methods for a Family of Pseudomonotone Equilibrium Programming with Application in Different Classes of Variational Inequality Problems. Symmetry 2020, 12, 523.
13. Ur Rehman, H.; Kumam, P.; Shutaywi, M.; Alreshidi, N.A.; Kumam, W. Inertial Optimization Based Two-Step Methods for Solving Equilibrium Problems with Applications in Variational Inequality Problems and Growth Control Equilibrium Models. Energies 2020, 13, 3293.
14. Hieu, D.V. New extragradient method for a class of equilibrium problems in Hilbert spaces. Appl. Anal. 2017, 97, 811–824.
15. Hammad, H.A.; ur Rehman, H.; De la Sen, M. Advanced Algorithms and Common Solutions to Variational Inequalities. Symmetry 2020, 12, 1198.
16. Ur Rehman, H.; Kumam, P.; Cho, Y.J.; Yordsorn, P. Weak convergence of explicit extragradient algorithms for solving equilibirum problems. J. Inequalities Appl. 2019, 2019.
17. Ur Rehman, H.; Kumam, P.; Abubakar, A.B.; Cho, Y.J. The extragradient algorithm with inertial effects extended to equilibrium problems. Comput. Appl. Math. 2020, 39.
18. Ur Rehman, H.; Kumam, P.; Argyros, I.K.; Deebani, W.; Kumam, W. Inertial Extra-Gradient Method for Solving a Family of Strongly Pseudomonotone Equilibrium Problems in Real Hilbert Spaces with Application in Variational Inequality Problem. Symmetry 2020, 12, 503.
19. Koskela, P.; Manojlović, V. Quasi-Nearly Subharmonic Functions and Quasiconformal Mappings. Potential Anal. 2012, 37, 187–196.
20. Ur Rehman, H.; Kumam, P.; Argyros, I.K.; Shutaywi, M.; Shah, Z. Optimization Based Methods for Solving the Equilibrium Problems with Applications in Variational Inequality Problems and Solution of Nash Equilibrium Models. Mathematics 2020, 8, 822.
21. Rehman, H.U.; Kumam, P.; Dong, Q.L.; Peng, Y.; Deebani, W. A new Popov's subgradient extragradient method for two classes of equilibrium programming in a real Hilbert space. Optimization 2020, 1–36.
22. Yordsorn, P.; Kumam, P.; ur Rehman, H.; Ibrahim, A.H. A Weak Convergence Self-Adaptive Method for Solving Pseudomonotone Equilibrium Problems in a Real Hilbert Space. Mathematics 2020, 8, 1165.
23. Todorčević, V. Harmonic Quasiconformal Mappings and Hyperbolic Type Metrics; Springer International Publishing: Berlin/Heidelberg, Germany, 2019.
24. Yordsorn, P.; Kumam, P.; Rehman, H.U. Modified two-step extragradient method for solving the pseudomonotone equilibrium programming in a real Hilbert space. Carpathian J. Math. 2020, 36, 313–330.
25. Lyashko, S.I.; Semenov, V.V. A new two-step proximal algorithm of solving the problem of equilibrium programming. In Optimization and Its Applications in Control and Data Sciences; Springer: Berlin/Heidelberg, Germany, 2016; pp. 315–325.
26. Polyak, B.T. Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 1964, 4, 1–17.
27. Bianchi, M.; Schaible, S. Generalized monotone bifunctions and equilibrium problems. J. Optim. Theory Appl. 1996, 90, 31–43.
28. Mastroeni, G. On Auxiliary Principle for Equilibrium Problems. In Nonconvex Optimization and Its Applications; Springer: New York, NY, USA, 2003; pp. 289–298.
29. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: Berlin/Heidelberg, Germany, 2011; Volume 408.
30. Tiel, J.V. Convex Analysis; John Wiley: Hoboken, NJ, USA, 1984.
31. Alvarez, F.; Attouch, H. An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set Valued Anal. 2001, 9, 3–11.
32. Opial, Z. Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 1967, 73, 591–597.
33. Shehu, Y.; Dong, Q.L.; Jiang, D. Single projection method for pseudo-monotone variational inequality in Hilbert spaces. Optimization 2019, 68, 385–409.
Figure 1. Example 1 while tolerance is 0.01.
Figure 2. Example 1 while tolerance is 0.001.
Figure 3. Example 1 while tolerance is 0.0001.
Figure 4. Example 1 while tolerance is 0.00001.
Figure 5. Example 2 for average number of iterations while n = 5.
Figure 6. Example 2 for average number of iterations while n = 10.
Figure 7. Example 2 for average number of iterations while n = 20.
Figure 8. Example 2 for average number of iterations while n = 40.
Figure 9. Example 3 while u_0 = (1.5, 1.7).
Figure 10. Example 3 while u_0 = (2, 3).
Figure 11. Example 3 while u_0 = (1, 2).
Figure 12. Example 3 while u_0 = (2.7, 2.6).
Figure 13. Example 4 while u_0 = (10, 10).
Figure 14. Example 4 while u_0 = (10, 10).
Figure 15. Example 4 while u_0 = (10, 20).
Table 1. Parameters for the cost bifunction.

Unit j   α_j      β_j     γ_j   ᾱ_j     β̄_j   γ̄_j
1        0.04     2       0     2       1     25
2        0.035    1.75    0     1.75    1     28.5714
3        0.125    1       0     1       1     8
4        0.0116   3.25    0     3.25    1     86.2069
5        0.05     3       0     3       1     20
6        0.05     3       0     3       1     20
Table 2. Values used for the constraint set.

j   u_j^min   u_j^max
1   0         80
2   0         80
3   0         50
4   0         55
5   0         30
6   0         40
Table 3. Numerical values for Figure 1, Figure 2, Figure 3 and Figure 4.

            L.EgA                 Algo.1
TOL         Iter.     time (s)    Iter.    time (s)
0.01        125       7.3692      61       3.4055
       u*_L.EgA  = (47.3245, 47.3245, 47.3245, 47.3245, 47.3245, 47.3245)
       u*_Algo.1 = (47.3245, 47.3245, 47.3245, 47.3245, 47.3245, 47.3245)
0.001       2761      193.3939    2063     150.6757
       u*_L.EgA  = (47.3245, 47.3245, 47.3245, 47.3245, 47.3245, 47.3245)
       u*_Algo.1 = (47.3245, 47.3245, 47.3245, 47.3245, 47.3245, 47.3245)
0.0001      11,526    818.7184    4687     324.3571
       u*_L.EgA  = (47.3245, 47.3245, 47.3245, 47.3245, 47.3245, 47.3245)
       u*_Algo.1 = (47.3245, 47.3245, 47.3245, 47.3245, 47.3245, 47.3245)
0.00001     20,946    1449.3959   7307     526.9766
       u*_L.EgA  = (47.3245, 47.3245, 47.3245, 47.3245, 47.3245, 47.3245)
       u*_Algo.1 = (47.3245, 47.3245, 47.3245, 47.3245, 47.3245, 47.3245)
Table 4. Numerical results for Figure 5, Figure 6, Figure 7 and Figure 8.

                    L.EgA                     Algo.1
n     T. Samples    Avg Iter.   Avg time (s)  Avg Iter.   Avg time (s)
5     10            35          0.8066        6           0.1438
10    10            51          1.1779        6           0.1302
20    10            84          1.7441        7           0.1801
40    10            30          0.6859        8           0.1999
Table 5. Numerical results for Figure 9, Figure 10, Figure 11 and Figure 12.

              L.EgA                Algo.1
u_0           Iter.    time (s)    Iter.   time (s)
(1.5, 1.7)    20       0.7506      8       0.5316
(2.0, 3.0)    21       0.7879      8       0.6484
(1.0, 2.0)    23       1.1450      14      0.9730
(2.7, 2.6)    19       0.7254      7       0.5835
Table 6. Numerical values for Figure 13, Figure 14 and Figure 15.

             L.EgA                Algo.1
u_0          Iter.    time (s)    Iter.   time (s)
(10, 10)     67       1.9151      31      1.0752
(10, 10)     92       2.5721      71      2.0469
(10, 20)     60       1.7689      41      1.1864

