
Optimization Based Methods for Solving the Equilibrium Problems with Applications in Variational Inequality Problems and Solution of Nash Equilibrium Models

1 KMUTT Fixed Point Research Laboratory, KMUTT-Fixed Point Theory and Applications Research Group, SCL 802 Fixed Point Laboratory, Department of Mathematics, Faculty of Science, King Mongkut’s University of Technology Thonburi (KMUTT), 126 Pracha-Uthit Road, Bang Mod, Thrung Khru, Bangkok 10140, Thailand
2 Center of Excellence in Theoretical and Computational Science (TaCS-CoE), Science Laboratory Building, King Mongkut’s University of Technology Thonburi (KMUTT), 126 Pracha-Uthit Road, Bang Mod, Thrung Khru, Bangkok 10140, Thailand
3 Department of Medical Research, China Medical University Hospital, China Medical University, Taichung 40402, Taiwan
4 Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
5 Department of Mathematics, College of Science and Arts, King Abdulaziz University, P. O. Box 344, Rabigh 21911, Saudi Arabia
* Author to whom correspondence should be addressed.
Submission received: 31 March 2020 / Revised: 15 April 2020 / Accepted: 16 April 2020 / Published: 19 May 2020
(This article belongs to the Special Issue Computational Methods in Analysis and Applications 2020)

Abstract: In this paper, we propose two modified two-step proximal methods, formed through a proximal-like mapping and an inertial effect, for solving two classes of equilibrium problems. A weak convergence theorem for the first method and a strong convergence result for the second method are established under mild conditions on the bifunction. These methods have the advantage of not involving any line search procedure or any knowledge of the Lipschitz-type constants of the bifunction. Instead, the stepsize used in these methods is either updated from previous iterates or taken from a non-summable diminishing sequence. We consider the well-known Nash–Cournot equilibrium models to support the established convergence results and to illustrate the advantage of the proposed methods over other well-known methods.

1. Introduction

Let $K$ be a nonempty, closed, convex subset of a Hilbert space $E$ and $f : E \times E \to \mathbb{R}$ be a bifunction with $f(u, u) = 0$ for each $u \in K$. The equilibrium problem for $f$ on $K$ is defined as follows:
Find $p^* \in K$ such that $f(p^*, v) \ge 0$ for all $v \in K$. (EP)
The equilibrium problem, pioneered by Blum and Oettli [1] as a unifying framework, appears in a growing number of applications, such as poroelasticity in petroleum engineering [2], porous materials [3,4], financial analysis in economics [5,6], image reconstruction in image processing [7,8,9], telecommunication networks and public roads [10,11], and noncooperative games with the corresponding equilibrium concept of Nash [12]. The problem (EP) is also known as Ky Fan’s inequality [13] owing to his contributions to this area of research. In fact, the problem did not receive much attention before this specific formulation: Nikaido characterized Nash equilibria [14] as solutions of the problem (EP), and Gwinner used it as a tool to solve optimization and variational inequality problems [15], but neither dealt with the problem itself in a separate setting.
The equilibrium problem involves many mathematical problems as a particular case, i.e., the variational inequality problems (VIP), fixed point problems, complementarity problems, optimization problems, saddle point problems, the Nash equilibrium of non-cooperative games, and the vector and scalar minimization problems (see [1,16]). Moreover, iterative methods are efficient tools to evaluate an approximate solution to an equilibrium problem. Several methods are well known for solving the problem (EP), for instance the proximal point method [17,18], projection methods [19], extragradient methods [20,21,22], the subgradient method [23,24,25], methods using the Bregman distance [26], and others [27,28,29,30].
The extragradient method was originally established by Korpelevich [31] for solving the variational inequality problem, which is a particular case of the equilibrium problem. Korpelevich proved the weak convergence of the generated sequence under the hypotheses of Lipschitz continuity and pseudo-monotonicity of the operator. Each iteration of the method requires two projections onto a closed convex set. If the closed convex set is simple enough that projections onto it can be computed explicitly, the extragradient method is exceptionally suitable. A modification of this method is the subgradient extragradient method [32], in which the second projection is taken onto a half-space rather than onto the closed convex set, which makes it easier to compute.
In 2008, Tran et al. [33] proposed an extension of the Korpelevich extragradient method [31] for the pseudo-monotone equilibrium problem in finite-dimensional spaces. Each iteration of the method requires solving two minimization problems over a closed convex set, with a suitable fixed stepsize. Precisely, the iterative sequence $\{u_n\}$ is determined as follows:
$$ u_0 \in K, \quad v_n = \arg\min_{y \in K}\Big\{\lambda f(u_n, y) + \tfrac{1}{2}\|u_n - y\|^2\Big\}, \quad u_{n+1} = \arg\min_{y \in K}\Big\{\lambda f(v_n, y) + \tfrac{1}{2}\|u_n - y\|^2\Big\}, $$ (2)
where $0 < \lambda < \min\big\{\frac{1}{2k_1}, \frac{1}{2k_2}\big\}$ and $k_1, k_2$ are Lipschitz-type constants. In 2016, Lyashko et al. [34] proposed an extension of the method (2), motivated by the result in [35]. Precisely, the iterative sequence $\{u_n\}$ is defined as follows:
$$ u_0, v_0 \in K, \quad u_{n+1} = \arg\min_{y \in K}\Big\{\lambda f(v_n, y) + \tfrac{1}{2}\|u_n - y\|^2\Big\}, \quad v_{n+1} = \arg\min_{y \in K}\Big\{\lambda f(v_n, y) + \tfrac{1}{2}\|u_{n+1} - y\|^2\Big\}, $$ (3)
where $0 < \lambda < \frac{1}{2k_2 + 4k_1}$ and $k_1, k_2$ are Lipschitz-type constants.
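For the special case $f(u, v) = \langle G(u), v - u \rangle$ of a variational inequality, each minimization step above reduces to a projection, so scheme (2) can be sketched in a few lines. The operator $G(u) = u$, the box $K = [1, 2]^d$, and the stepsize below are illustrative choices, not data from the paper; for this $G$ the unique equilibrium point is $p^* = (1, \dots, 1)$.

```python
import numpy as np

# Hypothetical worked example: for the bifunction f(u, v) = <G(u), v - u>,
# each proximal step  argmin_{y in K} { lam*f(x, y) + 0.5*||z - y||^2 }
# reduces to the projection P_K(z - lam*G(x)).

def proj_box(x, lo=1.0, hi=2.0):
    """Metric projection onto the box K = [lo, hi]^d."""
    return np.clip(x, lo, hi)

def G(u):
    return u  # a strongly monotone operator (illustrative choice)

d, lam = 3, 0.4          # stepsize within the admissible range for this G
u = np.full(d, 1.9)      # u_0
for _ in range(200):     # two-step iteration of scheme (2)
    v = proj_box(u - lam * G(u))        # v_n: prox step at u_n
    u = proj_box(u - lam * G(v))        # u_{n+1}: prox step using f(v_n, .)

# u is now close to the equilibrium point p* = (1, ..., 1)
```

The same projection shorthand makes the Lyashko scheme (3) equally easy to prototype; only the anchor point of the second prox step changes.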
On the other hand, inertial-type methods, based on the heavy-ball technique for second-order time dynamical systems, are also important. Polyak [36] first used inertial extrapolation as an acceleration device for smooth convex optimization problems. These methods are two-step iterative schemes in which the next iterate is computed from the previous two iterates, and the inertial step can be viewed as accelerating the iterative sequence. Several inertial-like algorithms for special cases of the problem (EP) can be found, for instance, in [37,38,39,40].
This manuscript introduces two modifications of the method proposed by Lyashko et al. [34], motivated by the results in [32,41,42]. These modifications apply the inertial and subgradient strategies to speed up the iteration process and reduce the computational effort. The first method incorporates the Lyashko two-step extragradient method with an inertial term and uses a variable stepsize that is updated at every iteration from previous iterates. We prove a weak convergence theorem for this method under standard assumptions on the cost bifunction. Furthermore, we propose an alternative inertial-type method, which is a variant of the first one. The second method does not require any knowledge of the strong pseudomonotonicity and Lipschitz-type constants of the bifunction, and strong convergence of the method is achieved.
This paper is arranged as follows: Section 2 includes a number of definitions and basic results. Section 3 and Section 4 contain both of our methods involving the pseudomonotone and strongly pseudomonotone bifunction, as well as the convergence theorem. Section 5 covers the applications of our proposed results to the variational inequality problems. Section 6 illustrates numerical experiments in comparison with other existing algorithms using the Nash–Cournot equilibrium models to display the efficiency of our proposed algorithms.

2. Preliminaries

Throughout, $K$ is a closed, convex subset of the Hilbert space $E$. We write $u_n \to p^*$ and $u_n \rightharpoonup p^*$ to mean that the sequence $\{u_n\}$ converges strongly and weakly to $p^*$, respectively. In addition, $EP(f, K)$ denotes the solution set of the equilibrium problem on $K$, and $p^* \in EP(f, K)$.
Definition 1.
Let $g : K \to \mathbb{R}$ be a convex function. The subdifferential of $g$ at $u \in K$ is:
$$ \partial g(u) = \{ w \in E : g(v) - g(u) \ge \langle w, v - u \rangle, \ \text{for all } v \in K \}. $$
Definition 2.
The normal cone of $K$ at $u \in K$ is:
$$ N_K(u) = \{ w \in E : \langle w, v - u \rangle \le 0, \ \text{for all } v \in K \}. $$
Definition 3.
The metric projection $P_K(u)$ of $u$ onto the closed, convex subset $K$ of $E$ is defined as:
$$ P_K(u) = \arg\min \{ \|v - u\| : v \in K \}. $$
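For simple sets, this projection has a closed form. Two standard examples (illustrative, not specific to this paper), assuming the Euclidean setting $E = \mathbb{R}^d$:

```python
import numpy as np

# Closed-form metric projections onto two simple convex sets.

def proj_box(u, lo, hi):
    # componentwise clipping solves arg min { ||v - u|| : lo <= v <= hi }
    return np.clip(u, lo, hi)

def proj_ball(u, center, radius):
    # scale back onto the ball when u lies outside it
    d = u - center
    n = np.linalg.norm(d)
    return u.copy() if n <= radius else center + radius * d / n

u = np.array([3.0, 4.0])
print(proj_box(u, 0.0, 1.0))           # -> [1. 1.]
print(proj_ball(u, np.zeros(2), 1.0))  # -> [0.6 0.8]
```

Projections like these are the only non-trivial subproblems in the algorithms below once the proximal steps reduce to projections.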
Now, we consider the concepts of the monotonicity of a bifunction (see [1,43]).
Definition 4.
A bifunction $f : E \times E \to \mathbb{R}$ is said to be, for $\gamma > 0$:
(1)
strongly monotone on $K$ if:
$$ f(u, v) + f(v, u) \le -\gamma \|u - v\|^2, \quad \forall u, v \in K; $$
(2)
monotone on $K$ if:
$$ f(u, v) + f(v, u) \le 0, \quad \forall u, v \in K; $$
(3)
strongly pseudomonotone on $K$ if:
$$ f(u, v) \ge 0 \implies f(v, u) \le -\gamma \|u - v\|^2, \quad \forall u, v \in K; $$
(4)
pseudomonotone on $K$ if:
$$ f(u, v) \ge 0 \implies f(v, u) \le 0, \quad \forall u, v \in K; $$
(5)
satisfying the Lipschitz-type condition on $K$ if there exist two numbers $k_1, k_2 > 0$ such that:
$$ f(u, z) \le f(u, v) + f(v, z) + k_1 \|u - v\|^2 + k_2 \|v - z\|^2, \quad \forall u, v, z \in K. $$
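As a concrete instance of Condition (5), the bifunction $f(u, v) = \langle Au, v - u \rangle$ induced by a linear operator $A$ satisfies the Lipschitz-type condition with $k_1 = k_2 = \|A\|/2$, since $f(u, z) - f(u, v) - f(v, z) = \langle A(u - v), z - v \rangle \le \|A\|\,\|u - v\|\,\|z - v\|$. A quick numerical check of this claim (illustrative, with random data):

```python
import numpy as np

# Numerical check that f(u, v) = <A u, v - u> satisfies the Lipschitz-type
# condition with k1 = k2 = ||A||/2, consistent with the relation
# L = 2*k1 = 2*k2 used later for variational inequalities.

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
L = np.linalg.norm(A, 2)                # spectral norm: Lipschitz constant of A
k1 = k2 = L / 2.0
f = lambda u, v: (A @ u) @ (v - u)

for _ in range(1000):
    u, v, z = rng.normal(size=(3, 4))
    lhs = f(u, z)
    rhs = (f(u, v) + f(v, z)
           + k1 * np.sum((u - v) ** 2) + k2 * np.sum((v - z) ** 2))
    assert lhs <= rhs + 1e-9            # Lipschitz-type inequality holds
```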
Lemma 1
([44]). Let $K$ be a nonempty, closed, and convex subset of a Hilbert space $E$ and $g : K \to \mathbb{R}$ be a convex, subdifferentiable, and lower semi-continuous function. An element $u \in K$ is a minimizer of $g$ if and only if $0 \in \partial g(u) + N_K(u)$, where $\partial g(u)$ and $N_K(u)$ denote the subdifferential of $g$ at $u$ and the normal cone of $K$ at $u$, respectively.
Lemma 2
([45]). Let $a, b \in E$ and $\mu \in \mathbb{R}$. Then, the following holds:
$$ \|\mu a + (1 - \mu) b\|^2 = \mu \|a\|^2 + (1 - \mu) \|b\|^2 - \mu (1 - \mu) \|a - b\|^2. $$
Lemma 3
([46]). Let $\{a_n\}$, $\{b_n\}$, and $\{c_n\}$ be sequences in $[0, +\infty)$ such that:
$$ a_{n+1} \le a_n + b_n (a_n - a_{n-1}) + c_n, \quad \forall n \ge 1, \quad \text{with} \quad \sum_{n=1}^{+\infty} c_n < +\infty, $$
and suppose there exists $b > 0$ such that $0 \le b_n \le b < 1$ for all $n \in \mathbb{N}$. Then, we have:
(i)
$\sum_{n=1}^{+\infty} [a_n - a_{n-1}]_+ < \infty$, where $[s]_+ := \max\{s, 0\}$;
(ii)
$\lim_{n \to +\infty} a_n = a^* \in [0, \infty)$.
Lemma 4
([47]). Let $\{\xi_n\}$ be a sequence in $E$ and $K \subseteq E$ such that:
(i)
for every $\xi \in K$, $\lim_{n \to \infty} \|\xi_n - \xi\|$ exists;
(ii)
every weak cluster point of the sequence $\{\xi_n\}$ belongs to $K$.
Then, $\{\xi_n\}$ converges weakly to an element of $K$.
Lemma 5
([48]). Let $\{p_n\}, \{q_n\} \subset [0, \infty)$ be sequences with $\sum_{n=1}^{+\infty} p_n = \infty$ and $\sum_{n=1}^{+\infty} p_n q_n < \infty$. Then, $\liminf_{n \to \infty} q_n = 0$.

3. An Algorithm for the Pseudomonotone Equilibrium Problem and Its Convergence Analysis

The first method is developed by adding an inertial term to Algorithm 1 ([34], p. 4). It is important to note that the algorithm in the paper [34] cannot be used practically without knowledge of the Lipschitz-type constants of the bifunction. To overcome this situation, we suggest a new step-size rule that does not depend on Lipschitz-type constants. Rather, it depends on certain previous iterations and updates on each iteration. In a situation where Lipschitz constants are unknown or difficult to calculate, this approach is very useful. The following is our first algorithm in more detail:
Algorithm 1 Two-step proximal iterative method for pseudomonotone EP.
  • Initialization: Choose $u_{-1}, v_{-1} \in K$, $\lambda_0 > 0$, and $\mu \in [0, g(\alpha))$. Set:
    $$ u_0 = \arg\min_{y \in K}\Big\{\lambda_0 f(v_{-1}, y) + \tfrac{1}{2}\|u_{-1} - y\|^2\Big\} \quad \text{and} \quad v_0 = \arg\min_{y \in K}\Big\{\lambda_0 f(v_{-1}, y) + \tfrac{1}{2}\|u_0 - y\|^2\Big\}. $$
  • Iterative steps: Now, $v_{n-1}, v_n, u_{n-1}, u_n \in K$ and $\lambda_n$ are known for $n \ge 0$.
  • Step 1: Construct a half-space:
    $$ \Pi_n = \{ z \in E : \langle u_n - \lambda_n \omega_n - v_n, z - v_n \rangle \le 0 \}, $$
    where $\omega_n \in \partial f(v_{n-1}, v_n)$, and then compute:
    $$ u_{n+1} = \operatorname{Prox}_{\lambda_n f(v_n, \cdot)}(t_n) = \arg\min_{y \in \Pi_n}\Big\{\lambda_n f(v_n, y) + \tfrac{1}{2}\|t_n - y\|^2\Big\}, $$
    where $t_n = u_n + \alpha_n (u_n - u_{n-1})$ and $\{\alpha_n\} \subset [0, \frac{1}{6})$ is a nondecreasing sequence.
  • Step 2: Set $d = f(v_{n-1}, u_{n+1}) - f(v_{n-1}, v_n) - f(v_n, u_{n+1})$ with:
    $$ \lambda_{n+1} = \begin{cases} \min\Big\{\lambda_n, \dfrac{\mu(\|v_{n-1} - v_n\|^2 + \|u_{n+1} - v_n\|^2)}{2d}\Big\} & \text{if } d > 0; \\ \lambda_n & \text{otherwise.} \end{cases} $$
    Compute:
    $$ v_{n+1} = \operatorname{Prox}_{\lambda_{n+1} f(v_n, \cdot)}(u_{n+1}) = \arg\min_{y \in K}\Big\{\lambda_{n+1} f(v_n, y) + \tfrac{1}{2}\|u_{n+1} - y\|^2\Big\}. $$
  • Step 3: If $u_{n+1} = v_n = t_n$, STOP; otherwise, set $n := n + 1$ and return to Step 1.
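For the variational-inequality case $f(u, v) = \langle G(u), v - u \rangle$ treated in Section 5, every step of Algorithm 1 becomes an explicit projection, which gives a compact numerical sketch. The operator $G(u) = Au$, the box $K$, and the parameter values are illustrative choices (with $\mu < (1 - 6\alpha)/3$ as required by Theorem 1), not data from the paper:

```python
import numpy as np

def proj_box(x):
    # metric projection onto K = [-1, 1]^2
    return np.clip(x, -1.0, 1.0)

def proj_halfspace(x, a, v):
    # projection onto Pi = { z : <a, z - v> <= 0 }
    s = a @ (x - v)
    return x if s <= 0 else x - (s / (a @ a)) * a

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # G(u) = A u is monotone; p* = 0
G = lambda u: A @ u

alpha, mu, lam = 0.1, 0.1, 0.3           # alpha < 1/6, mu < (1 - 6*alpha)/3
u_prev, v_prev = np.array([0.9, -0.7]), np.array([0.8, -0.6])
u = proj_box(u_prev - lam * G(v_prev))                   # u_0
v = proj_box(u - lam * G(v_prev))                        # v_0
for _ in range(1000):
    t = u + alpha * (u - u_prev)                         # inertial point t_n
    a = u - lam * G(v_prev) - v                          # normal of Pi_n
    u_next = proj_halfspace(t - lam * G(v), a, v)        # u_{n+1}
    # self-adaptive stepsize: d = <G(v_{n-1}) - G(v_n), u_{n+1} - v_n>
    d = (G(v_prev) - G(v)) @ (u_next - v)
    if d > 0:
        lam = min(lam, mu * (np.linalg.norm(v_prev - v) ** 2
                             + np.linalg.norm(u_next - v) ** 2) / (2 * d))
    v_next = proj_box(u_next - lam * G(v))               # v_{n+1}
    u_prev, v_prev, u, v = u, v, u_next, v_next

# u and v approach the solution p* = (0, 0)
```

Note how the stepsize update needs no Lipschitz constants of $G$: it only compares quantities already computed in the current and previous iterations.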
Assumption 1.
Let the bifunction $f : E \times E \to \mathbb{R}$ satisfy the following conditions:
(A1)
$f$ is pseudomonotone on $K$ with $f(u, u) = 0$ for all $u \in K$;
(A2)
$f$ satisfies the Lipschitz-type condition on $E$ with constants $k_1$ and $k_2$;
(A3)
$\limsup_{n \to \infty} f(u_n, y) \le f(p, y)$ for each $y \in K$ and each $\{u_n\} \subset K$ with $u_n \rightharpoonup p$;
(A4)
$f(u, \cdot)$ is subdifferentiable and convex on $E$ for each $u \in E$.
Lemma 6.
We have the following inequality from Algorithm 1.
$$ \lambda_n f(v_n, y) - \lambda_n f(v_n, u_{n+1}) \ge \langle t_n - u_{n+1}, y - u_{n+1} \rangle, \quad \forall y \in \Pi_n. $$
Proof. 
By the definition of $u_{n+1}$ together with Lemma 1, we have:
$$ 0 \in \partial \Big( \lambda_n f(v_n, \cdot) + \tfrac{1}{2}\|t_n - \cdot\|^2 \Big)(u_{n+1}) + N_{\Pi_n}(u_{n+1}). $$
Thus, for some $\omega \in \partial f(v_n, u_{n+1})$, there exists $\overline{\omega} \in N_{\Pi_n}(u_{n+1})$ such that:
$$ \lambda_n \omega + u_{n+1} - t_n + \overline{\omega} = 0, $$
which gives:
$$ \langle t_n - u_{n+1}, y - u_{n+1} \rangle = \lambda_n \langle \omega, y - u_{n+1} \rangle + \langle \overline{\omega}, y - u_{n+1} \rangle, \quad \forall y \in \Pi_n. $$
By $\overline{\omega} \in N_{\Pi_n}(u_{n+1})$, we have $\langle \overline{\omega}, y - u_{n+1} \rangle \le 0$ for all $y \in \Pi_n$. This implies that:
$$ \langle t_n - u_{n+1}, y - u_{n+1} \rangle \le \lambda_n \langle \omega, y - u_{n+1} \rangle, \quad \forall y \in \Pi_n. $$ (4)
From $\omega \in \partial f(v_n, u_{n+1})$ and the definition of the subdifferential, we have:
$$ f(v_n, y) - f(v_n, u_{n+1}) \ge \langle \omega, y - u_{n+1} \rangle, \quad \forall y \in E. $$ (5)
Combining (4) and (5), we obtain:
$$ \lambda_n f(v_n, y) - \lambda_n f(v_n, u_{n+1}) \ge \langle t_n - u_{n+1}, y - u_{n+1} \rangle, \quad \forall y \in \Pi_n. \ \square $$
Lemma 7.
We also have the following expression from Algorithm 1.
$$ \lambda_{n+1} f(v_n, y) - \lambda_{n+1} f(v_n, v_{n+1}) \ge \langle u_{n+1} - v_{n+1}, y - v_{n+1} \rangle, \quad \forall y \in K. $$
Proof. 
The proof is identical to the proof of Lemma 6. □
Remark 1.
By taking $u_{n+1} = v_n = t_n$ or $u_{n+1} = v_n = v_{n+1}$ in Algorithm 1, we get $v_n \in EP(f, K)$ from Lemma 6 and Lemma 7, respectively.
Lemma 8.
We have the following relationship from Algorithm 1.
$$ \lambda_n \big( f(v_{n-1}, u_{n+1}) - f(v_{n-1}, v_n) \big) \ge \langle u_n - v_n, u_{n+1} - v_n \rangle. $$
Proof. 
Since $u_{n+1} \in \Pi_n$, we have $\langle u_n - \lambda_n \omega_n - v_n, u_{n+1} - v_n \rangle \le 0$. Thus, we get:
$$ \lambda_n \langle \omega_n, u_{n+1} - v_n \rangle \ge \langle u_n - v_n, u_{n+1} - v_n \rangle. $$ (6)
By $\omega_n \in \partial f(v_{n-1}, v_n)$ and the definition of the subdifferential, we get:
$$ f(v_{n-1}, y) - f(v_{n-1}, v_n) \ge \langle \omega_n, y - v_n \rangle, \quad \forall y \in E. $$
Substituting $y = u_{n+1}$ in the above inequality, we get:
$$ f(v_{n-1}, u_{n+1}) - f(v_{n-1}, v_n) \ge \langle \omega_n, u_{n+1} - v_n \rangle. $$ (7)
Combining (6) and (7), we have:
$$ \lambda_n \big( f(v_{n-1}, u_{n+1}) - f(v_{n-1}, v_n) \big) \ge \langle u_n - v_n, u_{n+1} - v_n \rangle. \ \square $$
Lemma 9.
Let $f : E \times E \to \mathbb{R}$ satisfy Assumption 1. Then, for each $p^* \in EP(f, K)$, we have:
$$ \|u_{n+1} - p^*\|^2 \le \|t_n - p^*\|^2 - \Big(1 - \frac{2\mu\lambda_n}{\lambda_{n+1}}\Big)\|u_n - v_n\|^2 - \Big(1 - \frac{\mu\lambda_n}{\lambda_{n+1}}\Big)\|u_{n+1} - v_n\|^2 + \frac{2\mu\lambda_n}{\lambda_{n+1}}\|u_n - v_{n-1}\|^2 - \|u_{n+1} - t_n\|^2 + \|u_n - u_{n+1}\|^2. $$
Proof. 
It follows from Lemma 6 with $y = p^*$ that:
$$ \lambda_n f(v_n, p^*) - \lambda_n f(v_n, u_{n+1}) \ge \langle t_n - u_{n+1}, p^* - u_{n+1} \rangle. $$
Since $f(p^*, v_n) \ge 0$, Condition (A1) gives $f(v_n, p^*) \le 0$. This implies that:
$$ \langle t_n - u_{n+1}, u_{n+1} - p^* \rangle \ge \lambda_n f(v_n, u_{n+1}). $$ (9)
By the definition of $\lambda_{n+1}$, we have:
$$ f(v_{n-1}, u_{n+1}) - f(v_{n-1}, v_n) - f(v_n, u_{n+1}) \le \frac{\mu(\|v_{n-1} - v_n\|^2 + \|u_{n+1} - v_n\|^2)}{2\lambda_{n+1}}. $$
Multiplying both sides by $\lambda_n > 0$ gives:
$$ \lambda_n f(v_n, u_{n+1}) \ge \lambda_n \big( f(v_{n-1}, u_{n+1}) - f(v_{n-1}, v_n) \big) - \frac{\mu\lambda_n(\|v_{n-1} - v_n\|^2 + \|u_{n+1} - v_n\|^2)}{2\lambda_{n+1}}. $$ (10)
Combining (9) and (10), we have:
$$ \langle t_n - u_{n+1}, u_{n+1} - p^* \rangle \ge \lambda_n \big\{ f(v_{n-1}, u_{n+1}) - f(v_{n-1}, v_n) \big\} - \frac{\mu\lambda_n}{2\lambda_{n+1}}\|v_{n-1} - v_n\|^2 - \frac{\mu\lambda_n}{2\lambda_{n+1}}\|u_{n+1} - v_n\|^2. $$ (11)
Since $u_{n+1} \in \Pi_n$, then from Lemma 8, we have:
$$ \lambda_n \big( f(v_{n-1}, u_{n+1}) - f(v_{n-1}, v_n) \big) \ge \langle u_n - v_n, u_{n+1} - v_n \rangle. $$ (12)
From (11) and (12), we obtain:
$$ \langle t_n - u_{n+1}, u_{n+1} - p^* \rangle \ge \langle u_n - v_n, u_{n+1} - v_n \rangle - \frac{\mu\lambda_n}{2\lambda_{n+1}}\|v_{n-1} - v_n\|^2 - \frac{\mu\lambda_n}{2\lambda_{n+1}}\|u_{n+1} - v_n\|^2. $$ (13)
We have the following facts:
$$ 2\langle t_n - u_{n+1}, u_{n+1} - p^* \rangle = \|t_n - p^*\|^2 - \|u_{n+1} - t_n\|^2 - \|u_{n+1} - p^*\|^2, $$ (14)
$$ 2\langle v_n - u_n, v_n - u_{n+1} \rangle = \|u_n - v_n\|^2 + \|u_{n+1} - v_n\|^2 - \|u_n - u_{n+1}\|^2, $$ (15)
and:
$$ \|v_{n-1} - v_n\|^2 \le \big( \|v_{n-1} - u_n\| + \|u_n - v_n\| \big)^2 \le 2\|v_{n-1} - u_n\|^2 + 2\|u_n - v_n\|^2. $$ (16)
Combining the above facts with Expression (13) completes the proof. $\square$
Theorem 1.
Let the sequences $\{t_n\}$, $\{v_n\}$, and $\{u_n\}$ be generated by Algorithm 1, where:
$$ 0 < \mu < \frac{1 - 6\alpha}{3} \quad \text{and} \quad 0 \le \alpha_n \le \alpha < \frac{1}{6}. $$
Then, $\{t_n\}$, $\{v_n\}$, and $\{u_n\}$ converge weakly to $p^* \in EP(f, K)$.
Proof. 
Adding the value $\frac{2\mu\lambda_{n+1}}{\lambda_{n+2}}\|u_{n+1} - v_n\|^2$ to both sides of Lemma 9, we have:
$$ \|u_{n+1} - p^*\|^2 + \frac{2\mu\lambda_{n+1}}{\lambda_{n+2}}\|u_{n+1} - v_n\|^2 \le \|t_n - p^*\|^2 - \Big(1 - \frac{2\mu\lambda_n}{\lambda_{n+1}}\Big)\|u_n - v_n\|^2 - \Big(1 - \frac{\mu\lambda_n}{\lambda_{n+1}}\Big)\|u_{n+1} - v_n\|^2 + \frac{2\mu\lambda_n}{\lambda_{n+1}}\|u_n - v_{n-1}\|^2 - \|u_{n+1} - t_n\|^2 + \|u_{n+1} - u_n\|^2 + \frac{2\mu\lambda_{n+1}}{\lambda_{n+2}}\|u_{n+1} - v_n\|^2 $$ (17)
$$ \le \|t_n - p^*\|^2 - \Big(1 - \frac{2\mu\lambda_n}{\lambda_{n+1}}\Big)\|u_n - v_n\|^2 - \Big(1 - \frac{\mu\lambda_n}{\lambda_{n+1}} - \frac{2\mu\lambda_n}{\lambda_{n+2}}\Big)\|u_{n+1} - v_n\|^2 + \frac{2\mu\lambda_n}{\lambda_{n+1}}\|u_n - v_{n-1}\|^2 - \|u_{n+1} - t_n\|^2 + \|u_{n+1} - u_n\|^2 $$
$$ \le \|t_n - p^*\|^2 - \Big(1 - \frac{\mu\lambda_n}{\lambda_{n+1}} - \frac{2\mu\lambda_n}{\lambda_{n+2}}\Big)\big(\|u_{n+1} - v_n\|^2 + \|u_n - v_n\|^2\big) + \frac{2\mu\lambda_n}{\lambda_{n+1}}\|u_n - v_{n-1}\|^2 - \|u_{n+1} - t_n\|^2 + \|u_{n+1} - u_n\|^2 $$
$$ \le \|t_n - p^*\|^2 - \frac{1}{2}\Big(1 - \frac{\mu\lambda_n}{\lambda_{n+1}} - \frac{2\mu\lambda_n}{\lambda_{n+2}}\Big)\big(\|u_{n+1} - v_n\| + \|u_n - v_n\|\big)^2 + \frac{2\mu\lambda_n}{\lambda_{n+1}}\|u_n - v_{n-1}\|^2 - \|u_{n+1} - t_n\|^2 + \|u_{n+1} - u_n\|^2 $$
$$ \le \|t_n - p^*\|^2 - \frac{1}{2}\Big(1 - \frac{\mu\lambda_n}{\lambda_{n+1}} - \frac{2\mu\lambda_n}{\lambda_{n+2}}\Big)\|u_{n+1} - u_n\|^2 + \frac{2\mu\lambda_n}{\lambda_{n+1}}\|u_n - v_{n-1}\|^2 - \|u_{n+1} - t_n\|^2 + \|u_{n+1} - u_n\|^2. $$ (18)
By the definition of $t_n$ in Algorithm 1 and Lemma 2, we get:
$$ \|t_n - p^*\|^2 = \|u_n + \alpha_n(u_n - u_{n-1}) - p^*\|^2 = \|(1 + \alpha_n)(u_n - p^*) - \alpha_n(u_{n-1} - p^*)\|^2 = (1 + \alpha_n)\|u_n - p^*\|^2 - \alpha_n\|u_{n-1} - p^*\|^2 + \alpha_n(1 + \alpha_n)\|u_n - u_{n-1}\|^2. $$ (19)
The definition of $t_n$ with the Cauchy–Schwarz inequality gives:
$$ \|u_{n+1} - t_n\|^2 = \|u_{n+1} - u_n - \alpha_n(u_n - u_{n-1})\|^2 = \|u_{n+1} - u_n\|^2 + \alpha_n^2\|u_n - u_{n-1}\|^2 - 2\alpha_n\langle u_{n+1} - u_n, u_n - u_{n-1} \rangle \ge \|u_{n+1} - u_n\|^2 + \alpha_n^2\|u_n - u_{n-1}\|^2 - 2\alpha_n\|u_{n+1} - u_n\|\,\|u_n - u_{n-1}\| $$ (20)
$$ \ge \|u_{n+1} - u_n\|^2 + \alpha_n^2\|u_n - u_{n-1}\|^2 - \alpha_n\|u_{n+1} - u_n\|^2 - \alpha_n\|u_n - u_{n-1}\|^2 = (1 - \alpha_n)\|u_{n+1} - u_n\|^2 + (\alpha_n^2 - \alpha_n)\|u_n - u_{n-1}\|^2. $$ (21)
Thus, Expression (18) with (19) and (21) turns into:
$$ \|u_{n+1} - p^*\|^2 + \frac{2\mu\lambda_{n+1}}{\lambda_{n+2}}\|u_{n+1} - v_n\|^2 \le (1 + \alpha_n)\|u_n - p^*\|^2 - \alpha_n\|u_{n-1} - p^*\|^2 + \alpha_n(1 + \alpha_n)\|u_n - u_{n-1}\|^2 - \frac{1}{2}\Big(1 - \frac{\mu\lambda_n}{\lambda_{n+1}} - \frac{2\mu\lambda_n}{\lambda_{n+2}}\Big)\|u_{n+1} - u_n\|^2 + \frac{2\mu\lambda_n}{\lambda_{n+1}}\|u_n - v_{n-1}\|^2 + \|u_{n+1} - u_n\|^2 - (1 - \alpha_n)\|u_{n+1} - u_n\|^2 - (\alpha_n^2 - \alpha_n)\|u_n - u_{n-1}\|^2 $$ (22)
$$ \le (1 + \alpha_n)\|u_n - p^*\|^2 - \alpha_n\|u_{n-1} - p^*\|^2 + \frac{2\mu\lambda_n}{\lambda_{n+1}}\|u_n - v_{n-1}\|^2 - \Big(\frac{1}{2} - \frac{\mu\lambda_n}{2\lambda_{n+1}} - \frac{\mu\lambda_n}{\lambda_{n+2}} - \alpha_n\Big)\|u_{n+1} - u_n\|^2 + 2\alpha_n\|u_n - u_{n-1}\|^2 $$ (23)
$$ \le (1 + \alpha_n)\|u_n - p^*\|^2 - \alpha_n\|u_{n-1} - p^*\|^2 + \frac{2\mu\lambda_n}{\lambda_{n+1}}\|u_n - v_{n-1}\|^2 - \kappa_n\|u_{n+1} - u_n\|^2 + \xi_n\|u_n - u_{n-1}\|^2, $$ (24)
where $\kappa_n := \frac{1}{2} - \frac{\mu\lambda_n}{2\lambda_{n+1}} - \frac{\mu\lambda_n}{\lambda_{n+2}} - \alpha_n$ and $\xi_n := 2\alpha_n$. Now, substitute:
$$ \Gamma_n = \|u_n - p^*\|^2 - \alpha_n\|u_{n-1} - p^*\|^2 + \frac{2\mu\lambda_n}{\lambda_{n+1}}\|u_n - v_{n-1}\|^2 + \xi_n\|u_n - u_{n-1}\|^2. $$
Next, we compute:
$$ \Gamma_{n+1} - \Gamma_n = \|u_{n+1} - p^*\|^2 - \alpha_{n+1}\|u_n - p^*\|^2 + \frac{2\mu\lambda_{n+1}}{\lambda_{n+2}}\|u_{n+1} - v_n\|^2 + \xi_{n+1}\|u_{n+1} - u_n\|^2 - \|u_n - p^*\|^2 + \alpha_n\|u_{n-1} - p^*\|^2 - \frac{2\mu\lambda_n}{\lambda_{n+1}}\|u_n - v_{n-1}\|^2 - \xi_n\|u_n - u_{n-1}\|^2 \le \|u_{n+1} - p^*\|^2 - (1 + \alpha_n)\|u_n - p^*\|^2 + \alpha_n\|u_{n-1} - p^*\|^2 + \frac{2\mu\lambda_{n+1}}{\lambda_{n+2}}\|u_{n+1} - v_n\|^2 + \xi_{n+1}\|u_{n+1} - u_n\|^2 - \xi_n\|u_n - u_{n-1}\|^2 - \frac{2\mu\lambda_n}{\lambda_{n+1}}\|u_n - v_{n-1}\|^2. $$ (25)
From Expression (24) with (25), we obtain:
$$ \Gamma_{n+1} - \Gamma_n \le -\kappa_n\|u_{n+1} - u_n\|^2 + \xi_{n+1}\|u_{n+1} - u_n\|^2 = -(\kappa_n - \xi_{n+1})\|u_{n+1} - u_n\|^2, $$ (26)
and we continue to evaluate:
$$ \kappa_n - \xi_{n+1} = \frac{1}{2} - \frac{\mu\lambda_n}{2\lambda_{n+1}} - \frac{\mu\lambda_n}{\lambda_{n+2}} - \alpha_n - 2\alpha_{n+1} \ge \frac{1}{2} - 3\alpha - \mu\Big(\frac{\lambda_n}{2\lambda_{n+1}} + \frac{\lambda_n}{\lambda_{n+2}}\Big). $$ (27)
Since the nonincreasing sequence $\{\lambda_n\}$ converges to some $\lambda > 0$, the above implies that:
$$ \frac{1}{2} - 3\alpha - \mu\Big(\frac{\lambda_n}{2\lambda_{n+1}} + \frac{\lambda_n}{\lambda_{n+2}}\Big) \longrightarrow \frac{1}{2} - 3\alpha - \frac{3}{2}\mu > 0 \quad \text{as } n \to \infty. $$
Due to our supposition and the above, there exist $\delta > 0$ and $n_0 \in \mathbb{N}$ such that:
$$ \kappa_n - \xi_{n+1} \ge \delta, \quad \forall n \ge n_0. $$ (28)
Thus, Expression (26) with (28) implies that:
$$ \Gamma_{n+1} - \Gamma_n \le -\delta\|u_{n+1} - u_n\|^2 \le 0. $$ (29)
Hence, the sequence $\{\Gamma_n\}$ is nonincreasing for $n \ge n_0$. By the value of $\Gamma_{n+1}$, we have:
$$ \Gamma_{n+1} = \|u_{n+1} - p^*\|^2 - \alpha_{n+1}\|u_n - p^*\|^2 + \frac{2\mu\lambda_{n+1}}{\lambda_{n+2}}\|u_{n+1} - v_n\|^2 + \xi_{n+1}\|u_{n+1} - u_n\|^2 \ge -\alpha_{n+1}\|u_n - p^*\|^2. $$ (30)
From the value of $\Gamma_n$, we have:
$$ \Gamma_n = \|u_n - p^*\|^2 - \alpha_n\|u_{n-1} - p^*\|^2 + \frac{2\mu\lambda_n}{\lambda_{n+1}}\|u_n - v_{n-1}\|^2 + \xi_n\|u_n - u_{n-1}\|^2 \ge \|u_n - p^*\|^2 - \alpha_n\|u_{n-1} - p^*\|^2. $$ (31)
Expression (31), for $n \ge n_0$, implies that:
$$ \|u_n - p^*\|^2 \le \Gamma_n + \alpha_n\|u_{n-1} - p^*\|^2 \le \Gamma_{n_0} + \alpha\|u_{n-1} - p^*\|^2 \le \cdots \le \Gamma_{n_0}\big(\alpha^{n - n_0} + \cdots + 1\big) + \alpha^{n - n_0}\|u_{n_0} - p^*\|^2 \le \frac{\Gamma_{n_0}}{1 - \alpha} + \alpha^{n - n_0}\|u_{n_0} - p^*\|^2. $$ (32)
It follows from Expressions (30) and (32) that:
$$ -\Gamma_{n+1} \le \alpha_{n+1}\|u_n - p^*\|^2 \le \alpha\|u_n - p^*\|^2 \le \frac{\alpha\Gamma_{n_0}}{1 - \alpha} + \alpha^{n - n_0 + 1}\|u_{n_0} - p^*\|^2. $$ (33)
It follows from Expressions (29) and (33) that:
$$ \delta \sum_{n = n_0}^{k} \|u_{n+1} - u_n\|^2 \le \Gamma_{n_0} - \Gamma_{k+1} \le \Gamma_{n_0} + \frac{\alpha\Gamma_{n_0}}{1 - \alpha} + \alpha^{k - n_0 + 1}\|u_{n_0} - p^*\|^2 \le \frac{\Gamma_{n_0}}{1 - \alpha} + \|u_{n_0} - p^*\|^2, $$ (34)
and letting $k \to \infty$ in Expression (34), we obtain:
$$ \sum_{n = 1}^{\infty} \|u_{n+1} - u_n\|^2 < +\infty, \quad \text{which implies that} \quad \|u_{n+1} - u_n\| \to 0 \ \text{as } n \to \infty. $$ (35)
From Expressions (20) and (35), we have:
$$ \lim_{n \to \infty} \|u_{n+1} - t_n\| = 0, $$ (36)
and also:
$$ 0 \le \|u_n - t_n\| \le \|u_n - u_{n+1}\| + \|u_{n+1} - t_n\| \to 0 \quad \text{as } n \to \infty. $$ (37)
Let $\Delta_n = \|u_n - p^*\|^2 - \alpha_n\|u_{n-1} - p^*\|^2 + \frac{2\mu\lambda_n}{\lambda_{n+1}}\|u_n - v_{n-1}\|^2$, so that $\Gamma_n = \Delta_n + \xi_n\|u_n - u_{n-1}\|^2$. Expression (33) converts into:
$$ -\Delta_{n+1} \le \frac{\alpha\Gamma_{n_0}}{1 - \alpha} + \alpha^{n - n_0 + 1}\|u_{n_0} - p^*\|^2 + \xi_{n+1}\|u_{n+1} - u_n\|^2. $$ (38)
By Expressions (16) and (19) with (38), for $n \ge n_0$, we have:
$$ \Big(1 - \frac{\mu\lambda_n}{\lambda_{n+1}} - \frac{2\mu\lambda_n}{\lambda_{n+2}}\Big)\big(\|u_n - v_n\|^2 + \|u_{n+1} - v_n\|^2\big) \le \Delta_n - \Delta_{n+1} + \alpha_n(1 + \alpha_n)\|u_n - u_{n-1}\|^2 + \|u_{n+1} - u_n\|^2 \le \Delta_n - \Delta_{n+1} + \alpha(1 + \alpha)\|u_n - u_{n-1}\|^2 + \|u_{n+1} - u_n\|^2. $$ (39)
Due to the assumption on $\mu$, there exists $\epsilon > 0$ such that:
$$ 1 - \frac{\mu\lambda_n}{\lambda_{n+1}} - \frac{2\mu\lambda_n}{\lambda_{n+2}} \ge \epsilon > 0, \quad \forall n \ge n_0. $$ (40)
Let us fix $k \ge n_0$ and sum Expressions (38) and (39) for $n = n_0, n_0 + 1, \dots, k$. We have:
$$ \epsilon \sum_{n = n_0}^{k} \|u_n - v_n\|^2 + \epsilon \sum_{n = n_0}^{k} \|u_{n+1} - v_n\|^2 \le \Delta_{n_0} - \Delta_{k+1} + \alpha(1 + \alpha)\sum_{n = n_0}^{k}\|u_n - u_{n-1}\|^2 + \sum_{n = n_0}^{k}\|u_{n+1} - u_n\|^2 \le \Delta_{n_0} + \frac{\alpha\Gamma_{n_0}}{1 - \alpha} + \alpha^{k - n_0 + 1}\|u_{n_0} - p^*\|^2 + \xi_{k+1}\|u_{k+1} - u_k\|^2 + \alpha(1 + \alpha)\sum_{n = n_0}^{k}\|u_n - u_{n-1}\|^2 + \sum_{n = n_0}^{k}\|u_{n+1} - u_n\|^2, $$
and letting $k \to \infty$ in the above expression leads to:
$$ \sum_n \|u_n - v_n\|^2 < \infty, \quad \sum_n \|u_{n+1} - v_n\|^2 < \infty. $$ (41)
The above expression also implies that:
$$ \lim_{n \to \infty} \|u_n - v_n\| = \lim_{n \to \infty} \|u_{n+1} - v_n\| = 0. $$ (42)
By using the triangle inequality with Expression (42), we have:
$$ 0 \le \|v_n - v_{n-1}\| \le \|u_n - v_n\| + \|u_n - v_{n-1}\| \to 0 \quad \text{as } n \to \infty. $$ (43)
Next, from Expression (14) with (19):
$$ \|u_{n+1} - p^*\|^2 \le (1 + \alpha_n)\|u_n - p^*\|^2 - \alpha_n\|u_{n-1} - p^*\|^2 + \alpha(1 + \alpha)\|u_n - u_{n-1}\|^2 + \frac{2\mu\lambda_n}{\lambda_{n+1}}\|u_n - v_{n-1}\|^2 + \|u_{n+1} - u_n\|^2, $$ (44)
and the above expression with (35), (41), and Lemma 3 implies that the limits of $\|u_n - p^*\|$, $\|t_n - p^*\|$, and $\|v_n - p^*\|$ exist for each $p^* \in EP(f, K)$; hence, the sequences $\{u_n\}$, $\{t_n\}$, and $\{v_n\}$ are bounded. Next, we prove that each weak sequential limit point of the sequence $\{u_n\}$ belongs to $EP(f, K)$. Let $z$ be any sequential weak cluster point of $\{u_n\}$, i.e., there is a subsequence $\{u_{n_k}\}$ of $\{u_n\}$ converging weakly to $z$; then, $\{v_{n_k}\}$ also converges weakly to $z$. Our aim is to show that $z \in EP(f, K)$. Using Lemma 6, the definition of $\lambda_{n+1}$ in (10), and Lemma 8, we obtain:
$$ \lambda_{n_k} f(v_{n_k}, y) \ge \lambda_{n_k} f(v_{n_k}, u_{n_k + 1}) + \langle t_{n_k} - u_{n_k + 1}, y - u_{n_k + 1} \rangle \ge \lambda_{n_k} f(v_{n_k - 1}, u_{n_k + 1}) - \lambda_{n_k} f(v_{n_k - 1}, v_{n_k}) - \frac{\mu\lambda_{n_k}}{2\lambda_{n_k + 1}}\|v_{n_k} - v_{n_k - 1}\|^2 - \frac{\mu\lambda_{n_k}}{2\lambda_{n_k + 1}}\|v_{n_k} - u_{n_k + 1}\|^2 + \langle t_{n_k} - u_{n_k + 1}, y - u_{n_k + 1} \rangle \ge \langle u_{n_k} - v_{n_k}, u_{n_k + 1} - v_{n_k} \rangle - \frac{\mu\lambda_{n_k}}{2\lambda_{n_k + 1}}\|v_{n_k} - v_{n_k - 1}\|^2 - \frac{\mu\lambda_{n_k}}{2\lambda_{n_k + 1}}\|v_{n_k} - u_{n_k + 1}\|^2 + \langle t_{n_k} - u_{n_k + 1}, y - u_{n_k + 1} \rangle, $$ (45)
where $y$ is an arbitrary element of $\Pi_n$. By (36), (37), (42), and (43), together with the boundedness of $\{u_n\}$, the right-hand side of the above inequality tends to zero. Since $\lambda_{n_k} \ge \lambda > 0$, by Assumption (A3) and $v_{n_k} \rightharpoonup z$, we get:
$$ 0 \le \limsup_{k \to \infty} f(v_{n_k}, y) \le f(z, y), \quad \forall y \in \Pi_n. $$ (46)
Since $z \in K \subset \Pi_n$, we have $f(z, y) \ge 0$ for all $y \in K$. It follows that $z$ belongs to $EP(f, K)$. By Lemma 4, the sequences $\{t_n\}$, $\{u_n\}$, and $\{v_n\}$ converge weakly to $p^*$ as $n \to \infty$. $\square$

4. An Algorithm for the Strongly Pseudomonotone Equilibrium Problem and Its Convergence Analysis

The second algorithm is another variant of Algorithm 1 in [34], designed to solve the strongly pseudomonotone equilibrium problem. The advantage of this algorithm is that it provides strong convergence by using a diminishing stepsize sequence. Let $\{\lambda_n\} \subset (0, +\infty)$ be a sequence satisfying the following:
$$ (S1): \lim_{n \to \infty} \lambda_n = 0 \quad \text{and} \quad (S2): \sum_{n = 1}^{\infty} \lambda_n = +\infty. $$ (48)
Assumption 2.
Let the bifunction $f : E \times E \to \mathbb{R}$ satisfy the following conditions:
(B1)
$f$ is strongly pseudomonotone on $K$ with $f(u, u) = 0$ for all $u \in K$;
(B2)
$f$ satisfies the Lipschitz-type condition on $E$ with constants $k_1$ and $k_2$;
(B3)
$f(u, \cdot)$ is subdifferentiable and convex on $E$ for each fixed $u \in E$.
Algorithm 2 is described below.
Algorithm 2 Two-step proximal iterative method for strongly pseudomonotone EP.
  • Initialization: Choose $u_{-1}, v_{-1} \in E$, $\alpha_n \in [0, \frac{1}{6})$, and $\{\lambda_n\}$ satisfying (48). Set:
    $$ u_0 = \operatorname{Prox}_{\lambda_0 f(v_{-1}, \cdot)}(u_{-1}) = \arg\min_{y \in K}\Big\{\lambda_0 f(v_{-1}, y) + \tfrac{1}{2}\|u_{-1} - y\|^2\Big\}, $$
    and:
    $$ v_0 = \operatorname{Prox}_{\lambda_0 f(v_{-1}, \cdot)}(u_0) = \arg\min_{y \in K}\Big\{\lambda_0 f(v_{-1}, y) + \tfrac{1}{2}\|u_0 - y\|^2\Big\}. $$
  • Iterative steps: Given $u_{n-1}, v_{n-1}, u_n$, and $v_n$ for $n \ge 0$, construct a half-space:
    $$ \Pi_n = \{ z \in E : \langle u_n - \lambda_n \omega_{n-1} - v_n, z - v_n \rangle \le 0 \}, $$
    where $\omega_{n-1} \in \partial f(v_{n-1}, v_n)$ and $t_n = u_n + \alpha_n(u_n - u_{n-1})$.
  • Step 1: Compute:
    $$ u_{n+1} = \operatorname{Prox}_{\lambda_n f(v_n, \cdot)}(t_n) = \arg\min_{y \in \Pi_n}\Big\{\lambda_n f(v_n, y) + \tfrac{1}{2}\|t_n - y\|^2\Big\}. $$
  • Step 2: Compute:
    $$ v_{n+1} = \operatorname{Prox}_{\lambda_{n+1} f(v_n, \cdot)}(u_{n+1}) = \arg\min_{y \in K}\Big\{\lambda_{n+1} f(v_n, y) + \tfrac{1}{2}\|u_{n+1} - y\|^2\Big\}. $$
  • Step 3: If $u_{n+1} = v_n = t_n$, STOP; otherwise, set $n := n + 1$ and go back to Step 1.
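As with Algorithm 1, the strongly pseudomonotone variational-inequality case makes Algorithm 2 fully explicit. In the sketch below, $G(u) = u$ (strongly monotone, hence strongly pseudomonotone, with $\gamma = 1$), $K = [-2, 2]^2$, and $\lambda_n = 1/(n + 2)$, which satisfies (S1) and (S2); all concrete values are illustrative choices, not data from the paper:

```python
import numpy as np

def proj_box(x):
    # metric projection onto K = [-2, 2]^2
    return np.clip(x, -2.0, 2.0)

def proj_halfspace(x, a, v):
    # projection onto Pi = { z : <a, z - v> <= 0 }
    s = a @ (x - v)
    return x if s <= 0 else x - (s / (a @ a)) * a

G = lambda u: u                      # strongly pseudomonotone operator; p* = 0
alpha = 0.1                          # alpha_n = alpha < 1/6
lam = lambda n: 1.0 / (n + 2)        # diminishing, non-summable stepsizes

u_prev = np.array([1.5, -1.2])       # u_{-1}
v_prev = np.array([1.4, -1.0])       # v_{-1}
u = proj_box(u_prev - lam(0) * G(v_prev))        # u_0
v = proj_box(u - lam(0) * G(v_prev))             # v_0
for n in range(3000):
    t = u + alpha * (u - u_prev)                 # inertial point t_n
    a = u - lam(n) * G(v_prev) - v               # normal of Pi_n
    u_next = proj_halfspace(t - lam(n) * G(v), a, v)
    v_next = proj_box(u_next - lam(n + 1) * G(v))
    u_prev, v_prev, u, v = u, v, u_next, v_next

# strong convergence: u tends to the solution p* = (0, 0)
```

Note that, unlike the sketch for Algorithm 1, no per-iteration stepsize rule is evaluated; the diminishing sequence alone drives the (slower but strong) convergence.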
Lemma 10.
Assume $f : E \times E \to \mathbb{R}$ satisfies Assumption 2. For each $p^* \in EP(f, K)$, we have:
$$ \|u_{n+1} - p^*\|^2 \le \|t_n - p^*\|^2 - (1 - 4k_1\lambda_n)\|u_n - v_n\|^2 - (1 - 2k_2\lambda_n)\|u_{n+1} - v_n\|^2 + 4k_1\lambda_n\|u_n - v_{n-1}\|^2 - 2\gamma\lambda_n\|v_n - p^*\|^2 - \|u_{n+1} - t_n\|^2 + \|u_{n+1} - u_n\|^2. $$ (49)
Proof. 
Follow the same steps as in the proof of Lemma 9; the term $-2\gamma\lambda_n\|v_n - p^*\|^2$ appears due to the strong pseudomonotonicity of the bifunction. □
Theorem 2.
Assume that the bifunction $f : E \times E \to \mathbb{R}$ satisfies Assumption 2, and let $\{u_n\}$ be the sequence in $E$ generated by Algorithm 2, where $\{\alpha_n\}$ is nondecreasing and $0 \le \alpha_n \le \alpha < \frac{1}{6}$. Then, the sequences $\{u_n\}$, $\{v_n\}$, and $\{t_n\}$ converge strongly to $p^* \in EP(f, K)$.
Proof. 
By Lemma 10, adding $4k_1\lambda_n\|u_{n+1} - v_n\|^2$ on both sides:
$$ \|u_{n+1} - p^*\|^2 + 4k_1\lambda_n\|u_{n+1} - v_n\|^2 \le \|t_n - p^*\|^2 - (1 - 4k_1\lambda_n)\|u_n - v_n\|^2 - (1 - 2k_2\lambda_n)\|u_{n+1} - v_n\|^2 + 4k_1\lambda_n\|u_n - v_{n-1}\|^2 - 2\gamma\lambda_n\|v_n - p^*\|^2 - \|u_{n+1} - t_n\|^2 + \|u_{n+1} - u_n\|^2 + 4k_1\lambda_n\|u_{n+1} - v_n\|^2 $$
$$ = \|t_n - p^*\|^2 - (1 - 4k_1\lambda_n)\|u_n - v_n\|^2 - (1 - 2k_2\lambda_n - 4k_1\lambda_n)\|u_{n+1} - v_n\|^2 + 4k_1\lambda_n\|u_n - v_{n-1}\|^2 - 2\gamma\lambda_n\|v_n - p^*\|^2 - \|u_{n+1} - t_n\|^2 + \|u_{n+1} - u_n\|^2 $$
$$ \le \|t_n - p^*\|^2 - \frac{1}{2}(1 - 2k_2\lambda_n - 4k_1\lambda_n)\big(\|u_{n+1} - v_n\| + \|u_n - v_n\|\big)^2 + 4k_1\lambda_n\|u_n - v_{n-1}\|^2 - 2\gamma\lambda_n\|v_n - p^*\|^2 - \|u_{n+1} - t_n\|^2 + \|u_{n+1} - u_n\|^2 $$
$$ \le \|t_n - p^*\|^2 - \frac{1}{2}(1 - 2k_2\lambda_n - 4k_1\lambda_n)\|u_{n+1} - u_n\|^2 + 4k_1\lambda_n\|u_n - v_{n-1}\|^2 - 2\gamma\lambda_n\|v_n - p^*\|^2 - \|u_{n+1} - t_n\|^2 + \|u_{n+1} - u_n\|^2. $$ (50)
From the value of $t_n$ in Algorithm 2, we have:
$$ \|t_n - p^*\|^2 = (1 + \alpha_n)\|u_n - p^*\|^2 - \alpha_n\|u_{n-1} - p^*\|^2 + \alpha_n(1 + \alpha_n)\|u_n - u_{n-1}\|^2. $$ (51)
By the definition of $t_n$, we have:
$$ \|u_{n+1} - t_n\|^2 = \|u_{n+1} - u_n\|^2 + \alpha_n^2\|u_n - u_{n-1}\|^2 - 2\alpha_n\langle u_{n+1} - u_n, u_n - u_{n-1} \rangle $$ (52)
$$ \ge (1 - \alpha_n)\|u_{n+1} - u_n\|^2 + (\alpha_n^2 - \alpha_n)\|u_n - u_{n-1}\|^2. $$ (53)
Thus, Expression (50) with (51) and (53) leads to:
$$ \|u_{n+1} - p^*\|^2 + 4k_1\lambda_n\|u_{n+1} - v_n\|^2 \le (1 + \alpha_n)\|u_n - p^*\|^2 - \alpha_n\|u_{n-1} - p^*\|^2 + \alpha_n(1 + \alpha_n)\|u_n - u_{n-1}\|^2 - \frac{1}{2}(1 - 2k_2\lambda_n - 4k_1\lambda_n)\|u_{n+1} - u_n\|^2 + 4k_1\lambda_n\|u_n - v_{n-1}\|^2 - 2\gamma\lambda_n\|v_n - p^*\|^2 - (1 - \alpha_n)\|u_{n+1} - u_n\|^2 - (\alpha_n^2 - \alpha_n)\|u_n - u_{n-1}\|^2 + \|u_{n+1} - u_n\|^2 $$ (54)
$$ \le (1 + \alpha_n)\|u_n - p^*\|^2 - \alpha_n\|u_{n-1} - p^*\|^2 + 4k_1\lambda_n\|u_n - v_{n-1}\|^2 - 2\gamma\lambda_n\|v_n - p^*\|^2 - \Big(\frac{1}{2} - k_2\lambda_n - 2k_1\lambda_n - \alpha_n\Big)\|u_{n+1} - u_n\|^2 + 2\alpha_n\|u_n - u_{n-1}\|^2 $$ (55)
$$ \le (1 + \alpha_n)\|u_n - p^*\|^2 - \alpha_n\|u_{n-1} - p^*\|^2 + 4k_1\lambda_n\|u_n - v_{n-1}\|^2 - 2\gamma\lambda_n\|v_n - p^*\|^2 - Q_n\|u_{n+1} - u_n\|^2 + R_n\|u_n - u_{n-1}\|^2, $$ (56)
where $Q_n = \frac{1}{2} - k_2\lambda_n - 2k_1\lambda_n - \alpha_n$ and $R_n = 2\alpha_n$. Substitute:
$$ \Phi_n = \|u_n - p^*\|^2 - \alpha_n\|u_{n-1} - p^*\|^2 + 4k_1\lambda_n\|u_n - v_{n-1}\|^2. $$
From the above substitution, Expression (56) turns into:
$$ \Phi_{n+1} - \Phi_n \le -2\gamma\lambda_n\|v_n - p^*\|^2 - Q_n\|u_{n+1} - u_n\|^2 + R_n\|u_n - u_{n-1}\|^2. $$ (57)
Next, we substitute:
$$ \Psi_n = \Phi_n + R_n\|u_n - u_{n-1}\|^2. $$
By the above expression and (57):
$$ \Psi_{n+1} - \Psi_n \le -(Q_n - R_{n+1})\|u_{n+1} - u_n\|^2 - 2\gamma\lambda_n\|v_n - p^*\|^2. $$ (58)
It follows that:
$$ Q_n - R_{n+1} = \frac{1}{2} - k_2\lambda_n - 2k_1\lambda_n - \alpha_n - 2\alpha_{n+1} \ge \frac{1}{2} - k_2\lambda_n - 2k_1\lambda_n - 3\alpha = \frac{1}{2} - \lambda_n(k_2 + 2k_1) - 3\alpha. $$ (59)
Since $\lambda_n \to 0$, there exists an $N_0 \in \mathbb{N}$ such that:
$$ 0 < \lambda_n < \frac{\frac{1}{2} - 3\alpha}{k_2 + 2k_1}, \quad \forall n \ge N_0. $$
Due to the above condition, we have:
$$ Q_n - R_{n+1} \ge 0, \quad \text{for all } n \ge N_0. $$ (60)
Thus, Expressions (58) and (60) imply that:
$$ \Psi_{n+1} - \Psi_n \le -\delta\|u_{n+1} - u_n\|^2 \le 0, \quad \forall n \ge N_0, \ \text{for some } \delta \ge 0. $$ (61)
The above implies that the sequence $\{\Psi_n\}$ is nonincreasing for $n \ge N_0$. From the value of $\Psi_n$, we have:
$$ \|u_n - p^*\|^2 \le \Psi_n + \alpha_n\|u_{n-1} - p^*\|^2 \le \Psi_{N_0} + \alpha\|u_{n-1} - p^*\|^2 \le \cdots \le \Psi_{N_0}\big(\alpha^{n - N_0} + \cdots + 1\big) + \alpha^{n - N_0}\|u_{N_0} - p^*\|^2 \le \frac{\Psi_{N_0}}{1 - \alpha} + \alpha^{n - N_0}\|u_{N_0} - p^*\|^2. $$ (62)
Similarly, for the value of $\Psi_{n+1}$ with the above expression, we have:
$$ -\Psi_{n+1} \le \alpha_{n+1}\|u_n - p^*\|^2 \le \alpha\|u_n - p^*\|^2 \le \frac{\alpha\Psi_{N_0}}{1 - \alpha} + \alpha^{n - N_0 + 1}\|u_{N_0} - p^*\|^2 \le \frac{\alpha\Psi_{N_0}}{1 - \alpha} + \|u_{N_0} - p^*\|^2. $$ (63)
From Expressions (61) and (63), it follows that:
$$ \delta \sum_{n = N_0}^{k} \|u_{n+1} - u_n\|^2 \le \Psi_{N_0} - \Psi_{k+1} \le \Psi_{N_0} + \frac{\alpha\Psi_{N_0}}{1 - \alpha} + \|u_{N_0} - p^*\|^2 \le \frac{\Psi_{N_0}}{1 - \alpha} + \|u_{N_0} - p^*\|^2, $$ (64)
and letting $k \to \infty$ in the above expression, we have:
$$ \sum_n \|u_{n+1} - u_n\|^2 < +\infty, \quad \text{which implies that} \quad \lim_{n \to \infty} \|u_{n+1} - u_n\| = 0. $$ (65)
From the definition of $t_n$ with Expression (65), we obtain:
$$ \|u_{n+1} - t_n\| \to 0 \quad \text{as } n \to \infty. $$ (66)
Next, Expression (63) implies that:
$$ -\Phi_{n+1} \le \frac{\alpha\Psi_{N_0}}{1 - \alpha} + \|u_{N_0} - p^*\|^2 + R_{n+1}\|u_{n+1} - u_n\|^2. $$ (67)
Further, from Equations (49) and (51), for $n \ge N_0$, we have:
$$ (1 - 2k_2\lambda_n - 4k_1\lambda_n)\big(\|u_{n+1} - v_n\|^2 + \|u_n - v_n\|^2\big) \le \Phi_n - \Phi_{n+1} + \alpha(1 + \alpha)\|u_n - u_{n-1}\|^2 + \|u_{n+1} - u_n\|^2. $$ (68)
Since $\lambda_n \to 0$, there exists $c > 0$ such that $1 - 2k_2\lambda_n - 4k_1\lambda_n \ge c$ for all $n \ge N_0$ (enlarging $N_0$ if necessary). Now, we fix a natural number $k \ge N_0$ and consider the above inequality for $n = N_0, N_0 + 1, \dots, k$. Summing them and using (67), we obtain:
$$ c \sum_{n = N_0}^{k} \big(\|u_{n+1} - v_n\|^2 + \|u_n - v_n\|^2\big) \le \Phi_{N_0} - \Phi_{k+1} + \alpha(1 + \alpha)\sum_{n = N_0}^{k}\|u_n - u_{n-1}\|^2 + \sum_{n = N_0}^{k}\|u_{n+1} - u_n\|^2 \le \Phi_{N_0} + \frac{\alpha\Psi_{N_0}}{1 - \alpha} + \|u_{N_0} - p^*\|^2 + R_{k+1}\|u_{k+1} - u_k\|^2 + \alpha(1 + \alpha)\sum_{n = N_0}^{k}\|u_n - u_{n-1}\|^2 + \sum_{n = N_0}^{k}\|u_{n+1} - u_n\|^2, $$ (69)
and letting $k \to \infty$ in the above expression, we have:
$$ \sum_n \|u_{n+1} - v_n\|^2 < +\infty \quad \text{and} \quad \sum_n \|u_n - v_n\|^2 < +\infty. $$ (70)
We can easily derive the following by using Expressions (65), (66), and (71):
$$ \lim_{n \to \infty} \|u_n - v_n\| = \lim_{n \to \infty} \|u_n - t_n\| = \lim_{n \to \infty} \|v_{n-1} - v_n\| = 0. $$ (72)
Furthermore, Expression (54) with (65), (70), and Lemma 3 implies that:
$$ \lim_{n \to \infty} \|u_n - p^*\| = l, \quad \text{for some } l \ge 0. $$ (73)
By Expression (72), we obtain:
$$ \lim_{n \to \infty} \|t_n - p^*\| = \lim_{n \to \infty} \|v_n - p^*\| = l. $$ (74)
Now, we show that the sequence $\{u_n\}$ converges strongly to $p^*$. From the condition on $\lambda_n$, the following still holds:
$$ 0 < \lambda_n < \frac{1}{2k_2 + 4k_1}, \quad \forall n \ge N_0. $$
It follows from the above condition and Lemma 10 that:
$$ 2\gamma\lambda_n\|v_n - p^*\|^2 \le \|t_n - p^*\|^2 - \|u_{n+1} - p^*\|^2 + 4k_1\lambda_n\|u_n - v_{n-1}\|^2 + \|u_{n+1} - u_n\|^2, \quad \forall n \ge N_0. $$ (75)
From Expressions (51) and (75), we obtain:
$$ 2\gamma\lambda_n\|v_n - p^*\|^2 \le -\|u_{n+1} - p^*\|^2 + (1 + \alpha_n)\|u_n - p^*\|^2 - \alpha_n\|u_{n-1} - p^*\|^2 + \alpha_n(1 + \alpha_n)\|u_n - u_{n-1}\|^2 + 4k_1\lambda_n\|u_n - v_{n-1}\|^2 + \|u_{n+1} - u_n\|^2 \le \big(\|u_n - p^*\|^2 - \|u_{n+1} - p^*\|^2\big) + 2\alpha\|u_n - u_{n-1}\|^2 + \|u_{n+1} - u_n\|^2 + \big(\alpha_n\|u_n - p^*\|^2 - \alpha_{n-1}\|u_{n-1} - p^*\|^2\big) + 4k_1\lambda_n\|u_n - v_{n-1}\|^2. $$ (76)
Summing the above expression for $n = N_0, \dots, k$ and using (65) and (70), this implies that:
$$ \sum_{n = N_0}^{k} 2\gamma\lambda_n\|v_n - p^*\|^2 \le \big(\|u_{N_0} - p^*\|^2 - \|u_{k+1} - p^*\|^2\big) + 2\alpha\sum_{n = N_0}^{k}\|u_n - u_{n-1}\|^2 + \sum_{n = N_0}^{k}\|u_{n+1} - u_n\|^2 + \big(\alpha_k\|u_k - p^*\|^2 - \alpha_{N_0 - 1}\|u_{N_0 - 1} - p^*\|^2\big) + \frac{4k_1}{2k_2 + 4k_1}\sum_{n = N_0}^{k}\|u_n - v_{n-1}\|^2 \le \|u_{N_0} - p^*\|^2 + \alpha\|u_k - p^*\|^2 + 2\alpha\sum_{n = N_0}^{k}\|u_n - u_{n-1}\|^2 + \frac{4k_1}{2k_2 + 4k_1}\sum_{n = N_0}^{k}\|u_n - v_{n-1}\|^2 + \sum_{n = N_0}^{k}\|u_{n+1} - u_n\|^2 \le M, $$ (77)
for some $M \ge 0$. Now, letting $k \to \infty$ in the above expression, we obtain:
$$ \sum_{n = 1}^{\infty} 2\gamma\lambda_n\|v_n - p^*\|^2 < +\infty. $$ (78)
From Lemma 5 and Expression (78), this implies that:
$$ \liminf_{n \to \infty} \|v_n - p^*\| = 0. $$ (79)
Expressions (74) and (79) imply that $l = 0$, and hence $\lim_{n \to \infty} \|u_n - p^*\| = 0$. This completes the proof. □

5. Application to Variational Inequality Problems

Now, we study the applications of our proposed methods to the pseudomonotone and strongly pseudomonotone variational inequality problems. An operator $G : E \to E$ is said to be:
(1)
strongly pseudomonotone on $K$ if $\langle G(u), y - u \rangle \ge 0 \implies \langle G(y), u - y \rangle \le -\gamma\|u - y\|^2$, $\forall u, y \in K$;
(2)
pseudomonotone on $K$ if $\langle G(u), y - u \rangle \ge 0 \implies \langle G(y), u - y \rangle \le 0$, $\forall u, y \in K$;
(3)
$L$-Lipschitz continuous on $K$ if $\|G(u) - G(y)\| \le L\|u - y\|$, $\forall u, y \in K$.
The variational inequality problem is defined as:
Find $p^* \in K$ such that $\langle G(p^*), y - p^* \rangle \ge 0$ for all $y \in K$. (VIP)
Note: Let bifunction f ( u , v ) : = G ( u ) , v u for all u , v K . Then, the equilibrium problem converts into the above variational inequality problem with L = 2 k 1 = 2 k 2 . It follows from u n + 1 in Algorithm 1 and the above definition of bifunction f that:
u n + 1 = arg min y Π n λ n f ( v n , y ) + 1 2 t n y 2 = arg min y Π n λ n G ( v n ) , y v n + 1 2 t n y 2 + λ n 2 2 G ( v n ) 2 λ n 2 2 G ( v n ) 2 = arg min y Π n λ n G ( v n ) , y t n + λ n G ( v n ) , t n v n + 1 2 t n y 2 + λ n 2 2 G ( v n ) 2 λ n 2 2 G ( v n ) 2 = arg min y Π n 1 2 t n λ n G ( v n ) y 2 = P Π n ( t n λ n G ( v n ) ) .
The value of u n + 1 reduces to the following projection:
v n + 1 = P K ( u n + 1 λ n + 1 G ( v n ) ) .
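The reduction above can be checked numerically: for $f(u,v) = \langle G(u), v - u\rangle$, the proximal step over a box is exactly a projected gradient step. The sketch below (a hypothetical 2-D example with our own values of $t$, $G(v_n)$, and $\lambda_n$) compares the closed-form projection with a brute-force search over random feasible points.

```python
import random

# Box K = [0,1] x [0,1]; projection onto K is coordinatewise clipping.
def proj_box(x):
    return [min(max(xi, 0.0), 1.0) for xi in x]

def objective(y, t, g, lam):
    # lam*<g, y - v> + 0.5*||t - y||^2; the <g, v> part is constant in y,
    # so it is dropped when comparing minimizers.
    return lam * sum(gi * yi for gi, yi in zip(g, y)) + \
        0.5 * sum((ti - yi) ** 2 for ti, yi in zip(t, y))

random.seed(1)
t = [0.4, 0.9]               # current point t_n (hypothetical)
g = [1.5, -2.0]              # G(v_n), a hypothetical operator value
lam = 0.3                    # stepsize lambda_n

# Closed form: argmin over K of the prox objective = P_K(t - lam * g).
y_star = proj_box([ti - lam * gi for ti, gi in zip(t, g)])

# Brute force: no random feasible point should do better.
best = objective(y_star, t, g, lam)
for _ in range(20000):
    y = [random.random(), random.random()]
    assert objective(y, t, g, lam) >= best - 1e-12
print("prox step equals projection:", y_star)
```

This works because $\lambda\langle g, y\rangle + \tfrac12\|t - y\|^2 = \tfrac12\|y - (t - \lambda g)\|^2 + \text{const}$, so the constrained minimizer is the projection of $t - \lambda g$.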
Assumption 3.
Let $G : K \to E$ satisfy the following conditions:
(G1)
$G$ is strongly pseudomonotone on $K$, and the solution set $VI(G,K)$ is nonempty;
(G2)
$G$ is pseudomonotone on $K$, and the solution set $VI(G,K)$ is nonempty;
(G3)
$G$ is $L$-Lipschitz continuous on $K$ for some constant $L > 0$;
(G4)
$\limsup_{n\to\infty} \langle G(u_n), y - u_n \rangle \leq \langle G(p), y - p \rangle$ for every $y \in K$ and every sequence $\{u_n\} \subset K$ satisfying $u_n \rightharpoonup p$.
Corollary 1.
Let $G : K \to E$ satisfy conditions (G2), (G3), and (G4) of Assumption 3. Assume that $\{t_n\}$, $\{u_n\}$, and $\{v_n\}$ are the sequences generated in the following way:
(i)
Choose $u_{-1}, v_{-1} \in K$, $\lambda_0 > 0$, and $\mu \in [0, g(\alpha))$. Set:
$$u_0 = P_K\big(u_{-1} - \lambda_0 G(v_{-1})\big) \quad \text{and} \quad v_0 = P_K\big(u_0 - \lambda_0 G(v_{-1})\big).$$
(ii)
Assume that $v_{n-1}, v_n, u_{n-1}, u_n \in K$ and $\lambda_n$ are known for $n \geq 0$. Construct the half space:
$$\Pi_n = \big\{z \in E : \langle u_n - \lambda_n G(v_{n-1}) - v_n, \; z - v_n \rangle \leq 0\big\}.$$
Compute:
$$u_{n+1} = P_{\Pi_n}\big(t_n - \lambda_n G(v_n)\big), \quad \text{where } t_n = u_n + \alpha_n (u_n - u_{n-1}), \qquad v_{n+1} = P_K\big(u_{n+1} - \lambda_{n+1} G(v_n)\big).$$
The stepsize sequence $\lambda_{n+1}$ is updated as follows:
$$\lambda_{n+1} = \begin{cases} \min\Big\{\lambda_n, \; \dfrac{\mu\big(\|v_{n-1} - v_n\|^2 + \|u_{n+1} - v_n\|^2\big)}{2\,\langle G(v_{n-1}) - G(v_n), \; u_{n+1} - v_n \rangle}\Big\} & \text{if } \langle G(v_{n-1}) - G(v_n), \; u_{n+1} - v_n \rangle > 0; \\[4pt] \lambda_n & \text{otherwise}. \end{cases}$$
Then, the sequences $\{t_n\}$, $\{u_n\}$, and $\{v_n\}$ converge weakly to a solution $p^*$ of $VI(G,K)$.
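As a purely illustrative sketch of the iteration in Corollary 1, the code below runs it on a hypothetical strongly monotone operator $G(u) = u - b$ over the box $K = [0,1]^2$, whose unique solution is $p^* = b$ for interior $b$. The operator, box, starting points, and the values $\lambda_0$, $\mu$, and a constant inertial factor $\alpha_n \equiv \alpha$ are all our own choices, not the paper's test problems.

```python
# Illustrative run of the Corollary 1 iteration (a sketch, not the paper's code).
# Hypothetical data: G(u) = u - b on the box K = [0,1]^2, so p* = b.

def dot(x, y):
    return sum(a * c for a, c in zip(x, y))

def sub(x, y):
    return [a - c for a, c in zip(x, y)]

def proj_K(x):  # projection onto the box [0,1]^2
    return [min(max(a, 0.0), 1.0) for a in x]

def proj_half(x, a, v):  # projection onto the half space {z : <a, z - v> <= 0}
    s, na2 = dot(a, sub(x, v)), dot(a, a)
    if na2 == 0.0 or s <= 0.0:
        return list(x)
    return [xi - (s / na2) * ai for xi, ai in zip(x, a)]

b = [0.3, 0.7]
G = lambda u: sub(u, b)            # strongly monotone with L = 1
lam, mu, alpha = 0.4, 0.4, 0.1     # lambda_0, mu, alpha_n (our choices)

v_prev, u_prev = [0.1, 0.9], [0.9, 0.1]                # v_{-1}, u_{-1}
u = proj_K(sub(u_prev, [lam * g for g in G(v_prev)]))  # u_0
v = proj_K(sub(u, [lam * g for g in G(v_prev)]))       # v_0

for n in range(200):
    t = [ui + alpha * (ui - pi) for ui, pi in zip(u, u_prev)]  # inertial term
    a = sub(sub(u, [lam * g for g in G(v_prev)]), v)           # normal of Pi_n
    u_next = proj_half(sub(t, [lam * g for g in G(v)]), a, v)
    den = 2.0 * dot(sub(G(v_prev), G(v)), sub(u_next, v))
    if den > 0.0:                  # stepsize update: no Lipschitz constant used
        num = mu * (dot(sub(v_prev, v), sub(v_prev, v)) +
                    dot(sub(u_next, v), sub(u_next, v)))
        lam = min(lam, num / den)
    v_next = proj_K(sub(u_next, [lam * g for g in G(v)]))
    u_prev, u, v_prev, v = u, u_next, v, v_next

err = max(abs(ui - bi) for ui, bi in zip(u, b))
print("u_200 =", [round(x, 6) for x in u], " error =", err)
assert err < 1e-3
```

Note how the update never evaluates $L$: whenever the denominator is positive, the ratio is bounded below by $\mu/L$ by Lipschitz continuity, so the stepsize stays bounded away from zero.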
Corollary 2.
Let $G : K \to E$ satisfy conditions (G1) and (G3) of Assumption 3. Assume that $\{t_n\}$, $\{u_n\}$, and $\{v_n\}$ are the sequences generated as follows:
(i)
Choose $u_{-1}, v_{-1} \in K$, $\lambda_0 > 0$, and $\alpha_n \in [0, \frac{1}{6})$. Set:
$$u_0 = P_K\big(u_{-1} - \lambda_0 G(v_{-1})\big) \quad \text{and} \quad v_0 = P_K\big(u_0 - \lambda_0 G(v_{-1})\big).$$
(ii)
Assume that $v_{n-1}, v_n, u_{n-1}, u_n \in K$ and $\lambda_n$ are known for $n \geq 0$. Construct the half space:
$$\Pi_n = \big\{z \in E : \langle u_n - \lambda_n G(v_{n-1}) - v_n, \; z - v_n \rangle \leq 0\big\}.$$
Compute:
$$u_{n+1} = P_{\Pi_n}\big(t_n - \lambda_n G(v_n)\big), \quad \text{where } t_n = u_n + \alpha_n (u_n - u_{n-1}), \qquad v_{n+1} = P_K\big(u_{n+1} - \lambda_{n+1} G(v_n)\big).$$
The stepsize sequence $\{\lambda_n\}$ satisfies the following hypotheses:
$$(S1):\ \lim_{n\to\infty} \lambda_n = 0 \qquad \text{and} \qquad (S2):\ \sum_{n=1}^{\infty} \lambda_n = +\infty.$$
Then, the sequences $\{t_n\}$, $\{u_n\}$, and $\{v_n\}$ converge strongly to the solution $p^*$ of $VI(G,K)$.
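Hypotheses (S1) and (S2) together say the stepsize must vanish but not be summable. The harmonic choice $\lambda_n = 1/(n+1)$, also used in the experiments of Section 6.2, satisfies both; a quick numerical check:

```python
# lambda_n = 1/(n+1) satisfies (S1) lim lambda_n = 0 and (S2) sum lambda_n = +inf.
lam = lambda n: 1.0 / (n + 1)

# (S1): the stepsize becomes arbitrarily small.
assert lam(10**6) < 1e-5

# (S2): partial sums grow like log(N), so they eventually pass any fixed level.
partial = sum(lam(n) for n in range(1, 100000))
assert partial > 10.0
print("lambda at n=1e6:", lam(10**6), " partial sum:", round(partial, 2))
```

By contrast, a summable sequence such as $\lambda_n = 1/(n+1)^2$ would violate (S2) and could stall the iteration before reaching the solution.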

6. Numerical Experiments

Now, we discuss two economic models to examine the efficiency of our proposed methods. For the numerical experiments, we implemented our algorithms in MATLAB (MATLAB R2018b) and evaluated them on a PC with an Intel(R) Core(TM) i5-6200 CPU @ 2.30 GHz (up to 2.40 GHz) and 8.00 GB RAM.

6.1. Nash–Cournot Equilibrium Model of Electricity Markets

We considered the equilibrium model of electricity markets [20]. We assumed that there were three companies $i$ $(i = 1, 2, 3)$ generating electricity. Companies 1, 2, and 3 own the generating units $I_1 = \{1\}$, $I_2 = \{2, 3\}$, and $I_3 = \{4, 5, 6\}$, respectively. Let $u_j$ denote the power generated by unit $j$ for $j \in \{1, 2, 3, 4, 5, 6\}$. We also assume that the price $p$ of electricity is given by $p = 378.4 - 2\sum_{j=1}^{6} u_j$. The cost of operating unit $j$ is:
$$c_j(u_j) := \max\big\{c_j^0(u_j), \; c_j^1(u_j)\big\},$$
with
$$c_j^0(u_j) := \frac{\alpha_j}{2} u_j^2 + \beta_j u_j + \gamma_j \quad \text{and} \quad c_j^1(u_j) := \alpha'_j u_j + \frac{\beta'_j}{\beta'_j + 1} (\gamma'_j)^{-1/\beta'_j} (u_j)^{(\beta'_j + 1)/\beta'_j}.$$
The values of $\alpha_j, \beta_j, \gamma_j, \alpha'_j, \beta'_j$, and $\gamma'_j$ are given in Table 1. The profit of firm $i$ is:
$$f_i(u) := p \sum_{j \in I_i} u_j - \sum_{j \in I_i} c_j(u_j) = \Big(378.4 - 2\sum_{l=1}^{6} u_l\Big) \sum_{j \in I_i} u_j - \sum_{j \in I_i} c_j(u_j),$$
where $u = (u_1, \ldots, u_6)^T$ and the feasible set is $K := \{u \in \mathbb{R}^6 : u_j^{\min} \leq u_j \leq u_j^{\max}\}$, with $u_j^{\min}$ and $u_j^{\max}$ given in Table 2. First, we describe the equilibrium bifunction $f$ as:
$$f(u, v) := \sum_{i=1}^{3} \big(\phi_i(u, u) - \phi_i(u, v)\big),$$
where:
$$\phi_i(u, v) := \Big(378.4 - 2\Big(\sum_{j \notin I_i} u_j + \sum_{j \in I_i} v_j\Big)\Big) \sum_{j \in I_i} v_j - \sum_{j \in I_i} c_j(v_j).$$
The model discussed above can then be viewed as the following equilibrium problem:
$$\text{Find } p^* \in K \text{ such that } f(p^*, y) \geq 0, \quad \forall y \in K.$$
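For reference, the bifunction above can be evaluated directly from the data in Table 1. The sketch below implements the costs $c_j$, the functions $\phi_i$, and $f(u, v)$, and checks the basic identity $f(u, u) = 0$; it only evaluates the model (it is not the paper's solver), and the symbols with primes are our reading of the second column group of Table 1.

```python
# Evaluate the Nash-Cournot electricity-market bifunction using Table 1 data.
alpha  = [0.0400, 0.0350, 0.1250, 0.0116, 0.0500, 0.0500]
beta   = [2.00, 1.75, 1.00, 3.25, 3.00, 3.00]
gamma  = [0.00] * 6
alphap = [2.0000, 1.7500, 1.0000, 3.2500, 3.0000, 3.0000]
betap  = [1.0000] * 6
gammap = [25.0000, 28.5714, 8.0000, 86.2069, 20.0000, 20.0000]
units  = [[0], [1, 2], [3, 4, 5]]          # I_1, I_2, I_3 (0-based indices)

def cost(j, x):
    # c_j(x) = max of a quadratic cost and a power-law cost.
    c0 = 0.5 * alpha[j] * x**2 + beta[j] * x + gamma[j]
    c1 = (alphap[j] * x + betap[j] / (betap[j] + 1.0)
          * gammap[j] ** (-1.0 / betap[j])
          * x ** ((betap[j] + 1.0) / betap[j]))
    return max(c0, c1)

def phi(i, u, v):
    others = sum(u[j] for j in range(6) if j not in units[i])
    own = sum(v[j] for j in units[i])
    return (378.4 - 2.0 * (others + own)) * own \
        - sum(cost(j, v[j]) for j in units[i])

def f(u, v):
    return sum(phi(i, u, u) - phi(i, u, v) for i in range(3))

u = [10.0, 10.0, 20.0, 17.0, 8.0, 14.0]
v = [48.0, 48.0, 30.0, 28.0, 18.0, 24.0]
assert abs(f(u, u)) < 1e-9                 # f(u, u) = 0 by construction
print("f(u, v) =", round(f(u, v), 4), " price at u:", 378.4 - 2 * sum(u))
```

The identity $f(u, u) = 0$ holds for any Nikaidô–Isoda-type bifunction of this form, which makes it a convenient sanity check on the implementation.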
For Algorithm 1 (Algo1), we used $u_{-1} = (10, 10, 20, 17, 8, 14)^T$ and $v_{-1} = (48, 48, 30, 28, 18, 24)^T$. Figures 1–4 and Tables 3 and 4 report the numerical results for the error term $D_n = \|u_{n+1} - v_n\|^2 + \|t_n - v_n\|^2$ under different values of $\alpha_n$ and $\lambda_0$.

Algorithm 1 Comparison with Other Existing Methods

(i)
For the extragradient method (EgM) [33], $u_0 = (48, 48, 30, 28, 18, 24)^T$ and $D_n = \|u_n - v_n\|^2$.
(ii)
For Algorithm 1 (EgIA) of [41], $u_0 = (48, 48, 30, 28, 18, 24)^T$ and $D_n = \|u_n - v_n\|^2$.
(iii)
For the two-step proximal algorithm (TSPA) [34], $D_n = \|u_{n+1} - v_n\|^2 + \|u_n - v_n\|^2$, $u_0 = (10, 10, 20, 17, 8, 14)^T$, and $v_0 = (48, 48, 30, 28, 18, 24)^T$.
(iv)
For Algorithm 2 (ETSPA) of [41], $u_0 = (10, 10, 20, 17, 8, 14)^T$, $v_0 = (48, 48, 30, 28, 18, 24)^T$, $v_{-1} = (10, 20, 30, 10, 0, 1)^T$, and $D_n = \|u_{n+1} - v_n\|^2 + \|u_n - v_n\|^2$.
(v)
For our Algorithm 1 (Algo1), $u_{-1} = (10, 10, 20, 17, 8, 14)^T$, $v_{-1} = (48, 48, 30, 28, 18, 24)^T$, and the error term $D_n = \|u_{n+1} - v_n\|^2 + \|t_n - v_n\|^2$.
Figures 5 and 6 and Table 5 report the numerical results for this comparison.

6.2. Nash–Cournot Oligopolistic Equilibrium Model

We consider $n$ companies that produce the same commodity. Let $u_i$, the $i$-th entry of the vector $u$, denote the amount of the commodity produced by company $i$. The price depends on the total production $S = \sum_{i=1}^{n} u_i$ through $P_i(S) = \phi_i - \psi_i S$, where $\phi_i > 0$ and $\psi_i > 0$; the price function is thus affine and decreasing. The profit of company $i$ is $F_i(u) = P_i(S) u_i - t_i(u_i)$, where $t_i(u_i)$ denotes the tax and production cost for the production level $u_i$. Let $K := K_1 \times K_2 \times \cdots \times K_n$ with $K_i = [u_i^{\min}, u_i^{\max}]$ be the set of feasible actions or strategies of the companies. Each company seeks to maximize its profit by choosing its production level on the premise that the production of the other companies is an input parameter. This approach follows the well-known Nash equilibrium concept: a point $p^* \in K = K_1 \times K_2 \times \cdots \times K_n$ is an equilibrium point if:
$$F_i(p^*) \geq F_i(p^*[u_i]), \quad \forall u_i \in K_i, \; i = 1, 2, \ldots, n,$$
where the vector $p^*[u_i]$ denotes $p^*$ with its $i$-th entry $u_i^*$ replaced by $u_i$. Next, set $f(u, v) := \varphi(u, v) - \varphi(u, u)$ with $\varphi(u, v) := -\sum_{i=1}^{n} F_i(u[v_i])$; the problem of finding a Nash equilibrium point of the model can then be written as:
$$\text{Find } p^* \in K \text{ such that } f(p^*, v) \geq 0, \quad \forall v \in K.$$
It follows from [33] that the bifunction $f$ takes the form:
$$f(u, v) = \langle Mu + Nv + r, \; v - u \rangle,$$
where:
$$N = \begin{pmatrix} 1.6 & 1 & 0 & 0 & 0 \\ 1 & 1.6 & 0 & 0 & 0 \\ 0 & 0 & 1.5 & 0 & 0 \\ 0 & 0 & 1 & 1.5 & 0 \\ 0 & 0 & 0 & 0 & 2 \end{pmatrix} \quad \text{and} \quad M = \begin{pmatrix} 3.1 & 2 & 0 & 0 & 0 \\ 3 & 3.6 & 0 & 0 & 0 \\ 0 & 0 & 3.5 & 2 & 0 \\ 0 & 0 & 2 & 3.3 & 0 \\ 0 & 0 & 0 & 0 & 3 \end{pmatrix},$$
with $r = (1, 2, 1, 2, 1)^T$ and $K = \{u \in \mathbb{R}^5 : -2 \leq u_i \leq 5\}$.
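To illustrate what one proximal step of the two-step method does on this bifunction, the sketch below solves $\min_{y \in K}\, \lambda f(v, y) + \tfrac12\|t - y\|^2$, with $f(u, v) = \langle Mu + Nv + r, v - u\rangle$ and the box constraints read as $-2 \leq u_i \leq 5$, by projected gradient descent. The inner solver, the points $v$ and $t$, and the stepsizes are our own illustrative choices, not the paper's implementation.

```python
# One proximal step of the two-step method on f(u,v) = <Mu + Nv + r, v - u>,
# solved approximately by projected gradient descent over the box (a sketch).
M = [[3.1, 2, 0, 0, 0], [3, 3.6, 0, 0, 0], [0, 0, 3.5, 2, 0],
     [0, 0, 2, 3.3, 0], [0, 0, 0, 0, 3]]
N = [[1.6, 1, 0, 0, 0], [1, 1.6, 0, 0, 0], [0, 0, 1.5, 0, 0],
     [0, 0, 1, 1.5, 0], [0, 0, 0, 0, 2]]
r = [1.0, 2.0, 1.0, 2.0, 1.0]
lo, hi = -2.0, 5.0                         # the box K

def mat_vec(A, x):
    return [sum(A[i][j] * x[j] for j in range(5)) for i in range(5)]

NT = [[N[j][i] for j in range(5)] for i in range(5)]  # N transposed

def f(u, v):
    w = [m + n + ri for m, n, ri in zip(mat_vec(M, u), mat_vec(N, v), r)]
    return sum(wi * (vi - ui) for wi, vi, ui in zip(w, v, u))

def prox_objective(y, v, t, lam):
    return lam * f(v, y) + 0.5 * sum((ti - yi) ** 2 for ti, yi in zip(t, y))

def prox_step(v, t, lam, iters=2000, step=0.05):
    y = list(t)
    q = [m + ri for m, ri in zip(mat_vec(M, v), r)]   # constant part of grad
    Ntv = mat_vec(NT, v)
    for _ in range(iters):
        sym = [a + c for a, c in zip(mat_vec(N, y), mat_vec(NT, y))]
        grad = [lam * (qi + si - nv) + (yi - ti)      # grad of prox objective
                for qi, si, nv, yi, ti in zip(q, sym, Ntv, y, t)]
        y = [min(max(yi - step * g, lo), hi) for yi, g in zip(y, grad)]
    return y

v = [1.0, 3.0, 1.0, 1.0, 2.0]              # hypothetical v_n
t = [1.0, 0.0, 1.0, 0.0, 2.0]              # hypothetical t_n
y = prox_step(v, t, lam=0.2)
assert all(lo <= yi <= hi for yi in y)
assert prox_objective(y, v, t, 0.2) <= prox_objective(t, v, t, 0.2) + 1e-9
print("prox step:", [round(yi, 4) for yi in y])
```

The subproblem is strongly convex here because its Hessian is $\lambda(N + N^T) + I$, so a small fixed stepsize suffices for the inner loop to decrease the objective monotonically.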

Algorithm 2 Comparison with Other Existing Methods

  • For the extragradient method (EgMSP) [49], $u_0 = (1, 3, 1, 1, 2)^T$ and $D_n = \|u_{n+1} - u_n\|$.
  • For the method (EgIASP) of [42], $u_0 = (1, 3, 1, 1, 2)^T$, $v_0 = (1, 0, 1, 0, 2)^T$, and $D_n = \|u_{n+1} - u_n\|$.
  • For our Algorithm 2 (Algo2), $u_{-1} = (1, 3, 1, 1, 2)^T$, $v_{-1} = (1, 0, 1, 0, 2)^T$, and $D_n = \|u_{n+1} - u_n\|$.
Figures 7–10 and Table 6 report the numerical results for this comparison.

7. Conclusions

In this study, two proximal-like algorithms were proposed to solve equilibrium problems involving pseudomonotone and strongly pseudomonotone bifunctions that satisfy a Lipschitz-type condition. We used a new stepsize rule that does not depend on knowledge of the Lipschitz-type constants. The numerical experiments showed that the methods with an inertial term performed better than those without one.

Author Contributions

Conceptualization, H.u.R. and P.K.; Methodology, H.u.R., P.K., I.K.A., M.S. and Z.S.; Investigation, H.u.R., P.K., I.K.A., M.S. and Z.S.; Writing–original draft preparation, H.u.R., P.K. and Z.S.; Writing–review and editing, H.u.R., P.K., I.K.A., M.S. and Z.S.; Visualization, H.u.R., P.K., I.K.A., M.S. and Z.S.; Software, I.K.A., M.S. and Z.S.; Supervision, P.K. and I.K.A.; Funding Acquisition, P.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research work was financially supported by King Mongkut’s University of Technology Thonburi through the ‘KMUTT 55th Anniversary Commemorative Fund’. Moreover, this project was supported by the Theoretical and Computational Science (TaCS) Center under the Computational and Applied Science for Smart Innovation Research Cluster (CLASSIC), Faculty of Science, KMUTT. In particular, Habib ur Rehman was supported by the Petchra Pra Jom Klao Doctoral Scholarship for the Ph.D. Program at KMUTT [grant number 39/2560].

Acknowledgments

The first author would like to thank the “Petchra Pra Jom Klao Ph.D. Research Scholarship from King Mongkut’s University of Technology Thonburi”. We are very grateful to the editor and the anonymous referees for their valuable and useful comments, which helped improve the quality of this work.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Blum, E.; Oettli, W. From optimization and variational inequalities to equilibrium problems. Math. Student 1994, 63, 123–145.
  2. Long, G.; Liu, S.; Xu, G.; Wong, S.W.; Chen, H.; Xiao, B. A Perforation-Erosion Model for Hydraulic-Fracturing Applications. SPE Prod. Oper. 2018, 33, 770–783.
  3. Xiao, B.; Wang, W.; Zhang, X.; Long, G.; Fan, J.; Chen, H.; Deng, L. A novel fractal solution for permeability and Kozeny–Carman constant of fibrous porous media made up of solid particles and porous fibers. Powder Technol. 2019, 349, 92–98.
  4. Xiao, B.; Zhang, X.; Jiang, G.; Long, G.; Wang, W.; Zhang, Y.; Liu, G. Kozeny–Carman constant for gas flow through fibrous porous media by fractal-Monte Carlo simulations. Fractals 2019, 27, 1950062.
  5. Arrow, K.J.; Debreu, G. Existence of an Equilibrium for a Competitive Economy. Econometrica 1954, 22, 265–290.
  6. Cournot, A.A. Recherches sur les Principes Mathématiques de la Théorie des Richesses; L. Hachette: Paris, France, 1838.
  7. Wernecke, S.J. Maximum Entropy Image Reconstruction. IEEE Trans. Comput. 1977, 26, 351–364.
  8. Kundur, D.; Hatzinakos, D. Blind image deconvolution. IEEE Signal Process. Mag. 1996, 13, 43–64.
  9. Gharehkhani, S.; Sadeghinezhad, E.; Kazi, S.N.; Yarmand, H.; Badarudin, A.; Safaei, M.R.; Zubir, M.N.M. Basic effects of pulp refining on fiber properties—A review. Carbohydr. Polym. 2015, 115, 785–803.
  10. Nagurney, A. Network Economics: A Variational Inequality Approach; Advances in Computational Economics; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1993.
  11. Patriksson, M. The Traffic Assignment Problem: Models and Methods; Courier Dover Publications: Chicago, IL, USA, 2015.
  12. Nash, J. Non-Cooperative Games. Ann. Math. 1951, 54, 286–295.
  13. Fan, K. A Minimax Inequality and Applications. In Inequalities III; Shisha, O., Ed.; Academic Press: New York, NY, USA, 1972.
  14. Nikaidô, H.; Isoda, K. Note on non-cooperative convex games. Pac. J. Math. 1955, 5, 807–815.
  15. Gwinner, J. On the penalty method for constrained variational inequalities. Optim. Theory Algorithms 1981, 86, 197–211.
  16. Giannessi, F.; Maugeri, A.; Pardalos, P.M. Equilibrium Problems: Nonsmooth Optimization and Variational Inequality Models; Springer Science & Business Media: Berlin, Germany, 2006; Volume 58.
  17. Konnov, I. Application of the Proximal Point Method to Nonmonotone Equilibrium Problems. J. Optim. Theory Appl. 2003, 119, 317–333.
  18. Moudafi, A. Proximal point algorithm extended to equilibrium problems. J. Nat. Geom. 1999, 15, 91–100.
  19. Muu, L.D.; Quoc, T.D. Regularization Algorithms for Solving Monotone Ky Fan Inequalities with Application to a Nash–Cournot Equilibrium Model. J. Optim. Theory Appl. 2009, 142, 185–204.
  20. Quoc, T.D.; Anh, P.N.; Muu, L.D. Dual extragradient algorithms extended to equilibrium problems. J. Glob. Optim. 2011, 52, 139–159.
  21. Takahashi, S.; Takahashi, W. Viscosity approximation methods for equilibrium problems and fixed point problems in Hilbert spaces. J. Math. Anal. Appl. 2007, 331, 506–515.
  22. ur Rehman, H.; Kumam, P.; Abubakar, A.B.; Cho, Y.J. The extragradient algorithm with inertial effects extended to equilibrium problems. Comput. Appl. Math. 2020, 39.
  23. ur Rehman, H.; Kumam, P.; Cho, Y.J.; Yordsorn, P. Weak convergence of explicit extragradient algorithms for solving equilibrium problems. J. Inequal. Appl. 2019, 2019.
  24. Santos, P.; Scheimberg, S. An inexact subgradient algorithm for equilibrium problems. Comput. Appl. Math. 2011, 30, 91–107.
  25. ur Rehman, H.; Kumam, P.; Cho, Y.J.; Suleiman, Y.I.; Kumam, W. Modified Popov’s explicit iterative algorithms for solving pseudomonotone equilibrium problems. Optim. Methods Softw. 2020, 1–32.
  26. Reich, S.; Sabach, S. Three Strong Convergence Theorems Regarding Iterative Methods for Solving Equilibrium Problems in Reflexive Banach Spaces. Contemp. Math. 2012.
  27. Argyros, I.K.; Hilout, S. Computational Methods in Nonlinear Analysis: Efficient Algorithms, Fixed Point Theory and Applications; World Scientific: Singapore, 2013.
  28. Akbari, O.A.; Safaei, M.R.; Goodarzi, M.; Akbar, N.S.; Zarringhalam, M.; Shabani, G.A.S.; Dahari, M. A modified two-phase mixture model of nanofluid flow and heat transfer in a 3-D curved microtube. Adv. Powder Technol. 2016, 27, 2175–2185.
  29. Karimipour, A.; Nezhad, A.H.; D’Orazio, A.; Esfe, M.H.; Safaei, M.R.; Shirani, E. Simulation of copper–water nanofluid in a microchannel in slip flow regime using the lattice Boltzmann method. Eur. J. Mech. B Fluids 2015, 49, 89–99.
  30. Togun, H.; Safaei, M.; Sadri, R.; Kazi, S.; Badarudin, A.; Hooman, K.; Sadeghinezhad, E. Numerical simulation of laminar to turbulent nanofluid flow and heat transfer over a backward-facing step. Appl. Math. Comput. 2014, 239, 153–170.
  31. Korpelevich, G. The extragradient method for finding saddle points and other problems. Matecon 1976, 12, 747–756.
  32. Censor, Y.; Gibali, A.; Reich, S. The Subgradient Extragradient Method for Solving Variational Inequalities in Hilbert Space. J. Optim. Theory Appl. 2010, 148, 318–335.
  33. Tran, D.Q.; Dung, M.L.; Nguyen, V.H. Extragradient algorithms extended to equilibrium problems. Optimization 2008, 57, 749–776.
  34. Lyashko, S.I.; Semenov, V.V. A New Two-Step Proximal Algorithm of Solving the Problem of Equilibrium Programming. In Optimization and Its Applications in Control and Data Sciences; Springer International Publishing: Berlin, Germany, 2016; pp. 315–325.
  35. Popov, L.D. A modification of the Arrow-Hurwicz method for search of saddle points. Math. Notes Acad. Sci. USSR 1980, 28, 845–848.
  36. Polyak, B.T. Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 1964, 4, 1–17.
  37. Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 2, 183–202.
  38. ur Rehman, H.; Kumam, P.; Kumam, W.; Shutaywi, M.; Jirakitpuwapat, W. The Inertial Sub-Gradient Extra-Gradient Method for a Class of Pseudo-Monotone Equilibrium Problems. Symmetry 2020, 12, 463.
  39. ur Rehman, H.; Kumam, P.; Argyros, I.K.; Deebani, W.; Kumam, W. Inertial Extra-Gradient Method for Solving a Family of Strongly Pseudomonotone Equilibrium Problems in Real Hilbert Spaces with Application in Variational Inequality Problem. Symmetry 2020, 12, 503.
  40. ur Rehman, H.; Kumam, P.; Argyros, I.K.; Alreshidi, N.A.; Kumam, W.; Jirakitpuwapat, W. A Self-Adaptive Extra-Gradient Methods for a Family of Pseudomonotone Equilibrium Programming with Application in Different Classes of Variational Inequality Problems. Symmetry 2020, 12, 523.
  41. Van Hieu, D.; Quy, P.K. Explicit iterative algorithms for solving equilibrium problems. Calcolo 2019, 56, 11.
  42. Van Hieu, D. Convergence analysis of a new algorithm for strongly pseudomonotone equilibrium problems. Numer. Algorithms 2018, 77, 983–1001.
  43. Bianchi, M.; Schaible, S. Generalized monotone bifunctions and equilibrium problems. J. Optim. Theory Appl. 1996, 90, 31–43.
  44. Tiel, J.V. Convex Analysis; John Wiley: Hoboken, NJ, USA, 1984.
  45. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: New York, NY, USA, 2011; Volume 408.
  46. Alvarez, F.; Attouch, H. An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set Valued Anal. 2001, 9, 3–11.
  47. Opial, Z. Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 1967, 73, 591–598.
  48. Ofoedu, E. Strong convergence theorem for uniformly L-Lipschitzian asymptotically pseudocontractive mapping in real Banach space. J. Math. Anal. Appl. 2006, 321, 722–728.
  49. Hieu, D.V. New extragradient method for a class of equilibrium problems in Hilbert spaces. Appl. Anal. 2018, 97, 811–824.
Figure 1. Section 6.1: Algorithm 1’s (Algo1) behavior with respect to different values of $\alpha_n$.
Figure 2. Section 6.1: Algorithm 1’s behavior with respect to different values of $\alpha_n$.
Figure 3. Section 6.1: Algorithm 1’s behavior with respect to different values of $\lambda_0$.
Figure 4. Section 6.1: Algorithm 1’s behavior with respect to different values of $\lambda_0$.
Figure 5. Section 6.1: Algorithm 1 comparison with existing methods.
Figure 6. Section 6.1: Algorithm 1 comparison with existing methods.
Figure 7. Section 6.2: Algorithm 2 comparison with existing methods.
Figure 8. Section 6.2: Algorithm 2 comparison with existing methods.
Figure 9. Section 6.2: Algorithm 2 comparison with existing methods.
Figure 10. Section 6.2: Algorithm 2 comparison with existing methods.
Table 1. The different values of the constants.

Unit $j$ | $\alpha_j$ | $\beta_j$ | $\gamma_j$ | $\alpha'_j$ | $\beta'_j$ | $\gamma'_j$
1 | 0.0400 | 2.00 | 0.00 | 2.0000 | 1.0000 | 25.0000
2 | 0.0350 | 1.75 | 0.00 | 1.7500 | 1.0000 | 28.5714
3 | 0.1250 | 1.00 | 0.00 | 1.0000 | 1.0000 | 8.0000
4 | 0.0116 | 3.25 | 0.00 | 3.2500 | 1.0000 | 86.2069
5 | 0.0500 | 3.00 | 0.00 | 3.0000 | 1.0000 | 20.0000
6 | 0.0500 | 3.00 | 0.00 | 3.0000 | 1.0000 | 20.0000
Table 2. The constraint set values.

$j$ | 1 | 2 | 3 | 4 | 5 | 6
$u_j^{\min}$ | 0 | 0 | 0 | 0 | 0 | 0
$u_j^{\max}$ | 80 | 80 | 50 | 55 | 30 | 40
Table 3. Numerical results for Figure 1 and Figure 2.

Algo. name | $\alpha_n$ | $\mu$ | $\lambda_0$ | $u_n$ | Iter. | Time (s) | Tolerance
Algo1 | 0.16 | 0.012 | 50 | $(46.0474, 28.9092, 18.4850, 20.4036, 12.5084, 14.1096)^T$ | 614 | 15.511285 | $\epsilon = 10^{-5}$
Algo1 | 0.11 | 0.012 | 50 | $(46.0568, 28.4153, 18.9677, 20.2051, 12.6083, 14.2082)^T$ | 659 | 17.624634 | $\epsilon = 10^{-5}$
Algo1 | 0.06 | 0.012 | 50 | $(46.0659, 27.9750, 19.3977, 20.0281, 12.6974, 14.2959)^T$ | 704 | 17.143282 | $\epsilon = 10^{-5}$
Algo1 | 0.01 | 0.012 | 50 | $(46.0717, 27.5800, 19.7846, 19.8688, 12.7777, 14.3753)^T$ | 748 | 18.846679 | $\epsilon = 10^{-5}$
Algo1 | 0.001 | 0.012 | 50 | $(46.0730, 27.5132, 19.8449, 19.8418, 12.7913, 14.3886)^T$ | 756 | 19.411861 | $\epsilon = 10^{-5}$
Table 4. Numerical results for Figure 3 and Figure 4.

Algo. name | $\alpha_n$ | $\mu$ | $\lambda_0$ | $u_n$ | Iter. | Time (s) | Tolerance
Algo1 | 0.16 | 0.012 | 0.1 | $(46.4147, 20.5037, 26.6003, 21.2012, 8.8592, 16.9030)^T$ | 756 | 23.649384 | $\epsilon = 10^{-5}$
Algo1 | 0.16 | 0.012 | 1 | $(46.3512, 21.0528, 26.0886, 21.1505, 9.0529, 16.7769)^T$ | 698 | 19.827807 | $\epsilon = 10^{-5}$
Algo1 | 0.16 | 0.012 | 5 | $(46.2222, 23.2606, 23.9720, 21.0243, 9.8077, 16.1749)^T$ | 647 | 18.198626 | $\epsilon = 10^{-5}$
Algo1 | 0.16 | 0.012 | 10 | $(46.1499, 24.9832, 22.3073, 20.9005, 10.4870, 15.6306)^T$ | 630 | 17.335370 | $\epsilon = 10^{-5}$
Algo1 | 0.16 | 0.012 | 50 | $(46.0474, 28.9092, 18.4850, 20.4036, 12.5084, 14.1096)^T$ | 614 | 16.943758 | $\epsilon = 10^{-5}$
Table 5. Numerical results for Figure 5 and Figure 6. EgM, extragradient method; TSPA, two-step proximal algorithm.

Algo. name | $\alpha_n$ | $\mu$ | $\lambda_0$ | $u_n$ | Iter. | Time (s) | Tolerance
EgM | - | - | 0.1 | $(46.6523, 32.1467, 15.0011, 25.1431, 10.8357, 10.8357)^T$ | 3033 | 151.699125 | $\epsilon = 10^{-5}$
EgIA | - | 0.012 | 0.1 | $(46.6523, 32.1467, 15.0011, 25.1430, 10.8358, 10.8358)^T$ | 3025 | 151.118686 | $\epsilon = 10^{-5}$
TSPA | - | - | 0.1 | $(46.6523, 32.1467, 15.0011, 25.1433, 10.8356, 10.8356)^T$ | 3065 | 171.432413 | $\epsilon = 10^{-5}$
ETSPA | - | 0.012 | 0.1 | $(46.3995, 20.4626, 26.6506, 19.9235, 10.7905, 16.2497)^T$ | 899 | 27.432681 | $\epsilon = 10^{-5}$
Algo1 | 0.12 | 0.012 | 0.1 | $(46.4110, 20.4926, 26.6137, 20.8384, 9.4075, 16.7174)^T$ | 792 | 19.730552 | $\epsilon = 10^{-5}$
Table 6. Numerical results for Figure 7, Figure 8, Figure 9 and Figure 10.

Algo. name | $\alpha_n$ | $\lambda_n$ | $u_n$ | Iter. | Time (s) | Tolerance
EgMSP | - | $1/\log(n+3)^5$ | $(0.7721, 0.8680, 0.6003, 0.7293, 0.2312)^T$ | 207 | 4.835206 | $\epsilon = 10^{-4}$
EgIASP | - | $1/\log(n+3)^5$ | $(0.8155, 0.8829, 0.6237, 0.7667, 0.2182)^T$ | 172 | 4.652686 | $\epsilon = 10^{-4}$
Algo2 | 0.12 | $1/\log(n+3)^5$ | $(0.7590, 0.8216, 0.6333, 0.7873, 0.1945)^T$ | 122 | 3.384679 | $\epsilon = 10^{-4}$
EgMSP | - | $1/(n+1)$ | $(0.7256, 0.8033, 0.7197, 0.8664, 0.2000)^T$ | 102 | 2.427858 | $\epsilon = 10^{-4}$
EgIASP | - | $1/(n+1)$ | $(0.7256, 0.8033, 0.7198, 0.8664, 0.2000)^T$ | 88 | 2.239737 | $\epsilon = 10^{-4}$
Algo2 | 0.12 | $1/(n+1)$ | $(0.7255, 0.8032, 0.7198, 0.8665, 0.2000)^T$ | 54 | 1.841815 | $\epsilon = 10^{-4}$

Rehman, H.u.; Kumam, P.; Argyros, I.K.; Shutaywi, M.; Shah, Z. Optimization Based Methods for Solving the Equilibrium Problems with Applications in Variational Inequality Problems and Solution of Nash Equilibrium Models. Mathematics 2020, 8, 822. https://doi.org/10.3390/math8050822
