Article

Relaxed Projection Methods with Self-Adaptive Step Size for Solving Variational Inequality and Fixed Point Problems for an Infinite Family of Multivalued Relatively Nonexpansive Mappings in Banach Spaces

by Safeer Hussain Khan 1,*, Timilehin Opeyemi Alakoya 2 and Oluwatosin Temitope Mewomo 2

1 Department of Mathematics, Statistics and Physics, Qatar University, Doha 2713, Qatar
2 School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Durban 4001, South Africa
* Author to whom correspondence should be addressed.
Math. Comput. Appl. 2020, 25(3), 54; https://0-doi-org.brum.beds.ac.uk/10.3390/mca25030054
Submission received: 11 July 2020 / Revised: 28 July 2020 / Accepted: 31 July 2020 / Published: 24 August 2020
(This article belongs to the Section Natural Sciences)

Abstract:
In each iteration, the projection methods require computing at least one projection onto the closed convex set. However, projections onto a general closed convex set are not easily executed, a fact that might affect the efficiency and applicability of the projection methods. To overcome this drawback, we propose two iterative methods with self-adaptive step size that combine the Halpern method with a relaxed projection method for approximating a common solution of variational inequality and fixed point problems for an infinite family of multivalued relatively nonexpansive mappings in the setting of Banach spaces. The core of our algorithms is to replace every projection onto the closed convex set with a projection onto a half-space, which guarantees the easy implementation of our proposed methods. Moreover, the step size of each algorithm is self-adaptive. We prove strong convergence theorems without the knowledge of the Lipschitz constant of the monotone operator, and we apply our results to finding a common solution of constrained convex minimization and fixed point problems in Banach spaces. Finally, we present some numerical examples in order to demonstrate the efficiency of our algorithms in comparison with some recent iterative methods.

1. Introduction

Let $E$ be a real Banach space with norm $\|\cdot\|$ and let $E^*$ be the dual of $E$. For $x \in E$ and $f \in E^*$, let $\langle x, f \rangle$ be the value of $f$ at $x$. Suppose that $C$ is a nonempty, closed, and convex subset of $E$. The Variational Inequality Problem (VIP) associated with $C$ and $A$ is formulated as follows: find a point $x^* \in C$ such that
$$\langle x - x^*, Ax^* \rangle \geq 0, \quad \forall x \in C, \qquad (1)$$
where $A : E \to E^*$ is a single-valued mapping. We denote the solution set of VIP (1) by $VI(C, A)$.
Let $C$ be a nonempty, closed, and convex subset of a real Hilbert space $H$. A mapping $P_C$ is said to be the metric projection of $H$ onto $C$ if, for all $x \in H$, there exists a unique nearest point in $C$, denoted by $P_Cx$, such that
$$\|x - P_Cx\| \leq \|x - y\|, \quad \forall y \in C.$$
Variational inequality was first introduced independently by Fichera [1] and Stampacchia [2]. The VIP is a useful mathematical model that unifies many important concepts in applied mathematics, such as necessary optimality conditions, complementarity problems, network equilibrium problems, and systems of nonlinear equations (see [3,4,5,6,7]). Many authors have proposed and analysed several iterative algorithms for solving the VIP (1) and related optimization problems, see [8,9,10,11,12,13,14,15,16,17,18,19,20,21] and references therein. One of the most famous of these methods is the extragradient method (EgM) proposed by Korpelevich [22], which is presented in Algorithm 1, as follows:
Algorithm 1: Extragradient Method (EgM)
$$x_0 \in C, \quad y_n = P_C(x_n - \lambda Ax_n), \quad x_{n+1} = P_C(x_n - \lambda Ay_n),$$
where $C \subseteq \mathbb{R}^n$ and $A : C \to \mathbb{R}^n$ is a monotone and $L$-Lipschitz continuous operator with $\lambda \in (0, \frac{1}{L})$. If the solution set $VI(C, A)$ is nonempty, then the sequence $\{x_n\}$ generated by the EgM converges to an element of $VI(C, A)$. The EgM was further extended to infinite dimensional spaces by many authors; see, for instance, [11,23,24]. Observe that the EgM involves two projections onto the closed convex set $C$ and two evaluations of $A$ per iteration. Computing the projection onto an arbitrary closed convex set is a difficult task, a drawback that may affect the efficiency of the EgM, as mentioned in [25]. Hence, a major improvement on the EgM is to minimize the number of evaluations of $P_C$ per iteration. Censor et al. [25] initiated an attempt in this direction, modifying the EgM by replacing the second projection with a projection onto a half-space. This new method involves only one projection onto $C$ and is called the subgradient extragradient method (SEgM). The SEgM is given in Algorithm 2, as follows:
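To make the iteration concrete, the following is a minimal Python sketch of the EgM for a box constraint in $\mathbb{R}^n$, where $P_C$ has the closed form of coordinate-wise clipping. The operator, bounds, step size, and tolerance below are illustrative assumptions, not part of the original method.

```python
import numpy as np

def project_box(x, lo=-1.0, hi=1.0):
    # Euclidean projection onto the box C = [lo, hi]^n (closed form).
    return np.clip(x, lo, hi)

def extragradient(A, P_C, x0, lam, tol=1e-9, max_iter=10000):
    # Korpelevich's extragradient method (Algorithm 1):
    # A must be monotone and L-Lipschitz, and lam must lie in (0, 1/L).
    x = x0.astype(float)
    for _ in range(max_iter):
        y = P_C(x - lam * A(x))        # predictor step
        x_new = P_C(x - lam * A(y))    # corrector step
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example with an affine monotone operator A(x) = M x + q, M positive definite
# (eigenvalues 1 and 3, so L = 3 and lam = 0.2 < 1/L is admissible).
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([1.0, -1.0])
sol = extragradient(lambda x: M @ x + q, project_box, np.zeros(2), lam=0.2)
```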
Algorithm 2: Subgradient Extragradient Method (SEgM)
$$x_0 \in H, \quad y_n = P_C(x_n - \lambda Ax_n), \quad Q_n = \{z \in H : \langle x_n - \lambda Ax_n - y_n, z - y_n \rangle \leq 0\}, \quad x_{n+1} = P_{Q_n}(x_n - \lambda Ay_n),$$
where $H$ is a Hilbert space.
Censor et al. [25] showed that if the solution set $VI(C, A)$ is nonempty, the sequence $\{x_n\}$ generated by the SEgM converges weakly to an element $p \in VI(C, A)$, where $p = \lim_{n\to\infty} P_{VI(C,A)}(x_n)$.
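The computational gain of the SEgM lies in the second step: projecting onto the half-space $Q_n$ admits a closed form, in contrast to a general $C$. A sketch of this projection (Python, identifying $H$ with $\mathbb{R}^n$; the function name is ours):

```python
import numpy as np

def project_halfspace(u, a, y):
    # Projection of u onto Q = {z : <a, z - y> <= 0}:
    # a single closed-form correction, no inner optimization needed.
    s = a @ (u - y)
    if s <= 0:                      # u already lies in Q
        return u
    return u - (s / (a @ a)) * a    # shift u back onto the boundary of Q
```

In Algorithm 2 one takes $a = x_n - \lambda Ax_n - y_n$, so the last step reads `x_next = project_halfspace(x_n - lam * A(y_n), a, y_n)`.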
Bauschke and Combettes [26] pointed out that, in solving optimization problems, strong convergence of iterative schemes is more desirable than weak convergence. Hence the need to develop algorithms that generate strongly convergent sequences.
Let $S : E \to E$ be a nonlinear mapping. A point $x^* \in E$ is called a fixed point of $S$ if $Sx^* = x^*$. We denote by $F(S)$ the set of all fixed points of $S$, i.e.,
$$F(S) = \{x^* \in E : Sx^* = x^*\}.$$
If $S$ is a multivalued mapping, i.e., $S : E \to 2^E$, then $x^* \in E$ is called a fixed point of $S$ if
$$x^* \in Sx^*.$$
Here, we consider the fixed point problem for an infinite family of multivalued relatively nonexpansive mappings. The fixed point theory for multivalued mappings can be utilized in various areas, such as game theory, control theory, mathematical economics, and differential inclusion (see [13,27,28] and the references therein). The existence of common fixed points for a family of mappings has been considered by many authors (see [13,28,29,30] and the references therein). Many optimization problems can be formulated as finding a point in the intersection of the fixed point sets of a family of nonlinear mappings. For instance, the well-known convex feasibility problem reduces to finding a point in the intersection of the fixed point sets of a family of nonexpansive mappings (see [31,32]). The problem of finding an optimal point that minimizes a given cost function over the common set of fixed points of a family of nonlinear mappings is of wide interdisciplinary interest and practical importance (see [33,34]). A simple algorithmic solution to the problem of minimizing a quadratic function over the common set of fixed points of a family of nonexpansive mappings is of extreme value in many applications including set theoretic signal estimation (see [34,35]).
In this article, we are interested in studying the problem of finding a common solution of both the VIP (1) and the common fixed point problem for multivalued mappings. The motivation for studying such problems lies in their potential application to mathematical models whose constraints can be expressed as fixed point problems and VIPs. This happens, in particular, in practical problems such as signal processing, network resource allocation, and image recovery. One such scenario is the network bandwidth allocation problem for two services in heterogeneous wireless access networks, in which the bandwidths of the services are mathematically related (see, for instance, [36,37,38,39] and the references therein).
Observe that all of the above results about the extragradient method or subgradient extragradient method are confined to Hilbert spaces. However, many important problems related to practical applications are generally defined in Banach spaces. For instance, Zhang et al. [40] pointed out that in machine learning, Banach spaces possess much richer geometric structures, which are potentially useful for developing learning algorithms. This is due to the fact that any two Hilbert spaces over $\mathbb{C}$ of the same dimension are isometrically isomorphic. Der and Lee [41] also pointed out that most data in machine learning do not come with any natural notion of distance that can be induced from an inner product. Zhang et al. [40] further argued that most data come with intrinsic structures that make it impossible to embed them into a Hilbert space. Hence, it is more desirable to propose an iterative algorithm for finding a solution of VIP (1) in Banach spaces.
Very recently, Liu [42] extended the subgradient extragradient method from Hilbert spaces into Banach spaces and proposed the following strong convergence theorem.
Theorem 1.
Let $E$ be a two-uniformly convex and uniformly smooth Banach space with two-uniformly convexity constant $c_1$ and let $C$ be a nonempty closed convex subset of $E$. Let $S : E \to E$ be a relatively nonexpansive mapping and $A : E \to E^*$ a monotone and $L$-Lipschitz mapping on $C$ with $L > 0$. Let $\{\lambda_n\}$ be a real sequence satisfying $0 < \inf_{n \geq 1}\lambda_n \leq \sup_{n \geq 1}\lambda_n < \frac{c_1}{L}$. Suppose that $VI(C, A) \cap F(S)$ is nonempty. Let $\{x_n\} \subset E$ be a sequence generated by Algorithm 3, as follows:
Algorithm 3:
$$\begin{aligned} & x_0 \in E, \\ & y_n = \Pi_C J^{-1}(Jx_n - \lambda_n A(x_n)), \\ & T_n = \{w \in E : \langle w - y_n, Jx_n - \lambda_n A(x_n) - Jy_n \rangle \leq 0\}, \\ & w_n = \Pi_{T_n}J^{-1}(Jx_n - \lambda_n A(y_n)), \\ & z_n = J^{-1}(\alpha_n Jx_0 + (1 - \alpha_n)Jw_n), \\ & x_{n+1} = J^{-1}(\beta_n Jx_n + (1 - \beta_n)JSz_n), \end{aligned}$$
where $\{\beta_n\} \subset [a, b] \subset [0, 1]$ for some $a, b \in (0, 1)$, and $\{\alpha_n\} \subset (0, 1)$ with $\lim_{n\to\infty}\alpha_n = 0$ and $\sum_{n=1}^{\infty}\alpha_n = \infty$. Then $x_n \to \Pi_{VI(C,A) \cap F(S)}x_0$.
We notice that Algorithm 3 has one projection made onto the closed convex set $C$. As pointed out earlier, computing the projection onto an arbitrary closed convex set is a difficult task. Another shortcoming of Algorithm 3, the EgM, and the SEgM is that the step size is defined by a constant (or a sequence) which depends on the Lipschitz constant of the monotone operator. The Lipschitz constant is typically assumed to be known, or at least estimated a priori. However, in many cases, this parameter is unknown or difficult to estimate. Moreover, the step size defined by the constant is often very small and slows down the convergence rate. In practice, a larger step size can often be used and yields better numerical results. To overcome these drawbacks, in this article, motivated by the cited works and the ongoing research in this area, we propose some relaxed subgradient extragradient methods with self-adaptive step size for approximating a common solution of variational inequality and fixed point problems for an infinite family of relatively nonexpansive mappings in the setting of Banach spaces. In our algorithms, the two projections are made onto half-spaces, which guarantees the easy implementation of the proposed methods. Moreover, the step size can be selected adaptively. We prove strong convergence theorems without the knowledge of the Lipschitz constant of the monotone operator, and we apply our results to finding a common solution of constrained convex minimization and fixed point problems in Banach spaces. Finally, we present some numerical examples to demonstrate the efficiency of our algorithms.

2. Preliminaries

In what follows, we let $\mathbb{N}$ and $\mathbb{R}$ be the sets of positive integers and real numbers, respectively. Let $E$ be a Banach space, $E^*$ its dual space, and let $\langle \cdot, \cdot \rangle$ denote the duality pairing of $E$ and $E^*$. When $\{x_n\}$ is a sequence in $E$, we denote the strong convergence of $\{x_n\}$ to $x \in E$ by $x_n \to x$ and the weak convergence by $x_n \rightharpoonup x$. An element $z \in E$ is called a weak cluster point of $\{x_n\}$ if there exists a subsequence $\{x_{n_j}\}$ of $\{x_n\}$ converging weakly to $z$. We write $\omega_w(x_n)$ for the set of all weak cluster points of $\{x_n\}$.
Next, we present some definitions and results that are employed in our subsequent analysis.
Definition 1.
A function $f : E \to \mathbb{R}$ is said to be weakly lower semicontinuous (w-lsc) at $x \in E$ if
$$f(x) \leq \liminf_{n\to\infty}f(x_n)$$
holds for every sequence $\{x_n\}_{n=0}^{\infty}$ in $E$ satisfying $x_n \rightharpoonup x$.
Let $g : E \to \mathbb{R}$ be a function. The subdifferential of $g$ at $x$ is defined by
$$\partial g(x) = \{w \in E^* : g(y) \geq g(x) + \langle y - x, w \rangle, \; \forall y \in E\}.$$
If $\partial g(x) \neq \emptyset$, then we say that $g$ is subdifferentiable at $x$.
Definition 2.
An operator $A : E \to E^*$ is said to be
(i) monotone if
$$\langle x - y, Ax - Ay \rangle \geq 0, \quad \forall x, y \in E;$$
(ii) $\alpha$-inverse-strongly-monotone if there exists a positive real number $\alpha$ such that
$$\langle x - y, Ax - Ay \rangle \geq \alpha\|Ax - Ay\|^2, \quad \forall x, y \in E;$$
(iii) $L$-Lipschitz continuous if there exists a constant $L > 0$ such that
$$\|Ax - Ay\| \leq L\|x - y\|, \quad \forall x, y \in E.$$
Clearly, an $\alpha$-inverse-strongly-monotone mapping is monotone and $\frac{1}{\alpha}$-Lipschitz continuous. However, the converse is not always true.
A Banach space $E$ is said to be strictly convex if, for all $x, y \in E$ with $\|x\| = \|y\| = 1$ and $x \neq y$, we have $\|(x + y)/2\| < 1$. $E$ is said to be uniformly convex if, for each $\varepsilon \in (0, 2]$, there exists $\delta > 0$ such that for all $x, y \in E$ with $\|x\| = \|y\| = 1$ and $\|x - y\| \geq \varepsilon$, we have $\|(x + y)/2\| < 1 - \delta$. It is well known that a uniformly convex Banach space is strictly convex and reflexive. The modulus of convexity of $E$ is defined as
$$\delta_E(\varepsilon) = \inf\left\{1 - \left\|\frac{x + y}{2}\right\| : x, y \in E, \; \|x\| = \|y\| = 1, \; \|x - y\| \geq \varepsilon\right\}.$$
Then the Banach space $E$ is uniformly convex if and only if $\delta_E(\varepsilon) > 0$ for all $\varepsilon \in (0, 2]$. In particular, if $H$ is a real Hilbert space, then $\delta_H(\varepsilon) = 1 - \sqrt{1 - (\varepsilon/2)^2}$. $E$ is said to be $p$-uniformly convex if there exists a constant $c > 0$ such that $\delta_E(\varepsilon) \geq c\varepsilon^p$ for all $\varepsilon \in [0, 2]$, where $p \geq 2$. It is easy to see that a $p$-uniformly convex Banach space is uniformly convex. In particular, a Hilbert space is two-uniformly convex.
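For completeness, the Hilbert-space formula follows in one line from the parallelogram law: for $\|x\| = \|y\| = 1$ with $\|x - y\| \geq \varepsilon$,
$$\left\|\frac{x + y}{2}\right\|^2 = \frac{\|x\|^2 + \|y\|^2}{2} - \left\|\frac{x - y}{2}\right\|^2 = 1 - \frac{\|x - y\|^2}{4} \leq 1 - \left(\frac{\varepsilon}{2}\right)^2,$$
so that $\delta_H(\varepsilon) = 1 - \sqrt{1 - (\varepsilon/2)^2}$.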
A Banach space $E$ is said to be smooth if the limit $\lim_{t\to 0}\frac{\|x + ty\| - \|x\|}{t}$ exists for all $x, y \in S_E$, where $S_E = \{x \in E : \|x\| = 1\}$. Moreover, if this limit is attained uniformly for $x, y \in S_E$, then $E$ is said to be uniformly smooth. It is obvious that a uniformly smooth space is smooth. In particular, a Hilbert space is uniformly smooth.
For $p > 1$, the generalized duality mapping $J_p : E \to 2^{E^*}$ is defined as follows:
$$J_px = \{x^* \in E^* : \langle x, x^* \rangle = \|x\|^p, \; \|x^*\| = \|x\|^{p-1}\}, \quad \forall x \in E.$$
In particular, $J = J_2$ is called the normalized duality mapping. If $E = H$, where $H$ is a real Hilbert space, then $J = I$. The normalized duality mapping $J$ has the following properties (see [43]):
(1) if $E$ is smooth, then $J$ is single-valued;
(2) if $E$ is strictly convex, then $J$ is one-to-one and strictly monotone;
(3) if $E$ is reflexive, then $J$ is surjective; and,
(4) if $E$ is uniformly smooth, then $J$ is uniformly norm-to-norm continuous on each bounded subset of $E$.
Let $E$ be a smooth Banach space. The Lyapunov functional $\phi : E \times E \to \mathbb{R}$ (see [44]) is defined by
$$\phi(x, y) = \|x\|^2 - 2\langle x, Jy \rangle + \|y\|^2, \quad \forall x, y \in E. \qquad (2)$$
From the definition, it is easy to see that $\phi(x, x) = 0$ for every $x \in E$. If $E$ is strictly convex, then $\phi(x, y) = 0$ if and only if $x = y$. If $E$ is a Hilbert space, it is easy to see that $\phi(x, y) = \|x - y\|^2$ for all $x, y \in E$. Moreover, for every $x, y, z \in E$ and $\alpha \in (0, 1)$, the Lyapunov functional $\phi$ satisfies the following properties:
(P1) $0 \leq (\|x\| - \|y\|)^2 \leq \phi(x, y) \leq (\|x\| + \|y\|)^2$;
(P2) $\phi(x, J^{-1}(\alpha Jz + (1 - \alpha)Jy)) \leq \alpha\phi(x, z) + (1 - \alpha)\phi(x, y)$;
(P3) $\phi(x, y) = \phi(x, z) + \phi(z, y) + 2\langle z - x, Jy - Jz \rangle$;
(P4) $\phi(x, y) \leq 2\langle y - x, Jy - Jx \rangle$;
(P5) $\phi(x, y) = \langle x, Jx - Jy \rangle + \langle y - x, Jy \rangle \leq \|x\|\|Jx - Jy\| + \|y - x\|\|y\|$.
Next, we define the functional $V : E \times E^* \to [0, +\infty)$ by
$$V(x, x^*) = \|x\|^2 - 2\langle x, x^* \rangle + \|x^*\|^2, \quad \forall x \in E, \; x^* \in E^*.$$
It can be deduced from (2) that $V$ is nonnegative and
$$V(x, x^*) = \phi(x, J^{-1}(x^*)).$$
We have the following result in a reflexive strictly convex and smooth Banach space.
Lemma 1.
([45]) Let $E$ be a reflexive, strictly convex, and smooth Banach space with $E^*$ as its dual. Then
$$V(x, x^*) + 2\langle J^{-1}x^* - x, y^* \rangle \leq V(x, x^* + y^*)$$
for all $x \in E$ and $x^*, y^* \in E^*$.
Definition 3.
Let $C$ be a nonempty closed convex subset of a real Banach space $E$. A point $p \in C$ is called an asymptotic fixed point (see [46]) of $T$ if $C$ contains a sequence $\{x_n\}$ which converges weakly to $p$ such that $\lim_{n\to\infty}\|x_n - Tx_n\| = 0$. We denote the set of asymptotic fixed points of $T$ by $\hat{F}(T)$. A mapping $T : C \to C$ is said to be:
1. relatively nonexpansive (see [47]) if:
   i. $F(T) \neq \emptyset$;
   ii. $\phi(p, Tx) \leq \phi(p, x)$ for all $p \in F(T)$, $x \in C$;
   iii. $\hat{F}(T) = F(T)$;
2. generalized nonspreading [48] if there are $\alpha, \beta, \gamma, \delta \in \mathbb{R}$ such that
$$\alpha\phi(Tx, Ty) + (1 - \alpha)\phi(x, Ty) + \gamma[\phi(Ty, Tx) - \phi(Ty, x)] \leq \beta\phi(Tx, y) + (1 - \beta)\phi(x, y) + \delta[\phi(y, Tx) - \phi(y, x)]$$
for all $x, y \in C$.
The following result shows the relationship between generalized nonspreading mappings and relatively nonexpansive mappings.
Lemma 2. 
([48]) Let $E$ be a strictly convex Banach space with a uniformly Gâteaux differentiable norm, $C$ a nonempty closed convex subset of $E$, and $T$ a generalized nonspreading mapping of $C$ into itself such that $F(T) \neq \emptyset$. Then $T$ is relatively nonexpansive.
Let $N(C)$ and $CB(C)$ denote the families of nonempty subsets and of nonempty closed bounded subsets of $C$, respectively. The Hausdorff metric on $CB(C)$ is defined by
$$H(A, B) := \max\left\{\sup_{a \in A}\mathrm{dist}(a, B), \; \sup_{b \in B}\mathrm{dist}(b, A)\right\}$$
for all $A, B \in CB(C)$, where $\mathrm{dist}(a, B) := \inf\{\|a - b\| : b \in B\}$.
Let $T : C \to CB(C)$ be a multivalued mapping. An element $p \in C$ is called a fixed point of $T$ if $p \in Tp$. A point $p \in C$ is called an asymptotic fixed point of $T$ if there exists a sequence $\{x_n\}$ in $C$, which converges weakly to $p$, such that $\lim_{n\to\infty}\mathrm{dist}(x_n, Tx_n) = 0$.
A mapping $T : C \to CB(C)$ is said to be relatively nonexpansive if:
• $F(T) \neq \emptyset$;
• $\phi(p, u) \leq \phi(p, x)$ for all $u \in Tx$, $p \in F(T)$, $x \in C$; and,
• $\hat{F}(T) = F(T)$.
The class of relatively nonexpansive multivalued mappings contains the class of relatively nonexpansive single-valued mappings.
Remark 1.
(See [49]) Let $E$ be a strictly convex and smooth Banach space, and $C$ a nonempty closed convex subset of $E$. Suppose $T : C \to N(C)$ is a relatively nonexpansive multivalued mapping. If $p \in F(T)$, then $Tp = \{p\}$.
Lemma 3.
([49]) Let $E$ be a strictly convex and smooth Banach space and $C$ a nonempty closed convex subset of $E$. Let $T : C \to N(C)$ be a relatively nonexpansive multivalued mapping. Then $F(T)$ is closed and convex.
Lemma 4.
([50]) Let $E$ be a smooth and uniformly convex Banach space, and let $\{x_n\}$ and $\{y_n\}$ be sequences in $E$ such that either $\{x_n\}$ or $\{y_n\}$ is bounded. If $\phi(x_n, y_n) \to 0$ as $n \to \infty$, then $\|x_n - y_n\| \to 0$ as $n \to \infty$.
Remark 2.
From Property (P4) of the Lyapunov functional, it follows that the converse of Lemma 4 also holds if the sequences $\{x_n\}$ and $\{y_n\}$ are bounded (see [51]).
Let $C$ be a nonempty closed convex subset of a smooth, strictly convex, and reflexive Banach space $E$. The generalized projection $\Pi_C$, defined by
$$\Pi_C(x) := \arg\min_{y \in C}\phi(y, x), \quad \forall x \in E,$$
assigns to each $x \in E$ the unique minimizer of the Lyapunov functional $\phi(\cdot, x)$ over $C$. It is known that, if $E$ is a real Hilbert space, then $\Pi_C$ coincides with the metric projection $P_C$. The following results relating to the generalized projection are well known.
Lemma 5.
([52]) Let $C$ be a nonempty closed convex subset of a reflexive, strictly convex, and smooth Banach space $E$. Given $x \in E$ and $z \in C$, $z = \Pi_Cx$ implies
$$\phi(y, z) + \phi(z, x) \leq \phi(y, x), \quad \forall y \in C.$$
Lemma 6.
([52,53]) Let $C$ be a nonempty closed and convex subset of a smooth Banach space $E$ and $x \in E$. Then $x_0 = \Pi_Cx$ if and only if
$$\langle x_0 - y, Jx - Jx_0 \rangle \geq 0, \quad \forall y \in C.$$
Lemma 7.
([52]) Let $p$ be a real number with $p \geq 2$. Then $E$ is $p$-uniformly convex if and only if there exists $c \in (0, 1]$ such that
$$\frac{1}{2}(\|x + y\|^p + \|x - y\|^p) \geq \|x\|^p + c^p\|y\|^p, \quad \forall x, y \in E.$$
Here, the best constant $\frac{1}{c}$ is called the $p$-uniformly convexity constant of $E$.
Lemma 8.
([54]) Let $E$ be a two-uniformly convex and smooth Banach space. Then, for every $x, y \in E$, $\phi(x, y) \geq \frac{c^2}{2}\|x - y\|^2$, where $\frac{1}{c}$ is the two-uniformly convexity constant of $E$.
Lemma 9.
([52,55]) Let $E$ be a $p$-uniformly convex Banach space with $p \geq 2$. Then
$$\langle x - y, j_p(x) - j_p(y) \rangle \geq \frac{c^p}{2^{p-2}p}\|x - y\|^p, \quad \forall x, y \in E, \; j_p(x) \in J_px, \; j_p(y) \in J_py,$$
where $\frac{1}{c}$ is the $p$-uniformly convexity constant.
Lemma 10.
([56]) Let $E$ be a uniformly convex Banach space, $r > 0$ a positive number, and $B_r(0)$ the closed ball of $E$ of radius $r$. Then, for any sequence $\{x_i\}_{i=1}^{\infty} \subset B_r(0)$ and any sequence $\{\lambda_i\}_{i=1}^{\infty}$ of positive numbers with $\sum_{n=1}^{\infty}\lambda_n = 1$, there exists a continuous, strictly increasing, and convex function $g : [0, 2r) \to [0, \infty)$ with $g(0) = 0$ such that, for any positive integers $i, j$ with $i < j$,
$$\left\|\sum_{n=1}^{\infty}\lambda_nx_n\right\|^2 \leq \sum_{n=1}^{\infty}\lambda_n\|x_n\|^2 - \lambda_i\lambda_jg(\|x_i - x_j\|).$$
Lemma 11.
([57]) Let $\{a_n\}$ be a sequence of nonnegative real numbers, $\{\alpha_n\}$ a sequence in $(0, 1)$ with $\sum_{n=1}^{\infty}\alpha_n = \infty$, and $\{b_n\}$ a sequence of real numbers. Assume that
$$a_{n+1} \leq (1 - \alpha_n)a_n + \alpha_nb_n \quad \text{for all } n \geq 1.$$
If $\limsup_{k\to\infty}b_{n_k} \leq 0$ for every subsequence $\{a_{n_k}\}$ of $\{a_n\}$ satisfying $\liminf_{k\to\infty}(a_{n_k+1} - a_{n_k}) \geq 0$, then $\lim_{n\to\infty}a_n = 0$.
An operator $A$ of $C$ into $E^*$ is said to be hemicontinuous if for all $x, y \in C$, the mapping $f$ of $[0, 1]$ into $E^*$ defined by $f(t) = A(tx + (1 - t)y)$ is continuous with respect to the weak* topology of $E^*$.
Lemma 12.
([52]) Let $C$ be a nonempty, closed, and convex subset of a Banach space $E$ and $A$ a monotone, hemicontinuous operator of $C$ into $E^*$. Then
$$VI(C, A) = \{u \in C : \langle v - u, Av \rangle \geq 0, \; \forall v \in C\}.$$
It is obvious from Lemma 12 that the set $VI(C, A)$ is a closed and convex subset of $C$.

3. Main Results

In this section, we present our algorithms and prove some strong convergence results for the proposed algorithms. We establish the convergence of the algorithms under the following conditions:
• Condition A:
(A1) $E$ is a 2-uniformly convex and uniformly smooth Banach space with 2-uniformly convexity constant $\frac{1}{c}$;
(A2) $C$ is a nonempty closed convex set given by $C = \{x \in E : g(x) \leq 0\}$, where $g : E \to \mathbb{R}$ is a convex function;
(A3) $g$ is weakly lower semicontinuous on $E$;
(A4) for any $x \in E$, at least one subgradient $\xi \in \partial g(x)$ can be calculated (i.e., $g$ is subdifferentiable on $E$), where $\partial g(x)$ is defined as follows:
$$\partial g(x) = \{z \in E^* : g(y) \geq g(x) + \langle y - x, z \rangle, \; \forall y \in E\}.$$
In addition, $\partial g$ is bounded on bounded sets.
• Condition B:
(B1) the solution set, denoted by $\Omega = VI(C, A) \cap \bigcap_{i=1}^{\infty}F(S_i)$, is nonempty, where $S_i : E \to CB(E)$, $i \in \mathbb{N}$, is an infinite family of multivalued relatively nonexpansive mappings;
(B2) the mapping $A : E \to E^*$ is monotone and Lipschitz continuous with Lipschitz constant $L > 0$.
• Condition C:
(C1) $\{\alpha_n\} \subset (0, 1)$, $\lim_{n\to\infty}\alpha_n = 0$, and $\sum_{n=1}^{\infty}\alpha_n = \infty$;
(C2) $\{\beta_{n,i}\} \subset [a, b] \subset (0, 1)$ for some $a, b \in (0, 1)$, $\sum_{i=0}^{\infty}\beta_{n,i} = 1$, and $\liminf_{n\to\infty}\beta_{n,0}\beta_{n,i} > 0$ for all $i \geq 1$;
(C3) $\lambda_1 > 0$, $\mu \in (0, \frac{c^2}{2})$.
Now, we present our first algorithm in Algorithm 4, as follows:
Algorithm 4:
Step 0. Select $\{\alpha_n\}$, $\{\beta_{n,i}\}$, $\lambda_1$, and $\mu$ such that Condition C holds. Choose $x_1 \in E$ and set $n = 1$.
Step 1. Construct the half-space
$$C_n = \{w \in E : g(x_n) + \langle w - x_n, \xi_n \rangle \leq 0\},$$
where $\xi_n \in \partial g(x_n)$, and compute
$$y_n = \Pi_{C_n}J^{-1}(Jx_n - \lambda_nAx_n).$$
If $x_n - y_n = 0$, then set $w_n = z_n = x_n$ and go to Step 4. Else, go to Step 2.
Step 2. Construct the half-space
$$T_n = \{w \in E : \langle w - y_n, Jx_n - \lambda_nAx_n - Jy_n \rangle \leq 0\},$$
and compute
$$w_n = \Pi_{T_n}J^{-1}(Jx_n - \lambda_nAy_n).$$
Step 3. Compute
$$z_n = J^{-1}(\alpha_nJx_1 + (1 - \alpha_n)Jw_n).$$
Step 4. Compute
$$x_{n+1} = J^{-1}\Big(\beta_{n,0}Jx_n + \sum_{i=1}^{\infty}\beta_{n,i}Ju_{n,i}\Big), \quad u_{n,i} \in S_iz_n.$$
Step 5. Compute
$$\lambda_{n+1} = \begin{cases} \min\left\{\dfrac{\mu(\|x_n - y_n\|^2 + \|w_n - y_n\|^2)}{2\langle Ax_n - Ay_n, w_n - y_n \rangle}, \lambda_n\right\}, & \text{if } \langle Ax_n - Ay_n, w_n - y_n \rangle > 0, \\ \lambda_n, & \text{otherwise}. \end{cases}$$
Set $n := n + 1$ and return to Step 1.
Remark 3.
From the construction of the half-spaces $C_n$ and $T_n$, it can easily be verified that $C \subset C_n$ and $C \subset T_n$. Indeed, for any $x \in C$ the subgradient inequality gives $g(x_n) + \langle x - x_n, \xi_n \rangle \leq g(x) \leq 0$, so $x \in C_n$; and $C_n \subset T_n$ follows from the characterization of $\Pi_{C_n}$ in Lemma 6.
Remark 4.
([58]) The sequence $\{\lambda_n\}$ is monotonically decreasing with lower bound $\min\{\frac{\mu}{L}, \lambda_1\}$; hence, the limit of $\{\lambda_n\}$ exists, and we denote it by $\lambda = \lim_{n\to\infty}\lambda_n$. It is clear that $\lambda > 0$.
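For intuition, Step 5 is a one-line update. The sketch below (Python, in $\mathbb{R}^n$ so that the duality pairing becomes a dot product; the function name is ours) shows why no knowledge of $L$ is needed: when the denominator is positive, the Lipschitz bound $\langle Ax_n - Ay_n, w_n - y_n \rangle \leq \frac{L}{2}(\|x_n - y_n\|^2 + \|w_n - y_n\|^2)$ keeps the candidate value at least $\mu/L$.

```python
import numpy as np

def next_step_size(lam, mu, x, y, w, Ax, Ay):
    # Step 5 of Algorithm 4 in a finite-dimensional setting.
    denom = 2.0 * (Ax - Ay) @ (w - y)
    if denom > 0:
        num = mu * (np.linalg.norm(x - y) ** 2 + np.linalg.norm(w - y) ** 2)
        return min(num / denom, lam)   # the sequence never increases
    return lam                         # otherwise keep the previous value
```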
Lemma 13.
Let $\{x_n\}$ be a sequence generated by Algorithm 4. Then the following inequality holds for all $p \in \Omega$ and $n \in \mathbb{N}$:
$$\phi(p, w_n) \leq \phi(p, x_n) - \left(1 - \frac{2}{c^2}\mu\frac{\lambda_n}{\lambda_{n+1}}\right)\left[\phi(w_n, y_n) + \phi(y_n, x_n)\right]. \qquad (3)$$
In particular, there exists $N_0 > 0$ such that, for all $n > N_0$, we have
$$\phi(p, w_n) \leq \phi(p, x_n). \qquad (4)$$
Proof. 
Let $p \in \Omega$. Then, by applying Lemma 5 and Property (P3) of the Lyapunov functional together with the monotonicity of $A$, we have
$$\begin{aligned} \phi(p, w_n) &= \phi(p, \Pi_{T_n}J^{-1}(Jx_n - \lambda_nAy_n)) \\ &\leq \phi(p, J^{-1}(Jx_n - \lambda_nAy_n)) - \phi(w_n, J^{-1}(Jx_n - \lambda_nAy_n)) \\ &= \phi(p, x_n) + \phi(x_n, J^{-1}(Jx_n - \lambda_nAy_n)) + 2\lambda_n\langle p - x_n, Ay_n \rangle \\ &\quad - \phi(w_n, x_n) - \phi(x_n, J^{-1}(Jx_n - \lambda_nAy_n)) - 2\lambda_n\langle w_n - x_n, Ay_n \rangle \\ &= \phi(p, x_n) - \phi(w_n, x_n) + 2\lambda_n\langle p - w_n, Ay_n \rangle \\ &= \phi(p, x_n) - \phi(w_n, x_n) + 2\lambda_n\langle p - y_n, Ay_n - Ap \rangle + 2\lambda_n\langle p - y_n, Ap \rangle + 2\lambda_n\langle y_n - w_n, Ay_n \rangle \\ &\leq \phi(p, x_n) - \phi(w_n, x_n) + 2\lambda_n\langle y_n - w_n, Ay_n \rangle \\ &= \phi(p, x_n) - \phi(w_n, y_n) - \phi(y_n, x_n) - 2\langle w_n - y_n, Jy_n - Jx_n \rangle + 2\lambda_n\langle y_n - w_n, Ay_n \rangle \\ &= \phi(p, x_n) - \phi(w_n, y_n) - \phi(y_n, x_n) + 2\langle w_n - y_n, Jx_n - \lambda_nAy_n - Jy_n \rangle. \end{aligned} \qquad (5)$$
By the definition of $T_n$, we have that
$$\langle w_n - y_n, Jx_n - \lambda_nAx_n - Jy_n \rangle \leq 0.$$
Then, by the definition of $\lambda_{n+1}$, the Lipschitz continuity of $A$, Lemma 8, and the Cauchy–Schwarz inequality, we obtain
$$\begin{aligned} 2\langle w_n - y_n, Jx_n - \lambda_nAy_n - Jy_n \rangle &= 2\langle w_n - y_n, Jx_n - \lambda_nAx_n - Jy_n \rangle + 2\lambda_n\langle w_n - y_n, Ax_n - Ay_n \rangle \\ &\leq 2\lambda_n\langle w_n - y_n, Ax_n - Ay_n \rangle \\ &\leq \frac{\lambda_n}{\lambda_{n+1}}\mu(\|x_n - y_n\|^2 + \|w_n - y_n\|^2) \\ &\leq \frac{2}{c^2}\mu\frac{\lambda_n}{\lambda_{n+1}}\left(\phi(y_n, x_n) + \phi(w_n, y_n)\right). \end{aligned} \qquad (6)$$
Combining (5) and (6), we get
$$\phi(p, w_n) \leq \phi(p, x_n) - \phi(w_n, y_n) - \phi(y_n, x_n) + \frac{2}{c^2}\mu\frac{\lambda_n}{\lambda_{n+1}}\left(\phi(y_n, x_n) + \phi(w_n, y_n)\right) = \phi(p, x_n) - \left(1 - \frac{2}{c^2}\mu\frac{\lambda_n}{\lambda_{n+1}}\right)\left[\phi(w_n, y_n) + \phi(y_n, x_n)\right].$$
Now, consider the limit
$$\lim_{n\to\infty}\left(1 - \frac{2}{c^2}\mu\frac{\lambda_n}{\lambda_{n+1}}\right) = 1 - \frac{2}{c^2}\mu > 0 \quad \left(0 < \mu < \frac{c^2}{2}\right).$$
Hence, there exists $N_0 > 0$ such that, for all $n > N_0$, we have $1 - \frac{2}{c^2}\mu\frac{\lambda_n}{\lambda_{n+1}} > 0$. Thus, it follows that for all $n > N_0$, we have
$$\phi(p, w_n) \leq \phi(p, x_n),$$
as required.  □
Lemma 14.
The sequence $\{x_n\}$ generated by Algorithm 4 is bounded.
Proof. 
Let $p \in \Omega$. Then, by applying Lemma 10 and (4), we have, for all $n > N_0$ and each fixed $i \geq 1$,
$$\begin{aligned} \phi(p, x_{n+1}) &= \phi\Big(p, J^{-1}\Big(\beta_{n,0}Jx_n + \sum_{i=1}^{\infty}\beta_{n,i}Ju_{n,i}\Big)\Big) \\ &= \|p\|^2 - 2\Big\langle p, \beta_{n,0}Jx_n + \sum_{i=1}^{\infty}\beta_{n,i}Ju_{n,i}\Big\rangle + \Big\|\beta_{n,0}Jx_n + \sum_{i=1}^{\infty}\beta_{n,i}Ju_{n,i}\Big\|^2 \\ &\leq \|p\|^2 - 2\beta_{n,0}\langle p, Jx_n \rangle - 2\sum_{i=1}^{\infty}\beta_{n,i}\langle p, Ju_{n,i} \rangle + \beta_{n,0}\|Jx_n\|^2 + \sum_{i=1}^{\infty}\beta_{n,i}\|Ju_{n,i}\|^2 - \beta_{n,0}\beta_{n,i}g(\|Jx_n - Ju_{n,i}\|) \\ &= \beta_{n,0}\phi(p, x_n) + \sum_{i=1}^{\infty}\beta_{n,i}\phi(p, u_{n,i}) - \beta_{n,0}\beta_{n,i}g(\|Jx_n - Ju_{n,i}\|) \\ &\leq \beta_{n,0}\phi(p, x_n) + \sum_{i=1}^{\infty}\beta_{n,i}\phi(p, z_n) - \beta_{n,0}\beta_{n,i}g(\|Jx_n - Ju_{n,i}\|) \\ &= \beta_{n,0}\phi(p, x_n) + (1 - \beta_{n,0})\phi(p, J^{-1}(\alpha_nJx_1 + (1 - \alpha_n)Jw_n)) - \beta_{n,0}\beta_{n,i}g(\|Jx_n - Ju_{n,i}\|) \\ &\leq \beta_{n,0}\phi(p, x_n) + (1 - \beta_{n,0})\left(\alpha_n\phi(p, x_1) + (1 - \alpha_n)\phi(p, w_n)\right) \\ &\leq \beta_{n,0}\phi(p, x_n) + (1 - \beta_{n,0})\left(\alpha_n\phi(p, x_1) + (1 - \alpha_n)\phi(p, x_n)\right) \\ &= (1 - \alpha_n(1 - \beta_{n,0}))\phi(p, x_n) + \alpha_n(1 - \beta_{n,0})\phi(p, x_1) \\ &\leq \max\{\phi(p, x_n), \phi(p, x_1)\} \\ &\;\;\vdots \\ &\leq \max\{\phi(p, x_{N_0}), \phi(p, x_1)\}. \end{aligned} \qquad (7)$$
Hence, the sequence $\{x_n\}$ is bounded. Consequently, $\{w_n\}$, $\{y_n\}$, and $\{z_n\}$ are also bounded.  □
Lemma 15.
The following inequality holds for all $p \in \Omega$ and $n > N_0$:
$$\phi(p, x_{n+1}) \leq (1 - \alpha_n(1 - \beta_{n,0}))\phi(p, x_n) + 2\alpha_n(1 - \beta_{n,0})\langle z_n - p, Jx_1 - Jp \rangle - \beta_{n,0}\beta_{n,i}g(\|Jx_n - Ju_{n,i}\|).$$
Proof. 
Let $p \in \Omega$. Then, from (7), by applying (4) and Lemma 1, we have
$$\begin{aligned} \phi(p, x_{n+1}) &\leq \beta_{n,0}\phi(p, x_n) + (1 - \beta_{n,0})\phi(p, J^{-1}(\alpha_nJx_1 + (1 - \alpha_n)Jw_n)) - \beta_{n,0}\beta_{n,i}g(\|Jx_n - Ju_{n,i}\|) \\ &= \beta_{n,0}\phi(p, x_n) + (1 - \beta_{n,0})V(p, \alpha_nJx_1 + (1 - \alpha_n)Jw_n) - \beta_{n,0}\beta_{n,i}g(\|Jx_n - Ju_{n,i}\|) \\ &\leq \beta_{n,0}\phi(p, x_n) + (1 - \beta_{n,0})\big\{V\big(p, \alpha_nJx_1 + (1 - \alpha_n)Jw_n - \alpha_n(Jx_1 - Jp)\big) + 2\alpha_n\langle z_n - p, Jx_1 - Jp \rangle\big\} - \beta_{n,0}\beta_{n,i}g(\|Jx_n - Ju_{n,i}\|) \\ &= \beta_{n,0}\phi(p, x_n) + (1 - \beta_{n,0})\big\{V(p, \alpha_nJp + (1 - \alpha_n)Jw_n) + 2\alpha_n\langle z_n - p, Jx_1 - Jp \rangle\big\} - \beta_{n,0}\beta_{n,i}g(\|Jx_n - Ju_{n,i}\|) \\ &\leq \beta_{n,0}\phi(p, x_n) + (1 - \beta_{n,0})\big\{\alpha_nV(p, Jp) + (1 - \alpha_n)V(p, Jw_n) + 2\alpha_n\langle z_n - p, Jx_1 - Jp \rangle\big\} - \beta_{n,0}\beta_{n,i}g(\|Jx_n - Ju_{n,i}\|) \\ &= \beta_{n,0}\phi(p, x_n) + (1 - \alpha_n)(1 - \beta_{n,0})\phi(p, w_n) + 2\alpha_n(1 - \beta_{n,0})\langle z_n - p, Jx_1 - Jp \rangle - \beta_{n,0}\beta_{n,i}g(\|Jx_n - Ju_{n,i}\|) \\ &\leq \beta_{n,0}\phi(p, x_n) + (1 - \alpha_n)(1 - \beta_{n,0})\phi(p, x_n) + 2\alpha_n(1 - \beta_{n,0})\langle z_n - p, Jx_1 - Jp \rangle - \beta_{n,0}\beta_{n,i}g(\|Jx_n - Ju_{n,i}\|) \\ &= (1 - \alpha_n(1 - \beta_{n,0}))\phi(p, x_n) + 2\alpha_n(1 - \beta_{n,0})\langle z_n - p, Jx_1 - Jp \rangle - \beta_{n,0}\beta_{n,i}g(\|Jx_n - Ju_{n,i}\|). \end{aligned}$$
 □
Lemma 16.
Assume that $\{x_n\}$ and $\{y_n\}$ are sequences generated by Algorithm 4 such that $\lim_{n\to\infty}\|x_n - y_n\| = 0$. Let $\{x_{n_k}\}$ be a subsequence of $\{x_n\}$ which converges weakly to some $\hat{x} \in E$ as $k \to \infty$. Then $\hat{x} \in VI(C, A)$.
Proof. 
Since $x_{n_k} \rightharpoonup \hat{x}$, by the hypothesis of the lemma we have that $y_{n_k} \rightharpoonup \hat{x}$. Since $y_{n_k} \in C_{n_k}$, it follows from the definition of $C_n$ that
$$g(x_{n_k}) + \langle y_{n_k} - x_{n_k}, \xi_{n_k} \rangle \leq 0.$$
Since $\{x_n\}$ is bounded, by Condition (A4) there exists a constant $M > 0$ such that $\|\xi_{n_k}\| \leq M$ for all $k \geq 0$. Hence,
$$g(x_{n_k}) \leq \langle x_{n_k} - y_{n_k}, \xi_{n_k} \rangle \leq M\|x_{n_k} - y_{n_k}\| \to 0, \quad k \to \infty.$$
By Condition (A3), we have that
$$g(\hat{x}) \leq \liminf_{k\to\infty}g(x_{n_k}) \leq 0.$$
Hence, it follows from Condition (A2) that $\hat{x} \in C$. From Lemma 6, we obtain
$$\langle y_{n_k} - z, Jx_{n_k} - \lambda_{n_k}Ax_{n_k} - Jy_{n_k} \rangle \geq 0, \quad \forall z \in C \subset C_{n_k}.$$
By the monotonicity of $A$, we obtain
$$\begin{aligned} 0 &\leq \langle y_{n_k} - z, Jx_{n_k} - Jy_{n_k} \rangle - \lambda_{n_k}\langle y_{n_k} - z, Ax_{n_k} \rangle \\ &= \langle z - y_{n_k}, Jy_{n_k} - Jx_{n_k} \rangle + \lambda_{n_k}\langle z - y_{n_k}, Ax_{n_k} \rangle \\ &= \langle z - y_{n_k}, Jy_{n_k} - Jx_{n_k} \rangle + \lambda_{n_k}\langle z - x_{n_k}, Ax_{n_k} \rangle + \lambda_{n_k}\langle x_{n_k} - y_{n_k}, Ax_{n_k} \rangle \\ &\leq \langle z - y_{n_k}, Jy_{n_k} - Jx_{n_k} \rangle + \lambda_{n_k}\langle z - x_{n_k}, Az \rangle + \lambda_{n_k}\langle x_{n_k} - y_{n_k}, Ax_{n_k} \rangle \\ &\leq \|z - y_{n_k}\|\|Jy_{n_k} - Jx_{n_k}\| + \lambda_{n_k}\langle z - x_{n_k}, Az \rangle + \lambda_{n_k}\|x_{n_k} - y_{n_k}\|\|Ax_{n_k}\|. \end{aligned}$$
Letting $k \to \infty$, and since $\{Ax_{n_k}\}$ is bounded, $\lim_{n\to\infty}\|x_n - y_n\| = 0$, and $J$ is uniformly norm-to-norm continuous on bounded sets, we have
$$\langle z - \hat{x}, Az \rangle \geq 0, \quad \forall z \in C.$$
By Lemma 12, it follows that $\hat{x} \in VI(C, A)$, as required.  □
Lemma 17.
Let $p \in \Omega$ and suppose $\{x_{n_k}\}$ is a subsequence of $\{x_n\}$ such that $\liminf_{k\to\infty}(\phi(p, x_{n_k+1}) - \phi(p, x_{n_k})) \geq 0$. Then $\omega_w(x_n) \subset \Omega$.
Proof. 
By the hypothesis of the lemma, the construction of $\{x_{n+1}\}$, the properties of the Lyapunov functional, and (3), we obtain
$$\begin{aligned} 0 &\leq \liminf_{k\to\infty}\left(\phi(p, x_{n_k+1}) - \phi(p, x_{n_k})\right) \\ &\leq \liminf_{k\to\infty}\Big(\beta_{n_k,0}\phi(p, x_{n_k}) + \sum_{i=1}^{\infty}\beta_{n_k,i}\phi(p, u_{n_k,i}) - \phi(p, x_{n_k})\Big) \\ &\leq \liminf_{k\to\infty}\left(\beta_{n_k,0}\phi(p, x_{n_k}) + (1 - \beta_{n_k,0})\phi(p, z_{n_k}) - \phi(p, x_{n_k})\right) \\ &\leq \liminf_{k\to\infty}\left(\beta_{n_k,0}\phi(p, x_{n_k}) + \alpha_{n_k}(1 - \beta_{n_k,0})\phi(p, x_1) + (1 - \alpha_{n_k})(1 - \beta_{n_k,0})\phi(p, w_{n_k}) - \phi(p, x_{n_k})\right) \\ &= \liminf_{k\to\infty}\left(-(1 - \beta_{n_k,0})\phi(p, x_{n_k}) + \alpha_{n_k}(1 - \beta_{n_k,0})\phi(p, x_1) + (1 - \alpha_{n_k})(1 - \beta_{n_k,0})\phi(p, w_{n_k})\right) \\ &= \liminf_{k\to\infty}(1 - \beta_{n_k,0})\left(\alpha_{n_k}\phi(p, x_1) + (1 - \alpha_{n_k})\phi(p, w_{n_k}) - \phi(p, x_{n_k})\right) \\ &= \liminf_{k\to\infty}(1 - \beta_{n_k,0})\left(\phi(p, w_{n_k}) - \phi(p, x_{n_k})\right) \\ &\leq (1 - a)\liminf_{k\to\infty}\left(\phi(p, w_{n_k}) - \phi(p, x_{n_k})\right) \\ &\leq (1 - a)\limsup_{k\to\infty}\left(\phi(p, w_{n_k}) - \phi(p, x_{n_k})\right) \leq 0. \end{aligned}$$
Therefore,
$$\lim_{k\to\infty}\left(\phi(p, w_{n_k}) - \phi(p, x_{n_k})\right) = 0.$$
It then follows from Lemma 13, Lemma 8, and Lemma 16 that
$$\lim_{k\to\infty}\|x_{n_k} - y_{n_k}\| = 0, \quad \lim_{k\to\infty}\|w_{n_k} - y_{n_k}\| = 0, \quad \text{and} \quad \omega_w(x_n) \subset VI(C, A). \qquad (8)$$
Next, we show that $\omega_w(x_n) \subset \bigcap_{i=1}^{\infty}F(S_i)$. Suppose $x_{n_k} \rightharpoonup x^* \in \omega_w(x_n)$. Then, by applying Lemma 15, we obtain
$$\begin{aligned} 0 &\leq \liminf_{k\to\infty}\left(\phi(p, x_{n_k+1}) - \phi(p, x_{n_k})\right) \\ &\leq \liminf_{k\to\infty}\left(-\alpha_{n_k}(1 - \beta_{n_k,0})\phi(p, x_{n_k}) + 2\alpha_{n_k}(1 - \beta_{n_k,0})\langle z_{n_k} - p, Jx_1 - Jp \rangle - \beta_{n_k,0}\beta_{n_k,i}g(\|Jx_{n_k} - Ju_{n_k,i}\|)\right) \\ &= -\limsup_{k\to\infty}\beta_{n_k,0}\beta_{n_k,i}g(\|Jx_{n_k} - Ju_{n_k,i}\|) \\ &\leq -a^2\limsup_{k\to\infty}g(\|Jx_{n_k} - Ju_{n_k,i}\|) \leq 0. \end{aligned}$$
Hence,
$$g(\|Jx_{n_k} - Ju_{n_k,i}\|) \to 0, \quad k \to \infty.$$
It follows from the property of $g$ that
$$\lim_{k\to\infty}\|Jx_{n_k} - Ju_{n_k,i}\| = 0, \quad \forall i \geq 1.$$
Since $J^{-1}$ is uniformly norm-to-norm continuous on bounded sets, we have
$$\lim_{k\to\infty}\|x_{n_k} - u_{n_k,i}\| = 0, \quad \forall i \geq 1. \qquad (9)$$
From (8), we have that
$$\lim_{k\to\infty}\|w_{n_k} - x_{n_k}\| = 0.$$
Since $J$ is uniformly norm-to-norm continuous on bounded sets, we obtain
$$\lim_{k\to\infty}\|Jw_{n_k} - Jx_{n_k}\| = 0. \qquad (10)$$
By the definition of $z_n$, applying (10) and the fact that $\lim_{n\to\infty}\alpha_n = 0$, we get
$$\|Jz_{n_k} - Jx_{n_k}\| = \|\alpha_{n_k}Jx_1 + (1 - \alpha_{n_k})Jw_{n_k} - Jx_{n_k}\| = \|\alpha_{n_k}(Jx_1 - Jx_{n_k}) + (1 - \alpha_{n_k})(Jw_{n_k} - Jx_{n_k})\| \leq \alpha_{n_k}\|Jx_1 - Jx_{n_k}\| + (1 - \alpha_{n_k})\|Jw_{n_k} - Jx_{n_k}\| \to 0, \quad k \to \infty.$$
Since $J^{-1}$ is uniformly norm-to-norm continuous on bounded sets, it follows that
$$\|z_{n_k} - x_{n_k}\| \to 0, \quad k \to \infty. \qquad (11)$$
By applying (9) and (11), we have
$$\|z_{n_k} - u_{n_k,i}\| \leq \|z_{n_k} - x_{n_k}\| + \|x_{n_k} - u_{n_k,i}\| \to 0, \quad k \to \infty, \quad \forall i \geq 1. \qquad (12)$$
Hence, it follows that
$$\lim_{k\to\infty}\mathrm{dist}(z_{n_k}, S_iz_{n_k}) \leq \lim_{k\to\infty}\|z_{n_k} - u_{n_k,i}\| = 0, \quad \forall i \geq 1.$$
From (11) and (12), and the definition of $S_i$ for all $i \geq 1$, we have that
$$x^* \in S_ix^*, \quad \forall i \geq 1,$$
which implies that
$$x^* \in \bigcap_{i=1}^{\infty}F(S_i). \qquad (13)$$
Hence, it follows from (11) and (13) that
$$\omega_w(x_n) = \omega_w(z_n) \subset \bigcap_{i=1}^{\infty}F(S_i).$$
This, together with (8), implies that $\omega_w(x_n) \subset \Omega$, as required.  □
Theorem 2.
Let $\{x_n\}$ be a sequence generated by Algorithm 4 such that Conditions A–C are satisfied. Then the sequence $\{x_n\}$ converges strongly to $x = \Pi_\Omega x_1$.
Proof. 
Let $x = \Pi_\Omega x_1$. Then it follows from Lemma 15 that
$$\phi(x, x_{n+1}) \leq (1 - \alpha_n(1 - \beta_{n,0}))\phi(x, x_n) + 2\alpha_n(1 - \beta_{n,0})\langle z_n - x, Jx_1 - Jx \rangle. \qquad (14)$$
Now, we claim that the sequence $\{\phi(x, x_n)\}$ converges to zero. In view of Lemma 11, it suffices to show that $\limsup_{k\to\infty}\langle z_{n_k} - x, Jx_1 - Jx \rangle \leq 0$ for every subsequence $\{\phi(x, x_{n_k})\}$ of $\{\phi(x, x_n)\}$ satisfying
$$\liminf_{k\to\infty}\left(\phi(x, x_{n_k+1}) - \phi(x, x_{n_k})\right) \geq 0.$$
Suppose $\{\phi(x, x_{n_k})\}$ is such a subsequence. Then it follows from Lemma 17 that $\omega_w(x_n) \subset \Omega$. Additionally, from (11) we have that $\omega_w(x_n) = \omega_w(z_n)$. Since $\{x_{n_k}\}$ is bounded, there exists a subsequence $\{x_{n_{k_j}}\}$ of $\{x_{n_k}\}$ such that $x_{n_{k_j}} \rightharpoonup u$ and
$$\lim_{j\to\infty}\langle x_{n_{k_j}} - x, Jx_1 - Jx \rangle = \limsup_{k\to\infty}\langle x_{n_k} - x, Jx_1 - Jx \rangle = \limsup_{k\to\infty}\langle z_{n_k} - x, Jx_1 - Jx \rangle. \qquad (15)$$
Since $x = \Pi_\Omega x_1$ and $u \in \Omega$, it follows from Lemma 6 and (15) that
$$\limsup_{k\to\infty}\langle x_{n_k} - x, Jx_1 - Jx \rangle = \lim_{j\to\infty}\langle x_{n_{k_j}} - x, Jx_1 - Jx \rangle = \langle u - x, Jx_1 - Jx \rangle \leq 0, \qquad (16)$$
which implies that
$$\limsup_{k\to\infty}\langle z_{n_k} - x, Jx_1 - Jx \rangle \leq 0.$$
Applying Lemma 11 to (14), together with (16), we deduce that $\lim_{n\to\infty}\phi(x, x_n) = 0$, that is, $\lim_{n\to\infty}x_n = x$. This completes the proof.  □
Next, we propose our second algorithm in Algorithm 5, which is a slight modification of Algorithm 4. Because the proof of Theorem 3 is very similar to that of Theorem 2, we leave its verification to the reader.
Theorem 3.
Let { x n } be a sequence generated by Algorithm 5, as follows:
Algorithm 5:
Step 0. Select $\{\alpha_n\}$, $\{\beta_{n,i}\}$, $\lambda_1$, and $\mu$ such that Condition C holds. Choose $x_1 \in E$ and set $n = 1$.
Step 1. Construct the half-space
$$C_n = \{w \in E : g(x_n) + \langle w - x_n, \xi_n \rangle \leq 0\},$$
where $\xi_n \in \partial g(x_n)$, and compute
$$y_n = \Pi_{C_n}J^{-1}(Jx_n - \lambda_nAx_n).$$
If $x_n - y_n = 0$, then set $w_n = z_n = x_n$ and go to Step 4. Else, go to Step 2.
Step 2. Construct the half-space
$$T_n = \{w \in E : \langle w - y_n, Jx_n - \lambda_nAx_n - Jy_n \rangle \leq 0\},$$
and compute
$$w_n = \Pi_{T_n}J^{-1}(Jx_n - \lambda_nAy_n).$$
Step 3. Compute
$$z_n = J^{-1}(\alpha_nJx_1 + (1 - \alpha_n)Jw_n).$$
Step 4. Compute
$$x_{n+1} = J^{-1}\Big(\beta_{n,0}Jz_n + \sum_{i=1}^{\infty}\beta_{n,i}Ju_{n,i}\Big), \quad u_{n,i} \in S_iz_n.$$
Step 5. Compute
$$\lambda_{n+1} = \begin{cases} \min\left\{\dfrac{\mu(\|x_n - y_n\|^2 + \|w_n - y_n\|^2)}{2\langle Ax_n - Ay_n, w_n - y_n \rangle}, \lambda_n\right\}, & \text{if } \langle Ax_n - Ay_n, w_n - y_n \rangle > 0, \\ \lambda_n, & \text{otherwise}. \end{cases}$$
Set $n := n + 1$ and return to Step 1.
Suppose that all of the conditions of Theorem 2 are satisfied. Then the sequence $\{x_n\}$ generated by Algorithm 5 converges strongly to $x = \Pi_\Omega x_1$.
Now, we obtain some results, which are consequences of Theorem 2.
If we take the multivalued relatively nonexpansive mappings $S_i$, $i \in \mathbb{N}$, in Theorem 2 to be single-valued relatively nonexpansive mappings, then we obtain the following result.
Corollary 1.
Let $S_i : E \to E$, $i \in \mathbb{N}$, be an infinite family of single-valued relatively nonexpansive mappings and $\{x_n\}$ be a sequence generated by Algorithm 6, as follows:
Algorithm 6:
Step 0. Select $\{\alpha_n\}$, $\{\beta_{n,i}\}$, $\lambda_1$, and $\mu$ such that Condition C holds. Choose $x_1 \in E$ and set $n = 1$.
Step 1. Construct the half-space
$$C_n = \{w \in E : g(x_n) + \langle w - x_n, \xi_n \rangle \leq 0\},$$
where $\xi_n \in \partial g(x_n)$, and compute
$$y_n = \Pi_{C_n}J^{-1}(Jx_n - \lambda_nAx_n).$$
If $x_n - y_n = 0$, then set $w_n = z_n = x_n$ and go to Step 4. Else, go to Step 2.
Step 2. Construct the half-space
$$T_n = \{w \in E : \langle w - y_n, Jx_n - \lambda_nAx_n - Jy_n \rangle \leq 0\},$$
and compute
$$w_n = \Pi_{T_n}J^{-1}(Jx_n - \lambda_nAy_n).$$
Step 3. Compute
$$z_n = J^{-1}(\alpha_nJx_1 + (1 - \alpha_n)Jw_n).$$
Step 4. Compute
$$x_{n+1} = J^{-1}\Big(\beta_{n,0}Jx_n + \sum_{i=1}^{\infty}\beta_{n,i}JS_iz_n\Big).$$
Step 5. Compute
$$\lambda_{n+1} = \begin{cases} \min\left\{\dfrac{\mu(\|x_n - y_n\|^2 + \|w_n - y_n\|^2)}{2\langle Ax_n - Ay_n, w_n - y_n \rangle}, \lambda_n\right\}, & \text{if } \langle Ax_n - Ay_n, w_n - y_n \rangle > 0, \\ \lambda_n, & \text{otherwise}. \end{cases}$$
Set $n := n + 1$ and return to Step 1.
Suppose that the solution set $\Omega = VI(C, A) \cap \bigcap_{i=1}^{\infty}F(S_i)$ is nonempty and the remaining conditions of Theorem 2 are satisfied. Then the sequence $\{x_n\}$ generated by Algorithm 6 converges strongly to $x = \Pi_\Omega x_1$.
Remark 5.
Corollary 1 improves and extends Theorem 1 [42] in the following senses:
(i) 
The projection onto the closed convex set C is replaced with a projection onto a half-space, which can easily be computed.
(ii) 
The step size is self-adaptive and independent of the Lipschitz constant of the monotone operator.
(iii) 
The result extends the fixed point problem from a single relatively nonexpansive mapping to an infinite family of relatively nonexpansive mappings.
The next result follows from Lemma 2 and Corollary 1.
Corollary 2.
Let $S_i : E \to E$, $i \in \mathbb{N}$, be an infinite family of generalized nonspreading mappings and $\{x_n\}$ be a sequence generated by Algorithm 7, as follows:
Algorithm 7:
Step 0. Select $\{\alpha_n\}$, $\{\beta_{n,i}\}$, $\lambda_1$, and $\mu$ such that Condition C holds. Choose $x_1 \in E$ and set $n = 1$.
Step 1. Construct the half-space
$$C_n = \{w \in E : g(x_n) + \langle w - x_n, \xi_n \rangle \leq 0\},$$
where $\xi_n \in \partial g(x_n)$, and compute
$$y_n = \Pi_{C_n}J^{-1}(Jx_n - \lambda_nAx_n).$$
If $x_n - y_n = 0$, then set $w_n = z_n = x_n$ and go to Step 4. Else, go to Step 2.
Step 2. Construct the half-space
$$T_n = \{w \in E : \langle w - y_n, Jx_n - \lambda_nAx_n - Jy_n \rangle \leq 0\},$$
and compute
$$w_n = \Pi_{T_n}J^{-1}(Jx_n - \lambda_nAy_n).$$
Step 3. Compute
$$z_n = J^{-1}(\alpha_nJx_1 + (1 - \alpha_n)Jw_n).$$
Step 4. Compute
$$x_{n+1} = J^{-1}\Big(\beta_{n,0}Jx_n + \sum_{i=1}^{\infty}\beta_{n,i}JS_iz_n\Big).$$
Step 5. Compute
$$\lambda_{n+1} = \begin{cases} \min\left\{\dfrac{\mu(\|x_n - y_n\|^2 + \|w_n - y_n\|^2)}{2\langle Ax_n - Ay_n, w_n - y_n \rangle}, \lambda_n\right\}, & \text{if } \langle Ax_n - Ay_n, w_n - y_n \rangle > 0, \\ \lambda_n, & \text{otherwise}. \end{cases}$$
Set $n := n + 1$ and return to Step 1.
Suppose that the solution set $\Omega = VI(C, A) \cap \bigcap_{i=1}^{\infty}F(S_i)$ is nonempty and the remaining conditions of Theorem 2 are satisfied. Then the sequence $\{x_n\}$ generated by Algorithm 7 converges strongly to $x = \Pi_\Omega x_1$.

4. Application

Constrained Convex Minimization and Fixed Point Problems

In this section, we present an application of our result to finding a common solution of the constrained convex minimization problem [59,60,61] and a fixed point problem in Banach spaces.
Let $E$ be a real Banach space and $C$ a nonempty closed convex subset of $E$. The constrained convex minimization problem consists in finding a point $x^* \in C$ such that
$$f(x^*) = \min_{x \in C}f(x), \qquad (17)$$
where $f$ is a real-valued convex function. Convex optimization theory is a powerful tool for solving many practical problems in operations research. In particular, it has been widely employed to solve practical minimization problems over complicated constraints [32,62], e.g., convex optimization problems with a fixed point constraint and a variational inequality constraint.
The lemma that follows will be required:
Lemma 18.
[63] Let $E$ be a real Banach space and let $C$ be a nonempty closed convex subset of $E$. Let $f$ be a convex function of $E$ into $\mathbb{R}$. If $f$ is Fréchet differentiable, then $z$ is a solution of Problem (17) if and only if $z \in VI(C, \nabla f)$.
By applying Theorem 2 and Lemma 18, we can approximate a common solution of the constrained convex minimization Problem (17) and fixed point problem for an infinite family of multivalued relatively nonexpansive mappings.
Theorem 4.
Let $E$ be a two-uniformly convex and uniformly smooth Banach space with 2-uniformly convexity constant $\frac{1}{c}$, and let $S_i : E \to CB(E)$, $i \in \mathbb{N}$, be an infinite family of multivalued relatively nonexpansive mappings. Let $f : E \to \mathbb{R}$ be a Fréchet differentiable convex function and suppose $\nabla f$ is $L$-Lipschitz continuous with $L > 0$. Assume that Problem (17) is consistent. Let $\{x_n\}$ be a sequence generated by Algorithm 8, as follows:
Algorithm 8:
Step 0. Select $\{\alpha_n\}$, $\{\beta_{n,i}\}$, $\lambda_1$, and $\mu$ such that Condition C holds. Choose $x_1 \in E$ and set $n = 1$.
Step 1. Construct the half-space
$$C_n = \{w \in E : g(x_n) + \langle w - x_n, \xi_n \rangle \leq 0\},$$
where $\xi_n \in \partial g(x_n)$, and compute
$$y_n = \Pi_{C_n}J^{-1}(Jx_n - \lambda_n\nabla f(x_n)).$$
If $x_n - y_n = 0$, then set $w_n = z_n = x_n$ and go to Step 4. Else, go to Step 2.
Step 2. Construct the half-space
$$T_n = \{w \in E : \langle w - y_n, Jx_n - \lambda_n\nabla f(x_n) - Jy_n \rangle \leq 0\},$$
and compute
$$w_n = \Pi_{T_n}J^{-1}(Jx_n - \lambda_n\nabla f(y_n)).$$
Step 3. Compute
$$z_n = J^{-1}(\alpha_nJx_1 + (1 - \alpha_n)Jw_n).$$
Step 4. Compute
$$x_{n+1} = J^{-1}\Big(\beta_{n,0}Jx_n + \sum_{i=1}^{\infty}\beta_{n,i}Ju_{n,i}\Big), \quad u_{n,i} \in S_iz_n.$$
Step 5. Compute
$$\lambda_{n+1} = \begin{cases} \min\left\{\dfrac{\mu(\|x_n - y_n\|^2 + \|w_n - y_n\|^2)}{2\langle \nabla f(x_n) - \nabla f(y_n), w_n - y_n \rangle}, \lambda_n\right\}, & \text{if } \langle \nabla f(x_n) - \nabla f(y_n), w_n - y_n \rangle > 0, \\ \lambda_n, & \text{otherwise}. \end{cases}$$
Set $n := n + 1$ and return to Step 1.
Suppose that the solution set $\Omega = VI(C, \nabla f) \cap \bigcap_{i=1}^{\infty}F(S_i)$ is nonempty and the remaining conditions of Theorem 2 are satisfied. Then the sequence $\{x_n\}$ generated by Algorithm 8 converges strongly to $x = \Pi_\Omega x_1$.
Proof. 
Since $f$ is convex, $\nabla f$ is monotone [63]. The result then follows by letting $A = \nabla f$ in Theorem 2 and applying Lemma 18.  □
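To illustrate the specialization, here is a minimal Python sketch of Algorithm 8 in $\mathbb{R}^n$ (so $J = I$ and generalized projections reduce to metric projections), with the fixed-point family taken trivially as identities, $\beta_{n,0} = \frac{1}{2}$, and $\alpha_n = \frac{1}{n+1}$ as in Section 5. It is a sketch under these simplifying assumptions, not the Banach-space algorithm itself.

```python
import numpy as np

def relaxed_halpern_sgrad(grad_f, g, subgrad_g, x1, lam1=0.7, mu=0.4,
                          max_iter=500):
    # grad_f      : gradient of the convex objective (plays the role of A)
    # g, subgrad_g: convex function with C = {x : g(x) <= 0} and a selection
    #               of its subdifferential (Condition A)
    x, lam = x1.astype(float), lam1

    def proj_halfspace(u, a, b):
        # projection onto {w : <a, w> <= b}; division only occurs when the
        # constraint is violated, which requires a != 0 for a convex g
        s = a @ u - b
        return u if s <= 0 else u - (s / (a @ a)) * a

    for n in range(1, max_iter + 1):
        alpha = 1.0 / (n + 1)                       # Halpern parameter (C1)
        xi = subgrad_g(x)
        # Step 1: relaxed projection onto C_n = {w : g(x) + <w - x, xi> <= 0}
        y = proj_halfspace(x - lam * grad_f(x), xi, xi @ x - g(x))
        if np.allclose(x, y):
            w = z = x
        else:
            # Step 2: projection onto the half-space T_n
            a = x - lam * grad_f(x) - y
            w = proj_halfspace(x - lam * grad_f(y), a, a @ y)
            # Step 3: Halpern step toward the anchor x1
            z = alpha * x1 + (1.0 - alpha) * w
            # Step 5: self-adaptive step size
            denom = 2.0 * (grad_f(x) - grad_f(y)) @ (w - y)
            if denom > 0:
                lam = min(mu * (np.linalg.norm(x - y) ** 2
                                + np.linalg.norm(w - y) ** 2) / denom, lam)
        # Step 4 with S_i = I collapses to averaging x_n and z_n
        x = 0.5 * x + 0.5 * z
    return x

# Usage sketch: project d onto the unit ball by minimizing f(x) = ||x - d||^2/2
# over C = {x : ||x||^2 - 1 <= 0}; the solution is d / ||d||.
d = np.array([2.0, 1.0])
xs = relaxed_halpern_sgrad(lambda x: x - d, lambda x: x @ x - 1.0,
                           lambda x: 2.0 * x, x1=np.zeros(2))
```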

5. Numerical Example

In this section, we present some numerical examples to demonstrate the efficiency of our methods, Algorithms 4 and 5, in comparison with Algorithm 3. All of the numerical computations were carried out using Matlab R2019b.
We choose $\beta_{n,0} = \frac{1}{2}$, $\beta_{n,i} = \frac{1}{2^{i+1}}$, $\lambda_1 = 0.7$, $\mu = 0.4$, and $\alpha_n = \frac{1}{n+1}$.
Example 1.
Consider the nonlinear operator $A : \mathbb{R}^2 \to \mathbb{R}^2$ defined by
$$A(x, y) = (x + y + \sin x, \; -x + y + \sin y),$$
and let the feasible set $C$ be the box $C = [-1, 1] \times [-1, 1]$. It can easily be verified that $A$ is monotone and Lipschitz continuous with constant $L = 3$. Let $B$ be the $2 \times 2$ matrix
$$B = \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}.$$
Now, we consider the mappings $S_i : \mathbb{R}^2 \to \mathbb{R}^2$ defined by $S_iz = \|B\|^{-1}Bz$ for all $i$, where $z = (x, y)^T$. It is easily verified that each $S_i$ is quasi-nonexpansive (note that in a Hilbert space a relatively nonexpansive mapping reduces to a quasi-nonexpansive mapping). The solution of the problem is $x = (0, 0)^T$. We test the algorithms for four different starting points using $\|x_{n+1} - x_n\| < \epsilon$ as stopping criterion, where $\epsilon = 10^{-9}$. The numerical results are reported in Figure 1 and Table 1.
Case I: $x_1 = (0, 1)$;
Case II: $x_1 = (1, 0)$;
Case III: $x_1 = (1, 1)$;
Case IV: $x_1 = (1, 2)$.
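For reproducibility, the problem data of Example 1 can be set up as follows (in Python rather than the authors' Matlab; the sign pattern in $A$ is the reconstruction under which monotonicity and $L = 3$ hold):

```python
import numpy as np

def A(v):
    # A(x, y) = (x + y + sin x, -x + y + sin y): monotone and 3-Lipschitz
    x, y = v
    return np.array([x + y + np.sin(x), -x + y + np.sin(y)])

B = np.diag([1.0, 2.0])
normB = np.linalg.norm(B, 2)       # spectral norm of B, equal to 2

def S(v):
    # S_i z = ||B||^{-1} B z for every i; quasi-nonexpansive, F(S_i) = {0}
    return B @ v / normB

def proj_C(v):
    # metric projection onto the box C = [-1, 1] x [-1, 1]
    return np.clip(v, -1.0, 1.0)
```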
Example 2.
Suppose that $E = L^2([0, 1])$ with inner product
$$\langle x, y \rangle := \int_0^1 x(t)y(t)\,dt, \quad \forall x, y \in E,$$
and induced norm
$$\|x\| := \left(\int_0^1 |x(t)|^2\,dt\right)^{1/2}, \quad \forall x \in E.$$
Let $C[0, 1]$ denote the space of continuous functions on the interval $[0, 1]$ and choose an arbitrary fixed $\varphi \in C[0, 1]$. Let $C := \{x \in E : \|\varphi x\| \leq 1\}$. It can easily be verified that $C$ is a nonempty closed convex subset of $E$. Define an operator $A : E \to E^*$ by
$$(Ax)(t) = \max(0, x(t)), \quad \forall x \in E.$$
Then $A$ is two-Lipschitz continuous and monotone on $E$ (see [64]). With these $C$ and $A$, the solution set of the VIP (1) is $VI(C, A) = \{0\}$. Define $g : E \to \mathbb{R}$ by
$$g(x) = \frac{1}{2}\left(\|\varphi x\|^2 - 1\right), \quad \forall x \in E;$$
then $g$ is a convex function and $C$ is a level set of $g$, i.e., $C = \{x \in E : g(x) \leq 0\}$. Moreover, $g$ is differentiable on $E$ with $\nabla g(x) = \varphi^2 x$ for all $x \in E$ (see [65]). In this numerical example, we choose $\varphi(t) = e^t$, $t \in [0, 1]$. Let $S_i : L^2([0, 1]) \to L^2([0, 1])$ be defined by
$$(S_ix)(t) = \int_0^1 t^i x(s)\,ds, \quad t \in [0, 1].$$
Observe that $F(S_i) \neq \emptyset$, since $0 \in F(S_i)$ for each $i \in \mathbb{N}$. Moreover, each $S_i$ is nonexpansive and, hence, quasi-nonexpansive. Therefore, the solution of the problem is $x(t) = 0$. We test the algorithms for three different starting points using $\|x_n - x\| < \epsilon$ as stopping criterion, where $\epsilon = 10^{-9}$. The numerical results are reported in Figure 2 and Table 2.
Case I: $x_1 = t^3 + 3t^2 - 2$;
Case II: $x_1 = \exp(2t)$;
Case III: $x_1 = 3\sin(2\pi t)$.
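A discretized version of this infinite-dimensional test problem can be sketched as follows (Python; the uniform grid of size m and the Riemann-sum quadrature are our assumptions, introduced only to make the example computable on a machine):

```python
import numpy as np

m = 200                                # grid size (assumption)
t = np.linspace(0.0, 1.0, m)
dt = 1.0 / m
phi = np.exp(t)                        # phi(t) = e^t

def A(x):
    # (A x)(t) = max(0, x(t)): monotone and Lipschitz on L^2([0, 1])
    return np.maximum(0.0, x)

def g(x):
    # g(x) = (||phi x||^2 - 1) / 2, so C = {x : g(x) <= 0} is a level set
    return 0.5 * (np.sum((phi * x) ** 2) * dt - 1.0)

def grad_g(x):
    # g is differentiable with gradient phi^2 x (pointwise product)
    return phi ** 2 * x

def S(i, x):
    # (S_i x)(t) = t^i * integral of x over [0, 1]; nonexpansive, 0 in F(S_i)
    return t ** i * np.sum(x) * dt
```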

6. Conclusions

In this paper, we studied a classical monotone and Lipschitz continuous variational inequality together with fixed point problems, defined on a level set of a convex function in the setting of Banach spaces. We proposed two iterative methods with self-adaptive step size that combine the Halpern method with a relaxed projection method for approximating a common solution of variational inequality and fixed point problems for an infinite family of relatively nonexpansive mappings in Banach spaces. The main advantage of our algorithms is that every projection onto the closed convex set is replaced with a projection onto a half-space, which guarantees easy implementation of the proposed methods. Moreover, the step size can be selected adaptively. We obtained strong convergence results for the proposed methods without knowledge of the Lipschitz constant of the monotone operator, and we applied our results to finding a common solution of constrained convex minimization and fixed point problems in Banach spaces. The obtained results improve and extend several existing results in the current literature in this direction.

Author Contributions

Conceptualization of the article was given by S.H.K., methodology by O.T.M., software by T.O.A., validation by S.H.K., O.T.M. and T.O.A., formal analysis, investigation, data curation, and writing–original draft preparation by T.O.A., resources by S.H.K., O.T.M. and T.O.A., writing–review and editing by S.H.K. and O.T.M., visualization by S.H.K. and O.T.M., project administration by O.T.M., Funding acquisition by S.H.K. All authors have read and agreed to the published version of the manuscript.

Acknowledgments

The authors sincerely thank the anonymous reviewers for their careful reading, constructive comments, and fruitful suggestions that substantially improved the manuscript. The third author is supported by the National Research Foundation (NRF) of South Africa Incentive Funding for Rated Researchers (Grant Number 119903). Opinions expressed and conclusions arrived at are those of the authors and are not necessarily to be attributed to the NRF.

Conflicts of Interest

The authors declare that they have no competing interests.

References

  1. Fichera, G. Sul problema elastostatico di Signorini con ambigue condizioni al contorno. Atti Accad. Naz. Lincei VIII Ser. Rend. Cl. Sci. Fis. Mat. Nat. 1963, 34, 138–142.
  2. Stampacchia, G. Formes bilineaires coercitives sur les ensembles convexes. C. R. Acad. Sci. Paris 1964, 258, 4413–4416.
  3. Abass, H.A.; Aremu, K.O.; Jolaoso, L.O.; Mewomo, O.T. An inertial forward-backward splitting method for approximating solutions of certain optimization problems. J. Nonlinear Funct. Anal. 2020.
  4. Gibali, A.; Reich, S.; Zalas, R. Outer approximation methods for solving variational inequalities in Hilbert space. Optimization 2017, 66, 417–437.
  5. Jolaoso, L.O.; Alakoya, T.O.; Taiwo, A.; Mewomo, O.T. Inertial extragradient method via viscosity approximation approach for solving Equilibrium problem in Hilbert space. Optimization 2020.
  6. Kassay, G.; Reich, S.; Sabach, S. Iterative methods for solving systems of variational inequalities in reflexive Banach spaces. SIAM J. Optim. 2011, 21, 1319–1344.
  7. Mewomo, O.T.; Ogbuisi, F.U. Convergence analysis of an iterative method for solving multiple-set split feasibility problems in certain Banach spaces. Quaest. Math. 2018, 41, 129–148.
  8. Jolaoso, L.O.; Alakoya, T.; Taiwo, A.; Mewomo, O.T. A parallel combination extragradient method with Armijo line searching for finding common solution of finite families of equilibrium and fixed point problems. Rend. del Circ. Mat. di Palermo Ser. 2 2019.
  9. Abbas, M.; Ibrahim, Y.; Khan, A.R.; De la Sen, M. Strong Convergence of a System of Generalized Mixed Equilibrium Problem, Split Variational Inclusion Problem and Fixed Point Problem in Banach Spaces. Symmetry 2019, 11, 722.
  10. Alakoya, T.O.; Jolaoso, L.O.; Mewomo, O.T. A general iterative method for finding common fixed point of finite family of demicontractive mappings with accretive variational inequality problems in Banach spaces. Nonlinear Stud. 2020, 27, 1–24.
  11. Censor, Y.; Gibali, A.; Reich, S. Extensions of Korpelevich extragradient method for the variational inequality problem in Euclidean space. Optimization 2012, 61, 1119–1132.
  12. Jolaoso, L.O.; Ogbuisi, F.U.; Mewomo, O.T. An iterative method for solving minimization, variational inequality and fixed point problems in reflexive Banach spaces. Adv. Pure Appl. Math. 2017, 9, 167–184.
  13. Jolaoso, L.O.; Taiwo, A.; Alakoya, T.O.; Mewomo, O.T. A self adaptive inertial subgradient extragradient algorithm for variational inequality and common fixed point of multivalued mappings in Hilbert spaces. Demonstr. Math. 2019, 52, 183–203.
  14. Jolaoso, L.O.; Taiwo, A.; Alakoya, T.O.; Mewomo, O.T. A unified algorithm for solving variational inequality and fixed point problems with application to the split equality problem. Comput. Appl. Math. 2019.
  15. Jolaoso, L.O.; Taiwo, A.; Alakoya, T.O.; Mewomo, O.T. Strong convergence theorem for solving pseudo-monotone variational inequality problem using projection method in a reflexive Banach space. J. Optim. Theory Appl. 2020.
  16. Kazmi, K.R.; Rizvi, S.H. An iterative method for split variational inclusion problem and fixed point problem for a nonexpansive mapping. Optim. Lett. 2014, 8, 1113–1124.
  17. Ogwo, G.N.; Izuchukwu, C.; Aremu, K.O.; Mewomo, O.T. A viscosity iterative algorithm for a family of monotone inclusion problems in an Hadamard space. Bull. Belg. Math. Soc. Simon Stevin 2020, 27, 125–152.
  18. Oyewole, O.K.; Abass, H.A.; Mewomo, O.T. A Strong convergence algorithm for a fixed point constrainted split null point problem. Rend. Circ. Mat. Palermo II 2020.
  19. Taiwo, A.; Jolaoso, L.O.; Mewomo, O.T. General alternative regularization method for solving split equality common fixed point problem for quasi-pseudocontractive mappings in Hilbert spaces. Ric. Mat. 2019.
  20. Taiwo, A.; Jolaoso, L.O.; Mewomo, O.T. Viscosity approximation method for solving the multiple-set split equality common fixed-point problems for quasi-pseudocontractive mappings in Hilbert Spaces. J. Ind. Manag. Optim. 2020.
  21. Xu, Y. Ishikawa and Mann iterative processes with errors for nonlinear strongly accretive operator equations. J. Math. Anal. Appl. 1998, 224, 91–101.
  22. Korpelevich, G.M. An extragradient method for finding saddle points and other problems. Ekon. Mat. Metody 1976, 12, 747–756.
  23. Apostol, R.Y.; Grynenko, A.A.; Semenov, V.V. Iterative algorithms for monotone bilevel variational inequalities. J. Comput. Appl. Math. 2012, 107, 3–14.
  24. Ceng, L.C.; Hadjisavas, N.; Weng, N.C. Strong convergence theorems by a hybrid extragradient-like approximation method for variational inequalities and fixed point problems. J. Glob. Optim. 2010, 46, 635–646.
  25. Censor, Y.; Gibali, A.; Reich, S. The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 148, 318–335.
  26. Bauschke, H.H.; Combettes, P.L. A weak-to-strong convergence principle for Fejer-monotone methods in Hilbert spaces. Math. Oper. Res. 2001, 26, 248–264.
  27. Okeke, G.A.; Abbas, M.; De La Sen, M. Approximation of the Fixed Point of Multivalued Quasi-Nonexpansive Mappings via a Faster Iterative Process with Applications. Discret. Dyn. Nat. Soc. 2020.
  28. Wang, Y.; Fang, X.; Guan, J.L.; Kim, T.H. On split null point and common fixed point problems for multivalued demicontractive mappings. Optimization 2020, 1–20.
  29. Aoyama, K.; Kimura, Y.; Takahashi, W.; Toyoda, M. Approximation of common fixed points of a countable family of nonexpansive mappings in a Banach space. Nonlinear Anal. 2007, 67, 2350–2360.
  30. Shimoji, K.; Takahashi, W. Strong convergence to common fixed points of infinite nonexpansive mappings and applications. Taiwan J. Math. 2001, 5, 387–404.
  31. Bauschke, H.H.; Borwein, J.M. On projection algorithms for solving convex feasibility problems. SIAM Rev. 1996, 38, 367–426.
  32. Combettes, P.L. A block-iterative surrogate constraint splitting method for quadratic signal recovery. IEEE Trans. Signal Process. 2003, 51, 1771–1782.
  33. Bauschke, H.H. The approximation of fixed points of compositions of nonexpansive mappings in Hilbert space. J. Math. Anal. Appl. 1996, 202, 150–159.
  34. Youla, D.C. Mathematical theory of image restoration by the method of convex projections. In Image Recovery: Theory and Applications; Stark, H., Ed.; Academic Press: New York, NY, USA, 1987.
  35. Iusem, A.N.; De Pierro, A.R. On the convergence of Han's method for convex programming with quadratic objective. Math. Program. Ser. B 1991, 52, 265–284.
  36. Iiduka, H. Acceleration method for convex optimization over the fixed point set of a nonexpansive mappings. Math. Prog. Ser. A 2015, 149, 131–165.
  37. Iiduka, H. Fixed point optimization algorithm and its application to network bandwidth allocation. J. Comput. Appl. Math. 2012, 236, 1733–1742.
  38. Luo, C.; Ji, H.; Li, Y. Utility-based multi-service bandwidth allocation in the 4G heterogeneous wireless networks. IEEE Wirel. Commun. Netw. Conf. 2009.
  39. Maingé, P.E. A hybrid extragradient-viscosity method for monotone operators and fixed point problems. SIAM J. Control Optim. 2008, 47, 1499–1515.
  40. Zhang, H.; Xu, Y.; Zhang, J. Reproducing Kernel Banach Spaces for Machine Learning. J. Mach. Learn. Res. 2009, 10.
  41. Der, R.; Lee, D. Large-Margin Classification in Banach Spaces. Available online: http://proceedings.mlr.press/v2/der07a/der07a.pdf (accessed on 6 August 2020).
  42. Liu, Y. Strong convergence of the Halpern subgradient extragradient method for solving variational inequalities in Banach spaces. J. Nonlinear Sci. Appl. 2017, 10, 395–409.
  43. Cioranescu, I. Geometry of Banach Spaces, Duality Mappings and Nonlinear Problems; Springer Science & Business Media: Berlin, Germany, 2012.
  44. Taiwo, A.; Alakoya, T.O.; Mewomo, O.T. Halpern-type iterative process for solving split common fixed point and monotone variational inclusion problem between Banach spaces. Numer. Algorithms 2020.
  45. Alber, Y.; Ryazantseva, I. Nonlinear Ill-Posed Problems of Monotone Type; Springer: London, UK, 2006.
  46. Reich, S. A weak convergence theorem for the alternating method with Bregman distances. In Theory and Applications of Nonlinear Operators of Accretive and Monotone Type; Kartsatos, A.G., Ed.; Marcel Dekker: New York, NY, USA, 1996; pp. 313–318.
  47. Kohsaka, F.; Takahashi, W. Existence and approximation of fixed points of firmly nonexpansive-type mappings in Banach spaces. SIAM J. Optim. 2008, 19, 824–835.
  48. Hsu, M.H.; Takahashi, W.; Yao, J.C. Generalized hybrid mappings in Hilbert spaces and Banach spaces. Taiwan. J. Math. 2012, 16, 129–149.
  49. Homaeipour, S.; Razani, A. Weak and strong convergence theorems for relatively nonexpansive multi-valued mappings in Banach spaces. Fixed Point Theory Appl. 2011, 73.
  50. Kamimura, S.; Takahashi, W. Strong convergence of a proximal-type algorithm in a Banach space. SIAM J. Optim. 2003, 13, 938–945.
  51. Xu, H.K. Strong convergence of approximating fixed point sequences for nonexpansive mappings. Bull. Aust. Math. Soc. 2006, 74, 143–151.
  52. Iiduka, H.; Takahashi, W. Weak convergence of a projection algorithm for variational inequalities in a Banach space. J. Math. Anal. Appl. 2008, 339, 668–679.
  53. Matsushita, S.Y.; Takahashi, W. A strong convergence theorem for relatively nonexpansive mappings in a Banach space. J. Approx. Theory 2005, 134, 257–266.
  54. Nakajo, K. Strong convergence for gradient projection method and relatively nonexpansive mappings in Banach spaces. Appl. Math. Comput. 2015, 271, 251–258.
  55. Zǎlinescu, C. On uniformly convex functions. J. Math. Anal. Appl. 1983, 95, 344–374.
  56. Chang, S.S.; Kim, J.K.; Wang, X.R. Modified block iterative algorithm for solving convex feasibility problems in Banach spaces. J. Inequal. Appl. 2010, 869684.
  57. Alakoya, T.O.; Jolaoso, L.O.; Mewomo, O.T. Modified inertial subgradient extragradient method with self adaptive stepsize for solving monotone variational inequality and fixed point problems. Optimization 2020.
  58. Ma, F. A subgradient extragradient algorithm for solving monotone variational inequalities in Banach spaces. J. Inequal. Appl. 2020, 26.
  59. Aremu, K.O.; Abass, H.A.; Izuchukwu, C.; Mewomo, O.T. A viscosity-type algorithm for an infinitely countable family of (f,g)-generalized k-strictly pseudononspreading mappings in CAT(0) spaces. Analysis 2020, 40, 19–37.
  60. Aremu, K.O.; Izuchukwu, C.; Ogwo, G.N.; Mewomo, O.T. Multi-step iterative algorithm for minimization and fixed point problems in p-uniformly convex metric spaces. J. Ind. Manag. Optim. 2020.
  61. Ceng, L.C.; Ansari, Q.H.; Yao, J.C. Some iterative methods for finding fixed points and for solving constrained convex minimization problems. Nonlinear Anal. 2011, 74, 5286–5302.
  62. Panyanak, B. Ishikawa iteration processes for multi-valued mappings in Banach Spaces. Comput. Math. Appl. 2007, 54, 872–877.
  63. Tian, M.; Jiang, B. Inertial Haugazeau's hybrid subgradient extragradient algorithm for variational inequality problems in Banach spaces. Optimization 2020.
  64. Hieu, D.V.; Anh, P.K.; Muu, L.D. Modified hybrid projection methods for finding common solutions to variational inequality problems. Comput. Optim. Appl. 2017, 66, 75–96.
  65. He, S.; Dong, Q.; Tian, H. Relaxed projection and contraction methods for solving Lipschitz continuous monotone variational inequalities. Rev. Real Acad. Cienc. Exatc. Fis. Nat. Ser. A Mat. 2019, 113, 2773–2791.
Figure 1. Top: Case I; Next: Case II; Next: Case III; Bottom: Case IV.
Figure 2. Top: Case I; Middle: Case II; Bottom: Case III.
Table 1. Numerical results for Example 1.

                           Algorithm 3   Algorithm 4
Case I      CPU Time (s)   0.0243        0.0241
            No. of Iter.   65            32
Case II     CPU Time (s)   0.0120        0.0171
            No. of Iter.   66            31
Case III    CPU Time (s)   0.0117        0.0153
            No. of Iter.   67            31
Case IV     CPU Time (s)   0.1770        0.1990
            No. of Iter.   68            31
Table 2. Numerical results for Example 2.

                           Algorithm 3   Algorithm 4   Algorithm 5
Case I      CPU Time (s)   1.9220        1.9979        3.8269
            No. of Iter.   30            18            31
Case II     CPU Time (s)   5.9673        8.1367        13.9678
            No. of Iter.   29            17            30
Case III    CPU Time (s)   2.4725        3.6332        5.8903
            No. of Iter.   31            18            32
