Article

Convergence Analysis of an Inexact Three-Operator Splitting Algorithm

1 Department of Mathematics, Nanchang University, Nanchang 330031, China
2 School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 611731, China
3 Department of Mathematics Education, Gyeongsang National University, Jinju 52828, Korea
* Author to whom correspondence should be addressed.
Submission received: 12 October 2018 / Revised: 20 October 2018 / Accepted: 28 October 2018 / Published: 1 November 2018
(This article belongs to the Special Issue Fixed Point Theory and Fractional Calculus with Applications)

Abstract:
The three-operator splitting algorithm is a new splitting algorithm for solving monotone inclusion problems given by the sum of three maximally monotone operators, one of which is cocoercive. Since the resolvent operators involved may not be available in closed form, in this paper we introduce an inexact three-operator splitting algorithm to solve this type of monotone inclusion problem. The theoretical convergence properties of the proposed iterative algorithm are studied in general Hilbert spaces under mild conditions on the iterative parameters. As corollaries, we obtain general convergence results for the inexact forward-backward splitting algorithm and the inexact Douglas-Rachford splitting algorithm, which extend existing results in the literature.
AMS Subject Classification:
47H05; 65K05; 65K15; 90C25

1. Introduction

Operator splitting algorithms have been widely applied to various convex optimization problems in signal and image processing, medical image reconstruction, machine learning and other fields. In particular, the forward-backward splitting algorithm [1], the Douglas-Rachford splitting algorithm [2,3] and Tseng's forward-backward-forward splitting algorithm [4] are classical operator splitting algorithms for solving monotone inclusion problems. Many popular optimization algorithms can be derived from them, such as the proximal gradient algorithm [5,6], the primal-dual fixed point algorithm [7,8], the alternating direction method of multipliers (ADMM) [9,10,11], the primal-dual splitting algorithm [12,13,14,15,16] and many others (see [17,18,19] for a comprehensive review). Recently, to solve large-scale optimization problems, new operator splitting algorithms have been proposed, including the generalized forward-backward splitting algorithm [20,21], the variable metric forward-backward splitting algorithm [22], the asymmetric forward-backward-adjoint splitting algorithm [23] and the inertial forward-backward splitting algorithm [24].
In particular, Davis and Yin [25] proposed a three-operator splitting algorithm to solve the following monotone inclusion problem:

$$\text{Find } x \in H \text{ such that } 0 \in Ax + Bx + Cx, \tag{1}$$
where $H$ is a real Hilbert space, $A: H \to 2^H$ and $B: H \to 2^H$ are maximally monotone operators, and $C: H \to H$ is a cocoercive operator. They pointed out that the three-operator splitting algorithm includes not only the forward-backward splitting algorithm [26] and the Douglas-Rachford splitting algorithm [27], but also the forward-Douglas-Rachford splitting algorithm [28]. They also proved convergence theorems for the three-operator splitting algorithm under mild conditions on the parameters and studied the convergence rates of the iterative algorithm. It is worth mentioning that Raguet et al. [20] introduced a generalized forward-backward splitting (GFBS) algorithm for solving monotone inclusions given by the sum of a finite family of maximally monotone operators and a cocoercive operator. It is known that the sum of a finite family of maximally monotone operators can be represented as the sum of two maximally monotone operators in a suitable product space, one of which is the normal cone of a closed vector subspace (see the sketch below). Therefore, the GFBS algorithm can be recovered from the three-operator splitting algorithm. Conversely, it is not clear how to deduce the three-operator splitting algorithm from the GFBS algorithm. Moreover, a preconditioned extension of the GFBS algorithm was developed by Raguet and Landrieu [21].
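As a rough illustration of this product-space construction (the notation $\mathcal{H}$, $\mathbf{A}$, $\mathbf{C}$, $V$ below is ours and is not taken from the source), consider the inclusion $0 \in \sum_{i=1}^{n} A_i x + Cx$. Working in $\mathcal{H} := H^n$ with the closed vector subspace $V := \{(x, \ldots, x) : x \in H\}$, and setting $\mathbf{A}(x_1, \ldots, x_n) := A_1 x_1 \times \cdots \times A_n x_n$ and $\mathbf{C}(x_1, \ldots, x_n) := (\tfrac{1}{n}Cx_1, \ldots, \tfrac{1}{n}Cx_n)$, one can check that

$$0 \in \sum_{i=1}^{n} A_i x + Cx \quad \Longleftrightarrow \quad 0 \in \mathbf{A}\mathbf{x} + N_V(\mathbf{x}) + \mathbf{C}\mathbf{x}, \qquad \mathbf{x} = (x, \ldots, x) \in V,$$

where $N_V$ is the normal cone of $V$, which is maximally monotone. This is the sense in which the sum of finitely many maximally monotone operators fits the three-operator inclusion (1).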
On the other hand, it is worth mentioning that Vũ [15] introduced a primal-dual splitting algorithm to solve the monotone inclusion with the sum of mixtures of maximally monotone operators, parallel sums of maximally monotone operators with linear operators, and cocoercive operators. Although this general monotone inclusion includes the three-operator monotone inclusion (1) studied in Davis and Yin [25], the obtained primal-dual splitting algorithm introduced additional dual variables. Therefore, the primal-dual splitting algorithm is different from the three-operator splitting algorithm.
The convex optimization problem corresponding to the three-operator inclusion problem (1) is

$$\min_{x \in H} f(x) + g(x) + h(x), \tag{2}$$

where $g, h: H \to (-\infty, +\infty]$ are proper, lower semicontinuous convex functions and $f: H \to \mathbb{R}$ is a convex, continuously differentiable function whose gradient $\nabla f$ is $L$-Lipschitz continuous for some $L \in (0, +\infty)$. Under the assumption that the proximity operators of $g$ and $h$ have explicit closed-form expressions, the three-operator splitting algorithm [25] can be applied directly to solve the convex minimization problem (2) by letting $A = \partial g$, $B = \partial h$ and $C = \nabla f$, where $\partial g$ and $\partial h$ denote the subdifferentials of $g$ and $h$, respectively. The convex optimization problem (2) includes many real problems arising in signal and image processing, material sciences, medical image reconstruction, etc. See, for example, [6,29,30,31].
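As a brief illustration of such closed-form proximity operators (this example is ours, not from the source), recall that the proximity operator of $\gamma g$ coincides with the resolvent $J_{\gamma \partial g}$ and, for the $\ell_1$-norm, reduces to componentwise soft-thresholding:

$$\operatorname{prox}_{\gamma g}(z) = \arg\min_{x \in H}\Big\{ g(x) + \tfrac{1}{2\gamma}\|x - z\|^2 \Big\}, \qquad \big(\operatorname{prox}_{\gamma\|\cdot\|_1}(z)\big)_i = \operatorname{sign}(z_i)\max\{|z_i| - \gamma,\, 0\}.$$

Similarly, when $h$ is the indicator function of a closed convex set, $\operatorname{prox}_{\gamma h}$ is the metric projection onto that set, independent of $\gamma$.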
Since the three-operator splitting algorithm [25] is a new algorithm, there exist relatively few works directly related to it. Cevher et al. [32] extended the three-operator splitting algorithm [25] from the deterministic setting to the stochastic setting for solving the monotone inclusion problem (1). Further, Yurtsever et al. [33] proposed a stochastic three-composite minimization algorithm (S3CM) for solving the convex minimization problem of the sum of three convex functions (2). Besides, Pedregosa and Gidel [34] proposed a novel adaptive three-operator splitting algorithm, which chooses the step size without knowing the Lipschitz constant of the gradient operator. However, none of these works considered errors in the computation of the proximity operators or the gradient operator.
As mentioned before, operator splitting algorithms provide a simple way to construct effective iterative algorithms for solving many structured convex optimization problems such as (2). It suffices that the proximity operator of the corresponding function has an explicit closed-form expression. However, there are also many convex functions whose proximity operators do not have a closed form. Although the exact proximity operator of such functions is difficult to compute, it can often be evaluated approximately within a prescribed error tolerance. In general, an operator splitting algorithm that allows the proximity operator to be computed with errors is called an inexact operator splitting algorithm. Such algorithms were developed alongside the exact operator splitting algorithms. We briefly review some existing works on inexact operator splitting schemes. The well-known inexact proximal point algorithm was first studied by Rockafellar [35]. In [27], Eckstein and Bertsekas proposed a relaxed inexact proximal point algorithm and a relaxed inexact Douglas-Rachford splitting algorithm.
In the context of convex minimization problems, Combettes and Wajs [26] introduced an inexact forward-backward splitting algorithm and analyzed its convergence based on the fixed-point-theoretic framework of [36]. He and Yuan [37] and Salzo and Villa [38] studied accelerated inexact proximal point algorithms ($f = 0$ and $g = 0$ in (2)). Subsequently, an accelerated inexact forward-backward splitting algorithm was proposed by Villa et al. [39], who not only proved that the objective function values have a convergence rate of $1/k^2$ when the allowable errors are of a certain type, but also presented a global iteration-complexity analysis. Schmidt et al. [40] analyzed the convergence rates of an accelerated proximal-gradient algorithm with an inexact proximity operator. Inspired by a special case of the hybrid proximal outer gradient method of Solodov and Svaiter [41], Eckstein and Yao [42] derived an inexact Douglas-Rachford operator splitting algorithm and an inexact ADMM algorithm. Recently, this line of work was further developed by Alves and Geremia [43], who established the iteration complexity of the inexact Douglas-Rachford algorithm (for more results on other inexact operator splitting algorithms, see [43,44,45,46,47,48,49,50,51,52]).
The purpose of this paper is to introduce an inexact three-operator splitting algorithm (Algorithm 1) to solve the monotone inclusion (1). The corresponding resolvent operators and the cocoercive operator are allowed to be computed with errors. Under mild conditions on the parameters and errors, we investigate the convergence behavior of the inexact three-operator splitting algorithm. Furthermore, we recover the inexact forward-backward splitting algorithm and the inexact Douglas-Rachford splitting algorithm as corollaries.
The rest of this paper is organized as follows. In Section 2, we review some background on monotone operators and convex analysis. In Section 3, we present the inexact three-operator splitting algorithm and its convergence theorem. Finally, we give some conclusions and discuss future work.
Algorithm 1: An inexact three-operator splitting algorithm
Input: an arbitrary $z^0 \in H$; choose $\gamma$ and $\lambda_k$.
For each $k = 0, 1, 2, \ldots$, compute
1: $x_B^k = J_{\gamma B}(z^k) + e_B^k$;
2: $x_A^k = J_{\gamma A}\big(2x_B^k - z^k - \gamma(Cx_B^k + e_C^k)\big) + e_A^k$;
3: $z^{k+1} = z^k + \lambda_k(x_A^k - x_B^k)$.
Stop when a given stopping criterion is met.
Output: $x_B^k$, $x_A^k$ and $z^{k+1}$.
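For concreteness, the following Python sketch shows one way Algorithm 1 could be implemented; it is an illustration under our own assumptions (the toy problem, function names and error model below are not from the source), with the resolvents supplied as proximity operators.

```python
import numpy as np

def inexact_three_operator_splitting(J_gamma_A, J_gamma_B, C, z0, gamma,
                                     lam=1.0, n_iter=200, error_model=None):
    """Sketch of Algorithm 1.

    J_gamma_A, J_gamma_B: resolvents of gamma*A and gamma*B (prox operators);
    C: the cocoercive operator (e.g., a gradient);
    error_model(k): returns the error triple (e_B, e_A, e_C) at iteration k.
    """
    z = z0.copy()
    for k in range(n_iter):
        e_B, e_A, e_C = error_model(k) if error_model else (0.0, 0.0, 0.0)
        x_B = J_gamma_B(z) + e_B                                       # step 1
        x_A = J_gamma_A(2 * x_B - z - gamma * (C(x_B) + e_C)) + e_A    # step 2
        z = z + lam * (x_A - x_B)                                      # step 3
    return x_B, x_A, z

# Toy instance of (2): f(x) = 0.5*||x - b||^2, g = ||.||_1, h = indicator of [0, 1]^n,
# so C = grad f (1-cocoercive), J_gamma_A = soft-thresholding, J_gamma_B = box projection.
b = np.array([1.5, -0.3, 0.8])
gamma = 0.5
x_B, x_A, z = inexact_three_operator_splitting(
    J_gamma_A=lambda x: np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0),
    J_gamma_B=lambda x: np.clip(x, 0.0, 1.0),
    C=lambda x: x - b,
    z0=np.zeros(3), gamma=gamma,
    error_model=lambda k: (1e-3 / (k + 1) ** 2,) * 3)   # summable errors
```

The summable error sequence mirrors the conditions imposed in Theorem 1 below; in an exact implementation one would simply pass error_model=None.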

2. Preliminaries

In this paper, let $H$ be a real Hilbert space. The inner product and the associated norm of $H$ are denoted by $\langle \cdot, \cdot \rangle$ and $\|\cdot\|$, respectively. Let $\Gamma_0(H)$ denote the class of proper, lower semicontinuous convex functions from $H$ to $(-\infty, +\infty]$. Let $\mathrm{Fix}(T)$ denote the set of fixed points of an operator $T$. We use the symbols $\rightharpoonup$ and $\to$ to denote weak and strong convergence, respectively.
The following definitions and properties are mostly found in [53].
Definition 1. 
(Zeros, domain, range, graph and resolvent) Let $A: H \to 2^H$ be a set-valued operator, where $2^H$ denotes the power set of $H$. Let $I$ be the identity operator on $H$. Then,
(1) The set of zeros of $A$ is $\mathrm{zer}\,A := \{x \in H : 0 \in Ax\}$;
(2) The domain of $A$ is $\mathrm{dom}\,A := \{x \in H : Ax \neq \emptyset\}$;
(3) The range of $A$ is $\mathrm{ran}\,A := \{y \in H : (\exists\, x \in H)\ y \in Ax\}$;
(4) The graph of $A$ is $\mathrm{gra}\,A := \{(x, y) \in H \times H : y \in Ax\}$;
(5) The resolvent of $A$ is $J_A = (I + A)^{-1}$.
Definition 2.
(Maximally monotone operator) Let $A: H \to 2^H$ be a set-valued operator. Then $A$ is said to be monotone if

$$\langle x - y, u - v \rangle \geq 0, \quad \forall (x, u) \in \mathrm{gra}\,A,\ (y, v) \in \mathrm{gra}\,A.$$

$A$ is said to be maximally monotone if there exists no monotone operator $B: H \to 2^H$ such that the graph of $B$ properly contains $\mathrm{gra}\,A$.
Definition 3.
(Nonexpansive and $\alpha$-averaged) Let $T: H \to H$ be an operator. $T$ is said to be nonexpansive if

$$\|Tx - Ty\| \leq \|x - y\|, \quad \forall x, y \in H.$$

Let $\alpha \in (0, 1)$. $T$ is said to be $\alpha$-averaged if there exists a nonexpansive operator $R$ such that $T = (1 - \alpha)\mathrm{Id} + \alpha R$. If $\alpha = \frac{1}{2}$, then $T$ is called a firmly nonexpansive operator.
The following two lemmas give some useful characterizations of firmly nonexpansive operators and α -averaged operators.
Lemma 1.
Let $T: H \to H$ be an operator. Then the following statements are equivalent:
(1) $T$ is firmly nonexpansive.
(2) $2T - \mathrm{Id}$ is nonexpansive.
(3) For all $x, y \in H$, $\|Tx - Ty\|^2 \leq \langle Tx - Ty, x - y \rangle$.
Lemma 2.
Let $T: H \to H$ be nonexpansive and let $\alpha \in (0, 1)$. Then the following are equivalent:
(1) $T$ is $\alpha$-averaged.
(2) $\big(1 - \frac{1}{\alpha}\big)\mathrm{Id} + \frac{1}{\alpha}T$ is nonexpansive.
(3) For all $x, y \in H$, $\|Tx - Ty\|^2 \leq \|x - y\|^2 - \frac{1 - \alpha}{\alpha}\|(\mathrm{Id} - T)x - (\mathrm{Id} - T)y\|^2$.
Lemma 3.
Let $\gamma \in (0, +\infty)$ and let $A: H \to 2^H$ be a maximally monotone operator. Then $J_{\gamma A}: H \to H$ and $\mathrm{Id} - J_{\gamma A}: H \to H$ are firmly nonexpansive and maximally monotone.
Definition 4.
(Cocoercive operator) An operator $B: H \to H$ is said to be $\beta$-cocoercive with $\beta \in (0, +\infty)$ if

$$\beta\|Bx - By\|^2 \leq \langle Bx - By, x - y \rangle, \quad \forall x, y \in H.$$

A cocoercive operator is also called an inverse strongly monotone operator (see, for example, [54]).
It is easy to see that a $\beta$-cocoercive operator is $\frac{1}{\beta}$-Lipschitz continuous.
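For completeness, this follows from the Cauchy-Schwarz inequality applied to the cocoercivity estimate:

$$\beta\|Bx - By\|^2 \leq \langle Bx - By, x - y \rangle \leq \|Bx - By\|\,\|x - y\| \quad \Longrightarrow \quad \|Bx - By\| \leq \tfrac{1}{\beta}\|x - y\|.$$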
Let us recall the definition of a uniformly monotone operator.
Definition 5.
(Uniformly monotone operator) A set-valued operator $A: H \to 2^H$ is said to be uniformly monotone with modulus $\phi: [0, +\infty) \to [0, +\infty]$ if $\phi$ is a nondecreasing function with $\phi(0) = 0$ such that

$$\langle u - v, x - y \rangle \geq \phi(\|x - y\|), \quad \forall (x, u) \in \mathrm{gra}\,A,\ (y, v) \in \mathrm{gra}\,A.$$

If $\phi = \beta(\cdot)^2$ for some $\beta > 0$, then $A$ is said to be $\beta$-strongly monotone.
Definition 6.
(Demiregular) A set-valued operator $A: H \to 2^H$ is said to be demiregular at $x \in \mathrm{dom}\,A$ if, for every $u \in Ax$ and every sequence $(x^k, u^k) \in \mathrm{gra}\,A$ with $x^k \rightharpoonup x$ and $u^k \to u$, we have $x^k \to x$.
We shall make full use of the following lemma to prove the weak convergence of the iterative sequence.
Lemma 4.
Let $\{x_n\}$ be a sequence in $H$. Then $\{x_n\}$ converges weakly to a point $x \in H$ if and only if it is bounded and has a unique weak sequential cluster point.
The following lemma will be needed in the next section.
Lemma 5.
Let $x, y \in H$ and $\lambda \in \mathbb{R}$. Then we have

$$\|\lambda x + (1 - \lambda)y\|^2 + \lambda(1 - \lambda)\|x - y\|^2 = \lambda\|x\|^2 + (1 - \lambda)\|y\|^2.$$
We recall the following lemma, which is crucial to our convergence analysis (See Lemma 5.1 of Combettes [36]):
Lemma 6.
(Inexact Krasnosel'skiĭ-Mann algorithm) Let $T: H \to H$ be nonexpansive, let $\{\lambda_n\}$ be a sequence in $(0, 1)$ and let $\{e_n\}$ be a sequence in $H$. Suppose that $\mathrm{Fix}(T) \neq \emptyset$, $\sum_{n=0}^{+\infty} \lambda_n(1 - \lambda_n) = +\infty$ and $\sum_{n=0}^{+\infty} \lambda_n\|e_n\| < +\infty$. Let $x_0 \in H$ and set

$$x_{n+1} = x_n + \lambda_n(Tx_n + e_n - x_n), \quad n \geq 0. \tag{8}$$

Then the following hold:
(1) $\{x_n\}$ is Fejér monotone with respect to $\mathrm{Fix}(T)$.
(2) $\{Tx_n - x_n\}$ converges strongly to $0$.
(3) $\{x_n\}$ converges weakly to a point in $\mathrm{Fix}(T)$.
When $e_n \equiv 0$, the inexact Krasnosel'skiĭ-Mann algorithm (iKM) (8) reduces to the classical Krasnosel'skiĭ-Mann algorithm (KM). The (KM) and (iKM) algorithms are useful for finding fixed points of nonexpansive mappings. They provide a unified way to analyze the convergence of various operator splitting algorithms and convex optimization algorithms.
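To make the role of iteration (8) concrete, here is a small Python sketch (our own illustration; the operator and error sequence below are hypothetical choices): a nonexpansive projection onto the unit ball is iterated with relaxation parameters in $(0, 1)$ and summable errors.

```python
import numpy as np

def inexact_km(T, x0, lambdas, errors):
    """Inexact Krasnosel'skii-Mann iteration (8):
    x_{n+1} = x_n + lambda_n * (T(x_n) + e_n - x_n)."""
    x = x0
    for lam, e in zip(lambdas, errors):
        x = x + lam * (T(x) + e - x)
    return x

# T = projection onto the closed unit ball (nonexpansive, Fix(T) = unit ball),
# lambda_n = 0.5 and geometrically decaying (hence summable) errors.
T = lambda x: x / max(1.0, np.linalg.norm(x))
d = np.array([1e-2, -1e-2])
x = inexact_km(T, x0=np.array([3.0, 4.0]),
               lambdas=[0.5] * 100,
               errors=[0.9 ** n * d for n in range(100)])
```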
Lemma 6 could be easily extended to the setting of α -averaged operators.
Lemma 7.
Let $T: H \to H$ be $\alpha$-averaged, let $\{\lambda_n\}$ be a sequence in $(0, \frac{1}{\alpha})$ and let $\{e_n\}$ be a sequence in $H$. Suppose that $\mathrm{Fix}(T) \neq \emptyset$, $\sum_{n=0}^{+\infty} \lambda_n(\frac{1}{\alpha} - \lambda_n) = +\infty$ and $\sum_{n=0}^{+\infty} \lambda_n\|e_n\| < +\infty$. Let $x_0 \in H$ and set

$$x_{n+1} = x_n + \lambda_n(Tx_n + e_n - x_n), \quad n \geq 0. \tag{9}$$

Then the following hold:
(1) $\{x_n\}$ is Fejér monotone with respect to $\mathrm{Fix}(T)$.
(2) $\{Tx_n - x_n\}$ converges strongly to $0$.
(3) $\{x_n\}$ converges weakly to a point in $\mathrm{Fix}(T)$.
Proof. 
Set $T = (1 - \alpha)I + \alpha S$, where $S$ is nonexpansive. Then the iterative algorithm (9) can be rewritten as

$$x_{n+1} = x_n + \alpha\lambda_n\Big(Sx_n + \frac{1}{\alpha}e_n - x_n\Big)$$

for all $n \geq 0$. Since $\mathrm{Fix}(T) = \mathrm{Fix}(S)$, it is easy to check that all the conditions of Lemma 6 are satisfied with relaxation parameters $\alpha\lambda_n$ and errors $\frac{1}{\alpha}e_n$. Thus the conclusions of Lemma 7 follow directly from Lemma 6. This completes the proof. □

3. An Inexact Three-Operator Splitting Algorithm

In this section, we first present an inexact three-operator splitting algorithm and then prove its convergence.
We are now ready to prove the main convergence results for Algorithm 1. Theorem 1 parallels the convergence results for the three-operator splitting algorithm in [25], which corresponds to Algorithm 1 with the error terms $e_A^k$, $e_B^k$ and $e_C^k$ equal to zero.
Theorem 1.
Let $A: H \to 2^H$ and $B: H \to 2^H$ be maximally monotone operators and let $C: H \to H$ be a $\beta$-cocoercive operator for some $\beta > 0$. Assume that $\mathrm{zer}(A + B + C)$ is nonempty. Let $\gamma > 0$ and define the operator $T: H \to H$ by

$$T := I - J_{\gamma B} + J_{\gamma A}\big(2J_{\gamma B} - I - \gamma C J_{\gamma B}\big).$$

Let $\alpha = \frac{1}{2 - \varepsilon}$, where $\varepsilon \in (0, 1)$. Assume that $\gamma \in (0, 2\beta\varepsilon)$ and $\lambda_k \in (0, \frac{1}{\alpha})$ with $\sum_{k=0}^{+\infty} \lambda_k(\frac{1}{\alpha} - \lambda_k) = +\infty$. Let $\{z^k\}$, $\{x_B^k\}$ and $\{x_A^k\}$ be the iterative sequences generated by Algorithm 1. Assume that

$$\sum_{k=0}^{+\infty} \|e_A^k\| < +\infty, \qquad \sum_{k=0}^{+\infty} \|e_B^k\| < +\infty, \qquad \sum_{k=0}^{+\infty} \|e_C^k\| < +\infty.$$

Then the following hold:
(1) $\{z^k\}$ is Fejér monotone with respect to $\mathrm{Fix}(T)$.
(2) $\{Tz^k - z^k\}$ converges strongly to zero.
(3) $\{z^k\}$ converges weakly to a fixed point of $T$.
(4) If $x^* \in \mathrm{zer}(A + B + C)$, then there exists a constant $M > 0$ such that, for any $\lambda_k \in (0, \frac{1}{\alpha})$,

$$\sum_{k=0}^{+\infty} \lambda_k\|CJ_{\gamma B}z^k - Cx^*\|^2 \leq \frac{1}{\gamma\big(2\beta - \frac{\gamma}{\varepsilon}\big)}\Big(\|z^0 - z^*\|^2 + \sum_{k=0}^{+\infty}\frac{M}{\alpha}\|e^k\|\Big).$$

In addition, we have

$$\sum_{k=0}^{+\infty} \lambda_k\|Cx_B^k - Cx^*\|^2 \leq \frac{1}{\gamma\big(2\beta - \frac{\gamma}{\varepsilon}\big)}\|z^0 - z^*\|^2 + S,$$

where

$$S = \frac{M}{\alpha\gamma\big(2\beta - \frac{\gamma}{\varepsilon}\big)}\sum_{k=0}^{+\infty}\|e^k\| + \frac{1}{\alpha\beta^2}\sum_{k=0}^{+\infty}\|e_B^k\|\big(\|e_B^k\| + 2\|z^0 - z^*\|\big),$$

$z^* \in \mathrm{Fix}(T)$ satisfies $x^* = J_{\gamma B}(z^*)$, and $e^k$ is the aggregated error defined in (15) below.
(5) If $\lambda_k \geq \underline{\lambda} > 0$, then there exists $z^* \in \mathrm{Fix}(T)$ such that the iterative sequence $\{x_B^k\}$ converges weakly to $J_{\gamma B}z^* \in \mathrm{zer}(A + B + C)$.
(6) If $\lambda_k \geq \underline{\lambda} > 0$, then there exists $z^* \in \mathrm{Fix}(T)$ such that the iterative sequence $\{x_A^k\}$ converges weakly to $J_{\gamma B}z^* \in \mathrm{zer}(A + B + C)$.
(7) Let $\lambda_k \geq \underline{\lambda} > 0$ and let $z^* \in \mathrm{Fix}(T)$. Suppose that one of the following conditions holds:
(a) $A$ is uniformly monotone on every nonempty bounded subset of $\mathrm{dom}\,A$;
(b) $B$ is uniformly monotone on every nonempty bounded subset of $\mathrm{dom}\,B$;
(c) $C$ is demiregular at every point $x \in \mathrm{zer}(A + B + C)$.
Then $\{x_A^k\}$ and $\{x_B^k\}$ converge strongly to $J_{\gamma B}z^* \in \mathrm{zer}(A + B + C)$.
Proof. 
First, the iterative sequence $\{z^{k+1}\}$ of Algorithm 1 can be written equivalently as

$$\begin{aligned} z^{k+1} &= z^k + \lambda_k(x_A^k - x_B^k) \\ &= z^k + \lambda_k\Big(J_{\gamma A}\big(2x_B^k - z^k - \gamma(Cx_B^k + e_C^k)\big) + e_A^k - J_{\gamma B}z^k - e_B^k\Big) \\ &= z^k + \lambda_k\Big(Tz^k - z^k + J_{\gamma A}\big(2x_B^k - z^k - \gamma(Cx_B^k + e_C^k)\big) - J_{\gamma A}\big(2J_{\gamma B}z^k - z^k - \gamma CJ_{\gamma B}z^k\big) + e_A^k - e_B^k\Big) \\ &= z^k + \lambda_k(Tz^k - z^k + e^k), \end{aligned} \tag{14}$$

where

$$e^k = J_{\gamma A}\big(2x_B^k - z^k - \gamma(Cx_B^k + e_C^k)\big) - J_{\gamma A}\big(2J_{\gamma B}z^k - z^k - \gamma CJ_{\gamma B}z^k\big) + e_A^k - e_B^k. \tag{15}$$

Since $C$ is $\frac{1}{\beta}$-Lipschitz continuous and $J_{\gamma A}$ is nonexpansive, it follows that

$$\begin{aligned} \|e^k\| &\leq \big\|J_{\gamma A}\big(2x_B^k - z^k - \gamma(Cx_B^k + e_C^k)\big) - J_{\gamma A}\big(2J_{\gamma B}z^k - z^k - \gamma CJ_{\gamma B}z^k\big)\big\| + \|e_A^k\| + \|e_B^k\| \\ &\leq \big\|2x_B^k - z^k - \gamma(Cx_B^k + e_C^k) - 2J_{\gamma B}z^k + z^k + \gamma CJ_{\gamma B}z^k\big\| + \|e_A^k\| + \|e_B^k\| \\ &= \big\|2e_B^k - \gamma(Cx_B^k + e_C^k) + \gamma CJ_{\gamma B}z^k\big\| + \|e_A^k\| + \|e_B^k\| \\ &\leq \gamma\big\|C(J_{\gamma B}z^k + e_B^k) - CJ_{\gamma B}z^k\big\| + 3\|e_B^k\| + \|e_A^k\| + \gamma\|e_C^k\| \\ &\leq \Big(\frac{\gamma}{\beta} + 3\Big)\|e_B^k\| + \|e_A^k\| + \gamma\|e_C^k\|. \end{aligned}$$

Since $\sum_{k=0}^{+\infty}\|e_A^k\| < +\infty$, $\sum_{k=0}^{+\infty}\|e_B^k\| < +\infty$, $\sum_{k=0}^{+\infty}\|e_C^k\| < +\infty$ and $\gamma \in (0, 2\beta\varepsilon)$, we have $\sum_{k=0}^{+\infty}\|e^k\| < +\infty$. Thus, with the help of Proposition 2.1 of [25], $T$ is $\alpha$-averaged and we can obtain the conclusions (1), (2) and (3) by Lemma 7.
(4) Let $x^* \in \mathrm{zer}(A + B + C)$. Then, by Lemma 2.2 of [25], there exists $z^* \in \mathrm{Fix}(T)$ such that $x^* = J_{\gamma B}(z^*)$. By (1) and $\sum_{k=0}^{+\infty}\|e^k\| < +\infty$, we know that $\{\|z^k - z^*\|\}$ and $\{\|e^k\|\}$ are bounded. In view of (14), Lemma 5 and the Cauchy-Schwarz inequality, we have

$$\begin{aligned} \|z^{k+1} - z^*\|^2 &= \|z^k + \lambda_k(Tz^k - z^k + e^k) - z^*\|^2 \\ &= \|(1 - \lambda_k)(z^k - z^*) + \lambda_k(Tz^k - z^*) + \lambda_k e^k\|^2 \\ &= \|(1 - \lambda_k)(z^k - z^*) + \lambda_k(Tz^k - z^*)\|^2 + \lambda_k^2\|e^k\|^2 + 2\lambda_k\big\langle (1 - \lambda_k)(z^k - z^*) + \lambda_k(Tz^k - z^*), e^k \big\rangle \\ &\leq \|(1 - \lambda_k)(z^k - z^*) + \lambda_k(Tz^k - z^*)\|^2 + 2\lambda_k\|(1 - \lambda_k)(z^k - z^*) + \lambda_k(Tz^k - z^*)\|\,\|e^k\| + \lambda_k^2\|e^k\|^2 \\ &\leq \|(1 - \lambda_k)(z^k - z^*) + \lambda_k(Tz^k - z^*)\|^2 + M\lambda_k\|e^k\| \\ &= (1 - \lambda_k)\|z^k - z^*\|^2 + \lambda_k\|Tz^k - z^*\|^2 - \lambda_k(1 - \lambda_k)\|Tz^k - z^k\|^2 + M\lambda_k\|e^k\|, \end{aligned} \tag{16}$$

where $M = \sup_{k \in \mathbb{N}}\big(2\|z^k - z^*\| + \lambda_k\|e^k\|\big)$. On the other hand, it follows from Remark 2.1 of [25] that

$$\|Tz^k - z^*\|^2 \leq \|z^k - z^*\|^2 - \frac{1 - \alpha}{\alpha}\|Tz^k - z^k\|^2 - \gamma\Big(2\beta - \frac{\gamma}{\varepsilon}\Big)\|CJ_{\gamma B}z^k - CJ_{\gamma B}z^*\|^2. \tag{17}$$

Substituting (17) into (16), we get

$$\|z^{k+1} - z^*\|^2 \leq \|z^k - z^*\|^2 - \lambda_k\Big(1 - \lambda_k + \frac{1 - \alpha}{\alpha}\Big)\|Tz^k - z^k\|^2 - \gamma\lambda_k\Big(2\beta - \frac{\gamma}{\varepsilon}\Big)\|CJ_{\gamma B}z^k - CJ_{\gamma B}z^*\|^2 + M\lambda_k\|e^k\|, \tag{18}$$

which implies that

$$\|z^{k+1} - z^*\|^2 \leq \|z^k - z^*\|^2 - \gamma\lambda_k\Big(2\beta - \frac{\gamma}{\varepsilon}\Big)\|CJ_{\gamma B}z^k - Cx^*\|^2 + \frac{M}{\alpha}\|e^k\|. \tag{19}$$

Then we have

$$\gamma\lambda_k\Big(2\beta - \frac{\gamma}{\varepsilon}\Big)\|CJ_{\gamma B}z^k - Cx^*\|^2 \leq \|z^k - z^*\|^2 - \|z^{k+1} - z^*\|^2 + \frac{M}{\alpha}\|e^k\|. \tag{20}$$

Summing from zero to infinity, we obtain

$$\sum_{k=0}^{+\infty} \lambda_k\|CJ_{\gamma B}z^k - Cx^*\|^2 \leq \frac{1}{\gamma\big(2\beta - \frac{\gamma}{\varepsilon}\big)}\Big(\|z^0 - z^*\|^2 + \sum_{k=0}^{+\infty}\frac{M}{\alpha}\|e^k\|\Big). \tag{21}$$

Further, we have

$$\begin{aligned} \|Cx_B^k - Cx^*\|^2 &= \|Cx_B^k - CJ_{\gamma B}z^k + CJ_{\gamma B}z^k - Cx^*\|^2 \\ &= \|Cx_B^k - CJ_{\gamma B}z^k\|^2 + \|CJ_{\gamma B}z^k - Cx^*\|^2 + 2\big\langle Cx_B^k - CJ_{\gamma B}z^k, CJ_{\gamma B}z^k - Cx^* \big\rangle \\ &\leq \|C(J_{\gamma B}z^k + e_B^k) - CJ_{\gamma B}z^k\|^2 + \|CJ_{\gamma B}z^k - Cx^*\|^2 + 2\|Cx_B^k - CJ_{\gamma B}z^k\|\,\|CJ_{\gamma B}z^k - Cx^*\| \\ &\leq \frac{1}{\beta^2}\|e_B^k\|^2 + \|CJ_{\gamma B}z^k - Cx^*\|^2 + \frac{2}{\beta}\|e_B^k\|\,\|CJ_{\gamma B}z^k - Cx^*\| \\ &\leq \|CJ_{\gamma B}z^k - Cx^*\|^2 + \frac{1}{\beta^2}\|e_B^k\|\big(\|e_B^k\| + 2\|z^0 - z^*\|\big), \end{aligned} \tag{22}$$

where the last inequality uses $\|CJ_{\gamma B}z^k - Cx^*\| \leq \frac{1}{\beta}\|z^k - z^*\| \leq \frac{1}{\beta}\|z^0 - z^*\|$, which follows from the Fejér monotonicity of $\{z^k\}$. Therefore, summing (22) from zero to infinity, we obtain

$$\begin{aligned} \sum_{k=0}^{+\infty} \lambda_k\|Cx_B^k - Cx^*\|^2 &\leq \sum_{k=0}^{+\infty} \lambda_k\Big(\|CJ_{\gamma B}z^k - Cx^*\|^2 + \frac{1}{\beta^2}\|e_B^k\|\big(\|e_B^k\| + 2\|z^0 - z^*\|\big)\Big) \\ &\leq \frac{1}{\gamma\big(2\beta - \frac{\gamma}{\varepsilon}\big)}\Big(\|z^0 - z^*\|^2 + \sum_{k=0}^{+\infty}\frac{M}{\alpha}\|e^k\|\Big) + \frac{1}{\alpha\beta^2}\sum_{k=0}^{+\infty}\|e_B^k\|\big(\|e_B^k\| + 2\|z^0 - z^*\|\big). \end{aligned} \tag{23}$$
(5) Since the resolvent operator is firmly nonexpansive, it is also nonexpansive. Then we have

$$\|x_B^k - J_{\gamma B}z^*\| = \|J_{\gamma B}z^k + e_B^k - J_{\gamma B}z^*\| \leq \|J_{\gamma B}z^k - J_{\gamma B}z^*\| + \|e_B^k\| \leq \|z^k - z^*\| + \|e_B^k\|.$$

Since $\{z^k\}$ and $\{e_B^k\}$ are bounded, $\{x_B^k\}$ is also bounded. Let $\bar{x}$ be a weak sequential cluster point of $\{x_B^k\}$, say $x_B^{k_n} \rightharpoonup \bar{x}$. From Algorithm 1, it follows that

$$x_B^k - e_B^k = J_{\gamma B}z^k, \qquad x_A^k - e_A^k = J_{\gamma A}\big(2x_B^k - z^k - \gamma(Cx_B^k + e_C^k)\big).$$

Let

$$u_B^k := \frac{1}{\gamma}\big(z^k - x_B^k + e_B^k\big) \in B(x_B^k - e_B^k)$$

and

$$u_A^k := \frac{1}{\gamma}\big(2x_B^k - z^k - \gamma(Cx_B^k + e_C^k) - x_A^k + e_A^k\big) \in A(x_A^k - e_A^k).$$

Then we have $(x_B^k - e_B^k, u_B^k) \in \mathrm{gra}\,B$ and $(x_A^k - e_A^k, u_A^k) \in \mathrm{gra}\,A$. By (4), we have $Cx_B^k \to Cx^*$, where $x^* \in \mathrm{zer}(A + B + C)$. Since $C$ is cocoercive, it is maximally monotone and so its graph is sequentially closed in $H^{\mathrm{weak}} \times H^{\mathrm{strong}}$, which, together with $x_B^{k_n} \rightharpoonup \bar{x}$, yields $C\bar{x} = Cx^*$. Consequently, we have $Cx_B^{k_n} \to C\bar{x}$. By (2), we have $x_A^k - x_B^k = Tz^k - z^k + e^k \to 0$. Then $x_A^{k_n} \rightharpoonup \bar{x}$. Further, by (3), we obtain

$$u_B^{k_n} \rightharpoonup \frac{1}{\gamma}(z^* - \bar{x}), \qquad u_A^{k_n} \rightharpoonup \frac{1}{\gamma}\big(\bar{x} - z^* - \gamma C\bar{x}\big).$$

From Corollary 25.5 of [53], we obtain

$$\bar{x} \in \mathrm{zer}(A + B + C), \qquad \Big(\bar{x}, \frac{1}{\gamma}(z^* - \bar{x})\Big) \in \mathrm{gra}\,B, \qquad \Big(\bar{x}, \frac{1}{\gamma}(\bar{x} - z^* - \gamma C\bar{x})\Big) \in \mathrm{gra}\,A.$$

Hence $\bar{x} = J_{\gamma B}z^*$. Thus $J_{\gamma B}z^*$ is the unique weak sequential cluster point of $\{x_B^k\}$. By Lemma 4, we conclude that $\{x_B^k\}$ converges weakly to $J_{\gamma B}z^* \in \mathrm{zer}(A + B + C)$.
(6) Notice that $x_A^k - x_B^k \to 0$ as $k \to +\infty$. By (5), it follows that $\{x_A^k\}$ also converges weakly to $J_{\gamma B}z^* \in \mathrm{zer}(A + B + C)$.
(7) Let $x^* = J_{\gamma B}(z^*)$. By Lemma 2.2 of [25], we have $x^* \in \mathrm{zer}(A + B + C)$. Let $u_B^* := \frac{1}{\gamma}(z^* - x^*)$. Then $u_B^* \in Bx^*$. Let $u_A^* := \frac{1}{\gamma}(x^* - z^*) - Cx^*$. It follows from $x^* \in \mathrm{zer}(A + B + C)$ and $u_B^* \in Bx^*$ that $u_A^* \in Ax^*$.
(a) From (5), we have

$$(x_A^k - e_A^k, u_A^k) \in \mathrm{gra}\,A, \qquad (x_B^k - e_B^k, u_B^k) \in \mathrm{gra}\,B.$$

Since $B + C$ is monotone, we obtain

$$0 \leq \big\langle x_B^k - e_B^k - x^*,\ u_B^k + C(x_B^k - e_B^k) - (u_B^* + Cx^*) \big\rangle. \tag{28}$$

Define $S := \{x^*\} \cup \{x_A^k - e_A^k\}$. By (6), $\{x_A^k\}$ is bounded and so $S$ is also bounded. Since $A$ is uniformly monotone, there exists an increasing function $\Phi_A: \mathbb{R}_+ \to [0, +\infty)$ with $\Phi_A(0) = 0$ such that

$$\Phi_A\big(\|x_A^k - e_A^k - x^*\|\big) \leq \big\langle x_A^k - e_A^k - x^*,\ u_A^k - u_A^* \big\rangle. \tag{29}$$

Adding (28) and (29) yields

$$\begin{aligned} \gamma\Phi_A\big(\|x_A^k - e_A^k - x^*\|\big) &\leq \gamma\big\langle x_A^k - e_A^k - x^*, u_A^k - u_A^* \big\rangle + \gamma\big\langle x_B^k - e_B^k - x^*, u_B^k + C(x_B^k - e_B^k) - (u_B^* + Cx^*) \big\rangle \\ &= \gamma\big\langle x_A^k - e_A^k - (x_B^k - e_B^k), u_A^k - u_A^* \big\rangle + \gamma\big\langle x_B^k - e_B^k - x^*, u_A^k - u_A^* \big\rangle \\ &\quad + \gamma\big\langle x_B^k - e_B^k - x^*, u_B^k + C(x_B^k - e_B^k) - (u_B^* + Cx^*) \big\rangle \\ &= \gamma\big\langle x_A^k - e_A^k - (x_B^k - e_B^k), u_A^k - u_A^* \big\rangle + \gamma\big\langle x_B^k - e_B^k - x^*, u_A^k + u_B^k + C(x_B^k - e_B^k) \big\rangle \\ &= \big\langle x_A^k - e_A^k - x_B^k + e_B^k, \gamma u_A^k - \gamma u_A^* \big\rangle + \big\langle x_B^k - e_B^k - x^*, x_B^k - x_A^k + e_A^k + e_B^k + \gamma C(x_B^k - e_B^k) - \gamma Cx_B^k - \gamma e_C^k \big\rangle \\ &= \big\langle x_A^k - e_A^k - x_B^k, \gamma u_A^k - \gamma u_A^* \big\rangle + \big\langle e_B^k, \gamma u_A^k - \gamma u_A^* \big\rangle + \big\langle x_B^k - e_B^k - x^*, x_B^k - x_A^k + e_A^k \big\rangle \\ &\quad + \big\langle x_B^k - e_B^k - x^*, e_B^k \big\rangle + \big\langle x_B^k - e_B^k - x^*, \gamma C(x_B^k - e_B^k) - \gamma Cx_B^k - \gamma e_C^k \big\rangle \\ &= \big\langle x_B^k - x_A^k + e_A^k, x_B^k - e_B^k - x^* - \gamma u_A^k + \gamma u_A^* \big\rangle + \big\langle e_B^k, x_B^k - e_B^k - x^* + \gamma u_A^k - \gamma u_A^* \big\rangle \\ &\quad + \big\langle x_B^k - e_B^k - x^*, \gamma C(x_B^k - e_B^k) - \gamma Cx_B^k - \gamma e_C^k \big\rangle \\ &= \big\langle x_B^k - x_A^k + e_A^k, z^k - z^* + \gamma Cx_B^k - \gamma Cx^* + x_A^k - x_B^k - e_A^k - e_B^k + \gamma e_C^k \big\rangle \\ &\quad + \big\langle e_B^k, 3x_B^k - z^k - \gamma(Cx_B^k + e_C^k) - x_A^k + e_A^k - e_B^k - 2x^* + z^* + \gamma Cx^* \big\rangle \\ &\quad + \big\langle x_B^k - e_B^k - x^*, \gamma C(x_B^k - e_B^k) - \gamma Cx_B^k - \gamma e_C^k \big\rangle. \end{aligned} \tag{30}$$

By (2) and (4), we have $x_B^k - x_A^k \to 0$ and $Cx_B^k - Cx^* \to 0$ as $k \to +\infty$. Then we have

$$\big\langle x_B^k - x_A^k + e_A^k,\ z^k - z^* + \gamma Cx_B^k - \gamma Cx^* + x_A^k - x_B^k - e_A^k - e_B^k + \gamma e_C^k \big\rangle \to 0 \tag{31}$$

as $k \to +\infty$. By (3), (5) and (6), we know that $\{x_B^k\}$, $\{x_A^k\}$ and $\{z^k\}$ are bounded. Since $\sum_{k=0}^{+\infty}\|e_B^k\| < +\infty$ and $\sum_{k=0}^{+\infty}\|e_A^k\| < +\infty$, we have

$$\big\langle e_B^k,\ 3x_B^k - z^k - \gamma(Cx_B^k + e_C^k) - x_A^k + e_A^k - e_B^k - 2x^* + z^* + \gamma Cx^* \big\rangle \to 0 \tag{32}$$

as $k \to +\infty$. Since $C$ is $\frac{1}{\beta}$-Lipschitz continuous, we have

$$\|\gamma C(x_B^k - e_B^k) - \gamma Cx_B^k - \gamma e_C^k\| \leq \gamma\|C(x_B^k - e_B^k) - Cx_B^k\| + \gamma\|e_C^k\| \leq \frac{\gamma}{\beta}\|e_B^k\| + \gamma\|e_C^k\| \to 0 \tag{33}$$

as $k \to +\infty$. Then we have

$$\big\langle x_B^k - e_B^k - x^*,\ \gamma C(x_B^k - e_B^k) - \gamma Cx_B^k - \gamma e_C^k \big\rangle \to 0 \tag{34}$$

as $k \to +\infty$. From (31), (32) and (34), we obtain

$$\|x_A^k - e_A^k - x^*\| \to 0 \tag{35}$$

as $k \to +\infty$.
Now, we prove that $x_A^k \to x^*$ as $k \to +\infty$. In fact, by (35), we have

$$\|x_A^k - x^*\| \leq \|x_A^k - x^* - e_A^k\| + \|e_A^k\| \to 0$$

as $k \to +\infty$. Since $x_A^k - x_B^k \to 0$ as $k \to +\infty$, we also have $x_B^k \to x^*$ as $k \to +\infty$.
(b) Since $A + C$ is monotone, we have

$$0 \leq \big\langle x_A^k - e_A^k - x^*,\ u_A^k + C(x_A^k - e_A^k) - (u_A^* + Cx^*) \big\rangle. \tag{37}$$

Since $B$ is uniformly monotone, there exists an increasing function $\Phi_B: \mathbb{R}_+ \to [0, +\infty)$ with $\Phi_B(0) = 0$ such that

$$\Phi_B\big(\|x_B^k - e_B^k - x^*\|\big) \leq \big\langle x_B^k - e_B^k - x^*,\ u_B^k - u_B^* \big\rangle. \tag{38}$$

Multiplying (37) and (38) by $\gamma$ and adding them together, we obtain

$$\gamma\Phi_B\big(\|x_B^k - e_B^k - x^*\|\big) \leq \gamma\big\langle x_B^k - e_B^k - x^*, u_B^k - u_B^* \big\rangle + \gamma\big\langle x_A^k - e_A^k - x^*, u_A^k + C(x_A^k - e_A^k) - (u_A^* + Cx^*) \big\rangle. \tag{39}$$

Next, we prove that $\|x_B^k - e_B^k - x^*\| \to 0$ as $k \to +\infty$. The technical argument is similar to that of (a). We start by estimating the right-hand side of (39). In fact, we have

$$\begin{aligned} &\gamma\big\langle x_B^k - e_B^k - x^*, u_B^k - u_B^* \big\rangle + \gamma\big\langle x_A^k - e_A^k - x^*, u_A^k + C(x_A^k - e_A^k) - (u_A^* + Cx^*) \big\rangle \\ &= \gamma\big\langle x_B^k - e_B^k - (x_A^k - e_A^k), u_B^k - u_B^* \big\rangle + \gamma\big\langle x_A^k - e_A^k - x^*, u_B^k - u_B^* \big\rangle \\ &\quad + \gamma\big\langle x_A^k - e_A^k - x^*, u_A^k + C(x_A^k - e_A^k) - (u_A^* + Cx^*) \big\rangle \\ &= \gamma\big\langle x_B^k - e_B^k - (x_A^k - e_A^k), u_B^k - u_B^* \big\rangle + \gamma\big\langle x_A^k - e_A^k - x^*, u_B^k + u_A^k + C(x_A^k - e_A^k) \big\rangle \\ &= \big\langle x_B^k - x_A^k + e_A^k, \gamma u_B^k - \gamma u_B^* \big\rangle - \big\langle e_B^k, \gamma u_B^k - \gamma u_B^* \big\rangle \\ &\quad + \big\langle x_A^k - e_A^k - x^*, x_B^k - x_A^k + e_A^k + e_B^k - \gamma(Cx_B^k + e_C^k) + \gamma C(x_A^k - e_A^k) \big\rangle \\ &= \big\langle x_B^k - x_A^k + e_A^k, \gamma u_B^k - \gamma u_B^* + x_A^k - e_A^k - x^* \big\rangle + \big\langle e_B^k, -\gamma u_B^k + \gamma u_B^* + x_A^k - e_A^k - x^* \big\rangle \\ &\quad + \big\langle x_A^k - e_A^k - x^*, \gamma C(x_A^k - e_A^k) - \gamma(Cx_B^k + e_C^k) \big\rangle. \end{aligned} \tag{40}$$

For the first term of the right-hand side of (40), we have

$$\big\langle x_B^k - x_A^k + e_A^k,\ \gamma u_B^k - \gamma u_B^* + x_A^k - e_A^k - x^* \big\rangle = \big\langle x_B^k - x_A^k + e_A^k,\ x_A^k - x_B^k + e_B^k - e_A^k + z^k - z^* \big\rangle \to 0 \tag{41}$$

as $k \to +\infty$. For the second term of the right-hand side of (40), we have

$$\big\langle e_B^k,\ -\gamma u_B^k + \gamma u_B^* + x_A^k - e_A^k - x^* \big\rangle = \big\langle e_B^k,\ x_A^k + x_B^k - e_A^k - e_B^k - z^k + z^* - 2x^* \big\rangle \to 0 \tag{42}$$

as $k \to +\infty$. Since $C$ is $\frac{1}{\beta}$-Lipschitz continuous, we have

$$\|\gamma C(x_A^k - e_A^k) - \gamma(Cx_B^k + e_C^k)\| \leq \frac{\gamma}{\beta}\|x_A^k - e_A^k - x_B^k\| + \gamma\|e_C^k\| \to 0 \tag{43}$$

as $k \to +\infty$. From (39)–(43), it follows that

$$\|x_B^k - e_B^k - x^*\| \to 0$$

as $k \to +\infty$. Therefore, we have $x_B^k \to x^*$ as $k \to +\infty$; in fact,

$$\|x_B^k - x^*\| \leq \|x_B^k - e_B^k - x^*\| + \|e_B^k\| \to 0$$

as $k \to +\infty$. Finally, $x_A^k \to x^*$ as $k \to +\infty$ follows from the fact that $x_A^k - x_B^k \to 0$ as $k \to +\infty$.
(c) By (4) and (5), we know that $Cx_B^k \to Cx^*$ and $x_B^k \rightharpoonup x^*$ as $k \to +\infty$. Since $C$ is demiregular, $x_B^k \to x^*$ as $k \to +\infty$. Consequently, we also have $x_A^k \to x^*$ as $k \to +\infty$. This completes the proof. □
Let $B \equiv 0$ and $e_B^k = 0$. Then the inexact three-operator splitting algorithm (Algorithm 1) reduces to the following inexact forward-backward splitting algorithm:

$$x_A^k = J_{\gamma A}\big(z^k - \gamma(Cz^k + e_C^k)\big) + e_A^k, \qquad z^{k+1} = z^k + \lambda_k(x_A^k - z^k),$$

which is equivalent to

$$z^{k+1} = (1 - \lambda_k)z^k + \lambda_k\Big(J_{\gamma A}\big(z^k - \gamma(Cz^k + e_C^k)\big) + e_A^k\Big). \tag{46}$$
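As a small illustration of iteration (46) (our own sketch; the lasso-type instance and names below are hypothetical), the inexact forward-backward step is simply an inexact gradient step followed by an inexact proximal step:

```python
import numpy as np

def inexact_forward_backward(J_gamma_A, C, z0, gamma, lam=1.0, n_iter=200,
                             error_model=None):
    """Sketch of iteration (46):
    z_{k+1} = (1 - lam_k)*z_k + lam_k*(J_{gamma A}(z_k - gamma*(C z_k + e_C^k)) + e_A^k)."""
    z = z0.copy()
    for k in range(n_iter):
        e_A, e_C = error_model(k) if error_model else (0.0, 0.0)
        x_A = J_gamma_A(z - gamma * (C(z) + e_C)) + e_A
        z = (1 - lam) * z + lam * x_A
    return z

# Lasso-type toy problem: minimize 0.5*||M x - b||^2 + mu*||x||_1,
# i.e. A = subdifferential of mu*||.||_1 and C = grad of the quadratic,
# which is beta-cocoercive with beta = 1/||M||^2.
M = np.array([[1.0, 0.5], [0.0, 1.0]])
b = np.array([1.0, -1.0])
mu, gamma = 0.1, 0.5
z = inexact_forward_backward(
    J_gamma_A=lambda x: np.sign(x) * np.maximum(np.abs(x) - gamma * mu, 0.0),
    C=lambda x: M.T @ (M @ x - b),
    z0=np.zeros(2), gamma=gamma,
    error_model=lambda k: (1e-4 / (k + 1) ** 2, 0.0))
```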
From Theorem 1, we obtain the following convergence results of the inexact forward-backward splitting algorithm immediately.
Corollary 1.
Let $A: H \to 2^H$ be a maximally monotone operator and let $C: H \to H$ be a $\beta$-cocoercive operator for some $\beta > 0$. For any $z^0 \in H$, let $\{z^k\}$ be defined by (46). Assume that $\gamma \in (0, 2\beta)$ and $\lambda_k \in \big(0, \frac{4\beta - \gamma}{2\beta}\big)$ with $\sum_{k=0}^{+\infty}\lambda_k\big(\frac{4\beta - \gamma}{2\beta} - \lambda_k\big) = +\infty$. Let $\{e_A^k\}$ and $\{e_C^k\}$ be absolutely summable sequences in $H$. Then the following hold:
(1) $\{z^k\}$ converges weakly to a solution $x \in \mathrm{zer}(A + C)$;
(2) $Cz^k - Cx \to 0$ as $k \to +\infty$ whenever $\lambda_k \geq \underline{\lambda} > 0$;
(3) $J_{\gamma A}(z^k - \gamma Cz^k) - z^k \to 0$ as $k \to +\infty$;
(4) Let $\lambda_k \geq \underline{\lambda} > 0$ and let $z^* \in \mathrm{zer}(A + C)$. Suppose that one of the following conditions holds:
(a) $A$ is uniformly monotone on every nonempty bounded subset of $\mathrm{dom}\,A$;
(b) $C$ is demiregular at each point $x \in \mathrm{zer}(A + C)$.
Then the sequence $\{x_A^k\}$ converges strongly to a solution in $\mathrm{zer}(A + C)$.
Remark 1.
Corollary 1 allows a larger range of relaxation parameters $\{\lambda_k\}$ than Corollary 6.5 of Combettes [36] when the step sizes in [36] are kept constant. In addition, Corollary 1 also generalizes Theorem 3.4 of Combettes and Wajs [26] for solving the convex minimization problem of the sum of two convex functions (i.e., $h(x) = 0$ in (2)) to the general monotone inclusion problem (i.e., $B = 0$ in (1)).
Let $C \equiv 0$ and $e_C^k = 0$. Then the inexact three-operator splitting algorithm reduces to the following inexact Douglas-Rachford splitting algorithm:

$$x_B^k = J_{\gamma B}(z^k) + e_B^k, \qquad x_A^k = J_{\gamma A}(2x_B^k - z^k) + e_A^k, \qquad z^{k+1} = z^k + \lambda_k(x_A^k - x_B^k),$$

which is equivalent to

$$z^{k+1} = z^k + \lambda_k\Big(J_{\gamma A}\big(2(J_{\gamma B}(z^k) + e_B^k) - z^k\big) + e_A^k - \big(J_{\gamma B}(z^k) + e_B^k\big)\Big). \tag{48}$$
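A minimal sketch of iteration (48) in the same style as the previous snippets (again our own illustration; the feasibility-type example below is hypothetical):

```python
import numpy as np

def inexact_douglas_rachford(J_gamma_A, J_gamma_B, z0, lam=1.0, n_iter=200,
                             error_model=None):
    """Sketch of iteration (48): a Douglas-Rachford step with
    resolvent errors e_B^k and e_A^k."""
    z = z0.copy()
    for k in range(n_iter):
        e_B, e_A = error_model(k) if error_model else (0.0, 0.0)
        x_B = J_gamma_B(z) + e_B
        x_A = J_gamma_A(2 * x_B - z) + e_A
        z = z + lam * (x_A - x_B)
    return x_B, z

# Toy feasibility problem: A and B are normal cones of a half-space and a box,
# so the resolvents are projections and x_B approaches a point in the intersection.
x_B, z = inexact_douglas_rachford(
    J_gamma_A=lambda x: x - max(0.0, x.sum() - 1.0) / x.size * np.ones_like(x),
    J_gamma_B=lambda x: np.clip(x, 0.0, 1.0),
    z0=np.array([2.0, -1.0]),
    error_model=lambda k: (1e-4 / (k + 1) ** 2, 0.0))
```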
Now, from Theorem 1, we obtain the following convergence results for the inexact Douglas-Rachford splitting algorithm.
Define the operator $T := I - J_{\gamma B} + J_{\gamma A}(2J_{\gamma B} - I)$; this operator $T$ is usually called the Douglas-Rachford splitting operator.
Corollary 2.
Let $A: H \to 2^H$ and $B: H \to 2^H$ be maximally monotone operators. For any $z^0 \in H$, let $\{z^k\}$ be defined by (48). Assume that $\gamma > 0$ and $\lambda_k \in (0, 2)$ with $\sum_{k=0}^{+\infty}\lambda_k(2 - \lambda_k) = +\infty$. Let $\{e_A^k\}$ and $\{e_B^k\}$ be absolutely summable sequences in $H$. Then the following hold:
(1) $\{z^k\}$ converges weakly to a fixed point of $T$;
(2) $J_{\gamma A}(2J_{\gamma B}(z^k) - z^k) - J_{\gamma B}(z^k) \to 0$ as $k \to +\infty$;
(3) Let $\lambda_k \geq \underline{\lambda} > 0$ and let $z^*$ be a fixed point of $T$. Then the iterative sequence $\{x_B^k\}$ converges weakly to $J_{\gamma B}z^* \in \mathrm{zer}(A + B)$;
(4) Let $\lambda_k \geq \underline{\lambda} > 0$ and let $z^*$ be a fixed point of $T$. Then the iterative sequence $\{x_A^k\}$ converges weakly to $J_{\gamma B}z^* \in \mathrm{zer}(A + B)$;
(5) Let $\lambda_k \geq \underline{\lambda} > 0$ and let $z^* \in \mathrm{zer}(A + B)$. Suppose that one of the following conditions holds:
(a) $A$ is uniformly monotone on every nonempty bounded subset of $\mathrm{dom}\,A$;
(b) $B$ is uniformly monotone on every nonempty bounded subset of $\mathrm{dom}\,B$.
Then the sequences $\{x_A^k\}$ and $\{x_B^k\}$ converge strongly to a solution in $\mathrm{zer}(A + B)$.
Remark 2.
(I) Item (1) of Corollary 2 recovers Corollary 5.2 of [36].
(II) Items (2)–(5) of Corollary 2 generalize Theorem 25.6 of Bauschke and Combettes [53] from the exact Douglas-Rachford splitting algorithm to the inexact Douglas-Rachford splitting algorithm.

4. Conclusions

In this paper, we generalized the three-operator splitting algorithm proposed by Davis and Yin [25] from the exact to the inexact setting. The theoretical convergence of the inexact three-operator splitting algorithm was studied under mild conditions on the parameters. In forthcoming work, we will discuss the convergence rates of the inexact three-operator splitting algorithm, including the fixed-point residual and the ergodic and nonergodic convergence rates of the function values in the context of convex optimization problems.

Author Contributions

C.Z., Y.T. and Y.J.C. contributed equally in this work.

Funding

This research was funded by the National Natural Science Foundation of China, grant numbers 11661056 and 11401293.

Acknowledgments

We would like to thank the associate editor and the two reviewers for their helpful comments, which improved the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Goldstein, A.A. Convex programming in Hilbert space. Bull. Am. Math. Soc. 1964, 70, 709–710. [Google Scholar] [CrossRef]
  2. Lions, P.L.; Mercier, B. Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 1979, 16, 964–979. [Google Scholar] [CrossRef]
  3. Combettes, P.L.; Pesquet, J.C. A Douglas-Rachford splitting approach to nonsmooth convex variational signal recovery. IEEE J. Sel. Top. Signal Process. 2007, 1, 564–574. [Google Scholar] [CrossRef]
  4. Tseng, P. A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 2000, 38, 431–446. [Google Scholar] [CrossRef]
  5. Beck, A.; Teboulle, M. Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems. IEEE Trans. Image Process. 2009, 18, 2419–2434. [Google Scholar] [CrossRef] [PubMed]
  6. Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 2, 183–202. [Google Scholar] [CrossRef]
  7. Loris, I.; Verhoeven, C. On a generalization of the iterative soft-thresholding algorithm for the case of non-separable penalty. Inverse Probl. 2011, 27, 125007. [Google Scholar] [CrossRef] [Green Version]
  8. Chen, P.J.; Huang, J.G.; Zhang, X.Q. A primal-dual fixed point algorithm for convex separable minimization with applications to image restoration. Inverse Probl. 2013, 29, 025011. [Google Scholar] [CrossRef]
  9. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 2010, 3, 1–122. [Google Scholar] [CrossRef]
  10. Yang, J.F.; Yuan, X.M. Linearized augmented Lagrangian and alternating direction methods for nuclear norm minimization. Math. Comput. 2013, 82, 301–329. [Google Scholar] [CrossRef]
  11. He, B.S.; Liu, H.; Wang, Z.R.; Yuan, X.M. A strictly contractive Peaceman-Rachford splitting method for convex programming. SIAM J. Optim. 2014, 24, 1011–1040. [Google Scholar] [CrossRef] [PubMed]
  12. Chambolle, A.; Pock, T. A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 2011, 40, 120–145. [Google Scholar] [CrossRef]
  13. He, B.S.; Yuan, X.M. Convergence analysis of primal-dual algorithms for a saddle-point problem: From contraction perspective. SIAM J. Imaging Sci. 2012, 5, 119–149. [Google Scholar] [CrossRef]
  14. Condat, L. A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms. J. Optim. Theory Appl. 2013, 158, 460–479. [Google Scholar] [CrossRef] [Green Version]
  15. Vũ, B.C. A splitting algorithm for dual monotone inclusions involving cocoercive operators. Adv. Comput. Math. 2013, 38, 667–681. [Google Scholar] [CrossRef]
  16. Yu, Y.C.; Peng, J.G. A modified primal-dual method with applications to some sparse recovery problems. Appl. Math. Comput. 2018, 333, 76–94. [Google Scholar] [CrossRef]
  17. Combettes, P.L.; Pesquet, J.C. Proximal splitting methods in signal processing. In Fixed-Point Algorithm for Inverse Problems in Science and Engineering; Bauschke, H.H., Burachik, R., Combettes, P.L., Elser, V., Luke, D.R., Wolkowicz, H., Eds.; Springer: New York, NY, USA, 2010; pp. 185–212. [Google Scholar]
  18. Komodakis, N.; Pesquet, J.C. Playing with duality: An overview of recent primal-dual approaches for solving large-scale optimization problems. IEEE Signal Process. Mag. 2015, 32, 31–54. [Google Scholar] [CrossRef]
  19. Parikh, N.; Boyd, S. Proximal algorithms. Found. Trends Optim. 2014, 1, 123–231. [Google Scholar] [CrossRef]
  20. Raguet, H.; Fadili, J.; Peyré, G. A generalized forward-backward splitting. SIAM J. Imaging Sci. 2013, 6, 1199–1226. [Google Scholar] [CrossRef]
  21. Raguet, H.; Landrieu, L. Preconditioning of a generalized forward-backward splitting and application to optimization on graphs. SIAM J. Imaging Sci. 2015, 8, 2706–2739. [Google Scholar] [CrossRef]
  22. Combettes, P.L.; Vũ, B.C. Variable metric forward-backward splitting with applications to monotone inclusions in duality. Optimization 2014, 63, 1289–1318. [Google Scholar] [CrossRef]
  23. Latafat, P.; Patrinos, P. Asymmetric forward-backward-adjoint splitting for solving monotone inclusions involving three operators. Comput. Optim. Appl. 2017, 68, 57–93. [Google Scholar] [CrossRef]
  24. Lorenz, D.A.; Pock, T. An inertial forward-backward algorithm for monotone inclusions. J. Math. Imaging Vis. 2015, 51, 311–325. [Google Scholar] [CrossRef]
  25. Davis, D.; Yin, W.T. A three-operator splitting scheme and its optimization applications. Set-Valued Var. Anal. 2017, 25, 829–858. [Google Scholar] [CrossRef]
  26. Combettes, P.L.; Wajs, V.R. Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 2005, 4, 1168–1200. [Google Scholar] [CrossRef]
  27. Eckstein, J.; Bertsekas, D. On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 1992, 55, 293–318. [Google Scholar] [CrossRef]
  28. Briceño-Arias, L.M. Forward-Douglas-Rachford splitting and forward-partial inverse method for solving monotone inclusions. Optimization 2015, 64, 1239–1261. [Google Scholar] [CrossRef]
  29. Marin, M. Weak solutions in elasticity of dipolar porous materials. Math. Probl. Eng. 2008, 2008, 158908. [Google Scholar] [CrossRef]
  30. Sidky, E.Y.; Jørgensen, J.H.; Pan, X.C. Convex optimization problem prototyping for image reconstruction in computed tomography with the Chambolle-Pock algorithm. Phys. Med. Biol. 2012, 57, 3065–3091. [Google Scholar] [CrossRef] [PubMed]
  31. Chirilǒ, A.; Marin, M. The theory of generalized thermoelasticity with fractional order strain for dipolar materials with double porosity. J. Mater. Sci. 2018, 53, 3470–3482. [Google Scholar] [CrossRef]
  32. Cevher, V.; Vũ, B.C.; Yurtsever, A. Stochastic Forward-Douglas-Rachford Splitting for Monotone Inclusions. 2018. Available online: https://infoscience.epfl.ch/record/215759/files/CVY2016_preprint.pdf (accessed on 25 August 2018).
  33. Yurtsever, A.; Vũ, B.C.; Cevher, V. Stochastic three-composite convex minimization. In Proceedings of the Advances in Neural Information Processing Systems, Barcelona, Spain, 5–10 December 2016; pp. 4322–4330. [Google Scholar]
  34. Pedregosa, F.; Gidel, G. Adaptive three operator splitting. In Proceedings of the 35th International Conference on Machine Learning, Stockholmsmässan, Stockholm, Sweden, 10–15 July 2018; pp. 4085–4094. [Google Scholar]
  35. Rockafellar, R.T. Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14, 877–898. [Google Scholar] [CrossRef]
  36. Combettes, P.L. Solving monotone inclusions via compositions of nonexpansive averaged operators. Optimization 2004, 53, 475–504. [Google Scholar] [CrossRef]
  37. He, B.S.; Yuan, X.M. An accelerated inexact proximal point algorithm for convex minimization. J. Optim. Theory Appl. 2012, 154, 536–548. [Google Scholar] [CrossRef]
  38. Salzo, S.; Villa, S. Inexact and accelerated proximal point algorithm. J. Convex Anal. 2012, 19, 1167–1192. [Google Scholar]
  39. Villa, S.; Salzo, S.; Baldassarre, L.; Verri, A. Accelerated and inexact forward-backward algorithms. SIAM J. Optim. 2013, 23, 1607–1633. [Google Scholar] [CrossRef]
  40. Schmidt, M.; Roux, N.L.; Bach, F. Convergence rates of inexact proximal-gradient methods for convex optimization. In Proceedings of the Advances in Neural Information Processing Systems, Granada, Spain, 12–17 December 2011; pp. 1458–1466. [Google Scholar]
  41. Solodov, M.V.; Svaiter, B.F. An inexact hybrid generalized proximal point algorithm and some new results on the theory of Bregman functions. Math. Oper. Res. 2000, 25, 214–230. [Google Scholar] [CrossRef]
  42. Eckstein, J.; Yao, W. Relative-error approximate versions of Douglas-Rachford splitting and special cases of the ADMM. Math. Program. 2018, 170, 417–444. [Google Scholar] [CrossRef]
  43. Alves, M.M.; Geremia, M. Iteration complexity of an inexact Douglas-Rachford method and of a Douglas-Rachford-Tseng’s F-B four-operator splitting method for solving monotone inclusions. arXiv, 2017; arXiv:1711.11551. [Google Scholar]
  44. Solodov, M.V.; Svaiter, B.F. A unified framework for some inexact proximal point algorithms. Numer. Funct. Anal. Optim. 2001, 22, 1013–1035. [Google Scholar] [CrossRef]
  45. Iusem, A.N.; Pennanen, T.; Svaiter, B.F. Inexact variants of the proximal point algorithm without monotonicity. SIAM J. Optim. 2003, 13, 1080–1097. [Google Scholar] [CrossRef]
  46. Han, D. Inexact operator splitting methods with selfadaptive strategy for variational inequality problems. J. Optim. Theory Appl. 2007, 132, 227–243. [Google Scholar] [CrossRef]
  47. Parente, L.A.; Lotito, P.A.; Solodov, M.V. A class of inexact variable metric proximal point algorithms. SIAM J. Optim. 2008, 19, 240–260. [Google Scholar] [CrossRef]
  48. Chouzenoux, E.; Pesquet, J.C.; Repetti, A. Variable metric forward-backward algorithm for minimizing the sum of a differentiable function and a convex function. J. Optim. Theory Appl. 2014, 162, 107–132. [Google Scholar] [CrossRef] [Green Version]
  49. Chancelier, J.P. Auxiliary problem principle and inexact variable metric forward-backward algorithm for minimizing the sum of a differentiable function and a convex function. arXiv, 2015; arXiv:1508.02994. [Google Scholar]
  50. Kang, M.; Kang, M.; Jung, M. Inexact accelerated augmented Lagrangian methods. Comput. Optim. Appl. 2015, 62, 373–404. [Google Scholar] [CrossRef]
  51. Li, J.Y.; Wu, Z.Y.; Wu, C.Z.; Long, Q.; Wang, X.Y. An inexact dual fast gradient-projection method for separable convex optimization with linear coupled constraints. J. Optim. Theory Appl. 2016, 168, 153–171. [Google Scholar] [CrossRef]
  52. Reem, D.; De Pierro, A. A new convergence analysis and perturbation resilience of some accelerated proximal forward-backward algorithms with errors. Inverse Probl. 2017, 33, 044001. [Google Scholar] [CrossRef]
  53. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: New York, NY, USA, 2011. [Google Scholar]
  54. Byrne, C. A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20, 103–120. [Google Scholar] [CrossRef]
