Article

Population Games, Stable Games, and Passivity

School of Electrical and Computer Engineering, Georgia Institute of Technology, 777 Atlantic Drive NW, Atlanta, GA 30332, USA
*
Author to whom correspondence should be addressed.
Submission received: 4 April 2013 / Revised: 3 September 2013 / Accepted: 26 September 2013 / Published: 7 October 2013
(This article belongs to the Special Issue Advances in Evolutionary Game Theory and Applications)

Abstract

The class of “stable games”, introduced by Hofbauer and Sandholm in 2009, has the attractive property of admitting global convergence to equilibria under many evolutionary dynamics. We show that stable games can be identified as a special case of the feedback-system-theoretic notion of a “passive” dynamical system. Motivated by this observation, we develop a notion of passivity for evolutionary dynamics that complements the definition of the class of stable games. Since interconnections of passive dynamical systems exhibit stable behavior, we can make conclusions about passive evolutionary dynamics coupled with stable games. We show how established evolutionary dynamics qualify as passive dynamical systems. Moreover, we exploit the flexibility of the definition of passive dynamical systems to analyze generalizations of stable games and evolutionary dynamics that include forecasting heuristics as well as certain games with memory.

1. Introduction

Evolutionary game theory (e.g., [1]), along with the related topic of learning in games, explores the dynamics of interacting players or populations. The framework entails introducing an “evolutionary dynamic” (or behavioral rule) and proceeding to analyze the resulting trajectories that a game induces. One of the goals of evolutionary game theory is to understand how rational solution concepts, such as Nash equilibrium, can emerge and be selected through simplistic evolutionary interactions. Often, an evolutionary dynamic applied to a specific game induces no stable equilibrium points at all. Indeed, different evolutionary dynamics can produce outcomes ranging from chaos to convergence (cf., [2] and [3]). Furthermore, games have been constructed whose Nash equilibria can be shown to be unstable under any dynamics satisfying only very mild conditions [4]. In contrast to such specific examples, researchers also have sought to identify broad classes of games for which correspondingly broad classes of dynamics converge to equilibrium, one example of which is potential games [5]. For a broader discussion of these and related issues, see [6,7,8].
We focus here on the recently proposed class of stable games [9]—a generalization of a number of earlier classes of games ranging from concave potential games to symmetric normal form games with an interior evolutionarily stable strategy (ESS). The appealing property of stable games is that their Nash equilibria comprise a convex set that many dynamics are guaranteed to reach [9].
In this paper, we show that stable games can be identified as a special case of the feedback-system-theoretic notion of passive input–output systems. Passivity is an abstraction of energy conservation and dissipation in mechanical and electrical systems [10] that has become a standard tool in the design and analysis of nonlinear feedback systems [10,11,12,13]. It provides conditions under which particular system interconnections will be stable.
The connection to evolutionary games is that an evolutionary dynamic can be viewed as a dynamical system in feedback with a game. Accordingly, after we identify stable games as passive systems, we are guaranteed that play by any admissible passive evolutionary dynamic will result in stability and convergence—provided that one indeed can define an analogous notion of passivity for evolutionary dynamics. It turns out that the various dynamics that guarantee global convergence in stable games do indeed satisfy a natural notion of passivity. In particular, passivity of an evolutionary dynamic can be interpreted as long run correlation between the time derivative of payoffs and the direction of motion (see Equation (54)). While passivity techniques have been used in analysis of game theoretic learning dynamics employed in certain specific engineering models [14,15], the notion of passivity capturing a class of dynamics or games is novel.
The contributions of this paper are summarized as follows:
  • We show that stable games can be formulated as dynamical systems exhibiting an appropriate form of passivity. We develop a complementary notion of passivity for learning dynamics, resulting in sufficient conditions for stability.
  • We show that learning dynamics known to exhibit convergence in stable games do indeed satisfy a relevant passivity condition.
  • The passivity conditions we introduce also apply to more general forms of games and dynamics. We provide stability results for certain games with memory as well as for learning dynamics that employ forecasting heuristics.
  • Finally, we extend our methods to games and dynamics that depend on finite histories of strategy change and proceed to analyze learning techniques that can achieve convergence in certain games with lags or time delays.
In pursuing a broad class of evolutionary dynamics that converges for the class of stable games, the analysis in [9] considers a particular evolutionary dynamic called “excess payoff/target” (EPT) dynamics that generalizes various other evolutionary dynamics. Certain EPT dynamics exhibit a property known as “positive correlation” (cf., its variant, termed virtual positive correlation [16]). In particular, there is positive correlation between the instantaneous payoffs and the instantaneous direction of motion. It turns out that positive correlation alone need not imply convergence for stable games [9]. This realization motivated the introduction of additional properties such as “integrability” or the stronger “separability” specific to EPT dynamics. We will see that, rather than positive correlation, the aforementioned long run correlations provide the essential characteristic for convergence under stable games. Furthermore, these results suggest an interpretation of passive learning that complements the interpretation of stable games as strategic environments exhibiting self-defeating externalities.
An immediate benefit of our characterization, beyond providing a unifying framework to assess stability, is the novel generalizations it enables. Evolutionary game theory has historically placed particular emphasis on the study of memoryless games, in which payoffs are a function of current population levels rather than histories of population levels. Likewise, evolutionary game theory has historically only considered evolutionary dynamics of restricted dimension, in particular with order equal to the total number of strategies across all players. While our definitions include such settings, they are not restricted to them. Dynamic learning schemes that utilize additional, auxiliary states in reckoning strategy changes also can be analyzed using passivity. In particular, we identify games that preserve the convergence properties of passive learning dynamics when they are combined with prevalent forecasting heuristics like smoothing and trend following. Alternatively, certain dynamic games, that is, strategic environments where payoffs can depend on the entire action trajectory, can be shown to exhibit passivity.
Lastly, we probe the limits of the class of passive learning dynamics by suggesting an evolutionary dynamic in which players attempt to update strategies in a contrarian manner. Specifically, they discount payoffs to actions that have seen a rise in popularity over a defined lookback period. This scheme leads to an infinite-dimensional system. We find that this predisposition has no consequences for convergence of passive dynamics in stable games and all other passive strategic environments.
The remainder of this paper is organized as follows. Section 2 presents background material on population games and stable games. Section 3 presents a brief tutorial on passivity analysis for feedback systems. Section 4 contains the main results that define the notion of passivity for evolutionary dynamics and establish stability when coupled with a stable game. This section goes on to present various generalizations. Finally, Section 5 contains concluding remarks.
This paper expands and develops on the work reported by the authors in [17].

2. Background

2.1. Population Games

We begin with a description of single population games. The main results herein extend in a straightforward manner to the multi-population games considered in [9]; we restrict attention to single populations to simplify notation.
A single population has a set of available strategies $S = \{1, 2, \ldots, n\}$. The set of strategy distributions is $X = \{ x \in \mathbb{R}^n_+ : \sum_{i \in S} x_i = 1 \}$. Since strategies lie in the simplex, admissible changes in strategy are restricted to the tangent space $TX = \{ z \in \mathbb{R}^n : \sum_{i \in S} z_i = 0 \}$.
The payoff function $F : X \to \mathbb{R}^n$ is a continuous map associating each strategy distribution with a payoff vector, so that $F_i : X \to \mathbb{R}$ is the payoff to strategy $i \in S$.
A state $x \in X$ is a Nash equilibrium, denoted $x \in NE(F)$, if each strategy in the support of x receives the maximum payoff available to the population, i.e.,
$$x \in NE(F) \iff \big[ x_i > 0 \implies F_i(x) \ge F_j(x) \big], \quad \forall\, i, j \in S.$$
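As a concrete illustration (ours, not part of the original text), the following Python sketch encodes a three-strategy population game and checks the Nash equilibrium condition above; the rock–paper–scissors payoff matrix and the tolerance are illustrative choices.

```python
import numpy as np

# Standard rock-paper-scissors payoff matrix, used purely as an example of F(x) = A x.
A_RPS = np.array([[0., -1., 1.],
                  [1., 0., -1.],
                  [-1., 1., 0.]])

def payoffs(x, A=A_RPS):
    """Payoff vector F(x) = A x of a linear population game."""
    return A @ x

def is_nash(x, F, tol=1e-9):
    """Check x in NE(F): every strategy in the support of x earns the maximal payoff."""
    p = F(x)
    return all(p[i] >= p.max() - tol for i in range(len(x)) if x[i] > tol)

x_star = np.ones(3) / 3            # uniform strategy distribution
print(is_nash(x_star, payoffs))    # True: uniform play is a Nash equilibrium of RPS
```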

2.2. Stable Games

We say that $F : X \to \mathbb{R}^n$ is a stable game if
$$(y - x)^T \big( F(y) - F(x) \big) \le 0 \quad \text{for all } x, y \in X.$$
For a detailed discussion with examples of stable games, see [9]. Many evolutionary dynamics are well-behaved when restricted to stable games. The above definition admits the following equivalent characterization when F is continuously differentiable (i.e., $C^1$).
Theorem 2.1 [9] Suppose the population game F is continuously differentiable. Then F is a stable game if and only if the Jacobian matrix $DF(x)$ is negative semidefinite with respect to $TX$ for all $x \in X$, i.e., $z^T DF(x)\, z \le 0$ for all $z \in TX$ and $x \in X$.
If $DF(x)$ is negative definite with respect to $TX$ for all $x \in X$, then we say that F is strictly stable.
Reference [9] interprets the definition of stable games in terms of self-defeating externalities: the payoff improvements to strategies being adopted are dominated by the payoff improvements to strategies being abandoned. This is easy to see by letting $z = e_j - e_i \in TX$, the difference between two unit vectors, and noting that (by definition) $z^T DF(x)\, z \le 0$.
Many games are known to be stable, including zero-sum games, symmetric multi/network zero-sum games, and concave potential games. For a thorough discussion, see [9].
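To make Theorem 2.1 concrete, here is a small Python sketch (ours; the sampling-based check and the example matrix are illustrative) that tests $z^T DF(x)\, z \le 0$ over random tangent directions $z \in TX$ for an affine game $F(x) = Ax$, whose Jacobian is simply A.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0., -1., 1.],       # rock-paper-scissors: a zero-sum, hence stable, game
              [1., 0., -1.],
              [-1., 1., 0.]])

def looks_stable(A, trials=1000, tol=1e-12):
    """Sample-based check of z^T DF(x) z <= 0 for z in TX.

    For an affine game F(x) = A x + b the Jacobian DF(x) = A is constant,
    so the base point x plays no role in the test."""
    n = A.shape[0]
    for _ in range(trials):
        z = rng.standard_normal(n)
        z -= z.mean()              # project onto TX: components sum to zero
        if z @ A @ z > tol:
            return False
    return True

print(looks_stable(A))             # True: RPS passes the stable-game test
```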

3. A Primer on Passivity in Feedback Systems

This section presents a brief tutorial on passivity analysis for feedback systems. For further discussion, see [10,11,12,13].

3.1. Circuit Origins

The term “passivity” is derived from circuit theory. Figure 1a illustrates a circuit network with two terminals. Let v ( t ) denote the time-varying voltage across the terminals and i ( t ) denote the corresponding time-varying current into the network. The relationship between v ( · ) and i ( · ) is determined by the specifics of the network. Let E ( t ) denote the energy stored in the network. The instantaneous power delivered to the network at time t is v ( t ) · i ( t ) . If the network is passive, it does not have any internal “active” or “power generating” components. Accordingly, over any time interval [ t 0 , t ] ,
$$E(t) \le E(t_0) + \int_{t_0}^{t} v(\tau)\, i(\tau)\, d\tau.$$
Figure 1. (a) One-port circuit network. (b) Interconnection of two circuit networks.
That is, the change in stored energy is less than the supplied energy (“less than” because energy may have dissipated, e.g., through resistors).
Figure 1b illustrates an interconnection of two networks. If both networks are passive, i.e., have no power generation, then one would expect that the stored energy, which can be transferred from network to network, does not increase overall.
Mathematically, passivity of the networks individually means
$$E_1(t) \le E_1(t_0) + \int_{t_0}^{t} v_1(\tau)\, i_1(\tau)\, d\tau,$$
$$E_2(t) \le E_2(t_0) + \int_{t_0}^{t} v_2(\tau)\, i_2(\tau)\, d\tau,$$
where $v_j(\cdot)$, $i_j(\cdot)$, and $E_j(\cdot)$, $j \in \{1, 2\}$, are the voltage, current, and energy of the two networks. Furthermore, the interconnection implies that
$$v_1(t) = v_2(t) \quad \text{and} \quad i_1(t) = -i_2(t).$$
Combining these relations with the above passivity conditions implies that
$$E_1(t) + E_2(t) \le E_1(t_0) + E_2(t_0),$$
as expected. Passivity analysis extends these ideas from circuits to feedback interconnections of dynamical systems.

3.2. Input–Output Operators and Feedback Interconnections

A dynamical system can be viewed as a (nonlinear) operator defined on function spaces. Let $L_2$ denote the usual Hilbert space of square integrable functions¹ mapping $[0, \infty) =: \mathbb{R}_+$ to $\mathbb{R}^n$, with inner product $\langle \cdot, \cdot \rangle$ and norm $\| \cdot \|$, i.e., for $f, g \in L_2$,
$$\langle f, g \rangle = \int_0^{\infty} f(t)^T g(t)\, dt,$$
$$\| f \| = \left( \int_0^{\infty} f(t)^T f(t)\, dt \right)^{1/2} = \langle f, f \rangle^{1/2}.$$
Let $L_{2,e}$ denote the “extended” space of functions that are square integrable over finite intervals, i.e.,
$$L_{2,e} = \left\{ f : \mathbb{R}_+ \to \mathbb{R}^n : \int_0^T f(t)^T f(t)\, dt < \infty \text{ for all } T \in \mathbb{R}_+ \right\}.$$
For subsets of functions $U, Y \subseteq L_{2,e}$, an input–output operator is a mapping $S : U \to Y$.
Figure 2. (a) Feedback interconnection with negative feedback. (b) Feedback interconnection with positive feedback.
Figure 2a represents a feedback interconnection of two input–output operators, $S_1 : U \to Y$ and $S_2 : Y \to U$. The feedback interconnection makes the “output” of $S_1$ the “input” of $S_2$. Likewise, the negative of the output of $S_2$ becomes the input of $S_1$ after summation with the external signal $r \in U$. Traditionally in control theory, feedback interconnections use negative feedback as in Figure 2a rather than positive feedback as in Figure 2b. However, we will use positive feedback interconnections for games in the following analysis.
Figure 2a graphically represents the equations
$$u_1 = r - y_2, \qquad y_1 = S_1 u_1, \qquad y_2 = S_2 u_2, \qquad u_2 = y_1,$$
which imply
$$u_1 = r - S_2 S_1 u_1.$$
We assume that for any $r \in U$, there exists a solution $u_1 \in U$. Given this solution $u_1$, one can go on to construct $y_1$, $u_2$, and $y_2$ accordingly.

3.3. Passivity with Input–Output Operators

We now define passivity for input–output operators and present a basic theorem on the stability of feedback interconnections of passive operators. First, for $T \in \mathbb{R}_+$, define the truncated inner product
$$\langle f, g \rangle_T = \int_0^T f(t)^T g(t)\, dt$$
and truncated norm
$$\| f \|_T = \langle f, f \rangle_T^{1/2}.$$
The input–output operator $S : U \to Y$ is passive if there exists a constant α such that
$$\langle S u, u \rangle_T \ge \alpha, \quad \text{for all } u \in U,\ T \in \mathbb{R}_+,$$
and input strictly passive if there exist $\beta > 0$ and α such that
$$\langle S u, u \rangle_T \ge \alpha + \beta \langle u, u \rangle_T, \quad \text{for all } u \in U,\ T \in \mathbb{R}_+.$$
Theorem 3.1 Consider the feedback interconnection of $S_1$ and $S_2$ defined by Equation (11). Assume $S_1$ is passive and $S_2$ is input strictly passive. Then $r \in L_2$ implies $y_1 \in L_2$.
In words, Theorem 3.1 can be interpreted as the stability of a feedback interconnection between a passive and strictly passive system. The proof is straightforward. Using Equation (11),
$$\langle r, y_1 \rangle_T = \langle u_1 + y_2, y_1 \rangle_T = \langle u_1, S_1 u_1 \rangle_T + \langle S_2 u_2, u_2 \rangle_T.$$
Therefore
$$\langle r, y_1 \rangle_T \ge \alpha_1 + \alpha_2 + \beta_2 \langle u_2, u_2 \rangle_T = \alpha_1 + \alpha_2 + \beta_2 \langle y_1, y_1 \rangle_T,$$
where $\alpha_1$, $\alpha_2$, and $\beta_2$ are the passivity constants associated with $S_1$ and $S_2$. By the Schwarz inequality,
$$\| r \|_T\, \| y_1 \|_T \ge \alpha_1 + \alpha_2 + \beta_2 \| y_1 \|_T^2.$$
By assumption, $r \in L_2$, and so $\| r \|_T$ is uniformly bounded over $T \ge 0$. Since $\beta_2 > 0$, it follows that $\| y_1 \|_T$ is also uniformly bounded over $T \ge 0$.

3.4. Passivity with State Space Systems

The previous discussion was in terms of general input–output operators and illustrates that passivity methods need not be restricted to dynamical systems described by differential equations. We now present specialized results when the input–output operators are, in fact, represented by a system of differential equations. Consider the state space system
$$\dot{x} = f(x, u), \quad x(0) = x_0, \qquad y = g(x, u),$$
where $u(t) \in U \subseteq \mathbb{R}^m$, $y(t) \in \mathbb{R}^m$, and $x_0 \in M \subseteq \mathbb{R}^n$. Let us assume that for some classes of functions, U and Y, and for any initial condition in M, there exists a solution for all $u \in U$ resulting in $y \in Y$ and $x(t) \in M$ for all $t \ge 0$ (i.e., M is invariant). These equations define a family of input–output operators (indexed by the initial condition $x_0$), mapping $u(\cdot)$ to $y(\cdot)$. Accordingly, one can apply input–output operator notions of passivity for each initial condition.
For state space systems, however, the traditional approach is to define passivity irrespective of initial conditions as follows. A state space system is passive if there exists a storage function $L : M \to \mathbb{R}_+$ such that for all $x_0 \in M$, $u \in U$, and $t \in \mathbb{R}_+$,
$$L(x(t)) \le L(x_0) + \int_0^t u(\tau)^T y(\tau)\, d\tau.$$
The parallel with the discussion of circuits is that the “input” is identified with the voltage and the “output” with the current (or vice versa), which in turn motivates the terminology of a “storage” function.
We will focus on $C^1$ storage functions, in which case the above definition may be written as
$$\dot{L} := DL(x)\, f(x, u) \le u^T y = u^T g(x, u)$$
for all $x \in M$ and $u \in U$. Inequalities such as (22) are referred to as “dissipation inequalities”.
We now consider a feedback interconnection of two state space systems as in Figure 2a, in which each system, S i , is represented by state space equations
$$\dot{x}_i = f_i(x_i, u_i), \quad x_i(0) = x_{i0} \in M_i, \quad i \in \{1, 2\},$$
$$y_i = g_i(x_i, u_i), \quad i \in \{1, 2\},$$
$$u_1 = r - y_2, \qquad u_2 = y_1.$$
Again, we assume the existence of solutions.
Theorem 3.2 Consider the feedback interconnection of $S_1$ and $S_2$ defined by Equation (23). Assume each $S_i$ is passive with storage function $L_i$. Then for all $x_1(0) \in M_1$, $x_2(0) \in M_2$, and $t \in \mathbb{R}_+$,
$$L_1(x_1(t)) + L_2(x_2(t)) \le L_1(x_1(0)) + L_2(x_2(0)) + \int_0^t r(\tau)^T y_1(\tau)\, d\tau.$$
An interpretation is that the closed-loop system mapping r to y 1 is passive with storage function L 1 + L 2 . The derivation is an immediate consequence of the definitions.
In case r = 0, Theorem 3.2 implies that $L_1(x_1(t)) + L_2(x_2(t))$ is non-increasing along solutions of the state space system (23). Compare to the previous discussion on circuits, in which energy was non-increasing for interconnected circuits. This monotonicity can have stability implications depending on the underlying specifics. For example, if both $L_1$ and $L_2$ are positive definite with $L_i(0) = 0$, and if the passivity inequalities of (22) are strict for $x_i \ne 0$, then the origin is locally asymptotically stable. These details are omitted here. The main point for now is that, as before, there are stability implications associated with the feedback interconnection of two passive systems.
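The following numerical sketch (ours; the two first-order systems and their storage functions are arbitrary illustrative choices) interconnects two passive state space systems in the negative-feedback configuration of Figure 2a with r = 0 and confirms that the total storage $L_1 + L_2$ is non-increasing along a simulated trajectory.

```python
import numpy as np

# Each subsystem: x_i' = -x_i + u_i, y_i = x_i, with storage L_i = x_i^2 / 2.
# Then dL_i/dt = x_i(-x_i + u_i) <= u_i y_i, so each subsystem is passive.
def simulate(x1=1.0, x2=-0.5, dt=1e-3, steps=5000):
    total_storage = []
    for _ in range(steps):
        y1, y2 = x1, x2
        u1, u2 = -y2, y1                 # negative feedback interconnection, r = 0
        x1 += dt * (-x1 + u1)            # forward Euler step for subsystem 1
        x2 += dt * (-x2 + u2)            # forward Euler step for subsystem 2
        total_storage.append(0.5 * x1**2 + 0.5 * x2**2)
    return np.array(total_storage)

L_total = simulate()
print(bool(np.all(np.diff(L_total) <= 1e-12)))   # True: L1 + L2 never increases
```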

4. Main Results

4.1. Preview

Traditionally, we think of games as memoryless mappings from strategy $x \in X$ to payoff $F(x)$. In a dynamic setting, we can extend this viewpoint to a mapping of strategy trajectories $x(\cdot)$ to payoff trajectories $\pi(\cdot) = F(x(\cdot))$. Likewise, evolutionary dynamics (see below) can be viewed as mappings from payoff trajectories to strategy trajectories. Accordingly, we can view evolutionary games as a feedback interconnection. In terms of the positive feedback diagram of Figure 2b, we will set r = 0 and associate $S_1$ with an evolutionary dynamic process and $S_2$ with a game. Accordingly, $u_1 = y_2$ represents payoff trajectories and $y_1 = u_2$ represents strategy trajectories.
Here we find the first departure from standard passivity analysis. Namely, we are dealing with positive feedback as in Figure 2b instead of the traditional negative feedback of Figure 2a, and so we will need to make suitable (simple) modifications to the analysis.
There is a more significant departure from standard passivity analysis that is at the heart of associating stable games with passive systems. As mentioned above, let us think of a passive game as an input–output operator mapping strategy trajectories to payoff trajectories. Then for $C^1$ games,
$$\dot{\pi}(t) = DF(x(t))\, \dot{x}(t).$$
By the definition of stable games,
$$\dot{x}(t)^T \dot{\pi}(t) = \dot{x}(t)^T DF(x(t))\, \dot{x}(t) \le 0$$
pointwise in time. Consequently,
$$\int_0^T \dot{x}(t)^T \dot{\pi}(t)\, dt = \langle \dot{\pi}, \dot{x} \rangle_T \le 0.$$
While this inequality resembles a passivity condition, we find two differences. First, the direction of the inequality has changed. This reversal suits the shift to positive feedback interconnections and will result in a definition of “anti-passivity”.
Second, and more significantly, the inner product involves the derivatives of the input and output. The approach taken in the precursor paper [17] was to represent a $C^1$ game as an input–output system through these derivatives as follows:
$$\dot{x} = u,$$
$$y = \dot{\pi} = DF(x)\, u.$$
Here, we see that the “input” u is actually $\dot{x}$, and the “output” y is $\dot{\pi}$. Using such associations enables the application of standard passivity theory at the cost of associating time derivatives as inputs and outputs.
In this paper, we take an alternative and more direct (but essentially equivalent) approach. We will continue to view evolutionary dynamics as mappings from payoff trajectories (not their derivatives) to strategy trajectories. However, we will introduce a notion of “differential” or simply δ-passivity (see also [18]) which mimics standard passivity, but uses derivatives of inputs and outputs as the integrand for passivity conditions.
Our notion of differential passivity can be interpreted as a local version of “incremental passivity”, defined in [11] as
$$\langle S u_1 - S u_2, u_1 - u_2 \rangle_T \ge 0 \quad \text{for all } u_1, u_2,$$
which closely resembles the original definition of Equation (2).

4.2. δ-Passivity and δ-Anti-Passivity

We begin with definitions for input–output operators. Let $S : U \to Y$ be an input–output operator. We assume both U and Y are subsets of locally Lipschitz functions over $[0, \infty)$. This assumption implies a requisite differentiability (for almost all $t \in \mathbb{R}_+$) as well as certain required boundedness properties.
The input–output operator $S : U \to Y$ is
  • δ-passive if there exists a constant α such that
    $$\langle \dot{(Su)}, \dot{u} \rangle_T \ge \alpha, \quad \text{for all } u \in U,\ T \in \mathbb{R}_+.$$
  • input strictly δ-passive if there exist $\beta > 0$ and α such that
    $$\langle \dot{(Su)}, \dot{u} \rangle_T \ge \alpha + \beta \langle \dot{u}, \dot{u} \rangle_T, \quad \text{for all } u \in U,\ T \in \mathbb{R}_+.$$
  • δ-anti-passive if $-S$ is δ-passive.
  • input strictly δ-anti-passive if $-S$ is input strictly δ-passive.
We now present a stability theorem for δ-passivity analogous to Theorem 3.1 for the positive feedback interconnection of Figure 2b, where $S_1 : U \to Y$ and $S_2 : Y \to U$. The feedback equations are:
$$u_1 = r + y_2, \qquad y_1 = S_1 u_1, \qquad y_2 = S_2 u_2, \qquad u_2 = y_1.$$
We assume existence of solutions and that r is such that $u_1 \in U$ (which is satisfied, in particular, for r = 0).
Theorem 4.1 Consider the feedback interconnection of $S_1$ and $S_2$ defined by Equation (33). Assume $S_1$ is δ-passive and $S_2$ is input strictly δ-anti-passive. Then $\dot{r} \in L_2$ implies $\dot{y}_1 \in L_2$.
The proof parallels that of Theorem 3.1 and is omitted.
We continue with a definition of δ-passivity for state space systems as in Equation (20). We assume that $g(\cdot, \cdot)$ is $C^1$ and that input functions in U are locally Lipschitz continuous. We assume further that functions in U are U-valued, with $U \subseteq \mathbb{R}^m$, and denote by TU the associated tangent space. Throughout this paper, TU will be either $\mathbb{R}^m$ or $TX$.
A state space system is δ-passive if there exists a $C^1$ storage function $L : M \times \mathbb{R}^m \to \mathbb{R}_+$ such that for all $x \in M$, $u \in U$, and $\dot{u} \in TU$,
$$\nabla_x L(x, u)\, f(x, u) + \nabla_u L(x, u)\, \dot{u} \le \dot{u}^T \big( \nabla_x g(x, u)\, f(x, u) + \nabla_u g(x, u)\, \dot{u} \big),$$
or more succinctly,
$$\dot{L} \le \dot{u}^T \dot{y}.$$
It is δ-anti-passive if
$$\dot{L} \le -\dot{u}^T \dot{y}.$$
In case the state space system has no state, as in $y = g(u)$, we take L = 0 in the above definitions.
We now state a stability theorem that is tailored for the forthcoming discussion. We are concerned with a special case of positive feedback depicted in Figure 2b. Each system, S i , is represented by state space equations, as in
$$\dot{x}_i = f_i(x_i, u_i), \quad x_i(0) = x_{i0} \in M_i, \quad i \in \{1, 2\},$$
$$y_1 = g_1(x_1), \qquad y_2 = g_2(x_2, u_2),$$
$$u_1 = y_2, \qquad u_2 = y_1.$$
In particular, we have set r = 0 and eliminated dependence of y 1 on u 1 to avoid any pathological issues with algebraic loops. Again, we assume the existence of solutions.
Theorem 4.2 Consider the feedback interconnection of S 1 and S 2 defined by Equation (37). Assume S 1 is δ-passive and S 2 is δ-anti-passive, with storage functions L 1 and L 2 , respectively.
  • For all $x_1(0) \in M_1$, $x_2(0) \in M_2$, and $t \in \mathbb{R}_+$,
    $$\dot{L}_1 + \dot{L}_2 \le 0.$$
  • Furthermore, if the level set
    $$\big\{ (x_1, x_2) : L_1(x_1, g_2(x_2, g_1(x_1))) + L_2(x_2, g_1(x_1)) \le c \big\}$$
    with
    $$c = L_1(x_1(0), g_2(x_2(0), g_1(x_1(0)))) + L_2(x_2(0), g_1(x_1(0)))$$
    is compact, and
    $$\dot{L}_1 + \dot{L}_2 \le -\psi(\dot{h})$$
    for some positive definite $\psi(\cdot)$ and $C^1$ function $h : M_1 \times M_2 \to \mathbb{R}^k$, then $\lim_{t \to \infty} \dot{h}(t) = 0$.
Proof. Statement 1 is a direct consequence of the definitions of δ-passive and δ-anti-passive. Expanding the definition of L 1 and L 2 , we see that
$$L_1(x_1, u_1) + L_2(x_2, u_2) = L_1(x_1, g_2(x_2, g_1(x_1))) + L_2(x_2, g_1(x_1))$$
is non-increasing. Statement 2 assumes some “strictness” in passivity expressed through ψ ( h ˙ ) . The conclusion follows from an application of LaSalle’s invariance theorem. ☐
We conclude this section with a discussion relating passivity and δ-passivity. Starting from an original state space system:
$$\dot{x} = f(x, u), \quad x(0) = x_0, \qquad y = g(x, u),$$
we can construct an extended system defined by
$$\dot{u} = \bar{u}, \quad u(0) = u_0,$$
$$\dot{x} = f(x, u), \quad x(0) = x_0,$$
$$\bar{y} = \nabla_x g(x, u)\, f(x, u) + \nabla_u g(x, u)\, \bar{u}.$$
The extended system has as an “input” $\bar{u}$, which equals $\dot{u}$ of the original system. Similarly, the extended system has as an “output” $\bar{y}$, which equals $\dot{y}$ of the original. Also, note that the state of the extended system, $\bar{x}$, is $(x, u)$.
The condition for a storage function, $L(x, u)$, for δ-passivity of the original system is
$$\dot{L} \le \dot{u}^T \dot{y},$$
which is expanded as
$$\nabla_x L(x, u)\, f(x, u) + \nabla_u L(x, u)\, \dot{u} \le \dot{u}^T \big( \nabla_x g(x, u)\, f(x, u) + \nabla_u g(x, u)\, \dot{u} \big).$$
The condition for a storage function, $\bar{L}$, for standard passivity of the extended system is
$$\dot{\bar{L}} \le \bar{u}^T \bar{y}.$$
Since the state of the extended system is $(x, u)$, the resulting expansion is
$$\nabla_x \bar{L}(x, u)\, f(x, u) + \nabla_u \bar{L}(x, u)\, \bar{u} \le \bar{u}^T \big( \nabla_x g(x, u)\, f(x, u) + \nabla_u g(x, u)\, \bar{u} \big).$$
Identifying $\bar{u}$ with $\dot{u}$, we see that δ-passivity for the original system corresponds to standard passivity for the extended system.
This correspondence was used in the precursor paper [17] by defining an extended system for both the evolutionary dynamic and population. That approach resulted in a duplication of states in the two extended systems. The present approach, by defining δ-passivity, avoids such difficulties and is more direct.

4.3. Application to Stable Games and Evolutionary Dynamics

4.3.1. Stable Games

We now state formally the motivating connection between stable games and passivity. Let $\mathcal{X}$ denote the set of locally Lipschitz X-valued functions over $\mathbb{R}_+$, and $\mathcal{P}$ the set of locally Lipschitz $\mathbb{R}^n$-valued functions over $\mathbb{R}_+$.
Theorem 4.3 A $C^1$ stable game, viewed as a mapping from $\mathcal{X}$ to $\mathcal{P}$, is δ-anti-passive, i.e.,
$$\langle \dot{(F(x))}, \dot{x} \rangle_T \le 0, \quad \text{for all } x(\cdot) \in \mathcal{X},\ T \in \mathbb{R}_+.$$
Furthermore, if F is strictly stable, then the mapping is input strictly δ-anti-passive, i.e., for some $\beta > 0$,
$$\langle \dot{(F(x))}, \dot{x} \rangle_T \le -\beta \langle \dot{x}, \dot{x} \rangle_T, \quad \text{for all } x(\cdot) \in \mathcal{X},\ T \in \mathbb{R}_+.$$
Proof. By definition,
$$\langle \dot{(F(x))}, \dot{x} \rangle_T = \int_0^T \dot{x}(\tau)^T DF(x(\tau))\, \dot{x}(\tau)\, d\tau.$$
δ-anti-passivity follows since the integrand is non-positive, by negative semidefiniteness of $DF(x)$ with respect to $TX$ and $\dot{x}(\tau) \in TX$. Furthermore, if F is strictly stable, there exists a $\beta > 0$ such that
$$\dot{x}(\tau)^T DF(x(\tau))\, \dot{x}(\tau) \le -\beta\, \dot{x}(\tau)^T \dot{x}(\tau),$$
which implies the desired result. ☐
In identifying stable games as δ-anti-passive systems, we see that evolutionary dynamics that are δ-passive are the complement of stable games, in that the stability Theorems 4.1–4.2 are applicable for any δ-passive dynamic. Inspecting the definition of δ-passivity, we will require that for some α,
$$\int_0^T \dot{x}(t)^T \dot{p}(t)\, dt \ge \alpha.$$
Since this inequality holds for all $T \ge 0$, it implies a long run correlation between the flow of the population state and the flow of payoffs, namely,
$$\liminf_{T \to \infty} \frac{1}{T} \int_0^T \dot{x}(t)^T \dot{p}(t)\, dt \ge 0.$$
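As a numerical sanity check on Theorem 4.3 (our illustration; the game matrix and trajectory are arbitrary choices), the sketch below drives an affine strictly stable game with a smooth strategy trajectory $x(\cdot)$ in the simplex and verifies that $\langle \dot{(F(x))}, \dot{x} \rangle_T \le 0$.

```python
import numpy as np

# Symmetric negative definite A: an affine (concave potential) stable game, F(x) = A x.
A = -np.array([[2., 1., 0.],
               [1., 2., 1.],
               [0., 1., 2.]])

def x_traj(t):
    """An arbitrary smooth trajectory in the simplex (illustrative choice)."""
    w = np.array([1.0 + 0.3 * np.sin(t), 1.0 + 0.3 * np.cos(2 * t), 1.0])
    return w / w.sum()

dt, T = 1e-3, 10.0
ts = np.arange(0.0, T, dt)
xs = np.array([x_traj(t) for t in ts])
xdot = np.gradient(xs, dt, axis=0)        # numerical time derivative of x(t)
pdot = xdot @ A.T                         # dF(x)/dt = A xdot for the affine game
inner_T = np.sum(np.einsum('ij,ij->i', pdot, xdot)) * dt
print(inner_T <= 0.0)                     # True: the delta-anti-passivity inequality holds
```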

4.3.2. Passive Evolutionary Dynamics

We now examine evolutionary dynamics from the perspective of δ-passivity. A general form for evolutionary dynamics is
$$\dot{x} = V(x, F(x)),$$
which describes the evolution of the strategy state $x \in X$ for the population game F. From a feedback interconnection perspective, an evolutionary dynamic describes how strategy trajectories evolve in response to payoff trajectories. Accordingly, we will remove any explicit game description and write
$$\dot{x} = V(x, p).$$
In terms of previous discussions, the payoff vector p is an “input” and the strategy x is the “output”. Again, in establishing that an evolutionary dynamic is δ-passive, we do not assert that $p = F(x)$. Rather, $p(\cdot)$ is drawn from a class of trajectories. Since stable games are δ-anti-passive as mappings from $\mathcal{X}$ to $\mathcal{P}$, we are interested in conditions for an evolutionary dynamic to be δ-passive as a mapping from $\mathcal{P}$ to $\mathcal{X}$. Specializing the definition of δ-passivity for state space systems to the current setup, we seek a storage function $L : X \times \mathbb{R}^n \to \mathbb{R}_+$ such that
$$\nabla_x L(x, p)\, V(x, p) + \nabla_p L(x, p)\, \dot{p} \le \dot{p}^T V(x, p)$$
for all admissible $\dot{p}$. Once the above inequality is established, one can employ Theorems 4.1 and 4.2 to make conclusions about stability.
We will focus specifically on so-called excess payoff target (EPT) dynamics [9,19]. These dynamics form a class of evolutionary dynamics that contain several well studied cases.
First, define the excess payoff function $\xi : X \times \mathbb{R}^n \to \mathbb{R}^n$ by
$$\xi(x, p) = p - (x^T p) \cdot \mathbf{1},$$
where $\mathbf{1}$ is a vector of ones. EPT dynamics take the form
$$\dot{x} = V^{\mathrm{EPT}}(x, p) := \tau(\xi(x, p)) - \big( \mathbf{1}^T \tau(\xi(x, p)) \big) \cdot x,$$
where $\tau : \mathbb{R}^n \to \mathbb{R}^n_+$ is called the revision protocol (see [9] for a thorough discussion). Following [9], we make the following assumptions:
  • positive correlation: $V^{\mathrm{EPT}}(x, p) \ne 0 \implies p^T V^{\mathrm{EPT}}(x, p) > 0$.
  • integrability: $\tau = \nabla \gamma$ for some $C^1$ function $\gamma : \mathbb{R}^n \to \mathbb{R}$.
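A canonical member of the EPT class is the Brown–von Neumann–Nash (BNN) dynamic, obtained by taking $\tau(\xi) = [\xi]_+$ componentwise (so that $\gamma(\xi) = \tfrac{1}{2}\sum_i [\xi_i]_+^2$). The Python sketch below (ours; the game, step size, and horizon are illustrative choices) simulates BNN dynamics in the rock–paper–scissors game, a stable game.

```python
import numpy as np

A = np.array([[0., -1., 1.],
              [1., 0., -1.],
              [-1., 1., 0.]])             # RPS payoffs: F(x) = A x, a stable (zero-sum) game

def bnn_rhs(x, p):
    """BNN dynamics: EPT dynamics with revision protocol tau(xi) = max(xi, 0)."""
    xi = p - (x @ p) * np.ones_like(p)    # excess payoff vector xi(x, p)
    tau = np.maximum(xi, 0.0)
    return tau - tau.sum() * x            # V_EPT(x, p)

x = np.array([0.8, 0.1, 0.1])
dt = 1e-3
for _ in range(50000):                    # simple forward Euler integration
    x += dt * bnn_rhs(x, A @ x)
print(np.round(x, 3))                     # drifts toward the mixed equilibrium (1/3, 1/3, 1/3)
```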
Finally, let $\mathcal{P}_\rho$ denote the subset
$$\big\{ p \in \mathcal{P} : \| p(t) \| \le \rho \ \text{for all } t \in \mathbb{R}_+ \big\},$$
where $\| \cdot \|$ denotes the Euclidean norm on $\mathbb{R}^n$.
Theorem 4.4 For any $\rho > 0$, EPT dynamics are δ-passive as a mapping from $\mathcal{P}_\rho$ to $\mathcal{X}$ with storage function $\gamma(\xi(x, p)) + C$ for some constant C.
Proof. Following [9], take as a candidate storage function
$$L(x, p) = \gamma(\xi(x, p)).$$
From integrability of τ,
$$\begin{aligned}
\nabla_x L(x, p)\, \dot{x} + \nabla_p L(x, p)\, \dot{p}
&= \tau(\xi(x, p))^T \nabla_x \xi(x, p)\, \dot{x} + \tau(\xi(x, p))^T \nabla_p \xi(x, p)\, \dot{p} \\
&= -\tau(\xi(x, p))^T \mathbf{1} \cdot p^T V^{\mathrm{EPT}}(x, p) + \tau(\xi(x, p))^T ( I - \mathbf{1} \cdot x^T )\, \dot{p} \\
&= -\big( \mathbf{1}^T \tau(\xi(x, p)) \big) \big( p^T V^{\mathrm{EPT}}(x, p) \big) + \big( \tau(\xi(x, p))^T - ( \tau(\xi(x, p))^T \mathbf{1} ) \cdot x^T \big)\, \dot{p} \\
&= -\big( \mathbf{1}^T \tau(\xi(x, p)) \big) \big( p^T V^{\mathrm{EPT}}(x, p) \big) + \dot{p}^T V^{\mathrm{EPT}}(x, p) \\
&\le \dot{p}^T V^{\mathrm{EPT}}(x, p) = \dot{p}^T \dot{x},
\end{aligned}$$
where the last inequality is due to non-negativity of τ and positive correlation.
The remainder of the proof resolves the technicality that storage functions are defined to be non-negative. Set
$$C = -\min \big\{ \gamma(\xi(x, p)) : x \in X,\ \| p \| \le \rho \big\}.$$
Then $\gamma(\xi(x, p)) + C$ is non-negative for all $x \in X$ and $\| p \| \le \rho$. ☐
The proof closely follows the stability proof in [9] that establishes γ ( ξ ( x , F ( x ) ) ) as a Lyapunov function. This resemblance should not be surprising, in light of Theorem 4.2, which establishes that the sum of the storage functions in a feedback interconnection is non-increasing.
An important difference here is that the proof does not specify the origins of the payoff trajectory. That is, we do not presume that p ( t ) = F ( x ( t ) ) for some stable game. Accordingly, the established δ-passivity will have stability implications for “generalized” stable games (see forthcoming sections).
The above proposition also establishes that EPT dynamics are δ-passive as an input–output operator. In particular, since p is the “input” and x is the “output”,
$$\gamma(\xi(x(T), p(T))) - \gamma(\xi(x(0), p(0))) \le \langle \dot{x}, \dot{p} \rangle_T.$$
One can construct a suitable passivity constant α by taking extrema over $x(0)$, $x(T)$, $p(0)$, and $p(T)$, as in the construction of C.
It also is possible to establish δ-passivity for other learning dynamics considered in [9]. As in the case with EPT dynamics, the proofs parallel previous stability proofs in [9], but with new interpretations with broader implications. The only subtlety is that, as before, we do not associate p = F ( x ) in the process of establishing passivity.
We illustrate this argument for so-called impartial pairwise comparison dynamics, which are not of the EPT form [9]. First, define Lipschitz continuous switch rates
$$\phi_j : \mathbb{R} \to \mathbb{R}_+$$
with the property that
$$\phi_j(\delta) > 0 \iff \delta > 0.$$
Impartial pairwise comparison dynamics are defined as
$$\dot{x}_i = \sum_{j=1}^{n} x_j\, \phi_i(p_i - p_j) - \sum_{j=1}^{n} x_i\, \phi_j(p_j - p_i).$$
The interpretation from [9] is that the flow from strategy i to j depends on the relative payoffs, p i and p j . Furthermore, impartiality means that the flow rate, ϕ j ( · ) , only depends on the destination (and not origin) strategy.
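The best-known impartial pairwise comparison dynamic is the Smith dynamic, with switch rates $\phi_j(d) = [d]_+$. The sketch below (ours; the game and integration parameters are illustrative) implements Equation (67) for the rock–paper–scissors game.

```python
import numpy as np

A = np.array([[0., -1., 1.],
              [1., 0., -1.],
              [-1., 1., 0.]])              # RPS payoffs: F(x) = A x

def smith_rhs(x, p):
    """Impartial pairwise comparison (Smith) dynamics with phi_j(d) = max(d, 0)."""
    n = len(x)
    xdot = np.zeros(n)
    for i in range(n):
        inflow = sum(x[j] * max(p[i] - p[j], 0.0) for j in range(n))    # switches into i
        outflow = sum(x[i] * max(p[j] - p[i], 0.0) for j in range(n))   # switches out of i
        xdot[i] = inflow - outflow
    return xdot

x = np.array([0.7, 0.2, 0.1])
dt = 1e-3
for _ in range(50000):                      # forward Euler integration
    x += dt * smith_rhs(x, A @ x)
print(np.round(x, 3))                       # approaches the mixed equilibrium of RPS
```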
Theorem 4.5 Impartial pairwise comparison dynamics defined by Equation (67) are δ-passive as a mapping from $\mathcal{P}$ to $\mathcal{X}$.
Proof. Following [9], take as a candidate storage function
$$L(x, p) = \sum_{i=1}^{n} x_i \sum_{j=1}^{n} \int_0^{p_j - p_i} \phi_j(s)\, ds.$$
The derivative is
$$\dot{L} = \sum_{i=1}^{n} \left[ \dot{x}_i \sum_{j=1}^{n} \int_0^{p_j - p_i} \phi_j(s)\, ds + x_i \sum_{j=1}^{n} \phi_j(p_j - p_i)\, (\dot{p}_j - \dot{p}_i) \right].$$
Note that we do not take $\dot{p} = DF(x)\, \dot{x}$.
Arguments in [9] establish that the summation of the first term satisfies
$$\sum_{i=1}^{n} \dot{x}_i \sum_{j=1}^{n} \int_0^{p_j - p_i} \phi_j(s)\, ds \le 0.$$
Rearranging the summation of the second term,
$$\begin{aligned}
\sum_{i=1}^{n} x_i \sum_{j=1}^{n} \phi_j(p_j - p_i)\, (\dot{p}_j - \dot{p}_i)
&= \sum_{j=1}^{n} \dot{p}_j \sum_{i=1}^{n} x_i\, \phi_j(p_j - p_i) - \sum_{i=1}^{n} \dot{p}_i \sum_{j=1}^{n} x_i\, \phi_j(p_j - p_i) \\
&= \sum_{i=1}^{n} \dot{p}_i \left[ \sum_{j=1}^{n} x_j\, \phi_i(p_i - p_j) - \sum_{j=1}^{n} x_i\, \phi_j(p_j - p_i) \right] \\
&= \sum_{i=1}^{n} \dot{p}_i\, \dot{x}_i.
\end{aligned}$$
Therefore,
$$\dot{L} \le \dot{p}^T \dot{x},$$
as desired. ☐

4.3.3. Dynamically Modified Payoffs

We now illustrate how passivity methods can be used for the analysis of evolutionary dynamics with auxiliary states. In this section, we consider evolutionary dynamics acting on dynamically modified payoffs. These modifications can be interpreted in two ways: (i) dynamic modification as part of an evolutionary process coupled with a static game, or (ii) a game with dynamic dependencies coupled with a standard evolutionary dynamic. Figure 3 illustrates these two perspectives. In either case, the interconnection, and hence analysis, remains the same.
A consequence of dynamic modifications is the introduction of auxiliary states other than the strategy states. This setting is a departure from much of the literature on evolutionary games, which, almost exclusively, considers evolutionary dynamics whose dimension equals the number of strategies. Likewise, game payoffs typically are static functions of strategies.
Throughout this section, the stable games we consider are affine functions of the state, i.e.,
$$p = A x + b,$$
where A is symmetric negative definite.
Smoothed payoff modification: Payoffs are subject to an exponentially weighted moving average. Given a payoff stream p ( t ) , the smoothed payoffs are
$$\tilde{p}(t) = e^{-\lambda t}\, p(0) + \int_0^t \lambda\, e^{-\lambda (t - \tau)}\, p(\tau)\, d\tau.$$
Figure 3. (a) Static game with dynamically modified evolution. (b) Standard evolution with dynamically modified payoffs.
An effect of the averaging is to smooth out short term fluctuations in order to isolate longer term trends.
In state space form, the mapping from strategies to modified payoffs is described by
$$\dot{\tilde{p}} = \lambda ( A x + b - \tilde{p} ).$$
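A minimal sketch (ours; the game, λ, and the choice of BNN as the learning rule are illustrative assumptions) of the smoothing filter in Equation (77) in feedback with a δ-passive evolutionary dynamic: the learning rule responds to the filtered payoffs p̃ rather than to Ax + b directly.

```python
import numpy as np

A = -np.array([[2., 1., 0.],
               [1., 2., 1.],
               [0., 1., 2.]])             # symmetric negative definite: affine stable game
b = np.array([1.0, 0.5, 0.0])
lam = 2.0                                 # smoothing rate (illustrative)

def bnn_rhs(x, p):
    """BNN dynamics (delta-passive, per Theorem 4.4)."""
    xi = p - (x @ p) * np.ones_like(p)
    tau = np.maximum(xi, 0.0)
    return tau - tau.sum() * x

x = np.array([0.6, 0.3, 0.1])
p_tilde = A @ x + b                       # initialize the filter at the current payoff
dt = 1e-3
for _ in range(20000):
    p_tilde += dt * lam * (A @ x + b - p_tilde)   # Equation (77): smoothed payoffs
    x += dt * bnn_rhs(x, p_tilde)                 # evolution acts on the smoothed payoffs
print(np.round(x, 3))
```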
Theorem 4.6 The state space system of Equation (77) is δ-anti-passive as a mapping from $\mathcal{X}$ to $\mathcal{P}$.
Proof. Take as a candidate storage function
$$L(\tilde{p}, x) = -\frac{\lambda}{2} ( A x + b - \tilde{p} )^T A^{-1} ( A x + b - \tilde{p} ).$$
Then
$$\begin{aligned}
\dot{L} &= \nabla_{\tilde{p}} L(\tilde{p}, x)\, \dot{\tilde{p}} + \nabla_x L(\tilde{p}, x)\, \dot{x} \\
&= -\lambda ( A x + b - \tilde{p} )^T A^{-1} ( A \dot{x} - \dot{\tilde{p}} ) \\
&= -\dot{\tilde{p}}^T \dot{x} + \dot{\tilde{p}}^T A^{-1} \dot{\tilde{p}} \\
&\le -\dot{\tilde{p}}^T \dot{x},
\end{aligned}$$
where the last inequality is due to the negative definiteness of A. ☐
An implication of Theorem 4.6 is that Theorems 4.1 and 4.2 are now applicable for any δ-passive evolutionary dynamic coupled with smoothed payoffs of an affine stable game.
Anticipatory payoff modification: In anticipatory payoff modification, payoff streams are used to construct myopic forecasts of payoffs. Evolution (or learning) then acts on these myopic forecasts rather than the instantaneous payoffs. The concept is inspired by classical methods in feedback control as well as the psychological tendency to extrapolate from past trends. Anticipatory learning was utilized in [3,20,21], where it was shown how anticipatory learning can alter the convergence to both mixed and pure equilibria.
The state space equations for anticipatory payoff modification are
$$\dot{q} = \lambda ( A x + b - q ),$$
$$\tilde{p} = ( A x + b ) + k \lambda ( A x + b - q ).$$
Here, the modified payoff is a combination of the original payoff and an estimate of its derivative, i.e.,
$$\tilde{p} = p + k\, \dot{p}_{\mathrm{est}}.$$
The specific estimate of $\dot{p}$ here is $\dot{q}$ (see the discussion in [20]), which can be constructed from payoff measurements. The scalar k reflects the weighting on the derivative estimate.
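A companion sketch (ours; k, λ, and the game are illustrative assumptions) of the anticipatory modification in Equation (83): the auxiliary state q tracks the payoff, $\lambda(Ax + b - q) = \dot{q}$ serves as the derivative estimate, and a δ-passive dynamic (here BNN) acts on the modified payoffs.

```python
import numpy as np

A = -np.array([[2., 1., 0.],
               [1., 2., 1.],
               [0., 1., 2.]])             # affine strictly stable game: F(x) = A x + b
b = np.array([1.0, 0.5, 0.0])
lam, k = 2.0, 0.5                         # filter rate and derivative weight (illustrative)

def bnn_rhs(x, p):
    xi = p - (x @ p) * np.ones_like(p)
    tau = np.maximum(xi, 0.0)
    return tau - tau.sum() * x

x = np.array([0.6, 0.3, 0.1])
q = A @ x + b                             # auxiliary payoff-tracking state
dt = 1e-3
for _ in range(20000):
    p = A @ x + b
    p_tilde = p + k * lam * (p - q)       # Equation (83): payoff plus derivative estimate
    q += dt * lam * (p - q)
    x += dt * bnn_rhs(x, p_tilde)
print(np.round(x, 3))
```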
Theorem 4.7 The state space system of Equation (83) is δ-anti-passive as a mapping from $\mathcal{X}$ to $\mathcal{P}$.
Proof. Take as a candidate storage function
$$L(q, x) = -\frac{\lambda}{2} ( A x + b - q )^T A^{-1} ( A x + b - q ).$$
Then
$$\dot{L} = -\lambda ( A x + b - q )^T A^{-1} ( A \dot{x} - \dot{q} ).$$
By definition,
$$\dot{\tilde{p}} = A \dot{x} + k \lambda ( A \dot{x} - \dot{q} ).$$
Therefore,
$$\begin{aligned}
\dot{L} &= -\lambda ( A x + b - q )^T A^{-1} ( \dot{\tilde{p}} - A \dot{x} )\, \frac{1}{k \lambda} \\
&= -\dot{q}^T A^{-1} ( \dot{\tilde{p}} - A \dot{x} )\, \frac{1}{k \lambda}.
\end{aligned}$$
Using the above equation for $\dot{\tilde{p}}$ results in
$$\begin{aligned}
\dot{L} &= \frac{1}{k \lambda} \left( \frac{1}{k \lambda} ( \dot{\tilde{p}} - A \dot{x} ) - A \dot{x} \right)^T A^{-1} ( \dot{\tilde{p}} - A \dot{x} ) \\
&= \frac{1}{(k \lambda)^2} ( \dot{\tilde{p}} - A \dot{x} )^T A^{-1} ( \dot{\tilde{p}} - A \dot{x} ) + \frac{1}{k \lambda} \dot{x}^T A \dot{x} - \frac{1}{k \lambda} \dot{x}^T \dot{\tilde{p}} \\
&\le -\frac{1}{k \lambda} \dot{x}^T \dot{\tilde{p}},
\end{aligned}$$
where the last inequality is due to the negative definiteness of A. We see that rescaling $L_{\mathrm{new}}(q, x) = k \lambda\, L(q, x)$ leads to the desired result. ☐
The proof of Theorem 4.7 reveals that the associated dissipation inequality is satisfied strictly because of the two terms involving A. In particular, the anticipatory payoff modification of Equation (83) defines an input strictly δ-anti-passive system. The following representative theorem is then an immediate consequence of Theorem 4.1.
Theorem 4.8 In the positive feedback interconnection defined by Equation (33) (as in Figure 2b), let $S_1$ be any δ-passive evolutionary dynamic mapping payoff trajectories $p(\cdot) \in \mathcal{P}$ to state trajectories $x(\cdot) \in \mathcal{X}$, and let $S_2$ be the anticipatory payoff modification defined by Equation (83). Then $\dot{x}(\cdot) \in L_2$.
In case the evolutionary dynamic has a state space description (e.g., EPT), one can use Theorem 4.2 to conclude $\lim_{t \to \infty} \dot{x}(t) = 0$.
Implicit in the above discussion is that convergence of x ˙ has implications about convergence to Nash equilibrium. Any such conclusions are specific to the underlying evolutionary dynamic.

4.3.4. Contrarian Effect Payoffs

In this section, we illustrate the use of passivity methods in the presence of lags or time delays. In this model, players perceive advantages in avoiding strategies that have seen a net increase in recent usage. In particular, for a fixed lag $\ell > 0$, payoffs are given by “contrarian effect” payoffs, defined by
$$\tilde{p}(t) = F(x(t)) - \Lambda \big( x(t) - x(t - \ell) \big),$$
where $\Lambda > 0$ is a diagonal scaling matrix. We assume that strategies are initialized by some Lipschitz continuous $x_0 : [-\ell, 0] \to X$, so that for $t \le 0$,
$$x(t) := x_0(t).$$
As intended, an increase in the usage of a strategy diminishes the perceived payoff derived from that strategy. While the contrarian effect defined here may seem simplistic, our main interest is to illustrate the analysis of delays using passivity methods.
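A rough sketch (ours; the lag ℓ, the matrix Λ, the game, and the step size are all illustrative assumptions) of the contrarian payoff modification in Equation (93), using a buffer of past states to supply the lagged term $x(t - \ell)$, again with BNN as the δ-passive learning rule.

```python
import numpy as np
from collections import deque

A = -np.array([[2., 1., 0.],
               [1., 2., 1.],
               [0., 1., 2.]])               # strictly stable game: F(x) = A x + b
b = np.array([1.0, 0.5, 0.0])
Lam = 0.5 * np.eye(3)                       # diagonal contrarian weighting Lambda
ell, dt = 1.0, 1e-3                         # lag and integration step

def bnn_rhs(x, p):
    xi = p - (x @ p) * np.ones_like(p)
    tau = np.maximum(xi, 0.0)
    return tau - tau.sum() * x

x = np.array([0.6, 0.3, 0.1])
lag_steps = int(ell / dt)
history = deque([x.copy()] * lag_steps, maxlen=lag_steps)   # constant initialization x_0
for _ in range(30000):
    x_lagged = history[0]                                   # oldest stored state: x(t - ell)
    p_tilde = A @ x + b - Lam @ (x - x_lagged)              # Equation (93)
    history.append(x.copy())
    x += dt * bnn_rhs(x, p_tilde)
print(np.round(x, 3))
```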
In this section, we will deal exclusively with the input–output operator formulation of passivity. We begin by establishing the following variant of δ-anti-passivity of contrarian effect payoffs. First, define $\mathcal{X}_{x_o}$ to be the restriction of $\mathcal{X}$ to functions with $x(0) = x_o$.
Proposition 4.1 Let F be a strictly stable game with strict passivity constant $\beta_2 > 0$, i.e.,
$$\dot{z}^T DF(x)\, \dot{z} \le -\beta_2\, \dot{z}^T \dot{z}, \quad \forall\, \dot{z} \in TX,\ x \in X.$$
Let $x_0 : [-\ell, 0] \to X$ be Lipschitz continuous. Let $S : \mathcal{X}_{x_0(0)} \to \mathcal{P}$ be the (contrarian effect payoff) input–output operator defined by Equation (93). Then
$$\langle \dot{\tilde{p}}, \dot{x} \rangle_T \le -\beta_2 \langle \dot{x}, \dot{x} \rangle_T + \alpha_2 \| \dot{x} \|_T,$$
with $\alpha_2 = \left( \int_{-\ell}^{0} \dot{x}_0(t)^T \Lambda^2\, \dot{x}_0(t)\, dt \right)^{1/2}$.
Proof. See appendix.
Proposition 4.1 states that contrarian effect payoffs satisfy a version of passivity that deviates slightly from the usual definition of δ-anti-passivity, since the associated passivity lower bound (the role played by α) is $-\alpha_2 \| \dot{x} \|_T$ and depends on the input signal, but only through its norm.
The following theorem now follows from arguments similar to those for Theorem 4.1.
Theorem 4.9 In the positive feedback interconnection defined by Equation (33) (as in Figure 2b), let $S_1$ be any δ-passive evolutionary dynamic mapping payoff trajectories $p(\cdot) \in \mathcal{P}$ to state trajectories $x(\cdot) \in \mathcal{X}$, and let $S_2$ be the contrarian effect payoffs defined by Equation (93) with F strictly stable. Then $\dot{x}(\cdot) \in L_2$.
Proof. Let $\alpha_1$ be the passivity constant of the δ-passive evolutionary dynamic, so that
$$\langle \dot{\tilde{p}}, \dot{x} \rangle_T \ge \alpha_1.$$
By Proposition 4.1, contrarian effect payoffs satisfy
$$\langle \dot{\tilde{p}}, \dot{x} \rangle_T \le -\beta_2 \langle \dot{x}, \dot{x} \rangle_T + \alpha_2 \| \dot{x} \|_T.$$
Combining these inequalities leads to
$$-\alpha_1 + \alpha_2 \| \dot{x} \|_T \ge \beta_2 \langle \dot{x}, \dot{x} \rangle_T,$$
which then implies that $\dot{x}(\cdot) \in L_2$, as desired. ☐

5. Concluding Remarks

This paper has proposed passivity theory as a unifying and extending framework for the study of evolutionary games. In particular, the passivity property of long run correlation between payoff flows and population flows in Equation (54) appears to be a natural complement to the class of stable games. The methods are applicable to generalizations of stable games with dynamic payoffs and to evolutionary dynamics that include auxiliary states. We believe that there is significant potential in bringing in related methods of feedback control theory, such as generalized dissipativity, multipliers, and loop transformations, to complement the more traditional analytical approaches to evolutionary games. A lingering question is to understand the extent to which passive evolutionary dynamics and stable games are complementary. We conjecture that if an evolutionary dynamic is not passive, then one can construct a (generalized) stable game that results in instability. Stated differently, an evolutionary dynamic results in stability for all generalized stable games if and only if it is passive.

Acknowledgments

Research supported by ONR project #N00014-09-1-0751. We thank William H. Sandholm, Georgios Piliouras, Nikhil Chopra, Jason Marden, Georgios Chasparis, Ozan Candogan, and Georgios Kotsalis for helpful discussions.

A. Proof of Proposition 4.1

Contrarian effect payoffs in Equation (93) can be decomposed into two terms,
$$\tilde{p}(t) = \tilde{p}_{\mathrm{SG}}(t) - \tilde{p}_{\mathrm{CE}}(t),$$
i.e., the “stable game” portion,
$$\tilde{p}_{\mathrm{SG}}(t) = F(x(t)),$$
and the “contrarian effect” portion,
$$\tilde{p}_{\mathrm{CE}}(t) = \Lambda \big( x(t) - x(t - \ell) \big).$$
Accordingly,
$$\begin{aligned}
\langle \dot{\tilde{p}}, \dot{x} \rangle_T &= \langle \dot{\tilde{p}}_{\mathrm{SG}}, \dot{x} \rangle_T - \langle \dot{\tilde{p}}_{\mathrm{CE}}, \dot{x} \rangle_T \\
&= \int_0^T \dot{x}(\tau)^T DF(x(\tau))\, \dot{x}(\tau)\, d\tau - \langle \dot{\tilde{p}}_{\mathrm{CE}}, \dot{x} \rangle_T \\
&\le -\beta_2 \langle \dot{x}, \dot{x} \rangle_T - \langle \dot{\tilde{p}}_{\mathrm{CE}}, \dot{x} \rangle_T.
\end{aligned}$$
It then remains to show that
$$\langle \dot{\tilde{p}}_{\mathrm{CE}}, \dot{x} \rangle_T \ge -\alpha_2 \| \dot{x} \|_T.$$
We will need the following lemma, whose proof relies on standard arguments from systems theory (e.g., [11]).
Lemma A.1 For any $v \in L_2$ and $\ell \ge 0$,
$$\int_0^{\infty} \big( v(t) - v(t - \ell) \big)^T v(t)\, dt \ge 0,$$
where $v(t) := 0$ for $t < 0$.
Proof. Define the extension $\bar{v} : (-\infty, \infty) \to \mathbb{R}^n$ by
$$\bar{v}(t) = \begin{cases} 0, & t < 0; \\ v(t), & t \ge 0. \end{cases}$$
Let $\hat{v}$ be the Fourier transform of $\bar{v}$, i.e.,
$$\hat{v}(j\omega) := \int_{-\infty}^{\infty} \bar{v}(t)\, e^{-j\omega t}\, dt.$$
Likewise, let
$$\bar{w}(t) = \bar{v}(t) - \bar{v}(t - \ell)$$
and let $\hat{w}$ be the Fourier transform of $\bar{w}$. Parseval's Theorem states that
$$\int_{-\infty}^{\infty} \bar{w}(t)^T \bar{v}(t)\, dt = \frac{1}{2\pi} \int_{-\infty}^{\infty} \hat{w}(j\omega)^* \hat{v}(j\omega)\, d\omega,$$
where the superscript “*” denotes complex conjugate transpose. Using that
$$\hat{w}(j\omega) = \hat{v}(j\omega) - e^{-j\omega \ell}\, \hat{v}(j\omega)$$
results in
$$\begin{aligned}
\int_{-\infty}^{\infty} \bar{w}(t)^T \bar{v}(t)\, dt &= \frac{1}{2\pi} \int_{-\infty}^{\infty} \hat{v}(j\omega)^* \big( I - e^{j\omega \ell} I \big)\, \hat{v}(j\omega)\, d\omega \\
&= \frac{1}{2\pi} \int_{-\infty}^{\infty} \hat{v}(j\omega)^* \Big( I - \tfrac{1}{2} \big( e^{j\omega \ell} + e^{-j\omega \ell} \big) I \Big)\, \hat{v}(j\omega)\, d\omega \\
&= \frac{1}{2\pi} \int_{-\infty}^{\infty} \big( 1 - \cos(\omega \ell) \big)\, \hat{v}(j\omega)^* \hat{v}(j\omega)\, d\omega \\
&\ge 0,
\end{aligned}$$
as desired. ☐
Lemma A.1 almost provides the desired conclusion, except that
$$\langle \dot{\tilde{p}}_{\mathrm{CE}}, \dot{x} \rangle_T = \langle \Lambda^{1/2} \big( \dot{x}(t) - \dot{x}(t - \ell) \big), \Lambda^{1/2} \dot{x} \rangle_T$$
involves terms due to the initialization $x_0(\cdot)$ over the interval $[-\ell, 0]$. These terms can be bounded as follows. For any $T > 0$, define
$$v(t) = \begin{cases} 0, & t < 0; \\ \dot{x}(t), & 0 \le t \le T; \\ 0, & t > T, \end{cases}$$
and
$$v_0(t) = \begin{cases} 0, & t < -\ell; \\ \dot{x}_0(t), & -\ell \le t \le 0; \\ 0, & t > 0, \end{cases}$$
and set
$$w(t) = \Lambda \big( v(t) - v(t - \ell) - v_0(t - \ell) \big)$$
for $t \in (-\infty, \infty)$. Then
$$\begin{aligned}
\langle \dot{\tilde{p}}_{\mathrm{CE}}, \dot{x} \rangle_T &= \langle w, v \rangle_T = \int_0^T w(t)^T v(t)\, dt \\
&= \int_0^T \big( v(t) - v(t - \ell) \big)^T \Lambda\, v(t)\, dt - \int_0^T v_0(t - \ell)^T \Lambda\, v(t)\, dt \\
&= \int_{-\infty}^T \big( v(t) - v(t - \ell) \big)^T \Lambda\, v(t)\, dt - \int_0^T v_0(t - \ell)^T \Lambda\, v(t)\, dt \quad (\text{since } v(t) = 0 \text{ for } t < 0) \\
&= \int_{-\infty}^{\infty} \big( v(t) - v(t - \ell) \big)^T \Lambda\, v(t)\, dt - \int_0^T v_0(t - \ell)^T \Lambda\, v(t)\, dt \quad (\text{since } v(t) = 0 \text{ for } t > T).
\end{aligned}$$
Using Lemma A.1 (applied to $\Lambda^{1/2} v$), the first term above is non-negative, and the second term is bounded via
$$\left| \int_0^T v_0(t - \ell)^T \Lambda\, v(t)\, dt \right| \le \| v \|\, \alpha_2,$$
which yields $\langle \dot{\tilde{p}}_{\mathrm{CE}}, \dot{x} \rangle_T \ge -\alpha_2 \| \dot{x} \|_T$, as required. ☐

References

  1. Sandholm, W.H. Population Games and Evolutionary Dynamics (Economic Learning and Social Evolution); The MIT Press: Cambridge, MA, USA, 2011. [Google Scholar]
  2. Sato, Y.; Akiyama, E.; Farmer, J.D. Chaos in learning a simple two-person game. Proc. Natl. Acad. Sci. U.S.A. 2002, 99, 4748–4751. [Google Scholar] [CrossRef] [PubMed]
  3. Arslan, G.; Shamma, J.S. Anticipatory Learning in General Evolutionary Games. In Proceedings of the 45th IEEE Conference on Decision and Control, San Diego, CA, USA, 13–15 December, 2006; pp. 6289–6294.
  4. Hart, S.; Mas-Colell, A. Uncoupled dynamics do not lead to Nash equilibrium. Am. Econ. Rev. 2003, 93, 1830–1836. [Google Scholar] [CrossRef]
  5. Monderer, D.; Shapley, L. Potential games. Games Econ. Behav. 1996, 14, 124–143. [Google Scholar] [CrossRef]
  6. Hart, S. Adaptive Heuristics. Econometrica 2005, 73, 1401–1430. [Google Scholar] [CrossRef]
  7. Young, H.P. Strategic Learning and its Limits; Oxford University Press: Oxford, UK, 2005. [Google Scholar]
  8. Fudenberg, D.; Levine, D. Learning and equilibrium. Annu. Rev. Econom. 2009, 1, 385–420. [Google Scholar] [CrossRef]
  9. Hofbauer, J.; Sandholm, W.H. Stable games and their dynamics. J. Econ. Theory 2009, 144, 1665–1693. [Google Scholar]
  10. Willems, J.C. Dissipative dynamical systems - Part I, Part II. Arch. Ration. Mech. Anal. 1972, 45, 321–393. [Google Scholar] [CrossRef]
  11. Desoer, C.; Vidyasagar, M. Feedback Systems: Input-Output Properties; Academic Press, Inc.: New York, NY, USA, 1975. [Google Scholar]
  12. Khalil, H. Nonlinear Systems, 3rd ed.; Prentice Hall: Upper Saddle River, New Jersey, 2002. [Google Scholar]
  13. van der Schaft, A. L2-Gain and Passivity Techniques in Nonlinear Control; Springer Verlag: Berlin/Heidelberg, Germany, 2000. [Google Scholar]
  14. Ramírez-Llanos, E.; Quijano, N. A population dynamics approach for the water distribution problem. Int. J. Control 2010, 83, 1947–1964. [Google Scholar] [CrossRef]
  15. Fan, X.; Alpcan, T.; Arcak, M.; Wen, T.J.; Basar, T. A passivity approach to game-theoretic CDMA power control. Automatica 2006, 42, 1837–1847. [Google Scholar] [CrossRef]
  16. Hofbauer, J.; Sandholm, W.H. Evolution in games with randomly disturbed payoffs. J. Econ. Theory 2007, 132, 47–69. [Google Scholar] [CrossRef]
  17. Fox, M.J.; Shamma, J.S. Population Games, Stable Games, and Passivity. In Proceedings of the 51st IEEE Conference on Decision and Control, Maui, Hawaii, 10–13 December, 2012; pp. 7445–7450.
  18. Wang, H. Differential-Passivity based Controlled Synchronization of Networked Robots with Additive Disturbances. In Proceedings of the 31st Chinese Control Conference (CCC 2012), Hefei, China, 25–27 July 2012; pp. 5838–5843.
  19. Sandholm, W. Excess payoff dynamics and other well-behaved evolutionary dynamics. J. Econ. Theory 2005, 124, 149–170. [Google Scholar] [CrossRef]
  20. Shamma, J.S.; Arslan, G. Dynamic fictitious play, dynamic gradient play, and distributed convergence to Nash equilibria. IEEE Trans. Automat. Contr. 2005, 50, 312–327. [Google Scholar] [CrossRef]
  21. Chasparis, G.; Shamma, J. Distributed dynamic reinforcement of efficient outcomes in multiagent coordination and network formation. Dynamic Games and Applications 2012, 2, 18–50. [Google Scholar] [CrossRef]
  1. For notational simplicity, we suppress the dimension n in $L_2$.
