Article

On Information Aggregation and Interim Efficiency in Networks

División de Economía, Centro de Investigación y Docencia Económicas (CIDE), Mexico City 01210, Mexico
Submission received: 31 October 2016 / Revised: 23 January 2017 / Accepted: 21 February 2017 / Published: 24 February 2017
(This article belongs to the Special Issue Decision Making for Network Security and Privacy)

Abstract:
This paper considers a population of agents that are engaged in a listening network. The agents wish to match their actions to the true value of some uncertain (exogenous) parameter and to the actions of the other agents. Each agent begins with some initial information about the parameter and, in addition, is able to receive further information from their neighbors in the network. I derive a closed expression for the (interim) social welfare loss that depends on the initial information structure and on the possible pieces of information that can be gathered under the network. Then, I explore how changes in the network may affect social welfare for extreme levels of complementarity in the agents’ actions. When the level of complementarity is very high, efficiency is achieved regardless of the network structure. For very low levels of complementarity in actions, efficiency can be associated with either sparser or denser networks, depending on the size of the induced informative gains. The implications of this paper are relevant in security environments where agents are naturally interpreted as analysts who try to forecast the value of a parameter that describes a threat to security.
JEL Classification:
C72; D83; D84; D85

1. Introduction

In many environments of social, economic, or political interest, decision-makers seek to match their actions both to some unknown underlying variable and to the actions chosen by others. While the first motive is typically regarded as a “fundamental motive,” the second motive is purely a “coordination motive”1. The canonical framework that captures both types of motives in strategic scenarios is that of “beauty contest” games2. Beauty contests are particularly suitable to capture environments where security issues are a main concern. A typical example is that of a group of analysts who independently try to predict the value of an exogenous variable that describes a threat to security. Each of the analysts wishes to follow an action appropriate for the true value of the uncertain variable. In addition, since coordination helps prevent (or at least mitigate) security threats, the analyst wishes to follow a course of action similar to the other analysts’ actions as well3. In these beauty contest scenarios, decision-makers wish to collect information that helps them resolve their uncertainty about the underlying variable and the likely actions of the others.
In practice, the presence of networks is ubiquitous in contexts where agents collect information. When decision-makers can only interact locally through a network, the architecture of the network places restrictions on the ways in which they can aggregate their pieces of private information. The typical approach considers that decision-makers can only obtain information from their neighbors in the network. Often, security analysts are involved in networks where their neighbors are other forecasters. In many real-world situations, a network of security analysts encompasses formal connections between different organizations as well as more informal connections based on friendship, family, or informal online relationships. In these environments, efficiency insights provide particularly useful recommendations for the design of teams of analysts, as well as for the appropriateness of establishing collaboration links between different security organizations.
This paper considers a (relatively large) population of decision-makers (or security analysts) that have access to some private signals about the underlying state and that, in addition, can receive (locally, according to their connections in a network) the information that others obtain from their own signals. For this framework, the main contribution of this paper is to derive a closed form for the (interim) social welfare loss function (in Proposition 1) that depends, among other primitives, on the informativeness of the signals and on the collection of subsets of signals that the agents can aggregate under the network.
More specifically, each agent initially receives an informative signal about the uncertain variable and, in addition, can add to it the signals that were initially provided to his (direct) neighbors in the network. In this way, the network induces a collection of different subsets of signals that can ultimately be observed within the population. Intuitively, finer collections of subsets of signals are associated with sparser networks, whereas denser networks induce coarser collections. Using the welfare loss function derived in Proposition 1, I then explore how efficiency relates to changes in the collection of pieces of information that can be aggregated under the network, and how efficiency is affected by extreme values of the “coordination motive.” The efficiency benchmark used in this paper considers interim utilities. Thus, the central planner is able to access the information available to the agents upon receiving their signals. This seems a reasonable approach in many environments. For security contexts, it is certainly appealing when one considers that the central planner is some central institution that coordinates the teams of analysts that constitute the network.
To investigate this normative question in the context of networks, one must understand the forces behind two different mechanisms. First, how does the informative content of the available signals change as the collection of subsets of signals induced by the network varies? Secondly, how does social welfare depend on the informative content of the collection of subsets of signals induced by the network? The first question refers to the scale properties of information aggregation within sets of signals. To answer the second question, one needs to investigate the social value of information when both a fundamental motive and a coordination motive are present4.
This paper provides a partial answer to the first question in Proposition 2, which shows that the informative content of a given subset of signals increases monotonically as we split it into smaller disjoint subsets. The paper’s insights on efficiency for extreme levels of the coordination motive are provided in Proposition 3. Efficiency is guaranteed, regardless of the network structure, for very high levels of complementarity in actions. For very low levels of complementarity in actions, social welfare increases as subsets of observed signals are split into smaller disjoint subsets if and only if the induced informative gains are high enough. As Example 2 illustrates, the intuitive message here is that, when agents are interested only in predicting the value of the uncertain variable, sparser networks are preferable if and only if the informativeness of the available set of signals allows for high enough informative gains.
The rest of this paper is structured as follows. Section 2 lays out the model and Section 3 derives the social welfare loss function. Section 4 provides the results on the informative content of the information aggregated in the network and on efficiency. Section 5 concludes. Since the derivation of equilibrium is a crucial step toward the main result and toward investigating efficiency in the proposed benchmark, and since it is constructive, the required technical details are included in the main text. Other technical details, such as the proofs of Lemma 1 and of Propositions 2 and 3, are relegated to the Appendix.

2. Model

There is a (measure–1) continuum of agents, indexed by i ∈ [0, 1], that wish to estimate an unknown state of the world θ ∈ ℝ. Each agent receives a utility according to a common (quadratic) utility function u(a, θ) that depends on an action profile a : [0, 1] → ℝ, where a(i) is the action chosen by agent i, and on the state θ. This model can be broadly applied to many contexts of strategic interaction under uncertainty where both a fundamental and a coordination motive are present. For security environments, we can think of agents as security analysts or forecasters that wish to estimate a variable that describes a security threat.

2.1. Preferences and Information Structure

The main intuition is conveyed using a beauty contest game (as in [2,4,6,7,8]). Each agent i’s utility is given by
$$u(a, \theta) = -\left[ a(i) - (1-\lambda)\theta - \lambda \int_0^1 a(h)\, dh \right]^2, \qquad (1)$$
where λ ∈ (0, 1) is a parameter that measures the degree of strategic complementarity in the agents’ actions. Intuitively, λ captures the relative importance of the coordination motive in the agent’s utility. Higher values of λ indicate higher levels of strategic complementarity in actions.
The state of the world θ is unknown to the agents and the underlying information structure is assumed to be Gaussian with θ ~ N(0, σ²). Agents receive some (exogenous) information about θ from a finite set of noisy signals S = {s_1, s_2, …, s_n} throughout two periods t ∈ {0, 1}. Although the set of agents is a continuum, the set of possible signals is assumed to be finite for tractability reasons. Specifically, signals are (exogenously) assigned to agents according to a finite partition N = {N_1, N_2, …, N_n} of the set of agents [0, 1], where N_1 = [0, i_1], N_j = (i_{j−1}, i_j] for each j ∈ {2, …, n−1}, and N_n = (i_{n−1}, 1]. Then, at t = 0, each agent i ∈ N_j receives a noisy signal s_j = θ + ε_j, where ε_j ~ N(0, π_j^{−1}). Thus, π_j measures the precision of the signal received by any agent i ∈ N_j. Each noise term ε_j is independent of the state θ and the noise terms {ε_j}_{j=1}^{n} are independent from each other as well. Signals are assumed to be (conditional on the state) independent (i.e., the conditional random variables (s_j | θ)_{j=1}^{n} are independent)5.
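For concreteness, the following Python sketch simulates one draw from this information structure; the variance of the state and the precision values are purely illustrative and are not taken from the paper.

```python
import numpy as np

# Minimal sketch of the information structure (illustrative values).
# State: theta ~ N(0, sigma^2); signal j: s_j = theta + eps_j, eps_j ~ N(0, 1/pi_j).
rng = np.random.default_rng(0)

sigma2 = 1.0                      # variance of the state, sigma^2 (assumed)
pi = np.array([2.0, 1.0, 0.5])    # precisions pi_1, ..., pi_n (assumed)

theta = rng.normal(0.0, np.sqrt(sigma2))
signals = theta + rng.normal(0.0, np.sqrt(1.0 / pi))   # one draw of s_1, ..., s_n

print(theta, signals)
```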

2.2. Network Structure

After receiving their initial information at t = 0, agents can listen to others at t = 1 according to a (directed) network that locally allows them to receive other agents’ signals. Agents are engaged in a pattern of listening connections, or directed links, which determines the network. This paper considers that information flows one way between directedly linked agents, and it also assumes that information cannot be transmitted to indirectly connected agents. Specifically, a network is a collection of neighborhoods {g(i)}_{i ∈ [0, 1]}, where agent i’s neighborhood g(i) is the set of agents to whom agent i has a directed link. Thus, an agent’s neighborhood includes the agents that he is connected to through a (single) directed link but not those agents that are connected through a path of (several) directed links. In other words, an agent i cannot hear what a neighbor h ∈ g(i) hears from other agents l ∈ g(h) such that l ∉ g(i). In this setup, a network can be conveniently specified using a neighborhood function
g : [0, 1] → B[0, 1],
where B[0, 1] denotes the Borel σ-algebra on the interval [0, 1]. Each agent i can observe at t = 1 the signals received at t = 0 by all agents in his neighborhood, h ∈ g(i). Each agent listens to himself, i ∈ g(i).
For security environments, we can think of an agent’s neighborhood as a group of security analysts or forecasters whom he consults for some security project. For a network described by a neighborhood function g, agent i can observe at t = 1 the signals that all agents in his neighborhood (thus, including himself) received at t = 0. In other words, agent i can observe each signal s_j such that some neighbor h ∈ g(i) belongs to the element N_j of the partition N of agents. Thus, let
s(i) = { s_j ∈ S : g(i) ∩ N_j ≠ ∅ for N_j ∈ N }
denote a restricted signal profile for agent i, which consists of the subset of signals observed by agent i at t = 1, conditioned on his neighborhood g(i). Let S_g denote the collection of all the different subsets of signals that can be observed within the population under the neighborhood function g, so that s(i) ∈ S_g for any agent i ∈ [0, 1]. Sometimes, I will simply say that the collection S_g is induced by the neighborhood function g or by the network. Since the set of available signals S is finite, both the collection of all subsets of signals S_g and any restricted signal profile s(i) ∈ S_g must be finite as well. When no reference to a particular agent that observes a subset of signals needs to be made, I will use s ∈ S_g to denote a generic signal profile observed within the population. In addition, for technical reasons, we will consider throughout the paper any subset of signals s as an ordered string or vector that contains the elements of the subset6.
To make the model concrete and to illustrate its workings, the following example shows how agents listen to others in a network.
Example 1.
Consider a set of available signals S = {s_1, s_2, s_3} which the agents are endowed with at t = 0 according to the partition N = {N_1, N_2, N_3}, with N_1 = [0, 1/3], N_2 = (1/3, 2/3], and N_3 = (2/3, 1]. Suppose that the network is specified according to the neighborhood function
$$g(i) = \begin{cases} [0, 3/10] & \text{if } i \in [0, 1/4] \\ [2/10, 5/10] & \text{if } i \in [1/4, 1/2] \\ [4/10, 8/10] & \text{if } i \in [1/2, 3/4] \\ [6/10, 1] & \text{if } i \in (3/4, 1]. \end{cases}$$
In this example, for each of the four neighborhoods, we can easily find another neighborhood with a non-empty intersection, so that the network is minimally connected7. As to which signals the agents can observe at t = 1 under the network described by g, notice that
$$\begin{aligned}
[0, 3/10] \cap N_1 &= [0, 3/10], & [0, 3/10] \cap N_2 &= \emptyset, & [0, 3/10] \cap N_3 &= \emptyset;\\
[2/10, 5/10] \cap N_1 &= [2/10, 1/3], & [2/10, 5/10] \cap N_2 &= (1/3, 5/10], & [2/10, 5/10] \cap N_3 &= \emptyset;\\
[4/10, 8/10] \cap N_1 &= \emptyset, & [4/10, 8/10] \cap N_2 &= [4/10, 2/3], & [4/10, 8/10] \cap N_3 &= (2/3, 8/10];\\
[6/10, 1] \cap N_1 &= \emptyset, & [6/10, 1] \cap N_2 &= [6/10, 2/3], & [6/10, 1] \cap N_3 &= (2/3, 1].
\end{aligned}$$
Therefore, the restricted signal profiles that can be observed within the population at t = 1 are given by
$$s(i) = \begin{cases} \{s_1\} & \text{if } i \in [0, 1/4] \\ \{s_1, s_2\} & \text{if } i \in [1/4, 1/2] \\ \{s_2, s_3\} & \text{if } i \in [1/2, 1], \end{cases}$$
so that the collection of subsets of signals observed by the agents, as induced by the neighborhood function g, is S_g = {{s_1}, {s_1, s_2}, {s_2, s_3}}.
For instance, notice that all agents i ∈ [1/2, 1] are able to observe signals s_2 and s_3 at t = 1. Here, the agents in the subset (2/3, 1] ⊆ [1/2, 1] were already able to observe s_3 directly at t = 0, as such a signal was initially assigned to them. From this subset of agents (2/3, 1], we observe that the agents in (3/4, 1] are linked to the agents in [6/10, 1] and, thus, through the agents in [6/10, 2/3] ⊆ [6/10, 1], they are able to observe s_2 as well at t = 1. In addition, from the subset of agents (2/3, 1], we note that the agents in (2/3, 3/4] are linked to the agents in [4/10, 8/10] and, therefore, through the agents in [4/10, 2/3] ⊆ [4/10, 8/10], they are able to observe s_2 as well at t = 1. As to the remaining agents in [1/2, 2/3], they were already able to observe s_2 directly at t = 0 as they were initially endowed with such a signal. In addition, we observe that all the agents in [1/2, 2/3] are linked to the agents in [4/10, 8/10]. Therefore, through the agents in (2/3, 8/10] ⊆ [4/10, 8/10], they are able to observe s_3 as well at t = 1. However, any agent 0 ≤ i < 1/2 is unable to observe s_3. Such agents can only observe at t = 0 either s_1 (if 0 ≤ i ≤ 1/3) or s_2 (if 1/3 < i < 1/2). In addition, they are linked with agents 0 ≤ h ≤ 5/10 who cannot observe s_3 themselves at t = 0.
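The computation of the collection S_g in this example can also be carried out mechanically. The following Python sketch (the interval encoding, the helper names, and the overlap test are chosen purely for illustration) recovers S_g = {{s_1}, {s_1, s_2}, {s_2, s_3}} from the partition N and the neighborhood function g:

```python
from fractions import Fraction as F

# Sketch: recover the collection S_g of Example 1 from the partition N and the
# neighborhood function g. Intervals are encoded as (lo, hi) pairs; the overlap
# test ignores whether endpoints are open or closed (a measure-zero issue here).
partition = {"s1": (F(0), F(1, 3)), "s2": (F(1, 3), F(2, 3)), "s3": (F(2, 3), F(1))}
neighborhoods = [(F(0), F(3, 10)), (F(2, 10), F(5, 10)),
                 (F(4, 10), F(8, 10)), (F(6, 10), F(1))]

def overlaps(a, b):
    """True if two intervals intersect on a set of positive length."""
    return max(a[0], b[0]) < min(a[1], b[1])

S_g = {frozenset(s for s, Nj in partition.items() if overlaps(g_i, Nj))
       for g_i in neighborhoods}
print([sorted(profile) for profile in S_g])   # {s1}, {s1, s2}, {s2, s3}
```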

2.3. Optimal Actions

Given the informational constraints imposed by the network, the agents are engaged in a game where each agent i ∈ [0, 1] chooses at t = 1 an action a*(i) so as to maximize his conditional expected utility E[u(a, θ) | s(i)]. Under the preference specification in (1), a Bayesian Nash equilibrium (BNE) is a function a* : [0, 1] → ℝ such that each agent i ∈ [0, 1] solves the problem
$$\min_{a(i) \in \mathbb{R}} \; E\left[ \left( a(i) - (1-\lambda)\theta - \lambda \int_0^1 a(h)\, dh \right)^2 \,\middle|\, s(i) \right].$$
Nonetheless, we can restrict attention to symmetric BNE where all agents that observe the same subset of signals optimally choose the same action. To see this, suppose that some agents h that observe a common subset of signals s choose different optimal actions. Then, the expectation that other agents who receive a subset of signals s′ have about the average optimal action followed by the agents that observe s is
$$E\left[ \int_h a^*(h)\, dh \,\middle|\, s' \right] = \int_h E\big[ a^*(h) \,\big|\, s' \big]\, dh.$$
This expectation depends on the restricted signal profile s′ but not on the names of the agents. All agents that observe a common subset of signals aggregate the same information and obtain some common posteriors on θ and on the actions chosen by other agents. In addition, since the loss function that the agents are minimizing is strictly convex, the corresponding best reply must be unique. Therefore, we can write
$$\int_h E\big[ a^*(h) \,\big|\, s' \big]\, dh = E\big[ a^*(s) \,\big|\, s' \big],$$
where a*(s) indicates the optimal action chosen by any agent h that observes the subset of signals s. From here onwards, let us use for simplicity E_s[·] and Var_s[·] to indicate, respectively, the conditional expectation E[· | s] and the conditional variance Var[· | s] operators. Given the previous observations, for a network specified by g, an action function a* : S_g → ℝ is a symmetric BNE if and only if each action a*(s) satisfies
$$a^*(s) = (1-\lambda)\, E_s[\theta] + \lambda \sum_{s' \in S_g} E_s\big[ a^*(s') \big]. \qquad (2)$$

3. Equilibrium and Social Welfare

Consider that, after period t = 0 and before period t = 1, a central planner can influence the structure of the network by changing the neighborhood function g. This type of intervention makes sense only if the social planner has access to the signals available to all agents in the population and thus makes use of the information available at the interim stage of the game. In this sense, the current paper is addressing interim efficiency issues. The expected loss of all agents that observe a subset of signals s ∈ S_g, under a symmetric BNE action function a*, is
$$E_s\left[ \left( a^*(s) - (1-\lambda)\theta - \lambda \sum_{s' \in S_g} a^*(s') \right)^2 \right]. \qquad (3)$$
The goal of this paper is to investigate how the network described by g influences the shape of the social welfare loss function
$$L(g) = \sum_{s \in S_g} E_s\left[ \left( a^*(s) - (1-\lambda)\theta - \lambda \sum_{s' \in S_g} a^*(s') \right)^2 \right]. \qquad (4)$$
Since the technical details required to obtain the welfare loss function in this setup are constructive, they are provided in the main text. Using such arguments, Proposition 1 then derives the relevant welfare loss function for our environment with fundamental and coordination motives.
To address this central question, we need first to characterize the class of linear symmetric BNE of the game that the agents play once they receive their signals at t = 1 under the restrictions imposed by the network. As in the related literature (see, e.g., [2,4,6,7,8]), the existence of symmetric BNE is guaranteed under the quadratic-Gaussian structure that the model assumes. To obtain a solution to Equation (2), we must study how information is aggregated and how this influences the agents’ optimal actions. For a neighborhood function g and for a subset of signals s S g , the pairs ( θ , s ) are jointly normally distributed. Let us use Cov [ θ , s ] to denote the vector of covariances between the state of the world and each of the signals in s and Var [ s ] to denote the variance–covariance matrix of the signals in s . It follows from some basic results on normal distributions that
$$E_s[\theta] = \mathrm{Cov}[\theta, s] \cdot \mathrm{Var}[s]^{-1} \cdot s \qquad (5)$$
and
$$\mathrm{Var}_s[\theta] = \sigma^2 - \mathrm{Cov}[\theta, s] \cdot \mathrm{Var}[s]^{-1} \cdot \mathrm{Cov}[\theta, s]. \qquad (6)$$
Hence, normality ensures that the conditional expectations of the state are linear in the signals contained in s. This implication allows us to focus the analysis of BNE on linear strategies. If the agents that observe a signal profile s use a strategy that is linear in their signals, then the optimal action of the agents that observe another signal profile s′ must also be linear in their signals. While linear strategies are in general fairly simple and intuitive to interpret, in the current context they are also robust.
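As a concrete illustration of Equations (5) and (6), the sketch below computes the conditional moments of θ for a single signal profile from the joint normal structure of (θ, s); the precisions and the realized signal values are assumptions made only for the example.

```python
import numpy as np

# Sketch of Equations (5)-(6): conditional moments of theta given a profile s,
# using Var[s] = sigma^2 * 11' + diag(1/pi_j) and Cov[theta, s] = sigma^2 * 1.
sigma2 = 1.0
pi = np.array([2.0, 0.5])              # precisions of the signals in s (assumed)

k = len(pi)
ones = np.ones(k)
var_s = sigma2 * np.outer(ones, ones) + np.diag(1.0 / pi)
cov_theta_s = sigma2 * ones

def posterior_mean(s_realized):
    # E_s[theta] = Cov[theta, s] . Var[s]^{-1} . s
    return cov_theta_s @ np.linalg.solve(var_s, s_realized)

var_post = sigma2 - cov_theta_s @ np.linalg.solve(var_s, cov_theta_s)
print(posterior_mean(np.array([0.8, 1.2])), var_post)
```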
Equation (2) reveals that the optimal action followed by the agents that observe a signal profile s depends in a recursive way on the average posterior expectation over the true state. Hence, we need to account for arbitrarily higher-order average posterior expectations over θ. To formalize these average posterior expectations, let Ē[θ] = Σ_{s ∈ S_g} E_s[θ] be the average posterior expectation on the state over the collection of possible subsets of signals8. We begin with the 0-order average posterior expectation. Notice that the 0-order average posterior expectation must coincide with the true realization of the state, so that we set Ē^(0)[θ] = θ. Then, for the 1-order average posterior expectation, we have
$$\bar{E}^{(1)}[\theta] = \bar{E}\big[ \bar{E}^{(0)}[\theta] \big] = \bar{E}[\theta] = \sum_{s \in S_g} E_s[\theta],$$
whereas for higher-order average posterior expectations, we use Ē^(m)[θ] = Ē[Ē^(m−1)[θ]] to indicate in a recursive way the m-order average posterior expectation over θ, for m ≥ 2. With such higher-order average posterior expectations in place, recursive application of Equation (2) allows us to express the optimal action followed by the agents that observe s as
$$a^*(s) = (1-\lambda)\Big( E_s\big[ \bar{E}^{(0)}[\theta] \big] + \lambda\, E_s\big[ \bar{E}^{(1)}[\theta] \big] + \lambda^2 E_s\big[ \bar{E}^{(2)}[\theta] \big] + \cdots \Big) = (1-\lambda) \sum_{m=0}^{\infty} \lambda^m E_s\big[ \bar{E}^{(m)}[\theta] \big]. \qquad (7)$$
Under the assumed information structure, we have Cov[θ, s] = σ²·1, where 1 is a vector of ones with the same dimension as the number of signals contained in the restricted profile s. Furthermore, recall that s_j = θ + ε_j, where E_s[ε_j] = 0 for all signals j = 1, …, n. Take a given realization of the state θ. Then, using the expression in Equation (5), we obtain that E_s[Ē^(0)[θ]] = E_s[θ], for the 0-order average posterior expectation, and
$$E_s\big[ \bar{E}^{(1)}[\theta] \big] = E_s\Big[ \sigma^2 \sum_{s' \in S_g} \underline{1} \cdot \mathrm{Var}[s']^{-1} \cdot \underline{1}\; \theta \Big] = \sigma^2 \sum_{s' \in S_g} \underline{1} \cdot \mathrm{Var}[s']^{-1} \cdot \underline{1}\; E_s[\theta],$$
for the 1-order average posterior expectation. Here again, 1 is a vector of ones whose dimension equals the number of signals in the corresponding profile s′. Let us use
$$\bar{\omega}_g = \sigma^2 \sum_{s \in S_g} \underline{1} \cdot \mathrm{Var}[s]^{-1} \cdot \underline{1}$$
to denote the average of the inverses of the posterior variances of the state across signal profiles in the network. Given this notation for the average across (the inverse of) posterior variances, we can write Ē[θ] = ω̄_g θ and iterate to obtain that Ē^(m)[θ] = ω̄_g^m θ for each m ≥ 0. Thus, we can express the equality in Equation (7) as
$$a^*(s) = (1-\lambda)\big( 1 + \lambda \bar{\omega}_g + \lambda^2 \bar{\omega}_g^2 + \cdots \big)\, E_s[\theta] = \frac{1-\lambda}{1 - \lambda \bar{\omega}_g}\, E_s[\theta], \qquad (8)$$
where E s [ θ ] satisfies the equality in Equation (5). Now, if we average the expression above over all possible subsets of signals observed in the network, we obtain
$$\sum_{s \in S_g} a^*(s) = \frac{1-\lambda}{1 - \lambda \bar{\omega}_g} \sum_{s \in S_g} E_s[\theta] = \frac{1-\lambda}{1 - \lambda \bar{\omega}_g}\, \bar{\omega}_g\, \theta.$$
Therefore, in a BNE, each agent that observes a subset of signals s wishes to match his action to the objective
$$(1-\lambda)\theta + \lambda \sum_{s' \in S_g} a^*(s') = \frac{1-\lambda}{1 - \lambda \bar{\omega}_g}\, \theta. \qquad (9)$$
By plugging the expressions in Equations (8) and (9) into the expected loss function given by Equation (3), we obtain:
$$E_s\left[ \left( \frac{(1-\lambda)\, E_s[\theta]}{1 - \lambda \bar{\omega}_g} - \frac{(1-\lambda)\, \theta}{1 - \lambda \bar{\omega}_g} \right)^2 \right] = \left( \frac{1-\lambda}{1 - \lambda \bar{\omega}_g} \right)^2 \mathrm{Var}_s[\theta],$$
where the conditional variance Var_s[θ] is given by the expression in Equation (6). By combining this with the expression in Equation (4), we obtain that the social welfare loss function is given by
$$L(g) = \left( \frac{1-\lambda}{1 - \lambda \bar{\omega}_g} \right)^2 \sum_{s \in S_g} \mathrm{Var}_s[\theta].$$
A closing argument is needed to complete the analysis. Recall that our derivation of symmetric BNE has made use of the law of large numbers to average expectations on the state over subsets of signals. Therefore, we must consider that the number of possible subsets of signals is sufficiently large for our formal arguments to be appealing in the environment that we are studying. The analysis of higher-order beliefs used in this paper builds on the approach followed, among others, by [2,4,6,7]. As in these papers, the formal analysis used here also invokes the law of large numbers9. With this last consideration in place, the arguments provided in this section show that the welfare loss function has the form described by the following proposition.
Proposition 1 (Welfare Loss Function).
Suppose that the cardinality of S g is relatively large. Then, under the stated assumptions on preferences, the information structure, and the network of listening links, the social welfare loss function is given, as a function only of the model’s primitives, by the expression
$$L(g) = \left( \frac{1-\lambda}{1 - \lambda \bar{\omega}_g} \right)^2 \sum_{s \in S_g} \mathrm{Var}_s[\theta], \qquad (10)$$
where
$$\bar{\omega}_g = \sigma^2 \sum_{s \in S_g} \underline{1} \cdot \mathrm{Var}[s]^{-1} \cdot \underline{1}. \qquad (11)$$
In this context, the social planner wishes to affect the neighborhood function g so as to minimize the welfare loss described by Equations (10) and (11) above.
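A minimal numerical sketch of Proposition 1 is given below. It treats S_g as a list of precision vectors (one per observed profile) and evaluates Equations (10) and (11) directly from the variance–covariance matrices; the parameter values and the helper names w_profile and welfare_loss are illustrative assumptions, not part of the model.

```python
import numpy as np

# Sketch of Proposition 1: welfare loss L(g) from Equations (10)-(11), taking
# the collection S_g as a list of precision vectors (one vector per observed
# profile). Values of sigma^2, lambda and the precisions are illustrative.
def w_profile(pi, sigma2):
    """sigma^2 * 1' Var[s]^{-1} 1 for one signal profile with precisions pi."""
    ones = np.ones(len(pi))
    var_s = sigma2 * np.outer(ones, ones) + np.diag(1.0 / np.asarray(pi))
    return sigma2 * ones @ np.linalg.solve(var_s, ones)

def welfare_loss(profiles, sigma2, lam):
    omega_bar = sum(w_profile(pi, sigma2) for pi in profiles)            # Eq. (11)
    sum_var = sum(sigma2 - sigma2 * w_profile(pi, sigma2) for pi in profiles)
    return ((1 - lam) / (1 - lam * omega_bar)) ** 2 * sum_var            # Eq. (10)

S_g = [[2.0], [2.0, 1.0], [1.0, 0.5]]    # e.g., S_g = {{s1}, {s1,s2}, {s2,s3}}
print(welfare_loss(S_g, sigma2=1.0, lam=0.2))
```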

4. Informative Content of Signal Profiles and Efficiency

This section derives some results about the informative content of the signal profiles s S g and about how this informative content affects the social welfare under the network described by g.
Let us use γ_g = |S_g| to indicate the number of possible subsets of signals that can be observed within the population under the network described by g. Let us use k_s to indicate the number of signals included in the profile s. In addition, in accordance with the earlier notation, I will use π_j(s) to indicate the inverse of the variance of the noise term associated with the coordinate s_j of the string s, for each j = 1, …, k_s. Intuitively, π_j(s) indicates the precision of the j-th signal in the profile s. The informative content of the signals contained in s can be conveniently described by the number
$$w(s) := \sigma^2\, \underline{1} \cdot \mathrm{Var}[s]^{-1} \cdot \underline{1},$$
which identifies the sum of all the entries of the inverse of the variance–covariance matrix of the signal profile s (weighted using the variance of the state of the world). Notice that we can express the average of the inverses of the posterior variances of the state in the network as ω̄_g = Σ_{s ∈ S_g} w(s). Intuitively, each w(s) is simply a scalar that gives us some information about the joint precision of the signal profile s. Higher values of w(s) are associated with lower degrees of noise in the corresponding signals and, therefore, with more informative signal profiles. The following lemma formalizes this implication.
Lemma 1.
The scalar w(s) = σ²·1·Var[s]^{−1}·1 lies in the interval (0, 1/σ²) and it increases (strictly) with the sum of the precisions of all the signals contained in the profile s, Σ_{j=1}^{k_s} π_j(s).
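A quick numerical check of Lemma 1 is sketched below, with assumed precisions and σ² normalized to 1 so that the matrix-based scalar coincides with the closed form derived in the Appendix:

```python
import numpy as np

# Sketch checking Lemma 1: w(s) computed from the matrix definition increases
# with the total precision of the profile. Precisions are assumed; sigma^2 = 1.
def w_profile(pi, sigma2=1.0):
    ones = np.ones(len(pi))
    var_s = sigma2 * np.outer(ones, ones) + np.diag(1.0 / np.asarray(pi))
    return sigma2 * ones @ np.linalg.solve(var_s, ones)

for pi in ([0.5], [0.5, 0.5], [0.5, 0.5, 1.0], [2.0, 2.0, 2.0]):
    total = sum(pi)
    # With sigma^2 = 1, the closed form total/(1 + sigma^2*total) matches the matrix value.
    print(total, w_profile(pi), total / (1.0 + total))
```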
To explore how welfare depends on the collection of subsets of signals S_g, we need to understand how the informative content of the available signals S changes as the collection of subsets of signals S_g varies. On the one hand, the informative content of a group of signals naturally increases as its size increases or, in other words, as we add more signals to the set. This follows directly from Lemma 1 and it is very intuitive: enlarging a subset of signals increases its informative content. On the other hand, in principle, it is unclear how the informative content of a subset of signals changes when such a profile of signals is split into several (smaller) disjoint subsets. Proposition 2 below offers a type of “diminishing returns” result under which the aggregate informative content increases monotonically as we split a signal profile into smaller disjoint subsets. Conversely, merging two sets of signals into one decreases their combined informative content. Larger subsets seem to feature a type of “congestion” in aggregating information. The message conveyed here for security environments is that we want to provide agents with a higher number of (smaller) subsets of signals if we are interested in increasing the overall informative content of the sources of information available in the set of signals S.
Proposition 2.
The informative content of a profile of signals s̄ ∈ S_g is strictly smaller than the aggregation of informative contents obtained by splitting the subset s̄ into two disjoint subsets; that is, w(s̄) < w(s̄_1) + w(s̄_2) for any s̄_1, s̄_2 such that s̄ = s̄_1 ∪ s̄_2 with s̄_1 ∩ s̄_2 = ∅.
The result of Proposition 2 is useful to propose ways to minimize the welfare loss function obtained in Proposition 1, as it establishes that the overall informative content described by the term ω̄_g always increases by separating signal profiles into smaller disjoint subsets of signals. Of course, adding up subsets of signals or splitting subsets of signals into smaller disjoint subsets are not the only ways in which changes in the neighborhood function g affect the collection of profiles S_g. For instance, one can propose changes in the neighborhood function g such that a subset s̄ ∈ S_g becomes split instead into two non-disjoint subsets s̄_1 and s̄_2, where s̄ = s̄_1 ∪ s̄_2. Unfortunately, for those complex cases, what one can say about efficiency depends very much on the various precisions π_j of the signals involved.
However, restricting attention to interventions that split signal profiles into disjoint sets does allow for some interesting insights on efficiency. When we move from a neighborhood function g to another g′ such that a given signal profile s̄ ∈ S_g is split into two disjoint subsets s̄_1, s̄_2 ∈ S_{g′} (with s̄ = s̄_1 ∪ s̄_2 and s̄_1 ∩ s̄_2 = ∅), then we naturally have that g′ specifies a sparser network than the network specified by g. This is very intuitive, as denser networks allow agents to have access to higher numbers of signals. Building on Example 1, the following example illustrates this point.
Example 2.
As in Example 1, consider a set of available signals S = {s_1, s_2, s_3} which the agents are endowed with at t = 0 according to the partition N = {N_1, N_2, N_3}, with N_1 = [0, 1/3], N_2 = (1/3, 2/3], and N_3 = (2/3, 1]. Now, suppose that the network is specified instead according to the neighborhood function
$$g'(i) = \begin{cases} [0, 3/10] & \text{if } i \in [0, 1/4] \\ [2/10, 5/10] & \text{if } i \in [1/4, 1/2] \\ [4/10, 2/3] & \text{if } i \in [1/2, 2/3] \\ (2/3, 1] & \text{if } i \in (2/3, 1]. \end{cases}$$
Notice that there is a neighborhood, (2/3, 1], which is disjoint from any other neighborhood, so that the network is not minimally connected. In addition, we observe that g′ gives us a network sparser than the network associated with the neighborhood function g analyzed in Example 1. In particular, the agents i ∈ [1/2, 2/3] are now linked to a smaller set of agents, [4/10, 2/3], compared to the one they were linked to under g, [4/10, 8/10]. In addition, the agents i ∈ (2/3, 1] are now linked to a smaller set of agents, (2/3, 1], compared to the one they were linked to under g, [6/10, 1].
As to which signals the agents can observe at t = 1 under the network described by g′, notice that
$$\begin{aligned}
[0, 3/10] \cap N_1 &= [0, 3/10], & [0, 3/10] \cap N_2 &= \emptyset, & [0, 3/10] \cap N_3 &= \emptyset;\\
[2/10, 5/10] \cap N_1 &= [2/10, 1/3], & [2/10, 5/10] \cap N_2 &= (1/3, 5/10], & [2/10, 5/10] \cap N_3 &= \emptyset;\\
[4/10, 2/3] \cap N_1 &= \emptyset, & [4/10, 2/3] \cap N_2 &= [4/10, 2/3], & [4/10, 2/3] \cap N_3 &= \emptyset;\\
(2/3, 1] \cap N_1 &= \emptyset, & (2/3, 1] \cap N_2 &= \emptyset, & (2/3, 1] \cap N_3 &= (2/3, 1].
\end{aligned}$$
Therefore, the restricted signal profiles that can be observed within the population at t = 1 are given by
$$s(i) = \begin{cases} \{s_1\} & \text{if } i \in [0, 1/4] \\ \{s_1, s_2\} & \text{if } i \in [1/4, 1/2] \\ \{s_2\} & \text{if } i \in [1/2, 2/3] \\ \{s_3\} & \text{if } i \in (2/3, 1], \end{cases}$$
so that the collection of subsets of signals observed by the agents, as induced by the neighborhood function g′, is S_{g′} = {{s_1}, {s_1, s_2}, {s_2}, {s_3}}. Thus, by moving from the neighborhood function g of Example 1 to the neighborhood function g′, we are able to split the profile {s_2, s_3} into two disjoint profiles, {s_2} and {s_3}.
To obtain some insights about efficient information aggregation networks, I turn now to derive a more tractable expression for the welfare loss function. From the expression in Equation (6), and under our normality assumptions, it follows that the average of posterior variances of the state across signal profiles is given by
$$\sum_{s \in S_g} \mathrm{Var}_s[\theta] = \sigma^2 \gamma_g - \sigma^2 \sum_{s \in S_g} w(s) = \sigma^2\big( \gamma_g - \bar{\omega}_g \big).$$
Finally, as to the term w(s) = σ²·1·Var[s]^{−1}·1, the proof of Lemma 1 (in the Appendix) derives
$$w(s) = \frac{\sum_{j=1}^{k_s} \pi_j(s)}{1 + \sigma^2 \sum_{j=1}^{k_s} \pi_j(s)} \in \big( 0, 1/\sigma^2 \big).$$
Therefore, the expressions obtained in Equations (10) and (11) for the social welfare loss can be rewritten as
$$L(g) = \frac{\sigma^2 (1-\lambda)^2 \big( \gamma_g - \bar{\omega}_g \big)}{\big( 1 - \lambda \bar{\omega}_g \big)^2}, \quad \text{where} \quad \bar{\omega}_g = \sum_{s \in S_g} \frac{\sum_{j=1}^{k_s} \pi_j(s)}{1 + \sigma^2 \sum_{j=1}^{k_s} \pi_j(s)}. \qquad (12)$$
The social planner would like to influence the neighborhood function g so as to induce a collection of signal profiles S g that minimizes the expression in Equation (12) above.
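The following sketch evaluates Equation (12) for the two collections induced in Examples 1 and 2, using assumed signal precisions and σ² = 1; the function names and parameter values are illustrative only.

```python
# Sketch of Equation (12): compare the welfare loss of the Example 1 network
# (S_g = {{s1},{s1,s2},{s2,s3}}) with the sparser Example 2 network
# (S_g' = {{s1},{s1,s2},{s2},{s3}}). Precisions, sigma^2 and lambda are assumed.
def w_closed(total_precision, sigma2):
    # Closed form derived in the proof of Lemma 1: sum(pi) / (1 + sigma^2 * sum(pi)).
    return total_precision / (1.0 + sigma2 * total_precision)

def loss(profiles, pi, sigma2, lam):
    omega_bar = sum(w_closed(sum(pi[s] for s in p), sigma2) for p in profiles)
    gamma = len(profiles)
    return sigma2 * (1 - lam) ** 2 * (gamma - omega_bar) / (1 - lam * omega_bar) ** 2

pi = {"s1": 2.0, "s2": 1.0, "s3": 0.5}
S_g  = [{"s1"}, {"s1", "s2"}, {"s2", "s3"}]            # Example 1
S_gp = [{"s1"}, {"s1", "s2"}, {"s2"}, {"s3"}]          # Example 2 (splits {s2, s3})

for lam in (0.05, 0.2, 0.4):
    print(lam, loss(S_g, pi, 1.0, lam), loss(S_gp, pi, 1.0, lam))
```

With these assumed precisions, the informative gain from splitting {s_2, s_3}, namely w({s_2}) + w({s_3}) − w({s_2, s_3}) ≈ 0.23, falls short of 1, so the denser network of Example 1 yields the lower welfare loss for small λ, in line with Proposition 3 below.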
On the one hand, from the expression for the welfare loss derived in Proposition 1, which is rewritten in Equation (12) above, we observe that the welfare loss is always minimized as λ tends to 1, regardless of the collection of profiles induced by the network structure. Since the welfare loss function L(g) is continuous in λ ∈ (0, 1), it follows that when the level of complementarity in actions is very high, interim efficiency does not depend substantially on the network structure.
On the other hand, to explore some insights when complementarity in actions is very low, let us consider two different neighborhood functions g and g′ such that moving from g to g′ only implies that some profile s̄ ∈ S_g is split into two disjoint profiles s̄_1, s̄_2 ∈ S_{g′}. Specifically, (i) for some given s̄ ∈ S_g, we have s̄_1, s̄_2 ∈ S_{g′}, where s̄ = s̄_1 ∪ s̄_2 and s̄_1 ∩ s̄_2 = ∅; whereas (ii) for each s ∈ S_g ∖ {s̄}, we have s ∈ S_{g′}. Then, following some earlier arguments and, in particular, the result of Proposition 2, I ask how splitting one signal profile into two disjoint subsets affects the welfare loss when the level of complementarity in actions is very low. Proposition 3 below shows that the answer to this question depends on the size of the informative gain derived from splitting the selected signal profile.
Proposition 3.
Consider a collection of signal profiles S_g that contains a relatively high number of profiles and split some given signal profile s̄ ∈ S_g into two disjoint subsets, s̄_1 and s̄_2. Let S_{g′} denote the collection of signal profiles S_{g′} = {s ∈ S_g : s ≠ s̄} ∪ {s̄_1, s̄_2} which results from splitting the chosen profile s̄. Suppose that complementarity in actions is very low, λ → 0. Then, under the stated assumptions on preferences, the information structure, and the network of listening links, it follows that L(g′) < L(g) if and only if w(s̄_1) + w(s̄_2) − w(s̄) > 1.
Proposition 3 states that, for very low levels of complementarity in actions, splitting a signal profile into two disjoint subsets is welfare-improving if and only if the induced informative gain is sufficiently high. In other words, as illustrated by Example 2, when the agents wish to follow a course of action very close to the fundamental parameter, sparser networks increase social welfare whenever the induced informative gains are high enough. Splitting a signal profile into disjoint signal profiles always increases the informative content (as shown by Proposition 2). However, if the gain in informative content is not sufficiently high, then denser networks are associated with higher social welfare for very low levels of complementarity in actions.

5. Conclusions

This paper has proposed a benchmark to explore (interim) efficiency issues in environments where decision-makers are connected through a listening network and where both a fundamental motive and a coordination motive are present in their preferences. Investigating this topic is of interest when one uses an interim efficiency benchmark. Using the ex-ante efficiency approach instead, one would obtain directly that a network is efficient if and only if it allows all agents to receive all available signals. The reason behind this implication is that, for the case of beauty contest preferences, ex-ante efficiency requires the central planner to solve the same problem that faces each agent. As a consequence, any additional source of information would be welfare-improving.
This paper has derived a closed form for the welfare loss function that depends on the informativeness of the available signals and on the way in which the network enables the agents to gather information. Efficiency is achieved for very high levels of complementarity in actions, regardless of the network structure. For very low levels of complementarity in actions, the central planner wishes to induce finer collections of possible subsets of observed signals if and only if the derived informative gains are sufficiently high. The implications of this paper are useful to provide efficiency recommendations for networks of security analysts that are coordinated by a central institution under the requirement that such an institution can access the pieces of private information available to the analysts.
A natural extension of the setting explored here would be that of considering substitutive actions as well. This possibility seems reasonable in certain contexts, but it is perhaps not very appealing in security environments where coordination helps to prevent security threats. At a more general level, understanding the social value of information when information can be aggregated only locally within neighborhoods in networks, under different efficiency benchmarks and preference specifications, remains an interesting and rather unexplored question.

Acknowledgments

I am indebted to the Editor in charge, Christos Dimitrakakis, and to two anonymous referees for very useful comments. I gratefully acknowledge financial support from Consejo Nacional de Ciencia y Tecnología, Sistema Nacional de Investigadores, grant 41826. I thank Alessandro Pavan for his feedback. This research project was conducted while visiting the Department of Economics at UC San Diego. I thank this institution for its generous hospitality and support. Any remaining errors are my own.

Conflicts of Interest

The author declares no conflict of interest.

Appendix

Proof of Lemma 1.
Under our normality assumptions, we have
$$\mathrm{Var}[s] = \begin{pmatrix} \sigma^2 + \pi_1^{-1}(s) & \sigma^2 & \cdots & \sigma^2 \\ \sigma^2 & \sigma^2 + \pi_2^{-1}(s) & \cdots & \sigma^2 \\ \vdots & \vdots & \ddots & \vdots \\ \sigma^2 & \sigma^2 & \cdots & \sigma^2 + \pi_{k_s}^{-1}(s) \end{pmatrix} = \sigma^2\, \underline{1} \cdot \underline{1}' + D,$$
where D is the diagonal matrix D = diag(π_j^{−1}(s))_{j=1,…,k_s} that contains the variances of the noises of the signals in the profile s. Notice that
$$D^{-1} = \begin{pmatrix} \pi_1(s) & 0 & \cdots & 0 \\ 0 & \pi_2(s) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \pi_{k_s}(s) \end{pmatrix}.$$
Using a version of the Sherman–Morrison formula to compute the inverse of a sum of matrices (see, e.g., [10,11]), we obtain
$$\mathrm{Var}[s]^{-1} = D^{-1} - \frac{\sigma^2}{1 + \sigma^2\, \underline{1}' \cdot D^{-1} \cdot \underline{1}}\; D^{-1} \cdot \underline{1} \cdot \underline{1}' \cdot D^{-1}.$$
It follows that
$$\frac{\sigma^2}{1 + \sigma^2\, \underline{1}' \cdot D^{-1} \cdot \underline{1}} = \frac{\sigma^2}{1 + \sigma^2 \sum_{j=1}^{k_s} \pi_j(s)}$$
and
$$D^{-1} \cdot \underline{1} \cdot \underline{1}' \cdot D^{-1} = \begin{pmatrix} \pi_1^2(s) & \pi_1(s)\pi_2(s) & \cdots & \pi_1(s)\pi_{k_s}(s) \\ \pi_2(s)\pi_1(s) & \pi_2^2(s) & \cdots & \pi_2(s)\pi_{k_s}(s) \\ \vdots & \vdots & \ddots & \vdots \\ \pi_{k_s}(s)\pi_1(s) & \pi_{k_s}(s)\pi_2(s) & \cdots & \pi_{k_s}^2(s) \end{pmatrix}.$$
By doing the algebra, it then follows that
$$\mathrm{Var}[s]^{-1} = \frac{1}{1 + \sigma^2 \sum_{j=1}^{k_s} \pi_j} \begin{pmatrix} \pi_1 + \sigma^2 \sum_{j \neq 1} \pi_1 \pi_j & -\sigma^2 \pi_1 \pi_2 & \cdots & -\sigma^2 \pi_1 \pi_{k_s} \\ -\sigma^2 \pi_2 \pi_1 & \pi_2 + \sigma^2 \sum_{j \neq 2} \pi_2 \pi_j & \cdots & -\sigma^2 \pi_2 \pi_{k_s} \\ \vdots & \vdots & \ddots & \vdots \\ -\sigma^2 \pi_{k_s} \pi_1 & -\sigma^2 \pi_{k_s} \pi_2 & \cdots & \pi_{k_s} + \sigma^2 \sum_{j \neq k_s} \pi_{k_s} \pi_j \end{pmatrix},$$
where the arguments (s) have been conveniently dropped to simplify the expression. Therefore, in order to obtain a closed expression for the scalar w(s) = 1·Var[s]^{−1}·1, we need to aggregate all the entries of the matrix obtained above. We obtain
$$w(s) = \frac{\sum_{j=1}^{k_s} \pi_j(s)}{1 + \sigma^2 \sum_{j=1}^{k_s} \pi_j(s)} \in \big( 0, 1/\sigma^2 \big). \qquad (A1)$$
We observe directly that higher values of the (additive) aggregation of the precisions of the signals contained in the signal profile s, Σ_{j=1}^{k_s} π_j(s), determine higher values of the scalar w(s). ☐
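As a sanity check on the algebra above, the following sketch (with arbitrary, assumed precisions) verifies numerically that the Sherman–Morrison expression reproduces the inverse of Var[s] and that the sum of all entries of Var[s]^{−1} equals the closed form in (A1):

```python
import numpy as np

# Sketch verifying the Appendix algebra for one assumed profile: the Sherman-Morrison
# inverse of Var[s] = sigma^2 * 11' + D matches numpy's inverse, and the sum of all
# entries of Var[s]^{-1} equals sum(pi) / (1 + sigma^2 * sum(pi)).
sigma2 = 2.0
pi = np.array([1.5, 0.8, 0.4])           # illustrative precisions

k = len(pi)
ones = np.ones(k)
D = np.diag(1.0 / pi)
D_inv = np.diag(pi)
var_s = sigma2 * np.outer(ones, ones) + D

sm_inverse = D_inv - (sigma2 / (1.0 + sigma2 * ones @ D_inv @ ones)) * (D_inv @ np.outer(ones, ones) @ D_inv)

assert np.allclose(sm_inverse, np.linalg.inv(var_s))
total = pi.sum()
assert np.isclose(ones @ sm_inverse @ ones, total / (1.0 + sigma2 * total))
print("Sherman-Morrison check passed")
```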
Proof of Proposition 2.
Let us use π_s = Σ_{j=1}^{k_s} π_j(s) as shorthand notation for the sum of the precisions of the signals contained in a signal profile s ∈ S_g. Consider a signal profile s̄ and suppose that we split it into two disjoint subsets of signals s̄_1, s̄_2. Thus, we are considering s̄ = s̄_1 ∪ s̄_2 with s̄_1 ∩ s̄_2 = ∅. First, notice that π_{s̄} = π_{s̄_1} + π_{s̄_2}. Secondly, it follows from expression (A1) obtained in the proof of Lemma 1 that
$$w(\bar{s}_1) + w(\bar{s}_2) - w(\bar{s}) = \frac{\pi_{\bar{s}_1}}{1 + \sigma^2 \pi_{\bar{s}_1}} + \frac{\pi_{\bar{s}_2}}{1 + \sigma^2 \pi_{\bar{s}_2}} - \frac{\pi_{\bar{s}_1} + \pi_{\bar{s}_2}}{1 + \sigma^2 (\pi_{\bar{s}_1} + \pi_{\bar{s}_2})} = \frac{\sigma^2 \pi_{\bar{s}_1} \pi_{\bar{s}_2} \big( 2 + \sigma^2 \pi_{\bar{s}_1} + \sigma^2 \pi_{\bar{s}_2} \big)}{\big( 1 + \sigma^2 \pi_{\bar{s}_1} \big)\big( 1 + \sigma^2 \pi_{\bar{s}_2} \big)\big[ 1 + \sigma^2 (\pi_{\bar{s}_1} + \pi_{\bar{s}_2}) \big]} > 0,$$
as stated. ☐
Proof of Proposition 3.
For the selected signal profile s̄ ∈ S_g, let us use α = Σ_{s ∈ S_g ∖ {s̄}} w(s) as shorthand notation for the informative content of all profiles different from s̄ in the collection S_g. Using the expression for the welfare loss obtained in Equation (12), it follows that the sign of the difference L(g) − L(g′) coincides with the sign of the expression
$$\begin{aligned} &(1 - \lambda\alpha)^2 \big[ w(\bar{s}_1) + w(\bar{s}_2) - w(\bar{s}) - 1 \big] \\ &\quad + \lambda^2 \Big[ (\gamma_g - \alpha)\big[ \big( w(\bar{s}_1) + w(\bar{s}_2) \big)^2 - w(\bar{s})^2 \big] - w(\bar{s})^2 - w(\bar{s})\big[ w(\bar{s}_1) + w(\bar{s}_2) \big]\big[ w(\bar{s}_1) + w(\bar{s}_2) - w(\bar{s}) \big] \Big] \\ &\quad - 2\lambda (1 - \lambda\alpha) \Big[ (\gamma_g - \alpha)\big[ w(\bar{s}_1) + w(\bar{s}_2) - w(\bar{s}) \big] - w(\bar{s}) \Big]. \end{aligned}$$
Notice that the welfare loss specified in Equation (12) is continuous in λ ∈ (0, 1). Then, for λ → 0, we obtain that the sign of the difference L(g) − L(g′) coincides with the sign of w(s̄_1) + w(s̄_2) − w(s̄) − 1. For very low levels of complementarity in actions (λ → 0), it then follows that L(g′) < L(g) if and only if w(s̄_1) + w(s̄_2) − w(s̄) > 1, as stated. ☐

References

1. Keynes, J.M. The General Theory of Employment, Interest, and Money; MacMillan: London, UK, 1936.
2. Morris, S.; Shin, H. The Social Value of Public Information. Am. Econ. Rev. 2002, 92, 1521–1534.
3. Angeletos, G.M.; Pavan, A. Transparency of Information and Coordination in Economies with Investment Complementarities. Am. Econ. Rev. 2004, 94, 91–98.
4. Angeletos, G.M.; Pavan, A. Efficient Use of Information and Social Value of Information. Econometrica 2007, 75, 1103–1142.
5. Allen, F.; Morris, S.; Shin, H.S. Beauty Contest and Iterated Expectations in Asset Markets. Rev. Financ. Stud. 2006, 19, 720–752.
6. Hellwig, C.; Veldkamp, L. Knowing What Others Know: Coordination Motives in Information Acquisition. Rev. Econ. Stud. 2009, 76, 223–251.
7. Dewan, T.; Myatt, D.P. The Qualities of Leadership: Direction, Communication, and Obfuscation. Am. Political Sci. Rev. 2008, 102, 351–368.
8. Jimenez-Martinez, A. Information Acquisition Interactions in Two-Player Quadratic Games. Int. J. Game Theory 2014, 43, 455–485.
9. Calvó-Armengol, A.; de Martí-Beltran, J. Information Gathering in Organizations: Equilibrium, Welfare, and Optimal Network Structure. J. Eur. Econ. Assoc. 2009, 7, 116–161.
10. Sherman, J.; Morrison, W.J. Adjustment of an Inverse Matrix Corresponding to a Change in One Element of a Given Matrix. Ann. Math. Stat. 1950, 21, 124–127.
11. Henderson, H.V.; Searle, S.R. On Deriving the Inverse of a Sum of Matrices. SIAM Rev. 1981, 23, 53–60.
  • 1.For example, suppose that the profitability of some investment activity depends on an uncertain exogenous state of the world and on the aggregate investment. Here, investors would like to pick investment strategies that match both the exogenous variable and the other investors’ strategies as well.
  • 2.The “beauty contest” terminology comes originally from a well-known parable by ([1], Chapter 12). Following the seminal contribution of [2], “beauty contest” games have been extensively used to explore a wide range of phenomena in a number of settings, including investment games ([3,4]), financial markets ([5]), monopolistic competition ([6]), or models of political leadership ([7]), among others.
  • 3.For example, under a terrorist attack threat, the analyst wishes to assess which is the most likely location of the attack but also wants to come up with locations not very distant from those predicted by other analysts. In this way, counterterrorism measures could be more effective to prevent the attack.
  • 4.Ref. [4] have investigated in a very comprehensive way the social value of information in an ex-ante efficiency benchmark without restrictions in the form of local interactions and where the agents have both private and public sources of information. For that environment, they have shown that whether more informative content increases or decreases welfare depends on whether equilibrium is efficient under both complete and incomplete information or only under incomplete information. Their contribution highlights that understanding the social value of information depends crucially on the notion of efficiency used. Without a well-specified efficiency benchmark, assessing the social value of information follows the folk theorem that “everything goes” in a second-best world. Assessing the social value of information with complementarities in the presence of networks remains a question far from understood.
  • 5.Of course, signals cannot be unconditionally independent because all of them depend on the state of the world.
• 6. For instance, suppose that, for some agent i, the profile s(i) = s ∈ S_g is the subset of signals {s_1, s_3, s_7, s_100}. Then, we will consider s(i) = s = (s_1, s_3, s_7, s_100) for technical tractability.
• 7. Formally, in the current context, the network is minimally (directedly) connected if for each neighborhood g(i) ∈ g([0, 1]) there exists another neighborhood g(h) ∈ g([0, 1]) such that g(i) ∩ g(h) ≠ ∅. I thank an anonymous referee for pointing out an error in an earlier version of this definition. This notion of minimal connectedness is equivalent to having, for any two different agents, at least a directed path in the network that connects them.
  • 8.Since this is an average over all subsets of signals, E ¯ [ θ ] equivalently indicates the average posterior expectation on θ over all agents.
  • 9.For applications where one considers instead a relatively small number of subsets of signals, the law of large numbers cannot be reasonably invoked to compute averages of expectations on the state. In these cases, keeping track of the higher-order beliefs that are required to characterize equilibria follows a completely different approach. In particular, under certain conditions, one can make use of the iterated application of a knowledge index matrix. The idea of using a knowledge index matrix to track individual arbitrarily higher-order beliefs in a network was originally proposed by [9]. An application of the knowledge index matrix to information acquisition problems in small populations has been recently provided by [8].
