Article

Conflict without an Apparent Cause

Department of Economics, Kennesaw State University, 560 Parliament Garden Way, Kennesaw, GA 30144, USA
* Author to whom correspondence should be addressed.
Submission received: 16 August 2019 / Revised: 25 September 2019 / Accepted: 27 September 2019 / Published: 2 October 2019

Abstract

A game-theoretic model of repeated interaction between two potential adversaries is analyzed to illustrate how conflict could possibly arise from rational decision-makers endogenously processing information, without any exogenous changes to the fundamentals of the environment. This occurs as a result of a convergence of beliefs about the true state of the world by the two players. During each period, each adversary must decide to either stage an attack or not. Conflict ensues if either player chooses to initiate an attack. Choosing to not stage an attack in a given period reveals information to the player’s rival. Thus, over time, beliefs about the true state of the world converge. Depending upon the true state of the world, we can ultimately have either of the two adversaries initiating an attack (either with or without regret) after an arbitrarily long period of tranquility. When this happens, it is as if conflict has suddenly arisen without any apparent cause or impetus. Alternatively (again, depending upon the true state of the world), we could possibly have beliefs converge to a point where neither adversary wants to initiate conflict.

1. Introduction

This paper illustrates how conflict between two parties could possibly arise as a result of rational decision-makers endogenously processing information—that is, without any exogenous changes to the fundamentals of the environment. Consequently, an agent may choose to initiate a conflict for no apparent reason: without instigation or provocation; without observing a change in the behavior of the rival party; without any changes in the costs or benefits of engaging in conflict.
The forces at play in the present model are quite similar to those identified by Hart and Tauman [1] (2004), who examine a market environment between two potential traders. An early paper in this field that predates Hart and Tauman is Geanakoplos and Polemarchakis [2] (1982). Several researchers in the field of finance have since extended this literature to investigate the causes of market crashes; examples include Barlevy and Veronesi [3] (2003) and Yuan [4] (2005). This literature is related to our work; we are, however, not aware of any application of this idea in the analysis of conflicts.
In Hart and Tauman [1] (2004), agent behavior eventually changes as a result of the gradual processing of information and updating of beliefs. In a market setting, trade will occur when one party wants to be a buyer and one party wants to be a seller. A “crash” will occur when both want to be sellers; a “bubble” will occur when both want to be buyers. A market setup is fairly symmetrical, in that each party wants to buy when they believe the value is high and sell when they believe the value is low. Transactions occur when agents have different beliefs, but the market collapses once beliefs converge.
In the game-theoretic model of conflict presented and analyzed below, there are two agents who are opposed to one another (“Adversary A,” denoted as A , and “Adversary B,” denoted as B ). Each wants to act (i.e., initiate an attack) if, and only if, he believes that the probability that he possesses the upper-hand is sufficiently high. Thus, as beliefs converge, an agent may choose to attack after not doing so in previous periods. Within the present framework (depending upon the true state of the world), we can ultimately have: (i) B eventually initiating an attack without regret; (ii) B eventually initiating an attack with regret; (iii) A eventually initiating an attack without regret; (iv) A eventually initiating an attack with regret; or (v) neither agent choosing to initiate an attack (indefinite stability).
Numerous scholars within the field of international relations have broadly examined the causes of conflict. Levy [5] (1998) provides an extensive overview of the literature, assessing multiple explanations including balance of power theories, economic interdependence and war, domestic coalition theories, and decision-making under risk and uncertainty. Academics such as Bueno De Mesquita [6] (1985) and Fearon [7] (1995) have put forth ideas that treat the decision to engage in conflict as a rational choice. In contrast, Levy [8] (1983) highlights how misperceptions—about an adversary’s capabilities or intentions, or about third-party interests—can essentially serve as the root cause of conflict. Such formulations are directly compatible with traditional economic and game-theoretic analyses of behavior. Consistent with this rationality-based approach, van Evera [9] (1998) argues that conflict is more likely to arise when parties view conquest as easy or low cost. Also in this vein, Ohlson [10] (2008) argues that, “people take to arms because they have Reasons in the form of grievances and goals, they have Resources in the form of capabilities and opportunities, and they have Resolve because they perceive of no alternative to violence in order to achieve their goals” (page 134). From here it follows that an adversary would be more inclined to initiate conflict if one of these factors were to change exogenously. For example, if a rival country enters a time of political turmoil, leading to an economic downturn and corresponding decline in military capacity, an adversary might very well choose to launch an opportunistic attack. Identifying such a change in external circumstances as the cause of conflict is often quite natural and consistent with a view of conflict as a rational decision.
But, the main point of the present study is that an easily identifiable external cause of a conflict might not be present, even when a rational agent chooses to initiate conflict. Instead, it may possibly be the case that an agent chooses to rationally initiate a conflict (after multiple periods of peace) for no apparent reason, as a consequence of refining his beliefs about the true state of the world. If this occurs, then any attempt to identify the changed external factor that caused the conflict would be a search in vain. Recognizing the potential for information processing and the dynamic updating of beliefs to essentially be a cause of conflict is important for both academics and practitioners in the fields of international relations and conflict economics.
The remainder of the paper is structured as follows. In Section 2, we outline the general framework of the game that is considered throughout. A very simple example is analyzed in Section 3 to illustrate the main point of the paper described in the previous paragraph (i.e., that observing the action “not attack” can be informative and make an adversary update his beliefs in such a way that he rationally chooses to initiate an attack in a subsequent period). A more complex example is presented and analyzed in Section 4 to more precisely highlight the driving force behind this phenomenon and to illustrate the qualitatively different outcomes that can arise for different true states of the world. A generalization of this more complex example is specified and analyzed in Section 5 to show that conflict may eventually be initiated after an arbitrarily long number of periods of initial tranquility. Section 6 concludes.

2. Framework of Game

Consider a simple game in which there are two parties to a potential conflict: “Adversary A” (denoted as A) and “Adversary B” (denoted as B). These two parties could be two sovereign nations, a sovereign nation and a terrorist organization (for an overview of the literature analyzing terrorism using tools of game theory, see Sandler and Arce M. [11] (2003)), or, more generally, any two parties that could possibly be engaged in conflict with one another. In each period, each player must decide to either attack or wait (i.e., initiate or not initiate conflict). A period is a duration of time over which each player must make a decision. It is assumed that the period length is common knowledge between the players. If either party chooses attack, conflict ensues in the present period. If both parties choose wait, no conflict ensues in the present period and the decision process repeats itself in a subsequent period. Assume each player makes this decision to maximize his expected payoff, taking into account the costs of engaging in conflict, coupled with the expected gain from, and assessed probability of, “winning” the conflict.
Let the continuous interval Ω = [0, 1] denote the set of possible states of the world. Each state of the world is characterized as one of two environments: “favorable to A” (f_A) or “favorable to B” (f_B). Engaging in a conflict costs A resources of C_A = 6 and costs B resources of C_B = 3, regardless of the outcome. (Assuming specific values for costs and benefits, as well as a specific partition of the state space into the different environments of “favorable to A” and “favorable to B,” is acceptable, since the aim of the entire analysis is to illustrate qualitatively different outcomes that can possibly arise.) Separate from these costs, if the true environment is f_A, then A expects to gain and B expects to lose V(f_A) = 12 from a conflict. If instead the true environment is f_B, then (again, separate from these costs) B expects to gain and A expects to lose V(f_B) = 8 from a conflict. Assuming a baseline payoff of zero from no conflict, A will want to attack if and only if he believes that the probability of f_B, denoted q, is such that:
(1 - q)(12 - 6) + q(-8 - 6) ≥ 0
6(1 - q) - 14q ≥ 0
q ≤ 3/10
Similarly, B will want to attack if and only if he believes q is such that:
q(8 - 3) + (1 - q)(-12 - 3) ≥ 0
5q - 15(1 - q) ≥ 0
q ≥ 3/4
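As a quick arithmetic check of these two thresholds, the short sketch below (our own illustration, not part of the paper's formal analysis; variable and function names are ours) recomputes the cutoffs directly from the cost and value parameters:

```python
# Sketch (our illustration): recover the attack cutoffs q <= 3/10 for A
# and q >= 3/4 for B from the cost/value parameters of Section 2.
C_A, C_B = 6, 3          # costs of engaging in conflict
V_fA, V_fB = 12, 8       # value gained/lost when the environment favors A or B

def payoff_A(q):
    """Expected payoff to A from attacking, given assessed probability q of f_B."""
    return (1 - q) * (V_fA - C_A) + q * (-V_fB - C_A)

def payoff_B(q):
    """Expected payoff to B from attacking, given assessed probability q of f_B."""
    return q * (V_fB - C_B) + (1 - q) * (-V_fA - C_B)

# A attacks iff payoff_A(q) >= 0  <=>  q <= (V_fA - C_A) / (V_fA + V_fB) = 6/20 = 3/10
# B attacks iff payoff_B(q) >= 0  <=>  q >= (V_fA + C_B) / (V_fA + V_fB) = 15/20 = 3/4
cutoff_A = (V_fA - C_A) / (V_fA + V_fB)
cutoff_B = (V_fA + C_B) / (V_fA + V_fB)
print(cutoff_A, cutoff_B)   # 0.3 0.75
```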
Suppose that the prior probability assessments of A and B are such that they both accurately believe that each point on the interval Ω = [0, 1] is equally likely. The private information of each agent can be summarized by a partition of the state space. This serves to specify the framework of the game to be played between the two adversaries.

3. Simple Example to Illustrate How Updating of Beliefs Can Lead to Initiation of Attack

Within this section we present an example, with the simplest possible initial private information partitions, to illustrate how the updating of beliefs following an observation of “no attack” can lead to an adversary choosing to initiate an attack in a subsequent period.
Suppose, as illustrated in Figure 1, that Adversary A’s single information partition is:
A_1 = [0, 1]
And that Adversary B’s two information partitions are:
B_1 = [0, 0.5]   and   B_2 = (0.5, 1]
To understand the interpretation of these information partitions, suppose that the true state of the world is ω = 0.45. The players do not observe the actual value of the true state of the world; rather, they only observe the information partition in which it is contained. So, when the true state of the world is ω = 0.45, A knows that the true state is ω ∈ A_1 = [0, 1], but cannot distinguish between the points within this subset. Similarly, B knows that the true state is ω ∈ B_1 = [0, 0.5], but cannot distinguish between the points within this subset.
Assume that
f_B = [0.1, 0.6]
So that
f_A = [0, 1] \ f_B = f_B^C = [0, 0.1) ∪ (0.6, 1]
The set f B is illustrated by the shaded interval in Figure 1.
When analyzing agent behavior in any period, it is critical to correctly determine the probability assessment of each agent over each possible environment (f_A and f_B) given the agent’s knowledge regarding what states of the world are possible. For the simplistic information structure illustrated in Figure 1, this is straightforward. (As will be seen from the analysis of the more complex example in Section 4, to correctly determine the probability assessment of each agent over each possible environment in each period, it is necessary to recognize what information is common knowledge—as defined by Aumann [12] (1976)—between the two agents at any point in time.) In Period 1, A simply knows that ω ∈ A_1 = [0, 1], regardless of the actual state of the world. Thus, based upon his knowledge, A computes the probability that the true state of the world is “favorable to B” (f_B) to be q = (0.6 − 0.1)/(1 − 0) = 0.5. Consequently, A does not want to attack in Period 1.
In contrast, the initial probability assessment by B in Period 1 critically depends upon the actual state of the world. For any ω ∈ B_1 = [0, 0.5], B computes the probability that the true state of the world is “favorable to B” (f_B) to be q = (0.5 − 0.1)/(0.5 − 0) = 0.8. Similarly, for any ω ∈ B_2 = (0.5, 1], B computes the probability that the true state of the world is “favorable to B” (f_B) to be q = (0.6 − 0.5)/(1 − 0.5) = 0.2. Thus, in Period 1 B will choose to stage an attack if and only if the true state of the world is ω ∈ B_1 = [0, 0.5].
If no attack is staged by either adversary in Period 1, then the game transitions to Period 2. But this only occurs if the true state of the world is ω ∈ B_2 = (0.5, 1]. Therefore, if the game reaches this stage (which occurs only as a consequence of B choosing to not attack in Period 1), then A now knows that ω ∈ B_2 = (0.5, 1]. Based upon his updated beliefs, A now computes the probability that the true state of the world is “favorable to B” (f_B) to be q = (0.6 − 0.5)/(1 − 0.5) = 0.2. For this probability assessment, A rationally chooses to stage an attack in Period 2. As a result of updating of beliefs after observing “no attack” by his rival in Period 1, an adversary has chosen to initiate an attack in a later period of interaction. Thus, conflict is initiated in Period 2 after an initial period of tranquility. This conflict results without any observed impetus—there are no new events or apparent provocations, there are no changes in the costs or benefits of engaging in conflict. Rather, the stimulus is simply the updating of information undertaken by an agent after observing the action chosen by his rival.
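The two-period logic of this example is easy to trace numerically. The sketch below is our own illustration under the assumptions of this section (uniform prior, f_B = [0.1, 0.6], cutoffs of 3/10 and 3/4); the helper name prob_fB is ours:

```python
# Our illustration (not the authors' code) of the two-period logic of this example.
def prob_fB(lo, hi, f_B=(0.1, 0.6)):
    """P(f_B | true state lies in [lo, hi]) under a uniform prior, with f_B = [0.1, 0.6]."""
    overlap = max(0.0, min(hi, f_B[1]) - max(lo, f_B[0]))
    return overlap / (hi - lo)

CUTOFF_A, CUTOFF_B = 0.3, 0.75       # A attacks iff q <= 3/10; B attacks iff q >= 3/4

# Period 1: A knows only A_1 = [0, 1]; B knows B_1 = [0, 0.5] or B_2 = (0.5, 1].
print(prob_fB(0.0, 1.0))             # approx. 0.5 -> A waits
print(prob_fB(0.0, 0.5))             # approx. 0.8 -> B attacks immediately if the state is in B_1
print(prob_fB(0.5, 1.0))             # approx. 0.2 -> B waits if the state is in B_2

# Period 2: if no attack occurred, A infers the state must lie in B_2 = (0.5, 1].
print(prob_fB(0.5, 1.0) <= CUTOFF_A) # True -> A attacks in Period 2
```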
Within the next section, we present and analyze a more complex example to illustrate that the driving force which can give rise to this possibility (of an attack being staged after initial periods of tranquility) is the evolution of the states of the world that are common knowledge—as defined by Aumann [12] (1976)—after both adversaries choose to “not attack.”

4. More Complex Example to Illustrate Importance of Updated Beliefs and Common Knowledge

Observing that a rival has chosen to “not attack” in any particular period conveys information which allows a player to update his beliefs about the true state of the world. In the example analyzed in the previous section, we saw how an observation by A that B did not attack in Period 1 reveals to A that the true state of the world must be ω ∈ B_2 = (0.5, 1]. Consequently, A rationally chooses to stage an attack in Period 2 (after the initial tranquility in Period 1). More precisely, this change in behavior by A is a result of a dynamic updating of beliefs about the true state of the world, based upon the information conveyed by the observed chosen action of B. Analysis of a more complex example will serve to illustrate this driving force more precisely and will also allow us to see the qualitatively different types of outcomes which can arise in this framework. From this more complex example we will see that (depending upon the true state of the world) after multiple initial periods of tranquility, we can then have: (i) B choose to initiate an attack without regret (i.e., when the true environment is f_B); (ii) B choose to initiate an attack with regret (i.e., when the true environment is f_A); (iii) A choose to initiate an attack without regret (i.e., when the true environment is f_A); (iv) A choose to initiate an attack with regret (i.e., when the true environment is f_B); or (v) perpetual stability (with neither agent ever choosing to attack in any period).
Suppose, as illustrated in Figure 2, that Adversary A’s partitions are:
A_1 = [0, 0.2],   A_2 = (0.2, 0.4],   A_3 = (0.4, 0.6],   A_4 = (0.6, 0.8],   and   A_5 = (0.8, 1]
And that Adversary B’s partitions are:
B_1 = [0, 0.3],   B_2 = (0.3, 0.5],   B_3 = (0.5, 0.7],   and   B_4 = (0.7, 1].
Assume that
f_B = [0, 0.18] ∪ [0.28, 0.38] ∪ [0.48, 0.58] ∪ [0.9, 1]
So that
f_A = [0, 1] \ f_B = f_B^C = (0.18, 0.28) ∪ (0.38, 0.48) ∪ (0.58, 0.9).
The set f B is illustrated by the four shaded intervals in Figure 2.
As noted in the previous section, when analyzing agent behavior in each period, it is critical to correctly determine the probability assessment of each agent over each possible environment (f_A and f_B) given the agent’s knowledge regarding what states of the world are possible. To formally do so, it is important to recognize what information is common knowledge between the two agents at any point in time. Let p = 1, 2, 3, … denote the different periods of interaction between the agents, and let Ω_p denote the set of information that is common knowledge between the two agents at the start of Period p (before they choose to either wait or attack in Period p).
Because of the way in which the information partitions in Figure 2 overlap one another, regardless of the true state of the world in Period 1 we have Ω_1 = Ω. That is, at the start it is not common knowledge that any particular value of ω ∈ [0, 1] did not occur. For example, suppose the true state of the world is ω = 0.45. A knows ω ∈ A_3 = (0.4, 0.6], and B knows ω ∈ B_2 = (0.3, 0.5]. Thus, while it is mutual knowledge that the true state of the world is neither ω ≤ 0.3 nor ω > 0.6, these facts are not common knowledge. As defined by Aumann [12] (1976), something is common knowledge between A and B if and only if: A knows it; B knows it; A knows that B knows it; B knows that A knows it; A knows that B knows that A knows it; ad infinitum. There are several researchers who extend Aumann [12] (1976). Some examples are Cave [13] (1983), Morris [14] (1995) and Ozkaya [15] (2012).
To understand why neither ω ≤ 0.3 nor ω > 0.6 are common knowledge in Period 1, let K_A S denote the smallest subset S for which it can be stated that “A knows that the true state of the world is within S.” Likewise, let K_A K_B S denote the smallest subset S for which it can be stated that “A knows that B knows that the true state of the world is within S,” and so on. This hierarchical description of layers of knowledge is described in Aumann [16] (1999). Given the information structure as summarized by Figure 2:
K_A S = A_3,
K_A K_B S = B_2 ∪ B_3,
K_A K_B K_A S = A_2 ∪ A_3 ∪ A_4,   and
K_A K_B K_A K_B S = B_1 ∪ B_2 ∪ B_3 ∪ B_4 = Ω.
That is, when the true state of the world is ω = 0.45, in Period 1 it is not the case that A knows that B knows that A knows that B knows that the true state of the world is not any particular value of ω ∈ [0, 1]. Similarly, K_B K_A K_B K_A K_B S = B_1 ∪ B_2 ∪ B_3 ∪ B_4 = Ω. More generally, regardless of the actual true state of the world, we quickly reach a point where both K_A K_B ⋯ S = Ω and K_B K_A ⋯ S = Ω.
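The iterated knowledge operators above can also be computed mechanically. The following sketch is our own illustration for the finite interval partitions of Figure 2 (all names are ours): each partition is a list of cells, and applying an agent’s operator to a set S returns the union of that agent’s cells which intersect S. It reproduces the chain of sets listed above for ω = 0.45.

```python
# Sketch (our illustration) of the iterated knowledge operator for the
# partitions of Figure 2, representing sets as lists of intervals.
A_CELLS = [(0.0, 0.2), (0.2, 0.4), (0.4, 0.6), (0.6, 0.8), (0.8, 1.0)]
B_CELLS = [(0.0, 0.3), (0.3, 0.5), (0.5, 0.7), (0.7, 1.0)]

def intersects(cell, S):
    """True if interval `cell` overlaps (with positive length) any interval in the list S."""
    return any(min(cell[1], hi) - max(cell[0], lo) > 1e-12 for lo, hi in S)

def K(cells, S):
    """Union of an agent's partition cells that intersect S."""
    return [cell for cell in cells if intersects(cell, S)]

# True state omega = 0.45: A's cell is A_3, then expand through B's and A's partitions in turn.
S = [(0.4, 0.6)]              # K_A S = A_3
S = K(B_CELLS, S); print(S)   # K_A K_B S = B_2 u B_3
S = K(A_CELLS, S); print(S)   # K_A K_B K_A S = A_2 u A_3 u A_4
S = K(B_CELLS, S); print(S)   # K_A K_B K_A K_B S = B_1 u B_2 u B_3 u B_4 = all of Omega
```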

4.1. Realized States for Which B Ultimately Chooses to Initiate an Attack

Suppose the true state of the world is ω ∈ A_1 = [0, 0.2]—that is, somewhere on the interval between ω = 0 and ω = 0.2. At the start of Period 1, A knows ω ∈ A_1 = [0, 0.2], and B knows ω ∈ B_1 = [0, 0.3]. Consequently, A computes the probability of the true environment being f_B to be P_A(f_B | A_1 ∩ Ω_1) = 0.18/0.2 = 9/10, and B computes the probability of the true environment being f_B to be P_B(f_B | B_1 ∩ Ω_1) = (0.18 + 0.02)/0.3 = 2/3. The agents base their actions in Period 1 (and in each subsequent period) upon these computed probabilities over the interval which they know to contain the true state of the world (denoted by A_1 and B_1, as in Table 1). Thus, A will choose to wait (i.e., not attack) since P_A(f_B | A_1 ∩ Ω_1) = 9/10 > 3/10, and B will choose to wait (i.e., not attack) since P_B(f_B | B_1 ∩ Ω_1) = 2/3 < 3/4.
But to comprehend how an observation of “no attack” is informative and leads to a dynamic updating of beliefs, it is necessary to compute each agent’s perceived probability of f_B in each distinct information set (in order for agents to be able to infer how their rival would have behaved in other states of the world). To this end, in Period 1: P_A(f_B | A_2 ∩ Ω_1) = 0.1/0.2 = 1/2, P_A(f_B | A_3 ∩ Ω_1) = 0.1/0.2 = 1/2, P_A(f_B | A_4 ∩ Ω_1) = 0/0.2 = 0, and P_A(f_B | A_5 ∩ Ω_1) = 0.1/0.2 = 1/2 for A; and P_B(f_B | B_2 ∩ Ω_1) = 0.1/0.2 = 1/2, P_B(f_B | B_3 ∩ Ω_1) = 0.08/0.2 = 2/5, and P_B(f_B | B_4 ∩ Ω_1) = 0.1/0.3 = 1/3 for B. These probabilities are summarized in the top row of Table 1.
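These period-1 assessments can be generated programmatically. The sketch below is our own illustration (the helper prob_fB is ours); it conditions f_B from above on each information set intersected with Ω_1 = [0, 1] and reproduces the top row of Table 1:

```python
# Our illustration (not the authors' code): period-1 assessments for Figure 2.
F_B = [(0.0, 0.18), (0.28, 0.38), (0.48, 0.58), (0.9, 1.0)]

def prob_fB(cell, f_B=F_B):
    """P(f_B | state in `cell`) under a uniform prior: length of f_B inside cell / length of cell."""
    lo, hi = cell
    covered = sum(max(0.0, min(hi, b) - max(lo, a)) for a, b in f_B)
    return covered / (hi - lo)

for cell in [(0.0, 0.2), (0.2, 0.4), (0.4, 0.6), (0.6, 0.8), (0.8, 1.0)]:
    print(prob_fB(cell))   # approx. 9/10, 1/2, 1/2, 0, 1/2  (A's entries in the top row of Table 1)
for cell in [(0.0, 0.3), (0.3, 0.5), (0.5, 0.7), (0.7, 1.0)]:
    print(prob_fB(cell))   # approx. 2/3, 1/2, 2/5, 1/3      (B's entries in the top row of Table 1)
```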
At the start of Period 2, it is common knowledge that neither agent chose to attack in Period 1. As an immediate consequence, it is common knowledge that the true state of the world is not within A_4, since if it were, then A would have chosen to attack in Period 1. As a further result of no attack occurring in Period 1, it becomes common knowledge that the true state of the world is not within A_5, since this segment of the state space is now cut off from the information sets that overlap one another starting at the true state of the world. Thus, at the start of Period 2, the common knowledge information set is reduced to Ω_2 = [0, 0.6]. To understand this, recognize that at the start of Period 2:
K_A S = [0, 0.2] = A_1,
K_A K_B S = [0, 0.3] = B_1,
K_A K_B K_A S = [0, 0.4] = A_1 ∪ A_2,
K_A K_B K_A K_B S = [0, 0.5] = B_1 ∪ B_2,   and
K_A K_B K_A K_B K_A S = K_A K_B K_A K_B K_A K_B S = ⋯ = [0, 0.6] = A_1 ∪ A_2 ∪ A_3.
Similarly,
K_B S = [0, 0.3] = B_1,
K_B K_A S = [0, 0.4] = A_1 ∪ A_2,
K_B K_A K_B S = [0, 0.5] = B_1 ∪ B_2,   and
K_B K_A K_B K_A S = K_B K_A K_B K_A K_B S = ⋯ = [0, 0.6] = A_1 ∪ A_2 ∪ A_3.
For this new common knowledge information set of Ω_2 = [0, 0.6]: A_4 ∩ Ω_2 = A_5 ∩ Ω_2 = B_4 ∩ Ω_2 = {} and B_3 ∩ Ω_2 ⊂ B_3 ∩ Ω_1. Consequently, P_A(f_B | A_4 ∩ Ω_2), P_A(f_B | A_5 ∩ Ω_2), and P_B(f_B | B_4 ∩ Ω_2) would be meaningless to compute. Further,
P_B(f_B | B_3 ∩ Ω_2) = 0.08/0.1 = 4/5 ≠ 2/5 = P_B(f_B | B_3 ∩ Ω_1).
Each of the other five assessed probabilities is effectively unchanged from Period 1. These probabilities are summarized in the second row of Table 1. In Period 2 A again chooses to wait since P_A(f_B | A_1 ∩ Ω_2) = 9/10 > 3/10, and B again chooses to wait since P_B(f_B | B_1 ∩ Ω_2) = 2/3 < 3/4.
Now, at the start of Period 3, it is common knowledge that neither agent chose to attack in Period 2. Thus, it is common knowledge that the true state of the world is not within B_3, since if it were then B would have chosen to attack in Period 2. The common knowledge information set becomes Ω_3 = [0, 0.5]. For this refined information set, P_B(f_B | B_3 ∩ Ω_3) would be meaningless to compute and P_A(f_B | A_3 ∩ Ω_3) = 0.02/0.1 = 1/5 ≠ 1/2 = P_A(f_B | A_3 ∩ Ω_2). The other assessed probabilities are effectively unchanged from Period 2. These probabilities are summarized in the third row of Table 1. In Period 3 A again waits since P_A(f_B | A_1 ∩ Ω_3) = 9/10 > 3/10, and B again waits since P_B(f_B | B_1 ∩ Ω_3) = 2/3 < 3/4.
At the start of Period 4 it is now common knowledge that the true state of the world is not within A_3, since if it were then A would have chosen to attack in Period 3. The common knowledge information set is now further refined to Ω_4 = [0, 0.4]. Consequently, P_B(f_B | B_2 ∩ Ω_4) = 0.08/0.1 = 4/5 ≠ 1/2 = P_B(f_B | B_2 ∩ Ω_3). In Period 4 neither A nor B will attack since P_A(f_B | A_1 ∩ Ω_4) = 9/10 > 3/10 and P_B(f_B | B_1 ∩ Ω_4) = 2/3 < 3/4.
At the start of Period 5 it is common knowledge that the true state of the world is not within B_2 (since if it were, B would have chosen to attack in Period 4). Thus, Ω_5 = [0, 0.3] and P_A(f_B | A_2 ∩ Ω_5) = 0.02/0.1 = 1/5 ≠ 1/2 = P_A(f_B | A_2 ∩ Ω_4). Again, both agents choose to wait.
After A chooses to wait in Period 5, it becomes common knowledge that the true state of the world is not within A_2. Accordingly, Ω_6 = [0, 0.2]. But this refinement alters B’s perception of the true state of the world in a meaningful way. Agent B now knows that the true state of the world is not in A_2 ∩ B_1 = (0.2, 0.3], but rather must be in A_1 ∩ B_1 = [0, 0.2]. As a result, B computes the probability of the true environment being f_B to be P_B(f_B | B_1 ∩ Ω_6) = 0.18/0.2 = 9/10. Since P_B(f_B | B_1 ∩ Ω_6) = 9/10 > 3/4, agent B will choose to attack in Period 6. Thus, conflict is initiated in Period 6 after five initial periods of calm. As was the case in the simpler example in Section 3, this conflict results without any observed impetus. Rather, it results from the updating of information (i.e., the refinement of the common knowledge information set) undertaken by the agents after observing the actions chosen by their rival.
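The entire period-by-period refinement can be simulated. The sketch below is our own illustration, not the authors' code, and all names are ours: after each quiet period it drops every cell in which its owner would have attacked, then keeps the portion of the remaining state space that is reachable from the true state through overlapping cells (the iterated knowledge operator used above). Run with a true state in A_1, it reproduces the path of Table 1 and ends with B attacking in Period 6.

```python
# Sketch (our illustration) of the belief/common-knowledge dynamics of the Section 4 example.
A_CELLS = [(0.0, 0.2), (0.2, 0.4), (0.4, 0.6), (0.6, 0.8), (0.8, 1.0)]
B_CELLS = [(0.0, 0.3), (0.3, 0.5), (0.5, 0.7), (0.7, 1.0)]
F_B     = [(0.0, 0.18), (0.28, 0.38), (0.48, 0.58), (0.9, 1.0)]
CUT_A, CUT_B = 0.3, 0.75        # A attacks iff q <= 3/10; B attacks iff q >= 3/4

def length(ints):
    return sum(hi - lo for lo, hi in ints)

def inter(xs, ys):
    """Intersection of two unions of intervals (each a list of (lo, hi) pairs)."""
    out = [(max(a, c), min(b, d)) for a, b in xs for c, d in ys]
    return [(lo, hi) for lo, hi in out if hi - lo > 1e-12]

def q(ints):
    """Assessed probability of f_B given that the true state lies in `ints` (uniform prior)."""
    return length(inter(ints, F_B)) / length(ints)

def simulate(omega, max_periods=10):
    Omega = [(0.0, 1.0)]                              # common-knowledge set Omega_p
    for p in range(1, max_periods + 1):
        a_cells = [inter([c], Omega) for c in A_CELLS]
        b_cells = [inter([c], Omega) for c in B_CELLS]
        my_a = next(c for c in a_cells if c and any(lo <= omega <= hi for lo, hi in c))
        my_b = next(c for c in b_cells if c and any(lo <= omega <= hi for lo, hi in c))
        if q(my_a) <= CUT_A:
            return p, "A attacks", q(my_a)
        if q(my_b) >= CUT_B:
            return p, "B attacks", q(my_b)
        # "No attack" is observed: drop every cell in which its owner would have attacked...
        no_attack = inter([s for c in a_cells if c and q(c) > CUT_A for s in c],
                          [s for c in b_cells if c and q(c) < CUT_B for s in c])
        # ...and keep only the part reachable from the true state through overlapping
        # cells (the iterated knowledge operator), which becomes the new Omega_p.
        S = inter(my_a, no_attack)
        while True:
            S2 = inter([c for c in B_CELLS if inter([c], S)], no_attack)
            S2 = inter([c for c in A_CELLS if inter([c], S2)], no_attack)
            if abs(length(S2) - length(S)) < 1e-12:
                break
            S = S2
        Omega = S
    return None                                       # no attack within max_periods

print(simulate(0.05))   # true state in A_1: B attacks in Period 6 with assessed q of 9/10 (Table 1)
```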
In this case, when B chooses to initiate the attack, the information set partitions of the two agents have converged and both agents agree that the true state of the world is ω ∈ A_1 = [0, 0.2]. However, within this range both f_B and f_A are possible, the former being true for ω ∈ [0, 0.18] and the latter being true for ω ∈ (0.18, 0.2]. Thus, this example illustrates how (i) B could choose to initiate an attack without regret (i.e., if the true environment is f_B) and (ii) B could choose to initiate an attack with regret (i.e., if the true environment is f_A).

4.2. Realized States for Which A Chooses to Initiate an Attack

Within this subsection we illustrate that it is also possible for A to eventually initiate an attack after multiple periods of not doing so. Even though we have already shown that B may eventually choose to attack, showing that A may also do so under the same initial information structure and the same costs and benefits from conflict provides additional insight. Only after obtaining both results will we have shown that, with the information structure and the costs and benefits from conflict held fixed, either one of the two potential adversaries may ultimately choose to initiate conflict after multiple initial periods of tranquility, depending upon the true state of the world.
Instead suppose the true state of the world is ω ∈ (0.4, 0.5]—that is, somewhere on the interval between ω = 0.4 and ω = 0.5. At the start of Period 1, A knows ω ∈ A_3 = (0.4, 0.6], and B knows ω ∈ B_2 = (0.3, 0.5]. Consequently, the agents base their actions upon the computed probabilities P_A(f_B | A_3 ∩ Ω_1) = 1/2 and P_B(f_B | B_2 ∩ Ω_1) = 1/2. Both agents choose to wait (i.e., not attack) since P_A(f_B | A_3 ∩ Ω_1) = 1/2 > 3/10 and P_B(f_B | B_2 ∩ Ω_1) = 1/2 < 3/4. Since in Period 1 we have Ω_1 = Ω, all nine of the computed probabilities (i.e., five for A and four for B) are equal in value to what they were in the example from Section 4.1, as reported in the top row of Table 2.
As a first step, we again have that, at the start of Period 2—after both agents observe that no attack occurred in Period 1—it becomes common knowledge that the true state of the world is not in A_4 or A_5. Thus, Ω_2 = [0, 0.6]. Again, P_A(f_B | A_4 ∩ Ω_2), P_A(f_B | A_5 ∩ Ω_2), and P_B(f_B | B_4 ∩ Ω_2) would be meaningless to compute, and P_B(f_B | B_3 ∩ Ω_2) = 0.08/0.1 = 4/5 ≠ 2/5 = P_B(f_B | B_3 ∩ Ω_1). In Period 2 both agents again choose to wait, since P_A(f_B | A_3 ∩ Ω_2) = 1/2 > 3/10 and P_B(f_B | B_2 ∩ Ω_2) = 1/2 < 3/4.
After observing no attack in Period 2, in Period 3 we have Ω_3 = [0, 0.5]. As reported in Table 2, this results in P_A(f_B | A_3 ∩ Ω_3) = 1/5 < 3/10, for which A will choose to attack. Similar to the example in Section 4.1, conflict is initiated in Period 3 after multiple initial periods of calm.
In contrast to the example in Section 4.1, when the true state of the world is ω ∈ (0.4, 0.5], conflict is initiated before the information set partitions of the agents have converged. When A chooses to attack, he knows that the true state of the world is ω ∈ (0.4, 0.5], but B more broadly believes that the true state of the world could be anywhere in the larger set (0.3, 0.5]. Nonetheless, in this example both f_B and f_A are possible, the former being true for ω ∈ [0.48, 0.5] and the latter being true for ω ∈ (0.4, 0.48). This example illustrates how (iii) A could choose to initiate an attack without regret (i.e., if the true environment is f_A) and (iv) A could choose to initiate an attack with regret (i.e., if the true environment is f_B).
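Running the simulation sketch from Section 4.1 above (our illustration; simulate is the hypothetical helper defined there) with a true state in this interval reproduces the path of Table 2:

```python
# Reuses the simulate() sketch from the Section 4.1 illustration (hypothetical helper).
print(simulate(0.45))   # -> A attacks in Period 3 with assessed q of 1/5, matching Table 2
```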

4.3. Realized States for Which Neither Agent Ever Chooses to Initiate an Attack

Finally, suppose the true state of the world is ω ∈ A_5 = (0.8, 1]—that is, somewhere on the interval between ω = 0.8 and ω = 1. At the start of Period 1, A knows ω ∈ A_5 = (0.8, 1], and B knows ω ∈ B_4 = (0.7, 1]. Since in Period 1 we have Ω_1 = Ω, all nine of the initial computed probabilities are again equal in value to what they were in the two previous examples presented in Section 4.1 and Section 4.2, as reported in the top row of Table 3. In Period 1 both agents choose to not attack since P_A(f_B | A_5 ∩ Ω_1) = 1/2 > 3/10 and P_B(f_B | B_4 ∩ Ω_1) = 1/3 < 3/4. As an immediate consequence of observing that no attack occurred in Period 1, it becomes common knowledge that the true state of the world is not within A_4. Furthermore, it becomes common knowledge that the true state is not within A_1 ∪ A_2 ∪ A_3 (since this portion of the state space is now cut off from the information sets that overlap one another starting at the true state of the world). Thus, at the start of Period 2, the common knowledge information set is reduced to Ω_2 = (0.8, 1].
For Ω_2 = (0.8, 1] we have P_A(f_B | A_5 ∩ Ω_2) = 1/2 and P_B(f_B | B_4 ∩ Ω_2) = 1/2 (all other probabilities would be meaningless to compute), for which neither agent chooses to attack. No current or future actions lead to any further refinement of the common knowledge information set or the subsequent assessed probabilities. Beliefs and assessed probabilities have converged and are equal to Ω_p = (0.8, 1], P_A(f_B | A_5 ∩ Ω_p) = 1/2, and P_B(f_B | B_4 ∩ Ω_p) = 1/2 for all p = 2, 3, 4, …. Consequently, neither agent ever chooses to attack in any future period. This illustrates a situation wherein we realize (v) indefinite stability, in which neither agent ever chooses to initiate an attack.
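The same simulation sketch from Section 4.1 (our illustration) also confirms this case: the common knowledge set stops shrinking at (0.8, 1] and no attack is ever triggered, however many periods are simulated:

```python
# Reuses the simulate() sketch from the Section 4.1 illustration (hypothetical helper).
print(simulate(0.9, max_periods=50))   # -> None: both assessments settle at 1/2 and no attack ever occurs
```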
Note that an outcome of this nature could not be realized in the model of market trade analyzed by Hart and Tauman [1]. This is because within their model the two agents had a common probability cutoff upon which behavior was based. Once a sufficient number of periods transpire, the agents’ beliefs converge and both parties will either agree that the better action is buy or that the better action is sell. Eventually the agents agree and want to behave in the same manner, causing the market to collapse (with either a crash or a bubble occurring).
In contrast, in the model of conflict presented here, the interests of an individual agent are diametrically opposed to those of their rival. An agent only wants to “act” (i.e., initiate an attack) when he believes that the probability with which he possesses the upper-hand is sufficiently high. Given the negative constant-sum nature of the payoffs realized if either agent initiates an attack (as described in Section 2, the gain/loss of V(f_A) or V(f_B) sums to zero across the two players, so that the positive-valued costs result in a combined loss of C_A + C_B = 9 when conflict occurs), it is clearly possible for the behavioral cutoffs for the two agents to differ from one another. That is, the probability of f_B below which A will want to initiate an attack can be strictly less than the probability of f_B above which B will want to initiate an attack. If this is the case and the true state of the world is such that beliefs converge to a point where the common assessed probability of f_B is between these two cutoffs, then the agents can realize the harmonious outcome of indefinite stability.

5. Illustration of Conflict Initiated after Arbitrarily Long Period of Initial Tranquility

The aim of this section is to show how the more complex example from Section 4 can be generalized so that conflict is initiated by either party after any arbitrarily large number of periods of tranquility. Suppose A has a total of X ≥ 4 information sets. Let A_1 = [0, 0.2] and A_X = (0.8, 1]. Divide the interval from 0.2 to 0.8 into X − 2 information sets of equal length, so that A_j = (0.2 + 0.6(j − 2)/(X − 2), 0.2 + 0.6(j − 1)/(X − 2)] for j = 2, …, X − 1. Each intermediate segment has length 0.6/(X − 2), with a midpoint of 0.2 + (0.6j − 0.9)/(X − 2) for j = 3, …, X − 1. Suppose B has a total of X − 1 information sets. Let B_1 = [0, 0.2 + 0.3/(X − 2)], B_{X−1} = [0.2 + (0.6(X − 1) − 0.9)/(X − 2), 1], and B_j = [0.2 + (0.6j − 0.9)/(X − 2), 0.2 + (0.6(j + 1) − 0.9)/(X − 2)] for j = 2, …, X − 2. Further define the following X − 1 intervals: f_{1,B} = [0, 0.15 + 0.09/(X − 2)], f_{X−1,B} = [0.9, 1], and f_{j,B} = [0.2 + (0.6j − 0.96)/(X − 2), 0.2 + (0.6j − 0.66)/(X − 2)] for j = 2, …, X − 2. Let f_B = f_{1,B} ∪ f_{2,B} ∪ ⋯ ∪ f_{X−2,B} ∪ f_{X−1,B}, and let f_A = [0, 1] \ f_B = f_B^C. Note that A_{X−1} ∩ f_B = {}. Recognize that the example from Section 4 is a special case of this environment with X = 5.
Observe that for each information set B_j, j = 2, …, X − 2, the lower boundary is the midpoint of A_j and the upper boundary is the midpoint of A_{j+1}. Consequently, because of the manner in which the information sets of the two agents overlap one another, regardless of the true state of the world in Period 1 we have Ω_1 = Ω. Table 4 summarizes each agent’s perceived probability of f_B in each distinct information set when Ω_1 = Ω (which is true at the start of Period 1, no matter the true state of the world).
To see that B might choose to initiate conflict after an arbitrarily large number of periods of tranquility, consider ω ∈ A_1 = [0, 0.2]. At the start of Period 1, A knows ω ∈ A_1 and B knows ω ∈ B_1. Thus, P_A(f_B | A_1 ∩ Ω_1) = 3/4 + 0.45/(X − 2) > 3/4 and P_B(f_B | B_1 ∩ Ω_1) = [0.15(X − 2) + 0.15] / [0.2(X − 2) + 0.3] < 3/4. Neither agent chooses to attack. For each period p = 1, …, X we have B_1 ⊆ Ω_p, so that the relevant probability upon which each agent bases his behavior remains unchanged. However, after A chooses to not attack in Period X, Ω_{X+1} = A_1. Consequently, P_B(f_B | B_1 ∩ Ω_{X+1}) = 3/4 + 0.45/(X − 2) > 3/4, prompting B to attack in Period X + 1. Since X can be arbitrarily large, this example illustrates how B might eventually choose to stage an attack after arbitrarily many periods of tranquility. Recognize that since both f_A and f_B are possible within A_1 = [0, 0.2], we have that B may either regret or not regret initiating this attack.
Next, to see that A might choose to initiate conflict after an arbitrarily large number of periods of tranquility, consider ω ∈ A_2 ∩ B_1 = (0.2, 0.2 + 0.3/(X − 2)]. At the start of Period 1, A knows ω ∈ A_2 and B knows ω ∈ B_1. Thus, P_A(f_B | A_2 ∩ Ω_1) = 1/2 and P_B(f_B | B_1 ∩ Ω_1) = [0.15(X − 2) + 0.15] / [0.2(X − 2) + 0.3] < 3/4. Neither agent chooses to attack. For each period p = 1, …, X − 1 we have A_2 ∩ B_1 ⊆ Ω_p, so that the relevant probability upon which each agent bases his behavior remains unchanged. However, after B chooses to not attack in Period X − 1, Ω_X = B_1. Accordingly, P_A(f_B | A_2 ∩ Ω_X) = 1/5, prompting A to attack in Period X. Since X can be arbitrarily large, this example illustrates how A might eventually choose to stage an attack after arbitrarily many periods of tranquility. Again, since both f_B and f_A are possible within A_2 ∩ B_1 = (0.2, 0.2 + 0.3/(X − 2)], we have that A may either regret or not regret initiating this attack.
Finally, to see that this general example can lead to indefinite stability, consider ω ∈ A_X = (0.8, 1]. In Period 1, A knows ω ∈ A_X and B knows ω ∈ B_{X−1}. Consequently, P_A(f_B | A_X ∩ Ω_1) = 1/2 and P_B(f_B | B_{X−1} ∩ Ω_1) = (X − 2)/(2X − 1) < 3/4. For these probabilities, neither agent chooses to attack. After A chooses not to attack in Period 1, it becomes common knowledge that ω ∈ A_X. That is, Ω_2 = A_X, for which P_A(f_B | A_X ∩ Ω_2) = 1/2 and P_B(f_B | B_{X−1} ∩ Ω_2) = 1/2 < 3/4. Neither agent chooses to attack in Period 2. The common knowledge information set and relevant probabilities remain unchanged in all future periods—that is, Ω_p = A_X and P_A(f_B | A_X ∩ Ω_p) = P_B(f_B | B_{X−1} ∩ Ω_p) = 1/2 for every p = 2, 3, …. Neither agent ever chooses to attack, and therefore, indefinite stability results.
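The generalized construction can also be written down and checked directly. The sketch below is our own illustration (all function names are ours); it builds A's X information sets, B's X − 1 information sets, and f_B for an arbitrary X using exact rational arithmetic, and verifies three of the closed-form entries of Table 4:

```python
# Sketch (our illustration): construct the generalized partitions of Section 5 for a given X
# and verify closed-form period-1 assessments reported in Table 4.
from fractions import Fraction as F

def build(X):
    step = F(6, 10) / (X - 2)
    A = [(F(0), F(2, 10))] \
        + [(F(2, 10) + step * (j - 2), F(2, 10) + step * (j - 1)) for j in range(2, X)] \
        + [(F(8, 10), F(1))]
    mid = lambda j: F(2, 10) + (F(6, 10) * j - F(9, 10)) / (X - 2)   # midpoint of A_j
    B = [(F(0), mid(2))] + [(mid(j), mid(j + 1)) for j in range(2, X - 1)] + [(mid(X - 1), F(1))]
    fB = [(F(0), F(15, 100) + F(9, 100) / (X - 2))] \
         + [(F(2, 10) + (F(6, 10) * j - F(96, 100)) / (X - 2),
             F(2, 10) + (F(6, 10) * j - F(66, 100)) / (X - 2)) for j in range(2, X - 1)] \
         + [(F(9, 10), F(1))]
    return A, B, fB

def prob_fB(cell, fB):
    """P(f_B | state in `cell`) under a uniform prior, computed exactly."""
    lo, hi = cell
    return sum(max(F(0), min(hi, b) - max(lo, a)) for a, b in fB) / (hi - lo)

X = 12
A, B, fB = build(X)
print(prob_fB(A[0], fB) == F(3, 4) + F(45, 100) / (X - 2))       # True: A_1 entry of Table 4
print(prob_fB(B[0], fB) ==
      (F(15, 100) * (X - 2) + F(15, 100)) / (F(2, 10) * (X - 2) + F(3, 10)))  # True: B_1 entry
print(prob_fB(B[-1], fB) == F(X - 2, 2 * X - 1))                 # True: B_{X-1} entry
```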

6. Concluding Remarks

The game presented and analyzed here illustrates how it is possible for an agent to rationally choose to initiate conflict without instigation or provocation, without observing a change in the behavior of his rival, and without any changes in the costs or benefits of engaging in conflict. Rather, conflict may result when rational decision-makers endogenously process information—that is, without any exogenous changes to the fundamentals of the environment. This occurs because an observation that an adversary has chosen to not stage an attack conveys information that allows a player to update his beliefs regarding the true state of the world.
The forces at play are very similar to those identified by Hart and Tauman [1] (2004) in a trading environment. But, the model itself and the ultimate results are qualitatively different. In the present model the state space is continuous, not discrete. Further, because of the negative constant-sum nature of the payoffs realized when an attack occurs, the probabilities which serve as behavioral cutoffs for the two agents can differ in value from one another. Sebenius and Geanakoplos [17] (1983) analyze the updating of beliefs in a gamble game between two players in which the state space is continuous. However, their setup is qualitatively different from that considered here in numerous ways, one of which is the fact that—like in Hart and Tauman—both players ultimately share a common-valued behavioral cutoff. In the present model, by contrast, the cutoffs can differ, and it is consequently possible for the system to reach a point at which an attack is never initiated. The parallel outcome in Hart and Tauman would be for the market to never collapse (that is, never realize a crash or a bubble), but this cannot occur since the traders are symmetric and have the same common cutoff dictating whether they want to buy or sell.
In the present model, as beliefs converge, we will ultimately have: (i) B eventually initiating an attack without regret; (ii) B eventually initiating an attack with regret; (iii) A eventually initiating an attack without regret; (iv) A eventually initiating an attack with regret; or (v) neither agent choosing to initiate an attack (indefinite stability). It is important to note (as illustrated by the presentation and analysis of the more complex example in Section 4) that we can realize all five of these different outcomes for a single example, depending upon what the actual true state of the world happens to be. However, this does not imply that the indefinite avoidance of conflict can ever be expected (even if the true state of the world at the onset is ω ∈ A_5 = (0.8, 1]) or that all conflicts that do arise are the result of information processing. After all, over time there clearly could be exogenous changes to the system (e.g., changes of types, changes of cost/benefits that lead to changes in cutoff probabilities, changes in fundamental information partitions) which lead to rational agents choosing to initiate an attack.

Author Contributions

The research idea was conceptualized by T.M. The formal analysis, investigation, writing of results, and revisions of the manuscript were all jointly conducted by A.B. and T.M.

Funding

This research received no external funding.

Acknowledgments

We would like to thank participants of the 27th Stony Brook International Conference on Game Theory and two anonymous referees for providing valuable feedback.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hart, S.; Tauman, Y. Market Crashes without External Shocks. J. Bus. 2004, 77, 1–8. [Google Scholar] [CrossRef]
  2. Geanakoplos, J.D.; Polemarchakis, H.M. We Can’t Disagree Forever. J. Econ. Theory 1982, 28, 192–200. [Google Scholar] [CrossRef]
  3. Barlevy, G.; Veronesi, P. Rational panics and stock market crashes. J. Econ. Theory 2003, 110, 234–263. [Google Scholar] [CrossRef]
  4. Yuan, K. Asymmetric price movements and borrowing constraints: A rational expectations equilibrium model of crises, contagion, and confusion. J. Financ. 2005, 60, 379–411. [Google Scholar] [CrossRef]
  5. Levy, J.S. The Causes of War and the Conditions of Peace. Annu. Rev. Political Sci. 1998, 1, 139–165. [Google Scholar] [CrossRef]
  6. De Mesquita, B.B. The War Trap Revisited: A Revised Expected Utility Model. Am. Political Sci. Rev. 1985, 79, 156–177. [Google Scholar] [CrossRef]
  7. Fearon, J.D. Rationalist Explanations for War. Int. Organ. 1995, 49, 379–414. [Google Scholar] [CrossRef]
  8. Levy, J.S. Misperception and the Causes of War: Theoretical Linkages and Analytical Problems. World Politics 1983, 36, 76–99. [Google Scholar] [CrossRef]
  9. Van Evera, S. Offense, Defense, and the Causes of War. Int. Secur. 1998, 22, 5–43. [Google Scholar] [CrossRef]
  10. Ohlson, T. Understanding Causes of War and Peace. Eur. J. Int. Relat. 2008, 14, 133–160. [Google Scholar] [CrossRef]
  11. Sandler, T.; Arce M, D.G. Terrorism and Game Theory. Simul. Gaming 2003, 34, 319–337. [Google Scholar] [CrossRef]
  12. Aumann, R.J. Agreeing to Disagree. Ann. Stat. 1976, 4, 1236–1239. [Google Scholar] [CrossRef]
  13. Cave, J.A. Learning to agree. Econ. Lett. 1983, 12, 147–152. [Google Scholar] [CrossRef]
  14. Morris, S. The common prior assumption in economic theory. Econ. Philos. 1995, 11, 227–253. [Google Scholar] [CrossRef]
  15. Ozkaya, A. Agree or Convince (No. 12-3); Galatasaray University Economic Research Center: Istanbul, Turkey, 2012. [Google Scholar]
  16. Aumann, R.J. Interactive Epistemology I: Knowledge. Int. J. Game Theory 1999, 28, 263–300. [Google Scholar] [CrossRef]
  17. Geanakoplos, J.; Sebenius, J.K. Don’t Bet on It: Contingent Agreements with Asymmetric Information. J. Am. Stat. Assoc. 1983, 78, 424–426. [Google Scholar]
Figure 1. Information sets for simple example.
Figure 2. Information sets for more complex example.
Table 1. Dynamics when true state of world is ω ∈ A_1 = [0, 0.2].

| Period | Ω_p | A_1 | A_2 | A_3 | A_4 | A_5 | A's action | B_1 | B_2 | B_3 | B_4 | B's action |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | [0, 1] | 9/10 | 1/2 | 1/2 | 0 | 1/2 | wait | 2/3 | 1/2 | 2/5 | 1/3 | wait |
| 2 | [0, 0.6] | 9/10 | 1/2 | 1/2 | – | – | wait | 2/3 | 1/2 | 4/5 | – | wait |
| 3 | [0, 0.5] | 9/10 | 1/2 | 1/5 | – | – | wait | 2/3 | 1/2 | – | – | wait |
| 4 | [0, 0.4] | 9/10 | 1/2 | – | – | – | wait | 2/3 | 4/5 | – | – | wait |
| 5 | [0, 0.3] | 9/10 | 1/5 | – | – | – | wait | 2/3 | – | – | – | wait |
| 6 | [0, 0.2] | 9/10 | – | – | – | – | wait | 9/10 | – | – | – | attack |

Entries under A_i and B_j are the assessed probabilities P_A(f_B | A_i ∩ Ω_p) and P_B(f_B | B_j ∩ Ω_p); a dash indicates an empty intersection.
Table 2. Dynamics when true state of world is ω ∈ (0.4, 0.5].

| Period | Ω_p | A_1 | A_2 | A_3 | A_4 | A_5 | A's action | B_1 | B_2 | B_3 | B_4 | B's action |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | [0, 1] | 9/10 | 1/2 | 1/2 | 0 | 1/2 | wait | 2/3 | 1/2 | 2/5 | 1/3 | wait |
| 2 | [0, 0.6] | 9/10 | 1/2 | 1/2 | – | – | wait | 2/3 | 1/2 | 4/5 | – | wait |
| 3 | [0, 0.5] | 9/10 | 1/2 | 1/5 | – | – | attack | 2/3 | 1/2 | – | – | wait |

Entries as in Table 1.
Table 3. Dynamics when true state of world is ω ∈ A_5 = (0.8, 1].

| Period | Ω_p | A_1 | A_2 | A_3 | A_4 | A_5 | A's action | B_1 | B_2 | B_3 | B_4 | B's action |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | [0, 1] | 9/10 | 1/2 | 1/2 | 0 | 1/2 | wait | 2/3 | 1/2 | 2/5 | 1/3 | wait |
| 2 | (0.8, 1] | – | – | – | – | 1/2 | wait | – | – | – | 1/2 | wait |

Entries as in Table 1.
Table 4. Initial probability assessments in generalized complex example.

| Period | Ω_p | A_1 | A_j (j = 2, …, X−2) | A_{X−1} | A_X | B_1 | B_j (j = 2, …, X−3) | B_{X−2} | B_{X−1} |
|---|---|---|---|---|---|---|---|---|---|
| 1 | [0, 1] | 3/4 + 0.45/(X−2) | 1/2 | 0 | 1/2 | [0.15(X−2) + 0.15]/[0.2(X−2) + 0.3] | 1/2 | 2/5 | (X−2)/(2X−1) |
