Review

Evolutionary Processes in Quantum Decision Theory

by Vyacheslav I. Yukalov 1,2,†
1 Bogolubov Laboratory of Theoretical Physics, Joint Institute for Nuclear Research, 141980 Dubna, Russia
2 Department of Management, Technology and Economics, ETH Zürich, Swiss Federal Institute of Technology, CH-8032 Zürich, Switzerland
† Current address: Bogolubov Laboratory of Theoretical Physics, Joint Institute for Nuclear Research, 141980 Dubna, Russia.
Submission received: 3 May 2020 / Revised: 10 June 2020 / Accepted: 14 June 2020 / Published: 18 June 2020
(This article belongs to the Special Issue Quantum Models of Cognition and Decision-Making)

Abstract: The review presents the basics of quantum decision theory, with an emphasis on temporal processes in decision making. The aim is to explain the principal points of the theory. It is elucidated how an operationally testable, rational choice between alternatives differs from a choice decorated by irrational feelings. Quantum-classical correspondence is emphasized. A model of a quantum intelligence network is described. Dynamic inconsistencies are shown to be resolved in the frame of quantum decision theory.

1. Introduction

In recent years, there has been great interest in the possibility of formulating decision theory in the language of quantum mechanics. Numerous references on this topic can be found in the books [1,2,3,4] and review articles [5,6,7,8]. This interest is caused by the inability of classical decision theory [9] to describe the behavior of real decision makers, which requires the development of other approaches. Resorting to the techniques of quantum theory gives hope for a better representation of behavioral decision making. There are several variants of using quantum mechanics for interpreting conscious effects. The aim of the present review is not the description of the existing variants, which would need too much space and can be found in the cited literature [1,2,3,4,5,6,7,8], but a survey of the approach suggested by the author and his coworkers. This approach was coined [10] quantum decision theory (QDT).
In the present review, we limit ourselves to considering quantum decision theory and do not touch other trends in the ramified applications of quantum techniques, such as quantum approaches in physics, chemistry, biology, economics and finance, quantum information processing, quantum computing, and quantum games. It is evident that all those fields cannot be reasonably described in a single review.
Although the theory of quantum games shares similarities with decision theory, there exists an important difference between the standard treatment of quantum games [11,12,13,14,15] and the main idea of quantum decision theory presented in this review. In the theory of quantum games, one usually assumes that players are quantum devices of a kind, following quantum rules [16,17]. In the approach of quantum decision theory [10], however, decision makers do not have to be quantum devices, but can be real human beings. The mathematics of QDT is analogous to the mathematics in the theory of quantum measurements, where an observer is a classical human being, while the observed processes are characterized by quantum laws. In QDT, quantum theory is a technical language allowing for the description of decision processes. Quantum techniques turn out to be a very convenient tool for characterizing realistic human decision processes incorporating rational–irrational duality, because quantum techniques are designed to take into account the dual nature of the quantum world, such as the particle–wave duality. The mathematical generalization of decision-making processes by including in them the rational–irrational duality is one of the main achievements of QDT. Summarizing, the specific features of QDT, distinguishing it from many variants of decision theory employing quantum techniques, are as follows.
(i) The mathematics of QDT is analogous to that used for characterizing quantum measurements, so that there is a direct correspondence between QDT and the theory of quantum measurements.
(ii) The approach is general, not only in the description of both decision theory and quantum measurements, but also in its mathematical formulation, allowing for the interpretation of various decision events or quantum measurements.
(iii) The theory shows the difference between decision making with respect to operationally testable events and the choice under uncertainty caused by irrational subconscious feelings.
(iv) Single decisions and successive decisions are described on an equal footing.
(v) The temporal evolution of decision processes is formulated.
(vi) Quantum-classical correspondence is preserved, explaining the relation between quantum and classical probabilities.
(vii) No paradoxes of classical decision theory arise.
(viii) Different dynamic decision inconsistencies find a natural explanation in the frame of QDT.
(ix) The theory can be applied to single decision makers and also to decision-maker societies forming quantum intelligence networks.
Of course, not all of the above-mentioned aspects can be considered in detail in the present review; to describe all of them would require a huge book. We shall concentrate on the principal points the theory is based on and will emphasize temporal effects that have not been sufficiently discussed in previous publications. In order to make the basic ideas of QDT clear to the reader, it is useful to present the foundations in several steps. First, we need to define the process of decision making in the case of operationally testable events. This introduces the method of calculating quantum probabilities in the case of projection-valued measures. Then it is straightforward to generalize the considerations to dual processes, when a decision is made by taking into account rational and irrational sides, which involves the use of positive operator-valued measures. Next, the approach to describing a society of decision makers, forming a quantum intelligence network, is presented. Finally, several dynamic decision inconsistencies are considered as an illustration of how they can be naturally resolved in the frame of QDT.

2. Operationally Testable Events

The mathematics of QDT is similar to that employed for treating quantum measurements. Therefore, each formula can be interpreted from two angles: from the point of view of quantum measurements or from that of decision theory [10,18,19,20]. Below, as a rule, we follow the interpretation related to decision theory, only occasionally mentioning quantum measurements. Some parts of the scheme below are familiar from quantum mechanics, but here we suggest an interpretation in the language of decision theory, which is needed for introducing the basic notions of QDT. These basic notions are introduced step by step in order to better demonstrate the logic of QDT. We will not plunge into the details of the numerous applications that can be found in the published literature. The main aim of this review is to explain the principal points of the approach, with an emphasis on the evolutionary side of decision processes.
The typical illustration of decision making is based on the choice between given alternatives, say forming a set $\{A_n\}$ enumerated by the index $n = 1, 2, \ldots, N_A$. The alternatives correspond to events that can be operationally tested. Each alternative is represented by a vector $|A_n\rangle$, where the bra-ket notation [21] is used. The vectors of alternatives pertain to a Hilbert space of alternatives
$$ \mathcal{H}_A = \mathrm{span}\{\, |n\rangle \,\} , $$
being a closed linear envelope over an orthonormal basis. Decisions are made by a subject whose mind is represented by a Hilbert space
$$ \mathcal{H}_S = \mathrm{span}\{\, |\alpha\rangle \,\} , $$
which can be called the subject space of mind, or just the subject space. Thus, the total space where decisions are made is the decision space
$$ \mathcal{H} = \mathcal{H}_A \otimes \mathcal{H}_S . $$
This is a closed linear envelope over an orthonormal basis,
$$ \mathcal{H} = \mathrm{span}\{\, |n\alpha\rangle \equiv |n\rangle \otimes |\alpha\rangle \,\} . $$
The alternatives from the given set, represented by the vectors $|A_n\rangle$, are assumed to be orthonormal to each other,
$$ \langle A_m | A_n \rangle = \delta_{mn} , $$
which means that the alternatives are mutually exclusive. The alternative vectors are not required to form a basis. To each alternative, it is possible to assign a corresponding alternative operator
$$ \hat P(A_n) = | A_n \rangle \langle A_n | . $$
These operators enjoy the properties of projectors,
$$ \hat P(A_m) \hat P(A_n) = \delta_{mn} \hat P(A_n) , \qquad \sum_n \hat P(A_n) = 1 , $$
whose family forms a projection-valued measure.
The state of a decision maker is represented by a statistical operator, that is, a positive semi-definite trace-one operator,
$$ \mathrm{Tr}_{\mathcal{H}}\, \hat\rho(t) = 1 , $$
depending on time $t$ and acting on the space $\mathcal{H}$. In decision making, $\hat\rho$ is termed the decision-maker state, or just the decision state. The pair $\{ \mathcal{H}, \hat\rho(t) \}$ is a decision ensemble. The probability of observing an event $A_n$ is
$$ p(A_n, t) = \mathrm{Tr}_{\mathcal{H}}\, \hat\rho(t) \hat P(A_n) , $$
which is uniquely defined for a Hilbert space of dimensionality larger than two [22]. Since $\hat P(A_n)$ acts on the space $\mathcal{H}_A$, probability (6) can be rewritten as
$$ p(A_n, t) = \mathrm{Tr}_A\, \hat\rho_A(t) \hat P(A_n) , $$
where the trace is over the space $\mathcal{H}_A$, and
$$ \hat\rho_A(t) = \mathrm{Tr}_S\, \hat\rho(t) = \sum_\alpha \langle \alpha | \hat\rho(t) | \alpha \rangle , $$
with the trace over the space $\mathcal{H}_S$. Note that probability (7) can also be written as
$$ p(A_n, t) = \mathrm{Tr}_A\, \hat P(A_n) \hat\rho_A(t) \hat P(A_n) . $$
Expression (6), or (7), is the predicted probability of observing an event $A_n$; in decision theory, this is the predicted probability of choosing the alternative $A_n$. When comparing the predicted theoretical probabilities with empirical probabilities, the former are interpreted in the frequentist framework [23,24]. Thus, $p(A_n, t)$ is the fraction of equivalent decision makers preferring the alternative $A_n$ at time $t$. This can be reformulated in a different way: $p(A_n, t)$ is the frequency of choosing the alternative $A_n$ by a decision maker, if the choice is repeated many times under the same conditions until the time $t$.
We say that an alternative $A_i$ is stochastically preferred (or simply preferred) to an alternative $A_j$ if and only if $p(A_i) > p(A_j)$, and the alternatives $A_i$ and $A_j$ are stochastically indifferent if and only if $p(A_i) = p(A_j)$.
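The scheme of this section is easy to reproduce numerically. Below is a minimal sketch (illustrative only; the two-dimensional space, the decision state, and all numbers are invented for the example, not taken from the paper) of the probability $p(A_n) = \mathrm{Tr}_A\, \hat\rho_A \hat P(A_n)$ and of stochastic preference:

```python
import numpy as np

# Two mutually exclusive alternatives |A_1>, |A_2> as an orthonormal basis
# of a 2-dimensional space of alternatives H_A.
A1 = np.array([1.0, 0.0])
A2 = np.array([0.0, 1.0])
P1 = np.outer(A1, A1)                 # projector P(A_1) = |A_1><A_1|
P2 = np.outer(A2, A2)                 # projector P(A_2) = |A_2><A_2|

# A decision state: a positive semi-definite, trace-one operator
# (here a pure state, with invented amplitudes).
psi = np.array([np.sqrt(0.7), np.sqrt(0.3)])
rho_A = np.outer(psi, psi)

p1 = np.trace(rho_A @ P1).real        # p(A_1)
p2 = np.trace(rho_A @ P2).real        # p(A_2)

assert np.isclose(p1 + p2, 1.0)       # the projectors resolve the identity
assert p1 > p2                        # A_1 is stochastically preferred to A_2
```

For this invented state, $p(A_1) = 0.7 > p(A_2) = 0.3$, so $A_1$ is stochastically preferred.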

3. Single Decision Making

Usually, one considers decision making as a process instantaneous in time, which is certainly an overly rough modeling. Here we stress the importance of dealing with realistic decision making developing in time. The evolutionary picture saves us from various inconsistencies arising in the toy model of instantaneous decisions [18,25].
Suppose at the initial time $t = 0$ we plan to make a choice between the alternatives from the given set. This implies a stage of preparation, when the decision maker is characterized by a state $\hat\rho(0)$. The evolution of the decision-maker state in time can be represented as
$$ \hat\rho(t) = \hat U(t)\, \hat\rho(0)\, \hat U^+(t) , $$
involving the evolution operator $\hat U(t)$. To keep the state normalization intact, the evolution operator has to be unitary,
$$ \hat U^+(t)\, \hat U(t) = 1 . $$
By employing a self-adjoint evolution generator $\hat H(t)$, the time transformation of the evolution operator can be written as the Lie differential equation
$$ \frac{d}{dt}\, \hat U(t) = -i \hat H(t)\, \hat U(t) , $$
with the initial condition
$$ \hat U(0) = \hat 1 . $$
A very important requirement is that the considered decision makers be well defined individuals who do not drastically vary in time; otherwise, there would be no meaning in predicting their decisions. In other words, the mental features of decision makers at one moment of time are to be similar to those at a different moment of time. Briefly speaking, the decision makers should possess the property of self-similarity. This imposes a restriction on the form of the evolution generator, which can be written as the commutator condition
$$ \left[\, \hat H(t),\; \int_0^t \hat H(t')\, dt' \,\right] = 0 . $$
Recall that in quantum theory an operator commuting with the evolution generator is called an integral of motion. In our case, condition (12) does not mean that the evolution generator is independent of time, but it tells us that the properties of this operator are in some sense invariant with time, preserving the similarity of the decision makers at different moments of time. Thus, in decision theory, condition (12) can be understood as the invariance of the properties of decision makers, that is, as decision-maker self-similarity. In the theory of operator differential equations, commutator (12) is termed the Lappo-Danilevsky condition [26]. Under this provision, the evolution operator, satisfying the evolution Equation (10) with the initial condition (11), reads as
$$ \hat U(t) = \exp\left\{ -i \int_0^t \hat H(t')\, dt' \right\} . $$
This form of the evolution operator satisfies the group properties that can be represented as the property of functional self-similarity [27].
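A simple way to see the Lappo-Danilevsky condition at work is to take a generator of the factorized form $\hat H(t) = f(t)\, \hat H_0$, which automatically commutes with its own time integral. The sketch below (the Hermitian matrix $H_0$ and the rate $f(t)$ are invented for illustration; this is not code from the paper) checks the unitarity of $\hat U(t)$ and the Lie evolution equation by a finite difference:

```python
import numpy as np

# Illustrative Lappo-Danilevsky situation: H(t) = f(t) * H0 with a fixed
# Hermitian H0, so that U(t) = exp(-i F(t) H0) with F(t) = int_0^t f(t') dt'.
H0 = np.array([[1.0, 0.4],
               [0.4, -0.5]])                   # invented Hermitian profile
f = lambda t: np.exp(-t)                       # invented time-dependent rate
F = lambda t: 1.0 - np.exp(-t)                 # its integral from 0 to t

E, V = np.linalg.eigh(H0)                      # spectral decomposition of H0

def U(t):
    # exp(-i F(t) H0) built from the eigen-decomposition of H0
    return V @ np.diag(np.exp(-1j * F(t) * E)) @ V.conj().T

t = 0.7
# unitarity: U^+(t) U(t) = 1
assert np.allclose(U(t).conj().T @ U(t), np.eye(2))

# the Lie equation dU/dt = -i H(t) U(t), checked by a central difference
h = 1e-6
dU = (U(t + h) - U(t - h)) / (2 * h)
assert np.allclose(dU, -1j * f(t) * H0 @ U(t), atol=1e-5)
```

Because the eigenvectors of $\hat H(t)$ do not change in time here, the exponential form holds without time ordering, which is exactly the content of the Lappo-Danilevsky condition.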
Till now, we have not specified the choice of a basis for the decision space $\mathcal{H}$. It is convenient to take as the basis the one composed of the eigenfunctions of the evolution generator, such that
$$ \hat H(t)\, | n\alpha \rangle = E_{n\alpha}(t)\, | n\alpha \rangle . $$
Generally, the eigenfunctions here can depend on time. However, there are two cases when this dependence is either negligible or absent altogether. One possibility is when the eigenfunctions vary with time much more slowly than the eigenvalues $E_{n\alpha}(t)$. Then there exists a time horizon up to which the time variation of the eigenfunctions can be neglected, which is regulated by a kind of adiabatic condition [28]. The other case is the already assumed Lappo-Danilevsky condition (12), implying decision-maker self-similarity. It is easy to show that the Lappo-Danilevsky condition (12) is equivalent to the validity of eigenproblem (14) with time-independent eigenfunctions. Exactly the same situation occurs in quantum theory in the case of nondestructive, or nondemolition, measurements [29,30,31,32,33,34,35].
This basis is complete, since the evolution generator is self-adjoint. Then
$$ \hat U(t)\, | n\alpha \rangle = U_{n\alpha}(t)\, | n\alpha \rangle , $$
where
$$ U_{n\alpha}(t) = \exp\left\{ -i \int_0^t E_{n\alpha}(t')\, dt' \right\} . $$
In this way, we find the expression for the time-dependent predicted probability of choosing the alternative $A_n$ at time $t$,
$$ p(A_n, t) = \sum_{n_1 n_2} \sum_\alpha U_{n_1\alpha}(t)\, \langle \alpha n_1 | \hat\rho(0) | n_2\alpha \rangle\, U^*_{n_2\alpha}(t)\, \langle n_2 | \hat P(A_n) | n_1 \rangle . $$
Again, to compare this expression with experimental observations, it is reasonable to interpret it as a frequentist probability [23,24], although other interpretations are admissible [36].
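The structure of this time-dependent probability can be illustrated by a toy model (all eigenvalues $E_{n\alpha}$, the initial state, and the alternative vector below are invented for the example): a $2 \times 2$ decision space $\mathcal{H}_A \otimes \mathcal{H}_S$, a generator diagonal in the basis $|n\alpha\rangle$ with constant eigenvalues, and an alternative $|A_1\rangle$ that is a superposition of the basis vectors, so that the probability oscillates in time through interference:

```python
import numpy as np

dA, dS = 2, 2                                   # dim H_A and dim H_S
E = np.array([[0.3, 1.1],                       # invented eigenvalues E_{n alpha}
              [0.9, 0.2]])                      # (rows: n, columns: alpha)

# alternative |A_1> = (|1> + |2>)/sqrt(2) in H_A and its projector on H
A1 = np.array([1.0, 1.0]) / np.sqrt(2)
P_A1 = np.kron(np.outer(A1, A1), np.eye(dS))    # acts trivially on H_S

# a pure initial decision-maker state on H = H_A (x) H_S (invented)
psi0 = np.array([0.6, 0.2, 0.5, np.sqrt(0.35)])
rho0 = np.outer(psi0, psi0)

def p_A1(t):
    # U(t)|n alpha> = exp(-i E_{n alpha} t)|n alpha> for constant eigenvalues
    U = np.diag(np.exp(-1j * E.reshape(-1) * t))
    rho_t = U @ rho0 @ U.conj().T
    return np.trace(rho_t @ P_A1).real

for t in (0.0, 0.5, 1.0, 2.0):
    assert 0.0 <= p_A1(t) <= 1.0                # a genuine probability
# the value oscillates in t through interference of the terms in the sum
```

Because $|A_1\rangle$ is not a basis vector, the off-diagonal matrix elements of $\hat\rho(0)$ enter the sum and produce the oscillating interference contribution.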
The evolution generator is defined on the decision space $\mathcal{H}$ and characterizes the velocity of change of the decision-maker state caused by the influence of the set of alternatives. The impact of the evolution generator on the decision-maker state is assumed to be finite, such that, if the decision process starts at time $t_A$ and ends at time $t_A + \tau_A$, hence taking place in the time interval $[t_A, t_A + \tau_A]$, then
$$ \left|\, \int_{t_A}^{t_A + \tau_A} E_{n\alpha}(t)\, dt \,\right| < \infty . $$
This impact can be asymptotically large, while remaining finite. The speed of the decision process can be quantified by a rate parameter $g$, on which the eigenvalue $E_{n\alpha}(t) = E_{n\alpha}(t, g)$ depends in such a way that in the limit of a slow process
$$ E_{n\alpha}(t, g) \rightarrow 0 \qquad (g \rightarrow 0) , $$
while under a fast process
$$ E_{n\alpha}(t, g) \rightarrow \infty \qquad (g \rightarrow \infty) . $$
In the case of a slow process,
$$ U_{n\alpha}(t) \simeq 1 \qquad (g \rightarrow 0) . $$
This yields the probability
$$ p(A_n, t) \simeq \mathrm{Tr}_A\, \hat\rho_A(0) \hat P(A_n) \qquad (g \rightarrow 0) , $$
which implies that the decision state practically does not change:
$$ \hat\rho_A(t) \simeq \hat\rho_A(0) \qquad (g \rightarrow 0) . $$
When the process is fast, in the sum (17) one has
$$ U_{m\alpha}(t)\, U^*_{n\alpha}(t) \rightarrow \delta_{mn} \qquad (g \rightarrow \infty) . $$
This leads to the probability
$$ p(A_n, t) \simeq \mathrm{Tr}_A\, \hat\rho_A(t) \hat P(A_n) \qquad (g \rightarrow \infty) , $$
with the state
$$ \hat\rho_A(t) \simeq \sum_m \hat P_m\, \hat\rho_A(0)\, \hat P_m \qquad (g \rightarrow \infty) , $$
where
$$ \hat P_m \equiv | m \rangle \langle m | . $$
In the intermediate cases of the process rate, one has to resort to the general probability (17).
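The fast-process limit amounts to a dephasing of the decision state: the rapidly oscillating factors $U_{m\alpha} U^*_{n\alpha}$ average out unless $m = n$, wiping out the off-diagonal elements of $\hat\rho_A(0)$ in the basis $\{ |m\rangle \}$. A short sketch (the initial state is invented for illustration):

```python
import numpy as np

# Fast-process limit: rho_A -> sum_m P_m rho_A(0) P_m, i.e. the state keeps
# only its diagonal part in the basis {|m>}.
rho_A0 = np.array([[0.7, 0.3],
                   [0.3, 0.3]])                 # invented initial decision state

basis = np.eye(2)
P = [np.outer(basis[m], basis[m]) for m in range(2)]

rho_fast = sum(Pm @ rho_A0 @ Pm for Pm in P)    # dephased state

assert np.allclose(rho_fast, np.diag([0.7, 0.3]))
# the diagonal probabilities survive; the interference terms are wiped out
```

The surviving diagonal is exactly what enters the fast-process probability, while the slow-process limit keeps the full initial state, interference terms included.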

4. Changing Initial Conditions

Assume that a subject made a decision during the time interval $[t_A, t_n \equiv t_A + \tau_A]$, choosing the alternative $A_n$. Then what would be the subsequent evolution of the probability $p(A_n, t)$? Strictly speaking, there is nothing special in the fact of one subject making a decision at any moment of time. According to the frequentist understanding, the probability gives a distribution over an ensemble of decision makers or over many repeated decisions. This means that the probability yields the fraction of decision makers (or the fraction of decisions) preferring this or that alternative. In the following moments of time, the probability will continue to be defined by Equation (17).
However, one can put the question in a different manner: Suppose we are interested in the decision making of just a single subject who certainly chose the alternative $A_n$, so that at the moment $t_n$ the probability became one, instead of that prescribed by Equation (17). Then what would be the subsequent evolution of the probability? Again, there is nothing extraordinary in this case. Before making a decision, the predicted probability was described by Equation (17). After the decision was made, if we insist that the a posteriori probability became one, we have to replace $p(A_n, t_n)$ by one, treating the latter as a new initial condition. Thus, the replacement
$$ p(A_n, t_n) \rightarrow 1 $$
means nothing but the change of the initial condition for the equation describing the evolution of the decision state. The related replacement
$$ \hat\rho_A(t_n)\; \rightarrow\; \hat\rho_L(t_n) $$
assumes that, starting from the moment of time $t_n$, we treat as the new initial condition the state defined by the equality
$$ \mathrm{Tr}_A\, \hat\rho_L(t_n) \hat P(A_n) = 1 . $$
The simplest solution of the latter equation is the Lüders state [37]
$$ \hat\rho_L(t_n) = \frac{\hat P(A_n)\, \hat\rho_A(t_n)\, \hat P(A_n)}{\mathrm{Tr}_A\, \hat\rho_A(t_n) \hat P(A_n)} . $$
Note that here $\hat\rho_L(t_n)$ is the a posteriori post-decision state, while $\hat\rho_A(t_n)$ is the a priori anticipated state; that is,
$$ \hat\rho_L(t_n) \equiv \hat\rho_L(t_n + 0) , \qquad \hat\rho_A(t_n) \equiv \hat\rho_A(t_n - 0) . $$
The replacement (26) is often labeled "collapse". For wave functions, this replacement is equivalent to the sudden change from a state $|\psi\rangle$ to the state $|A_n\rangle$. It is easy to check that the wave-function "collapse" implies the replacement
$$ \hat\rho_A(t_n) = | \psi \rangle \langle \psi |\; \rightarrow\; \hat\rho_L(t_n) = | A_n \rangle \langle A_n | . $$
Under the name "collapse", one means a sudden jump of the state, which could be dramatic, provided the wave function or decision state described real matter. However, a decision state and a wave function are nothing but probabilistic characteristics that can be used for calculating and predicting a priori quantum probabilities satisfying evolution equations. Fixing a decision state at some moment of time simply means fixing new initial conditions for the state evolution. Thus, the Lüders state is just the new initial condition at the moment of time $t_n$,
$$ \hat\rho_L(t_n) = \mathrm{Tr}_S\, \hat\rho(t_n, t_n) , $$
after which the predicted probabilities again have to be found from the state evolution
$$ \hat\rho(t, t_n) = \hat U(t, t_n)\, \hat\rho(t_n, t_n)\, \hat U^+(t, t_n) , $$
with the evolution operator
$$ \hat U(t, t_n) = \exp\left\{ -i \int_{t_n}^t \hat H(t')\, dt' \right\} . $$
This defines the decision state
$$ \hat\rho_A(t, t_n) = \mathrm{Tr}_S\, \hat\rho(t, t_n) \qquad (t > t_n) $$
after the time $t_n$. Respectively, this decision state gives the probability of choosing an alternative $A_m$ after the time $t_n$,
$$ p(A_m, t) = \mathrm{Tr}_A\, \hat\rho_A(t, t_n)\, \hat P(A_m) \qquad (t > t_n) . $$
Comparing Equations (29) and (32) yields the initial condition
$$ \hat\rho_A(t_n, t_n) = \hat\rho_L(t_n) . $$
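The Lüders update used as a new initial condition can be sketched as follows (illustrative only; the a priori state and the chosen alternative are invented for the example):

```python
import numpy as np

# Luders update: after the alternative A_1 is actually chosen at t_n, the
# a priori state rho_A(t_n) is replaced by
#   rho_L = P(A_1) rho_A P(A_1) / Tr[ rho_A P(A_1) ].
A1 = np.array([1.0, 1.0]) / np.sqrt(2)          # invented chosen alternative
P1 = np.outer(A1, A1)

rho_A = np.array([[0.8, 0.1],
                  [0.1, 0.2]])                  # invented a priori state

w = np.trace(rho_A @ P1).real                   # p(A_1, t_n), the normalizer
rho_L = P1 @ rho_A @ P1 / w                     # Luders (a posteriori) state

# the new initial condition reproduces the certainty of the made choice
assert np.isclose(np.trace(rho_L @ P1).real, 1.0)
assert np.isclose(np.trace(rho_L).real, 1.0)    # still a trace-one state
```

From this state onward, the probabilities evolve again by the same unitary law, with $t_n$ playing the role of the new starting time.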
Following the same steps as in the previous section, we get the time evolution of the probability for $t > t_n$,
$$ p(A_m, t) = \sum_{n_1 n_2} \sum_\alpha U_{n_1\alpha}(t, t_n)\, \langle \alpha n_1 | \hat\rho(t_n, t_n) | n_2\alpha \rangle\, U^*_{n_2\alpha}(t, t_n)\, \langle n_2 | \hat P(A_m) | n_1 \rangle \qquad (t > t_n) , $$
where
$$ U_{n\alpha}(t, t_n) = \exp\left\{ -i \int_{t_n}^t E_{n\alpha}(t')\, dt' \right\} . $$
In the limit of a slow process, this results in
$$ \hat\rho_A(t, t_n) \simeq \hat\rho_L(t_n) \qquad (g \rightarrow 0) , $$
while under a fast process, we have
$$ \hat\rho_A(t, t_n) \simeq \sum_m \hat P_m\, \hat\rho_L(t_n)\, \hat P_m \qquad (g \rightarrow \infty) . $$
Let us consider the situation when a subject, after fixing the initial condition (29), where the alternative $A_n$ was chosen, is interested in finding the probability of deciding on an alternative $A_m$. In the slow-process limit, we get
$$ p(A_m, t) = \mathrm{Tr}_A\, \hat\rho_L(t_n)\, \hat P(A_m) \equiv p_L(A_m, t_n) . $$
Introducing the Wigner [38] probability
$$ p_W(A_m, t_n) = \mathrm{Tr}_A\, \hat P(A_n)\, \hat\rho_A(t_n)\, \hat P(A_n)\, \hat P(A_m) $$
yields the relation
$$ p_W(A_m, t_n) = p_L(A_m, t_n)\, p(A_n, t_n) , $$
where
$$ p(A_n, t_n) = \mathrm{Tr}_A\, \hat\rho_A(t_n)\, \hat P(A_n) . $$
This equation reminds us of the relation between the classical joint and conditional probabilities. Therefore, one is sometimes tempted to interpret the Wigner probability $p_W(A_m, t_n)$ as a joint probability of two events, $A_n$ and $A_m$, and the Lüders probability $p_L(A_m, t_n)$ as a conditional probability of the event $A_m$ under the event $A_n$ that previously happened. This interpretation, however, cannot pretend to provide a generalization of classical probabilities to quantum theory. First of all, this is because the derived relation is a very particular case of extremely slow processes, where $g \rightarrow 0$, and not the general expression. Second, this relation is not valid for all times, but only connects the terms at the time $t_n$. What is more important, this relation contains terms taken from different sides of $t_n$; that is, it connects the predicted a priori probability with the post-decision a posteriori probability,
$$ p_L(A_m, t_n + 0) = \frac{p_W(A_m, t_n - 0)}{p(A_n, t_n - 0)} . $$
But a meaningful definition of a probability should be valid for any time $t > t_n$. Third, the generalization of a classical expression is assumed to contain the classical one as a particular case. But it is easy to check that the Lüders probability is
$$ p_L(A_m, t_n) = | \langle A_m | A_n \rangle |^2 , $$
which is symmetric with respect to the interchange of $A_m$ and $A_n$. For commuting events, corresponding to the classical case, the Lüders probability becomes the trivial $\delta_{mn}$. Contrary to this, the classical conditional probability is neither symmetric nor trivial, which is confirmed by numerous empirical observations [39,40]. At best, the Lüders probability is just a transition probability [18,25].
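The relation $p_W = p_L\, p$ and the symmetry of the Lüders probability are easy to verify numerically (a sketch with an invented state and invented alternative vectors, for illustration only):

```python
import numpy as np

theta = 0.4
A_n = np.array([np.cos(theta), np.sin(theta)])   # previously chosen alternative
A_m = np.array([1.0, 0.0])                       # alternative asked about next
Pn, Pm = np.outer(A_n, A_n), np.outer(A_m, A_m)

rho = np.array([[0.6, 0.2],
                [0.2, 0.4]])                     # invented a priori state

p_n  = np.trace(rho @ Pn).real                   # p(A_n, t_n)
p_W  = np.trace(Pn @ rho @ Pn @ Pm).real         # Wigner probability
rhoL = Pn @ rho @ Pn / p_n                       # Luders state
p_L  = np.trace(rhoL @ Pm).real                  # Luders probability

assert np.isclose(p_W, p_L * p_n)                # the joint-like relation
assert np.isclose(p_L, abs(A_m @ A_n) ** 2)      # |<A_m|A_n>|^2: symmetric
# in m <-> n, hence nothing like a classical conditional probability
```

The last assertion makes the text's point concrete: $p_L$ depends only on the overlap of the two alternative vectors, not on the order of the events.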
Sometimes one tries to save the interpretation of the Lüders probability as a conditional probability by resorting to the assumption of degenerate quantum states. This, nevertheless, does not save the situation, for several reasons: The origin of the degeneracy is never defined, which makes the assumption groundless. The degeneracy is not important, since it can always be lifted by infinitesimally small variations of the problem [18,25]. Finally, the degeneracy does not make classical expressions a particular case of quantum formulae; i.e., there is no quantum-classical correspondence [18,25,41].
As we see, the consideration of realistic decision processes developing across time helps us to avoid misrepresentation of the obtained expressions. The use of formal relations, neglecting evolutionary processes, can lead to meaningless results like negative and complex probabilities [18,25,42].

5. Successive Decision Making

When we consider alternatives from the same set, say $\{A_n\}$, the probability of any $A_n$ is described by the approach elucidated in the previous sections, which leads to the set of probabilities $\{ p(A_n, t) : n = 1, 2, \ldots, N_A \}$. A rather different problem is the definition of successive decisions with respect to alternatives from different sets. Below we consider this problem by employing some of the methods from the theory of quantum measurements [21,42,43,44], essentially modifying them, however, in order to adjust them to quantum decision theory [18,25].
Suppose there are two sets of alternatives, $\{ A_n : n = 1, 2, \ldots, N_A \}$ and $\{ B_k : k = 1, 2, \ldots, N_B \}$. In the time interval $[t_A, t_A + \tau_A]$, a subject makes a decision with respect to the alternatives $A_n$. Then, in the time interval $[t_B, t_B + \tau_B]$, where $t_B > t_A + \tau_A$, the subject decides about $B_k$.
According to the general approach of defining quantum joint probabilities [18,25], each set of alternatives is represented by the related set of alternative vectors and alternative projection operators,
$$ A_n\; \rightarrow\; | A_n \rangle\; \rightarrow\; \hat P(A_n) \equiv | A_n \rangle \langle A_n | , \qquad B_k\; \rightarrow\; | B_k \rangle\; \rightarrow\; \hat P(B_k) \equiv | B_k \rangle \langle B_k | . $$
Respectively, there are two spaces of alternatives,
$$ \mathcal{H}_A = \mathrm{span}\{\, | n \rangle \,\} , \qquad \mathcal{H}_B = \mathrm{span}\{\, | k \rangle \,\} , $$
whose bases, in general, are different. Recollecting the existence of the subject space of mind $\mathcal{H}_S$, we have the decision space
$$ \mathcal{H} = \mathcal{H}_A \otimes \mathcal{H}_B \otimes \mathcal{H}_S . $$
The decision-maker state is subject to the time evolution (9). The evolution operator satisfies the evolution Equation (10), now with the evolution generator
$$ \hat H(t) = \hat H_{AS}(t) \otimes \hat 1_B + \hat 1_A \otimes \hat H_{BS}(t) . $$
Again, it is convenient to choose the basis composed of the eigenfunctions of the evolution generator, such that
$$ \hat H(t)\, | nk\alpha \rangle = E_{nk\alpha}(t)\, | nk\alpha \rangle . $$
This assumes the self-similarity of the decision maker in the form of the Lappo-Danilevsky condition (12), because of which we have
$$ \hat U(t)\, | nk\alpha \rangle = U_{nk\alpha}(t)\, | nk\alpha \rangle , $$
where
$$ U_{nk\alpha}(t) = \exp\left\{ -i \int_0^t E_{nk\alpha}(t')\, dt' \right\} . $$
The joint probability that, first, a decision on $A_n$ is made (an event $A_n$ happens) and later a decision on $B_k$ is made (an event $B_k$ occurs) is defined as
$$ p(B_k A_n, t) \equiv \mathrm{Tr}_{\mathcal{H}}\, \hat\rho(t)\, \hat P(B_k) \otimes \hat P(A_n) . $$
This can also be written as
$$ p(B_k A_n, t) \equiv \mathrm{Tr}_{AB}\, \hat\rho_{AB}(t)\, \hat P(B_k) \otimes \hat P(A_n) , $$
where
$$ \hat\rho_{AB}(t) \equiv \mathrm{Tr}_S\, \hat\rho(t) = \sum_\alpha \langle \alpha | \hat\rho(t) | \alpha \rangle . $$
Expanding this yields the sought joint probability of choosing first the alternative $A_n$ and later the alternative $B_k$,
$$ p(B_k A_n, t) = \sum_{n_1 n_2} \sum_{k_1 k_2} \sum_\alpha U_{n_1 k_1 \alpha}(t)\, \langle \alpha k_1 n_1 | \hat\rho(0) | n_2 k_2 \alpha \rangle\, U^*_{n_2 k_2 \alpha}(t)\, \langle k_2 | \hat P(B_k) | k_1 \rangle\, \langle n_2 | \hat P(A_n) | n_1 \rangle . $$
The process of making decisions is supposed to be smooth, such that its impact does not result in finite-time divergences,
$$ \left|\, \int_{t_A}^{t_A + \tau_A} E_{nk\alpha}(t)\, dt \,\right| + \left|\, \int_{t_B}^{t_B + \tau_B} E_{nk\alpha}(t)\, dt \,\right| < \infty . $$
The quantity $E_{nk\alpha}(t)$ characterizes the speed of the process of the subject deliberating about the given alternatives. For convenience, it is possible to define a rate parameter $g$ entering the eigenvalue
$$ E_{nk\alpha}(t) = E_{nk\alpha}(t, g) $$
in such a way that a slow process would imply
$$ E_{nk\alpha}(t, g) \rightarrow 0 \qquad (g \rightarrow 0) , $$
while a fast process would mean that
$$ E_{nk\alpha}(t, g) \rightarrow \infty \qquad (g \rightarrow \infty) . $$
When the process is slow, we have
$$ U_{nk\alpha}(t) \simeq 1 \qquad (g \rightarrow 0) . $$
Then the corresponding probability becomes
$$ p(B_k A_n, t) \simeq \mathrm{Tr}_{AB}\, \hat\rho_{AB}(0)\, \hat P(B_k) \otimes \hat P(A_n) \qquad (g \rightarrow 0) , $$
while the decision state reads as
$$ \hat\rho_{AB}(t) \simeq \hat\rho_{AB}(0) \qquad (g \rightarrow 0) . $$
It is useful to stress that even if at the initial moment of time the decisions on $A_n$ and $B_k$ formally appear not to be explicitly connected, so that
$$ \hat\rho(0) = \hat\rho_{AS}(0) \otimes \hat\rho_{BS}(0) , $$
the decision state does not factorize anyway,
$$ \hat\rho_{AB}(0) = \mathrm{Tr}_S\, \hat\rho_{AS}(0) \otimes \hat\rho_{BS}(0) , $$
being entangled through the subject's mind. Only in the extreme, hardly realistic case when at the initial moment of time all processes are completely uncorrelated, so that
$$ \hat\rho(0) = \hat\rho_A(0) \otimes \hat\rho_B(0) \otimes \hat\rho_S(0) , $$
does the joint probability factorize:
$$ p(B_k A_n, 0) = p(B_k, 0)\, p(A_n, 0) . $$
Thus, even when deciding on different alternatives, the total decision state generally remains entangled; decision making itself can thereby produce entangled states [45]. The produced entanglement can be measured [46,47,48,49,50,51].
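The factorization statement can be checked directly: only a completely uncorrelated initial state $\hat\rho_A(0) \otimes \hat\rho_B(0) \otimes \hat\rho_S(0)$ gives a factorized joint probability. In the sketch below, all states are invented for illustration, and for concreteness the projector order on $\mathcal{H}_A \otimes \mathcal{H}_B \otimes \mathcal{H}_S$ is taken as $\hat P(A_n) \otimes \hat P(B_k) \otimes \hat 1_S$:

```python
import numpy as np

def dm(psi):
    """Density matrix of a normalized pure state."""
    psi = np.asarray(psi, dtype=complex)
    psi = psi / np.linalg.norm(psi)
    return np.outer(psi, psi.conj())

# invented, completely uncorrelated initial state rho_A (x) rho_B (x) rho_S
rho_A, rho_B, rho_S = dm([0.8, 0.6]), dm([0.6, 0.8]), dm([1.0, 1.0])
rho = np.kron(np.kron(rho_A, rho_B), rho_S)

An = dm([1.0, 0.0])                       # projector P(A_n) on H_A
Bk = dm([1.0, 1.0])                       # projector P(B_k) on H_B

P_joint = np.kron(np.kron(An, Bk), np.eye(2))   # P(A_n) (x) P(B_k) (x) 1_S

p_joint = np.trace(rho @ P_joint).real
p_A = np.trace(rho_A @ An).real
p_B = np.trace(rho_B @ Bk).real

assert np.isclose(p_joint, p_A * p_B)     # factorizes for this special state
```

For an initial state of the form $\hat\rho_{AS}(0) \otimes \hat\rho_{BS}(0)$, with correlations passing through the subject space, this factorization generally fails, which is the entanglement through the subject's mind noted in the text.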
For a fast process, we have
$$ U_{n_1 k_1 \alpha}(t)\, U^*_{n_2 k_2 \alpha}(t) \rightarrow \delta_{n_1 n_2} \delta_{k_1 k_2} \qquad (g \rightarrow \infty) . $$
Then the decision state reduces to
$$ \hat\rho_{AB}(t) \simeq \sum_{nk} \hat P_n \otimes \hat P_k\, \hat\rho_{AB}(0)\, \hat P_n \otimes \hat P_k \qquad (g \rightarrow \infty) , $$
where
$$ \hat P_n = | n \rangle \langle n | , \qquad \hat P_k = | k \rangle \langle k | . $$
It is important to emphasize that the joint probability (48) is real-valued,
$$ p^*(B_k A_n, t) = p(B_k A_n, t) . $$
It is not symmetric with respect to the interchange of $A_n$ and $B_k$, since the evolution generator (44) depends on the order of decisions, the first being $A_n$ and the second $B_k$. As postulated at the beginning of this section, in the time interval $[t_A, t_A + \tau_A]$ a subject makes a decision on the alternatives $A_n$; then, in the time interval $[t_B, t_B + \tau_B]$, where $t_B > t_A + \tau_A$, the subject decides on $B_k$. This means that the evolution generator can be presented as
$$ \hat H(t) = \begin{cases} \hat H_{AS}(t) \otimes \hat 1_B , & t \leq t_A + \tau_A \\ \hat 1_A \otimes \hat H_{BS}(t) , & t \geq t_B > t_A + \tau_A \end{cases} $$
From here, its dependence on the order of decisions is evident. We may notice that the interchange of the decisions on $A_n$ and $B_k$ is similar to the inversion of time, hence
$$ p(B_k A_n, t) \neq p(A_n B_k, t) . $$
Additionally, it is useful to compare the joint probability (49) with the Kirkwood distribution [52]. The latter is defined for two events coinciding in time, whose projectors pertain to the same space of alternatives $\mathcal{H}_A = \mathcal{H}_B$, which gives
$$ p_K(B_k A_n) \equiv \mathrm{Tr}_A\, \hat\rho_A\, \hat P(B_k)\, \hat P(A_n) . $$
It is easy to see from the complex-conjugate expression
$$ p^*_K(B_k A_n) = \mathrm{Tr}_A\, \hat\rho_A\, \hat P(A_n)\, \hat P(B_k) = p_K(A_n B_k) $$
that
$$ p^*_K(B_k A_n) \neq p_K(B_k A_n) , $$
which tells us that the Kirkwood distribution, generally, is complex-valued. Thus, the complex Kirkwood distribution and the real joint probability (49) are fundamentally different.
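The complex-valuedness of the Kirkwood distribution for non-commuting projectors is immediate to demonstrate (a sketch with an invented state; both projectors act on the same space $\mathcal{H}_A = \mathcal{H}_B$):

```python
import numpy as np

def proj(v):
    """Projector onto a normalized vector."""
    v = np.asarray(v, dtype=complex)
    v = v / np.linalg.norm(v)
    return np.outer(v, v.conj())

rho = proj([0.6, 0.8j])                     # invented state, complex amplitudes
Pn = proj([1.0, 0.0])
Pk = proj([1.0, 1.0])                       # does not commute with Pn

# Kirkwood distribution p_K = Tr[ rho P(B_k) P(A_n) ]
p_K = np.trace(rho @ Pk @ Pn)

assert abs(p_K.imag) > 1e-6                 # complex-valued in general
```

By contrast, the joint probability of successive decisions is built from projectors acting on different factor spaces and is real by construction, which is the principal difference stressed in the text.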

6. Dual Decision Process

Subjects rarely make decisions based solely on the usefulness of alternatives, as prescribed by utility theory [9], especially when decisions are to be made under uncertainty. The choice between alternatives is practically never purely objective and based on well defined values that could be quantified; strong subjective feelings, biases, and heuristics are involved in the process of making decisions [53,54]. Intuition and emotions are also subjective, but they help us to make decisions, being a part of the decision-making process [55]. Taking account of subjective feelings and emotions in decision making is important for the problem of human-computer interaction [56] and the creation of artificial intelligence [57,58].
The understanding that human decision making involves two sides, the rational evaluation of utility and an intuitive, irrational attraction or repulsion toward each of the alternatives, has been thoroughly investigated and elucidated in the approach called dual-process theory [59,60,61,62]. Quantum techniques seem to be the most appropriate for portraying the duality of human decision processes, since quantum theory presupposes the so-called wave-particle duality. Below we show how the dual-process approach [59,60,61,62] can be formulated in quantum language [10,46,63,64,65].
Thus, to describe decision making by real subjects it is necessary to take into account the rational–irrational duality of decision making. First, we need to say several words on the meaning of the notion "rational", which can be understood in different senses [66]. It is necessary to distinguish between the philosophical and psychological meanings of this term. In philosophical writings, "rational" is defined as everything that leads to the desired goal [67]. However, under that definition, any illogical uncontrolled feeling that leads to the goal should be named rational. In decision making, one uses the psychological meaning of the term "rational": what can be explained and evaluated logically and what follows explicitly formulated rules. On the contrary, emotions, intuition, and moral feelings cannot be logically and explicitly formulated and quantified, especially at the moment of making a decision. Thus, "rational" in decision making is what can be explicitly formulated: based on clear rules, deterministic, logical, prescriptive, normative. "Irrational" is just the opposite of "rational".
In this way, one has to separate the psychological and philosophical definitions of “rational” or “irrational”. The psychological definition assumes that the distinction between “rational” and “irrational” is done at the moment of making a decision. Conversely, the philosophical definition of “rational” as what leads to the goal is a definition that can work better afterwards, when the goal has been reached. Only then it becomes clear what was leading to the goal and what was not. At the moment of making a decision, it is not always clear what leads to the goal and what does not.
The separation of decision making into rational and irrational has a simple and well defined psychological meaning based on actual physiological processes in the brain [68]. One often calls rational processes conscious, while irrational processes as subconscious.
From the mathematical point of view, the above can be formulated as follows. Let us consider the choice between a set of alternatives $A_n$. An alternative is represented by a vector $|A_n\rangle$ in a Hilbert space of alternatives $\mathcal{H}_A$ and by a projector $\hat{P}(A_n)$ acting on this space. The subject space of mind is $\mathcal{H}_S$. Thus, the decision space is
$$\mathcal{H} = \mathcal{H}_A \otimes \mathcal{H}_S .$$
As is explained above, when choosing between alternatives, in addition to the rational evaluation of the utility of each alternative, the decision maker experiences irrational feelings. This implies that each alternative $A_n$, and its representation $|A_n\rangle$ in the space of alternatives $\mathcal{H}_A$, is complemented by a set of subjective feelings $z_n$ represented by a vector $|z_n\rangle$ in the subject space of mind $\mathcal{H}_S$. That is, in the process of decision making one compares not merely the alternatives $A_n$, but the composite prospects
$$\pi_n \equiv A_n \otimes z_n .$$
Being a member of the subject space of mind, the vector $|z_n\rangle$ allows for the expansion
$$|z_n\rangle = \sum_\alpha b_{n\alpha} |\alpha\rangle ,$$
in which the $b_{n\alpha}$ are random quantities, which signifies the non-deterministic character of irrational feelings and emotions.
We do not impose an excessive number of conditions; instead, we try to limit ourselves to a minimal number of restrictions. The required conditions will be imposed on the resulting expressions describing the probability. Therefore, the vectors $|z_n\rangle$ are not required to be normalized, so that the scalar product
$$\langle z_n | z_n \rangle = \sum_\alpha |b_{n\alpha}|^2$$
does not have to equal one. Additionally, the vectors $|z_n\rangle$ with different labels are not orthogonal to each other. Respectively, the operator
$$\hat{P}(z_n) \equiv |z_n\rangle \langle z_n| = \sum_{\alpha\beta} b_{n\alpha} b_{n\beta}^* \, |\alpha\rangle \langle \beta|$$
is not a projector, since
$$\hat{P}(z_m) \, \hat{P}(z_n) = \sum_\alpha b_{m\alpha}^* b_{n\alpha} \, |z_m\rangle \langle z_n| .$$
The prospect $A_n \otimes z_n$ is represented by the vector
$$|A_n z_n\rangle = |A_n\rangle \otimes |z_n\rangle = \sum_\alpha b_{n\alpha} |A_n \alpha\rangle$$
in the decision space $\mathcal{H}$. The prospect operator
$$\hat{P}(A_n z_n) \equiv |A_n z_n\rangle \langle z_n A_n| = \hat{P}(A_n) \otimes \hat{P}(z_n) ,$$
with the property
$$\hat{P}(A_m z_m) \, \hat{P}(A_n z_n) = \delta_{mn} \, \langle z_n | z_n \rangle \, \hat{P}(A_n z_n) ,$$
is not idempotent and is not a projector.
The resolution of unity
$$\sum_n \hat{P}(A_n z_n) = 1$$
is understood in the weak sense as the equality on average,
$$\Big\langle \sum_n \hat{P}(A_n z_n) \Big\rangle = 1 ,$$
which implies
$$\mathrm{Tr}_{\mathcal{H}} \, \hat{\rho}(t) \sum_n \hat{P}(A_n z_n) = 1 .$$
The extended version of the latter equality reads as
$$\sum_n \sum_{\alpha\beta} b_{n\alpha}^* b_{n\beta} \, \langle \alpha A_n | \, \hat{\rho}(t) \, | A_n \beta \rangle = 1 .$$
The set of operators $\{ \hat{P}(A_n z_n) \}$ forms a kind of positive operator-valued measure [69].
Thus, making a decision on an alternative $A_n$, a decision maker, as a matter of fact, considers the prospect $\pi_n = A_n \otimes z_n$, since, in addition to the rational quantification of the alternative utility, this alternative is subject to irrational subconscious evaluations. A decision maker not merely decides on the usefulness of an alternative, but also experiences feelings of attraction or repulsion towards it. Thus, the probability of choosing an alternative is in fact the behavioral probability of the associated prospect, comprising a decision on both the utility and the attractiveness of the alternative [65]. Keeping in mind that, de facto, one practically always considers prospects associated with the given alternatives, and in order not to complicate notation, we shall denote the behavioral probability of a prospect,
$$p(\pi_n, t) \equiv \mathrm{Tr}_{\mathcal{H}} \, \hat{\rho}(t) \, \hat{P}(\pi_n) \equiv p(A_n, t) ,$$
as identified with the alternative probability
$$p(A_n, t) = \mathrm{Tr}_{\mathcal{H}} \, \hat{\rho}(t) \, \hat{P}(A_n z_n) .$$
The resolution of unity (70) or (71) guarantees the probability normalization
$$\sum_n p(A_n, t) = 1 , \qquad 0 \leq p(A_n, t) \leq 1 .$$
Separating in expression (73) the diagonal terms,
$$f(A_n, t) \equiv \sum_\alpha |b_{n\alpha}|^2 \, \langle \alpha A_n | \, \hat{\rho}(t) \, | A_n \alpha \rangle ,$$
from the off-diagonal terms,
$$q(A_n, t) \equiv \sum_{\alpha \neq \beta} b_{n\alpha}^* b_{n\beta} \, \langle \alpha A_n | \, \hat{\rho}(t) \, | A_n \beta \rangle ,$$
we obtain the behavioral probability
$$p(A_n, t) = f(A_n, t) + q(A_n, t) .$$
In the theory of quantum measurements, the diagonal term corresponds to classical theory, while the second, off-diagonal term is due to quantum interference and coherence. The transition from quantum to classical measurements and from quantum to classical probabilities is called decoherence [70]. Thus, the first term corresponds to a classical probability enjoying the properties
$$\sum_n f(A_n, t) = 1 , \qquad 0 \leq f(A_n, t) \leq 1 .$$
In decision theory, the classical probability is responsible for a rational choice of alternatives and can be named rational fraction or utility fraction, since its value is prescribed by the utility of the alternative [25,46,65,71,72].
The second term, entirely due to quantum effects, has the properties
$$\sum_n q(A_n, t) = 0 , \qquad -1 \leq q(A_n, t) \leq 1 ,$$
following from Equations (77) and (78). The property (79) is named the alternation law [25,46,65,71,72]. From normalization (74), we also have
$$- f(A_n, t) \leq q(A_n, t) \leq 1 - f(A_n, t) .$$
The quantum term q in decision theory is associated with the irrational feelings characterizing the emotional attitude of the decision maker to the quality of the considered alternatives, because of which it can be called the irrational factor, attraction factor, or quality factor [25,46,65,71,72].
As is seen, for an alternative $A_i$ to be stochastically preferred over $A_j$, so that $p(A_i) > p(A_j)$, it is not always sufficient to be more useful; it is also necessary to be sufficiently attractive. The optimal alternative is the one possessing the largest probability among the set of considered alternatives.
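To make the decomposition above concrete, here is a minimal numerical sketch (the amplitudes $b_{n\alpha}$ and the pure decision state are hypothetical, generated at random) that builds the diagonal term f and the interference term q for each of two alternatives and prints their sum p:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical amplitudes b[n, alpha]: 2 alternatives, 2 mind-space states
b = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

# Pure decision state |psi> on H = H_A (x) H_S, indexed as psi[n, alpha],
# so that <alpha A_n| rho |A_n beta> = psi[n, alpha] * conj(psi[n, beta])
psi = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
psi /= np.linalg.norm(psi)

for n in range(2):
    # p = sum_{ab} conj(b[n,a]) b[n,b] psi[n,a] conj(psi[n,b]) = |<z_n A_n|psi>|^2
    p = abs(np.vdot(b[n], psi[n])) ** 2
    # diagonal ("classical") part f and off-diagonal ("quantum") part q
    f = float(np.sum(np.abs(b[n]) ** 2 * np.abs(psi[n]) ** 2))
    q = p - f
    print(f"n={n}:  p={p:.4f}  f={f:.4f}  q={q:+.4f}")
```

Since q collects conjugate-symmetric pairs of off-diagonal terms, it is automatically real; it is negative when the interference is destructive.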
For any generalization of decision theory, it is very important to include as a particular case the classical decision theory based on expected utility. Quantum decision theory [25,46,65,71,72] satisfies this requirement. The return to classical decision theory happens when the quantum term q tends to zero. This can be called the quantum-classical correspondence principle:
$$p(A_n, t) \to f(A_n, t) , \qquad q(A_n, t) \to 0 .$$
In the theory of quantum measurements, the disappearance of the quantum term, corresponding to quantum coherence, is called decoherence. The vanishing of this term can be due to the influence of the surrounding environment [70,73,74,75] or to the action of measurements [35,76]. In decision theory, decoherence can be caused by the influence of society providing information to the decision maker [77]. In both cases, random disturbances, coming either from the surroundings or from measurement devices, lead to the irreversibility of the time arrow [78,79].

7. Evolution of Behavioral Probability

The temporal evolution of the behavioral probability (77) can be treated as in Section 3. The time dependence enters through the matrix element
$$\langle \alpha A_n | \, \hat{\rho}(t) \, | A_n \beta \rangle = \sum_{n_1 n_2} U_{n_1 \alpha}(t) \, \langle A_n | n_1 \rangle \, \langle \alpha n_1 | \, \hat{\rho}(0) \, | n_2 \beta \rangle \, U_{n_2 \beta}^*(t) \, \langle n_2 | A_n \rangle .$$
If the process in the space of mind is asymptotically slow, then
$$U_{n\alpha}(t) \simeq 1 \qquad (g \to 0)$$
and we get
$$\langle \alpha A_n | \, \hat{\rho}(t) \, | A_n \beta \rangle \simeq \langle \alpha A_n | \, \hat{\rho}(0) \, | A_n \beta \rangle .$$
This means that
$$f(A_n, t) \simeq f(A_n, 0) , \qquad q(A_n, t) \simeq q(A_n, 0) ,$$
so that the probability practically does not change:
$$p(A_n, t) \simeq p(A_n, 0) \qquad (g \to 0) .$$
In the opposite case of a fast process, when
$$U_{m\alpha}(t) \, U_{n\beta}^*(t) \simeq \delta_{mn} \delta_{\alpha\beta} \qquad (g \to \infty) ,$$
we obtain
$$\langle \alpha A_n | \, \hat{\rho}(t) \, | A_n \beta \rangle \simeq \delta_{\alpha\beta} \sum_m \langle A_n | m \rangle \, \langle \alpha m | \, \hat{\rho}(0) \, | m \alpha \rangle \, \langle m | A_n \rangle \qquad (g \to \infty) .$$
Then the rational fraction reads as
$$f(A_n, t) \simeq \sum_\alpha |b_{n\alpha}|^2 \, \langle \alpha A_n | \, \hat{\rho}(t) \, | A_n \alpha \rangle ,$$
where
$$\hat{\rho}(t) = \sum_m \hat{P}_m \, \hat{\rho}(0) \, \hat{P}_m \qquad (g \to \infty) ,$$
while the quantum term, characterizing irrational feelings, vanishes:
$$q(A_n, t) \simeq 0 \qquad (g \to \infty) .$$
Therefore, the behavioral probability reduces to its rational part,
$$p(A_n, t) \simeq f(A_n, t) \qquad (g \to \infty) .$$
Successive decision making can be described similarly to Section 5. For two successive decisions, one has to consider the probability
$$p(B_k A_n, t) = \mathrm{Tr}_{\mathcal{H}} \, \hat{\rho}(t) \, \hat{P}(B_k z_k) \, \hat{P}(A_n z_n) .$$
The above results demonstrate the essential dependence of the irrational term q on the speed of making decisions. The fact that, under a slow decision process, the probability practically does not vary, remaining close to that existing at the initial time, is not surprising. What is more interesting is that under a fast decision process the quantum term vanishes. This can be explained as follows. The quantum term is caused by the interference and entanglement of subconscious feelings in the subject's consciousness. These processes need some time to develop. In the case of extremely fast decision making, the subject simply has no time for the deliberation during which the acts of coherence and entanglement could develop.

8. Evaluation of Initial Probability

Before considering any evolutionary process, it is necessary to prescribe initial conditions corresponding to the initial moment of time, which can be set as $t = 0$. The initial probability
$$p(A_n) \equiv p(A_n, 0) = f(A_n) + q(A_n)$$
is formed by the initial rational fraction and irrational factor,
$$f(A_n) \equiv f(A_n, 0) , \qquad q(A_n) \equiv q(A_n, 0) .$$
To estimate the value of the initial rational fraction, it is possible to use the Luce rule [80], according to which, if an alternative $A_n$ is characterized by an attribute $a_n$, then the alternative weight can be written as
$$f(A_n) = \frac{a_n}{\sum_n a_n} \qquad (a_n \geq 0) .$$
For concreteness, let us study the case where the alternatives are represented by lotteries
$$A_n = \{ x_i , \, p_n(x_i) : \; i = 1, 2, \ldots, N_n \} ,$$
where the $x_i$ are outcomes and the $p_n(x_i)$ are the outcome probabilities, normalized so that
$$\sum_i p_n(x_i) = 1 , \qquad 0 \leq p_n(x_i) \leq 1 .$$
Employing a utility function $u(x)$, one can define [9] the expected utility
$$U(A_n) = \sum_i u(x_i) \, p_n(x_i) .$$
The rational fraction describes the classical weight of an alternative, which in the present case is connected with the expected utility in the following way [71,72]. If the expected utilities of all alternatives are semi-positive, they can be associated with the alternative attributes,
$$a_n = U(A_n) , \qquad U(A_n) \geq 0 ,$$
whereas if all utilities are negative, the alternative attributes can be defined by the inverse quantities
$$a_n = \frac{1}{|U(A_n)|} , \qquad U(A_n) < 0 .$$
In the case where there are both positive and negative expected utilities, it is possible to use a shift taking account of the available wealth $U_0$. This is done in the following way. Finding the minimal negative utility,
$$U_{min} \equiv \min_n U(A_n) < 0 ,$$
one defines
$$U_0 \equiv |U_{min}| ,$$
interpreted as the wealth available to decision makers before they make decisions. Then the shifted utilities
$$\bar{U}(A_n) \equiv U(A_n) + U_0 \geq 0$$
are used instead of $U(A_n)$, and we return to the case of semi-positive expected utilities.
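As an illustration of this recipe, here is a minimal sketch (the lotteries and the linear utility u(x) = x are hypothetical choices made only for this example):

```python
def expected_utility(lottery, u=lambda x: x):
    """Expected utility U(A_n) = sum_i u(x_i) p_n(x_i) of a lottery,
    given as a list of (outcome, probability) pairs."""
    return sum(u(x) * p for x, p in lottery)

def rational_fractions(utilities):
    """Luce-rule fractions f(A_n) built from expected utilities:
    a_n = U(A_n) if all are semi-positive, a_n = 1/|U(A_n)| if all are
    negative, and a shift by U_0 = |min_n U(A_n)| if the signs are mixed."""
    if any(u < 0 for u in utilities) and any(u >= 0 for u in utilities):
        u0 = abs(min(utilities))                  # available wealth U_0
        utilities = [u + u0 for u in utilities]   # shifted utilities, all >= 0
    if all(u >= 0 for u in utilities):
        attrs = list(utilities)
    else:                                         # all negative
        attrs = [1.0 / abs(u) for u in utilities]
    total = sum(attrs)
    return [a / total for a in attrs]

# Two hypothetical lotteries over monetary outcomes
A1 = [(100, 0.5), (0, 0.5)]    # expected utility 50
A2 = [(60, 1.0)]               # expected utility 60
f = rational_fractions([expected_utility(A) for A in (A1, A2)])
print(f)   # fractions proportional to 50 and 60, summing to one
```

Note that after the shift for mixed signs, the worst alternative receives a zero attribute and hence a zero fraction, in accordance with the limiting properties listed below.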
The so-defined rational fraction, or utility fraction, enjoys natural properties. For semi-positive utilities, the fraction tends to zero when its utility tends to zero,
$$f(A_n) \to 0 , \qquad U(A_n) \to +0 ,$$
and tends to one when its utility is very large,
$$f(A_n) \to 1 , \qquad U(A_n) \to +\infty .$$
For negative expected utilities, we have
$$f(A_n) \to 1 , \qquad U(A_n) \to -0 ,$$
and, respectively,
$$f(A_n) \to 0 , \qquad U(A_n) \to -\infty .$$
When considering empirical data, the rational fraction characterizes the fraction of decision makers making decisions on the basis of rational normative rules. A more elaborate expression for the rational fraction can be derived by employing the minimization of an information functional [65,71,72].
The irrational factor q, as has been explained above, is a random quantity distributed over the interval $[-1, 1]$. However, being a random variable does not preclude it from possessing a typical average value. The ways of defining a non-informative prior for q are described in [46,65,71,72]. Below we present a slightly modified derivation of the non-informative prior for the irrational factor.
Let the related distribution function be denoted as $\varphi(x)$. This distribution is normalized,
$$\int_{-1}^{1} \varphi(x) \, dx = 1 .$$
Then the average value of a positive irrational factor is defined as
$$q_+ = \int_0^1 x \, \varphi(x) \, dx .$$
Respectively, the average value of a negative factor is
$$q_- = \int_{-1}^0 x \, \varphi(x) \, dx .$$
Proposition 1.
The non-informative priors for the average irrational factors are
$$q_+ = \frac{1}{4} , \qquad q_- = -\frac{1}{4} .$$
Proof. 
With the notation
$$\lambda_+ = \int_0^1 \varphi(x) \, dx , \qquad \lambda_- = \int_{-1}^0 \varphi(x) \, dx ,$$
the normalization condition (103) takes the form
$$\lambda_+ + \lambda_- = 1 , \qquad 0 \leq \lambda_\pm \leq 1 .$$
In the average irrational factors (104) and (105), x is a monotonic function, while $\varphi(x)$ is integrable. Therefore, employing the mean value theorem yields
$$q_+ = x_+ \lambda_+ , \qquad q_- = x_- \lambda_- ,$$
where
$$0 \leq x_+ \leq 1 , \qquad -1 \leq x_- \leq 0 .$$
As a non-informative prior for the values of $x_\pm$ and $\lambda_\pm$, one takes the averages over the domains of their definition, that is,
$$x_+ = \frac{1}{2} , \qquad x_- = -\frac{1}{2} , \qquad \lambda_\pm = \frac{1}{2} .$$
Substituting this into quantities (104) and (105) proves equalities (106). □
This property is termed the quarter law. The above-mentioned alternation law and the quarter law can be used for estimating the aggregate values of the behavioral probabilities according to the rule
$$p(A_n) = f(A_n) \pm 0.25 .$$
The sign of the irrational factor is prescribed according to whether the alternative is attractive or repulsive [46,65,71,72].
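The quarter law is easy to verify numerically for the simplest choice of prior, a uniform distribution on $[-1, 1]$; this uniform form is an assumption used only for the illustration:

```python
# Midpoint-rule check of the quarter law for a uniform non-informative
# distribution phi(x) = 1/2 on [-1, 1].
def midpoint_integral(g, a, b, steps=100_000):
    h = (b - a) / steps
    return sum(g(a + (k + 0.5) * h) for k in range(steps)) * h

phi = lambda x: 0.5    # normalized: its integral over [-1, 1] equals one

q_plus = midpoint_integral(lambda x: x * phi(x), 0.0, 1.0)
q_minus = midpoint_integral(lambda x: x * phi(x), -1.0, 0.0)
print(q_plus, q_minus)   # close to 0.25 and -0.25
```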
This approach was employed for explaining a number of paradoxes in classical decision making [46,63,64,71,72,81,82] and was found to be in good agreement with a variety of experimental observations [65,71,72,83].

9. Quantum Intelligence Network

In the previous sections, we have considered quantum decision making by subjects who act independently of each other. Even then, the probability of alternatives can vary with time due to the entangling properties of the evolution operator inducing entanglement in the decision space [50,51,84], in addition to that caused by the initial subject state of mind [85]. The situation becomes much more involved when subjects exchange information with each other. When there is a society of subjects interacting through the exchange of information, we obtain a network. Since the agents make decisions, we have an intelligence network. The probabilities of the alternatives will vary owing to the informational interaction. From empirical investigations, it is known that decision makers do alter their decisions as a result of mutual sharing of information [86,87,88,89,90,91,92,93]. This type of intelligence network plays an important role as a prolegomenon to the problem of artificial intelligence [94,95,96].
Assume that we are considering a society of N agents, enumerated by the index $i = 1, 2, \ldots, N$, who make decisions with respect to alternatives $A_n$, with $n = 1, 2, \ldots, N_A$. Thus, the probability of an i-th agent deciding on an alternative $A_n$ at the moment of time t is $p_i(A_n, t)$. Each probability satisfies the standard normalization
$$\sum_{n=1}^{N_A} p_i(A_n, t) = 1 , \qquad 0 \leq p_i(A_n, t) \leq 1 .$$
Accordingly, the related rational fraction $f_i(A_n, t)$ is normalized as
$$\sum_{n=1}^{N_A} f_i(A_n, t) = 1 , \qquad 0 \leq f_i(A_n, t) \leq 1 .$$
For the irrational factor, one has
$$\sum_{n=1}^{N_A} q_i(A_n, t) = 0 , \qquad - f_i(A_n, t) \leq q_i(A_n, t) \leq 1 - f_i(A_n, t) .$$
To give a realistic description of the temporal evolution of an intelligence network, it is required to take into account that decision making needs some time after receiving information. Denoting this delay time as $\tau$ allows us to represent the dynamics of a behavioral probability as
$$p_i(A_n, t + \tau) = f_i(A_n, t) + q_i(A_n, t) .$$
The rational fraction varies weakly with time, so that on a time scale shorter than the discounting time [97] it can be treated as constant. Its value is prescribed by the utilities of the given alternatives, as explained in Section 8.
The time dependence of the irrational factor can be derived following Section 7, similarly to the derivation of the coherent interference term in the theory of quantum measurements, where the role of irrational subconscious feelings is played by the random influence of the environment [35,76,78,79,98]. If a decision maker at time t has accumulated the amount of information $M_i(t)$ from other members of the society, then the irrational factor is discounted from its initial value, becoming equal to
$$q_i(A_n, t) = q_i(A_n, 0) \, \exp\{ - M_i(t) \} .$$
The amount of the accumulated information composes [77,99] the information-memory functional
$$M_i(t) = \sum_{t'=1}^{t} \sum_{j=1}^{N} J_{ij}(t, t') \, \mu_{ij}(t') ,$$
in which $J_{ij}(t, t')$ is the interaction-memory function and $\mu_{ij}(t)$ is the information gain by a subject i from a subject j at time t. To exclude self-action, one has to set either $J_{ii} = 0$ or $\mu_{ii} = 0$. At the initial moment of time, there is not yet any additional information, so that one has the initial condition
$$M_i(0) = 0 .$$
The information gain can be modeled by the Kullback-Leibler [100,101] relative information
$$\mu_{ij}(t) = \sum_{n=1}^{N_A} p_i(A_n, t) \, \ln \frac{p_i(A_n, t)}{p_j(A_n, t)} .$$
The information gain (120) is semi-positive, $\mu_{ij} \geq 0$, due to the inequality $\ln x \geq 1 - 1/x$. As is evident, $\mu_{ii} = 0$. Depending on the range of the interactions between the agents and the longevity of their memory, different situations can arise, whose typical examples are as follows.
Long-range interactions have the form
$$J_{ij}(t, t') = \frac{1}{N-1} \, J(t, t') \qquad (i \neq j) .$$
Short-range interactions act only between the nearest neighbors,
$$J_{ij}(t, t') = J(t, t') \, \delta_{\langle ij \rangle} ,$$
where $\delta_{\langle ij \rangle}$ equals one when i and j are nearest neighbours and is zero otherwise.
Long-term memory, which lasts forever, does not depend on time,
$$J_{ij}(t, t') = J_{ij} .$$
Short-term memory, on the contrary, corresponds to the situation when only the last step is remembered,
$$J_{ij}(t, t') = J_{ij} \, \delta_{t t'} .$$
Combining these ultimate cases gives us the following four possibilities for the information-memory functional.
Long-range interactions and long-term memory:
$$M_i(t) = \frac{J}{N-1} \sum_{t'=1}^{t} \sum_{j=1}^{N} \mu_{ij}(t') .$$
Long-range interactions and short-term memory:
$$M_i(t) = \frac{J}{N-1} \sum_{j=1}^{N} \mu_{ij}(t) .$$
Short-range interactions and long-term memory:
$$M_i(t) = J \sum_{t'=1}^{t} \sum_{j=1}^{N} \delta_{\langle ij \rangle} \, \mu_{ij}(t') .$$
Short-range interactions and short-term memory:
$$M_i(t) = J \sum_{j=1}^{N} \delta_{\langle ij \rangle} \, \mu_{ij}(t) .$$
Keeping in mind modern human societies, we have to accept that long-range interactions are more realistic, because modern-day information exchange practically does not depend on the location of the interacting agents, who are able to exchange information through phone, Skype, Zoom, etc.
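The Kullback-Leibler information gain (120) driving these functionals can be sketched in a few lines; the probability vectors below are hypothetical:

```python
import math

def info_gain(p_i, p_j):
    """Kullback-Leibler information gain mu_ij of agent i from agent j,
    for probability vectors over the same set of alternatives."""
    return sum(p * math.log(p / q) for p, q in zip(p_i, p_j) if p > 0)

p1 = [0.8, 0.2]    # hypothetical opinion of agent 1
p2 = [0.4, 0.6]    # hypothetical opinion of agent 2

print(info_gain(p1, p2))   # positive: agent 1 gains from a differing opinion
print(info_gain(p1, p1))   # 0.0: no self-gain, so mu_ii = 0
```

Note that `info_gain(p1, p2)` differs from `info_gain(p2, p1)`: the gain, like the informational interaction it models, is directional.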

10. Case of Two Alternatives

A very frequent situation is when agents have to decide between two suggested alternatives, that is, when $N_A = 2$. Then it is straightforward to simplify the notation by setting
$$p_i(A_1, t) \equiv p_i(t) , \qquad p_i(A_2, t) = 1 - p_i(t) ,$$
$$f_i(A_1, t) \equiv f_i(t) , \qquad f_i(A_2, t) = 1 - f_i(t) ,$$
$$q_i(A_1, t) \equiv q_i(t) , \qquad q_i(A_2, t) = - q_i(t) .$$
Now the dynamics is governed by the equation
$$p_i(t + \tau) = f_i(t) + q_i(t) ,$$
with the irrational factor
$$q_i(t) = q_i(0) \, \exp\{ - M_i(t) \} .$$
The information gain takes the form
$$\mu_{ij}(t) = p_i(t) \, \ln \frac{p_i(t)}{p_j(t)} + [\, 1 - p_i(t) \,] \, \ln \frac{1 - p_i(t)}{1 - p_j(t)} .$$
As is seen, $\mu_{ii} = 0$. When setting initial conditions, it is necessary to obey the restriction
$$- f_i(0) \leq q_i(0) \leq 1 - f_i(0) .$$
Another realistic simplification arises when the considered society consists of two types of agents essentially differing by their initial decisions. Then the question is: How will the initial decisions vary with time under the mutual exchange of information? Will agents with different initial decisions come to a consensus, as often happens after a number of interactions [102,103]?
When the society can be divided into two parts of typical agents, with each part having similar initial decisions within the group, but essentially different initial decisions between the groups, the situation becomes equivalent to the consideration of two typical agents with these different initial decisions. As is explained above, long-range interactions are more realistic for the information exchange in the modern society. Below we present the results of numerical calculations for this type of interactions. The behavior of the society strongly depends on the type of memory the agents have.
Long-term memory. In the case of long-term memory, the information-memory functionals for the two groups are
$$M_1(t) = J \sum_{t'=1}^{t} \mu_{12}(t') , \qquad M_2(t) = J \sum_{t'=1}^{t} \mu_{21}(t') .$$
The rational fractions are kept constant in time, and the parameters are set to $J = 1$ and $\tau = 1$. The society dynamics is strongly influenced by the initial decisions. Two types of behavior can occur, depending on the relations between the initial rational fractions and irrational factors.
(i) Rational group conventions. There is rational-irrational accordance in the initial choice of both groups. Then at the initial moment of time, one group, say the first, estimates the utility of the first alternative higher than the second group does. Taking account of irrational feelings keeps the same preference with respect to the behavioral probabilities:
$$f_1(0) > f_2(0) , \qquad p_1(0) > p_2(0) .$$
Recall that $f_i(t) \equiv f_i(A_1, t)$ and $p_i(t) \equiv p_i(A_1, t)$. Respectively, if the first group estimates the utility of the first alternative lower than the second group does, the irrational feelings do not change this preference:
$$f_1(0) < f_2(0) , \qquad p_1(0) < p_2(0) .$$
In the case of this rational-irrational accordance, independently of the initial conditions, the behavioral probabilities tend to the respective rational fractions:
$$p_i(t) \to f_i(0) \qquad (t \to \infty) .$$
(ii) Common convention. At the initial time, the inequalities between the rational fractions of the groups and the inequalities between their behavioral probabilities are opposite to each other, that is, one has either
$$f_1(0) > f_2(0) , \qquad p_1(0) < p_2(0) ,$$
or
$$f_1(0) < f_2(0) , \qquad p_1(0) > p_2(0) .$$
In this case, the behavioral probabilities tend with time to a common convention,
$$p_i(t) \to p^* \qquad (t \to \infty) ,$$
approximately equal to
$$p^* = \frac{f_1(0) \, q_2(0) - f_2(0) \, q_1(0)}{q_2(0) - q_1(0)} .$$
These results can be interpreted in the following way. In the situation of rational-irrational accordance, the decision makers are more rational, while irrational feelings, such as emotions, play a less important role. Under the prevalence of rational arguments, the agents are inclined to choose the alternative with the higher utility.
In the case of rational-irrational discordance, the decision makers are forced to exchange information more efficiently, as a result of which they manage to develop a mutual convention.
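A minimal sketch of the long-range, long-term-memory dynamics for two typical agents (the initial fractions and factors are hypothetical values chosen to satisfy the rational-irrational accordance):

```python
import math

def mu(p, q):
    # Two-alternative Kullback-Leibler information gain mu_ij
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def simulate(f, q0, steps=200, J=1.0):
    """p_i(t+tau) = f_i + q_i(0) exp(-M_i(t)), with long-term memory:
    M_i(t) accumulates J * mu(p_i, p_other) over all past steps."""
    p = [f[0] + q0[0], f[1] + q0[1]]    # initial behavioral probabilities
    M = [0.0, 0.0]                      # information-memory functionals
    for _ in range(steps):
        M[0] += J * mu(p[0], p[1])
        M[1] += J * mu(p[1], p[0])
        p = [f[i] + q0[i] * math.exp(-M[i]) for i in range(2)]
    return p

# Rational-irrational accordance: f1 > f2 and p1(0) > p2(0)
print(simulate(f=[0.7, 0.4], q0=[0.2, 0.1]))   # approaches [0.7, 0.4]
```

Running the same routine with discordant initial data, for instance f=[0.7, 0.4] and q0=[-0.4, 0.35], so that $p_1(0) < p_2(0)$ while $f_1(0) > f_2(0)$, illustrates instead the drift of both probabilities towards the common convention $p^*$.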
Short-term memory. In this case, the information-memory functionals are
$$M_1(t) = J \, \mu_{12}(t) , \qquad M_2(t) = J \, \mu_{21}(t) .$$
We again set $J = 1$ and $\tau = 1$. Numerical solution of the evolution equations reveals the existence of two types of possible dynamics.
(i) Group conventions. The behavioral probabilities for each group tend with time to their own limits, not coinciding with the corresponding rational fractions:
$$p_i(t) \to p_i^* \qquad (t \to \infty) .$$
(ii) Everlasting fluctuations. The behavioral probabilities for both groups do not tend to any fixed point, but demonstrate everlasting oscillations. The details of the above numerical solutions can be found in [99].
In a society with short-term memory, there is not enough accumulated information for the formation of a common convention. Each group in such a society either develops its own goal, not necessarily rational, or constantly fluctuates without elaborating a consensus.

11. Dynamic Decision Inconsistencies

There are several so-called dynamic, or time, inconsistencies in decision making, characterizing situations in which a decision-maker's preferences change over time in such a way that a preference at one moment can become inconsistent with a preference at another. Below we consider some of these inconsistencies and show that they find quite natural explanations in QDT.

11.1. Question Order Bias

The order in which several questions are asked in a survey or study can influence the answers that are given by as much as 40% [104,105]. That is because the human brain has a tendency to organize information into patterns. The earlier questions may provide information that subjects use as context in formulating their subsequent answers, or may affect their thoughts, feelings, and attitudes towards the questioned problem. Sociological research gives a number of illustrations of this bias [105]. Thus, from a December 2008 poll we know that when people were asked, “All in all, are you satisfied or dissatisfied with the way things are going in this country today?” immediately after having been asked, “Do you approve or disapprove of the way George W. Bush is handling his job as president?”, 88 percent said they were dissatisfied, compared with only 78 percent without the context of the prior question.
In classical probability theory, the probability of joint events is symmetric. This is contrary to quantum decision theory. Assume an interrogator first asks a question A, so that the answer suggests two alternatives: yes ($A_1$) or no ($A_2$). After this, the interrogator poses another question B, also with the possible answers yes ($B_1$) or no ($B_2$). As follows from Section 5, the probability $p(A_i B_j)$ does not equal the probability $p(B_j A_i)$, since they are defined through different decision states. The situation is similar to that occurring in the problem of quantum contextuality [106,107,108], where the two random variables ($A_i B_j$ and $B_j A_i$) cannot be characterized by a single density matrix.
In that way, taking account of the dynamic evolution in QDT shows that, in general, the alternatives do not commute, in the sense that $p(A_i B_j) \neq p(B_j A_i)$, which is in agreement with the empirical data.

11.2. Planning Paradox

In classical decision theory, there is the principle of dynamic consistency, according to which a decision taken at one moment of time should be invariant in time, provided no new information has become available and all other conditions remain unchanged. Then a decision maker preferring an alternative at time $t_1$ should retain this choice at a later time $t_2 > t_1$. However, this principle is often broken, which is called the effect of dynamic inconsistency.
A stylized example of dynamic inconsistency is the planning paradox, when a subject makes a plan for the future, yet behaves contrary to the plan as soon as the time comes to accomplish it. A typical case of this inconsistency is the stop-smoking paradox [109,110]. A smoker, well understanding the damage to health caused by smoking, plans to stop smoking in the near future; but time goes by, “the future” comes, yet the person does not stop smoking. Numerous observations [109] show that 85% of smokers do plan to stop smoking in the near future; however, only 36% really stop smoking during the next year after making the plan. It is possible to pose the question: would it be feasible to predict the percentage of subjects who will really stop smoking during the next year, knowing that at the present time 85% of them plan to do so? Below we show that QDT allows us to make this prediction [63].
Let us denote by $A_1$ the alternative to stop smoking in the near future, and by $A_2$ the alternative to not stop smoking. Let us denote by $B_1$ the decision to really stop smoking, and by $B_2$ the decision to refuse to really stop smoking. According to QDT, the corresponding probabilities are
$$p(A_1) = f(A_1) + q(A_1) , \qquad p(A_2) = f(A_2) + q(A_2) ,$$
$$p(B_1) = f(B_1) + q(B_1) , \qquad p(B_2) = f(B_2) + q(B_2) .$$
The utility factors do not change in time, so that the utility of planning to stop in the near future is the same as the utility of stopping in reality, and the utility of not stopping in the near future is the same as that of not stopping in reality:
$$f(A_1) = f(B_1) , \qquad f(A_2) = f(B_2) .$$
Planning to stop smoking, subjects understand the attractiveness of this due to health benefits. Hence the average attraction factors, according to Section 8, are
$$q(A_1) = \frac{1}{4} , \qquad q(A_2) = - \frac{1}{4} .$$
But as soon as one has to stop smoking in reality, one feels uneasy about the necessity of forgoing the pleasure of smoking, because of which the attraction factors become
$$q(B_1) = - \frac{1}{4} , \qquad q(B_2) = \frac{1}{4} .$$
This is to be complemented by the normalization conditions
$$p(A_1) + p(A_2) = 1 , \qquad p(B_1) + p(B_2) = 1 ,$$
$$f(A_1) + f(A_2) = 1 , \qquad f(B_1) + f(B_2) = 1 .$$
Since 85% of the subjects plan to stop smoking, we have
$$p(A_1) = 0.85 , \qquad p(A_2) = 0.15 .$$
Solving the above equations, the fraction of subjects who will really stop smoking is predicted to be
$$p(B_1) = 0.35 , \qquad p(B_2) = 0.65 .$$
This is in beautiful agreement with the observed fraction of smokers really stopping smoking during the next year after making the decision [109],
$$p_{exp}(B_1) = 0.36 , \qquad p_{exp}(B_2) = 0.64 .$$
Thus, knowing only the percentage of subjects planning to stop smoking, it is straightforward, by means of QDT, to predict the fraction of those who will stop smoking in reality. This case is also of interest because it gives an example of preference reversal: when planning to stop smoking, the relation between the probabilities is reversed as compared to the relation between the fractions of those who have really stopped smoking,
$$p(A_1) > p(A_2) , \qquad p(B_1) < p(B_2) .$$
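The arithmetic behind this prediction is simple enough to spell out; the following is a sketch of the bookkeeping only:

```python
# Quarter-law bookkeeping for the stop-smoking paradox described above.
p_A1 = 0.85              # observed fraction planning to stop smoking
f_A1 = p_A1 - 0.25       # utility fraction, since q(A_1) = +1/4
p_B1 = f_A1 - 0.25       # really stopping: f(B_1) = f(A_1), q(B_1) = -1/4
print(p_B1, 1 - p_B1)    # close to 0.35 and 0.65 (observed: 0.36 and 0.64)
```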

11.3. Disjunction Effect

The disjunction effect is the violation of the sure-thing principle [111]. This principle states: if the alternative $A_1$ is preferred to the alternative $A_2$ when an event $B_1$ occurs, and it is also preferred to $A_2$ when an event $B_2$ occurs, then $A_1$ should be preferred to $A_2$ when it is not known which of the events, either $B_1$ or $B_2$, has occurred. This principle is easily illustrated for classical probability. Let $B = B_1 + B_2$ be the alternative when it is not known which of the events, either $B_1$ or $B_2$, has occurred. For a classical probability, one has
$$f(A_n B) = f(A_n B_1) + f(A_n B_2) \qquad (n = 1, 2) .$$
From here, it immediately follows that if
$$f(A_1 B_1) > f(A_2 B_1) , \qquad f(A_1 B_2) > f(A_2 B_2) ,$$
then
$$f(A_1 B) > f(A_2 B) .$$
However, empirical studies have discovered numerous violations of the sure-thing principle, which were called the disjunction effect [112]. Such violations are typical of two-step composite games of the following structure. First, a group of agents takes part in a game, where each agent can either win (event $B_1$) or lose (event $B_2$), with equal probability 0.5. They are then invited to participate in a second game, having the right either to accept the second game (event $A_1$) or to refuse it (event $A_2$). The second stage is realized in different variants: one can either accept or decline the second game under the condition of knowing the result of the first game, or one can either accept or decline the second game without knowing the result of the first game. The probabilities, as usual, are understood in the frequentist sense as the fractions of individuals making the corresponding decisions [99].
From the experiment of Tversky and Shafir [112], we have
f ( A 1 B 1 ) = 0.345 , f ( A 1 B 2 ) = 0.295 ,
f ( A 2 B 1 ) = 0.155 , f ( A 2 B 2 ) = 0.205 .
This shows that the alternative A 1 B is more useful than A 2 B , since
f ( A 1 B ) = 0.64 , f ( A 2 B ) = 0.36 ,
which seems to agree with the sure-thing principle. However, f ( A n B ) is not yet the whole behavioral probability, which reads
p ( A n B ) = f ( A n B ) + q ( A n B ) ( n = 1 , 2 ) .
Making a decision to play without knowing the result of the first game is less attractive, because of which q ( A 1 B ) = − 1 / 4 , while q ( A 2 B ) = 1 / 4 . As a result, we find
p ( A 1 B ) = 0.39 , p ( A 2 B ) = 0.61 ,
which is in good agreement with the empirical data of Tversky and Shafir [112],
p e x p ( A 1 B ) = 0.36 , p e x p ( A 2 B ) = 0.64 .
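The calculation above can be collected in a minimal numerical sketch. It assumes only the additive rule p = f + q and the quarter-law values of the attraction factors discussed in the text; the helper name `behavioral_probability` is illustrative, not part of QDT's formal apparatus.

```python
# QDT account of the disjunction effect with the Tversky-Shafir numbers.

def behavioral_probability(f, q):
    """Behavioral probability p = f + q (utility factor plus attraction factor)."""
    return f + q

# Empirical utility factors for accepting (A1) / declining (A2) the second game,
# conditional on winning (B1) or losing (B2) the first game.
f = {("A1", "B1"): 0.345, ("A1", "B2"): 0.295,
     ("A2", "B1"): 0.155, ("A2", "B2"): 0.205}

# Classical additivity over the unknown outcome B = B1 + B2.
f_A1B = f[("A1", "B1")] + f[("A1", "B2")]   # 0.64
f_A2B = f[("A2", "B1")] + f[("A2", "B2")]   # 0.36

# Quarter-law attraction factors: playing without knowing the first
# game's result is less attractive, so q(A1 B) = -1/4, q(A2 B) = +1/4.
p_A1B = behavioral_probability(f_A1B, -0.25)   # 0.39
p_A2B = behavioral_probability(f_A2B, +0.25)   # 0.61

print(p_A1B, p_A2B)   # close to the observed 0.36 and 0.64
```

Note that the attraction factors reverse the ordering given by the utility factors alone, which is exactly the violation of the sure-thing principle.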
The dynamic consistency of the disjunction effect can be analyzed as in Section 9 and Section 10. Details can be found in [99], where it is shown that, when decision makers are allowed to exchange information, the absolute values of the attraction factors diminish with time. This conclusion is also in good agreement with empirical observations [113].

12. Dynamic Preference Intransitivity

Transitivity is of central importance to both psychology and economics. It is the cornerstone of normative and descriptive decision theories [9]. Individuals, however, are not perfectly consistent in their choices. When faced with repeated choices between alternatives A and B, people often choose A in some instances and B in others. Such inconsistencies are observed even in the absence of systematic changes in the decision maker’s taste that might be due to learning or sequential effects. The observed inconsistencies of this type reflect inherent variability or momentary fluctuations in the evaluative process. Preference should then be defined in a probabilistic fashion [80]. Nevertheless, there occur choice situations where transitivity over time may be violated even in this probabilistic form. One then says that there is dynamic preference intransitivity [114,115,116,117,118,119,120].
The occurrence of preference intransitivity depends on the considered decision model and on the accepted definition of transitivity. But the general meaning is as follows. Suppose one evaluates three alternatives A ,   B ,   C , considering them in turn by pairs. One compares A and B, and, according to the selected definition of preference, concludes that A is preferred over B, which can be denoted as A > B . Then one compares B and C, finding that B > C . Finally, comparing C and A, one discovers that C > A . This results in the preference loop A > B > C > A signifying the intransitivity effect.
As a simple illustration of the intransitivity effect, we can adduce the Fishburn [116] example. Imagine that a person is about to change jobs. When selecting a job, the person evaluates the suggested salary and the prestige of the position. There are three choices: job A, with a salary of $65,000 but low prestige; job B, with a salary of $58,000 and medium prestige; and job C, with a salary of $50,000 and high prestige. The person chooses A over B because of the better salary and the small difference in prestige, and B over C for the same reason; however, comparing C and A, the person prefers C because of its higher prestige, despite the lower salary. Thus, one comes to the preference loop A > B > C > A .
Let us show how this problem can be resolved in QDT. Recall that the definition of the behavioral probability (72) is contextual, in the sense that it depends on the initial conditions for the decision state and on the given time. This means that the comparison of each pair of alternatives constitutes a separate contextual choice, even if the external conditions remain unchanged. The utility factors can be calculated as described in Section 8. Thus, considering the pair A and B in the Fishburn example, we have the utility factors
f 1 ( A ) = 0.528 , f 1 ( B ) = 0.472 ,
where the label marks the moment of time t 1 and an initial condition ρ ^ 1 ( t 1 ) for the decision state. Due to the close prestige of both jobs, their attraction factors coincide, which, as follows from the alternation law in Section 6, gives
q 1 ( A ) = q 1 ( B ) = 0 .
This leads to the behavioral probabilities
p 1 ( A ) = 0.528 , p 1 ( B ) = 0.472 .
Since p 1 ( A ) > p 1 ( B ) , the job A at time t 1 is stochastically preferred over B.
Similarly, comparing the jobs B and C at time t 2 , we get the utility factors
f 2 ( B ) = 0.537 , f 2 ( C ) = 0.463 ,
and the attraction factors
q 2 ( B ) = q 2 ( C ) = 0 .
Hence, the probabilities are
p 2 ( B ) = 0.537 , p 2 ( C ) = 0.463 ,
which implies that B at time t 2 is stochastically preferred over C.
In the comparison of the jobs C and A at time t 3 , we find the utility factors
f 3 ( C ) = 0.435 , f 3 ( A ) = 0.565 .
Now the positions are of a very different quality, so that the attraction factors are
q 3 ( C ) = 1 / 4 , q 3 ( A ) = − 1 / 4 .
Therefore, the probabilities become
p 3 ( C ) = 0.685 , p 3 ( A ) = 0.315 .
Then, at time t 3 , the job C is stochastically preferred over A.
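The three pairwise comparisons can be gathered in a short sketch, again assuming only p = f + q with the numbers quoted above (the `comparisons` dictionary and the `preferred` helper are illustrative names, not part of QDT's formal apparatus).

```python
# The three pairwise QDT comparisons in the Fishburn job example.
# Each comparison is a separate context with its own utility (f) and
# attraction (q) factors; the behavioral probability is p = f + q.

comparisons = {
    ("A", "B"): {"f": (0.528, 0.472), "q": (0.0, 0.0)},     # similar prestige
    ("B", "C"): {"f": (0.537, 0.463), "q": (0.0, 0.0)},     # similar prestige
    ("C", "A"): {"f": (0.435, 0.565), "q": (0.25, -0.25)},  # prestige differs
}

def preferred(pair, data):
    """Return the stochastically preferred alternative of the pair."""
    p_first = data["f"][0] + data["q"][0]
    p_second = data["f"][1] + data["q"][1]
    return pair[0] if p_first > p_second else pair[1]

winners = [preferred(pair, data) for pair, data in comparisons.items()]
print(winners)  # ['A', 'B', 'C'] -> the loop A > B > C > A
```

The nonzero attraction factors in the third comparison are what close the loop: on utility factors alone, A would beat C.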
However, we should not forget that these comparisons were accomplished at different moments of time and under different initial conditions. Therefore, there is nothing extraordinary in the fact that differently defined probabilities can be intransitive. Moreover, there are arguments [121] that such intransitivities can be advantageous for living beings in the presence of irreducible noise during neural information processing.
The arising preference loops can be broken in two ways. First, the process of comparison requires time, during which additional information can appear. The attraction factors, as explained in Section 9 and Section 10, vary with time, diminishing as time increases. When q 3 ( C ) and q 3 ( A ) tend to zero, then
p 3 ( C ) → f 3 ( C ) = 0.435 ,
p 3 ( A ) → f 3 ( A ) = 0.565 .
Since now p 3 ( C ) < p 3 ( A ) , A becomes preferred over C; hence, the preference loop is broken.
The other, very natural, way is as follows. The very appearance of a preference loop implies that decisions made at different moments of time and under different contexts should not be compared. One has to reconsider the whole problem at a single given moment of time t. Considering all three alternatives A , B , C in the frame of one given choice, we have
f ( A ) = 0.376 , f ( B ) = 0.335 , f ( C ) = 0.289 .
The classification of the related qualities can be estimated according to the QDT rule
q ( A ) = − 1 / 4 , q ( B ) = 0 , q ( C ) = 1 / 4 .
Then we find
p ( A ) = 0.126 , p ( B ) = 0.335 , p ( C ) = 0.539 ,
which establishes the relation A < B < C between all the alternatives, and no problems or paradoxes arise.
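The single-context resolution can be sketched in the same way, assuming the utility and attraction factors quoted above and the rule p = f + q (the variable names are illustrative):

```python
# Resolving the loop by comparing all three jobs within one choice context.
f = {"A": 0.376, "B": 0.335, "C": 0.289}   # utility factors
q = {"A": -0.25, "B": 0.0, "C": 0.25}      # attraction factors (quarter law)

# Behavioral probabilities p = f + q for each alternative.
p = {job: f[job] + q[job] for job in f}

# A single ranking over all three alternatives: no preference loop is possible.
ranking = sorted(p, key=p.get, reverse=True)
print(ranking)  # ['C', 'B', 'A']
```

Because all three alternatives are now evaluated in one context, the resulting probabilities induce a single transitive ordering by construction.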

13. Conclusions

The basic ideas of quantum decision theory have been presented, with the emphasis on the time evolution of decision processes. The relations between operationally testable events and behavioral features of decision making are elucidated. The interplay of the rational and irrational sides of decision making is explained. An approach to describing quantum intelligence networks is developed. As illustrations, several dynamic inconsistencies are analyzed.
The main points of quantum decision theory are based on the techniques used in the theory of quantum measurements. Therefore, the presented approach can be employed for characterizing evolutionary processes in quantum measurements. The behavior of many self-organizing complex systems is similar to decision making [122], which is why the approach could also be useful in applications to complex systems and to problems of artificial intelligence.

Funding

This research received no external funding.

Acknowledgments

The author is grateful to E.P. Yukalova for useful discussions.

Conflicts of Interest

The author declares no conflict of interest.

References

1. Khrennikov, A.Y. Ubiquitous Quantum Structure; Springer: Berlin, Germany, 2010.
2. Busemeyer, J.R.; Bruza, P.D. Quantum Models of Cognition and Decision; Cambridge University: Cambridge, UK, 2012.
3. Haven, E.; Khrennikov, A. Quantum Social Science; Cambridge University: Cambridge, UK, 2013.
4. Bagarello, F. Quantum Dynamics for Classical Systems; Wiley: Hoboken, NJ, USA, 2013.
5. Yukalov, V.I.; Sornette, D. Processing information in quantum decision theory. Entropy 2009, 11, 1073–1120.
6. Agrawal, P.M.; Sharda, R. Quantum mechanics and human decision making. Oper. Res. 2013, 61, 1–16.
7. Sornette, D. Physics and financial economics (1776–2014): Puzzles, Ising and agent-based models. Rep. Prog. Phys. 2014, 77, 062001.
8. Ashtiani, M.; Azgomi, M.A. A survey of quantum-like approaches to decision making and cognition. Math. Soc. Sci. 2015, 75, 49–80.
9. Von Neumann, J.; Morgenstern, O. Theory of Games and Economic Behavior; Princeton University: Princeton, NJ, USA, 1953.
10. Yukalov, V.I.; Sornette, D. Quantum decision theory as quantum theory of measurement. Phys. Lett. A 2008, 372, 6867–6871.
11. Meyer, D.A. Quantum strategies. Phys. Rev. Lett. 1999, 82, 1052–1055.
12. Eisert, J.; Wilkens, M. Quantum games. J. Mod. Opt. 2000, 47, 2543–2556.
13. Piotrowski, E.W.; Sladkowski, J. An invitation to quantum game theory. Int. J. Theor. Phys. 2003, 42, 1089–1099.
14. Landsburg, S.E. Quantum game theory. Not. Am. Math. Soc. 2004, 51, 394–399.
15. Guo, H.; Zhang, J.; Koehler, G.J. A survey of quantum games. Decis. Support Syst. 2008, 46, 318–332.
16. Khan, F.S.; Phoenix, J.D. Gaming the quantum. Quant. Inf. Comput. 2013, 13, 231–244.
17. Khan, F.S.; Phoenix, J.D. Mini-maximizing two qubit quantum computations. Quant. Inf. Process. 2013, 12, 3807–3819.
18. Yukalov, V.I.; Sornette, D. Quantum probabilities of composite events in quantum measurements with multimode states. Laser Phys. 2013, 23, 105502.
19. Yukalov, V.I.; Yukalova, E.P.; Sornette, D. Mode interference in quantum joint probabilities for multimode Bose-condensed systems. Laser Phys. Lett. 2013, 10, 115502.
20. Yukalov, V.I.; Sornette, D. Quantum theory of measurements as quantum decision theory. J. Phys. Conf. Ser. 2015, 549, 012048.
21. Von Neumann, J. Mathematical Foundations of Quantum Mechanics; Princeton University: Princeton, NJ, USA, 1955.
22. Gleason, A.M. Measures on the closed subspaces of a Hilbert space. J. Math. Mech. 1957, 6, 885–893.
23. Bohr, N. Atomic Physics and Human Knowledge; Wiley: New York, NY, USA, 1958.
24. Ballentine, L.E. Quantum Mechanics; World Scientific: Singapore, 2000.
25. Yukalov, V.I.; Sornette, D. Quantum probability and quantum decision making. Philos. Trans. R. Soc. A 2016, 374, 20150100.
26. Lappo-Danilevsky, J.A. Memoires Sur La Theorie Des Systemes Des Equations Differentieles Lineaires; Chelsea: New York, NY, USA, 1953.
27. Yukalov, V.I. Interplay between approximation theory and renormalization group. Phys. Part. Nucl. 2019, 50, 141–209.
28. Yukalov, V.I. Adiabatic theorems for linear and nonlinear Hamiltonians. Phys. Rev. A 2009, 79, 052117.
29. Yukalov, V.I. Existence of a wave function for a subsystem. Mosc. Univ. Phys. Bull. 1970, 25, 49–53.
30. Van Kampen, N.G. A soluble model for quantum mechanical dissipation. J. Stat. Phys. 1995, 78, 299–310.
31. Shao, J.; Ge, M.L.; Cheng, H. Decoherence of quantum-nondemolition systems. Phys. Rev. E 1996, 53, 1243–1245.
32. Braginsky, V.B.; Khalili, F.Y. Quantum nondemolition measurements: The route from toys to tools. Rev. Mod. Phys. 1996, 68, 1–12.
33. Mozyrsky, D.; Privman, V. Adiabatic decoherence. J. Stat. Phys. 1998, 91, 787–799.
34. Yukalov, V.I. Stochastic instability of quasi-isolated systems. Phys. Rev. E 2002, 65, 056118.
35. Yukalov, V.I. Decoherence and equilibration under nondestructive measurements. Ann. Phys. (N. Y.) 2012, 327, 253–263.
36. Faye, J. Darwinism in disguise? A comparison between Bohr’s view on quantum mechanics and QBism. Philos. Trans. R. Soc. A 2016, 374, 20150236.
37. Lüders, G. Concerning the state change due to the measurement process. Ann. Phys. (Leipzig) 1951, 8, 322–328.
38. Wigner, E. On the quantum correction for thermodynamic equilibrium. Phys. Rev. 1932, 40, 749–759.
39. Boyer-Kassem, T.; Duchêne, S.; Guerci, E. Testing quantum-like models of judgment for question order effect. Math. Soc. Sci. 2016, 80, 33–46.
40. Boyer-Kassem, T.; Duchêne, S.; Guerci, E. Quantum-like models cannot account for the conjunction fallacy. Theory Decis. 2016, 81, 479–510.
41. Yukalov, V.I.; Sornette, D. Conditions for quantum interference in cognitive sciences. Top. Cogn. Sci. 2014, 6, 79–90.
42. Johansen, L.M. Quantum theory of successive projective measurements. Phys. Rev. A 2007, 76, 012119.
43. Johansen, L.M.; Mello, P.A. Quantum mechanics of successive measurements with arbitrary meter coupling. Phys. Lett. A 2008, 372, 5760–5764.
44. Kalev, A.; Mello, P.A. Quantum state tomography using successive measurements. J. Phys. A 2012, 45, 235301.
45. Yukalov, V.I.; Sornette, D. Entanglement production in quantum decision making. Phys. Atomic Nucl. 2010, 73, 559–562.
46. Yukalov, V.I.; Sornette, D. Decision theory with prospect interference and entanglement. Theory Decis. 2011, 70, 283–328.
47. Yukalov, V.I.; Yukalova, E.P.; Sornette, D. Quantum probabilities and entanglement for multimode quantum systems. J. Phys. Conf. Ser. 2014, 497, 012034.
48. Yukalov, V.I. Entanglement measure for composite systems. Phys. Rev. Lett. 2003, 90, 167905.
49. Yukalov, V.I. Quantifying entanglement production of quantum operations. Phys. Rev. A 2003, 68, 022109.
50. Yukalov, V.I. Evolutional entanglement in nonequilibrium processes. Mod. Phys. Lett. B 2003, 17, 95–103.
51. Yukalov, V.I.; Yukalova, E.P. Evolutional entanglement production. Phys. Rev. A 2015, 92, 052121.
52. Kirkwood, J.G. Quantum statistics of almost classical assemblies. Phys. Rev. 1933, 44, 31–37.
53. Tversky, A.; Kahneman, D. Judgment under uncertainty: Heuristics and biases. Science 1974, 185, 1124–1131.
54. Kahneman, D. Judgment Under Uncertainty: Heuristics and Biases; Cambridge University: Cambridge, UK, 1982.
55. Minsky, M. The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind; Simon and Schuster: New York, NY, USA, 2006.
56. Picard, R. Affective Computing; Massachusetts Institute of Technology: Cambridge, MA, USA, 1997.
57. Yukalov, V.I.; Sornette, D. Scheme of thinking quantum systems. Laser Phys. Lett. 2009, 6, 833–839.
58. Yukalov, V.I.; Sornette, D. How brains make decisions. Springer Proc. Phys. 2014, 150, 37–53.
59. Sun, R. Duality of the Mind; Lawrence Erlbaum Associates: Mahwah, NJ, USA, 2002.
60. Paivio, A. Mind and Its Evolution: A Dual Coding Theoretical Approach; Lawrence Erlbaum Associates: Mahwah, NJ, USA, 2007.
61. Stanovich, K.E. Rationality and the Reflective Mind; Oxford University: New York, NY, USA, 2011.
62. Kahneman, D. Thinking, Fast and Slow; Farrar, Straus and Giroux: New York, NY, USA, 2011.
63. Yukalov, V.; Sornette, D. Physics of risk and uncertainty in quantum decision making. Eur. Phys. J. B 2009, 71, 533–548.
64. Yukalov, V.I.; Sornette, D. Mathematical structure of quantum decision theory. Adv. Compl. Syst. 2010, 13, 659–698.
65. Yukalov, V.I.; Sornette, D. Quantum probabilities as behavioral probabilities. Entropy 2017, 19, 112.
66. Black, D. On the rationale of group decision making. J. Polit. Econ. 1948, 56, 23–34.
67. Searle, J.R. Rationality in Action; Massachusetts Institute of Technology: Cambridge, MA, USA, 2001.
68. Ariely, D. Predictably Irrational; Harper: New York, NY, USA, 2008.
69. Yukalov, V.I.; Sornette, D. Positive operator-valued measures in quantum decision theory. Lect. Notes Comput. Sci. 2015, 8951, 146–161.
70. Joos, E.; Zeh, H.D.; Kiefer, C.; Giulini, D.J.W.; Kupsch, J.; Stamatescu, I. Decoherence and the Appearance of a Classical World in Quantum Theory; Springer: Berlin, Germany, 2003.
71. Yukalov, V.I.; Sornette, D. Manipulating decision making of typical agents. IEEE Trans. Syst. Man Cybern. Syst. 2014, 44, 1155–1168.
72. Yukalov, V.I.; Sornette, D. Quantitative predictions in quantum decision theory. IEEE Trans. Syst. Man Cybern. Syst. 2018, 48, 366–381.
73. Zeh, H.D. On the interpretation of measurement in quantum theory. Found. Phys. 1970, 1, 69–76.
74. Joos, E.; Zeh, H.D. The emergence of classical properties through interaction with the environment. Zeit. Phys. B 1985, 59, 223–243.
75. Zurek, W.H. Decoherence, einselection, and the quantum origins of the classical. Rev. Mod. Phys. 2003, 75, 715–776.
76. Yukalov, V.I. Equilibration of quasi-isolated quantum systems. Phys. Lett. A 2012, 376, 550–554.
77. Yukalov, V.I.; Sornette, D. Role of information in decision making of social agents. Int. J. Inf. Technol. Decis. Mak. 2015, 14, 1129–1166.
78. Yukalov, V.I. Irreversibility of time for quasi-isolated systems. Phys. Lett. A 2003, 308, 313–318.
79. Yukalov, V.I. Expansion exponents for nonequilibrium systems. Phys. A 2003, 320, 149–168.
80. Luce, R.D. Individual Choice Behavior: A Theoretical Analysis; Wiley: New York, NY, USA, 1959.
81. Yukalov, V.I.; Sornette, D. Preference reversal in quantum decision theory. Front. Psychol. 2015, 6, 1538.
82. Yukalov, V.I.; Sornette, D. Inconclusive quantum measurements and decisions under uncertainty. Front. Phys. 2016, 4, 12.
83. Favre, M.; Wittwer, A.; Heinimann, H.R.; Yukalov, V.I.; Sornette, D. Quantum decision theory in simple risky choices. PLoS ONE 2016, 11, 0168045.
84. Yukalov, V.I.; Yukalova, E.P. Entanglement production by evolution operator. J. Phys. Conf. Ser. 2017, 826, 012021.
85. Yukalov, V.I.; Yukalova, E.P.; Yurovsky, V.A. Entanglement production by statistical operators. Laser Phys. 2019, 29, 065502.
86. Charness, G.; Rabin, M. Understanding social preferences with simple tests. Quart. J. Econ. 2002, 117, 817–869.
87. Blinder, A.; Morgan, J. Are two heads better than one? An experimental analysis of group versus individual decision-making. J. Money Credit Bank. 2005, 37, 789–811.
88. Cooper, D.; Kagel, J. Are two heads better than one? Team versus individual play in signaling games. Am. Econ. Rev. 2005, 95, 477–509.
89. Charness, G.; Karni, E.; Levin, D. Individual and group decision making under risk: An experimental study of Bayesian updating and violations of first-order stochastic dominance. J. Risk Uncert. 2007, 35, 129–148.
90. Charness, G.; Rigotti, L.; Rustichini, A. Individual behavior and group membership. Am. Econ. Rev. 2007, 97, 1340–1352.
91. Chen, Y.; Li, S. Group identity and social preferences. Am. Econ. Rev. 2009, 99, 431–457.
92. Charness, G.; Karni, E.; Levin, D. On the conjunction fallacy in probability judgement: New experimental evidence regarding Linda. Games Econ. Behav. 2010, 68, 551–556.
93. Kühberger, A.K.; Komunska, D.; Perner, J. The disjunction effect: Does it exist for two-step gambles? Org. Behav. Hum. Decis. Process. 2001, 85, 250–264.
94. Pearl, J. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference; Morgan Kaufmann: San Francisco, CA, USA, 1998.
95. Wooldridge, M.J. Introduction to Multi-Agent Systems; Wiley: New York, NY, USA, 2001.
96. Russell, S.J.; Norvig, P. Artificial Intelligence: A Modern Approach; Prentice Hall: Upper Saddle River, NJ, USA, 2009.
97. Loewenstein, G.F.; Prelec, D. Preferences for sequences of outcomes. Psychol. Rev. 1993, 100, 91–108.
98. Yukalov, V.I. Equilibration and thermalization in finite quantum systems. Laser Phys. Lett. 2011, 8, 485–507.
99. Yukalov, V.I.; Yukalova, E.P.; Sornette, D. Information processing by networks of quantum decision makers. Physica A 2018, 492, 747–766.
100. Kullback, S.; Leibler, R.A. On information and sufficiency. Ann. Math. Stat. 1951, 22, 79–86.
101. Kullback, S. Information Theory and Statistics; Wiley: New York, NY, USA, 1959.
102. Centola, D.; Baronchelli, A. The spontaneous emergence of conventions: An experimental study of cultural evolution. Proc. Natl. Acad. Sci. USA 2015, 112, 1989–1994.
103. Centola, D.; Becker, J.; Brackbill, D.; Baronchelli, A. Experimental evidence for tipping points in social conventions. Science 2018, 360, 1116–1119.
104. Lavrakas, P.J. Encyclopedia of Survey Research Methods; SAGE Publications: Thousand Oaks, CA, USA, 2008; pp. 664–665.
105. Kiger, P.G. Questionnaire Design; Pew Research Center: Washington, DC, USA, 2017.
106. Khrennikov, A. Bell-Boole inequality: Nonlocality or probabilistic incompatibility of random variables. Entropy 2008, 10, 19–32.
107. Khrennikov, A. Contextual Approach to Quantum Formalism; Springer: Berlin, Germany, 2009.
108. Dzhafarov, E.N.; Kujala, J.V. Probabilistic foundations of contextuality. Fortschr. Phys. 2017, 65, 1600040.
109. Benfari, R.C.; Ockene, J.K. Control of cigarette smoking from a psychological perspective. Ann. Rev. Public Health 1982, 3, 101–128.
110. Westmaas, J.L. Social support in smoking cessation: Reconciling theory and evidence. Nicotine Tobacco Res. 2010, 12, 695–707.
111. Savage, L.J. The Foundations of Statistics; Wiley: New York, NY, USA, 1954.
112. Tversky, A.; Shafir, E. The disjunction effect in choice under uncertainty. Psychol. Sci. 1992, 3, 305–309.
113. Charness, G.; Sutter, M. Groups make better self-interested decisions. J. Econ. Perspect. 2012, 26, 157–176.
114. Tversky, A. Intransitivity of preferences. Psychol. Rev. 1969, 76, 31–48.
115. Fishburn, P.C.; LaValle, I.H. Context-dependent choice with nonlinear and nontransitive preferences. Econometrica 1988, 56, 1221–1239.
116. Fishburn, P.C. Nontransitive preferences in decision theory. J. Risk Uncert. 1991, 4, 113–134.
117. Makowski, M.; Piotrowski, E.W. Transitivity of an entangled choice. J. Phys. A 2011, 44, 075301.
118. Makowski, M.; Piotrowski, E.W.; Sladkowski, J. Do transitive preferences always result in indifferent divisions? Entropy 2015, 17, 968–983.
119. Müller-Trede, J.; Sher, S.; McKenzie, C.R.M. Transitivity in context: A rational analysis of intransitive choice and context-sensitive preference. Decision 2015, 2, 280–305.
120. Panda, S.C. Rational choice with intransitive preferences. Stud. Microecon. 2018, 6, 66–83.
121. Tsetsos, K.; Moran, R.; Moreland, J.; Chater, N.; Usher, M.; Summerfield, C. Economic irrationality is optimal during noisy decision making. Proc. Natl. Acad. Sci. USA 2016, 113, 3102–3107.
122. Yukalov, V.I.; Sornette, D. Self-organization in complex systems as decision making. Adv. Compl. Syst. 2014, 17, 1450016.

Yukalov, V.I. Evolutionary Processes in Quantum Decision Theory. Entropy 2020, 22, 681. https://0-doi-org.brum.beds.ac.uk/10.3390/e22060681