Article

Stochastic Resonance, Self-Organization and Information Dynamics in Multistable Systems

by Grégoire Nicolis 1,*,† and Catherine Nicolis 2,†
1 Interdisciplinary Center for Nonlinear Phenomena and Complex Systems, Université Libre de Bruxelles, Campus Plaine, CP 231, bd du Triomphe, Brussels 1050, Belgium
2 Institut Royal Météorologique de Belgique, 3 av. Circulaire, Brussels 1180, Belgium
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Submission received: 15 March 2016 / Revised: 25 April 2016 / Accepted: 28 April 2016 / Published: 4 May 2016
(This article belongs to the Special Issue Information and Self-Organization)

Abstract

A class of complex self-organizing systems subjected to fluctuations of environmental or intrinsic origin and to nonequilibrium constraints in the form of an external periodic forcing is analyzed from the standpoint of information theory. Conditions under which the response of information entropy and related quantities to the nonequilibrium constraint can be optimized via a stochastic resonance-type mechanism are identified, and the role of key parameters is assessed.

1. Introduction

One of the principal features of complex self-organizing systems is the multitude of a priori available states [1]. This confers on their evolution an element of unexpectedness, reflected by the ability to choose among several outcomes and by the concomitant difficulty for an observer of localizing the actual state in state space. This is reminiscent of a central problem of information and communication theories [2], namely how to recognize a particular signal blurred by noise among the multitude of signals emitted by a source.
The connection between self-organization and information finds its origin in the pioneering works of Haken and Nicolis [3,4]. In the present work, we explore this connection in a class of multistable systems subjected to stochastic variability generated by fluctuations of intrinsic or environmental origin, as well as to a systematic nonequilibrium constraint in the form of a weak external periodic forcing. As is well known, stochasticity typically induces transitions between the states [5]. Furthermore, under appropriate conditions, one witnesses sharp, stochasticity-induced amplification of the response to the periodic forcing, referred to as stochastic resonance [6]. Our objective is to relate these phenomena to information processing.
A general formulation of the stochastic dynamics of periodically-forced multistable systems involving one variable is presented in Section 2, where, building on previous work by one of the present authors [7], the classical linear response theory of stochastic resonance in bistable systems is extended to the case of an arbitrary number of simultaneously stable states. In Section 3, a set of entropy-like quantities characterizing the complexity, variability and predictability of the system, viewed as an information processor, is introduced. Their dynamics, as induced by the dynamics of the underlying multistable system, is analyzed in Section 4. It is shown that by varying some key parameters, the system can attain states of optimal response and predictability. The main conclusions are summarized in Section 5.

2. Self-Organization and Stochastic Resonance in a Periodically-Forced Multistable System

Consider a one-variable nonlinear system subjected to additive periodic and stochastic forcings. The evolution of such a system can be cast in a potential form [1,5],
$$\frac{dx}{dt} = -\frac{\partial U(x,t)}{\partial x} + F(t) \tag{1}$$
where $x$ is the state variable and the stochastic forcing $F(t)$ is assimilated to a Gaussian white noise of variance $q^2$,
$$\langle F(t) \rangle = 0, \qquad \langle F(t)\,F(t') \rangle = q^2\, \delta(t - t') \tag{2a}$$
We decompose the generalized potential U as:
$$U(x,t) = U_0(x) - \epsilon\, x \sin \omega t \tag{2b}$$
Here, $U_0$ is the potential in the absence of the periodic forcing, and $\epsilon$, $\omega$ stand for the amplitude and frequency of the forcing, respectively. In the classical setting of stochastic resonance, $U_0(x)$ possesses two minima (associated with two stable steady states of the system) separated by a maximum. In the present work, this setting is extended by allowing for the existence of an arbitrary number $n$ of stable steady states and, thus, for a $U_0(x)$ possessing $n$ minima $1, \ldots, n$ separated by intermediately situated maxima. Furthermore, we stipulate that the leftmost and rightmost minima $1$ and $n$ are separated from the environment by impermeable boundaries, such that there are no probability fluxes directed from these states to the environment [7].
A simple implementation of this setting amounts to choosing $U_0(x)$ in such a way that the successive minima and maxima are equidistant and of equal depth and height, respectively. These conditions become increasingly difficult to fulfill for increasing $n$ if $U_0(x)$ has a polynomial form. For the sake of simplicity, we will therefore adopt the following model for $U_0(x)$:
$$U_0(x) = \cos x, \qquad 0 \le x \le 2\pi n \tag{3a}$$
with stable and unstable states located respectively at:
$$x_{st} = \pi,\, 3\pi,\, 5\pi,\, \ldots \qquad x_{unst} = 2\pi,\, 4\pi,\, 6\pi,\, \ldots \tag{3b}$$
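As an illustration of this setting, Equation (1) with the cosine potential (3a) and reflecting boundaries can be integrated by a standard Euler–Maruyama scheme. The following Python sketch is ours, not the authors'; all parameter values are illustrative choices:

```python
import numpy as np

# Illustrative Euler-Maruyama integration of Equation (1) with U0(x) = cos x
# on [0, 2*pi*n] and reflecting (impermeable) boundaries at both ends.
rng = np.random.default_rng(0)

n = 4                      # number of stable states (minima at pi, 3*pi, ...)
eps, omega = 0.05, 0.01    # weak, slow periodic forcing (illustrative values)
q2 = 1.0                   # noise variance q^2
dt = 0.01
L = 2 * np.pi * n

x, visits = np.pi, np.zeros(n)
for step in range(500_000):
    t = step * dt
    # drift -dU/dx with U(x,t) = cos x - eps*x*sin(omega*t), Eqs. (2b), (3a)
    x += (np.sin(x) + eps * np.sin(omega * t)) * dt \
         + np.sqrt(q2 * dt) * rng.normal()
    if x < 0.0:            # reflect at x = 0
        x = -x
    elif x > L:            # reflect at x = 2*pi*n
        x = 2 * L - x
    # well i occupies (2*pi*i, 2*pi*(i+1)); barriers sit at 2*pi, 4*pi, ...
    visits[min(int(x // (2 * np.pi)), n - 1)] += 1

print(visits / visits.sum())   # occupation fractions of the n wells
```

The run exhibits the composite motion invoked below: long stretches of intrawell diffusion interrupted by rare, noise-activated interwell jumps.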
Equations (1) and (2) describe a composite motion consisting of a combination of small-scale diffusion around each of the stable states and of large-scale transitions between neighboring stable states across the intermediate unstable state. The latter is an activated process whose rate depends sensitively on the potential barrier (cf. Equation (2b)):
$$\Delta U = \Delta U_0 - \epsilon\, \Delta x \sin \omega t = U_0(x_{unst}) - U_0(x_{st}) - \epsilon\,(x_{unst} - x_{st}) \sin \omega t \tag{4a}$$
As long as the noise is sufficiently weak, in the sense of $q^2 \ll \Delta U$, the characteristic time scale of this motion is much slower than the characteristic time of diffusion around a given stable state, the corresponding rate being given by Kramers' formula [1,5],
$$k(t) = \frac{1}{2\pi}\, \big| U''(x_{unst})\, U''(x_{st}) \big|^{1/2} \exp\left[ -\frac{2}{q^2}\, \Delta U \right] \tag{4b}$$
where the double primes denote second derivatives with respect to $x$.
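For the model potential (3a), for instance, $\Delta U_0 = U_0(x_{unst}) - U_0(x_{st}) = 1 - (-1) = 2$ and $|U_0''(x_{unst})\, U_0''(x_{st})|^{1/2} = 1$, so that all transition rates in the absence of the forcing reduce to the single value $k_0 = (1/2\pi)\, e^{-4/q^2}$; for $q^2 = 1$, say, $k_0 \approx 0.0029$, which quantifies the slow, activated character of the interwell transitions.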
Placing ourselves in this limit, we can map Equation (1) into a discrete state process [5,7] describing the transfer of the probability masses $p_i$ contained in the attraction basins of the stable states $i$ ($i = 1, \ldots, n$):
$$\text{state } 1 \;\underset{k_{21}}{\overset{k_{12}}{\rightleftharpoons}}\; \text{state } 2 \;\underset{k_{32}}{\overset{k_{23}}{\rightleftharpoons}}\; \text{state } 3 \;\cdots\; \text{state } n\!-\!1 \;\underset{k_{n,n-1}}{\overset{k_{n-1,n}}{\rightleftharpoons}}\; \text{state } n$$
The corresponding kinetic equations read [7]:
$$\frac{dp_i}{dt} = \sum_j T_{ij}(t)\, p_j \qquad (i = 1, \ldots, n) \tag{5}$$
where $T_{ij}$ is the conditional probability per unit time to reach state $i$ starting from state $j$. The transfer operator $T$ appearing in this equation is a tridiagonal matrix satisfying the normalization condition $\sum_i T_{ij}(t) = 0$, whose structure can be summarized as follows:
  • Elements along the principal diagonal:
$$T_{11} = -k_{12}(t), \qquad T_{nn} = -k_{n,n-1}(t), \qquad T_{ii} = -\big(k_{i,i-1}(t) + k_{i,i+1}(t)\big) \quad 2 \le i \le n-1 \tag{6a}$$
  • Elements along the upper sub-diagonal:
$$T_{i,i+1} = k_{i+1,i}(t) \qquad 1 \le i \le n-1 \tag{6b}$$
  • Elements along the lower sub-diagonal:
$$T_{i,i-1} = k_{i-1,i}(t) \qquad 2 \le i \le n \tag{6c}$$
The rate constants $k_{ij}$ can be evaluated from Equations (4a) and (4b),
$$k_{i,i\pm 1}(t) = k^{(0)}_{i,i\pm 1} \exp\left[ \frac{2\epsilon}{q^2}\, \Delta x(i, i\pm 1) \sin \omega t \right] \tag{7a}$$
with:
$$\Delta x(i, i\pm 1) = x_{unst}(i \pm 1) - x_{st}(i) \tag{7b}$$
$$k^{(0)}_{i,i\pm 1} = \frac{1}{2\pi}\, \big| U_0''(x_{unst})\, U_0''(x_{st}) \big|^{1/2} \exp\left[ -\frac{2}{q^2}\, \Delta U_0(i, i\pm 1) \right] \tag{7c}$$
Equation (5) constitutes a linear system with time-periodic coefficients. In what follows, we focus on the linear response, which will provide us with both qualitative and quantitative insights into the role of the principal parameters involved in the problem.
The starting point is to expand Equation (7a) in ϵ,
$$k_{i,i\pm 1}(t) = k^{(0)}_{i,i\pm 1} + \epsilon\, \Delta_{i,i\pm 1} \sin \omega t \tag{8a}$$
with:
$$\Delta_{i,i\pm 1} = \frac{2}{q^2}\, k^{(0)}_{i,i\pm 1}\, \Delta x(i, i\pm 1) \tag{8b}$$
This induces a decomposition of the transfer operator $T$ and of the probability vector $p = (p_1, \ldots, p_n)^T$ in Equation (5) in the form:
$$T(t) = T_0 + \epsilon\, \Delta \sin \omega t \tag{9a}$$
$$p(t) = p_0 + \delta p(t) \tag{9b}$$
Here, $T_0$ and $\Delta$ are again tridiagonal matrices with elements given by Equations (6a)–(6c), where the $k_{i,i\pm 1}(t)$ are replaced by $k^{(0)}_{i,i\pm 1}$ and $\Delta_{i,i\pm 1}$, respectively. $p_0$ is the invariant probability in the absence of the periodic forcing and $\delta p$ the forcing-induced response. Notice that $p_0$ and $\delta p$ are normalized to unity and to zero, respectively. Furthermore, since in the absence of the forcing all $k$'s are equal (cf. Equations (7c) and (3a)), the corresponding invariant probabilities $p_i^{(0)}$ are uniform, $p_i^{(0)} = 1/n$.
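To make this structure concrete, the following sketch (our illustrative code; for the cosine potential $\Delta x(i, i\pm 1) = \pm\pi$) assembles $T_0$ and $\Delta$ and verifies the normalization condition and the uniformity of the invariant probability:

```python
import numpy as np

def transfer_matrices(n, k0, q2, dx=np.pi):
    """Tridiagonal matrices T0 and Delta of Equation (9a), following the
    structure of Equations (6a)-(6c) with k_{i,i+1}(t) -> k0 and the
    first-order coefficients Delta_{i,i+1} = (2/q^2)*k0*dx of Equation (8b);
    dx = +pi for rightward and -pi for leftward transitions (cosine potential).
    """
    c = 2.0 / q2 * k0 * dx
    T0 = k0 * (np.eye(n, k=1) + np.eye(n, k=-1))   # gains from the two neighbors
    D = c * (np.eye(n, k=-1) - np.eye(n, k=1))     # +c rightward, -c leftward
    for M in (T0, D):
        M -= np.diag(M.sum(axis=0))                # enforce sum_i M_ij = 0
    return T0, D

T0, D = transfer_matrices(n=5, k0=1.0, q2=1.0)
p0 = np.full(5, 1 / 5)           # uniform invariant probability p_i^(0) = 1/n
assert np.allclose(T0 @ p0, 0)   # p0 is stationary in the absence of forcing
```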
Substituting Equation (9) into Equation (5) and adopting for compactness a vector notation, we obtain to the first order in ϵ:
$$\frac{d\,\delta p}{dt} = T_0\, \delta p + \epsilon \sin \omega t\; \Delta \cdot p_0 \tag{10}$$
The solution of this equation in the long time limit is of the form:
$$\delta p(t) = \epsilon\, (A \cos \omega t + B \sin \omega t) \tag{11a}$$
where the components $A_i$ and $B_i$ of $A$ and $B$ determine the amplitudes and phases of the $\delta p_i$'s with respect to the periodic forcing,
$$\delta p_i(t) = \epsilon\, (\mathrm{sign}\, B_i)\, R_i \sin(\omega t + \phi_i) \tag{11b}$$
$$R_i = (A_i^2 + B_i^2)^{1/2} \tag{11c}$$
$$\phi_i = \arctan \frac{A_i}{B_i} \tag{11d}$$
Substituting Equation (11) into Equation (10) and identifying the coefficients of $\cos \omega t$ and $\sin \omega t$, one obtains, following the lines of [7], the following explicit expressions for $A_i$ and $B_i$:
$$A_i = \frac{1}{N^2}\, \frac{4\pi k_0}{n}\, \frac{2}{q^2} \sum_{k\ \mathrm{even}} \cos\frac{(k-1)\pi}{2n}\, \cos\frac{(2i-1)(k-1)\pi}{2n}\; \frac{\omega}{\lambda_k^2 + \omega^2}$$
$$B_i = \frac{1}{N^2}\, \frac{4\pi k_0}{n}\, \frac{2}{q^2} \sum_{k\ \mathrm{even}} \cos\frac{(k-1)\pi}{2n}\, \cos\frac{(2i-1)(k-1)\pi}{2n}\; \frac{\lambda_k}{\lambda_k^2 + \omega^2} \tag{12}$$
where $\lambda_k$ is given by:
$$\lambda_k = 2 k_0 \left( 1 - \cos\frac{(k-1)\pi}{n} \right) \qquad k = 1, \ldots, n \tag{13}$$
and $k_0$ is the common value of the unperturbed rates $k^{(0)}_{i,i\pm 1}$.
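The coefficients can also be obtained without Equation (12), by solving Equation (10) directly in complex form: writing $\delta p(t) = \epsilon\, \mathrm{Im}[Z e^{i\omega t}]$ turns Equation (10) into the linear system $(i\omega I - T_0) Z = \Delta \cdot p_0$, whence $B = \mathrm{Re}\, Z$ and $A = \mathrm{Im}\, Z$. A minimal sketch, reusing the transfer_matrices helper introduced above (our naming, not the paper's):

```python
import numpy as np

def linear_response(n, k0, q2, omega):
    """A_i, B_i, R_i, phi_i of Equations (11) and (12), per unit eps,
    from the direct solution of Equation (10) in complex form."""
    T0, D = transfer_matrices(n, k0, q2)
    p0 = np.full(n, 1.0 / n)
    Z = np.linalg.solve(1j * omega * np.eye(n) - T0, D @ p0)
    A, B = Z.imag, Z.real          # delta_p = eps*(A cos wt + B sin wt)
    R = np.hypot(A, B)             # amplitudes R_i, Equation (11c)
    phi = np.arctan2(A, B)         # phases phi_i, Equation (11d)
    return A, B, R, phi

A, B, R, phi = linear_response(n=30, k0=1.0, q2=1.0, omega=0.01)
print(R[0], R[14], R[29])          # boundary states respond most strongly
```

Scanning $q^2$ at fixed $\omega$ with this routine, with $k_0$ updated consistently (e.g., $k_0 = (1/2\pi)\, e^{-4/q^2}$ for the cosine potential), reproduces the existence of the optimal noise strength mentioned below.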
Figure 1a–c depicts the amplitudes $R_i$ and the phases $\phi_i$ of the $\delta p_i(t)$ as functions of $i$, keeping $n$ and $q^2$ fixed, as obtained by numerical evaluation of the analytic expressions (12). The plot of the coefficient $B_i$ as a function of $i$ in Figure 1b shows that this coefficient undergoes several changes of sign. This entails that the corresponding response (Equation (11b)) will be subjected to an additional phase shift of $\pi$ in the regions where $B_i$ is negative. As can be seen, the maximal response is obtained for the boundary states $1$ and $n$. This is due to the fact that while the intermediate states are depleted by transferring probability masses to both of their neighbors, for the boundary states, the depletion is asymmetric. Furthermore, for given noise strength $q^2$, the response is more pronounced in the range of low frequencies, as expected to be the case in stochastic resonance. Note that for $n$ odd, the response in the middle state is strictly zero. Finally, varying $q^2$ for fixed $\omega$ provides an optimal value $q^2_{\mathrm{opt}}$ for which the amplitude of the response is maximized.

3. Multistability, Information Entropy, Information Production and Information Transfer

In this section, we introduce a set of quantities serving as measures of the choice and unexpectedness associated with self-organization and, in particular, with the multiplicity of available states of the system introduced in Section 2. We start with the information (Shannon) entropy [1,2,3,4]:
$$S_I = -\sum_{i=1}^{n} p_i \ln p_i \tag{14}$$
where the probabilities $p_i$ of the various states are defined by Equations (5), (9b) and (11). As a reference, we notice that in the absence of the nonequilibrium constraint provided by the external forcing, the probabilities are uniform, $p_i = 1/n$, and $S_I$ in this state of full randomness attains its maximum value:
$$S_{I,\max} = \ln n \tag{15}$$
The deviation from full randomness, and thus, the ability to reduce errors, is conveniently measured by the redundancy:
$$R = 1 - \frac{S_I}{S_{I,\max}} = 1 - \frac{S_I}{\ln n} \tag{16}$$
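In computational terms, Equations (14)–(16) are one-liners; the following sketch (illustrative only) evaluates them for an arbitrary probability vector:

```python
import numpy as np

def information_measures(p):
    """Shannon entropy (14), its maximum (15) and the redundancy (16)."""
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]                        # convention: 0 ln 0 = 0
    S_I = -np.sum(nz * np.log(nz))
    S_max = np.log(len(p))
    return S_I, S_max, 1.0 - S_I / S_max

print(information_measures([0.25] * 4))            # (ln 4, ln 4, 0): full randomness
print(information_measures([0.7, 0.1, 0.1, 0.1]))  # positive redundancy
```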
We come next to the link with dynamics. Differentiating both sides of Equation (14) with respect to time and utilizing Equation (5), we obtain a balance equation for the rate of change of $S_I$:
$$\frac{dS_I}{dt} = -\sum_i \frac{dp_i}{dt} \ln p_i = -\sum_{ij} T_{ij}\, p_j \ln p_i$$
Setting:
$$T_{ij} = w_{ij} \quad (i \ne j), \qquad T_{ii} = -\sum_{j \ne i} T_{ji} = -w_{ii} \tag{17}$$
we can rewrite this equation in the more suggestive form:
$$\frac{dS_I}{dt} = -\sum_{ij} \ln p_i\, (w_{ij} p_j - w_{ji} p_i) = \frac{1}{2} \sum_{ij} \ln\frac{p_j}{p_i}\, (w_{ij} p_j - w_{ji} p_i)$$
Writing:
$$\ln\frac{p_j}{p_i} = \ln\frac{w_{ij}\, p_j}{w_{ji}\, p_i} - \ln\frac{w_{ij}}{w_{ji}}$$
we finally obtain:
$$\frac{dS_I}{dt} = \sigma_I + J_I \tag{18a}$$
where the information entropy production $\sigma_I$ and the associated flux $J_I$ are defined by [8,9]:
$$\sigma_I = \frac{1}{2} \sum_{ij} (w_{ij} p_j - w_{ji} p_i) \ln\frac{w_{ij}\, p_j}{w_{ji}\, p_i} \;\ge\; 0 \tag{18b}$$
$$J_I = -\frac{1}{2} \sum_{ij} (w_{ij} p_j - w_{ji} p_i) \ln\frac{w_{ij}}{w_{ji}} \tag{18c}$$
We notice the bilinear structure of $\sigma_I$, in which the factors within the sum can be viewed as generalized (probability) fluxes and their associated generalized forces. This is reminiscent of the expression for the entropy production of classical irreversible thermodynamics [10]. As a reference, in the state of equipartition realized in the absence of the external forcing, one has $p_i = 1/n$, $w_{ij} = k_0$, and $\sigma_I$ vanishes along with all individual generalized fluxes and forces. This property of detailed balance, characteristic of thermodynamic equilibrium, breaks down in the presence of the forcing, which introduces a differentiation in the $p_i$ and an asymmetry in the $w_{ij}$. $\sigma_I$ therefore measures the distance between equilibrium and nonequilibrium on the one side, and between the direct ($i$ to $j$) and reverse ($j$ to $i$) processes on the other. In this latter context, $\sigma_I$ is also closely related to the Kullback information.
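The decomposition (18a)–(18c) is easily checked numerically. The sketch below is an illustrative implementation in which, following the convention of Equation (17), w[i, j] denotes the rate of the transition $j \to i$; both quantities vanish at the detailed-balance reference state:

```python
import numpy as np

def entropy_balance(w, p):
    """Information entropy production (18b) and flux (18c)."""
    sigma_I, J_I = 0.0, 0.0
    n = len(p)
    for i in range(n):
        for j in range(n):
            if i == j or w[i, j] == 0 or w[j, i] == 0:
                continue
            flux = w[i, j] * p[j] - w[j, i] * p[i]   # generalized flux
            sigma_I += 0.5 * flux * np.log(w[i, j] * p[j] / (w[j, i] * p[i]))
            J_I -= 0.5 * flux * np.log(w[i, j] / w[j, i])
    return sigma_I, J_I

# reference state: uniform probabilities and equal rates -> detailed balance
n, k0 = 5, 1.0
w = k0 * (np.eye(n, k=1) + np.eye(n, k=-1))
print(entropy_balance(w, np.full(n, 1 / n)))         # (0.0, 0.0)
```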
Finally, in an information theory perspective, one is led to consider the information transfer between a part of the system playing the role of a “transmitting set” X and a “receiver set” Y separated by a “noisy channel” [2,4]. In the dynamical perspective developed in this work, the analogs are two states, say $i$ and $j$, and a conditional probability matrix, $W = \{W_{ji}\}$. The information transfer is then simply the difference between the Shannon entropy and the Kolmogorov–Sinai entropy [1,4], the latter being given by:
$$h = -\sum_{ij} p_i\, W_{ji} \ln W_{ji} \tag{19}$$
To relate $h$ to the quantities governing the evolution of our multistable system, we need to relate the transition probabilities $W_{ij}$ to the transition rates (probabilities per unit time) $w_{ij}$ featured in Equations (6) and (17). This requires in turn mapping the continuous-time process of the previous section onto a discrete-time Markov chain. To this end, we introduce the discretized expression of the time derivative in Equation (5), utilize Equation (17) and choose the time step $\Delta t$ as a fraction of the Kramers time associated with the passage over the potential barriers, as discussed in Section 2:
$$\Delta t = (k_{i,i+1} + k_{i+1,i} + k_{i,i-1} + k_{i-1,i})^{-1} = \kappa^{-1} \tag{20}$$
This leads to the discrete master-type equation:
$$p_i(t + \Delta t) = \sum_j W_{ij}\, p_j(t) \tag{21}$$
where the stochastic matrix W is defined by:
$$W_{i\pm 1, i} = \frac{k_{i,i\pm 1}}{\kappa}, \qquad W_{ii} = 1 - \frac{k_{i,i+1} + k_{i,i-1}}{\kappa} \qquad 2 \le i \le n-1 \tag{22a}$$
$$W_{21} = \frac{k_{12}}{\kappa}, \qquad W_{11} = 1 - \frac{k_{12}}{\kappa} \tag{22b}$$
$$W_{n-1,n} = \frac{k_{n,n-1}}{\kappa}, \qquad W_{nn} = 1 - \frac{k_{n,n-1}}{\kappa} \tag{22c}$$
Introducing again as a reference the state in the absence of the forcing, one sees straightforwardly that $k_{ij} = k_0$, $p_i = 1/n$ and $W_{ij} = 1/4$ for $i \ne j$, $W_{ii} = 1/2$ for $2 \le i \le n-1$ and $W_{11} = W_{nn} = 3/4$. Expression (19) then reduces to:
$$h^{(0)} = \frac{n-2}{n} \cdot \frac{3}{2} \ln 2 + \frac{1}{n} \left( -\frac{3}{2} \ln 3 + 4 \ln 2 \right) \tag{23}$$
where the two terms on the right-hand side account, respectively, for the contributions of the intermediate states and of the boundary states $1$ and $n$.
It is worth noting that for $n$ even, the Markov chain associated with the forcing-free system is lumpable [11], in the sense that upon grouping the original states, one can reduce Equation (21) to a system of just two states $a$ and $b$, with $W_{aa} = W_{bb} = 3/4$ and $W_{ab} = W_{ba} = 1/4$. In other words, in the absence of the nonequilibrium constraint, the intermediate states play no role. The presence of the forcing will change this situation radically by inducing non-trivial correlations and information exchanges within the system.
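Equation (23) can be verified directly; the sketch below (illustrative) assembles the unperturbed stochastic matrix of Equations (22a)–(22c), evaluates Equation (19) and compares with the closed formula:

```python
import numpy as np

def ks_entropy(W, p):
    """Kolmogorov-Sinai entropy of Equation (19); W[j, i] is the probability
    of the step i -> j of the discrete-time chain (21)."""
    h = 0.0
    for i in range(len(p)):
        col = W[:, i][W[:, i] > 0]
        h -= p[i] * np.sum(col * np.log(col))
    return h

n = 8
# unperturbed matrix: W_{i+-1,i} = 1/4, W_ii = 1/2 in the bulk, W_11 = W_nn = 3/4
W = 0.25 * (np.eye(n, k=1) + np.eye(n, k=-1)) + 0.5 * np.eye(n)
W[0, 0] = W[-1, -1] = 0.75
h0 = ks_entropy(W, np.full(n, 1 / n))
formula = (n - 2) / n * 1.5 * np.log(2) + (4 * np.log(2) - 1.5 * np.log(3)) / n
print(np.isclose(h0, formula))       # True: reproduces Equation (23)
```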

4. Nonequilibrium Dynamics of Information and Stochastic Resonance

Our next step is to evaluate the quantities introduced in the preceding section in the presence of the nonequilibrium constraint provided by the external forcing with emphasis on the roles of key parameters, such as forcing amplitude and frequency, noise strength and number of states.

4.1. Information Entropy and Redundancy

Substituting Equation (9b) into Equation (14) and using the property $\sum_i \delta p_i = 0$, one sees straightforwardly that the $O(\epsilon)$ contributions to $S_I$ cancel identically. Keeping the first non-trivial (i.e., $O(\epsilon^2)$) parts, one obtains:
$$S_I = S_I^{(0)} + \Delta S_I \tag{24a}$$
where:
$$S_I^{(0)} = S_{I,\max} = \ln n \tag{24b}$$
and:
$$\Delta S_I = -\frac{n}{2} \sum_i \delta p_i^2 \tag{24c}$$
Using Equation (11b) for $\delta p_i$, we may further decompose $\Delta S_I$ into its time-averaged part $\overline{\Delta S_I}$ and a periodic modulation $\delta S_I$ around the average:
$$\overline{\Delta S_I} = -\frac{\epsilon^2 n}{4} \sum_i R_i^2 \tag{25a}$$
$$\delta S_I = \frac{\epsilon^2 n}{4} \sum_i R_i^2 \cos\big(2(\omega t + \phi_i)\big) = R_{\mathrm{eff}}^2 \cos(2\omega t + \psi_{\mathrm{eff}}) \tag{25b}$$
where the effective amplitude $R_{\mathrm{eff}}$ and phase $\psi_{\mathrm{eff}}$ of the modulation are expressed in terms of the $R_i$ and $\phi_i$.
The evaluation of the redundancy, Equation (16), follows straightforwardly from that of S I , leading to:
$$R = -\frac{\Delta S_I}{\ln n} \tag{26}$$
where $\Delta S_I$ is given by Equations (24c) and (25).
In Figure 2a,b, the time-averaged excess entropy $\overline{\Delta S_I}$ and redundancy $\bar{R}$ are plotted as functions of the number $n$ of states, using expressions (11)–(13). In all cases, $\overline{\Delta S_I}$ is negative and $\bar{R}$ positive, reflecting the enhancement of predictability induced by the nonequilibrium constraint. Furthermore, and similarly to Figure 1a, for given noise strength $q^2$, the enhancement is more pronounced in the range of low frequencies $\omega$ and is further amplified when the conditions of stochastic resonance are met. Interestingly, for given $q^2$ and $\omega$, the enhancement exhibits a clear-cut extremum for a particular value of the number of states. This unexpected result suggests that to optimize its function, our multistable system, viewed as an information processor, should preferably be endowed with a number of states (essentially, a “variety”) that is neither very small nor too large. Finally, in Figure 3, the time evolution of the full $\Delta S_I$ in the low-frequency range is plotted, using expressions (11)–(13) and (25).
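These curves are straightforward to regenerate: combining the linear_response sketch of Section 2 with Equations (25a) and (26) and scanning over $n$ (illustrative parameter values) exhibits the extremum directly:

```python
import numpy as np

# scan of the time-averaged excess entropy (25a) and redundancy (26) versus n,
# reusing linear_response() from the sketch of Section 2
eps, k0, q2, omega = 0.05, 1.0, 1.0, 0.01
for n in range(4, 65, 4):
    _, _, R, _ = linear_response(n, k0, q2, omega)
    dS_bar = -eps**2 * n / 4 * np.sum(R**2)     # Equation (25a)
    red_bar = -dS_bar / np.log(n)               # Equation (26), time-averaged
    print(n, dS_bar, red_bar)
# dS_bar passes through a minimum (maximal predictability) at intermediate n
```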

4.2. Information Entropy Production

Our starting point is Equation (18b). We have shown in the preceding section that the zeroth-order part of $\sigma_I$ vanishes, since it corresponds to a state where detailed balance holds. To obtain the first non-trivial contribution, we need therefore to expand both $p_i$ and $w_{ij}$ in the forcing amplitude $\epsilon$. Actually, since each of the two factors in the expression for $\sigma_I$ vanishes for $\epsilon = 0$, it suffices to take each of them to $O(\epsilon)$ in order to obtain the dominant, $O(\epsilon^2)$, contribution. Substituting Equation (9b) along with the analogous expression for $w_{ij}$:
$$w_{ij} = w_{ij}^{(0)} + \delta w_{ij} \qquad (j = i \pm 1) \tag{27}$$
where $w_{ij}^{(0)} = k_0$ and $p_i^{(0)} = 1/n$, we obtain:
$$\Delta \sigma_I = \frac{n}{2 k_0} \left\{ \sum_i \left[ k_0 (\delta p_{i+1} - \delta p_i) + \frac{1}{n} (\delta w_{i,i+1} - \delta w_{i+1,i}) \right]^2 + \sum_i \left[ k_0 (\delta p_{i-1} - \delta p_i) + \frac{1}{n} (\delta w_{i,i-1} - \delta w_{i-1,i}) \right]^2 \right\} \tag{28}$$
where $\delta p_i$ is given by Equations (11)–(13) and (see Equations (6), (7) and (17)):
$$\delta w_{i,i\pm 1} = \pm\, \epsilon\, \frac{2}{q^2}\, k_0\, \pi \sin \omega t \tag{29}$$
Taking the time average $\overline{\Delta \sigma_I}$ over a period of the forcing and denoting for compactness $\mathrm{sign}\, B_i = s_i$, one finally obtains:
$$\begin{aligned}
\frac{\overline{\Delta \sigma_I}}{\epsilon^2/(2 k_0)} ={}& \frac{16 k_0^2 \pi^2 (n-1)}{n\, q^4} \\
&+ k_0^2\, n \left( \tfrac{1}{2}(R_1^2 + R_2^2) - s_1 s_2 R_1 R_2 \cos(\phi_2 - \phi_1) \right) + \frac{4 k_0^2 \pi}{q^2} \left( s_2 R_2 \cos\phi_2 - s_1 R_1 \cos\phi_1 \right) \\
&+ k_0^2\, n \left( \tfrac{1}{2}(R_{n-1}^2 + R_n^2) - s_{n-1} s_n R_{n-1} R_n \cos(\phi_{n-1} - \phi_n) \right) + \frac{4 k_0^2 \pi}{q^2} \left( s_{n-1} R_{n-1} \cos\phi_{n-1} - s_n R_n \cos\phi_n \right) \\
&+ \sum_{i=2}^{n-1} \bigg\{ k_0^2\, n \Big[ \tfrac{1}{2}(R_{i+1}^2 + R_{i-1}^2 + 2 R_i^2) - s_i R_i \big( s_{i+1} R_{i+1} \cos(\phi_{i+1} - \phi_i) + s_{i-1} R_{i-1} \cos(\phi_{i-1} - \phi_i) \big) \Big] \\
&\qquad\quad + \frac{4 k_0^2 \pi}{q^2} \left( s_{i+1} R_{i+1} \cos\phi_{i+1} - s_{i-1} R_{i-1} \cos\phi_{i-1} \right) \bigg\}
\end{aligned} \tag{30}$$
Figure 4 depicts the dependence of $\overline{\Delta \sigma_I}$, scaled by the factor $2 k_0 (\epsilon \pi / q^2)^2$, on the number of states $n$ for different values of the forcing frequency. We observe a trend similar to that of Figure 2a,b: an enhancement under nonequilibrium conditions near stochastic resonance for an optimal number of intermediate states, and practically no effect of the nonequilibrium constraint for higher frequencies. Since the time average of $dS_I/dt$ in Equation (18a) is necessarily zero, it follows that the information flux $J_I$ (Equation (18c)) will display a similar behavior, albeit with an opposite sign, i.e., a pronounced dip for low frequencies and for an optimal number of states. In a sense, the excess information produced remains confined within the system at the expense of a negative excess information flux, in much the same way as in the entropy balance of classical irreversible thermodynamics [10].
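As an independent check of Equation (30), one may average Equation (28) by direct quadrature over one forcing period, with $\delta p$ taken from the linear response and $\delta w$ from Equation (29). A sketch under the same illustrative conventions as above (note that the two bracketed fluxes of Equation (28) are antisymmetric bond by bond, so the sign convention chosen for $\delta w$ does not affect the result):

```python
import numpy as np

# time average of the excess entropy production (28) over one forcing period,
# reusing linear_response(); dw is the modulation magnitude of Equation (29)
eps, k0, q2, omega, n = 0.05, 1.0, 1.0, 0.01, 16
A, B, R, phi = linear_response(n, k0, q2, omega)
ts = np.linspace(0.0, 2 * np.pi / omega, 2000, endpoint=False)
acc = 0.0
for t in ts:
    dp = eps * (A * np.cos(omega * t) + B * np.sin(omega * t))
    dw = eps * (2.0 / q2) * k0 * np.pi * np.sin(omega * t)
    right = k0 * (dp[1:] - dp[:-1]) - (2.0 / n) * dw   # bonds (i, i+1)
    left = -right                                      # bonds (i+1, i)
    acc += n / (2 * k0) * (np.sum(right**2) + np.sum(left**2))
print(acc / len(ts))    # averaged excess information entropy production > 0
```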

4.3. Information Transfer and Kolmogorov–Sinai Entropy

We begin by decomposing p i and W i j in Equation (19) into a reference part and a deviation arising from the presence of the nonequilibrium constraint:
$$p_i = p_i^{(0)} + \delta p_i, \qquad W_{ij} = W_{ij}^{(0)} + \delta W_{ij} \tag{31}$$
where $p_i^{(0)} = 1/n$ and the $W_{ij}^{(0)}$'s have been evaluated in Section 3. Using Equations (7), (8) and (22), one can establish the following properties:
  • $\delta W_{ii}$ vanishes for the intermediate states $2 \le i \le n-1$.
  • The contributions to $k_{i,i\pm 1}$ coming from second-order terms in the expansion of Equation (7a) in powers of $\epsilon$ do not contribute to $\delta W_{i,i\pm 1}$ up to order $\epsilon^3$, which can therefore be limited for our purposes to its first-order part $\delta W^{(1)}_{i,i\pm 1}$.
  • $\delta W^{(1)}_{i,i\pm 1}$, $\delta W_{11}$ and $\delta W_{nn}$ satisfy the symmetry relations:
$$\delta W^{(1)}_{i,i+1} = -\delta W^{(1)}_{i,i-1} = \delta W, \qquad \delta W_{11} = -\delta W_{nn} = -\delta W \tag{32}$$
Substituting into Equation (19) and using the symmetry property $\delta p_n = -\delta p_1$ along with the normalization condition $\sum_{i=1}^n \delta p_i = 0$, one obtains after some straightforward manipulations:
$$h = h^{(0)} + \left[ 2 \ln 3\; \delta W\, \delta p_1 - \frac{4}{n} \left( \frac{n}{2} - 3 \right) \delta W^2 \right] \tag{33}$$
with:
$$\delta p_1 = \epsilon\, (\mathrm{sign}\, B_1)\, R_1 \sin(\omega t + \phi_1), \qquad \delta W = \frac{\delta k_{i,i+1}}{4 k_0} = \epsilon\, \frac{1}{4}\, \frac{2\pi}{q^2} \sin \omega t \tag{34}$$
Taking the average of Equation (33) over a period of the forcing leads to the following expression for the mean excess Kolmogorov–Sinai entropy:
$$\overline{\Delta h} = \bar{h} - h^{(0)} = \epsilon^2 \left\{ \ln 3\; \frac{\pi}{2 q^2}\, (\mathrm{sign}\, B_1)\, R_1 \cos\phi_1 - \frac{2}{n} \left( \frac{n}{2} - 3 \right) \frac{\pi^2}{4 q^4} \right\} \tag{35}$$
In Figure 5, the dependence of $\overline{\Delta h}$, scaled by the factor $\alpha^2 = 4 \epsilon^2 \pi^2 / q^4$, on the number of states is plotted for various values of the forcing frequency. We find a trend similar to the one in Figure 2, Figure 3 and Figure 4, namely an optimal response for frequency values close to the conditions of stochastic resonance and for a particular number of intermediate states. The negative values of $\overline{\Delta h}$ reflect the reduction of randomness (of which $h$ is a characteristic measure) induced by the nonequilibrium constraint.
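For completeness, Equation (35) can be scanned over $n$ with the same linear-response routine, $\mathrm{sign}\, B_1$, $R_1$ and $\phi_1$ being taken from the numerical solution of Equation (10) (illustrative parameters):

```python
import numpy as np

# mean excess Kolmogorov-Sinai entropy, Equation (35), versus number of states,
# reusing linear_response() from the sketch of Section 2
eps, k0, q2, omega = 0.05, 1.0, 1.0, 0.01
for n in range(8, 41, 2):
    A, B, R, phi = linear_response(n, k0, q2, omega)
    dh = eps**2 * (np.log(3) * np.pi / (2 * q2) * np.sign(B[0]) * R[0] * np.cos(phi[0])
                   - (2 / n) * (n / 2 - 3) * np.pi**2 / (4 * q2**2))
    print(n, dh)        # negative values: randomness reduced by the constraint
```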

5. Conclusions

In this work, a nonlinear system subjected to a nonequilibrium constraint in the form of a periodic forcing was considered, giving rise to complex behavior in the form of fluctuation-induced transitions between multiple steady states and of stochastic resonance. Mapping the dynamics into a discrete-state Markov process allowed us to view the system as an information processor. Subsequently, the link between the dynamics and, in particular, the self-organization induced by the nonequilibrium constraint, on the one side, and quantities of interest in information theory, on the other side, was addressed. It was shown that the nonequilibrium constraint leaves a clear-cut signature on these quantities by reducing randomness and by enhancing predictability, an effect that is maximized under the conditions of stochastic resonance. Of special interest is the a priori unexpected existence of an optimum for a particular number of simultaneously stable states, suggesting the existence of optimal alphabets on which information is to be generated and transmitted.
In summary, it appears that, when viewed in a dynamical perspective, generalized entropy-like quantities as used in information theory can provide useful characterizations of self-organizing systems led to choose among a multiplicity of possible outcomes. Conversely, and in line with the pioneering work in [3,4], information constitutes in turn one of the basic attributes emerging from the dynamics of wide classes of self-organizing systems and conferring on them their specificity.

Acknowledgments

This work is supported, in part, by the Science Policy Office of the Belgian Federal Government.

Author Contributions

The authors contributed equally to the definition of the subject and to the analytic and computational parts of the work. They jointly wrote the paper and read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Nicolis, G.; Nicolis, C. Foundations of Complex Systems, 2nd ed.; World Scientific: Singapore, Singapore, 2012. [Google Scholar]
  2. Ash, R.B. Information Theory; Dover: New York, NY, USA, 1990. [Google Scholar]
  3. Haken, H. Information and Self-Organization; Springer: Berlin, Germany, 1988. [Google Scholar]
  4. Nicolis, J.S. Chaos and Information Processing; World Scientific: Singapore, Singapore, 1999. [Google Scholar]
  5. Gardiner, C. Handbook of Stochastic Methods; Springer: Berlin, Germany, 1983. [Google Scholar]
  6. Gammaitoni, L.; Hänggi, P.; Jung, P.; Marchesoni, F. Stochastic resonance. Rev. Mod. Phys. 1998, 70, 223–287. [Google Scholar] [CrossRef]
  7. Nicolis, C. Stochastic resonance in multistable systems: The role of intermediate states. Phys. Rev. E 2010, 82, 011139. [Google Scholar] [CrossRef] [PubMed]
  8. Luo, J.L.; van den Broeck, C.; Nicolis, G. Stability criteria and fluctuations around nonequilibrium states. Z. Phys. B 1984, 56, 165–170. [Google Scholar]
  9. Gaspard, P. Time-reversed dynamical entropy and irreversibility in Markovian random processes. J. Stat. Phys. 2004, 117, 599–615. [Google Scholar] [CrossRef]
  10. De Groot, S.; Mazur, P. Nonequilibrium Thermodynamics; North Holland: Amsterdam, The Netherlands, 1962. [Google Scholar]
  11. Kemeny, J.; Snell, J. Finite Markov Chains; Springer: Berlin, Germany, 1976. [Google Scholar]
Figure 1. Amplitude, scaled by $\alpha = 2\epsilon \Delta x / q^2$: (a) of the response $R_i$ (Equation (11c)); (b) of the coefficient $B_i$ (Equation (12)); and (c) the behavior of the phase $\phi_i$ (Equation (11d)), in the case of $n = 30$ coexisting stable states for different values of the ratio $\omega/k_0 = 0.01$ (full lines), 0.1 (dashed lines) and 1 (dotted lines).
Figure 2. (a) Time-averaged excess entropy $\overline{\Delta S_I}$ (Equation (25a)) and (b) redundancy $\bar{R}$ (Equation (26)) as functions of the number $n$ of states present, for different values of the ratio $\omega/k_0 = 0.01$ (full lines), 0.1 (dashed lines) and 1 (dotted lines). Normalization parameter $\alpha$ as in Figure 1.
Figure 3. Time evolution of the full excess entropy $\Delta S_I$ (Equation (24c)) in the case of $n = 32$ coexisting states with $\omega/k_0 = 0.01$, corresponding to the minimum of the full line of Figure 2a.
Figure 4. Information entropy production averaged over the period of the forcing (Equation (30)), scaled by $\beta = 2 k_0 (\epsilon \pi / q^2)^2$, as a function of the number of states $n$ present and for different values of the ratio $\omega/k_0 = 0.01$ (full line), 0.1 (dashed line) and 1 (dotted line).
Figure 5. Kolmogorov–Sinai entropy averaged over the period of the forcing (Equation (35)) as a function of the number of states $n$ present and for different values of the ratio $\omega/k_0 = 0.01$ (full line), 0.1 (dashed line) and 1 (dotted line). Normalization parameter $\alpha$ as in Figure 1.
