Article

Change-Point Detection Using the Conditional Entropy of Ordinal Patterns

1 Institute of Mathematics, University of Lübeck, 23562 Lübeck, Germany
2 Graduate School for Computing in Medicine and Life Sciences, University of Lübeck, 23562 Lübeck, Germany
3 Georg-Elias-Müller-Institute of Psychology, University of Goettingen, Goßlerstraße 14, 37073 Goettingen, Germany
4 Theoretical Neurophysics Group, Max Planck Institute for Dynamics and Self-Organization, Am Fassberg 17, 37077 Goettingen, Germany
5 Leibniz ScienceCampus Primate Cognition, Kellnerweg 4, 37077 Goettingen, Germany
* Author to whom correspondence should be addressed.
Submission received: 27 July 2018 / Revised: 3 September 2018 / Accepted: 12 September 2018 / Published: 14 September 2018
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)

Abstract: This paper is devoted to change-point detection using only the ordinal structure of a time series. A statistic based on the conditional entropy of ordinal patterns, which characterize the local up-and-down structure of a time series, is introduced and investigated. The statistic requires only minimal a priori information about the given data and shows good performance in numerical experiments. By the nature of ordinal patterns, the proposed method does not detect pure level changes but changes in the intrinsic pattern structure of a time series, so it could be interesting in combination with other methods.

1. Introduction

Most real-world time series are non-stationary, that is, some of their properties change over time. A model for some non-stationary time series is provided by a piecewise stationary stochastic process: its properties are locally constant except at certain time-points called change-points, where some properties change abruptly [1].
Detecting change-points is a classical problem relevant in many applications, for instance in seismology [2], economics [3], marine biology [4], and many other fields of science. There are many methods for tackling the problem [1,5,6,7,8]. However, most of the existing methods share a common drawback: they require certain a priori information about the time series. It is necessary either to know a family of stochastic processes providing a model for the time series (see, for instance, [9], where autoregressive (AR) processes are considered) or at least to know which characteristics (mean, standard deviation, etc.) of the time series reflect the change (see [7,10]). In real-world applications, such information is often unavailable [11].
Here, we suggest a new method for change-point detection that requires minimal a priori knowledge: we only assume that the changes affect the evolution rule linking the past of the process with its future (a formal description of the considered processes is provided by Definition 4). A natural example of such a change is an alteration of the distribution of increments.
Our method is based on ordinal pattern analysis, a promising approach to real-valued time series analysis [12,13,14,15,16,17,18]. In ordinal pattern analysis, one considers order relations between the values of a time series instead of the values themselves. These order relations are coded by ordinal patterns; specifically, an ordinal pattern of order $d \in \mathbb{N}$ describes the order relations between $(d+1)$ successive points of a time series. The main step of ordinal pattern analysis is the transformation of the original time series into a sequence of ordinal patterns, which can be considered an effective kind of discretization extracting structural features from the data. The result of this transformation is demonstrated in Figure 1 for order $d = 1$. Note that the distribution of ordinal patterns contains much information about the original time series, making ordinal patterns interesting for data analysis, especially for data from nonlinear systems (see [19,20]).
For detecting a change-point $t^* \in \mathbb{N}$ in a time series $x = (x(t))_{t=0}^{L}$ with values in $\mathbb{R}$, one generally considers $x$ as a realization of a stochastic process $X$ and computes for $x$ a statistic $S(t; x)$ that should attain its maximum at $t = t^*$. Here, we suggest a statistic on the basis of the conditional entropy of ordinal patterns introduced in [21]. The latter is a complexity measure similar to the celebrated permutation entropy [12], with, in particular, better performance (see [20,21]).
Let us first provide a simple example to motivate our approach and illustrate its idea.
Example 1.
Consider a time series $(x(t))_{t=0}^{L}$ whose central part is shown in Figure 1. The time series is periodic before and after $L/2$, but at $L/2$ a change occurs (marked by a vertical line): the “oscillations” become faster. Figure 1 also presents the ordinal patterns $\pi(t)$ of order $d = 1$ at times $t$ underlying the time series. Note that there are only two ordinal patterns of order 1: the increasing one (coded by 0) and the decreasing one (coded by 1). Both ordinal patterns occur with the same frequency before and after the change-point.
However, the transitions between successive ordinal patterns change at $L/2$. Indeed, before the change-point $L/2$, both ordinal patterns have two possible successors (for instance, the ordinal pattern $\pi(L/2 - 5) = 0$ is succeeded by the ordinal pattern $\pi(L/2 - 4) = 0$, which in turn is succeeded by the ordinal pattern $\pi(L/2 - 3) = 1$), whereas after the change-point the ordinal patterns 0 and 1 alternate. A measure of the diversity of transitions between ordinal patterns is provided by the conditional entropy of ordinal patterns. For the sequence $(\pi(k))_{k=1}^{L}$ of ordinal patterns of order 1, the (empirical) conditional entropy for $t = 2, 3, \ldots, L$ is defined as follows:
$$\mathrm{eCE}\big((\pi(k))_{k=1}^{t}\big) = -\sum_{i=0}^{1}\sum_{j=0}^{1} \frac{n_{i,j}(t)}{t-1} \ln\frac{n_{i,j}(t)}{n_i(t)}, \quad \text{with}\quad \begin{aligned} n_{i,j}(t) &= \#\{l = 1, 2, \ldots, t-1 \mid \pi(l) = i,\ \pi(l+1) = j\},\\ n_i(t) &= \#\{l = 1, 2, \ldots, t-1 \mid \pi(l) = i\} \end{aligned}$$
(throughout the paper, $0 \ln 0 := 0$ and, more generally, $0 \cdot a := 0$ if a term $a$ is not defined; $\#A$ denotes the number of elements of a set $A$).
To detect change-points, we use a test statistic for d = 1 defined as follows:
$$\mathrm{CEofOP}(\theta L) = (L-2)\,\mathrm{eCE}\big((\pi(k))_{k=1}^{L}\big) - (\theta L - 1)\,\mathrm{eCE}\big((\pi(k))_{k=1}^{\theta L}\big) - (L - \theta L - 1)\,\mathrm{eCE}\big((\pi(k))_{k=\theta L + 1}^{L}\big),$$
for $\theta \in (0, 1)$ with $\theta L \in \mathbb{N}$. According to the properties of conditional entropy (see Section 2.2 for details), $\mathrm{CEofOP}(\theta L)$ attains its maximum when $\theta L$ coincides with a change-point. Figure 2 demonstrates this for the time series from Figure 1.
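For readers who prefer code, the quantities of this example can be sketched in a few lines of Python (a minimal illustration under our own naming, not the authors' Matlab implementation; the toy series below is an assumption mimicking the structure of Figure 1):

```python
import math
from collections import Counter

def patterns_d1(x):
    """Ordinal patterns of order d = 1: 0 = increasing, 1 = decreasing."""
    return [0 if a <= b else 1 for a, b in zip(x, x[1:])]

def ece(pats):
    """Empirical conditional entropy of a sequence of ordinal patterns."""
    if len(pats) < 2:
        return 0.0
    n = len(pats) - 1                     # number of pattern transitions
    pairs = Counter(zip(pats, pats[1:]))  # pair counts n_{i,j}
    heads = Counter(pats[:-1])            # counts n_i (last pattern not counted)
    return -sum(c / n * math.log(c / heads[i]) for (i, j), c in pairs.items())

def ceofop_d1(pats, t):
    """CEofOP statistic for d = 1 at a candidate change-point t."""
    L = len(pats)
    return ((L - 2) * ece(pats)
            - (t - 1) * ece(pats[:t])
            - (L - t - 1) * ece(pats[t:]))

# Toy series in the spirit of Figure 1: period-3 "oscillations",
# then faster, alternating ones; the change occurs at index 60.
x = [0, 1, 2] * 20 + [0, 1] * 30
pats = patterns_d1(x)
stat = ceofop_d1(pats, 60)  # positive: the pattern structure changes here
```

Note that a purely alternating pattern sequence has empirical conditional entropy exactly zero, since each pattern has a unique successor.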
For simplicity and in view of real applications, in Example 1 we define ordinal patterns and the CEofOP statistic directly for concrete time series. For theoretical considerations, however, it is clearly necessary to define the CEofOP statistic for stochastic processes. For this, we refer to Section 2.2.
To illustrate the applicability of the CEofOP statistic, let us discuss a real-world data example. Note that here multiple change-points are detected, as described below.
Example 2.
Here, we consider electroencephalogram (EEG) recording 14 from the sleep EEG dataset kindly provided by Vasil Kolev (see Section 5.3.2 in [22] for details and further results on this dataset). We employ the following procedure for an automatic discrimination between sleep stages in the EEG time series: first, we split the time series into pseudo-stationary intervals by finding change-points with the CEofOP statistic (change-points are detected in each EEG channel separately); then, we cluster all the obtained intervals. Figure 3 illustrates the outcome of the proposed discrimination for a single EEG channel in comparison with manual scoring by an expert; the automated identification of the sleep type (waking, REM, light sleep, deep sleep) is correct for 79.6% of the 30-s epochs. Note that the borders of the segments (that is, the detected change-points) in most cases correspond to changes of the sleep stage.
The CEofOP statistic was first introduced in [18], where we employed it as a component of a method for sleep EEG discrimination. However, no theoretical details of the change-point detection method were provided there. This paper aims to fill this gap and provides a justification for the CEofOP statistic. Numerical experiments given in the paper show that our method performs better than a similar one based on the Corrected Maximum Mean Discrepancy (CMMD) statistic developed by one of the authors and collaborators [23,24]. A numerical comparison with the classical parametric Brodsky–Darkhovsky method [11] suggests good applicability of the method to nonlinear data, in particular when there is no level change. This is remarkable since our method is based only on the ordinal structure of a time series.
Matlab 2016 (MathWorks, Natick, MA, USA) scripts implementing the suggested method are available at [25].

2. Methods

This section is organized as follows. In Section 2.1, we provide a brief introduction to ordinal pattern analysis. In particular, we define the conditional entropy of ordinal patterns and discuss its properties. In Section 2.2, we introduce the CEofOP statistic. In Section 2.3, we formulate an algorithm for detecting multiple change-points by means of the CEofOP statistic.

2.1. Preliminaries

Central objects in the following are stochastic processes $X = (X(t))_{t=n}^{m}$ on a probability space $(\Omega, \mathcal{A}, P)$ with values in $\mathbb{R}$. Here, $n \in \mathbb{N}_0$ and $n < m \in \mathbb{N} \cup \{\infty\}$, allowing both finite and infinite process lengths. We consider only univariate stochastic processes to keep notation simple; however, with appropriate adaptations, there are no principal restrictions on the dimension of a process. $X = (X(t))_{t=n}^{m}$ is stationary if, for all $t_1, t_2, \ldots, t_k, s$ with $t_1, t_2, \ldots, t_k, t_1+s, t_2+s, \ldots, t_k+s \in \{n, n+1, \ldots, m\}$, the distributions of $(X_{t_i})_{i=1}^{k}$ and $(X_{t_i+s})_{i=1}^{k}$ coincide.
Throughout this paper, we discuss detection of change-points in a piecewise stationary stochastic process. Simply speaking, a piecewise stationary stochastic process is obtained by “gluing” several pieces of stationary stochastic processes (for a formal definition of piecewise stationarity, see, for instance, ([26], Section 3.1)).
In this section, we recall the basic facts from ordinal pattern analysis (Section 2.1.1), present the idea of ordinal-patterns-based change-point detection (Section 2.1.2), and define the conditional entropy of ordinal patterns (Section 2.1.3).

2.1.1. Ordinal Patterns

Let us recall the definition of an ordinal pattern [14,17,18].
Definition 1.
For $d \in \mathbb{N}$, denote the set of permutations of $\{0, 1, \ldots, d\}$ by $S_d$. We say that a real vector $(x_0, x_1, \ldots, x_d)$ has ordinal pattern $\mathrm{OP}(x_0, x_1, \ldots, x_d) = (r_0, r_1, \ldots, r_d) \in S_d$ of order $d \in \mathbb{N}$ if
$$x_{r_0} \ge x_{r_1} \ge \ldots \ge x_{r_d}$$
and
$$r_{l-1} > r_l \quad \text{for } x_{r_{l-1}} = x_{r_l}.$$
As one can see, there are ( d + 1 ) ! different ordinal patterns of order d.
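Definition 1 translates directly into code. The following Python sketch (function name is ours) computes the ordinal pattern of a vector by sorting the indices by decreasing value, breaking ties in favour of the larger index, as required by the condition $r_{l-1} > r_l$:

```python
from itertools import permutations

def ordinal_pattern(window):
    """Ordinal pattern (r_0, ..., r_d) of a vector (x_0, ..., x_d):
    indices sorted so that x_{r_0} >= x_{r_1} >= ... >= x_{r_d},
    with ties broken by the larger index first (r_{l-1} > r_l)."""
    d = len(window) - 1
    return tuple(sorted(range(d + 1), key=lambda r: (-window[r], -r)))

# Example: the vector (2, 5, 3) has ordinal pattern (1, 2, 0),
# since x_1 >= x_2 >= x_0.
p = ordinal_pattern((2, 5, 3))

# There are (d + 1)! distinct patterns of order d, e.g. 6 for d = 2:
assert len({ordinal_pattern(v) for v in permutations((1, 2, 3))}) == 6
```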
Definition 2.
Given a stochastic process $X = (X(t))_{t=0}^{L}$ for $L \in \mathbb{N} \cup \{\infty\}$, the sequence $\Pi_d = (\Pi(t))_{t=d}^{L}$ with
$$\Pi(t) = \mathrm{OP}\big(X(t-d), X(t-d+1), \ldots, X(t)\big)$$
is called the random sequence of ordinal patterns of order $d \in \mathbb{N}$ of the process $X$. Similarly, given a realization $x = (x(t))_{t=0}^{L}$ of $X$, the sequence of ordinal patterns of order $d$ for $x$ is defined as $\pi_{d,L} = (\pi(t))_{t=d}^{L}$ with
$$\pi(t) = \mathrm{OP}\big(x(t-d), x(t-d+1), \ldots, x(t)\big).$$
For simplicity, we say that $L \in \mathbb{N}$ is the length of the sequence $\pi_{d,L}$; however, it in fact consists of $(L - d + 1)$ elements.
Definition 3.
A stochastic process $X = (X(t))_{t=0}^{L}$ for $L \in \mathbb{N} \cup \{\infty\}$ is said to be ordinal-$d$-stationary if for all $i \in S_d$ the probability $P(\Pi(t) = i)$ does not depend on $t$ for $d \le t \le L$. In this case, we call
$$p_i = P\big(\Pi(t) = i\big) \quad (1)$$
the probability of the ordinal pattern $i \in S_d$ in $X$.
The idea of ordinal pattern analysis is to consider the sequence of ordinal patterns and the ordinal pattern distribution obtained from it instead of the original time series. Though implying the loss of nearly all metric information, this often allows for extracting relevant information from a time series, in particular when it comes from a complex system. For example, ordinal pattern analysis provides estimators of the Kolmogorov–Sinai entropy [21,27,28] of dynamical systems, measures of time series complexity [12,18,29], measures of coupling between time series [16,30], and estimators of parameters of stochastic processes [13,31] (see also [15,32] for a review of applications to real-world time series). Methods of ordinal pattern analysis are invariant with respect to strictly monotone distortions of a time series [14], do not need information about the range of measurements, and are computationally simple [17]. This qualifies them for application when little is known about the system behind a time series, possibly as a first exploration step.
For a discussion of the properties of ordinal patterns sequence, we refer to [13,31,33,34,35]. For the following, we need two results stated below.
Lemma 1
(Corollary 2 from [33]). Each process $X = (X(t))_{t \in \mathbb{N}_0}$ with associated stationary increment process $(X(t) - X(t-1))_{t \in \mathbb{N}}$ is ordinal-$d$-stationary for each $d \in \mathbb{N}$.
Probability distributions of ordinal patterns are known only for some special cases of stochastic processes [13,33,35]. In general, one estimates the probabilities of ordinal patterns by their empirical probabilities. Consider a sequence $\pi_{d,L}$ of ordinal patterns. For any $t \in \{d+1, d+2, \ldots, L\}$, the frequency of occurrence of an ordinal pattern $i \in S_d$ among the first $(t - d)$ ordinal patterns of the sequence is given by
$$n_i(t) = \#\{l \in \{d, d+1, \ldots, t-1\} \mid \pi(l) = i\}. \quad (2)$$
Note that, in Equation (2), we do not count $\pi(l)$ with $l = t$ in order to be consistent with the conditional entropy introduced below, which considers pairs of successive ordinal patterns. A natural estimator of the probability of an ordinal pattern $i$ in the ordinal-$d$-stationary case is provided by its relative frequency in the sequence $\pi_{d,L}$:
$$\hat{p}_i = \frac{n_i(L)}{L - d}.$$
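This estimator is a one-liner in code; the following Python sketch (naming is ours) computes the relative frequencies from a pattern sequence, not counting the last pattern, consistently with Equation (2):

```python
from collections import Counter

def pattern_frequencies(pats):
    """Relative frequencies n_i(L) / (L - d) for a sequence
    (pi(d), ..., pi(L)) of ordinal patterns; as in Equation (2),
    the last pattern of the sequence is not counted."""
    total = len(pats) - 1
    return {i: c / total for i, c in Counter(pats[:-1]).items()}

freqs = pattern_frequencies([0, 0, 1, 0, 0, 1, 0])  # {0: 2/3, 1: 1/3}
```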

2.1.2. Stochastic Processes with Ordinal Change-Points

Sequences of ordinal patterns are invariant to certain changes in the original stochastic process $X$, such as shifts (adding a constant to the process) ([15], Section 3.4.3) and scaling (multiplying the process by a positive constant) [14]. However, in many cases, changes in the original process $X$ also affect the corresponding random sequences of ordinal patterns and the ordinal pattern distributions. On the one hand, this impedes the application of ordinal pattern analysis to non-stationary time series: most ordinal-patterns-based quantities require ordinal-$d$-stationarity of a time series [12,15,16] and may be unreliable when this condition fails. On the other hand, one can often detect change-points in the original process by detecting changes in the sequence of ordinal patterns.
Below, we consider piecewise stationary stochastic processes, that is, processes consisting of several stationary segments glued together. The time points where the segments are glued correspond to abrupt changes in the properties of the process and are called change-points. The first ideas of using ordinal patterns for detecting change-points were formulated in [23,24,34,36,37,38]. The advantage of ordinal-patterns-based methods is that they require less information than most existing methods for change-point detection: it is neither assumed that the stochastic process belongs to a specific family nor that the change affects specific characteristics of the process. Instead, in what follows, we consider change-points with the following property.
Definition 4.
Let $(X(t))_{t=0}^{L}$ with $L \in \mathbb{N} \cup \{\infty\}$ be a piecewise stationary stochastic process with a change-point $t^* \in \mathbb{N}$. We say that $t^*$ is an ordinal change-point if there exist some $m, n \in \mathbb{N}$ with $m < t^* < n \le L$ and some $d \in \mathbb{N}$ such that $(X(t))_{t=m}^{t^*}$ and $(X(t))_{t=t^*+1}^{n}$ are ordinal-$d$-stationary but $(X(t))_{t=m}^{n}$ is not. A stochastic process of length less than $d+1$ is ordinal-$d$-stationary by definition.
This approach seems natural for many stochastic processes and real-world time series. Note that a change-point where a change in mean occurs need not be ordinal, since the mean is irrelevant for the distribution of ordinal patterns ([15], Section 3.4.3). However, there are many methods that effectively detect changes in mean; the method proposed here is intended for use in more complex cases, when there is no classical method or it is not clear which of them to apply.
We illustrate Definition 4 by two examples. Piecewise stationary autoregressive processes, considered in Example 3, are classical and provide models for linear time series. Since many real-world time series are nonlinear, we introduce in Example 4 a process originating from nonlinear dynamical systems. These two types of processes are used throughout the paper for the empirical investigation of change-point detection methods.
Example 3.
A first-order piecewise stationary autoregressive (AR) process with change-points $t_1^*, t_2^*, \ldots, t_{N_{\mathrm{st}}-1}^*$ is defined as
$$\mathrm{AR}\big((\phi_1, \phi_2, \ldots, \phi_{N_{\mathrm{st}}}), (t_1^*, t_2^*, \ldots, t_{N_{\mathrm{st}}-1}^*)\big) = \big(\mathrm{AR}(t)\big)_{t=0}^{L},$$
where $\phi_1, \phi_2, \ldots, \phi_{N_{\mathrm{st}}} \in [0, 1)$ are the parameters of the autoregressive model and
$$\mathrm{AR}(t) = \phi_k\,\mathrm{AR}(t-1) + \epsilon(t)$$
for all $t \in \{t_{k-1}^* + 1, t_{k-1}^* + 2, \ldots, t_k^*\}$ and $k = 1, 2, \ldots, N_{\mathrm{st}}$, where $t_0^* := 0$ and $t_{N_{\mathrm{st}}}^* := L$, with $\epsilon$ being standard white Gaussian noise and $\mathrm{AR}(0) := \epsilon(0)$. AR processes are often used for the investigation of change-point detection methods (see, for instance, [23,24]), since they provide models for a wide range of real-world time series. Figure 4a illustrates a realization of a ‘two-piece’ AR process with a change-point at $L/2$. By ([13], Proposition 5.3), the distributions of ordinal patterns of order $d \ge 2$ reflect change-points for piecewise stationary AR processes. Figure 4c illustrates this for the realization from Figure 4a: the empirical probability distributions of ordinal patterns of order $d = 2$ before and after the change-point $L/2$ differ considerably.
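A realization of such a piecewise stationary AR process can be generated, for instance, as follows (a Python/NumPy sketch with our own naming; the parameter values are illustrative, not taken from the paper's experiments):

```python
import numpy as np

def piecewise_ar(phis, change_points, L, seed=0):
    """Realization of a piecewise stationary first-order AR process:
    segment k (between consecutive change-points) uses parameter phis[k];
    AR(0) := eps(0)."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(L + 1)          # standard white Gaussian noise
    bounds = [0] + list(change_points) + [L]  # t_0* = 0, t_Nst* = L
    x = np.empty(L + 1)
    x[0] = eps[0]
    for k, phi in enumerate(phis):
        for t in range(bounds[k] + 1, bounds[k + 1] + 1):
            x[t] = phi * x[t - 1] + eps[t]
    return x

# 'Two-piece' AR process in the spirit of Figure 4a: change-point at L/2
x = piecewise_ar([0.2, 0.9], change_points=[500], L=1000)
```

The variance of the second segment is visibly larger, which makes the change-point apparent in a plot of the realization.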
Example 4.
A classical example of a nonlinear system is provided by the logistic map on the unit interval:
$$x(t) = r\,x(t-1)\big(1 - x(t-1)\big), \quad (3)$$
with $t \in \mathbb{N}$, for $x(0) \in [0,1]$ and $r \in [1,4]$. The behaviour of this map varies significantly for different values of $r$; we are especially interested in $r \in [3.57, 4]$ with chaotic behaviour. In this case, there exists an invariant ergodic measure absolutely continuous with respect to the Lebesgue measure [39,40]; therefore, Equation (3) defines a stationary stochastic process $\mathrm{NL}_0$:
$$\mathrm{NL}_0(t) = r\,\mathrm{NL}_0(t-1)\big(1 - \mathrm{NL}_0(t-1)\big),$$
with $\mathrm{NL}_0(0) \in [0,1]$ being a uniformly distributed random variable. Note that, for almost all $r \in [3.57, 4]$, the map is either chaotic or hyperbolic, the latter roughly meaning that an attractive periodic orbit dominates the dynamics; this is a deep result in one-dimensional dynamics (see [40] for details). In the hyperbolic case, after some transient behaviour, one numerically sees only a periodic orbit, which for $r \in [3.57, 4]$ has a long period. From the practical viewpoint, i.e., when considering short orbits, the dynamics in this interval can thus be considered chaotic, since already small changes of $r$ result in behaviour that is chaotic also in the theoretical sense.
Let us include some observational noise by adding standard white Gaussian noise $\epsilon$ to an orbit:
$$\mathrm{NL}(t) = \mathrm{NL}_0(t) + \sigma\,\epsilon(t),$$
where $\sigma > 0$ is the level of noise.
Orbits of logistic maps, particularly with observational noise, are often used for studying and illustrating methods of nonlinear time series analysis (see [41,42]). This justifies, as a natural object of study, a piecewise stationary noisy logistic (NL) process with change-points $t_1^*, t_2^*, \ldots, t_{N_{\mathrm{st}}-1}^*$, defined as
$$\mathrm{NL}\big((r_1, \ldots, r_{N_{\mathrm{st}}}), (\sigma_1, \ldots, \sigma_{N_{\mathrm{st}}}), (t_1^*, t_2^*, \ldots, t_{N_{\mathrm{st}}-1}^*)\big) = \big(\mathrm{NL}(t)\big)_{t=0}^{L},$$
where $r_1, \ldots, r_{N_{\mathrm{st}}} \in [3.57, 4]$ are the values of the control parameter, $\sigma_1, \ldots, \sigma_{N_{\mathrm{st}}} > 0$ are the levels of noise, and
$$\mathrm{NL}(t) = \mathrm{NL}_0(t) + \sigma_k\,\epsilon(t),$$
with
$$\mathrm{NL}_0(t) = r_k\,\mathrm{NL}_0(t-1)\big(1 - \mathrm{NL}_0(t-1)\big)$$
for all $t \in \{t_{k-1}^* + 1, t_{k-1}^* + 2, \ldots, t_k^*\}$ and $k = 1, 2, \ldots, N_{\mathrm{st}}$, with $t_0^* := 0$, $t_{N_{\mathrm{st}}}^* := L$, and $\mathrm{NL}_0(0) \in [0,1]$ a uniformly distributed random variable.
Figure 4b shows a realization of a ‘two-piece’ NL process with a change-point at $L/2$; as one can see in Figure 4d, the empirical distributions of ordinal patterns of order $d = 2$ before and after the change-point do not coincide. In general, the distributions of ordinal patterns of order $d \ge 1$ reflect change-points for NL processes (which can be easily checked).
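For completeness, here is a Python/NumPy sketch (our own naming, illustrative parameter values) generating a realization of a piecewise stationary NL process:

```python
import numpy as np

def piecewise_nl(rs, sigmas, change_points, L, seed=0):
    """Realization of a piecewise stationary noisy logistic (NL) process:
    segment k evolves with control parameter rs[k] and carries
    observational noise of level sigmas[k]."""
    rng = np.random.default_rng(seed)
    bounds = [0] + list(change_points) + [L]   # t_0* = 0, t_Nst* = L
    clean = np.empty(L + 1)
    sigma = np.empty(L + 1)
    clean[0] = rng.uniform()                   # NL0(0) uniform on [0, 1]
    sigma[0] = sigmas[0]
    for k, r in enumerate(rs):
        for t in range(bounds[k] + 1, bounds[k + 1] + 1):
            clean[t] = r * clean[t - 1] * (1.0 - clean[t - 1])
            sigma[t] = sigmas[k]
    return clean + sigma * rng.standard_normal(L + 1)

# 'Two-piece' NL process in the spirit of Figure 4b: change-point at L/2
x = piecewise_nl([3.7, 4.0], sigmas=[0.05, 0.05], change_points=[500], L=1000)
```

Note that the noiseless orbit stays in $[0,1]$ for $r \le 4$, so the observational noise is the only source of excursions outside the unit interval.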
The NL and AR processes have rather different ordinal pattern distributions, which is the reason for using both in the empirical investigation of change-point detection methods in Section 3.

2.1.3. Conditional Entropy of Ordinal Patterns

Here, we define the conditional entropy of ordinal patterns, which is the cornerstone of the suggested method for ordinal-change-point detection. Let us call a process $X = (X(t))_{t=0}^{L}$ for $L \in \mathbb{N} \cup \{\infty\}$ ordinal-$d^+$-stationary if for all $i, j \in S_d$ the probability of the pair of ordinal patterns
$$p_{i,j} = P\big(\Pi(t) = i,\ \Pi(t+1) = j\big)$$
does not depend on $t$ for $d \le t \le L - 1$ (compare with Definition 3). Obviously, ordinal-$(d+1)$-stationarity implies ordinal-$d^+$-stationarity.
For an ordinal-$d^+$-stationary stochastic process, consider the probability that an ordinal pattern $j \in S_d$ occurs after an ordinal pattern $i \in S_d$. Similarly to Equation (1), it is given by
$$p_{j|i} = P\big(\Pi(t+1) = j \mid \Pi(t) = i\big) = \frac{p_{i,j}}{p_i}$$
for $p_i \ne 0$. If $p_i = 0$, let $p_{j|i} := 0$.
Definition 5.
The conditional entropy of ordinal patterns of order $d \in \mathbb{N}$ of an ordinal-$d^+$-stationary stochastic process $X$ is defined by
$$\mathrm{CE}(X, d) = -\sum_{i \in S_d}\sum_{j \in S_d} p_i\,p_{j|i} \ln\big(p_i\,p_{j|i}\big) + \sum_{i \in S_d} p_i \ln p_i = -\sum_{i \in S_d}\sum_{j \in S_d} p_i\,p_{j|i} \ln p_{j|i}.$$
For brevity, we refer to $\mathrm{CE}(X, d)$ as the “conditional entropy” when no confusion can arise. The conditional entropy characterizes the mean diversity of successors $j \in S_d$ of a given ordinal pattern $i \in S_d$. This quantity often provides a good practical estimate of the Kolmogorov–Sinai entropy of dynamical systems; for a discussion of this and other theoretical properties of the conditional entropy, we refer to [21]. Here, we only note that the Kolmogorov–Sinai entropy quantifies the unpredictability of a dynamical system.
One can estimate the conditional entropy from a time series by using the empirical conditional entropy of ordinal patterns [18]. Consider a sequence $\pi_{d,L}$ of ordinal patterns of order $d \in \mathbb{N}$ with length $L \in \mathbb{N}$. Similarly to Equation (2), the frequency of occurrence of a pair $i, j \in S_d$ of successive ordinal patterns is given by
$$n_{i,j}(t) = \#\{l \in \{d, d+1, \ldots, t-1\} \mid \pi(l) = i,\ \pi(l+1) = j\} \quad (5)$$
for $t \in \{d+1, d+2, \ldots, L\}$. The empirical conditional entropy of ordinal patterns for $\pi_{d,L}$ is defined by
$$\mathrm{eCE}(\pi_{d,L}) = -\frac{1}{L-d} \sum_{i \in S_d}\sum_{j \in S_d} n_{i,j}(L) \ln n_{i,j}(L) + \frac{1}{L-d} \sum_{i \in S_d} n_i(L) \ln n_i(L) = -\frac{1}{L-d} \sum_{i \in S_d}\sum_{j \in S_d} n_{i,j}(L) \ln\frac{n_{i,j}(L)}{n_i(L)}. \quad (6)$$
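Equation (6) can be computed directly from the pair and single counts; a Python sketch (our own naming, with patterns represented as arbitrary hashable labels):

```python
import math
from collections import Counter

def empirical_conditional_entropy(pats):
    """eCE of a sequence (pi(d), ..., pi(L)) of ordinal patterns,
    computed from the pair counts n_{i,j}(L) and the counts n_i(L)
    as in Equation (6)."""
    if len(pats) < 2:
        return 0.0
    n = len(pats) - 1                    # L - d pattern transitions
    n_ij = Counter(zip(pats, pats[1:]))  # pair frequencies n_{i,j}(L)
    n_i = Counter(pats[:-1])             # single frequencies n_i(L)
    return -sum(c / n * math.log(c / n_i[i]) for (i, j), c in n_ij.items())

# A perfectly predictable sequence has eCE = 0; diversity of
# successors increases the value.
assert empirical_conditional_entropy([(0, 1), (1, 0)] * 10) == 0.0
```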
As a direct consequence of Lemma 1, the empirical conditional entropy approaches the conditional entropy under certain assumptions. Namely, the following holds.
Corollary 1.
For the sequence $\pi_{d,\infty}$ of ordinal patterns of order $d \in \mathbb{N}$ of a realization of an ergodic stochastic process $X = (X(t))_{t \in \mathbb{N}_0}$ with associated stationary increment process $(X(t) - X(t-1))_{t \in \mathbb{N}}$, it holds almost surely that
$$\lim_{L \to \infty} \mathrm{eCE}\big((\pi(k))_{k=d}^{L}\big) = \mathrm{CE}(X, d).$$

2.2. A Statistic for Change-Point Detection Based on the Conditional Entropy of Ordinal Patterns

We now consider the classical problem of detecting a change-point $t^*$ on the basis of a realization $x$ of a stochastic process $X$ having at most one change-point, that is, either $N_{\mathrm{st}} = 1$ or $N_{\mathrm{st}} = 2$ holds (compare [6]). To solve this problem, one estimates a tentative change-point $\hat{t}^*$ as the time-point that maximizes a test statistic $S(t; x)$. Then, the value of $S(\hat{t}^*; x)$ is compared to a given threshold in order to decide whether $\hat{t}^*$ is an actual change-point.
The idea of ordinal change-point detection is to find change-points in a stochastic process $X$ by detecting changes in the sequence $\pi_{d,L}$ of ordinal patterns for a realization of $X$. Given at most one ordinal change-point $t^*$ in $X$, one estimates its position $\hat{t}^*$ by using the fact that:
  • $\pi(d), \pi(d+1), \ldots, \pi(t^*)$ characterize the process before the change;
  • $\pi(t^*+1), \pi(t^*+2), \ldots, \pi(t^*+d-1)$ correspond to the transitional state;
  • $\pi(t^*+d), \pi(t^*+d+1), \ldots, \pi(L)$ characterize the process after the change.
Therefore, the position of a change-point can be estimated by an ordinal-patterns-based statistic $S(t; \pi_{d,L})$ that, roughly speaking, measures the dissimilarity between the distributions of ordinal patterns for $(\pi(k))_{k=d}^{t}$ and for $(\pi(k))_{k=t+d}^{L}$.
Then, an estimate of the change-point $t^*$ is given by
$$\hat{t}^* = \underset{t = d, d+1, \ldots, L}{\arg\max}\; S(t; \pi_{d,L}).$$
A method for detecting one change-point can be extended to an arbitrary number of change-points using binary segmentation [43]: one applies the single change-point detection procedure to the realization $x$; if a change-point is detected, it splits $x$ into two segments, in each of which one looks for a change-point. This procedure is repeated iteratively for the obtained segments until all of them either contain no change-points or are too short.
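The binary segmentation scheme is generic in the underlying single change-point detector; a Python sketch (our own naming; the detector below is a deliberately trivial placeholder, not the CEofOP-based one):

```python
def binary_segmentation(detect_single, x, min_len):
    """Recursively apply a single change-point detector to segments of x.
    detect_single(segment) returns a change-point index within the segment
    or None; segments shorter than 2 * min_len are not examined further."""
    found = []
    stack = [(0, len(x))]
    while stack:
        lo, hi = stack.pop()
        if hi - lo < 2 * min_len:
            continue
        cp = detect_single(x[lo:hi])
        if cp is not None:
            found.append(lo + cp)
            stack.append((lo, lo + cp))   # search left of the change-point
            stack.append((lo + cp, hi))   # search right of the change-point
    return sorted(found)

# Toy detector for illustration only: first position where the value changes
def first_value_change(segment):
    for i in range(1, len(segment)):
        if segment[i] != segment[i - 1]:
            return i
    return None

cps = binary_segmentation(first_value_change,
                          [0] * 10 + [1] * 10 + [2] * 10, min_len=2)
```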
The key problem is the selection of an appropriate test statistic $S(t; \pi_{d,L})$ for detecting changes on the basis of a sequence $\pi_{d,L}$ of ordinal patterns of a realization of the process for $d, L \in \mathbb{N}$. We suggest using the following statistic:
$$\mathrm{CEofOP}\big(t; \pi_{d,L}\big) = (L - 2d)\,\mathrm{eCE}\big((\pi(k))_{k=d}^{L}\big) - (t - d)\,\mathrm{eCE}\big((\pi(k))_{k=d}^{t}\big) - \big(L - (t + d)\big)\,\mathrm{eCE}\big((\pi(k))_{k=t+d}^{L}\big) \quad (8)$$
for all $t \in \mathbb{N}$ with $d < t < L - d$. The intuition behind this statistic comes from the concavity of conditional entropy (not only for ordinal patterns but in general; see Section 2.1.3 in [44]). It holds that
$$\mathrm{eCE}\big((\pi(k))_{k=d}^{L}\big) \;\ge\; \frac{t-d}{L-2d}\,\mathrm{eCE}\big((\pi(k))_{k=d}^{t}\big) + \frac{L-(t+d)}{L-2d}\,\mathrm{eCE}\big((\pi(k))_{k=t+d}^{L}\big). \quad (9)$$
Therefore, if the probabilities of ordinal patterns change at some point $t^*$ but are constant before and after $t^*$, then $\mathrm{CEofOP}(t; \pi_{d,L})$ tends to attain its maximum at $t = t^*$. If the probabilities do not change at all, then, for sufficiently large $L$, Inequality (9) tends to hold with equality. More rigorously, when the segments of a stochastic process before and after the change-point have infinite length, the following result holds.
Corollary 2.
Let $X = (X_t)_{t \in \mathbb{N}_0}$ be an ergodic ordinal-$d^+$-stationary stochastic process on a probability space $(\Omega, \mathcal{A}, P)$. For $L \in \mathbb{N}$, let $\Pi_{d,L}$ be the random sequence of ordinal patterns of order $d$ of $(X_0, X_1, \ldots, X_L)$. Then, for any $\theta \in (0,1)$, it holds that
$$\lim_{L \to \infty} \mathrm{CEofOP}\big(\theta L; \Pi_{d,L}\big) = 0$$
$P$-almost surely.
Corollary 2 is a simple consequence of Theorem A1 (Appendix A.1). Another important property of the CEofOP statistic is its close connection with the classical likelihood ratio statistic (see Appendix A.2 for details).
Let us now rewrite Equation (8) in a more explicit form. Let $n_i(t)$ and $n_{i,j}(t)$ be the frequencies of occurrence of an ordinal pattern $i \in S_d$ and of a pair $i, j \in S_d$ of ordinal patterns (given by Equations (2) and (5), respectively). Setting $m_i(t) = n_i(L) - n_i(t+d)$ and $m_{i,j}(t) = n_{i,j}(L) - n_{i,j}(t+d)$, we obtain, using Equation (6),
$$\mathrm{CEofOP}\big(t; \pi_{d,L}\big) = -\frac{L-2d}{L-d} \sum_{i \in S_d}\sum_{j \in S_d} n_{i,j}(L) \ln\frac{n_{i,j}(L)}{n_i(L)} + \sum_{i \in S_d}\sum_{j \in S_d} n_{i,j}(t) \ln\frac{n_{i,j}(t)}{n_i(t)} + \sum_{i \in S_d}\sum_{j \in S_d} m_{i,j}(t) \ln\frac{m_{i,j}(t)}{m_i(t)}.$$
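Combining the eCE estimator with the CEofOP statistic of Equation (8), the argmax estimation of a single ordinal change-point can be sketched in Python (our own naming; patterns are assumed to be given as hashable labels, with pats[k] holding π(k + d), and the synthetic test sequence is an illustration):

```python
import math
from collections import Counter

def ece(pats):
    """Empirical conditional entropy of a pattern sequence, Equation (6)."""
    if len(pats) < 2:
        return 0.0
    n = len(pats) - 1
    n_ij = Counter(zip(pats, pats[1:]))
    n_i = Counter(pats[:-1])
    return -sum(c / n * math.log(c / n_i[i]) for (i, j), c in n_ij.items())

def ceofop(pats, t, d):
    """CEofOP(t; pi_{d,L}) as in Equation (8); pats[k] holds pi(k + d)."""
    L = len(pats) + d - 1
    return ((L - 2 * d) * ece(pats)
            - (t - d) * ece(pats[:t - d + 1])  # patterns pi(d), ..., pi(t)
            - (L - t - d) * ece(pats[t:]))     # patterns pi(t + d), ..., pi(L)

def estimate_change_point(pats, d):
    """Argmax of CEofOP over the admissible range t = T_min + d, ..., L - T_min."""
    L = len(pats) + d - 1
    t_min = math.factorial(d + 1) * (d + 1)
    candidates = range(t_min + d, L - t_min + 1)
    if not candidates:
        return None                            # sequence too short
    return max(candidates, key=lambda t: ceofop(pats, t, d))

# Synthetic d = 1 pattern sequence: periodic structure, then alternating
pats = [0, 0, 1] * 40 + [0, 1] * 60
t_hat = estimate_change_point(pats, d=1)       # close to the true change at t = 120
```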
This statistic was first introduced and applied to the segmentation of sleep EEG time series in [18].
To demonstrate the “nonlinear” nature of the CEofOP statistic, we provide Example 5, concerning the transition from a time series to its surrogate. Although tailor-made in a sense, this example shows that CEofOP discerns changes that cannot be detected by conventional “linear” methods.
Remark 1.
The question of whether a time series is linear or nonlinear often arises in data analysis. For instance, linearity should be verified before using such powerful methods as Fourier analysis. For this, one usually employs a procedure known as surrogate data testing [45,46,47]. It utilises the fact that a linear time series is statistically indistinguishable from any time series sharing certain of its properties (for instance, second moments and amplitude spectrum). Therefore, one can generate surrogates that have these properties of the original time series without preserving other properties, irrelevant for a linear system. If such surrogates are significantly different from the original series, then nonlinearity is assumed.
Example 5.
Consider a time series obtained by gluing a realisation of a noisy logistic process $\mathrm{NL}(r, \sigma)$ of length $L/2$ (without changes) to its surrogate of the same length (to generate surrogates, we use the iterative amplitude adjusted Fourier transform (AAFT) algorithm suggested in [46] and implemented in [48]). This compound time series has a change-point at $t^* = L/2$, which conventional methods may fail to detect since the surrogate has the same autocorrelation function as the original process (this is, for instance, the case for the Brodsky–Darkhovsky method considered further in Section 3). However, the ordinal pattern distributions for the original time series and its surrogate are, in general, significantly different. Therefore, the CEofOP statistic detects the change-point, as illustrated by Figure 5.
Remark 2.
Although the idea that the ordinal structure is a relevant indicator of time series linearity/nonlinearity is not new [12,15], to our knowledge it has not been rigorously proved that the distribution of ordinal patterns is altered by surrogates. This is clearly beyond the scope of this paper and will be discussed elsewhere as a separate study; here, it is sufficient for us to provide empirical evidence for it.

2.3. Algorithm for Change-Point Detection via the CEofOP Statistic

Consider a sequence $\pi_{d,L}$ of ordinal patterns of order $d \in \mathbb{N}$ with length $L \in \mathbb{N}$, corresponding to a realization of some piecewise stationary stochastic process. To detect a single change-point via the CEofOP statistic, we first estimate its possible position by
$$\hat{t}^* = \underset{t = T_{\min}+d, \ldots, L - T_{\min}}{\arg\max}\; S(t; \pi_{d,L}),$$
where $T_{\min}$ is the minimal length of a sequence of ordinal patterns that is sufficient for a reliable estimation of the empirical conditional entropy.
Remark 3.
From the explicit representation of the CEofOP statistic, it follows that a reasonable computation of the statistic requires a reliable estimation of eCE before and after the assumed change-point. For this, the stationary parts of a process should be sufficiently long. We take $T_{\min} = (d+1)!\,(d+1)$, which is equal to the number of all possible pairs of ordinal patterns of order $d$ (see [18] for details). Consequently, the length $L$ of a time series should satisfy
$$L \ge 2 T_{\min} = 2 (d+1)!\,(d+1). \quad (12)$$
Note that this does not impose serious limitations on the suggested method, since condition (12) is not too restrictive for $d \le 3$. However, it implies using either $d = 2$ or $d = 3$, since $d = 1$ does not provide effective change-point detection (see Example 3 and Appendix A.1), while $d > 3$ demands too large sample sizes in most applications.
In order to check whether t ^ * is an actual change-point, we test between the hypotheses:
H 0
parts π ( d ) , π ( d + 1 ) , , π ( t ^ * ) and π ( t ^ * + d ) , , π ( L ) of the sequence π d , L come from the same distribution;
H A
parts π ( d ) , π ( d + 1 ) , , π ( t ^ * ) and π ( t ^ * + d ) , , π ( L ) of the sequence π d , L come from different distributions.
This test is performed by comparing CEofOP( t̂*; π^{d,L} ) to a threshold h: if the value of CEofOP is above the threshold, one rejects H_0 in favour of H_A. The choice of the threshold involves a trade-off: the lower h, the higher the probability of falsely rejecting H_0 in favour of H_A (a false alarm, meaning that the test indicates a change of the distribution although there is no actual change); on the contrary, the higher h, the higher the probability of falsely rejecting H_A.
As is usually done, we consider the threshold h as a function of the desired probability α of a false alarm. To compute h(α), we shuffle blocks of ordinal patterns from the original sequence in order to create new artificial sequences. Each such sequence has the same length as the original one, but the segments to the left and to the right of the assumed change-point should have roughly the same distribution of ordinal patterns, even if the original sequence is not stationary. This procedure uses the ideas described in [49,50] and is similar to block bootstrapping [51,52,53,54]. The scheme for detecting at most one change-point via the CEofOP statistic, including the computation of the threshold h(α), is provided in Algorithm 1.
Algorithm 1 Detecting at most one change-point
Input: sequence π = (π(k))_{k=t_start}^{t_end} of ordinal patterns of order d, nominal probability α of false alarm.
Output: estimate t̂* of a change-point if a change-point is detected, otherwise 0.
1: function DetectSingleCP(π, α)
2:     T_min ← (d+1)!(d+1);
3:     if t_end − t_start < 2T_min then
4:         return 0;    ▷ sequence is too short, no change-point can be detected
5:     end if
6:     t̂* ← argmax_{t = t_start+T_min, …, t_end−T_min} CEofOP(t; π);
7:     N_boot ← ⌈5/α⌉;    ▷ number of bootstrap samples for computing the threshold
8:     for l = 1, 2, …, N_boot do    ▷ computing the threshold by reshuffling
9:         ξ ← randomly shuffled blocks of length (d+1) from π;
10:        c_l ← max_{t = t_start+T_min, …, t_end−T_min} CEofOP(t; ξ);
11:    end for
12:    (c_l) ← Sort((c_l)_{l=1}^{N_boot});    ▷ sort the maximal values of CEofOP for the bootstrap samples in decreasing order
13:    h ← c_{⌈αN_boot⌉};
14:    if CEofOP(t̂*; π) < h then
15:        return 0;
16:    else
17:        return t̂*;
18:    end if
19: end function
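Algorithm 1 can be sketched compactly in Python. The CEofOP computation is abstracted here as a callable `stat(t, sequence)`, positions are plain indices into the pattern sequence, and the function name is ours; this is a sketch of the scheme, not the authors' implementation.

```python
import math
import random

def detect_single_cp(patterns, d, alpha, stat):
    """Sketch of Algorithm 1: `stat(t, seq)` is assumed to evaluate the
    CEofOP statistic at position t of the pattern sequence `seq`."""
    t_min = math.factorial(d + 1) * (d + 1)
    n = len(patterns)
    if n < 2 * t_min:
        return 0                      # sequence too short, nothing detectable
    candidates = range(t_min, n - t_min)
    t_hat = max(candidates, key=lambda t: stat(t, patterns))
    # Threshold h(alpha) by reshuffling blocks of length d + 1:
    n_boot = math.ceil(5 / alpha)
    blocks = [patterns[k:k + d + 1] for k in range(0, n, d + 1)]
    maxima = []
    for _ in range(n_boot):
        random.shuffle(blocks)
        xi = [p for block in blocks for p in block]
        maxima.append(max(stat(t, xi) for t in candidates))
    maxima.sort(reverse=True)
    h = maxima[math.ceil(alpha * n_boot) - 1]
    return t_hat if stat(t_hat, patterns) >= h else 0
```

Shuffling blocks rather than single patterns keeps short-range pattern dependencies intact, which is the point of the block-bootstrap construction described above.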
To detect multiple change-points, we use an algorithm that consists of two steps:
Step 1:
preliminary estimation of the boundaries of the stationary segments with a threshold h(2α) computed for a doubled nominal probability of false alarm (that is, with a higher risk of detecting false change-points);
Step 2:
verification of the boundaries and exclusion of false change-points: a change-point is searched for within the union of every two adjacent segments.
Details of these two steps are displayed in Algorithm 2. Step 1 is the usual binary segmentation procedure as suggested in [43]. Since this procedure detects change-points sequentially, they may be estimated incorrectly. To improve localization and eliminate false change-points, we introduce Step 2 following the idea suggested in [11].
Algorithm 2 Detecting multiple change-points
Input: sequence π = (π(k))_{k=d}^{L} of ordinal patterns of order d, nominal probability α of false alarm.
Output: estimates of the number N̂_st of stationary segments and of their boundaries (t̂_k*)_{k=0}^{N̂_st}.
1: function DetectAllCP(π, α)
2:     N̂_st ← 1; t̂_0* ← 0; t̂_1* ← L; k ← 0;    ▷ Step 1
3:     repeat
4:         t̂* ← DetectSingleCP( (π(l))_{l=t̂_k*+d}^{t̂_{k+1}*}, 2α );
5:         if t̂* > 0 then
6:             insert t̂* into the list of change-points after t̂_k* and renumber the change-points t̂_{k+1}*, …, t̂_{N̂_st}*;
7:             N̂_st ← N̂_st + 1;
8:         else
9:             k ← k + 1;
10:        end if
11:    until k ≥ N̂_st;
12:    k ← 0;    ▷ Step 2
13:    repeat
14:        t̂* ← DetectSingleCP( (π(l))_{l=t̂_k*+d}^{t̂_{k+2}*}, α );
15:        if t̂* > 0 then
16:            t̂_{k+1}* ← t̂*;
17:            k ← k + 1;
18:        else
19:            delete t̂_{k+1}* from the list of change-points and renumber the change-points t̂_{k+2}*, …, t̂_{N̂_st}*;
20:            N̂_st ← N̂_st − 1;
21:        end if
22:    until k ≥ N̂_st − 1;
23:    return N̂_st, (t̂_k*)_{k=0}^{N̂_st};
24: end function
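The two-step structure of Algorithm 2 can be sketched as follows. The single change-point detector is passed in as a callable, the d-pattern offsets at segment boundaries are ignored for simplicity, and the function name is ours.

```python
def detect_all_cp(patterns, alpha, detect_single):
    """Sketch of Algorithm 2: `detect_single(segment, alpha)` is assumed to
    return the change position within `segment`, or 0 if none is detected."""
    cps = [0, len(patterns)]          # boundary list t*_0, ..., t*_Nst
    k = 0
    # Step 1: binary segmentation with a doubled false-alarm probability.
    while k < len(cps) - 1:
        t = detect_single(patterns[cps[k]:cps[k + 1]], 2 * alpha)
        if t > 0:
            cps.insert(k + 1, cps[k] + t)
        else:
            k += 1
    # Step 2: re-test each change-point on the union of its two adjacent
    # segments; relocate it if confirmed, delete it otherwise.
    k = 0
    while k < len(cps) - 2:
        t = detect_single(patterns[cps[k]:cps[k + 2]], alpha)
        if t > 0:
            cps[k + 1] = cps[k] + t
            k += 1
        else:
            del cps[k + 1]
    return cps
```

Step 1 deliberately over-detects (threshold for 2α); Step 2 then re-localizes each candidate on a longer stretch of data and drops those that do not survive the stricter test.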

3. Numerical Simulations and Results

In this section, we empirically investigate the performance of the method for change-point detection via the CEofOP statistic. We apply it to noisy logistic processes and to autoregressive processes (see Section 2.1.2) and compare change-point detection by the suggested method with the following existing methods:
  • The ordinal-patterns-based method for detecting change-points via the CMMD statistic [23,24]: a time series is split into windows of equal length W ∈ ℕ, and empirical probabilities of ordinal patterns are estimated in every window. If there is an ordinal change-point in the time series, then the empirical probabilities of ordinal patterns are approximately constant before and after the change-point but change at the window containing it; the CMMD statistic was introduced to detect this change. (Note that the definition of the CMMD statistic in [23] contains a mistake, which is corrected in [24]; the results of the numerical experiments reported in [23] also do not comply with the actual definition of the CMMD statistic, see Sections 4.2.1.1 and 4.5.1.1 in [22] for details.) In the original papers [23,24], the authors do not estimate change-points themselves but only the corresponding window numbers; for an algorithm estimating change-points by means of the CMMD statistic, we refer to Section 4.5.1 in [22].
  • Two versions of the classical Brodsky–Darkhovsky method [11]: the Brodsky–Darkhovsky method can be used for detecting changes in various characteristics of a time series x = (x(t))_{t=1}^{L}, but the characteristic of interest has to be selected in advance. In this paper, we consider detecting changes in the mean, which is the basic characteristic, and in the correlation function corr( x(t), x(t+1) ), which reflects relations between the future and the past of a time series and seems to be a natural choice for detecting ordinal change-points. Changes in the mean are detected by the generalized version of the Kolmogorov–Smirnov statistic [11]:
    BD_exp( t; x, δ ) = ( t(L − t)/L² )^δ · | (1/t) Σ_{l=1}^{t} x(l) − (1/(L − t)) Σ_{l=t+1}^{L} x(l) |,
    where the parameter δ ∈ [0, 1] regulates the properties of the statistic; the basic choice δ = 0 is used here (see [11] for details). Changes in the correlation function are detected by the following statistic:
    BD_corr( t; x, δ ) = BD_exp( t; ( x(t)·x(t+1) )_{t=1}^{L−1}, δ ).
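Both statistics translate directly into code; the following is a minimal sketch (the function names are ours, and δ = 0 is taken as the default, as in the experiments below).

```python
import numpy as np

def bd_exp(t, x, delta=0.0):
    # Brodsky-Darkhovsky statistic for a change in mean at position t.
    L = len(x)
    weight = (t * (L - t) / L ** 2) ** delta
    return weight * abs(np.mean(x[:t]) - np.mean(x[t:]))

def bd_corr(t, x, delta=0.0):
    # The same statistic applied to the lag-1 products x(t)x(t+1),
    # targeting changes in the correlation function.
    products = np.asarray(x[:-1]) * np.asarray(x[1:])
    return bd_exp(t, products, delta)
```

For a pure step in the mean, bd_exp peaks exactly at the step position, which is how the change-point is located.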
Remark 4.
Note that we consider the statistic BD_exp, which is intended to detect changes in the mean, even though ordinal-patterns-based statistics do not detect such changes. This is motivated by the fact that changes in the noisy logistic processes are, on the one hand, changes in the mean and, on the other hand, ordinal changes in the sense of Definition 4. Therefore, they can be detected both by BD_exp and by ordinal-patterns-based statistics. In general, by the nature of ordinal time series analysis, changes in the mean and in the ordinal structure are in some sense complementary.
We use orders d = 2, 3, 4 of ordinal patterns for computing the CEofOP statistic (d = 1 provides worse results because of reduced sensitivity, while higher orders are applicable only to rather long time series due to condition (12)). For the CMMD statistic, we take d = 3 and the window size W = 256. There is no special reason for this choice except that W = 256 is sufficient for estimating the probabilities of ordinal patterns of order d = 3 inside the windows, since 256 > 120 = 5(d+1)! (see Section 9.3 in [15]). The results of the experiments remain almost the same for 200 ≤ W ≤ 1000.
A nominal probability of false alarm α = 0.05 has been taken for all methods (in the case of the CMMD statistic, we have used the equivalent value 0.001; see Section 4.3.2 in [22] for details).
In Section 3.1, we study how well the statistics for change-point detection estimate the position of a single change-point. Since we expect that performance of the statistics for change-point detection may strongly depend on the length of realization, we check this in Section 3.2. Finally, we investigate the performance of various statistics for detecting multiple change-points in Section 3.3.

3.1. Estimation of the Position of a Single Change-Point

Consider N = 10,000 realizations x_j = (x_j(t))_{t=0}^{L}, j = 1, …, N, for each of the processes listed in Table 1. A single change occurs at a random time t* uniformly distributed in { L/4 − W, L/4 − W + 1, …, L/4 + W }. For all processes, the length L = 80W of the sequences of ordinal patterns is taken, with W = 256.
To measure the overall accuracy of change-point detection via some statistic S as applied to the process X, we use three quantities. Let us first determine the error of the change-point estimation provided by the statistic S for the j-th realization of a process X:
err_j( S, X ) = t̂*( S; x_j ) − t*,
where t * is the actual position of the change-point and t ^ * ( S ; x j ) is its estimate obtained by using S. Then, the fraction of satisfactorily estimated change-points sE (averaged over N realizations) is defined by:
sE( S, X ) = #{ j ∈ {1, 2, …, N} : | err_j( S, X ) | ≤ MaxErr } / N,
where MaxErr is the maximal satisfactory error; we take MaxErr = W = 256. The bias and the root mean squared error (RMSE) are respectively given by
B( S, X ) = (1/N) Σ_{j=1}^{N} err_j( S, X ),    RMSE( S, X ) = ( (1/N) Σ_{j=1}^{N} err_j( S, X )² )^{1/2}.
A large sE together with a bias and an RMSE close to zero indicate a high accuracy of the estimation of a change-point. The results of the experiments are presented in Table 2 and Table 3 for the NL and AR processes, respectively. For every process, the best values of the performance measures are shown in bold.
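The three accuracy measures can be computed jointly from the estimation errors; a minimal sketch (the function name is ours):

```python
import numpy as np

def accuracy_measures(estimates, true_positions, max_err=256):
    # sE, bias B and RMSE of change-point estimates over N realizations.
    err = np.asarray(estimates, dtype=float) - np.asarray(true_positions, dtype=float)
    se = float(np.mean(np.abs(err) <= max_err))   # fraction of satisfactory estimates
    bias = float(err.mean())
    rmse = float(np.sqrt(np.mean(err ** 2)))
    return se, bias, rmse
```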
Let us summarize: for the considered processes, the CEofOP statistic estimates the change-point more accurately than the CMMD statistic. For the NL processes, the CEofOP statistic has almost the same performance as the Brodsky–Darkhovsky method; for the AR processes, the performance of the classical method is better, though CEofOP has a lower bias. In contrast to the ordinal-patterns-based methods, the Brodsky–Darkhovsky method is unreliable when a priori information about the time series is lacking. For instance, changes in the NL processes only slightly influence the correlation function, so BD_corr does not provide a good indication of changes (cf. the performance of BD_corr and CEofOP in Table 2). Note also that level shifts before and after a time point do not change BD_corr.
Meanwhile, changes in the AR processes do not influence the expected value (see Example 3), so they cannot be detected using BD_exp (see Table 3). Therefore, we do not consider the BD_exp statistic in further experiments.
Note that the performance of the CEofOP statistic is only slightly better for d = 3 than for d = 2, and for d = 4 it even decreases, although one could expect better change-point detection for higher d. As we show in the following section, this is due to the fact that the performance of the CEofOP statistic depends on the length L of the time series. In particular, L = 80 × 256 = 20,480 is not sufficient for applying the CEofOP statistic with d = 4.

3.2. Estimating Position of a Single Change-Point for Different Lengths of Time Series

Here, we study how the accuracy of change-point estimation for the three considered statistics depends on the length L of a time series. We take N = 50,000 realizations of NL(3.95→3.98, σ = 0.2) and AR(0.1→0.4) for realization lengths L = 24W, 28W, …, 120W. Again, we consider a single change at a random time t* ∈ { L/4 − W, L/4 − W + 1, …, L/4 + W }. The results of the experiment are presented in Figure 6.
In summary, the performance of the CEofOP statistic is generally better than that of the CMMD statistic but strongly depends on the length of the time series. This emphasizes the importance of condition (12). From the results of our experiments, we recommend choosing d such that L > 50 · 2T_min = 100(d+1)!(d+1). In comparison with the classical Brodsky–Darkhovsky method, CEofOP has a better performance for the NL processes (see Figure 6a,b) and a lower bias for the AR processes (see Figure 6d).

3.3. Detecting Multiple Change-Points

Here, we investigate how well the considered statistics detect multiple change-points. The methods for change-point detection via the CEofOP and the CMMD statistics are implemented according to Section 2.3 and to Section 4.5.1 in [22], respectively. We consider CEofOP only for d = 3 here, since it provided the best change-point detection in the previous experiments. The Brodsky–Darkhovsky method is implemented according to [11] with one exception: to compute a threshold for it, we use the shuffling procedure (Algorithm 1), which in our case provided better results than the technique described in [11].
We consider two processes, AR( (0.3, 0.5, 0.1, 0.4), (t_1*, t_2*, t_3*) ) and NL( (3.98, 4, 3.95, 3.8), (0.2, 0.2, 0.2, 0.3), (t_1*, t_2*, t_3*) ), with the change-points t_k* being independent and uniformly distributed in { t̄_k* − W, t̄_k* − W + 1, …, t̄_k* + W } for k = 1, 2, 3, with t̄_1* = 0.3L, t̄_2* = 0.7L, t̄_3* = 0.9L, and L = 100W. For both processes, we generate N = 10,000 realizations x_j, j = 1, …, N. We consider unequal lengths of the stationary segments in order to study the methods for change-point detection under more realistic conditions.
As we apply change-point detection via a statistic S to realization x j , we obtain estimates of the number N ^ st ( S ; x j ) of stationary segments and of change-points positions t ^ l * ( S ; x j ) for l = 1 , 2 , , N ^ st ( S ; x j ) 1 . Since the number of estimated change-points may be different from the actual number of changes, we suppose that the estimate for t k * is provided by the nearest t ^ l * ( S ; x j ) . Therefore, the error of estimation of the k-th change-point provided by S is given by
err_kj( S, X ) = min_{l = 1, 2, …, N̂_st( S; x_j ) − 1} | t̂_l*( S; x_j ) − t_k* |.
To assess the overall accuracy of change-point detection, we compute two quantities. The fraction sE k of satisfactory estimates of a change-point t k * , k = 1 , 2 , 3 is given by
sE_k( S, X ) = #{ j ∈ {1, 2, …, N} : err_kj( S, X ) ≤ MaxErr } / N,
where MaxErr is the maximal satisfactory error; we take MaxErr = W = 256. The average number of false change-points is defined by:
fCP( S, X ) = (1/N) Σ_{j=1}^{N} [ ( N̂_st( S; x_j ) − 1 ) − #{ k ∈ {1, 2, 3} : err_kj( S, X ) ≤ MaxErr } ].
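The multiple-change-point measures can be sketched as follows (the function name is ours); each realization contributes one list of estimated change-points, and every estimate not matching a true change-point within MaxErr counts as false.

```python
import numpy as np

def multi_cp_measures(estimated, true_cps, max_err=256):
    # sE_k for each true change-point and the average number fCP of
    # false change-points, over a list of estimate lists (one per realization).
    n = len(estimated)
    hits = np.zeros(len(true_cps))
    fcp = 0.0
    for est in estimated:
        est = np.asarray(est, dtype=float)
        errs = [np.min(np.abs(est - tk)) if len(est) else np.inf for tk in true_cps]
        good = [e <= max_err for e in errs]
        hits += np.asarray(good)
        fcp += len(est) - sum(good)      # (N_st - 1) minus the matched true ones
    return hits / n, fcp / n
```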
Results of the experiment are presented in Table 4 and Table 5, and the best values are shown in bold.
In summary, since the distributions of ordinal patterns for the NL and AR processes have different properties, the results for them differ significantly. The CEofOP statistic provides good results for the NL processes. However, for the AR processes, its performance is much worse: only the most prominent change is detected rather well. The weak results for the two other change-points are caused by the fact that the CEofOP statistic is rather sensitive to the lengths of the stationary segments (as we have already seen in Section 3.2), and here they are not very long.

4. Conclusions and Open Points

In this paper, we have introduced a method for change-point detection via the CEofOP statistic and have tested it for time series coming from two classes of models with quite different behavior, namely piecewise stationary noisy logistic and autoregressive processes.
The empirical investigations suggest that the proposed method provides better detection of ordinal change-points than the ordinal-patterns-based method introduced in [23,24]. For the two model classes considered, the performance of our method is largely comparable to that of the classical Brodsky–Darkhovsky method but, in contrast to the latter, ordinal-patterns-based methods require less a priori knowledge about the time series. This can be especially useful for nonlinear models, where the autocorrelation function does not describe the distributions completely. The point here is that, with the exception of the mean, much of the distribution is captured by its ordinal structure. Thus (together with methods finding changes in mean), the CEofOP statistic can be used at least for a first exploration step. It is remarkable that our method behaves well with respect to the bias of the estimation, possibly qualifying it for improving the localization of change-points found by other methods.
Although numerical experiments and tests on real-world data cannot replace rigorous theoretical studies, the results of the current study show the potential of change-point detection via the CEofOP statistic. However, some points remain open:
  • A method for computing the threshold h for the CEofOP statistic without shuffling the original time series would be of interest, since the shuffling procedure is rather time-consuming. One possible solution is to utilize Theorem A1 (Appendix A.1) and to precompute thresholds using the values of Δ^d_{γ,θ}(P, Q). However, this approach requires further investigation.
  • The binary segmentation procedure [43] is not the only possible method for detecting multiple change-points. In [8,55], an alternative approach is suggested: the number of stationary segments N̂_st is estimated by optimizing a contrast function, and then the positions of the change-points are adjusted. Likewise, one could consider a method for multiple change-point detection based on maximizing the following generalization of the CEofOP statistic:
    CEofOP( t̂_1*, …, t̂_{N̂_st−1}* ) = ( L − d·N̂_st ) eCE( (π(k))_{k=d}^{L} ) − Σ_{l=1}^{N̂_st} ( t̂_l* − t̂_{l−1}* − d ) eCE( π(t̂_{l−1}* + d), …, π(t̂_l*) ),
    where N̂_st ∈ ℕ is an estimate of the number of stationary segments, t̂_0* = 0, t̂_{N̂_st}* = L, and t̂_1*, t̂_2*, …, t̂_{N̂_st−1}* ∈ ℕ are estimates of the change-points. Further investigation in this direction could be of interest.
  • As we have seen in Section 3.2, the CEofOP statistic requires rather large sample sizes to provide reliable change-point detection. This is due to the necessity of estimating the empirical conditional entropy (see Section 2.3). In order to reduce the required sample size, one may consider more efficient estimators of the conditional entropy, for instance, the Grassberger estimate (see [56] and also Section 3.4.1 in [22]). However, elaboration of this idea is beyond the scope of this paper.
  • We did not use the full power of ordinal time series analysis, which often considers ordinal patterns taken from sequences of equidistant time points at some distance τ. This generalization of the case τ = 1 of successive points allows for addressing different time scales and thus extracting more information on the distribution of a time series [57], and it would also be useful for change-point detection.
  • In this paper, only one-dimensional time series are considered, though there is no principal limitation to applying ordinal-patterns-based methods to multivariate data (see [28]). A discussion of using ordinal-patterns-based methods for detecting change-points in multivariate data (for instance, in multichannel EEG) is therefore of interest.
  • We have considered here only the “offline” detection of changes, which is used when the acquisition of a time series is completed. Meanwhile, in many applications, it is necessary to detect change-points “online”, based on a small number of observations after the change [1]. The development of online versions of ordinal-patterns-based methods for change-point detection may be an interesting direction for future work.

Author Contributions

A.M.U. and K.K. conceived and designed the method, performed the numerical experiments and wrote the paper.

Funding

This work was supported by the Graduate School for Computing in Medicine and Life Sciences funded by Germany’s Excellence Initiative [DFG GSC 235/1].

Conflicts of Interest

The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analysis, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Appendix A. Theoretical Underpinnings of the CEofOP Statistic

Appendix A.1. Asymptotic Behavior of the CEofOP Statistic

Here, we consider the values of CEofOP for the case when segments of a stochastic process before and after the change-point have infinite length.
Let us first introduce some notation. Given a (d+1)-ordinal-stationary stochastic process X for d ∈ ℕ, the distribution of pairs of ordinal patterns is denoted by P = (p_{i,j})_{i,j∈S_d}, with p_{i,j} = P( Π(t) = i, Π(t+1) = j ) = p_{j|i} p_i for all i, j ∈ S_d. One easily sees that the conditional entropy of ordinal patterns can be represented as CE(X, d) = H(P), where
H(P) = − Σ_{i∈S_d} Σ_{j∈S_d} p_{i,j} ln p_{i,j} + Σ_{i∈S_d} ( Σ_{j∈S_d} p_{i,j} ) ln ( Σ_{j∈S_d} p_{i,j} ).
Here, recall that p i = j S d p i , j .
Theorem A1.
Let Y = (Y_t)_{t∈ℕ} and Z = (Z_t)_{t∈ℕ} be ergodic (d+1)-ordinal-stationary stochastic processes on a probability space (Ω, A, P) with the probabilities of pairs of ordinal patterns of order d ∈ ℕ given by P = (p_{i,j})_{i,j∈S_d} and Q = (q_{i,j})_{i,j∈S_d}, respectively. For L ∈ ℕ and γ ∈ (0, 1), let Π_{L,γ} be the random sequence of ordinal patterns of order d of
( Y_1, …, Y_{⌊γL⌋}, Z_{⌊γL⌋+1}, Z_{⌊γL⌋+2}, …, Z_L ).
Then, for all θ ∈ (0, 1), it holds that
lim_{L→∞} (1/L) CEofOP( ⌊θL⌋; Π_{L,γ} ) = Δ^d_{γ,θ}( P, Q )
P-almost surely, where
Δ^d_{γ,θ}( P, Q ) = H( γP + (1−γ)Q ) − θ H(P) − (1−θ) H( ((γ−θ)/(1−θ)) P + ((1−γ)/(1−θ)) Q )  for θ < γ, and
Δ^d_{γ,θ}( P, Q ) = H( γP + (1−γ)Q ) − θ H( (γ/θ) P + ((θ−γ)/θ) Q ) − (1−θ) H(Q)  for θ ≥ γ.
By definition, Equation (A1) defines a stochastic process with a potential ordinal change-point, where the position of the change-point relative to L is principally the same for all L, and the considered statistics stabilize as L increases. Equation (A1) can, in particular, be interpreted as a part of a stochastic process including exactly one ordinal change-point. We omit the proof of Theorem A1 since it is a simple computation.
Due to the properties of the conditional entropy, it holds that
max_{θ∈(0,1)} Δ^d_{γ,θ}( P, Q ) = Δ^d_{γ,γ}( P, Q ) = H( γP + (1−γ)Q ) − γ H(P) − (1−γ) H(Q).
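Theorem A1 can be evaluated numerically; a sketch, with H(P) implemented from the formula above (the function names are ours):

```python
import numpy as np

def cond_entropy_pairs(P):
    # H(P): conditional entropy computed from a pair distribution (p_ij).
    P = np.asarray(P, dtype=float)
    p_i = P.sum(axis=1)
    h_pair = -np.sum(P[P > 0] * np.log(P[P > 0]))
    h_marg = -np.sum(p_i[p_i > 0] * np.log(p_i[p_i > 0]))
    return h_pair - h_marg

def delta_value(gamma, theta, P, Q):
    # Asymptotic CEofOP value Delta^d_{gamma,theta}(P, Q) from Theorem A1.
    H = cond_entropy_pairs
    P, Q = np.asarray(P, dtype=float), np.asarray(Q, dtype=float)
    mix = H(gamma * P + (1 - gamma) * Q)
    if theta < gamma:
        rest = (gamma - theta) / (1 - theta) * P + (1 - gamma) / (1 - theta) * Q
        return mix - theta * H(P) - (1 - theta) * H(rest)
    head = gamma / theta * P + (theta - gamma) / theta * Q
    return mix - theta * H(head) - (1 - theta) * H(Q)
```

As stated above, Δ vanishes for P = Q and attains its maximum over θ at θ = γ.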
The values of Δ^d_{γ,θ}( P, Q ) can be computed for a piecewise stationary stochastic process with known probabilities of ordinal patterns before and after the change-point. To apply Theorem A1, the probabilities of pairs of ordinal patterns of order d are also needed, but they can be calculated from the probabilities of ordinal patterns of order (d+1). As one can verify, the probability p_{i,j} of any pair (i, j) of ordinal patterns i, j ∈ S_d is equal either to the probability of a certain ordinal pattern of order (d+1) or to the sum of two such probabilities.
In [13], the authors compute the probabilities of ordinal patterns of orders d = 2 (Proposition 5.3) and d = 3 (Theorem 5.5) for stationary Gaussian processes (in particular, for autoregressive processes). Below, we use these results to illustrate Theorem A1.
Consider an autoregressive process AR( (ϕ_1, ϕ_2), t* ) with a single change-point t* = L/2 for L ∈ ℕ. Using the results from [13], we compute the distributions P_{ϕ_1}, P_{ϕ_2} of pairs of ordinal patterns for orders d = 1, 2 and, on this basis, calculate the values of Δ^d_{0.5,0.5}( P_{ϕ_1}, P_{ϕ_2} ) for different values of ϕ_1 and ϕ_2. The results are presented in Table A1 and Table A2.
Table A1. Values of 100·Δ^1_{0.5,0.5}( P_{ϕ_1}, P_{ϕ_2} ) for an autoregressive process (the factor 100 is used only for the sake of readability).
| ϕ₂ \ ϕ₁ | 0.00 | 0.10 | 0.20 | 0.30 | 0.40 | 0.50 | 0.60 | 0.70 | 0.80 | 0.90 | 0.99 |
|---------|------|------|------|------|------|------|------|------|------|------|------|
| 0.00 | 0 | 0.02 | 0.07 | 0.15 | 0.26 | 0.40 | 0.56 | 0.74 | 0.95 | 1.18 | 1.44 |
| 0.10 | 0.02 | 0 | 0.02 | 0.06 | 0.14 | 0.25 | 0.37 | 0.53 | 0.71 | 0.91 | 1.13 |
| 0.20 | 0.07 | 0.02 | 0 | 0.02 | 0.06 | 0.13 | 0.23 | 0.36 | 0.51 | 0.68 | 0.88 |
| 0.30 | 0.15 | 0.06 | 0.02 | 0 | 0.01 | 0.06 | 0.13 | 0.22 | 0.34 | 0.49 | 0.66 |
| 0.40 | 0.26 | 0.14 | 0.06 | 0.01 | 0 | 0.01 | 0.06 | 0.12 | 0.22 | 0.33 | 0.48 |
| 0.50 | 0.40 | 0.25 | 0.13 | 0.06 | 0.01 | 0 | 0.01 | 0.05 | 0.12 | 0.21 | 0.33 |
| 0.60 | 0.56 | 0.37 | 0.23 | 0.13 | 0.06 | 0.01 | 0 | 0.01 | 0.05 | 0.12 | 0.21 |
| 0.70 | 0.74 | 0.53 | 0.36 | 0.22 | 0.12 | 0.05 | 0.01 | 0 | 0.01 | 0.05 | 0.12 |
| 0.80 | 0.95 | 0.71 | 0.51 | 0.34 | 0.22 | 0.12 | 0.05 | 0.01 | 0 | 0.01 | 0.05 |
| 0.90 | 1.18 | 0.91 | 0.68 | 0.49 | 0.33 | 0.21 | 0.12 | 0.05 | 0.01 | 0 | 0.01 |
| 0.99 | 1.44 | 1.13 | 0.88 | 0.66 | 0.48 | 0.33 | 0.21 | 0.12 | 0.05 | 0.01 | 0 |
Table A2. Values of 100·Δ^2_{0.5,0.5}( P_{ϕ_1}, P_{ϕ_2} ) for an autoregressive process.
| ϕ₂ \ ϕ₁ | 0.00 | 0.10 | 0.20 | 0.30 | 0.40 | 0.50 | 0.60 | 0.70 | 0.80 | 0.90 | 0.99 |
|---------|------|------|------|------|------|------|------|------|------|------|------|
| 0.00 | 0 | 0.04 | 0.15 | 0.33 | 0.56 | 0.85 | 1.18 | 1.55 | 1.95 | 2.40 | 2.88 |
| 0.10 | 0.04 | 0 | 0.04 | 0.14 | 0.31 | 0.53 | 0.80 | 1.12 | 1.48 | 1.89 | 2.34 |
| 0.20 | 0.15 | 0.04 | 0 | 0.03 | 0.13 | 0.29 | 0.51 | 0.77 | 1.08 | 1.44 | 1.85 |
| 0.30 | 0.33 | 0.14 | 0.03 | 0 | 0.03 | 0.13 | 0.28 | 0.49 | 0.75 | 1.06 | 1.43 |
| 0.40 | 0.56 | 0.31 | 0.13 | 0.03 | 0 | 0.03 | 0.12 | 0.27 | 0.48 | 0.74 | 1.06 |
| 0.50 | 0.85 | 0.53 | 0.29 | 0.13 | 0.03 | 0 | 0.03 | 0.12 | 0.27 | 0.48 | 0.74 |
| 0.60 | 1.18 | 0.80 | 0.51 | 0.28 | 0.12 | 0.03 | 0 | 0.03 | 0.12 | 0.27 | 0.48 |
| 0.70 | 1.55 | 1.12 | 0.77 | 0.49 | 0.27 | 0.12 | 0.03 | 0 | 0.03 | 0.12 | 0.28 |
| 0.80 | 1.95 | 1.48 | 1.08 | 0.75 | 0.48 | 0.27 | 0.12 | 0.03 | 0 | 0.03 | 0.13 |
| 0.90 | 2.40 | 1.89 | 1.44 | 1.06 | 0.74 | 0.48 | 0.27 | 0.12 | 0.03 | 0 | 0.03 |
| 0.99 | 2.88 | 2.34 | 1.85 | 1.43 | 1.06 | 0.74 | 0.48 | 0.28 | 0.13 | 0.03 | 0 |
According to Theorem A1, for a sequence π^{d,L} = (π(k))_{k=d}^{L} of ordinal patterns of order d for a realization of AR( (ϕ_1, ϕ_2), L/2 ), it holds almost surely that
(1/L) max_{θ∈(0,1)} CEofOP( ⌊θL⌋; π^{d,L} ) → Δ^d_{0.5,0.5}( P_{ϕ_1}, P_{ϕ_2} ) as L → ∞.
Figure A1 shows how fast this convergence is. Note that the CEofOP statistic for orders d = 1, 2 allows for distinguishing between change and no change in the considered processes for L ≥ 20 × 10³. For L = 10⁵, the values of the CEofOP statistic for order d = 2 are already very close to the theoretical values, whereas, for d = 1, this length does not seem sufficient.
Figure A1. Empirical values of the CEofOP statistic (1/L) max_{θ∈(0,1)} CEofOP( ⌊θL⌋; π^{d,L} ) converge to the theoretical values Δ^d_{0.5,0.5} for autoregressive processes as L increases, for d = 1 (a) and d = 2 (b). Here, AR(ϕ_1→ϕ_2) stands for the process AR( (ϕ_1, ϕ_2), L/2 ). The empirical values are obtained either as the 5th percentile (for ϕ_1 ≠ ϕ_2) or as the 95th percentile (for ϕ_1 = ϕ_2) from 1000 trials.

Appendix A.2. CEofOP Statistic for a Sequence of Ordinal Patterns Forming a Markov Chain

In this subsection, we show that there is a connection between the CEofOP statistic and the classical likelihood ratio statistic. Though it holds only in a particular case, this connection reveals the nature of the CEofOP statistic.
First, we set up the necessary notation. Consider a sequence π^{d,L} of ordinal patterns for which the transition probabilities of ordinal patterns may change at some t ∈ {d, d+1, …, L}. The basic statistic for testing whether there is a change in the transition probabilities is the likelihood ratio statistic ([1], Section 2.2.3):
LR( t; π^{d,L} ) = −2 ln Lkl_{H_0}( π^{d,L} ) + 2 ln Lkl_{H_A}( π^{d,L} ),
where Lkl H π d , L is the likelihood of the hypothesis H given a sequence π d , L of ordinal patterns, and the hypotheses are given by
H_0: ( p_{j|i}(t) )_{i,j∈S_d} = ( q_{j|i}(t) )_{i,j∈S_d},    H_A: ( p_{j|i}(t) )_{i,j∈S_d} ≠ ( q_{j|i}(t) )_{i,j∈S_d},
where p j | i ( t ) , q j | i ( t ) are transition probabilities of ordinal patterns before and after t, respectively.
Proposition A1.
Let a random sequence Π_d of ordinal patterns of order d ∈ ℕ form a Markov chain with at most one change-point. Then, for a sequence π^{d,L} = (π(k))_{k=d}^{L} of ordinal patterns being a realization of Π_d of length L ∈ ℕ, it holds that
LR( t; π^{d,L} ) = 2 CEofOP( t; π^{d,L} ) + 2d · eCE( π^{d,L} ).
Proof. 
First, we estimate the probabilities and the transition probabilities before (p) and after (q) the change ([58], Section 2):
p̂_i(t) = n_i(t) / (t − d),  p̂_{j|i}(t) = n_{i,j}(t) / n_i(t),  q̂_i(t) = m_i(t) / (L − t − d),  q̂_{j|i}(t) = m_{i,j}(t) / m_i(t),
where n_{i,j}(t) and m_{i,j}(t) denote the numbers of occurrences of the pair (i, j) of successive ordinal patterns before and after t, respectively, n_i(t) = Σ_{j∈S_d} n_{i,j}(t) and m_i(t) = Σ_{j∈S_d} m_{i,j}(t).
Then, as one can see from ([58], Section 3.2), we have
Lkl_{H_0}( π^{d,L} ) = p̂_{π(d)}(L) ∏_{l=d}^{L−1} p̂_{π(l+1)|π(l)}(L) = p̂_{π(d)}(L) ∏_{i∈S_d} ∏_{j∈S_d} p̂_{j|i}(L)^{n_{i,j}(L)},
Lkl_{H_A}( π^{d,L} ) = p̂_{π(d)}(t) ∏_{l=d}^{t−1} p̂_{π(l+1)|π(l)}(t) ∏_{l=t+d}^{L−1} q̂_{π(l+1)|π(l)}(t) = p̂_{π(d)}(t) ∏_{i∈S_d} ∏_{j∈S_d} p̂_{j|i}(t)^{n_{i,j}(t)} ∏_{i∈S_d} ∏_{j∈S_d} q̂_{j|i}(t)^{m_{i,j}(t)}.
Assume that the first ordinal pattern π(d) is fixed, in order to simplify the computations. Then, p̂_{π(d)}(L) = p̂_{π(d)}(t), and it holds that:
LR( t; π^{d,L} ) = −2 Σ_{i∈S_d} Σ_{j∈S_d} n_{i,j}(L) ( ln n_{i,j}(L) − ln n_i(L) ) + 2 Σ_{i∈S_d} Σ_{j∈S_d} n_{i,j}(t) ( ln n_{i,j}(t) − ln n_i(t) ) + 2 Σ_{i∈S_d} Σ_{j∈S_d} m_{i,j}(t) ( ln m_{i,j}(t) − ln m_i(t) ).
Since Σ_{j∈S_d} n_{i,j}(t) = n_i(t), one finally obtains:
LR( t; π^{d,L} ) = 2(L − d) eCE( (π(k))_{k=d}^{L} ) − 2(t − d) eCE( (π(k))_{k=d}^{t} ) − 2(L − t − d) eCE( (π(k))_{k=t+d}^{L} ) = 2 CEofOP( t; π^{d,L} ) + 2d · eCE( π^{d,L} ). □

Appendix A.3. Change-Point Detection by the CEofOP Statistic and from Permutation Entropy Values

One may ask whether special techniques for ordinal change-point detection make sense at all when one can simply compute the permutation entropy [12] of a time series in sliding windows and then apply traditional methods for change-point detection to the resulting sequence of permutation entropy values. Indeed, permutation entropy, which measures the non-uniformity of the ordinal pattern distribution, is sensitive to changes in this distribution and can be an indicator of ordinal change-points. However, as we show in Figure A2, there is no straightforward way to detect change-points from permutation entropy values.
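The sliding-window alternative discussed above can be sketched as follows (the function names are ours); permutation entropy here is the Shannon entropy of the empirical ordinal pattern distribution within a window.

```python
import numpy as np
from itertools import permutations

def permutation_entropy(x, d):
    # Shannon entropy of the empirical distribution of ordinal patterns.
    index = {p: k for k, p in enumerate(permutations(range(d + 1)))}
    counts = np.zeros(len(index))
    for t in range(d, len(x)):
        counts[index[tuple(np.argsort(x[t - d:t + 1]))]] += 1
    freq = counts[counts > 0] / counts.sum()
    return float(-np.sum(freq * np.log(freq)))

def sliding_pe(x, d, window):
    # Permutation entropy evaluated in sliding windows.
    return [permutation_entropy(x[s:s + window], d)
            for s in range(len(x) - window + 1)]
```

As Figure A2 illustrates, a change in the ordinal structure need not produce a clear jump in such a sequence of permutation entropy values, since two different pattern distributions can have the same entropy.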
Figure A2. Permutation entropy computed in sliding windows (a,b) and values of the CEofOP statistics (c,d) for artificial time series with sequences of ordinal patterns π 1 d , L and π 2 d , L , respectively, where d = 3 , L = 240 . Both sequences of ordinal patterns have a change-point at t = L / 2 (indicated by red vertical line) and are given by π 1 d , L = ( 3 , 5 , 1 , 3 , 5 , 1 , 0 , 4 , 2 , 0 , 4 , 2 , , 0 , 4 , 2 ) and π 2 d , L = ( 3 , 5 , 1 , 3 , 5 , 1 , 0 , 4 , 5 , 1 , 0 , 4 , 5 , 1 , , 0 , 4 , 5 , 1 ) . Permutation entropy is computed in sliding windows of length 5 ( d + 1 ) ! = 30 . While peaks of the CEofOP statistics clearly indicate the change-points, there is no straightforward way to detect changes from the values of permutation entropy.

References

  1. Basseville, M.; Nikiforov, I.V. Detection of Abrupt Changes: Theory and Application; Prentice-Hall, Inc.: Upper Saddle River, NJ, USA, 1993.
  2. Amorèse, D. Applying a change-point detection method on frequency-magnitude distributions. Bull. Seismol. Soc. Am. 2007, 97, 1742–1749.
  3. Bai, J.; Perron, P. Computation and Analysis of Multiple Structural Change Models. J. Appl. Econom. 2003, 18, 1–22.
  4. Walker, K.; Aranis, A.; Contreras-Reyes, J. Possible Criterion to Estimate the Juvenile Reference Length of Common Sardine (Strangomera bentincki) off Central-Southern Chile. J. Mar. Sci. Eng. 2018, 6, 82.
  5. Brodsky, B.E.; Darkhovsky, B.S. Nonparametric Methods in Change-Point Problems; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1993.
  6. Carlstein, E.; Muller, H.G.; Siegmund, D. Change-Point Problems; Institute of Mathematical Statistics: Hayward, CA, USA, 1994.
  7. Brodsky, B.E.; Darkhovsky, B.S. Non-Parametric Statistical Diagnosis: Problems and Methods; Kluwer Academic Publishers: Dordrecht, The Netherlands, 2000.
  8. Lavielle, M.; Teyssière, G. Adaptive Detection of Multiple Change-Points in Asset Price Volatility. In Long Memory in Economics; Teyssière, G., Kirman, A.P., Eds.; Springer: Berlin/Heidelberg, Germany, 2007; pp. 129–156.
  9. Davis, R.A.; Lee, T.C.M.; Rodriguez-Yam, G.A. Structural break estimation for nonstationary time series models. J. Am. Stat. Assoc. 2006, 101, 223–239.
  10. Preuss, P.; Puchstein, R.; Dette, H. Detection of multiple structural breaks in multivariate time series. J. Am. Stat. Assoc. 2015, 110, 654–668.
  11. Brodsky, B.E.; Darkhovsky, B.S.; Kaplan, A.Y.; Shishkin, S.L. A nonparametric method for the segmentation of the EEG. Comput. Methods Programs Biomed. 1999, 60, 93–106.
  12. Bandt, C.; Pompe, B. Permutation entropy: A natural complexity measure for time series. Phys. Rev. Lett. 2002, 88, 174102.
  13. Bandt, C.; Shiha, F. Order patterns in time series. J. Time Ser. Anal. 2007, 28, 646–665.
  14. Keller, K.; Sinn, M.; Emonds, J. Time series from the ordinal viewpoint. Stoch. Dyn. 2007, 7, 247–272.
  15. Amigó, J.M. Permutation Complexity in Dynamical Systems: Ordinal Patterns, Permutation Entropy and All That; Springer: Berlin/Heidelberg, Germany, 2010.
  16. Pompe, B.; Runge, J. Momentary information transfer as a coupling measure of time series. Phys. Rev. E 2011, 83, 051122.
  17. Unakafova, V.A.; Keller, K. Efficiently measuring complexity on the basis of real-world data. Entropy 2013, 15, 4392–4415.
  18. Keller, K.; Unakafov, A.M.; Unakafova, V.A. Ordinal patterns, entropy, and EEG. Entropy 2014, 16, 6212–6239.
  19. Antoniouk, A.; Keller, K.; Maksymenko, S. Kolmogorov–Sinai entropy via separation properties of order-generated σ-algebras. Discret. Contin. Dyn. Syst. A 2014, 34, 1793–1809.
  20. Keller, K.; Mangold, T.; Stolz, I.; Werner, J. Permutation Entropy: New Ideas and Challenges. Entropy 2017, 19, 134.
  21. Unakafov, A.M.; Keller, K. Conditional entropy of ordinal patterns. Physica D 2014, 269, 94–102.
  22. Unakafov, A.M. Ordinal-Patterns-Based Segmentation and Discrimination of Time Series with Applications to EEG Data. Ph.D. Thesis, University of Lübeck, Lübeck, Germany, 2015.
  23. Sinn, M.; Ghodsi, A.; Keller, K. Detecting Change-Points in Time Series by Maximum Mean Discrepancy of Ordinal Pattern Distributions. In Proceedings of the 28th Conference on Uncertainty in Artificial Intelligence, Catalina Island, CA, USA, 14–18 August 2012; pp. 786–794.
  24. Sinn, M.; Keller, K.; Chen, B. Segmentation and classification of time series using ordinal pattern distributions. Eur. Phys. J. Spec. Top. 2013, 222, 587–598.
  25. Unakafov, A.M. Change-Point Detection Using the Conditional Entropy of Ordinal Patterns. 2017. Available online: https://mathworks.com/matlabcentral/fileexchange/62944-change-point-detection-using-the-conditional-entropy-of-ordinal-patterns (accessed on 13 September 2018).
  26. Stoffer, D.S. Frequency Domain Techniques in the Analysis of DNA Sequences. In Handbook of Statistics: Time Series Analysis: Methods and Applications; Rao, T.S., Rao, S.S., Rao, C.R., Eds.; Elsevier: New York, NY, USA, 2012; pp. 261–296.
  27. Bandt, C.; Keller, G.; Pompe, B. Entropy of interval maps via permutations. Nonlinearity 2002, 15, 1595–1602.
  28. Keller, K. Permutations and the Kolmogorov–Sinai entropy. Discret. Contin. Dyn. Syst. 2012, 32, 891–900.
  29. Pompe, B. The LE-statistic. Eur. Phys. J. Spec. Top. 2013, 222, 333–351.
  30. Haruna, T.; Nakajima, K. Permutation complexity and coupling measures in hidden Markov models. Entropy 2013, 15, 3910–3930.
  31. Sinn, M.; Keller, K. Estimation of ordinal pattern probabilities in Gaussian processes with stationary increments. Comput. Stat. Data Anal. 2011, 55, 1781–1790.
  32. Amigó, J.M.; Keller, K. Permutation entropy: One concept, two approaches. Eur. Phys. J. Spec. Top. 2013, 222, 263–273.
  33. Sinn, M.; Keller, K. Estimation of ordinal pattern probabilities in fractional Brownian motion. arXiv 2008, arXiv:0801.1598.
  34. Bandt, C. Autocorrelation type functions for big and dirty data series. arXiv 2014, arXiv:1411.3904.
  35. Elizalde, S.; Martinez, M. The frequency of pattern occurrence in random walks. arXiv 2014, arXiv:1412.0692.
  36. Cao, Y.; Tung, W.; Gao, W.; Protopopescu, V.; Hively, L. Detecting dynamical changes in time series using the permutation entropy. Phys. Rev. E 2004, 70, 046217.
  37. Yuan, Y.J.; Wang, X.; Huang, Z.T.; Sha, Z.C. Detection of Radio Transient Signal Based on Permutation Entropy and GLRT. Wirel. Pers. Commun. 2015, 1–11.
  38. Schnurr, A.; Dehling, H. Testing for structural breaks via ordinal pattern dependence. arXiv 2015, arXiv:1501.07858.
  39. Thunberg, H. Periodicity versus chaos in one-dimensional dynamics. SIAM Rev. 2001, 43, 3–30.
  40. Lyubich, M. Forty years of unimodal dynamics: On the occasion of Artur Avila winning the Brin prize. J. Mod. Dyn. 2012, 6, 183–203.
  41. Linz, S.; Lücke, M. Effect of additive and multiplicative noise on the first bifurcations of the logistic model. Phys. Rev. A 1986, 33, 2694.
  42. Diks, C. Nonlinear Time Series Analysis: Methods and Applications; World Scientific: Singapore, 1999.
  43. Vostrikova, L.Y. Detecting disorder in multidimensional random processes. Sov. Math. Dokl. 1981, 24, 55–59.
  44. Han, T.S.; Kobayashi, K. Mathematics of Information and Coding; Translated from the Japanese by J. Suzuki; American Mathematical Society: Providence, RI, USA, 2002.
  45. Theiler, J.; Eubank, S.; Longtin, A.; Galdrikian, B.; Farmer, J. Testing for nonlinearity in time series: The method of surrogate data. Phys. D Nonlinear Phenom. 1992, 58, 77–94.
  46. Schreiber, T.; Schmitz, A. Improved surrogate data for nonlinearity tests. Phys. Rev. Lett. 1996, 77, 635.
  47. Schreiber, T.; Schmitz, A. Surrogate time series. Phys. D Nonlinear Phenom. 2000, 142, 346–382.
  48. Gautama, T. Surrogate Data. MATLAB Central File Exchange. 2005. Available online: https://www.mathworks.com/matlabcentral/fileexchange/4612-surrogate-data (accessed on 13 September 2018).
  49. Polansky, A.M. Detecting change-points in Markov chains. Comput. Stat. Data Anal. 2007, 51, 6013–6026.
  50. Kim, A.Y.; Marzban, C.; Percival, D.B.; Stuetzle, W. Using labeled data to evaluate change detectors in a multivariate streaming environment. Signal Process. 2009, 89, 2529–2536.
  51. Davison, A.C.; Hinkley, D.V. Bootstrap Methods and their Application; Cambridge University Press: Cambridge, UK, 1997.
  52. Lahiri, S.N. Resampling Methods for Dependent Data; Springer: New York, NY, USA, 2003.
  53. Härdle, W.; Horowitz, J.; Kreiss, J.P. Bootstrap methods for time series. Int. Stat. Rev. 2003, 71, 435–459.
  54. Bühlmann, P. Bootstraps for time series. Stat. Sci. 2002, 17, 52–72.
  55. Lavielle, M. Detection of multiple changes in a sequence of dependent variables. Stoch. Process. Their Appl. 1999, 83, 79–102.
  56. Grassberger, P. Entropy estimates from insufficient samplings. arXiv 2003, arXiv:physics/0307138.
  57. Riedl, M.; Müller, A.; Wessel, N. Practical considerations of permutation entropy. Eur. Phys. J. Spec. Top. 2013, 222, 249–262.
  58. Anderson, T.W.; Goodman, L.A. Statistical inference about Markov chains. Ann. Math. Stat. 1957, 28, 89–110.
Figure 1. A part of a piecewise stationary time series with a change-point at t = L / 2 (marked by a vertical line) and corresponding ordinal patterns of order d = 1 (below the plot).
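For order d = 1 there are only two ordinal patterns, "up" and "down", as shown below the plot in Figure 1. A minimal sketch of extracting them (the handling of ties is an assumption made here, since the caption does not specify it):

```python
def updown_patterns(x):
    """Order-1 ordinal patterns of a series: 0 for 'up' (x[t] < x[t+1]), 1 otherwise.

    Treating ties as 'down' is an arbitrary convention chosen for this sketch.
    """
    return [0 if a < b else 1 for a, b in zip(x, x[1:])]
```

For example, `updown_patterns([1, 3, 2, 2, 5])` yields `[0, 1, 1, 0]`: one pattern per pair of successive values.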
Figure 2. The statistic CEofOP(θ; L) for the sequence of ordinal patterns of order d = 1 from the time series in Figure 1.
Figure 3. Hypnogram (bold curve) and the results of ordinal-patterns-based discrimination of sleep EEG. The y-axis represents the expert classification: W stands for waking; stages S1, S2 and S3, S4 indicate light and deep sleep, respectively; REM stands for REM sleep; and Error for unclassified samples. The results of the ordinal-patterns-based discrimination are represented by the background colour: white indicates epochs classified as waking, light gray as light sleep, gray as deep sleep, dark gray as REM, and red indicates unclassified segments.
Figure 4. Upper row: parts of realizations of piecewise stationary autoregressive (AR) (a) and noisy logistic (NL) (b) processes with change-points marked by vertical lines, L = 20,000. Lower row: empirical probability distributions of ordinal patterns of order d = 2 in the realizations of AR (c) and NL (d) processes are different before and after the change-point.
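Piecewise stationary AR and NL processes like those in Figure 4 can be simulated along the following lines. This is a sketch, not the paper's exact specification: the initialization, noise distribution, and in particular the clipping of the noisy logistic map to [0, 1] are assumptions made here.

```python
import numpy as np

rng = np.random.default_rng(0)

def piecewise_ar1(coeffs, t_star, length):
    """AR(1) process whose coefficient switches from coeffs[0] to coeffs[1] at t_star."""
    x = np.zeros(length)
    for t in range(1, length):
        a = coeffs[0] if t < t_star else coeffs[1]
        x[t] = a * x[t - 1] + rng.standard_normal()
    return x

def piecewise_noisy_logistic(rs, sigmas, t_star, length):
    """Noisy logistic map whose parameters (r, sigma) switch at t_star.

    Clipping to [0, 1] keeps the orbit in the map's domain; the paper's
    exact noise handling may differ.
    """
    x = np.zeros(length)
    x[0] = rng.uniform(0.1, 0.9)
    for t in range(1, length):
        r, s = (rs[0], sigmas[0]) if t < t_star else (rs[1], sigmas[1])
        x[t] = np.clip(r * x[t - 1] * (1 - x[t - 1]) + s * rng.standard_normal(),
                       0.0, 1.0)
    return x
```

With, e.g., `piecewise_ar1((0.1, 0.3), 10000, 20000)` one obtains a realization matching the short name "AR, 0.1 → 0.3" of Table 1, with a single change-point at t* = 10,000.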
Figure 5. The maximum of the statistic CEofOP(θ; L) detects the change-point t* = 2000 (indicated by the vertical line) in a time series obtained by “gluing” a realization of a noisy logistic stochastic process NL(4, 0.2) with its surrogate.
Figure 6. Measures of change-point detection performance for NL (a,b) and AR (c,d) processes of different lengths, where L is the number of windows given on the x-axis multiplied by the window length W = 256.
Table 1. Processes used for investigation of the change-point detection.

Short Name                 | Complete Designation
NL, 3.95 → 3.98, σ = 0.2   | NL((3.95, 3.98), (0.2, 0.2), t*)
NL, 3.95 → 3.80, σ = 0.3   | NL((3.95, 3.80), (0.3, 0.3), t*)
NL, 3.95 → 4.00, σ = 0.2   | NL((3.95, 4.00), (0.2, 0.2), t*)
AR, 0.1 → 0.3              | AR((0.1, 0.3), t*)
AR, 0.1 → 0.4              | AR((0.1, 0.4), t*)
AR, 0.1 → 0.5              | AR((0.1, 0.5), t*)
Table 2. Performance of different statistics for estimating the change-point in noisy logistic (NL) processes.

                 | NL, 3.95 → 3.98, σ = 0.2 | NL, 3.95 → 3.80, σ = 0.3 | NL, 3.95 → 4.00, σ = 0.2
Statistic        | sE     B     RMSE        | sE     B     RMSE        | sE     B     RMSE
CMMD             | 0.34   698   1653        | 0.50   −51   306         | 0.68   −13   206
CEofOP, d = 2    | 0.46   147   1108        | 0.62   −3    267         | 0.81   33    147
CEofOP, d = 3    | 0.61   53    397         | 0.65   12    56          | 0.88   20    99
CEofOP, d = 4    | 0.47   −2    982         | 0.46   −41   1162        | 0.83   2     130
BD exp           | 0.62   78    351         | 0.78   −6    145         | 0.89   43    96
BD corr          | 0.44   85    656         | 0.71   13    202         | 0.77   43    189
Table 3. Performance of different statistics for estimating the change-point in autoregressive (AR) processes.

                 | AR, 0.1 → 0.3            | AR, 0.1 → 0.4            | AR, 0.1 → 0.5
Statistic        | sE     B      RMSE       | sE     B      RMSE       | sE     B      RMSE
CMMD             | 0.32   616    1626       | 0.54   −14    368        | 0.68   −48    184
CEofOP, d = 2    | 0.42   74     1096       | 0.67   6      244        | 0.82   3      129
CEofOP, d = 3    | 0.39   126    1838       | 0.68   0      234        | 0.86   0      110
CEofOP, d = 4    | 0.08   1028   6623       | 0.46   −176   1678       | 0.74   −27    214
BD exp           | 0.00   >10^3  >10^4      | 0.00   >10^4  >10^4      | 0.00   >10^4  >10^4
BD corr          | 0.79   31     151        | 0.92   21     73         | 0.97   21     50
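The sE, B and RMSE columns of Tables 2 and 3 can be reproduced from a set of change-point estimates along these lines. This is a sketch under the assumption that sE is the fraction of estimates falling within a fixed tolerance of the true change-point, and that the bias B and RMSE are taken over those satisfactory estimates; the paper's exact aggregation may differ.

```python
import numpy as np

def detection_metrics(estimates, true_cp, tol):
    """Summarize change-point estimates against the true location.

    Assumed reading of the table columns: sE is the fraction of
    'satisfactory' estimates (within +/- tol of the true change-point);
    B (bias) and RMSE are computed over those satisfactory estimates.
    """
    err = np.asarray(estimates, dtype=float) - true_cp
    ok = np.abs(err) <= tol
    s_e = ok.mean()
    b = err[ok].mean() if ok.any() else float("nan")
    rmse = np.sqrt((err[ok] ** 2).mean()) if ok.any() else float("nan")
    return s_e, b, rmse
```

For example, estimates (100, 110, 500) for a true change-point at 100 with tolerance 50 give sE = 2/3, B = 5, and RMSE = √50 ≈ 7.1.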
Table 4. Performance of change-point detection methods for the process with three change-points NL((3.98, 4, 3.95, 3.8), (0.2, 0.2, 0.2, 0.3), (t1*, t2*, t3*)).

           | Number of False | Fraction sE_k of Satisfactory Estimates
Statistic  | Change-Points   | 1st Change   2nd Change   3rd Change   Average
CMMD       | 1.17            | 0.465        0.642        0.747        0.618
CEofOP     | 0.62            | 0.753        0.882        0.930        0.855
BD corr    | 1.34            | 0.296        0.737        0.751        0.595
Table 5. Performance of change-point detection methods for the process with three change-points AR((0.3, 0.5, 0.1, 0.4), (t1*, t2*, t3*)).

           | Number of False | Fraction sE_k of Satisfactory Estimates
Statistic  | Change-Points   | 1st Change   2nd Change   3rd Change   Average
CMMD       | 1.17            | 0.340        0.640        0.334        0.438
CEofOP     | 1.12            | 0.368        0.834        0.517        0.573
BD corr    | 0.53            | 0.783        0.970        0.931        0.895

Unakafov, A.M.; Keller, K. Change-Point Detection Using the Conditional Entropy of Ordinal Patterns. Entropy 2018, 20, 709. https://doi.org/10.3390/e20090709
