Article

A Decomposition Method for Global Evaluation of Shannon Entropy and Local Estimations of Algorithmic Complexity

1 Algorithmic Dynamics Lab, Unit of Computational Medicine, Department of Medicine Solna, Center for Molecular Medicine, Karolinska Institute and SciLifeLab, SE-171 77 Stockholm, Sweden
2 Algorithmic Nature Group, Laboratoire de Recherche Scientifique (LABORES) for the Natural and Digital Sciences, 75005 Paris, France
3 Department of Computer Science, University of Oxford, Oxford OX1 3QD, UK
4 Posgrado en Ciencia e Ingeniería de la Computación, Universidad Nacional Autónoma de México (UNAM), Mexico City 04510, Mexico
5 Grupo de Lógica, Lenguaje e Información, Universidad de Sevilla, 41004 Seville, Spain
6 Instituto Nacional de Bioingeniería, Universidad Central de Venezuela, Caracas 1051, Venezuela
7 Unit of Computational Medicine, Department of Medicine Solna, Center for Molecular Medicine, SciLifeLab and Karolinska Institute, SE-171 77 Stockholm, Sweden
8 Biological and Environmental Sciences and Engineering Division, Computer, Electrical and Mathematical Sciences and Engineering Division, King Abdullah University of Science and Technology (KAUST), Thuwal 23955, Saudi Arabia
* Author to whom correspondence should be addressed.
Submission received: 28 April 2018 / Revised: 18 June 2018 / Accepted: 31 July 2018 / Published: 15 August 2018
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract:
We investigate the properties of a Block Decomposition Method (BDM), which extends the power of a Coding Theorem Method (CTM) that approximates local estimations of algorithmic complexity based on Solomonoff–Levin’s theory of algorithmic probability, providing a closer connection to algorithmic complexity than previous attempts based on statistical regularities such as popular lossless compression schemes. The strategy behind BDM is to find small computer programs that produce the components of a larger, decomposed object. The set of short computer programs can then be artfully arranged in sequence so as to produce the original object. We show that the method provides efficient estimations of algorithmic complexity but that it performs like Shannon entropy when it loses accuracy. We estimate errors and study the behaviour of BDM for different boundary conditions, all of which are compared and assessed in detail. The measure may be adapted for use with multi-dimensional objects other than strings, such as arrays and tensors. To test the measure we demonstrate the power of CTM on low-algorithmic-randomness objects that are assigned maximal entropy (e.g., π) but whose numerical approximations are closer to the theoretical low-algorithmic-randomness expectation. We also test the measure on larger objects, including dual, isomorphic and cospectral graphs, for which we know that algorithmic randomness is low. We also release implementations of the methods in most major programming languages—Wolfram Language (Mathematica), Matlab, R, Perl, Python, Pascal, C++, and Haskell—and an online algorithmic complexity calculator.

1. Introduction

Capturing the “complexity” of an object, for purposes such as classification and object profiling, is one of the most fundamental challenges in science. This is so because one has to either choose a computable measure (e.g., Shannon entropy) that is not invariant to object descriptions and probability distributions [1] and lacks an invariance theorem—which forces one to decide on a particular feature shared among several objects of interest—or else estimate values of an uncomputable function in applying a “universal” measure of complexity that is invariant to object description (such as algorithmic complexity). The latter drawback has led to computable variants and to the development of time- and resource-bounded algorithmic complexity/probability, which is finitely computable [2,3,4]. A good introduction and list of references is provided in [5]. Here we study a measure that lies half-way between these two universally used measures, enabling the action of both at different scales by dividing data into smaller pieces for which the halting problem involved in an uncomputable function can be partially circumvented, in exchange for a huge calculation based on the concept of algorithmic probability. The calculation can, however, be precomputed and hence reused in future applications, thereby constituting a strategy for efficient estimations—bounded by Shannon entropy and by algorithmic (Kolmogorov–Chaitin) complexity—in exchange for a loss of accuracy.
In the past, lossless compression algorithms have dominated the landscape of applications of algorithmic complexity. When researchers have chosen to use lossless compression algorithms on reasonably long strings, the method has proven to be of value (e.g., [6]). Their successful application has to do with the fact that compressibility is a sufficient test for non-randomness (though the converse is not true). However, popular implementations of lossless compression algorithms are based on estimations of entropy [7], and are therefore no more closely related to algorithmic complexity than is Shannon entropy by itself. They can only account for statistical regularities and not for algorithmic ones, though accounting for algorithmic regularities ought to be crucial, since these regularities represent the main advantage of using algorithmic complexity.
One of the main difficulties with computable measures of complexity such as Shannon entropy is that they are not robust enough [1,8]. For example, they are not invariant to different descriptions of the same object, unlike algorithmic complexity, where the so-called invariance theorem guarantees the invariance of an object’s algorithmic complexity. This is due to the fact that one can always translate a lossless description into any other lossless description simply with a program of fixed length, hence in effect just adding a constant. Computability theorists are not much concerned with the relatively negligible differences between evaluations of Kolmogorov complexity owed to the use of different descriptive frameworks (e.g., different programming languages), yet these differences are fundamental in applications of algorithmic complexity.
Here we study a Block Decomposition Method (BDM) that is meant to extend the power of the so-called Coding Theorem Method (CTM). Applications of CTM include image classification [9] and visual cognition [10,11,12], among many applications in cognitive science. In these applications, other complexity measures, including entropy and lossless compressibility, have been outperformed by CTM. Graph complexity is another subject of active research [13,14,15,16,17]; the method presented here has contributed to this subject by proposing robust measures of algorithmic graph complexity [18,19].
After introducing the basics of algorithmic complexity, algorithmic probability, Shannon entropy and other entropic measures, and after exploring the limitations and abuse of the use of lossless compression to approximate algorithmic complexity, we introduce CTM, on which BDM heavily relies. After introducing BDM, we thoroughly study its properties, its dependency on parameters such as object size, and the problem of boundaries in the decomposition process; we prove theoretical bounds and provide numerical estimations after testing on actual data (graphs), followed by error estimations for the numerical estimations produced by BDM. We also introduce a normalized version of BDM.

1.1. Algorithmic Complexity

The Coding Theorem Method was first introduced as a method for dealing with the problem of compressing very short strings, for which no implementation of lossless compression gives reasonable results. CTM exploits the elegant and powerful relationship between the algorithmic frequency of production of a string and its algorithmic complexity [20].
The algorithmic complexity [21,22] $K(s)$ of a string s is the length of the shortest program p that outputs the string s when running on a universal (prefix-free, that is, the set of valid programs forms a prefix-free set: no element is a prefix of any other, a property necessary to keep $0 < m(s) < 1$; for details see [5,23]) Turing machine U.
A technical inconvenience of K as a function taking s to the length of the shortest program that produces s is that K is upper semi-computable. In other words, there is no effective algorithm which takes a string s as input and produces the integer $K(s)$ as output. This is usually considered a major problem, but the theory of algorithmic randomness [24] ascribes uncomputability to any universal measure of complexity, that is, a measure that is at least capable of characterizing mathematical randomness [25]. However, because it is upper semi-computable, $K(s)$ can be approximated from above, or in other words, upper bounds can be found, for example, by finding and exhibiting a small computer program (measured in bits) relative to the length of a bit string.

1.2. Algorithmic Probability

The classical probability of production of a bit string s among all possible 2 n bit strings of length n is given by P ( s ) = 1 / 2 n . The concept of algorithmic probability (also known as Levin’s semi-measure) replaces the random production of outputs by the random production of programs that produce an output. The algorithmic probability of a string s is thus a measure that estimates the probability of a random program p producing a string s when run on a universal (prefix-free) Turing machine U.
The algorithmic probability $m(s)$ of a binary string s is the sum of $1/2^{|p|}$ over all the (prefix-free) programs p for which a universal Turing machine U running p outputs s and halts [20,21,26]. It replaces n (the length of s) with $|p|$, the length of each program p that produces s:
$m(s) = \sum_{p : U(p) = s} 1/2^{|p|}$
$m(s)$ can be considered an approximation to $K(s)$, because the greatest contributor to $m(s)$ is the shortest program p that generates s using U. So if s is of low algorithmic complexity, then $|p| < n$, and s will be considered random if $|p| \approx n$.
The Coding Theorem [20,26] further establishes the connection between m ( s ) and K ( s ) .
$|-\log_2 m(s) - K(s)| < c$
where c is a fixed constant, independent of s.
The Coding Theorem implies [27,28] that the output frequency distribution of random computer programs used to approximate $m(s)$ can be converted into estimations of $K(s)$ using the following rewritten version of Equation (2):
$K(s) = -\log_2 m(s) + O(1)$
Among the properties of algorithmic probability and $m(s)$ that make it optimal: the data does not need to be stationary or ergodic; the measure is universal (stronger than ergodic) in the sense that it will work for any string and can deal with missing and multidimensional data [29,30,31,32]; there is no underfitting or overfitting, because the method is parameter-free; and the data need not be divided into training and test sets.

1.3. Convergence Rate and the Invariance Theorem

One other fundamental property that provides the theory of algorithmic complexity with the necessary robustness to stand as a universal measure of (random) complexity is the so-called Invariance theorem [20,26], which guarantees the convergence of values despite the use of different reference universal Turing machines (UTMs) or e.g., programming languages.
Invariance theorem [21,22,26]: If $U_1$ and $U_2$ are two UTMs and $K_{U_1}(s)$ and $K_{U_2}(s)$ the algorithmic complexity of s for $U_1$ and $U_2$, there exists a constant $c_{U_1, U_2}$ such that:
$|K_{U_1}(s) - K_{U_2}(s)| < c_{U_1, U_2}$
where $c_{U_1, U_2}$ is independent of s and can be considered to be the length (in bits) of a translating function between the universal Turing machines $U_1$ and $U_2$, or as a compiler between the computer programming languages $U_1$ and $U_2$.
In practice, however, the constant involved can be arbitrarily large, and the invariance theorem tells us nothing about the convergence (see Figure 1). One may perform the calculation of $K(s)$ for a growing sequence s under $U_1$ in the expectation that for long s, $K_{U_1}(s) = K(s)$. However, there is no guarantee that this will be the case, and the size of $c_{U_1, U_2}$ is in general unknown.
It is an open question whether there can be a natural universal Turing machine $U_N$ such that $K_{U_N}(s)$ converges faster for s than for any other universal Turing machine (UTM), or whether specific conditions must be met if $U_1$ is to generate “well-behaved” (monotonic) behaviour in c [33]. The invariance theorem guarantees that such optimal “well-behaved” machines $U_N$ always exist (indeed their existence is implicit in the very sense of the theorem, meaning any universal machine can be optimal in the sense of the theorem), but it tells us nothing about the rate of convergence or about transitional behaviour (see Figure 1 for an illustration).
The longer the string, the less important c is (i.e., the choice of programming language or UTM). However, in practice c can be arbitrarily large, thus having a great impact, particularly on short strings, and never revealing at which point one starts approaching a stable K or when one is diverging before finally monotonically converging, as is seen in the different possible behaviours illustrated in the sketches in Figure 1.
The invariance theorem offers no guarantee about the rate of convergence, but this does not imply that one cannot study the behaviour of such a constant for different reference universal Turing machines, nor that K cannot be approximated from above.

2. The Use and Misuse of Lossless Compression

Notice that the same problem affects lossless compression algorithms, which are widely used to approximate K: they are not exempt from the constant involved in the invariance theorem, because there is no reason to choose one compression algorithm over another.
Lossless compression algorithms have traditionally been used to approximate the Kolmogorov complexity of an object (e.g., a string) because they can provide upper bounds to K, and compression is a sufficient test for non-randomness. In a similar fashion, our approximations are upper bounds based on finding a small Turing machine producing a string. Data compression can be viewed as a function that maps data onto other data using the same units or alphabet (if the translation is into different units or a larger or smaller alphabet, then the process is called an encoding).
Compression is successful if the resulting data are shorter than the original data plus the decompression instructions needed to fully reconstruct said original data. For a compression algorithm to be lossless, there must be a reverse mapping from compressed data to the original data. That is to say, the compression method must encapsulate a bijection between “plain” and “compressed” data, because the original data and the compressed data should be in the same units. By a simple counting argument, lossless data compression algorithms cannot guarantee compression for all input data sets, because there will be some inputs that do not get smaller when processed by the compression algorithm, and for any lossless data compression algorithm that makes at least one file smaller, there will be at least one file that it makes larger. Strings of data of length N or shorter are clearly a strict superset of the sequences of length N − 1 or shorter. It follows therefore that there are more data strings of length N or shorter than there are data strings of length N − 1 or shorter. In addition, it follows from the pigeonhole principle that it is not possible to map every sequence of length N or shorter to a unique sequence of length N − 1 or shorter. Therefore there is no single algorithm that reduces the size of all data.
One of the more time-consuming steps in implementations of, for example, LZ77 compression (one of the most popular) is the search for the longest string match. Most lossless compression implementations are based on the LZ (Lempel–Ziv) algorithm. The classical LZ77 and LZ78 algorithms enact a greedy parsing of the input data. That is, at each step, they take the longest dictionary phrase which is a prefix of the currently unparsed string suffix. LZ algorithms are said to be “universal” because, assuming unbounded memory (arbitrary sliding window length), they asymptotically approximate the (infinite) entropy rate of the generating source [34]. Not only does lossless compression fail to provide any estimation of the algorithmic complexity of small objects [33,35,36], it is also not more closely related to algorithmic complexity than Shannon entropy by itself [7], being only capable of exploiting statistical regularities (if the observer has no other method to update/infer the probability distribution) [8].
The greatest limitation of popular lossless compression algorithms, in the light of algorithmic complexity, is that their implementations only exploit statistical regularities (repetitions up to the size of the sliding window length). Thus in effect no general lossless compression algorithm does better than provide the Shannon entropy rate (cf. Section 2.1) of the objects it compresses. It is then obvious that an exploration of other possible methods for approximating K is not only desirable but needed, especially methods that can, at least in principle, and more crucially in practice, detect algorithmic features in data that statistical approaches such as entropy, and to some extent compression, would miss.
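As a quick, hedged illustration of why compressed size is uninformative for very short objects, the following Python snippet (our own example, not part of the released implementations; zlib is used only as a representative DEFLATE-based compressor) measures the compressed length of a trivially regular string at several lengths; for short inputs the format overhead dominates whatever regularity is present.

import zlib

# Compressed size of the trivially regular pattern "0101..." at several lengths.
# For very short inputs the fixed header, checksum and block bookkeeping of the
# compressor dominate, so the compressed length says little about the string's
# algorithmic content; only for long inputs does the repetition pay off.
for n in (8, 16, 64, 4096):
    s = ("01" * n)[:n].encode()
    print(n, len(zlib.compress(s, 9)))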

2.1. Building on Block Entropy

The entropy H of a discrete random variable s with possible values $s_1, \ldots, s_n$ and probability distribution $P(s)$ is defined as:
$H(s) = -\sum_{i=1}^{n} P(s_i) \log_2 P(s_i)$
In the case of $P(s_i) = 0$ for some i, the value of the corresponding summand $0 \log_2(0)$ is taken to be 0.
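As a minimal sketch (ours, for illustration only), the following Python function computes the Shannon entropy just defined, using the empirical probabilities of the symbols occurring in a string.

from collections import Counter
from math import log2

def shannon_entropy(s):
    """Shannon entropy of s using the empirical probabilities
    P(s_i) = count(s_i) / len(s); zero-probability terms never occur here."""
    counts = Counter(s)
    total = len(s)
    return -sum((n / total) * log2(n / total) for n in counts.values())

print(shannon_entropy("01010101"))  # 1.0: maximal at 1-symbol granularity
print(shannon_entropy("00000001"))  # ~0.544: a biased symbol distribution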
It is natural to ask how random a string appears when blocks of finite length are considered.
For example, the string 01010101…01 is periodic, but at the smallest granularity (1 bit), or 1-symbol blocks, the sequence has maximal entropy, because the number of 0s and 1s is the same, assuming a uniform probability distribution over all strings of the same finite length. Only for blocks of length 2 bits is the string found to be regular, identifying the block length at which the entropy value reaches its minimum.
When dealing with a given string s, assumed to originate from a stationary stochastic source with known probability density for each symbol, the following function $H_l$ gives what is variously denominated block entropy, namely the Shannon entropy over blocks (or subsequences of s) of length l. That is,
$H_l(s) = -\sum_{b \in blocks} P_l(b) \log_2 P_l(b),$
where $blocks$ is the set resulting from decomposing s into substrings or blocks of size l, and $P_l(b)$ is the probability of obtaining the combination of l symbols corresponding to the block b.
$\lim_{l \to \infty} \frac{1}{l} \sum_{|s| = l} H_l(s),$
where $|s| = l$ indicates that we are considering all the generated strings of length l. For a fixed string we can think of the normalized block entropy value for the l that best captures the periodicity of s.
Entropy was originally conceived by Shannon as a measure of information transmitted over a stochastic communication channel with known alphabets, and it establishes hard limits on maximum lossless compression rates. For instance, Shannon coding (and Shannon–Fano coding) sorts the symbols of an alphabet according to their probabilities, assigning shorter binary self-delimited sequences to symbols that appear more frequently. Such methods form the basis of many, if not most, commonly used compression algorithms.
Given its utility in data compression, entropy is often used as a measure of the information contained in a finite string $s = s_1 s_2 \ldots s_k$. Let us consider the natural distribution, the uniform distribution suggested by the set of symbols in s and the string length: it makes the fewest assumptions, considering every possibility equally likely, and is thus uniform. The natural distribution of s is the distribution defined by $P(x) = \frac{n_x}{|s|}$, where $n_x$ is the number of times the object x occurs in s (at least once, to be considered), with the respective entropy function $H_l$. If we consider blocks of size $n > l \geq 2$ and the string $s = 01010101\ldots01$, where n is the length of the string, then s can be compressed into a considerably smaller number of bits than a statistically random sequence of the same length and, correspondingly, has a lower $H_l$ value. However, entropy with the natural distribution suggested by the object, or with any other computable distribution, is a computable function and therefore an imperfect approximation to algorithmic complexity.
The best possible version of a measure based on entropy can be reached by partitioning an object into blocks of increasing size (up to half the length of the object) in order for Shannon entropy to capture any periodic statistical regularity. Figure 2 illustrates the way in which such a measure operates on 3 different strings.
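A sketch (ours) of the block entropy $H_l$ defined above and of the “best version” just described, which scans block sizes up to half the length of the object and reports the lowest per-symbol value; the normalization by l and the function names are illustrative choices.

from collections import Counter
from math import log2

def block_entropy(s, l):
    """H_l: Shannon entropy over non-overlapping blocks of length l of s,
    using the natural (empirical) distribution of the blocks."""
    blocks = Counter(s[i:i + l] for i in range(0, len(s) - l + 1, l))
    total = sum(blocks.values())
    return max(0.0, -sum((n / total) * log2(n / total) for n in blocks.values()))

def best_block_entropy(s):
    """Scan block sizes up to half the string length and return the smallest
    normalized (per-symbol) block entropy, which exposes periodic regularities."""
    return min(block_entropy(s, l) / l for l in range(1, len(s) // 2 + 1))

s = "01" * 16
print(block_entropy(s, 1), block_entropy(s, 2))  # 1.0 and 0.0: periodicity found at l = 2
print(best_block_entropy(s))                     # 0.0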
However, no matter how sophisticated, no version or variation of an entropic measure will characterize certain algorithmic aspects of data that are not random but appear to have maximal entropy when no knowledge about the source is available. Figure 3 depicts how algorithmic probability/complexity can find such patterns and ultimately characterize any regularity, including statistical ones, thereby offering a generalization of, and a complementary improvement to, the application of entropy alone.
BDM builds on block entropy’s decomposition approach, using algorithmic complexity methods to obtain and combine its building blocks. The result is a complexity measure that, as shown in Section 4.3, approaches K in the best case and behaves like entropy in the worst case (see Section 7.1), outperforming $H_l$ in various scenarios. First we introduce the algorithm that provides the building blocks of BDM, which are local estimations of algorithmic complexity. Specific examples of objects that not even block entropy can characterize are found in Section 6, showing how our methods are a significant improvement over any measure based on entropy and traditional statistics.
For example, the following two strings were assigned near-maximal complexity by entropy but were found to have low algorithmic complexity by CTM/BDM, given that we were able to find not only a small Turing machine that reproduces each of them but also many Turing machines producing them upon halting and thereby, by the Coding theorem, indicating low algorithmic complexity: 001010110101 and 001101011010 (and their negations and reversions). These strings display nothing particularly special and look in some sense typically random, yet this is what we were expecting to find: strings that appear random but are not algorithmically random. Such strings would have been assigned higher randomness by entropy and popular lossless compression algorithms, but are assigned lower randomness by our methods, thus providing a real advantage over those other methods, which can only exploit statistical regularities.

3. The Coding Theorem Method (CTM)

A computationally expensive procedure that is nevertheless closely related to algorithmic complexity involves approximating the algorithmic complexity of an object by running every possible program, from the shortest to the longest, and counting the number of times that a program produces every string object. The length of the computer program will be an upper bound of the algorithmic complexity of the object, following the Coding theorem (Equation (3)), and a (potentially) compressed version of the data itself (the shortest program found) for a given computer language or “reference” UTM. This guarantees discovery of the shortest program in the given language or reference UTM but entails an exhaustive search over the set of countably infinite computer programs that can be written in such a language. A program of length n has asymptotic probability close to 1 of halting in time $2^n$ [37], making this procedure exponentially expensive, even assuming that all programs halt or that programs are assumed never to halt after a specified time, with those that do not being discarded.
As shown in [33,35], an exhaustive search can be carried out for a small-enough number of computer programs (more specifically, Turing machines) for which the halting problem is known, thanks to the Busy Beaver problem [38]. This problem consists in finding the Turing machine of fixed size (states and symbols) that runs longer than any other machine of the same size. Values are known for Turing machines with 2 symbols and up to 4 states, and they can be used to stop a resource-bounded exploration, that is, to discard any machine taking more steps than the Busy Beaver values. For longer strings we also proceed with an informed runtime cut-off, below the theoretical $2^n$ optimal runtime that guarantees an asymptotic drop of non-halting machines [37] but high enough to capture most strings up to any degree of accuracy, as performed in [39].
The so-called Coding Theorem Method (or simply CTM) [33,35] is a bottom-up approach to algorithmic complexity; unlike common implementations of lossless compression algorithms, the main motivation of CTM is to find algorithmic features in data (rather than just statistical regularities) that are beyond the range of application of Shannon entropy and popular lossless compression algorithms [7].
CTM is rooted in the relation [33,35] provided by algorithmic probability between the frequency of production of a string from a random program and its algorithmic complexity, as described by Equation (3). Essentially, it uses the fact that the more frequent a string is, the lower its Kolmogorov complexity, and that strings of lower frequency have higher Kolmogorov complexity. The advantage of using algorithmic probability to approximate K by application of the Coding Theorem (Equation (3)) is that $m(s)$ produces reasonable approximations to K based on an average frequency of production, which retrieves values even for small objects.
Let $(t, k)$ denote the set of all Turing machines with t states and k symbols using the Busy Beaver formalism [38], and let T be a Turing machine in $(t, k)$ with empty input. Then the empirical output distribution $D(t, k)$ for a sequence s produced by some $T \in (t, k)$ gives an estimation of the algorithmic probability of s, $D(t, k)(s)$, defined by:
$D(t, k)(s) = \frac{|\{T \in (t, k) : T \text{ produces } s\}|}{|\{T \in (t, k) : T \text{ halts}\}|}$
For small values t and k, D ( t , k ) is computable for values of the Busy Beaver problem that are known. The Busy Beaver problem [38] is the problem of finding the t-state, k-symbol Turing machine which writes a maximum number of non-blank symbols before halting, starting from an empty tape, or the Turing machine that performs a maximum number of steps before halting, having started on an initially blank tape. For t = 4 and k = 2 , for example, the Busy Beaver machine has maximum runtime S ( t ) = 107 [40], from which one can deduce that if a Turing machine with 4 states and 2 symbols running on a blank tape hasn’t halted after 107 steps, then it will never halt. This is how D was initially calculated by using known Busy Beaver values. However, because of the undecidability of the Halting problem, the Busy Beaver problem is only computable for small t , k values [38]. Nevertheless, one can continue approximating D for a greater number of states (and colours), proceeding by sampling, as described in [33,35], with an informed runtime based on both theoretical and numerical results.
Notice that $0 < \sum_s D(t, k)(s) < 1$, and $D(t, k)$ is thus said to be a semi-measure, just as $m(s)$ is.
Now we can introduce a measure of complexity that is heavily reliant on algorithmic probability m ( s ) , as follows:
Let $(t, k)$ be the space of all t-state k-symbol Turing machines, $t, k > 1$, and let $D(t, k)(s)$ be the empirical output distribution defined above, assigning a value to every finite binary string s. Then:
$CTM(s, t, k) = -\log_b D(t, k)(s)$
where b is the number of symbols in the alphabet (traditionally 2 for binary objects, which we will take as understood hereafter).
That is, the more frequently a string is produced the lower its Kolmogorov complexity, with the converse also being true.
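The following Python sketch (ours, and heavily simplified) illustrates the CTM pipeline on the tiny (2, 2) rule space: enumerate every machine in a Busy-Beaver-style formalism, run it on a blank tape with the known Busy Beaver runtime as a cut-off, build the empirical output distribution $D(t, k)$ and convert it into CTM values via $-\log_2$. The output convention (the visited portion of the tape) and other details are assumptions, so the resulting values are illustrative only and will not match the published CTM tables, which were computed over much larger rule spaces.

from collections import Counter
from itertools import product
from math import log2

def enumerate_tms(t=2, k=2):
    """All (t, k) Turing machines in a Busy-Beaver-style formalism: a machine is
    a dict mapping (state, symbol) -> (write, move, next_state), with states
    1..t, halting state 0 and moves +1/-1."""
    cells = list(product(range(1, t + 1), range(k)))
    instructions = list(product(range(k), (-1, 1), range(t + 1)))
    for choice in product(instructions, repeat=len(cells)):
        yield dict(zip(cells, choice))

def run_tm(tm, max_steps):
    """Run a machine on a blank (all-0) tape; if it halts within max_steps,
    return the visited tape segment as a string, otherwise None."""
    tape, pos, state, visited = {}, 0, 1, {0}
    for _ in range(max_steps):
        write, move, nxt = tm[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        visited.add(pos)
        state = nxt
        if state == 0:  # halting state reached
            lo, hi = min(visited), max(visited)
            return "".join(str(tape.get(i, 0)) for i in range(lo, hi + 1))
    return None  # did not halt within the step bound

def ctm_table(t=2, k=2, max_steps=6):  # S(2) = 6 is the Busy Beaver bound for (2, 2)
    """Empirical output distribution D(t, k) turned into CTM(s) = -log2 D(t, k)(s)."""
    counts = Counter()
    for tm in enumerate_tms(t, k):
        out = run_tm(tm, max_steps)
        if out is not None:
            counts[out] += 1
    total = sum(counts.values())
    return {s: -log2(n / total) for s, n in counts.items()}

ctm = ctm_table()
for s in sorted(ctm, key=ctm.get)[:5]:
    print(s, round(ctm[s], 3))  # the most frequently produced (lowest CTM) strings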
Table 1 shows the rule spaces of Turing machines that were explored, from which empirical algorithmic probability distributions were sampled and estimated.
We will designate as base string, base matrix, or base tensor the objects of size l for which CTM values were calculated, such that the full set of $k^l$ objects have CTM evaluations. In other words, the base object is the maximum granularity of application of CTM.
Table 1 provides figures relating to the number of base objects calculated.
Previous validations of CTM show the correspondence between CTM values and the exact number of instructions used by the Turing machines to produce each string when running to calculate CTM ([41], Figure 1 and Table 1), i.e., direct K complexity values for the chosen model of computation [38], as opposed to CTM values obtained via algorithmic probability and the Coding theorem. The correspondence found between the directly calculated K and the CTM values obtained by way of frequency of production was near perfect.
Results in [9] support the agreement in correlation using different rule spaces of Turing machines and different computing models altogether (cellular automata). The same paper provides a first comparison with lossless compression. The sections “Agreement in probability” and “Agreement in rank” in [9] provide further material comparing rule space (5,2) to the rule space (4,3) previously calculated in [35,36]. The section “Robustness” in [33] provides evidence relating to the behaviour of the invariance theorem constant for a standard model of Turing machines [38].

4. The Block Decomposition Method (BDM)

Because finding the program that reproduces a large object is computationally very expensive and ultimately uncomputable, one can aim at finding short programs that reproduce small fragments of the original object, parts that together compose the larger object. This is precisely what BDM does.
BDM is divided into two parts. On the one hand, approximations to K are performed by CTM, whose values can then be used and applied in $O(1)$ time by exchanging time for memory through a precomputed look-up table for small strings; the precision of this table diminishes as a function of object size (string length) unless a new iteration of CTM is precomputed. On the other hand, BDM decomposes the original data into fragments for which CTM provides an estimation and then puts the values together based on classical information theory.
BDM is thus a hybrid complexity measure that combines Shannon entropy in the long range but provides local estimations of algorithmic complexity. It is meant to improve on the properties of Shannon entropy, which in practice are reduced to finding statistical regularities, and to extend the power of CTM. It consists in decomposing objects into smaller pieces for which algorithmic complexity approximations have been numerically estimated using CTM, then reconstructing an approximation of the Kolmogorov complexity of the larger object by adding the complexity of the individual components of the object, according to the rules of information theory. For example, if s is an object and 10s is s repeated ten times, upper bounds can be achieved by approximating $K(s) + \log_2(10)$ rather than $K(10s)$, because we know that repetitions have very low Kolmogorov complexity, given that one can describe repetitions with a short algorithm.
Here we introduce and study the properties of this Block Decomposition Method based on a method advanced in [33,35] that takes advantage of the powerful relationship established by algorithmic probability between the frequency of a string produced by a random program running on a (prefix-free) UTM and the string’s Kolmogorov complexity. The chief advantage of this method is that it deals with small objects with ease, and it has shown stability in the face of changes of formalism, producing reasonable Kolmogorov complexity approximations. BDM must be combined with CTM if it is to scale up properly and behave optimally for upper bounded estimations of K. BDM + CTM is universal in the sense that it is guaranteed to converge to K due to the invariance theorem, and as we will prove later, if CTM no longer runs, then BDM alone approximates the Shannon entropy of a finite object.
Like compression algorithms, BDM is subject to a trade-off. Compression algorithms deal with the trade-off between compression power and compression/decompression speed; BDM trades a loss of accuracy for the efficiency gained by reusing a precomputed CTM look-up table (exchanging time for memory).

4.1. l-Overlapping String Block Decomposition

Let us fix values for t and k and let $D(t, k)$ be the frequency distribution constructed from running all the Turing machines with t states and k symbols. Following Equation (5), we have that $-\log_2 D$ is an approximation of K (denoted by CTM). We define the BDM of a string or finite sequence s as follows:
$BDM(s, l, m) = \sum_i CTM(s_i, t, k) + \log(n_i)$
where $n_i$ is the multiplicity of $s_i$, and $s_i$ is the i-th subsequence obtained from the decomposition of s into subsequences of length l, with a possible remainder sequence y with $|y| < l$ if $|s|$ is not a multiple of the decomposition length l.
The parameter m goes from 1 to the maximum string length produced by CTM, where $m = l$ means no overlapping, inducing a partition in the decomposition of s; m is thus an overlapping parameter when $m < l$, whose impact on BDM we will investigate (in general, the smaller m, the greater the overestimation by BDM).
The parameter m is needed because of the remainder. If $|s|$ is not a multiple of the decomposition length l, then the options are either to ignore the remainder in the calculation of BDM or to define a sliding window with overlap $m < l$.
The choice of t and k for CTM in BDM depends only on the available resources for running CTM, which involves running the entire $(t, k)$ space of Turing machines with t states and k symbols.
BDM approximates K in the following way: if $p_i$ is the minimal program that generates each base string $s_i$, then $CTM(s_i) \approx |p_i|$ and we can define a unique program q that runs each $p_i$, obtaining all the building blocks. How many times each block is present in s can be given in $\log n_i$ bits. Therefore, BDM is the sum of the information needed to describe the decomposition of s into base strings. How close this sum is to K is explored in Section 4.3.
The definition of BDM is interesting because one can plug in other algorithmic distributions, even computable ones, approximating some measure of algorithmic complexity even if it is not the one defined by Kolmogorov–Chaitin such as, for example, the one defined by Calude et al. [23] based on finite-state automata. BDM thus allows the combination of measures of classical information theory and algorithmic complexity.
For example, for binary strings we can use $t = 2$ and $k = 2$ to produce the empirical output distribution $(2, 2)$ of all machines with 2 symbols and 2 states, by which all strings of size $l = 12$ are produced, except two (one string and its complement). However, we assign these the value $\max\{CTM(y, 2, 2) + r : |y| = 12\}$, where $r > 0$, because the missing strings were not generated in $(2, 2)$ and therefore have greater algorithmic randomness than any other string of the same length produced in $(2, 2)$. Then, for $l = 12$ and $m = 1$, $BDM(s, l, m)$ decomposes $s = 010101010101010101$ of length $|s| = 18$ into the following subsequences:
010101010101 101010101010 010101010101 101010101010 010101010101 101010101010 010101010101
with 010101010101 having multiplicity 4 and 101010101010 multiplicity 3.
We then get the CTM values for these sequences:
C T M ( 010101010101 , 2 , 2 ) = 26.99073
C T M ( 101010101010 , 2 , 2 ) = 26.99073
To calculate BDM, we then take the sum of the CTM values plus the sum of the log b of the multiplicities, with b = 2 because the string alphabet is 2, the same as the number of symbols in the set of Turing machines producing the strings. Thus:
$\log_2(3) + \log_2(4) + 26.99 + 26.99 = 57.566$
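A minimal Python sketch (ours) of the string BDM just described, reproducing the worked example above; the two CTM values are those quoted in the text, and the function and table names are illustrative.

from collections import Counter
from math import log

def decompose(s, l, m):
    """Slide a window of length l over s with offset m (m = l gives a partition
    with no overlap); boundary leftovers shorter than l are not produced here."""
    return [s[i:i + l] for i in range(0, len(s) - l + 1, m)]

def bdm_string(s, ctm, l=12, m=1, b=2):
    """Sum of CTM(s_i) + log_b(n_i) over the distinct blocks s_i of the
    decomposition, with n_i the multiplicity of block s_i."""
    blocks = Counter(decompose(s, l, m))
    return sum(ctm[block] + log(n, b) for block, n in blocks.items())

# CTM values quoted in the worked example above
ctm_values = {"010101010101": 26.99073, "101010101010": 26.99073}

s = "010101010101010101"                    # |s| = 18
print(round(bdm_string(s, ctm_values), 3))  # 57.566, as in the text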

4.2. 2- and w-Dimensional Complexity

To ask after the likelihood of an array, we can consider a 2-dimensional Turing machine. The Block Decomposition Method can then be extended to objects beyond the unidimensionality of strings, e.g., arrays representing bitmaps such as images, or graphs (by way of their adjacency matrices). We would first need CTM values for 2- and w-dimensional objects that we call base objects (e.g., base strings or base matrices).
A popular example of a 2-dimensional tape Turing machine is Langton’s ant [42]. Another way to see this approach is to take the BDM as a way of deploying all possible 2-dimensional deterministic Turing machines of a small size in order to reconstruct the adjacency matrix of a graph from scratch (or smaller pieces that fully reconstruct it). Then, as with the Coding theorem method (above), the algorithmic complexity of the adjacency matrix of the graph can be estimated via the frequency with which it is produced from running random programs on the (prefix-free) 2-dimensional Turing machine. More specifically,
$BDM(X, \{x_i\}) = \sum_{(r_i, n_i) \in Adj(X)_{\{x_i\}}} CTM(r_i) + \log(n_i),$
where the set $Adj(X)_{\{x_i\}}$ is composed of the pairs $(r, n)$: r is an element of the decomposition of X (as specified by a partition $\{x_i\}$, where $x_i$ is a submatrix of X) into different sub-arrays of size up to $d_1 \times \cdots \times d_w$ (where w is the dimension of the object), which we call base matrices (because CTM values were obtained for them), and n is the multiplicity of each component. $CTM(r)$ is a computable approximation from below to the algorithmic information complexity of r, $K(r)$, as obtained by applying the Coding Theorem Method to w-dimensional Turing machines. In other words, $\{r_i\}$ is the set of base objects.
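A sketch (ours) of the 2-dimensional case of Equation (8), assuming a precomputed table ctm2d of CTM values for $d \times d$ base matrices (such a table is not constructed here); the decomposition uses non-overlapping blocks and simply trims leftovers, one of the boundary strategies discussed in Section 5.

from collections import Counter
from math import log2

def blocks_2d(matrix, d):
    """Decompose a matrix (list of rows) into non-overlapping d x d blocks,
    trimming any leftover rows/columns (the trimming boundary condition)."""
    rows, cols = len(matrix), len(matrix[0])
    return [tuple(tuple(matrix[i + r][j + c] for c in range(d)) for r in range(d))
            for i in range(0, rows - d + 1, d)
            for j in range(0, cols - d + 1, d)]

def bdm_2d(matrix, ctm2d, d=4):
    """2-dimensional BDM: sum of CTM(r_i) + log(n_i) over the distinct
    d x d base matrices r_i with multiplicities n_i."""
    counts = Counter(blocks_2d(matrix, d))
    return sum(ctm2d[block] + log2(n) for block, n in counts.items())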
Because string block decomposition is a special case of matrix block decomposition, and square matrix block decomposition is a special case of w-block decomposition for objects of w dimensions, let us describe the way in which BDM deals with boundaries on square matrices, for which we can assume CTM values are known, and that we call base strings or base matrices.
Figure 4 shows that the number of permutations is a function of the complexity of the original object, with the number of permutations growing in proportion to the original object’s entropy, because the number of different resulting blocks determines the number of different objects n to distribute among the size of the original object (e.g., 3 among 3 in Figure 4 (top), or only 2 different 4 × 4 blocks in Figure 4 (bottom)). This means that the non-overlapping version of BDM is not invariant vis-à-vis the variation of the entropy of the object, on account of which it has a different impact on the error introduced in the estimation of the algorithmic complexity of the object. Thus, for non-overlapping objects of low complexity the impact will be small, but for random objects non-overlapping decomposition increases inaccuracy. Overlapping decomposition solves this particular permutation issue by decreasing the number of possible permutations, in order to avoid trivially assigning the same BDM values. However, overlapping has the undesired effect of systematically overestimating values of algorithmic complexity by counting almost every object of size n a total of $n - 1$ times, hence overestimating at a rate of about $n(n - 1)$ for high complexity objects, for which the block multiplicity will be low, and by $n \log(n)$ for low complexity objects.
Applications to graph theory [18], image classification [10] and human behavioural complexity have been produced in the last few years [11,12].

4.3. BDM Upper and Lower Absolute Bounds

In what follows we show the hybrid nature of the measure. We do this by setting lower and upper bounds to BDM in terms of the algorithmic complexity K ( X ) , the partition size and the approximation error of C T M , such that these bounds are tighter in direct relation to smaller partitions and more accurate approximations of K. These bounds are independent of the partition strategy defined by { x i } .
Proposition 1.
Let $BDM$ be the function defined in Equation (8) and let X be an array of dimension w. Then $K(X) \leq BDM(X, \{x_i\}) + O(\log^2 |A|) + \epsilon$ and $BDM(X, \{x_i\}) \leq |Adj(X)_{\{x_i\}}| K(X) + O(|Adj(X)_{\{x_i\}}| \log |Adj(X)_{\{x_i\}}|) - \epsilon$, where A is a set composed of all possible ways of accommodating the elements of $Adj(X)_{\{x_i\}}$ in an array of dimension w, and ϵ is the sum of the errors of the CTM approximation over all the sub-arrays used.
Proof. 
Let $Adj(X)_{\{x_i\}} = \{(r_1, n_1), \ldots, (r_k, n_k)\}$ and let $\{p_j\}$, $\{t_j\}$ be the sequences of programs for the reference prefix-free UTM U such that, for each $(r_j, n_j) \in Adj(X)_{\{x_i\}}$, we have $U(p_j) = r_j$, $U(t_j) = n_j$, $K(r_j) = |p_j|$ and $|t_j| \leq 2\log(n_j) + c$. Let $\epsilon_j$ be a positive constant such that $CTM(r_j) + \epsilon_j = K(r_j)$; this is the error for each sub-array. Let ϵ be the sum of all these errors.
For the first inequality we can construct a program $q_w$, whose description only depends on w, such that, given a description of the set $Adj(X)_{\{x_i\}}$ and an index l, it enumerates all the ways of accommodating the elements in the set and returns the array corresponding to the position given by l.
Please note that $|l|$, $|Adj(X)_{\{x_i\}}|$ and all the $n_j$’s are of the order of $\log |A|$. Therefore $U(q_w p_1 t_1 \cdots p_k t_k l) = X$ and
$K(X) \leq |q_w p_1 t_1 \cdots p_k t_k l| \leq |q_w| + \sum_{1}^{k} (|t_j| + |p_j|) + |l| \leq BDM(X, \{x_i\}) + \epsilon + |q_w| + (\log |A| + c)\,|Adj(X)_{\{x_i\}}| + O(\log |A|) \leq BDM(X, \{x_i\}) + O(\log^2 |A|) + \epsilon,$
which gives us the inequality.
Now, let $q_X$ be the smallest program that generates X. For the second inequality we can describe a program $q_{\{x_i\}}$ which, given a description of X and the index j, constructs the set $Adj(X)_{\{x_i\}}$ and returns $r_j$, i.e., $U(q_{\{x_i\}} q_X j) = r_j$. Please note that each $|j|$ is of the order of $\log |Adj(X)_{\{x_i\}}|$. Therefore, for each j we have $K(r_j) + \epsilon_j = |p_j| \leq |q_{\{x_i\}}| + |q_X| + O(\log |Adj(X)_{\{x_i\}}|)$ and $K(r_j) + \epsilon_j + \log(n_j) \leq |q_{\{x_i\}}| + |q_X| + O(\log |Adj(X)_{\{x_i\}}|) + \log(n_j)$. Finally, by adding all the terms over the j’s we find the second inequality:
$BDM(X, \{x_i\}) + \epsilon \leq |Adj(X)_{\{x_i\}}| \left( |q_X| + |q_{\{x_i\}}| + \log(n_j) + O(\log |Adj(X)_{\{x_i\}}|) \right) \leq |Adj(X)_{\{x_i\}}| K(X) + O(|Adj(X)_{\{x_i\}}| \log |Adj(X)_{\{x_i\}}|).$
 ☐
Corollary 1.
If the partition defined by $\{x_i\}$ is small, that is, if $|Adj(X)_{\{x_i\}}|$ is close to 1, then $BDM(X, \{x_i\}) \approx K(X)$.
Proof. 
Given the inequalities presented in Proposition 1, we have that
$K(X) - O(\log^2 |A|) - \epsilon \leq BDM(X, \{x_i\})$
and
$BDM(X, \{x_i\}) \leq |Adj(X)_{\{x_i\}}| K(X) + O(|Adj(X)_{\{x_i\}}| \log |Adj(X)_{\{x_i\}}|) + \epsilon,$
which at the limit leads to $K(X) - \epsilon \leq BDM(X) \leq K(X) + \epsilon$, that is, $BDM(X) = K(X) \pm \epsilon$. From [33], we can say that the error rate ϵ is small, and that by the invariance theorem it will converge towards a constant value. ☐

5. Dealing with Object Boundaries

Because partitioning an object—a string, array or tensor—leads to boundary leftovers that are not multiples of the partition length, the only two options for taking such boundaries into consideration in the estimation of the algorithmic complexity of the entire object are either to estimate the complexity of the leftovers or to define a sliding window allowing overlapping, so as to include the leftovers in some of the block partitions. The former implies mixing object dimensions that may be incompatible (e.g., CTM complexity based on 1-dimensional TMs versus CTM based on higher-dimensional TMs). Here we explore these strategies for dealing with object boundaries, and we introduce a strategy for partition minimization and base-object size maximization that we will illustrate for 2 dimensions. The strategies are intended to overcome under- or over-fitting complexity estimations that are due to conventions, not just to technical limitations (due to, e.g., uncomputability and intractability).
Notice that some of the explorations in this section may give the impression of introducing ad-hoc methods to deal with object boundaries. However, this is not the case. What we do in this section is explore all the ways we could conceive of estimating K according to BDM while taking into consideration boundaries that may require special treatment when their size is not a multiple of the partition length used in the decomposition of the data by BDM. Moreover, we show that in all cases the results are robust, because the errors found are convergent and can thus be corrected, so any apparently ad-hoc condition has little to no effect on the calculation of BDM in the limit, and only a very limited one at the beginning.

5.1. Recursive BDM

In Section 4.3, we have shown that using smaller partitions for B D M yields more accurate approximations to the algorithmic complexity K. However, the computational costs for calculating C T M are high. We have compiled an exhaustive database for square matrices of size up to 4 × 4 . Therefore it is in our best interest to find a method to minimize the partition of a given matrix into squares of size up to d × d = l for a given l.
The strategy consists in taking the largest submatrix that is a multiple of $d \times d$ at one corner and dividing it into adjacent square submatrices of the given size. Then we group the remaining cells into residual submatrices and apply the same procedure, but now for $(d - 1) \times (d - 1)$. We continue until reaching submatrices of size $1 \times 1$.
Let X be a matrix of size $m \times n$ with $m, n \geq d$. Let us denote by quad = {UL, LL, DR, LR} the set of quadrants of a matrix and by $\text{quad}^d$ the set of vectors of quadrants of dimension d. We define a function $part(X, d, q_i)$, where $\langle q_1, \ldots, q_d \rangle \in \text{quad}^d$, as follows:
$part(X, d, q_i) = max(X, d, q_i) \;\cup\; part(res_L(X, d, q_i), d - 1, q_{i+1}) \;\cup\; part(res_R(X, d, q_i), d - 1, q_{i+1}) \;\cup\; part(res_{LR}(X, d, q_i), d - 1, q_{i+1}),$
where $max(X, d, q_i)$ is the largest set of adjacent submatrices of size $d \times d$ that can be clustered in the corner corresponding to the quadrant $q_i$, $res_R(X, d - 1, q_i)$ is the submatrix composed of all the adjacent rightmost cells that could not fit in $max(X, d, q_i)$ and are not part of the leftmost cells, $res_L(X, d - 1, q_i)$ is the analogue for the leftmost cells, and $res_{LR}(X, d - 1, q_i)$ is the submatrix composed of the cells belonging to both the rightmost and leftmost cells. We call the last three submatrices residual matrices.
By symmetry, the number of matrices generated by the function is invariant with respect to the choice of the vector of quadrants $q_1, \ldots, q_d$. However, the final BDM value can (and will) vary according to the partition chosen. Nevertheless, with this strategy we can evaluate all the possible BDM values for a given partition size and choose the partition that yields the minimum value, the maximum value, or compute the average over all possible partitions.
The partition strategy described can easily be generalized and applied to strings (1 dimension) and tensors (objects of n-dimensions).
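A simplified Python sketch (ours) of the recursive partition strategy just described: the largest top-left region that is a multiple of $d \times d$ is cut into $d \times d$ blocks, and the right, bottom and corner residuals are decomposed recursively with block size $d - 1$, down to $1 \times 1$; the quadrant bookkeeping of $part(X, d, q_i)$ is omitted (only one corner is used), so this illustrates the idea rather than the full procedure.

def recursive_partition(matrix, d):
    """Recursively partition a matrix into square blocks, largest first:
    d x d blocks over the largest top-left region that fits, then the
    right/bottom/corner residuals with block size d - 1, down to 1 x 1."""
    rows = len(matrix)
    cols = len(matrix[0]) if rows else 0
    if rows == 0 or cols == 0:
        return []
    if d == 1:
        return [((matrix[i][j],),) for i in range(rows) for j in range(cols)]
    R, C = (rows // d) * d, (cols // d) * d      # largest region multiple of d
    blocks = [tuple(tuple(matrix[i + r][j + c] for c in range(d)) for r in range(d))
              for i in range(0, R, d) for j in range(0, C, d)]
    right  = [row[C:] for row in matrix[:R]]     # residual columns on the right
    bottom = [row[:C] for row in matrix[R:]]     # residual rows at the bottom
    corner = [row[C:] for row in matrix[R:]]     # residual corner
    for res in (right, bottom, corner):
        if res and res[0]:
            blocks += recursive_partition(res, d - 1)
    return blocks

M = [[1, 0, 1, 0, 1],
     [0, 1, 0, 1, 0],
     [1, 1, 1, 1, 1]]                            # a 3 x 5 matrix
print(len(recursive_partition(M, 2)))            # 9 blocks: two 2 x 2 plus seven 1 x 1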

5.2. Periodic Boundary Conditions

One way to avoid having remaining matrices of different sizes (from strings to tensors) is to embed the matrix in a topological torus (see Figure 5), such that no more object borders are found. Let X be a square matrix of arbitrary size m. We screen the matrix X over all possible combinations so as to minimize the number of partitions while maximizing block size. We then take the combination with smallest BDM for fixed base matrix size d, and we repeat for $d - 1$ until we have added all the components of the decomposed X. This procedure will, however, overestimate the complexity values of all objects (in unequal fashion along the complexity spectrum), but the overestimation remains bounded, as we will show in Section 7.
Without loss of generality the strategy can be applied to strings (1 dimension) and tensors (any larger number of dimensions, e.g., greater than 2), the former embedded in a cylinder while tensors can be embedded in n-dimensional tori (see Figure 5).
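A sketch (ours) of the periodic boundary condition in the 2-dimensional case: blocks are extracted with wrap-around (torus) indexing so that every cell is covered whatever the matrix size; the offset parameter plays the role of m, and the exhaustive screening over partitions described above is not performed here.

def cyclic_blocks_2d(matrix, d, offset=1):
    """Extract d x d blocks with wrap-around (torus) indexing, so no boundary
    leftovers remain regardless of the size of the matrix (periodic boundary
    condition); offset = d gives non-overlapping starting positions."""
    rows, cols = len(matrix), len(matrix[0])
    return [tuple(tuple(matrix[(i + r) % rows][(j + c) % cols] for c in range(d))
                  for r in range(d))
            for i in range(0, rows, offset)
            for j in range(0, cols, offset)]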

6. BDM versus Shannon Entropy

Let us address the task of quantifying how many strings with maximum entropy rate are actually algorithmically compressible, i.e., have low algorithmic complexity. That is, how many strings are actually algorithmically (as opposed to simply statistically) compressible but are not compressed by lossless compression algorithms, which are statistical (entropy rate) estimators [7]. We know that most strings have both maximal entropy (most strings look equally statistically disordered, a fact that constitutes the foundation of thermodynamics) and maximal algorithmic complexity (according to a pigeonhole argument, most binary strings cannot be matched to shorter computer programs, as these are also binary strings). However, the number of strings with maximal entropy but low algorithmic randomness grows and is infinite in the limit (for unbounded string sequences). That is, there is an infinite number of sequences that have maximal entropy but low algorithmic complexity.
The promise of BDM is that, unlike compression, it does identify some cases of strings with maximum entropy that actually have low algorithmic complexity. Figure 6 and Figure A1 show that indeed BDM assigns lower complexity to more strings than entropy, as expected. Unlike entropy, and implementations of lossless compression algorithms, BDM recognizes some strings that have no statistical regularities but have algorithmic content that makes them algorithmically compressible.
Examples of strings with lower algorithmic randomness than that assigned by entropy and block entropy are 101010010101 (and its complement) and the Thue–Morse sequence 011010011001… (and its complement), obtained by starting with 0 and successively appending the Boolean complement [43]; the first has a low CTM value of 29 and the Thue–Morse sequence a CTM value of 33.13 (the maximum CTM value in the subset is 37.4, for the last string in this table). The Thue–Morse sequence is uniformly recurrent without being periodic, not even eventually periodic, so it will remain with high entropy and block entropy.
CTM and BDM as functions of the object’s size (and therefore the size of the Turing machine rule space that has to be explored) have the following time complexity:
  • CTM is uncomputable, but for decidable cases it runs in exponential time.
  • Non-overlapping string BDM and LD run in linear time, and in polynomial time $n^d$ for d-dimensional objects.
  • Overlapping BDM runs in n s time with m the overlapping offset.
  • Full overlapping with m = 1 runs in 2 n polynomial time as a function of the number of overlapping elements n.
  • Smooth BDM runs in linear time.
  • Mutual Information BDM runs in exponential time for strings and in d-exponential time for dimension d.
So how does this translate into real profiling power of recursive strings and sequences that are of low algorithmic complexity but appear random to classical information theory? Figure 6, and Figure A1 in the Appendix A provide real examples showing how BDM can outperform the best versions of Shannon entropy.
Figure 6 shows the randomness estimations of two known low-algorithmic-complexity objects, and the CTM-to-BDM transitions, for the mathematical constant π and the Thue–Morse sequence, to both of which numerical estimations by CTM assign lower randomness than that suggested by both entropy and its best version, block entropy. It is to be expected that CTM does much better at characterizing the low algorithmic randomness of a sequence like the Thue–Morse sequence (beyond the fact that it is not Borel normal [43]), given that every part of the sequence is algorithmically obtained from another part of the sequence (by logical negation, or by the substitution system 0→01, 1→10 starting from 0), while the digits of π have been shown to be independent of each other (at least in powers of 2) [44] and are algorithmic only in the way they are produced from any of the many known generating formulas.
One of the most recently found formulas producing any digit of any segment of the mathematical constant π (in base $2^k$) is given by a very short symbolic summation [44]: $\sum_{n=1}^{\infty} \left( \frac{4}{8n+1} - \frac{2}{8n+4} - \frac{1}{8n+5} - \frac{1}{8n+6} \right) / k^n$. Producing such a string of 80 ASCII characters (less than 1K bits) by chance, by typing “random formulae”, has a probability of $1/256^{83} \times f$, or about $1.30642 \times 10^{-200} \times f$, where f is a multiplying factor quantifying the number of other formulas of fixed (small) size that can also produce π, many of which have been known since the works of Vieta, Leibniz, Wallis, Euler, Ramanujan, and others. In theory, this classical probability diverges exponentially from the much higher algorithmic probability; $1/2^n$ is the classical probability of producing an initial segment of π (in binary) of length n by chance. A good source of formulae producing digits of π can be found at, e.g., the Online Encyclopedia of Integer Sequences (OEIS) (https://oeis.org/A000796), listing more than 50 references, and at Wolfram MathWorld, which lists around a hundred (http://mathworld.wolfram.com/PiFormulas.html).
Unlike classical probability, algorithmic probability quantifies the production likelihood of the object (Figure 6, and Figure A1 in the Appendix A) by indirect algorithmic/recursive means rather than by direct production (the typical analogy is writing on a computer equipped with a language compiler program versus writing on a typewriter).
To the authors’ knowledge, no other numerical method is known to suggest the low algorithmic randomness of statistically random-looking constants such as π and the Thue–Morse sequence (Figure 6, and Figure A1 in Appendix A) from an observer’s perspective, with no access to prior information, probability distributions, or knowledge about the nature of the source (i.e., whether it is a priori deterministic).
There are two sides to BDM. On the one hand, it is based on a non-computable function, but once approximations are computed we build a lookup table of values, which makes BDM computable. Lossless compression is also computable, yet it is taken as able to estimate an uncomputable function like K because it can provide upper bounds and estimate K from above, just as we do with CTM (and thus BDM) by exhibiting a short Turing machine capable of reproducing the data/string.
Table 2 summarizes the ranges of application, with CTM and BDM preeminent in that they can more efficiently deal with short, medium and long sequences, as well as with other objects such as graphs, images, networks and higher-dimensional objects.

7. Error Estimations

One can estimate the error in different calculations of BDM, regardless of the error estimations of CTM (quantified in [33,35]), in order to calculate their departure and deviation both from granular entropy and algorithmic complexity, for which we know lower and upper bounds. For example, a maximum upper bound for binary strings is the length of the strings themselves. This is because no string can have an algorithmic complexity greater than its length, simply because the shortest computer program (in bits) to produce the string may be the string itself.
In the calculation of BDM, when an object’s size is not a multiple of the base object of size d, boundaries of size < d will be produced, and there are various ways of dealing with them to arrive at a more accurate calculation of an object that is not a multiple of the base. First we will estimate the error introduced by ignoring the boundaries or dealing with them in various ways, and then we will offer alternatives to take into consideration in the final estimation of their complexity.
If a matrix X of size k × j is not a multiple of the base matrix of size d × d, it can be divided into a set of decomposed blocks of size d × d, plus residual matrices R, L, T and B on the right, left, top and bottom boundaries of X, all of size smaller than d.
Then boundaries R, L, T and B can be dealt with in the following way:
  • Trimming boundary condition: R, L, T and B are ignored, i.e., $BDM(X)$ is computed on X with R, L, T and B removed, with the undesired effect of a general underestimation for objects that are not multiples of d. The error introduced (see Figure 7) is bounded between 0 (for matrices divisible by d) and $k^2/\exp(k)$, where k is the size of X. The error is thus convergent ($\exp(k)$ grows much faster than $k^2$), can therefore be corrected, and is negligible as a function of array size, as shown in Figure 7.
  • Cyclic boundary condition (Figure 5 bottom): The matrix is mapped onto the surface of a torus so that there are no boundaries left, and the application of the overlapping BDM version takes every part of the object into consideration. This produces an overestimation of the complexity of the object but will, for the most part, respect the ranking order of the estimations if the same overlapping values are used, with a maximum overestimation of $(d-1) \times \max\{CTM(b) : b \in X\}$, where $\max\{CTM(b) : b \in X\}$ is the maximum CTM value among all base matrices b obtained in the decomposition of X.
  • Full overlapping recursive decomposition: X is decomposed into $(d-1)^2$ base matrices of size d × d by traversing X with a sliding square block of size d. This produces a polynomial overestimation in the size of the object of up to $(d-1)^2$, but if consistently applied it will, for the most part, preserve the ranking.
  • Adding low complexity rows and columns (we call this “add col”): If a matrix of interest is not a multiple of the base matrix size, we add rows and columns until it is completed to the next multiple of the base matrix, and then correct the final result by subtracting the artificially added borders (see the sketch after this list).
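The following sketch (Python with NumPy) illustrates only the completion step of the “add col” strategy: it pads a matrix with constant (low complexity) rows and columns up to the next multiple of the base size d; the subsequent BDM calculation and the subtraction of the artificial borders are assumed to happen elsewhere.

```python
import numpy as np

def pad_to_multiple(X, d, fill=0):
    """Pad X with constant (low-complexity) rows and columns so that both
    dimensions become multiples of the base size d."""
    rows, cols = X.shape
    pad_r = (-rows) % d
    pad_c = (-cols) % d
    return np.pad(X, ((0, pad_r), (0, pad_c)), constant_values=fill)

X = np.random.randint(0, 2, size=(10, 7))
print(pad_to_multiple(X, 4).shape)  # (12, 8)
```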
The BDM error rate (see Figure 8) is the discrepancy of the sum of the complexities of the missed borders, which is an additive value of, at most, polynomial growth. The error is not uniform across objects of different complexity. For a tensor of d dimensions with side length k, the error is bounded by $\log(k^d)$ for objects with low algorithmic randomness (e.g., a tensor with all entries equal to 1) and by $k^d/d^k$ for objects with high algorithmic randomness.
Ultimately there is no optimal strategy for making the error disappear, but in some cases the error can be estimated and corrected (Figure 8), and in all cases it is convergent (Figure 9) and hence asymptotically negligible; moreover, in all cases the complexity ranking is preserved and under- and over-estimations are bounded.

7.1. BDM Worst-Case Convergence towards Shannon Entropy

Let { x i } be a partition of X defined as in the previous sections for a fixed d. Then the Shannon entropy of X for the partition { x i } is given by:
$$H_{\{x_i\}}(X) = -\sum_{(r_j, n_j) \in Adj(X)_{\{x_i\}}} \frac{n_j}{|\{x_i\}|} \log\left(\frac{n_j}{|\{x_i\}|}\right),$$
where $P(r_j) = \frac{n_j}{|\{x_i\}|}$ and the array $r_j$ is taken as a symbol itself. The following proposition establishes the asymptotic relationship between $H_{\{x_i\}}$ and $BDM$.
Proposition 2.
Let X be a 2-dimensional matrix and $\{x_i\}$ a partition strategy with elements of maximum size d × d. Then:
$$|BDM_{\{x_i\}}(X) - H_{\{x_i\}}(X)| \leq O(\log(|\{x_i\}|))$$
Proof. 
First we note that $\sum_j n_j = |\{x_i\}|$ and, given that the set of matrices of size d × d is finite and so is the maximum value of $CTM(r_j)$, there exists a constant $c_d$ such that $\sum_{(r_j, n_j) \in Adj(X)_{\{x_i\}}} CTM(r_j) < c_d$. Therefore:
$$\begin{aligned}
BDM_{\{x_i\}}(X) - H_{\{x_i\}}(X) &= \sum_j \left( CTM(r_j) + \log(n_j) + \frac{n_j}{|\{x_i\}|} \log\left(\frac{n_j}{|\{x_i\}|}\right) \right) \\
&\leq c_d + \sum_j \left( \log(n_j) + \frac{n_j}{|\{x_i\}|} \log\left(\frac{n_j}{|\{x_i\}|}\right) \right) \\
&= c_d + \sum_j \left( \log(n_j) - \frac{n_j}{|\{x_i\}|} \log\left(\frac{|\{x_i\}|}{n_j}\right) \right) \\
&= c_d + \frac{1}{|\{x_i\}|} \sum_j \left( |\{x_i\}| \log(n_j) - n_j \log\left(\frac{|\{x_i\}|}{n_j}\right) \right) \\
&= c_d + \frac{1}{|\{x_i\}|} \sum_j \log\left( \frac{n_j^{|\{x_i\}| + n_j}}{|\{x_i\}|^{n_j}} \right)
\end{aligned}$$
Now, recall that the sum of the $n_j$’s is bounded by $|\{x_i\}|$. Therefore there exists a constant $c'_d$ such that
$$\frac{1}{|\{x_i\}|} \sum_j \log\left( \frac{n_j^{|\{x_i\}| + n_j}}{|\{x_i\}|^{n_j}} \right) \leq \frac{c'_d}{|\{x_i\}|} \log\left( \frac{|\{x_i\}|^{|\{x_i\}| + \frac{c'_d}{|\{x_i\}|}}}{|\{x_i\}|^{\frac{c'_d}{|\{x_i\}|}}} \right) = \frac{c'_d}{|\{x_i\}|} \log\left( |\{x_i\}|^{|\{x_i\}|} \right) = c'_d \log(|\{x_i\}|).$$
 ☐
Now, it is important to note that the previous proof sets the limit in terms of the constant $c_d$, whose minimum value is determined by the set of matrices for which the CTM value has been computed. The smaller this constant, the tighter the bound set by Proposition 2. Therefore, in the worst case, that is, when CTM has been computed only for a comparatively small number of matrices, or when the larger base matrices have small algorithmic complexity, the behaviour of BDM is similar to that of entropy. In the best case, when CTM is updated by any means, BDM approximates algorithmic complexity (Corollary 1).
Furthermore, we can think of $c_d \log(|\{x_i\}|)$ as a measure of the information deficit incurred by entropy and BDM with respect to each other. Entropy is missing the number of base objects needed to approximate the compression length of X, while BDM is missing the position of each base symbol. In addition, giving more information to both measures will not necessarily yield a better approximation to K.
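For concreteness, the following is a small sketch (Python) of the block entropy $H_{\{x_i\}}(X)$ defined above, where the partition is given as a list of decomposed blocks and each distinct block is treated as a single symbol; this illustrates the definition and is not the released implementation.

```python
from collections import Counter
from math import log2

def block_entropy(blocks):
    """Shannon entropy of a partition: each distinct block is one symbol."""
    counts = Counter(blocks)
    total = len(blocks)
    return -sum((n / total) * log2(n / total) for n in counts.values())
```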

8. Normalized BDM

A normalized version of BDM is useful for applications in which a maximal value of complexity is known or desired for comparison purposes. The chief advantage of a normalized measure is that it enables a comparison among objects of different sizes, without allowing size to dominate the measure. This will be useful in comparing arrays and objects of different sizes. First, for a square array of size n × n , we define:
$$MinBDM(n)_{d\times d} = \lfloor n/d \rfloor + \min_{x \in M_d(\{0,1\})} CTM(x)$$
where $M_d(\{0,1\})$ is the set of binary matrices of size d × d. For any n, $MinBDM(n)_{d\times d}$ returns the minimum value of Equation (10) for square matrices of size n, so it is the minimum BDM value for the adjacency matrix of a graph with n nodes. It corresponds to an adjacency matrix composed of repetitions of the least complex d × d square: the all-1 or all-0 matrix, since $0_{d,d}$ and $1_{d,d}$ are the least complex (hence the most compressible) square base matrices of size d.
Secondly, for the maximum complexity, Equation (10) returns the highest value when the result of dividing the adjacency matrix into the d × d base matrices contains the highest possible number of different matrices (to increase the sum of the right terms in Equation (10)) and the repetitions (if necessary) are homogeneously distributed along those squares (to increase the sum of the left terms in Equation (10)) which should be the most complex ones in M d ( { 0 , 1 } ) . For n , d N , we define a function
$$f_{n,d}: M_d(\{0,1\}) \longrightarrow \mathbb{N}$$
that verifies:
$$\sum_{r \in M_d(\{0,1\})} f_{n,d}(r) = \lfloor n/d \rfloor^2$$
$$\max_{r \in M_d(\{0,1\})} f_{n,d}(r) \leq 1 + \min_{r \in M_d(\{0,1\})} f_{n,d}(r)$$
$$CTM(r_i) > CTM(r_j) \Rightarrow f_{n,d}(r_i) \geq f_{n,d}(r_j)$$
The value $f_{n,d}(r)$ indicates the number of occurrences of $r \in M_d(\{0,1\})$ in the decomposition into d × d squares of the most complex square array of size n × n. Condition (11) establishes that the total number of component squares is $\lfloor n/d \rfloor^2$. Condition (12) reduces the square repetitions as much as possible, so as to increase the number of different component squares as far as possible and distribute them homogeneously. Finally, condition (13) ensures that the most complex squares are the best represented. Then, we define:
$$MaxBDM(n)_{d\times d} = \sum_{\substack{r \in M_d(\{0,1\}) \\ f_{n,d}(r) > 0}} \log_2(f_{n,d}(r)) + CTM(r)$$
Finally, given a square matrix X of size n, the normalized BDM value $NBDM(X)_d$ is defined as
$$NBDM(X)_d = \frac{BDM(X) - MinBDM(n)_{d\times d}}{MaxBDM(n)_{d\times d} - MinBDM(n)_{d\times d}}$$
This way we take the complexity of an array X to have a normalized value which does not depend on the size of X but rather on the relative complexity of X with respect to other arrays of the same size. Figure 10 provides an example of high complexity for illustration purposes. The use of $MinBDM(n)_{d\times d}$ in the normalization is relevant. Please note that the growth of $MinBDM(n)_{d\times d}$ is linear with n, while the growth of $MaxBDM(n)_{d\times d}$ is exponential. This means that for high complexity matrices, the result of normalizing by using just $BDM(X)/MaxBDM(n)_{d\times d}$ would be similar to $NBDM(X)_d$. However, it would not work for low complexity arrays, since when the complexity of X is close to the minimum, the value of $BDM(X)/MaxBDM(n)_{d\times d}$ drops exponentially with n. For example, the normalized complexity of an empty array (all 0s) would drop exponentially with its size. To avoid this, Equation (14) considers not only the maximum but also the minimum.
Notice the heuristic character of $f_{n,d}$. It is designed to ensure a quick computation of $MaxBDM(n)_{d\times d}$, and the distribution of complexities of squares of size $d \in \{3, 4\}$ in $D(5,2)$ ensures that $MaxBDM(n)_{d\times d}$ is actually the maximum complexity of a square matrix of size n, but for other distributions it could work differently. For example, condition (12) assumes that the complexities of the elements in $M_d(\{0,1\})$ are similar. This is the case for $d \in \{3, 4\}$ in $D(5,2)$, but it may not hold for other distributions. However, at any rate it offers a way of comparing the complexities of different arrays independently of their size.
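A minimal sketch of the normalization step (Python), assuming that the BDM value of X and the corresponding MinBDM and MaxBDM values for its size and base have already been computed as described above; ctm_min denotes the (assumed given) smallest CTM value among the d × d base matrices.

```python
def min_bdm(n, d, ctm_min):
    """MinBDM(n)_{dxd}: floor(n/d) plus the smallest CTM value among the
    d x d base matrices (that of the all-0 or all-1 block)."""
    return (n // d) + ctm_min

def nbdm(bdm_value, min_bdm_value, max_bdm_value):
    """Normalize a BDM value to [0, 1] relative to the least and most
    complex matrices of the same size."""
    return (bdm_value - min_bdm_value) / (max_bdm_value - min_bdm_value)
```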

9. CTM to BDM Transition

How BDM scales CTM remains a question, as does the rate at which BDM loses the algorithmic estimations provided by CTM. Also unknown is what the transition between CTM and CTM + BDM looks like, especially in the face of applications involving objects of medium size between the range of application of CTM (e.g., 10 to 20 bit strings) and larger objects (e.g., longer sequences in the hundreds of bits).
We perform a Spearman correlation analysis to test the strength of the monotonic relationship between CTM values and BDM values with various block sizes and block overlap configurations over all 12-bit strings. We also test the strength of the relationship of CTM with Shannon entropy and with compression length.
Figure 11 shows the agreement between BDM and CTM for strings for which we have exact CTM values, against which BDM was tested. The results indicate an agreement between CTM and BDM in a variety of configurations, thereby justifying BDM as an extension of the range of application of CTM to longer strings (and to longer objects in general).
In the set of all 12-bit strings, the correlation with CTM is maximal when block size = 11 and overlap = 10 (b11o10, ρ = 0.69); BDM correlates perfectly with Shannon entropy (ρ = 1) when strings are divided into blocks of size = 1 and overlap = 0, in which case its correlation with CTM is ρ = 0.42 (b1o0), as expected from what is described in Section 7.1.
The Spearman rank test performed on the first 4096 binary strings has p-values $< 1 \times 10^{-15}$, while the Spearman rank test on the 2048 strings with CTM below the median has p-values $< 1 \times 10^{-9}$. Finally, the Spearman rank test on the 2048 strings with CTM value above the median has p-values $< 1 \times 10^{-5}$, in all cases except those corresponding to b4o1, b4o0 and b3o0, where ρ < 0.03 and 0.045 ≤ p-value ≤ 0.25. The lower ρ coefficients for above-median CTM strings indicate that it is harder to estimate the algorithmic complexity of highly irregular strings through BDM, entropy or compression length than to detect their regularity. Figure 11 shows that for block sizes >6 the Spearman ρ of BDM is always higher than the correlation of CTM with either Shannon entropy or compression length. Some block configurations of size <6 (e.g., b2o1) also have higher ρ than both Shannon entropy and compression.
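A sketch of the kind of rank-correlation comparison used here (Python with SciPy); ctm_values and bdm_values are assumed to be parallel lists holding the respective estimates for the same ordered set of strings, and the numbers below are placeholders.

```python
from scipy.stats import spearmanr

# Placeholder inputs: parallel lists of complexity estimates for the same strings.
ctm_values = [20.1, 24.3, 28.7, 31.2, 33.5]
bdm_values = [19.8, 25.0, 27.9, 30.8, 34.1]

rho, p_value = spearmanr(ctm_values, bdm_values)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3g}")
```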
While BDM approximates the descriptive power of CTM and extends it over a larger range, we prove in Section 4.3 that BDM approximates Shannon entropy if the base objects are no longer generated with CTM, but that if CTM approximates algorithmic complexity, then so does BDM.

9.1. Smooth BDM (and “Add Col”)

An alternative method for increasing accuracy while decreasing computational cost is the use of a weighting function as a penalization parameter in BDM. Let the base matrix size be 4 × 4. We first partition the matrix into sub-matrices of the base size 4 × 4. If the matrix size is not divisible by 4, we either (1) use smooth BDM with the full overlap boundary condition (we call this method simply “smooth” BDM), or (2) add an artificial low complexity boundary to “complete” the matrix to the next multiple of 4 and then apply “smooth” (we call this approach “add col” in what follows).
When using the BDM full overlap boundary condition, we screen the entire matrix by moving a sliding square of size 4 × 4 over it (as is done for “recursive BDM”). When adding artificial low complexity boundaries, we only calculate non-overlapping sub-matrices of size 4 × 4, because the expanded matrix of interest is a multiple of 4. These artificial low complexity boundaries are columns and rows of a single symbol (zeroes or ones). We then correct the final result by subtracting the information added by the boundaries, namely $\log(|R|) + \log(|C|)$, where R and C are the added rows and columns.
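A sketch of the full-overlap screening step (Python with NumPy): it extracts every d × d sub-matrix with stride 1, producing the blocks that are then aggregated by the weighted scheme described below; this is an illustration, not the released implementation.

```python
import numpy as np

def sliding_blocks(X, d=4):
    """All d x d sub-matrices of X obtained by sliding a window with stride 1."""
    rows, cols = X.shape
    return [X[i:i + d, j:j + d]
            for i in range(rows - d + 1)
            for j in range(cols - d + 1)]
```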
To prevent the undesired introduction of false patterns in the “completion” process (add col), we use the minimum BDM over the two possible extended matrices (columns and rows of zeroes or of ones, denoted by $BDM_0(X)$ and $BDM_1(X)$ respectively).
In both cases, to distinguish the occurrence of rare and thus highly complex patterns, we assign weights to each base matrix based on the probability of seeing each pattern, denoted by $W_i$, where i is the index of the base matrix. We thereby effectively “smooth” the decision on matrix similarity, unlike previous versions of BDM, which count only the multiplicity of identical matrices. Thus the main difference introduced in the “smooth” version of BDM is the penalization by (statistical) base matrix similarity rather than only by perfect base matrix match.
To simplify notation, in what follows let us denote the adjacency matrix $Adj(X)$ of a matrix X simply as X. The smooth version of BDM is then calculated as follows:
$$BDM(X) = \min(BDM_0(X), BDM_1(X))$$
$$BDM_f(X) = \sum_{(r_i, n_i) \in Adj(X)_{\{x_i\}}} BDM(r_i) \times W_i + \log(n_i)$$
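A rough sketch of the weighted aggregation (Python), where blocks are assumed to be hashable encodings of the 4 × 4 sub-matrices (e.g., flattened tuples) and ctm a hypothetical lookup of their CTM values; the weight $W_i$ is taken here as the empirical frequency of each base matrix within the decomposition, which is one possible reading of the penalization described above and may differ from the released implementations.

```python
from collections import Counter
from math import log2

def smooth_bdm(blocks, ctm):
    """Weighted ("smooth") BDM: each distinct base block contributes its CTM
    value scaled by a weight derived from its empirical frequency, plus log2
    of its multiplicity."""
    counts = Counter(blocks)
    total = sum(counts.values())
    return sum(ctm[b] * (n / total) + log2(n) for b, n in counts.items())
```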

9.2. Weighted Smooth BDM with Mutual Information

The smooth BDM version assigns a weight to each base matrix depending on its statistical likelihood, which is equivalent to assigning a weight based on the entropy of the base matrix over the distribution of all 4 × 4 base matrices. An equivalent but computationally more expensive version uses classical mutual information, measuring the statistical similarity between the precomputed base matrices.
Mutual information is a measure of the statistical dependence of a random variable X on a random variable Y, comparing their joint distribution with the joint distribution they would have under an assumption of independence. If MI(X, Y) = 0, then X and Y are statistically independent, whereas if the knowledge of X fully determines Y, then MI(X, Y) = 1 (in its normalized version) and X and Y are not independent. Because MI is symmetric, MI(X, Y) = MI(Y, X); thus if MI(X, Y) = 1, then knowing all about Y also implies knowing all about X. In one of its multiple versions, the MI of X and Y can be defined as:
$$MI(X, Y) = H(X) - H(X|Y)$$
where H ( X ) is the Shannon entropy of X and H ( X | Y ) the conditional Shannon entropy of X given Y.
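A sketch of this definition (Python), estimating MI(X, Y) = H(X) − H(X|Y) from a list of paired samples via H(X|Y) = H(X, Y) − H(Y); note that normalizing to the [0, 1] range used as a weight here would additionally require dividing by H(X).

```python
from collections import Counter
from math import log2

def entropy(counts, n):
    """Shannon entropy (in bits) of a frequency table with n total samples."""
    return -sum((c / n) * log2(c / n) for c in counts.values())

def mutual_information(pairs):
    """MI(X, Y) = H(X) - H(X|Y), estimated from (x, y) samples."""
    n = len(pairs)
    hx = entropy(Counter(x for x, _ in pairs), n)
    hy = entropy(Counter(y for _, y in pairs), n)
    hxy = entropy(Counter(pairs), n)
    return hx - (hxy - hy)  # H(X|Y) = H(X, Y) - H(Y)
```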
In this way, statistically similar base matrices are not counted as requiring 2 completely different computer programs, one for each base matrix, but rather a slightly modified computer program producing 2 similar matrices accounting mostly for one and for the statistical difference of the other. More precisely, BDM can be defined by:
$$BDM(X) = \sum_{(r_i, n_i) \in Adj(X)_{\{x_i\}}} MIBDM(r_i) + \log(n_i)$$
where MIBDM is defined by:
$$MIBDM(r_i) = \min\left\{ MI(r_i, r_j)\, CTM(r_i) + (1 - MI(r_j, r_i))\, CTM(r_j),\; MI(r_i, r_j)\, CTM(r_j) + (1 - MI(r_j, r_i))\, CTM(r_i) \right\}$$
and where $MI(r_i, r_j)$ is a weight for the CTM value of each base matrix, with j the index of the base matrix that maximizes MI (i.e., maximizes statistical similarity) over the distribution of all base matrices, so that $MI(r_i, r_j) \geq MI(r_i, r_k)$ for all $k \in \{1, \ldots, N\}$, where $N = |Adj(X)_{\{x_i\}}|$.
However, this approach requires N × N comparisons M I ( r i , r j ) between all base matrices r with indexes i { 1 , , N } and j { 1 , , N } .
Notice that because MI is symmetric, $MI(r_i, r_j) = MI(r_j, r_i)$; the min in Equation (19) is taken because we look for the minimum CTM value (i.e., the length of the shortest program) between the two cases in which one base matrix helps define the statistical similarity of the other, and vice versa.
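A sketch of the pairing step in Equation (19) (Python): for a given base matrix index i, the partner j with maximal MI is selected and the cheaper of the two weighted CTM combinations is returned; mi (a symmetric matrix of pairwise MI weights) and ctm (the CTM values of the base matrices) are assumed to be precomputed, and at least two base matrices are assumed.

```python
def mibdm_term(i, mi, ctm):
    """Pick the partner j maximizing MI with base matrix i, then take the
    smaller of the two weighted CTM combinations (Equation (19))."""
    j = max((k for k in range(len(ctm)) if k != i), key=lambda k: mi[i][k])
    w = mi[i][j]
    return min(w * ctm[i] + (1 - w) * ctm[j],
               w * ctm[j] + (1 - w) * ctm[i])
```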

10. Testing BDM and Boundary Condition Strategies

A test for both CTM and BDM can be carried out using objects that have different representations or may look very different but are in fact algorithmically related. First we will prove some theorems relating to the algorithmic complexity of dual and cospectral graphs and then we will perform numerical experiments to see if CTM and BDM perform as theoretically expected.
A dual graph of a planar graph G is a graph that has a vertex corresponding to each face of G, and an edge joining two neighbouring faces for each edge in G. If $G'$ is a dual graph of G, then $A(G) = A(G')$, making the calculation of the Kolmogorov complexity of graphs and their dual graphs interesting because of the correlation between Kolmogorov complexity and $A(G)$, which should be the same for $A(G')$. One should also expect the estimated complexity values of graphs to be the same as those of their dual graphs, because the description length of the dual-graph generating program is O(1).
Cospectral graphs, also called isospectral graphs, are graphs that share the same graph spectrum. The set of eigenvalues of the adjacency matrix of a graph G is called its spectrum, $Spec(G)$. This cospectrality test is interesting for complexity estimations because two non-isomorphic graphs can share the same spectrum.
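As a sketch of the setup for the cospectrality test (Python with NumPy): two graphs are cospectral when the sorted eigenvalue spectra of their adjacency matrices coincide up to numerical tolerance; the pair used below, the star $K_{1,4}$ and the cycle $C_4$ plus an isolated vertex, is the classical smallest non-isomorphic cospectral pair and serves only as an illustration.

```python
import numpy as np

def cospectral(A, B, tol=1e-9):
    """True if the adjacency spectra of A and B coincide (cospectral graphs)."""
    return np.allclose(np.sort(np.linalg.eigvalsh(A)),
                       np.sort(np.linalg.eigvalsh(B)), atol=tol)

# Star K_{1,4} and C_4 plus an isolated vertex: non-isomorphic but cospectral.
K14 = np.array([[0, 1, 1, 1, 1],
                [1, 0, 0, 0, 0],
                [1, 0, 0, 0, 0],
                [1, 0, 0, 0, 0],
                [1, 0, 0, 0, 0]])
C4_plus_K1 = np.array([[0, 1, 0, 1, 0],
                       [1, 0, 1, 0, 0],
                       [0, 1, 0, 1, 0],
                       [1, 0, 1, 0, 0],
                       [0, 0, 0, 0, 0]])
print(cospectral(K14, C4_plus_K1))  # True
```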
We have demonstrated that isomorphic graphs have similar complexity as a function of graph automorphism group size [18]. We have also provided definitions for the algorithmic complexity of labelled and unlabelled graphs based on the automorphism group [19]. In the Appendix we prove several theorems and corollaries establishing the theoretical expectation that dual and cospectral graphs have similar algorithmic complexity values, and so we have a theoretical expectation of numerical tests with BDM to compare with.
Compression lengths and BDM values in Table 3 and Figure A2 (Appendix A) are obtained from the adjacency matrices of 113 dual graphs and 193 cospectral graphs from Mathematica’s GraphData[] repository. Graphs and their dual graphs were found by BDM to have estimated algorithmic complexities close to each other. While entropy and entropy rate do not perform well in any test compared to the other measures, compression retrieves similar values for cospectral graphs as compared to BDM, but it is outperformed by BDM on the duality test. The best BDM version for duals was different from that for cospectrals. For the duality test, the smooth, fully overlapping version of BDM outperforms all others, but for cospectrality, overlapping recursive BDM outperforms all others. In [18], we showed that BDM behaves in agreement with the theory with respect to the algorithmic complexity of graphs and the size of the automorphism group to which they belong. This is because the algorithmic complexity K ( G ) of G is effectively a tight upper bound on K ( A u t ( G ) ) .

11. Conclusions

We have introduced a well-grounded, theoretically sound and robust measure of complexity that beautifully connects 2 of the main branches of information theory, classical and algorithmic. We have shown that the methods are scalable in various ways, including native n-dimensional variations of the same measure. The properties and numerical experiments are in alignment with theoretical expectations and represent the only truly different alternative and more accurate measure of algorithmic complexity currently available. We have also shown that BDM is computationally efficient, hence complementing the effective use of lossless compression algorithms for calculation of upper bounds of Kolmogorov complexity.
There are thus three methods available today for approximating K (two of which have been advanced by us, one being completely novel: BDM; and one that was known but had never been calculated before: CTM). Here they are described by their range of application:
  • CTM deals with all bit strings of length 1–12 (and for some 20–30 bits).
  • BDM deals with strings from 12 bits to hundreds of bits (with a cumulative error that grows with the length of the strings, if not applied in conjunction with CTM). The worst case occurs when substrings share information content with other decomposed substrings and BDM simply keeps adding their K values individually.
  • CTM + BDM deals with any string length, but is computationally extremely expensive.
  • Lossless compression deals with no less than 100 bits and is unstable up to about 1 K bits.
While CTM cannot produce estimations for longer bitstrings, estimating the algorithmic complexity of even short bitstrings can be key to many problems. Think of the challenge posed by a puzzle of 1000 pieces: if you were able to put together only 12 local pieces at a time, you would still be able to assemble the whole puzzle, even without ever looking at the complete picture and thus without needing to see possible non-local, long-range algorithmic patterns.
Because BDM locally estimates algorithmic complexity via algorithmic probability based on CTM, it is slightly more independent of object description than computable measures such as Shannon entropy, though in the “worst case” it behaves like Shannon entropy. We have also shown that the various flavours of BDM are extremely robust, both by calculating theoretical errors on tensors and by numerical investigation, establishing that any BDM version is fit for use in most cases. Hence the most basic and efficient one can be used without much concern as to the possible alternative methods that could have been used in its calculation, as we have exhaustively and systematically tested most, if not all, of them.
Ready to use online and offline methods and data are being released alongside this paper. An animated video is also available at http://www.complexitycalculator.com/HowItWorks.html.

Author Contributions

Conceptualization, H.Z., N.A.K., and F.S.-T.; Methodology, H.Z., N.A.K., F.S.-T., A.R.-T., and S.H.-O.; Software, H.Z., N.A.K., A.R.-T., F.S.-T., and S.H.-O.; Validation, H.Z., N.A.K., A.R.-T., F.S.-T., and S.H.-O.; Formal Analysis, H.Z., N.A.K., F.S.-T., S.H.-O.; Investigation, H.Z., N.A.K., F.S.-T., A.R.-T., S.H.-O.; Resources, H.Z., N.A.K., F.S.-T., J.T.; Data Curation, H.Z., N.A.K., F.S.-T., S.H.-O.; Writing—Original Draft Preparation, H.Z., N.A.K., A.R.-T., F.S.-T., S.H.O.; Writing—Review & Editing, H.Z., N.A.K., S.H.O., A.R.-T., F.S.-T.; Visualization, H.Z., N.A.K., A.R.-T., F.S.-T., S.H.-O.; Supervision, H.Z., N.A.K., J.T.; Project Administration, H.Z., N.A.K., J.T.; Funding Acquisition, H.Z., J.T.

Funding

This research was funded by Swedish Research Council (Vetenskapsrådet) grant number [2015-05299].

Conflicts of Interest

The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
BDM   Block Decomposition Method
CTM   Coding Theorem Method
TM    Turing machine
K     Kolmogorov complexity

Appendix A. Entropy and Block Entropy vs. CTM

Figure A1. The randomness of the digits of π as measured by Shannon entropy, Block entropy and CTM. Strengthening the claim made in Figure 6, here we show the trend of the averages of entropy and Block entropy moving towards 1, while CTM’s average remains the same with slightly reduced variance. The stronger the colour, the more digits taken into consideration. The direction of Block entropy is the clearest, first for a sample of 100 segments of length 12 bits from the first 1000 decimal digits of π converted to binary (light orange), followed by a second run of 1000 segments of length 12 bits from the first 1 million decimal digits of π. When running CTM over a longer period of time, the invariance theorem guarantees convergence to 0.

Appendix A.1. Duality and Cospectral Graph Proofs and Test Complement

Theorem A1.
Let $G'$ be the dual graph of G. Then $K(G) \sim K(G')$.
Proof. 
Let p denote the finite program that, for any graph G, replaces every edge in G by a vertex and every vertex in G by an edge. The resulting graph produced by p is then $G'$ (uniqueness), which implies that $|K(G) - K(G')| < |p|$, because we did not assume that p was the shortest program. Thus, $K(G') \leq K(G) + |p|$, i.e., $K(G) \sim K(G')$ up to a constant factor. ☐
Let K ( A u t ( G ) ) be the algorithmic complexity of the automorphism group A u t ( G ) of the graph G (i.e., all possible relabellings that preserve graph isomorphism), that is, the length of the shortest program that generates all the graphs in A u t ( G ) .
Theorem A2.
Let $G'$ be a graph isomorphic to G. Then $K(G) \sim K(G')$ for all $G' \in Aut(G)$, where $Aut(G)$ is the automorphism group of G.
The idea is that if there were a significantly shorter program $p'$ for generating G than any program p generating $Aut(G)$, we could use $p'$ to generate $Aut(G)$ via G together with a relatively short program of length c that tries, e.g., all permutations and checks for isomorphism. Assume that there exists a program $p'$ such that $||p| - |p'|| > c$, i.e., the difference is not bounded by any constant, and that $K(G) = |p'|$. We can then replace p by $p'$ plus the constant-length routine to generate $Aut(G)$, so that $K(Aut(G)) \leq |p'| + c$, where c is a constant independent of G that represents the size of the shortest program that generates $Aut(G)$ given any G. Then we have $|K(Aut(G)) - K(G)| \leq c$, which is contrary to the assumption.
Corollary A1.
K ( A u t ( G ) ) < K ( G ) + O ( 1 ) for any G A u t ( G ) .
Proof. 
Let $G'$ be in $Aut(G)$ such that $G' \neq G$. There exists a computer program p that produces $Aut(G)$ for any G (an algorithm, so far known to be in class NP, that produces all relabellings, the simplest one being brute-force permutation, and can verify graph isomorphism in time class P [18]). With this program we can construct $Aut(G)$ from any graph $G' \in Aut(G)$, and $G'$ from $Aut(G)$ and the corresponding label n. Therefore $K(G') \leq |p| + K(Aut(G)) + \log(n) + O(1)$ and $K(Aut(G)) \leq |p| + K(G')$. ☐
Theorem A3.
If G and $G'$ are cospectral graphs, then $|K(G) - K(G')| < c + \log(n)$, i.e., $K(G) \sim K(G')$ up to a constant and a small logarithmic term.
Proof. 
The strategy is similar: by brute-force permutation one can produce all possible adjacency matrices obtained by row and column permutations. Let K(G) be the algorithmic complexity of G and $Spec(G)$ the spectrum of G. Let p be the program that permutes all rows and columns and tests for cospectrality, and |p| its program length. Let $Spec(G) = Spec(G')$. Then $K(G') \leq K(G) + |p| + \log(n)$, where n is the size of G and indicates the index of the right column and row permutation that preserves $Spec(G)$ among all graphs of size n. ☐
Table A1. List of strings with high entropy and high Block entropy but low algorithmic randomness detected and sorted from lowest to greatest values by CTM.
101010010101010101101010101111000010010000111101111111000000
000000111111100101011010011010100101101100110010010011001101
111100000011110000001111001111110000000011111100111110100000
000001011111111101000001111100000101101000001111100000101111
011111010000010111110000000011111010000010111110110111000100
001000111011110111000001100000111011011111000100001000111110
110100010011110010001011001101110100001011101100111110000010
101111100000010000011111000001111101100000111110011111000001
110101000011110000101011001111010100001010111100111100101000
111010110000000101001111000011010111111100110000000011001111
110000111010101000111100010111000011001111000101111100001010
101011110000010100001111000011110101111011000010101111001000
010000110111000100111101111000001011110100000111110011100010
101110001100100011001110011100110001010001110011001100011101
001011111000000111110100111010000011110000010111001111101000
000101111100110011010001100010110011011101001100001100101110
110101001100110011010100001100101011001010110011111000110010
110010100011110001010011101100111000010011000111001110101100
001101011100000111001101101100001110100011110010011100001101
010011110001111000100011110001000111001110111000000111011100
110000011101101110000011010001111100001111100010111101010000
000010101111111010001100110011101000001100010111000101110011
111000101100110010111000001101000111000111010011111011001000
000100110111
Figure A2. Scatterplots comparing the various BDM versions tested on dual and cospectral graphs that theoretically have the same algorithmic complexity up to a (small) constant. x-axis values for each top row plot are sorted by BDM for one of the dual and for the cospectral graph series. Bottom rows: on top of each corresponding scatterplot are the Spearman ρ values.
Figure A3. Scatterplots comparing other measures against the best BDM performance. x-axis values for each top row plot are sorted by BDM for one of the dual and for the cospectral graph series. Bottom rows: on top of each corresponding scatterplot are the Spearman ρ values.

Appendix A.2. The Online Algorithmic Complexity Calculator and Language Implementations

The Online Algorithmic Complexity Calculator (or OACC) available at http://www.complexitycalculator.com has now been updated to incorporate 2-dimensional data, in preparation for this paper. The OACC implements the most important methods and tools explained here. Documentation is available at https://cran.r-project.org/web/packages/acss/acss.pdf, as the OACC is an online app based on the acss [45] R package.
Figure A4. The Online Algorithmic Complexity Calculator available at http://www.complexitycalculator.com. Full code for the R Shiny web server used is available at [46].
We are releasing actual implementations of BDM in C++, Pascal, Perl, Python, Mathematica and Matlab. Table A2 provides an overview of the various implementations. We are also releasing the data generated by CTM that BDM requires to run. This consists of the Kolmogorov complexity evaluations of all strings up to length 11 (12 with the completion of a single one and its complement), as well as of some longer ones, and of all square arrays of size up to 4 × 4 as calculated by the canonical 2-dimensional Turing machine model (replacing the unidimensional tape with Rado’s 2-dimensional model). The data is also released with a program (a Wolfram Language “Demonstration”) available online at the Wolfram Demonstrations Project website [47].
Table A2. Computer programs in different languages implementing various BDM versions. We have shown that all implementations agree with each other in various degrees, with the only differences having to do with under- or over-estimated values and time complexity and scalability properties. They are extremely robust, thereby establishing that the use of the most basic versions (1D n-o, 2D n-o) are justified in most cases. “WL” stands for Wolfram Language, the language behind e.g., the Mathematica platform, “Online” for the online calculator, “Cyc” for Cyclic, “Norm” stands for normalized, “Rec” for recursive, “Smo” for “Smooth”, “N-o” for Nonoverlapping and “addcol” for the method that adds rows and columns of lowest complexity to the borders up to the base string/array/tensor size. If not stated then it supports overlapping. All programs are available at https://www.algorithmicdynamics.net/software.html.
Lang | 1D n-o | 1D | 2D n-o | 2D Rec | 2D Cyc | 2D Smo | 2D Norm | 1-D LD | Addcol
Online×××××
WL××
R××××
Matlab×××
Haskell××××
Perl×××××××
Python×××××××
Pascal×××××××
C++×××××××

References

  1. Zenil, H.; Kiani, N.A.; Tegnér, J. Low-algorithmic-complexity entropy-deceiving graphs. Phys. Rev. E 2017, 96, 012308. [Google Scholar] [CrossRef] [PubMed]
  2. Daley, R.P. An Example of Information and Computation Trade-Off. J. ACM 1973, 20, 687–695. [Google Scholar] [CrossRef]
  3. Levin, L.A. Universal sequential search problems. Probl. Inf. Transm. 1973, 9, 265–266. [Google Scholar]
  4. Schmidhuber, J. The Speed Prior: A New Simplicity Measure Yielding Near-Optimal Computable Predictions. In Proceedings of the 15th Annual Conference on Computational Learning Theory (COLT 2002), Sydney, Australia, 8–10 July 2002; Kivinen, J., Sloan, R.H., Eds.; Springer: Berlin, Germany, 2002; pp. 216–228. [Google Scholar]
  5. Li, M.; Vitányi, P. An Introduction to Kolmogorov Complexity and Its Applications, 3rd. ed.; Springer: Heidelberg, Germany, 2009. [Google Scholar]
  6. Cilibrasi, R.; Vitányi, P.M. Clustering by compression. IEEE Trans. Inf. Theory 2005, 51, 1523–1545. [Google Scholar] [CrossRef]
  7. Zenil, H.; Badillo, L.; Hernández-Orozco, S.; Hernández-Quiroz, F. Coding-theorem Like Behaviour and Emergence of the Universal Distribution from Resource-bounded Algorithmic Probability. Int. J. Parallel Emerg. Distrib. Syst. 2018, 1–20. [Google Scholar] [CrossRef]
  8. Zenil, H. Algorithmic Data Analytics, Small Data Matters and Correlation versus Causation. In Berechenbarkeit der Welt? Philosophie und Wissenschaft im Zeitalter von Big Data (Computability of the World? Philosophy and Science in the Age of Big Data); Ott, M., Pietsch, W., Wernecke, J., Eds.; Springer: Berlin, Germany, 2017; pp. 453–475. [Google Scholar] [Green Version]
  9. Zenil, H.; Soler-Toscano, F.; Delahaye, J.P.; Gauvrit, N. Two-dimensional Kolmogorov complexity and an empirical validation of the Coding Theorem Method by compressibility. PeerJ Comput. Sci. 2015, 1, e23. [Google Scholar] [CrossRef]
  10. Gauvrit, N.; Soler-Toscano, F.; Zenil, H. Natural scene statistics mediate the perception of image complexity. Vis. Cognit. 2014, 22, 1084–1091. [Google Scholar] [CrossRef] [Green Version]
  11. Gauvrit, N.; Singmann, H.; Soler-Toscano, F.; Zenil, H. Algorithmic complexity for psychology: A user-friendly implementation of the coding theorem method. Behav. Res. Methods 2016, 48, 314–329. [Google Scholar] [CrossRef] [PubMed]
  12. Kempe, V.; Gauvrit, N.; Forsyth, D. Structure emerges faster during cultural transmission in children than in adults. Cognition 2015, 136, 247–254. [Google Scholar] [CrossRef] [PubMed]
  13. Emmert-Streib, F.; Dehmer, M. Exploring statistical and population aspects of network complexity. PLoS ONE 2012, 7, e34523. [Google Scholar] [CrossRef] [PubMed]
  14. Dehmer, M.M. A novel method for measuring the structural information content of networks. Cybern. Syst. 2008, 39, 825–842. [Google Scholar] [CrossRef]
  15. Dehmer, M.M.; Barbarini, N.N.; Varmuza, K.K.; Graber, A.A. Novel topological descriptors for analyzing biological networks. BMC Struct. Biol. 2010, 10, 18. [Google Scholar] [CrossRef] [PubMed]
  16. Mowshowitz, A.; Dehmer, M.M. Entropy and the complexity of graphs revisited. Entropy 2012, 14, 559–570. [Google Scholar] [CrossRef]
  17. Holzinger, A.; Ofner, B.; Stocker, C.; Valdez, A.C.; Schaar, A.K.; Ziefle, M.; Dehmer, M. On graph entropy measures for knowledge discovery from publication network data. In International Conference on Availability, Reliability, and Security; Springer: Berlin/Heidelberg, Germany, 2013; pp. 354–362. [Google Scholar]
  18. Zenil, H.; Soler-Toscano, F.; Dingle, K.; Louis, A. Correlation of Automorphism Group Size and Topological Properties with Program-size Complexity Evaluations of Graphs and Complex Networks. Physica A 2014, 404, 341–358. [Google Scholar] [CrossRef]
  19. Zenil, H.; Kiani, N.A.; Tegnér, J. Methods of information theory and algorithmic complexity for network biology. In Seminars in Cell & Developmental Biology; Academic Press: Cambridge, MA, USA, 2016; Volume 51, pp. 32–43. [Google Scholar]
  20. Levin, L.A. Laws of information conservation (nongrowth) and aspects of the foundation of probability theory. Probl. Pereda. Inf. 1974, 10, 30–35. [Google Scholar]
  21. Chaitin, G.J. On the length of programs for computing finite binary sequences. J. ACM 1966, 13, 547–569. [Google Scholar] [CrossRef]
  22. Kolmogorov, A.N. Three approaches to the quantitative definition of information. Probl. Inf. Transm. 1965, 1, 1–7. [Google Scholar] [CrossRef]
  23. Calude, C.S.; Salomaa, K.; Roblot, T.K. Finite state complexity. Theor. Comput. Sci. 2011, 412, 5668–5677. [Google Scholar] [CrossRef]
  24. Downey, R.G.; Hirschfeldt, D.R. Algorithmic Randomness and Complexity; Springer: Berlin, Germany, 2010. [Google Scholar]
  25. Martin-Löf, P. The definition of random sequences. Inf. Control 1966, 9, 602–619. [Google Scholar] [CrossRef]
  26. Solomonoff, R.J. A formal theory of inductive inference. Part I. Inf. Control 1964, 7, 1–22. [Google Scholar] [CrossRef]
  27. Calude, C.S. Information and Randomness: An Algorithmic Perspective; Springer: Berlin, Germany, 2002. [Google Scholar]
  28. Cover, T.M.; Thomas, J.A. Elements of Information Theory; John Wiley & Sons: Hoboken, NJ, USA, 2012. [Google Scholar]
  29. Kirchherr, W.; Li, M.; Vitányi, P. The Miraculous Universal Distribution. Math. Intell. 1997, 19, 7–15. [Google Scholar] [CrossRef]
  30. Solomonoff, R.J. Complexity–Based Induction Systems: Comparisons and Convergence Theorems. IEEE Trans. Inf. Theory 1978, 24, 422–432. [Google Scholar] [CrossRef]
  31. Solomonoff, R.J. The Application of Algorithmic Probability to Problems in Artificial Intelligence. In Uncertainty in Artificial Intelligence; Kanal, L.N., Lemmer, J.F., Eds.; Elsevier: New York, NY, USA, 1986; pp. 473–491. [Google Scholar]
  32. Solomonoff, R.J. A System for Incremental Learning Based on Algorithmic Probability. In Proceedings of the Sixth Israeli Conference on Artificial Intelligence, Computer Vision and Pattern Recognition, Tel Aviv, Israel, 26–27 December 1989; pp. 515–527. [Google Scholar]
  33. Soler-Toscano, F.; Zenil, H.; Delahaye, J.P.; Gauvrit, N. Calculating Kolmogorov complexity from the output frequency distributions of small Turing machines. PloS ONE 2014, 9, e96223. [Google Scholar] [CrossRef] [PubMed]
  34. Ziv, J.; Lempel, A. Compression of individual sequences via variable-rate coding. IEEE Trans. Inf. Theory 1978, 24, 530–536. [Google Scholar] [CrossRef] [Green Version]
  35. Delahaye, J.P.; Zenil, H. Numerical Evaluation of the Complexity of Short Strings: A Glance Into the Innermost Structure of Algorithmic Randomness. Appl. Math. Comput. 2012, 219, 63–77. [Google Scholar] [CrossRef]
  36. Zenil, H. Une Approche Expérimentale à la Théorie Algorithmique de la Complexité. Ph.D. Thesis, Université de Paris, Paris, France, 2013. (In French). [Google Scholar]
  37. Calude, C.S.; Stay, M.A. Most programs stop quickly or never halt. Adv. Appl. Math. 2008, 40, 295–308. [Google Scholar] [CrossRef]
  38. Rado, T. On non-computable functions. Bell Syst. Tech. J. 1962, 41, 877–884. [Google Scholar] [CrossRef]
  39. Zenil, H. From Computer Runtimes to the Length of Proofs: With an Algorithmic Probabilistic Application to Waiting Times in Automatic Theorem Proving. In Computation, Physics and Beyond International Workshop on Theoretical Computer Science; Dinneen, M.J., Khousainov, B., Nies, A., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; pp. 223–240. [Google Scholar]
  40. Brady, A.H. The determination of the value of Rado’s noncomputable function Σ(k) for four-state Turing machines. Math. Comput. 1983, 40, 647–665. [Google Scholar]
  41. Soler-Toscano, F.; Zenil, H.; Delahaye, J.P.; Gauvrit, N. Correspondence and Independence of Numerical Evaluations of Algorithmic Information Measures. Computability 2013, 2, 125–140. [Google Scholar]
  42. Langton, C.G. Studying artificial life with cellular automata. Physica D 1986, 22, 120–149. [Google Scholar] [CrossRef]
  43. Morse, M.; Hedlund, G.A. Unending Chess, Symbolic Dynamics, and a Problem in Semigroups. Duke Math. J. 1944, 11, 1–7. [Google Scholar] [CrossRef]
  44. Bailey, D.H.; Borwein, P.B.; Plouffe, S. On the Rapid Computation of Various Polylogarithmic Constants. Math. Comput. 1997, 66, 903–913. [Google Scholar] [CrossRef]
  45. Gauvrit, N.; Singmann, H.; Soler-Toscano, F.; Zenil, H. Acss: Algorithmic Complexity for Short Strings, Package at the Comprehensive R Archive Network. Available online: https://cran.r-project.org/web/packages/acss/ (accessed on 13 June 2018).
  46. Rueda-Toicen, A.; Singmann, H. The Online Algorithmic Complexity Calculator, R Shiny Code Repository. Available online: https://github.com/andandandand/OACC (accessed on 13 June 2018).
  47. Soler-Toscano, F.; Zenil, H. Kolmogorov Complexity of 3 × 3 and 4 × 4 Squares, on the Wolfram Demonstrations Project. Available online: http://demonstrations.wolfram.com/KolmogorovComplexityOf33And44Squares/ (accessed on 13 June 2018).
Figure 1. Hypothetical behaviour of (non-)regular convergence rates of the constant involved in the invariance theorem. The invariance theorem guarantees that the complexity values of a string s measured with different reference UTMs $U_1$ and $U_2$ will only diverge by a constant c (the length of the translation program between $U_1$ and $U_2$) independent of s, yet it does not say how fast or in what way convergence may happen, particularly at the beginning. The invariance theorem only tells us that in the limit the curve will converge to a small and constant value c, but it tells us nothing about the rate of convergence or about transitional behaviour.
Figure 2. The best version of Shannon entropy can be rewritten as a function of variable block length where the minimum value best captures the (possible) periodicity of a string here illustrated with three strings of length 12, regular, periodic and random-looking. Because blocks larger than n / 2 would result in only one block and therefore entropy equal to 0, the largest possible block is n / 2 . The normalized version (bottom) divides the entropy value for that block size by the largest possible number of blocks for that size and alphabet (here binary).
Figure 3. (A) Observed data, a sequence of successive positive natural numbers. (B) The transition table of the Turing machine found by running all possible small Turing machines. (C) The same transition table in visual form. (D) The space-time evolution of the Turing machine starting from an empty tape. (E) Space-time evolution of the Turing machine implementing a binary counter, taking as halting criterion the leftmost position of the original Turing machine head as depicted in C (states are arrows). This small computer program found by our CTM and BDM methods (c.f. next Section) means that the sequence in A is not algorithmically random, as the program represents a succinct generative causal model (and thus non-random) for any arbitrary length, whereas Shannon entropy, in the face of no other knowledge about the source, would have assigned the sequence maximal randomness among all strings of the same length, despite its highly structured, algorithmically non-random nature. Entropy alone, equipped only to spot statistical regularities when there is no access to probability distributions, cannot find this kind of generative model demonstrating the low randomness of an algorithmic sequence. (F) This illustrates how algorithmic complexity and entropy may diverge in practice.
Figure 4. Non-overlapping BDM calculations are invariant to block permutations (reshuffling base strings and matrices), even when these permutations may have different complexities due to the reorganization of the blocks that can produce statistical or algorithmic patterns. For example, starting from a string of size 24 (top) or an array of size 8 × 8 (bottom), with decomposition length l = 8 for strings and decomposition l = 4 × 4 block size for the array, all 6 permutations for the string and all 6 permutations for the array have the same BDM value regardless of the shuffling procedure.
Figure 5. One way to deal with the decomposition of n-dimensional tensors is to embed them in an n-dimensional torus ( n = 2 in the case of the one depicted here), making the borders cyclic or periodic by joining the borders of the object. Depicted here are three examples of graph canonical adjacency matrices embedded in a 2-dimensional torus that preserves the object complexity on the surface, a complete graph, a cycle graph and an Erdös-Rényi graph with edge density 0.5, all of size 20 nodes and free of self-loops. Avoiding borders has the desired effect of producing no residual matrices after the block decomposition with overlapping.
Figure 6. (A) Telling π and the Thue-Morse sequence apart from truly (algorithmically) random sequences. CTM assigns significantly lower randomness (B,D–F) to known low algorithmic complexity objects. (B) If π is absolutely Borel normal (as strongly suspected and statistically demonstrated to any confidence degree), its entropy and block entropy asymptotically approximate 1 while, by the invariance theorem of algorithmic complexity, CTM asymptotically approximates 0. Smooth transitions between CTM and BDM are also shown (C,D) as a function of string complexity. Other smooth transition functions of BDM are explored and introduced in Section 9.1.
Figure 7. Strings that are assigned lower randomness than that estimated by entropy. Top left: Comparison between values of entropy, compression (Compress[]) and BDM over a sample of 100 strings of length 10,000 generated from a binary random variable following a Bernoulli distribution and normalized by maximal complexity values. Entropy follows a Bernoulli distribution and, unlike compression, which follows entropy, BDM values produce clear convex-shaped gaps on each side, assigning lower complexity to some strings compared to both entropy and compression. Top right: The results are confirmed using a popular lossless compression algorithm, BZip2 (and also confirmed, even if not reported, with LZMA), on 100 random strings of 100 bits each (BZip2 is slower than Compress but achieves greater compression). Bottom left: The $CTM_{low}(s)$ versus $H_{high}(s)$ gap between near-maximal entropy and low algorithmic complexity grows and is consistent across different string lengths, here from 8 to 12 bits. This gap is the one exploited by BDM and carried over to longer strings, which gives it the algorithmic edge over entropy and compression. Bottom right: When strings are sorted by CTM, one notices that BZip2 collapses most strings to minimal compressibility. Over all $2^{12} = 4096$ possible binary strings of length 12, entropy only produces 6 different entropy values, but CTM is much more fine-grained, and this is extended to longer strings by BDM, which succeeds in identifying strings of lower algorithmic complexity that have near-maximal entropy and therefore no statistical regularities. Examples of such strings are given in Section 6.
Figure 8. Error rate for 2-dimensional arrays. Without loss of generality, the error rate for n-dimensional tensors, bounded by $k^n/n^k$, converges to 0 and is thus negligible, even allowing for the discontinuities disregarded in this plot, which are introduced by some BDM versions, such as non-overlapping blocks and discontinuities related to the trimming boundary condition.
Figure 8. Error rate for 2-dimensional arrays. With no loss of generalization, the error rate for n-dimensional tensors lim d k n n k = 0 is convergent and thus negligible, even for the discontinuities disregarded in this plot which are introduced by some BDM versions, such as non-overlapping blocks and discontinuities related to trimming the boundary condition.
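As an illustration of why this error is negligible, the toy sketch below (an assumption, not the authors' implementation) computes the fraction of an n-dimensional array of side length d that is left uncovered when it is partitioned into non-overlapping blocks of side k and the leftover boundary is trimmed; the fraction vanishes as d grows relative to k.

```python
def uncovered_fraction(d, k, n=2):
    """Fraction of an n-dimensional array of side d left uncovered when it is
    partitioned into non-overlapping k-sided blocks and the remainder trimmed."""
    covered = (d // k) * k                 # covered side length per dimension
    return 1.0 - (covered / d) ** n

# The trimming error goes to zero as the array grows relative to the block size.
for d in (13, 101, 1001, 10001):
    print(d, round(uncovered_fraction(d, k=4, n=2), 6))
```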
Figure 9. Box plot showing the error introduced by BDM, quantified by CTM. The sigmoid appearance comes from the fact that we have exact CTM values up to bitstrings of length 12, so $BDM(12) = CTM(12)$; the slope of the curve indicates the error made when BDM only has access to $CTM(i)$ for $i < 12$, compared against the actual $CTM(12)$ values. The error thus grows linearly as a function of CTM and of the string length, and the accuracy degrades smoothly and slowly towards entropy if CTM is not updated.
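For reference, the BDM estimate that these errors concern can be sketched as follows: the string is decomposed into blocks for which CTM values are available, and the CTM values of the distinct blocks are summed together with the logarithm of their multiplicities. The toy_ctm table below is hypothetical; in practice the precomputed CTM tables released with the paper's implementations would be used.

```python
from collections import Counter
from math import log2

def bdm(string, block_size, ctm_table):
    """Non-overlapping BDM estimate: sum, over the distinct blocks, the CTM
    value of the block plus log2 of the number of times the block occurs."""
    blocks = [string[i:i + block_size]
              for i in range(0, len(string) - block_size + 1, block_size)]
    counts = Counter(blocks)
    return sum(ctm_table[block] + log2(mult) for block, mult in counts.items())

# Hypothetical CTM values for 2-bit blocks, standing in for the real tables.
toy_ctm = {'00': 3.32, '01': 3.90, '10': 3.90, '11': 3.32}
print(bdm('0101010100110011', block_size=2, ctm_table=toy_ctm))
```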
Figure 10. NBDM assigns the maximum value 1 to any base matrix with highest CTM, and to any matrix constructed out of such base matrices. In this case, the 4 base matrices on the left are those with the highest CTM in the space of all base matrices of the same size, while the matrix on the right is assigned the highest value because it is built out of these maximum-complexity base matrices.
Figure 11. Spearman correlation coefficients (ρ) between CTM and BDM for all possible block sizes and overlap lengths for 12-bit strings, compared with the correlation between CTM and Shannon entropy and the correlation between CTM and compression length (shown at the rightmost edge of the plot) in blue. ρ coefficients for the 2048 strings below and above the median CTM value are shown in green and orange, respectively. BDM block size and overlap increase to the left. Compression length was obtained using Mathematica's Compress[] function. All values were normalized as described in Section 8.
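The correlation analysis itself is straightforward to reproduce once CTM and BDM values are in hand. A minimal sketch, assuming two parallel lists of scores for the same set of strings (the numbers below are illustrative placeholders, not values from the paper):

```python
from scipy.stats import spearmanr

# Placeholder scores for the same five strings under exact CTM and under one
# BDM block-size/overlap setting; real values come from the CTM tables.
ctm_scores = [26.99, 30.81, 33.12, 35.40, 36.05]
bdm_scores = [27.30, 30.52, 33.80, 35.10, 36.40]

rho, p_value = spearmanr(ctm_scores, bdm_scores)
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3g})")
```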
Table 1. Calculated empirical distributions from rulespace (t,k). Letter codes: F = full space, S = sample, R(t,k) = reduced enumeration. Time is given in seconds (s), hours (h) and days (d).

(t,k)     Calculation      Number of Machines                   Time
(2,2)     F (6 steps)      |R(2,2)| = 2000                      0.01 s
(3,2)     F (21)           |R(3,2)| = 2,151,296                 8 s
(4,2)     F (107)          |R(4,2)| = 3,673,320,192             4 h
(4,2)2D   F 2D (1500)      |R(4,2)2D| = 315,140,100,864         252 d
(4,4)     S (2000)         334 × 10^9                           62 d
(4,5)     S (2000)         214 × 10^9                           44 d
(4,6)     S (2000)         180 × 10^9                           41 d
(4,9)     S (4000)         200 × 10^9                           75 d
(4,10)    S (4000)         201 × 10^9                           87 d
(5,2)     F (500)          |R(5,2)| = 9,658,153,742,336         450 d
(5,2)2D   S 2D (2000)      1291 × 10^9                          1970 d
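For context, the sizes of the full rule spaces from which these enumerations and samples are drawn can be computed directly. A minimal sketch, assuming the Busy Beaver-style Turing machine formalism used in the CTM literature, in which each of the t·k transition-table entries has k(2t + 1) possible instructions; the reduced enumerations R(t,k) in the table exploit symmetries of this space and are therefore smaller.

```python
def full_rulespace_size(t, k):
    """Number of Turing machines with t states and k symbols in the
    Busy Beaver-style formalism: each of the t*k transition-table entries
    chooses among k*(2*t + 1) instructions (write symbol x move+state or halt)."""
    return (k * (2 * t + 1)) ** (t * k)

for t, k in [(2, 2), (3, 2), (4, 2), (5, 2)]:
    print(f"({t},{k}): {full_rulespace_size(t, k):,} machines in the full space")
```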
Table 2. Summary of ranges of application and scalability of CTM and all versions of BDM. d stands for the dimension of the object.

Method                            Short Strings (<100 bits)   Long Strings (>100 bits)   Scalability
Lossless compression              ×                           ✓                          n
Coding Theorem Method (CTM)       ✓                           ×                          exp to ∞
Non-overlapping BDM               ✓                           ✓                          n
Full-overlapping Recursive BDM    ✓                           ✓                          n^(d−1)
Full-overlapping Smooth BDM       ✓                           ✓                          n^(d−1)
Smooth add col BDM                ✓                           ✓                          n
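The scalability differences come down to how many blocks each variant must score. A minimal sketch, assuming 1-dimensional strings for simplicity: the non-overlapping variant scores roughly n/k blocks, while fully overlapping variants slide the window one position at a time and score roughly n blocks (and correspondingly more in higher dimensions).

```python
def blocks(s, k, overlapping):
    """k-bit blocks of s, either as a non-overlapping partition or as a
    fully overlapping sliding window with step 1."""
    step = 1 if overlapping else k
    return [s[i:i + k] for i in range(0, len(s) - k + 1, step)]

s = '01' * 50                                   # a 100-bit string
print(len(blocks(s, 12, overlapping=False)))    # ~n/k blocks to score
print(len(blocks(s, 12, overlapping=True)))     # ~n blocks to score
```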
Table 3. Spearman ρ values of various BDM versions tested on dual and cospectral graphs, which theoretically have the same algorithmic complexity up to a (small) constant.

Test                 Non-Overlapping BDM   Fully Overlapping Recursive BDM   Smooth Fully Overlapping BDM   Smooth Add Row or Column BDM
Duality test         0.874                 0.783                             0.935                          0.931
Cospectrality test   0.943                 0.933                             0.9305                         0.931
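For reference, the cospectrality test compares BDM values across pairs of graphs whose adjacency matrices share the same eigenvalue spectrum. Below is a minimal sketch of the spectral check itself (not the authors' code), using the classic smallest cospectral pair, the star K_{1,4} and the 4-cycle plus an isolated vertex; the BDM values of such a pair's adjacency matrices are then expected to be highly rank-correlated, as the table shows.

```python
import numpy as np

def cospectral(a1, a2):
    """True if two symmetric adjacency matrices share the same spectrum."""
    return np.allclose(np.sort(np.linalg.eigvalsh(a1)),
                       np.sort(np.linalg.eigvalsh(a2)))

# Smallest cospectral pair: the star K_{1,4} and C_4 plus an isolated vertex,
# both with adjacency spectrum {-2, 0, 0, 0, 2}.
star = np.array([[0, 1, 1, 1, 1],
                 [1, 0, 0, 0, 0],
                 [1, 0, 0, 0, 0],
                 [1, 0, 0, 0, 0],
                 [1, 0, 0, 0, 0]])
c4_plus_k1 = np.array([[0, 1, 0, 1, 0],
                       [1, 0, 1, 0, 0],
                       [0, 1, 0, 1, 0],
                       [1, 0, 1, 0, 0],
                       [0, 0, 0, 0, 0]])
print(cospectral(star, c4_plus_k1))  # True
```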
