
The Convex Information Bottleneck Lagrangian

by Borja Rodríguez Gálvez *,†, Ragnar Thobaben *,† and Mikael Skoglund *,†
Department of Intelligent Systems, Division of Information Science and Engineering (ISE), KTH Royal Institute of Technology, 11428 Stockholm, Sweden
*
Authors to whom correspondence should be addressed.
Current address: Malvinas väg 10, 100 44 Stockholm, Sweden.
Submission received: 9 December 2019 / Revised: 3 January 2020 / Accepted: 8 January 2020 / Published: 14 January 2020
(This article belongs to the Special Issue Information Bottleneck: Theory and Applications in Deep Learning)

Abstract:
The information bottleneck (IB) problem tackles the issue of obtaining relevant compressed representations T of some random variable X for the task of predicting Y. It is defined as a constrained optimization problem that maximizes the information the representation has about the task, I(T;Y), while ensuring that a certain level of compression r is achieved (i.e., I(X;T) ≤ r). For practical reasons, the problem is usually solved by maximizing the IB Lagrangian (i.e., L_IB(T;β) = I(T;Y) − βI(X;T)) for many values of β ∈ [0,1]. Then, the curve of maximal I(T;Y) for a given I(X;T) is drawn and a representation with the desired predictability and compression is selected. It is known that when Y is a deterministic function of X, the IB curve cannot be explored in this way, and another Lagrangian has been proposed to tackle this problem: the squared IB Lagrangian, L_sq-IB(T;β_sq) = I(T;Y) − β_sq I(X;T)². In this paper, we (i) present a general family of Lagrangians which allow for the exploration of the IB curve in all scenarios; (ii) provide the exact one-to-one mapping between the Lagrange multiplier and the desired compression rate r for known IB curve shapes; and (iii) show we can approximately obtain a specific compression level with the convex IB Lagrangian for both known and unknown IB curve shapes. This eliminates the burden of solving the optimization problem for many values of the Lagrange multiplier. That is, we prove that we can solve the original constrained problem with a single optimization.


1. Introduction

Let X ∈ 𝒳 and Y ∈ 𝒴 be two statistically dependent random variables with joint distribution p(X,Y). The information bottleneck (IB) [1] investigates the problem of extracting the relevant information from X for the task of predicting Y.
For this purpose, the IB defines a bottleneck variable T ∈ 𝒯 obeying the Markov chain Y → X → T so that T acts as a representation of X. Tishby et al. [1] define the relevant information as the information the representation keeps from Y after the compression of X (i.e., I(T;Y)), provided a certain level of compression (i.e., I(X;T) ≤ r). Therefore, we select the representation which yields the value of the IB curve that best fits our requirements.
Definition 1 (IB Functional).
Let X and Y be statistically dependent variables. Let Δ be the set of random variables T obeying the Markov condition Y → X → T. Then the IB functional is
F_IB,max(r) = max_{T∈Δ} { I(T;Y) } s.t. I(X;T) ≤ r, r ∈ [0,∞). (1)
Definition 2 (IB Curve).
The IB curve is the set of points defined by the solutions of F IB , max ( r ) for varying values of r [ 0 , ) .
Definition 3 (Information Plane).
The information plane is the plane defined by the axes I(X;T) and I(T;Y).
This method has been successfully applied to solve different problems from a variety of domains. For example:
  • Supervised learning. In supervised learning, we are presented with a set of n pairs of input feature and task output instances. We seek an approximation of the conditional probability distribution between the task outputs Y and the input features X. In classification tasks (i.e., when Y is a discrete random variable), the introduction of the variable T learned through the information bottleneck principle maintained the performance of standard algorithms based on the cross-entropy loss while providing more robustness to adversarial attacks and invariance to nuisances [2,3,4]. Moreover, by the nature of its definition, the information bottleneck appears to be closely related to a trade-off between accuracy on the observable set and generalization to new, unseen instances (see Section 2).
  • Clustering. In clustering, we are presented with a set of n pairs of instances of a random variable X and their attributes of interest Y. We seek groups of instances (or clusters T) such that the attributes of interest within the instances of each cluster are similar and the attributes of interest of the instances of different clusters are dissimilar. Therefore, the information bottleneck can be employed since it allows us to aim for attribute representative clusters (maximizing the similarity between instances within the clusters) and enforce a certain compression of the random variable X (ensuring a certain difference between instances of the different clusters). This has been successfully implemented, for instance, for gene expression analysis and word, document, stock pricing, or movie rating clustering [5,6,7].
  • Image segmentation. In image segmentation, we want to partition an image into segments such that each pixel in a region shares some attributes. If we divide the image into very small regions X (e.g., each region is a pixel or a set of pixels defined by a grid), we can consider the problem of segmentation as that of clustering the regions X based on the region attributes Y. Hence, we can use the information bottleneck so that we seek region clusters T that are maximally informative about the attributes Y (e.g., the intensity histogram bins) and maintain a level of compression of the original regions X [8].
  • Quantization. In quantization, we consider a random variable X ∈ 𝒳 such that 𝒳 is a large or continuous set. Our objective is to map X into a variable T ∈ 𝒯 such that 𝒯 is a smaller, countable set. If we fix the quantization set size to |𝒯| = r, aim at maximizing the information of the quantized variable with another random variable Y, and restrict the mapping to be deterministic, then the problem is equivalent to the information bottleneck [9,10].
  • Source coding. In source coding, we consider a data source S which generates a signal Y ∈ 𝒴, which is later perturbed by a channel C : 𝒴 → 𝒳 that outputs X. We seek a coding scheme that generates a code T ∈ 𝒯 from the output of the channel X which is as informative as possible about the original source signal Y and can be transmitted at a small rate I(X;T) ≤ r. Therefore, this problem is equivalent to the formulation of the information bottleneck [11].
Furthermore, it has been employed as a tool for development or explanation in other disciplines like reinforcement learning [12,13,14], attribution methods [15], natural language processing [16], linguistics [17] or neuroscience [18]. Moreover, it has connections with other problems such as source coding with side information (or the Wyner-Ahlswede-Körner (WAK) problem), the rate-distortion problem or the cost-capacity problem (see Sections 3, 6 and 7 from [19]).
In practice, solving a constrained optimization problem such as the IB functional is challenging. Thus, in order to avoid the non-linear constraints from the IB functional, the IB Lagrangian is defined.
Definition 4 (IB Lagrangian).
Let X and Y be statistically dependent variables. Let Δ be the set of random variables T obeying the Markov condition Y → X → T. Then we define the IB Lagrangian as
L_IB(T;β) = I(T;Y) − β I(X;T).
Here β ∈ [0,1] is the Lagrange multiplier which controls the trade-off between the information of Y retained and the compression of X. Note we consider β ∈ [0,1] because (i) for β ≤ 0, many uncompressed solutions such as T = X maximize L_IB(T;β), and (ii) for β ≥ 1 the IB Lagrangian is non-positive due to the data processing inequality (DPI) (Theorem 2.8.1 of Cover and Thomas [20]) and trivial solutions like T = const are maximizers with L_IB(T;β) = 0 [21].
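As a toy illustration (not part of the original experiments), the IB Lagrangian can be evaluated exactly for discrete variables, since I(X;T) and I(T;Y) follow from the joint distribution and the encoder. All distributions below are illustrative choices for a deterministic task Y = X:

```python
import numpy as np

def mutual_information(p_joint):
    """Mutual information in bits for a joint probability matrix p_joint[a, b]."""
    pa = p_joint.sum(axis=1, keepdims=True)
    pb = p_joint.sum(axis=0, keepdims=True)
    mask = p_joint > 0
    return float((p_joint[mask] * np.log2(p_joint[mask] / (pa * pb)[mask])).sum())

def ib_lagrangian(p_xy, q_t_given_x, beta):
    """L_IB(T; beta) = I(T;Y) - beta * I(X;T) for a discrete encoder q_t_given_x[x, t]."""
    px = p_xy.sum(axis=1)
    p_xt = px[:, None] * q_t_given_x          # joint p(x, t)
    p_ty = q_t_given_x.T @ p_xy               # Markov chain Y -> X -> T gives p(t, y)
    return mutual_information(p_ty) - beta * mutual_information(p_xt)

# Example: Y = X (deterministic task) with X uniform over 4 symbols, I(X;Y) = 2 bits.
p_xy = np.eye(4) / 4
copy_encoder = np.eye(4)              # T = X: no compression
const_encoder = np.full((4, 1), 1.0)  # T = const: full compression
print(ib_lagrangian(p_xy, copy_encoder, beta=0.5))   # 2 - 0.5 * 2 = 1.0
print(ib_lagrangian(p_xy, const_encoder, beta=0.5))  # 0.0
```

Note how, at β = 0.5, the uncompressed encoder scores higher than the trivial one; as β → 1 the two scores coincide, which already hints at the degeneracy discussed in Section 3.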
We know the solutions of the IB Lagrangian optimization (if existent) are solutions of the IB functional by the Lagrange’s sufficiency theorem (Theorem 5 in Appendix A of Courcoubetis [22]). Moreover, since the IB functional is concave (Lemma 5 of Gilad-Bachrach et al. [19]) we know they exist (Theorem 6 in Appendix A of Courcoubetis [22]).
Therefore, the problem is usually solved by maximizing the IB Lagrangian with adaptations of the Blahut-Arimoto algorithm [1], deterministic annealing approaches [23] or a bottom-up greedy agglomerative clustering [6] or its improved sequential counterpart [24]. However, when provided with high-dimensional random variables X such as images, these algorithms do not scale well and deep learning-based techniques, where the IB Lagrangian is used as the objective function, prevailed [2,25,26].
Note the IB Lagrangian optimization yields a representation T with a given performance (I(X;T), I(T;Y)) for a given β. However, there is no one-to-one mapping between β and I(X;T). Hence, we cannot directly optimize for the desired compression level r but instead need to perform several optimizations for different values of β and select the representation with the desired performance; e.g., [2]. The Lagrange multiplier selection is important since (i) sometimes even choices of β < 1 lead to trivial representations such that p_T|X = p_T, and (ii) there exist some discontinuities in the performance level w.r.t. the values of β [27].
Moreover, Kolchinsky et al. [21] recently showed how in deterministic scenarios (such as many classification problems, where an input x_i belongs to a single particular class y_i) the IB Lagrangian could not explore the IB curve. In particular, they showed that multiple values of β yielded the same performance level and that a single value of β could result in different performance levels. To solve this issue, they introduced the squared IB Lagrangian, L_sq-IB(T;β_sq) = I(T;Y) − β_sq I(X;T)², which is able to explore the IB curve in any scenario by optimizing for different values of β_sq. However, even though they realized that a one-to-one mapping between β_sq and the compression level existed, they did not find such a mapping. Hence, multiple optimizations of the Lagrangian were still required to find the best trade-off solution.
The main contributions of this article are:
  • We introduce a general family of Lagrangians (the convex IB Lagrangians) which are able to explore the IB curve in any scenario, and of which the squared IB Lagrangian [21] is a particular case. More importantly, the analysis made for deriving this family of Lagrangians can serve as inspiration for obtaining new Lagrangian families that solve other objective functions with intrinsic trade-offs, such as the IB Lagrangian.
  • We show that in deterministic scenarios (and other scenarios where the IB curve shape is known) one can use the convex IB Lagrangian to obtain a desired level of performance with a single optimization. That is, there is a one-to-one mapping between the Lagrange multiplier used for the optimization and the level of compression and informativeness obtained, and we provide the exact mapping. This eliminates the need for multiple optimizations to select a suitable representation.
  • We introduce a particular case of the convex IB Lagrangians: the shifted exponential IB Lagrangian, which allows us to approximately obtain a specific compression level in any scenario. This way, we can approximately solve the initial constrained optimization problem from Equation (1) with a single optimization.
Furthermore, we provide some insight into why there are discontinuities in the performance levels w.r.t. the values of the Lagrange multipliers. In a classification setting, we connect those discontinuities with the intrinsic clusterization of the representations when optimizing the IB objective.
The structure of the article is the following: In Section 2 we motivate the usage of the IB in supervised learning settings. Then, in Section 3 we outline the important results used about the IB curve in deterministic scenarios. Later, in Section 4 we introduce the convex IB Lagrangian and explain some of its properties like the bijective mapping between Lagrange multipliers and the compression level and the range of such multipliers. After that, we support our (proved) claims with some empirical evidence on the MNIST [28] and TREC-6 [29] datasets in Section 5. Finally, in Section 6 we discuss our claims and empirical results. A PyTorch [30] implementation of the article can be found at https://github.com/burklight/convex-IB-Lagrangian-PyTorch.
In Appendix A, Appendix B, Appendix C, Appendix D, Appendix E and Appendix F we provide the proofs of the theoretical results. Then, in Appendix G we show some alternative families of Lagrangians with similar properties. Later, in Appendix H we provide the precise experimental setup details to reproduce the results from the paper, along with further experimentation on different datasets and neural network architectures. To conclude, in Appendix I we give some guidelines on how to set the convex information bottleneck Lagrangians for practical problems.

2. The IB in Supervised Learning

In this section, we will first give an overview of supervised learning in order to later motivate the usage of the information bottleneck in this setting.

2.1. Supervised Learning Overview

In supervised learning we are given a dataset D n = { ( x i , y i ) } i = 1 n of n pairs of input features and task outputs. In this case, X and Y are the random variables of the input features and the task outputs. We assume x i and y i are sampled i.i.d. from the true distribution p ( X , Y ) = p Y | X p X . The usual aim of supervised learning is to use the dataset D n to learn a particular conditional distribution q Y ^ | X of the task outputs given the input features, parametrized by θ , which is a good approximation of p Y | X . We use Y ^ and y ^ to indicate the predicted task output random variable and its outcome. We call a supervised learning task regression when Y is continuous-valued and classification when it is discrete.
Usually, supervised learning methods employ intermediate representations of the inputs before making predictions about the outputs; e.g., hidden layers in neural networks (Chapter 5 of Bishop [31]) or transformations in a feature space through the kernel trick in kernel machines like SVMs or RVMs (Sections 7.1 and 7.2 of Bishop [31]). Let T be a possibly stochastic function of the input features X with a parametrized conditional distribution q_T|X; then T obeys the Markov condition Y → X → T. The mapping from the representation to the predicted task outputs is defined by the parametrized conditional distribution q_Ŷ|T. Therefore, in representation-based machine learning methods, the full Markov chain is Y → X → T → Ŷ. Hence, the overall estimation of the conditional probability p_Y|X is given by the marginalization over the representations; i.e., q_Ŷ|X = E_{t∼q_T|X}[q_Ŷ|T=t]. (The notation q_Ŷ|T=t represents the probability distribution q_Ŷ|T(·|t;θ). For the rest of the text, we will use the same notation to represent conditional probability distributions where the conditioning argument is given.)
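A minimal sketch of this marginalization, using a hypothetical one-dimensional encoder f_enc and sigmoid decoder (both names and functional forms are assumptions of this sketch, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def f_enc(x):
    """Illustrative deterministic part of the stochastic encoder."""
    return 2.0 * x

def q_yhat_given_t(t):
    """Illustrative sigmoid decoder for a binary task; returns [P(0|t), P(1|t)]."""
    p1 = 1.0 / (1.0 + np.exp(-t))
    return np.stack([1.0 - p1, p1])

def q_yhat_given_x(x, n_samples=20_000):
    """Monte Carlo marginalization q(y_hat|x) = E_{t ~ q_T|X=x}[ q(y_hat|t) ]."""
    t = f_enc(x) + rng.normal(size=n_samples)   # T = f_enc(x) + W, W ~ N(0, 1)
    return q_yhat_given_t(t).mean(axis=1)

q = q_yhat_given_x(1.0)
print(q, q.sum())   # a valid distribution over {0, 1}, mass concentrated on class 1
```

The averaging over samples of T is exactly the expectation E_{t∼q_T|X} written above, estimated by Monte Carlo.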
In order to achieve the goal of having a good estimation of the conditional probability distribution p_Y|X, we usually define an instantaneous cost function 𝒿 : 𝒳 × 𝒴 → ℝ. The value of this function, 𝒿(x,y;θ), serves as a heuristic to measure the loss our algorithm, parametrized by θ, obtains when trying to predict the realization of the task output y from the input realization x.
Ideally, we would like to minimize the expectation of the instantaneous cost function over all possible input features and task outputs, which we call the cost function. However, since we only have a finite dataset D_n, we instead have to minimize the empirical cost function.
Definition 5 (Cost Function and Empirical Cost Function).
Let X and Y be the input features and task output random variables and x X and y Y their realizations. Let also 𝒿 be the instantaneous cost function, θ the parametrization of our learning algorithm, and D n = { ( x i , y i ) } i = 1 n the given dataset. Then, we define:
1. The cost function: J(p(X,Y);θ) = E_{(x,y)∼p(X,Y)}[𝒿(x,y;θ)]
2. The empirical cost function: Ĵ(D_n;θ) = (1/n) Σ_{i=1}^{n} 𝒿(x_i,y_i;θ)
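As a quick illustration of Definition 5 (with an assumed squared instantaneous cost and an assumed linear-Gaussian data model, neither of which is prescribed by the text), the empirical cost is simply a Monte Carlo estimate of the cost:

```python
import numpy as np

rng = np.random.default_rng(1)
theta = 1.0

def j(x, y):
    """Assumed instantaneous cost: squared prediction error of the model theta * x."""
    return (theta * x - y) ** 2

# Assumed data model p(X, Y): Y = X + Gaussian noise of std 0.5, so the true
# cost J(p(X,Y); theta = 1) is exactly the noise variance, 0.25.
n = 100_000
x = rng.normal(size=n)
y = x + rng.normal(scale=0.5, size=n)

empirical_cost = np.mean(j(x, y))   # J_hat(D_n; theta) from Definition 5
print(empirical_cost)               # close to 0.25 for large n
```

For small n the gap between this estimate and 0.25 is precisely the generalization gap of Definition 6.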
The discrepancy between the cost and empirical cost functions is called the generalization gap or generalization error (see Section 1 of Xu and Raginsky [32], for instance). Intuitively, the smaller this gap is, the better our model generalizes; i.e., the better it will perform on new, unseen samples in terms of our cost function.
Definition 6 (Generalization Gap).
Let J ( p ( X , Y ) ; θ ) and J ^ ( D n ; θ ) be the cost and the empirical cost functions as defined in Definition 5. Then, the generalization gap is defined as
gen(D_n;θ) = J(p(X,Y);θ) − Ĵ(D_n;θ),
and it represents the error incurred by the distribution parametrized by θ when Ĵ(D_n;θ) is used instead of J(p(X,Y);θ) as the function to minimize.
Ideally, we would want to minimize the cost function. Hence, we usually try to minimize the empirical cost function and the generalization gap simultaneously. The modifications to our learning algorithm which intend to reduce the generalization gap but not hurt the performance on the empirical cost function are known as regularization.

2.2. Why Do We Use the IB?

Definition 7 (Representation cross-entropy cost function).
Let X and Y be two statistically dependent variables with joint distribution p(X,Y) = p_Y|X p_X. Let also T be a random variable obeying the Markov condition Y → X → T, and let q_T|X and q_Ŷ|T be the encoding and decoding distributions of our model, parametrized by θ. Finally, let C(p_Z || q_Z) = −E_{z∼p_Z}[log(q_Z(z))] be the cross entropy between two probability distributions p_Z and q_Z. Then, the cross-entropy cost function is
J_CE(p(X,Y);θ) = E_{(x,t)∼q_T|X p_X}[C(q_Y|T=t || q_Ŷ|T=t)] = E_{(x,y)∼p(X,Y)}[𝒿_CE(x,y;θ)],
where 𝒿_CE(x,y;θ) = −E_{t∼q_T|X=x}[log(q_Ŷ|T=t(y|t;θ))] is the instantaneous representation cross-entropy cost function, and q_Y|T = E_{x∼p_X}[p_Y|X=x q_T|X=x / q_T] with q_T = E_{x∼p_X}[q_T|X=x].
The cross-entropy is a widely used cost function in classification tasks (e.g., Teahan [8], Krizhevsky et al. [33], Shore and Gray [34]) which has many interesting properties [35]. Moreover, it is known that minimizing J_CE(p(X,Y);θ) maximizes the mutual information I(T;Y). That is:
Proposition 1 (Minimizing the Cross Entropy Maximizes the Mutual Information).
Let  J CE ( p ( X , Y ) ; θ ) be the representation cross-entropy cost function as defined in Definition 7. Let also I ( T ; Y ) be the mutual information between random variables T and Y in the setting from Definition 7. Then, minimizing J CE ( p ( X , Y ) ; θ ) implies maximizing I ( T ; Y ) .
The proof of this proposition can be found in Appendix A.
Definition 8 (Nuisance).
A nuisance is any random variable that affects the observed data X but is not informative to the task we are trying to solve. That is, Ξ is a nuisance for Y if Y ⊥ Ξ, or equivalently I(Ξ;Y) = 0.
Similarly, we know that minimizing I(X;T) reduces the generalization gap for restricted classes when using the cross-entropy cost function (Theorem 1 of Vera et al. [36]), and when using I(T;Y) directly as an objective to maximize (Theorem 4 of Shamir et al. [37]). Furthermore, Achille and Soatto [38], in Proposition 3.1, upper bound by I(X;T) the information of the representations T with the nuisances Ξ that affect the observed data. Therefore, minimizing I(X;T) helps generalization by not keeping useless information about Ξ in our representations.
Thus, jointly maximizing I ( T ; Y ) and minimizing I ( X ; T ) is a good choice both in terms of performance in the available dataset and in new, unseen data, which motivates studies on the IB.

3. The Information Bottleneck in Deterministic Scenarios

Kolchinsky et al. [21] showed that when Y is a deterministic function of X (i.e., Y = f ( X ) ), the IB curve is piecewise linear. More precisely, it is shaped as stated in Proposition 2.
Proposition 2 (The IB Curve is Piecewise Linear in Deterministic Scenarios).
Let X be a random variable and Y = f ( X ) be a deterministic function of X. Let also T be the bottleneck variable that solves the IB functional. Then the IB curve in the information plane is defined by the following equation:
I(T;Y) = I(X;T)   if I(X;T) ∈ [0, I(X;Y))
I(T;Y) = I(X;Y)   if I(X;T) ≥ I(X;Y)
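The piecewise-linear curve of Proposition 2 is easy to evaluate numerically; the sketch below does so for an assumed value of I(X;Y):

```python
import numpy as np

def f_ib_deterministic(r, i_xy):
    """IB curve of Proposition 2: I(T;Y) grows one-for-one with the compression
    level r = I(X;T) until it saturates at I(X;Y)."""
    return np.minimum(r, i_xy)

i_xy = 3.0                          # assumed value of I(X;Y), in bits
r = np.array([0.0, 1.5, 3.0, 5.0])  # compression levels I(X;T)
print(f_ib_deterministic(r, i_xy))  # I(T;Y) values: 0, 1.5, 3, 3
```

The kink at r = I(X;Y) is what makes the curve non-strictly concave, which is the root of the degeneracy stated in Theorem 1.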
Furthermore, they showed that the IB curve could not be explored by optimizing the IB Lagrangian for multiple β because the curve was not strictly concave. That is, there was not a one-to-one relationship between β and the performance level.
Theorem 1 (In Deterministic Scenarios, the IB Curve cannot be Explored Using the IB Lagrangian).
Let X be a random variable and Y = f(X) be a deterministic function of X. Let also Δ be the set of random variables T obeying the Markov condition Y → X → T. Then:
1. 
Any solution T ∈ Δ such that I(X;T) ∈ [0, I(X;Y)) and I(T;Y) = I(X;T) solves arg max_{T∈Δ}{L_IB(T;β)} for β = 1. That is, many different compression and performance levels can be achieved for β = 1.
2. 
Any solution T ∈ Δ such that I(X;T) > I(X;Y) and I(T;Y) = I(X;Y) solves arg sup_{T∈Δ}{L_IB(T;β)} for β = 0. That is, many compression levels can be achieved with the same performance for β = 0.
Note we use the supremum in this case since for β = 0, I(X;T) could be infinite, and then the search set from Equation (1), i.e., {T : Y → X → T} ∩ {T : I(X;T) < ∞}, is not compact anymore.
3. 
Any solution T ∈ Δ such that I(X;T) = I(T;Y) = I(X;Y) solves arg max_{T∈Δ}{L_IB(T;β)} for all β ∈ (0,1). That is, many different β achieve the same compression and performance level.
An alternative proof for this theorem can be found in Appendix B.

4. The Convex IB Lagrangian

4.1. Exploring the IB Curve

Clearly, a situation like the one depicted in Theorem 1 is not desirable, since we cannot aim for different levels of compression or performance. For this reason, we generalize the effort of Kolchinsky et al. [21] and look for families of Lagrangians which are able to explore the IB curve. Inspired by the squared IB Lagrangian, L_sq-IB(T;β_sq) = I(T;Y) − β_sq I(X;T)², we look at the conditions a function of I(X;T) requires in order to be able to explore the IB curve. In this way, we realize that any monotonically increasing and strictly convex function will be able to do so, and we call the family of Lagrangians with these characteristics the convex IB Lagrangians, due to the nature of the introduced function.
Theorem 2 (Convex IB Lagrangians).
Let Δ be the set of random variables T obeying the Markov condition Y → X → T. Then, if u is a monotonically increasing and strictly convex function, the IB curve can always be recovered by the solutions of arg max_{T∈Δ}{L_IB,u(T;β_u)}, with
L_IB,u(T;β_u) = I(T;Y) − β_u u(I(X;T)).
That is, for each point (I(X;T), I(T;Y)) s.t. dI(T;Y)/dI(X;T) > 0, there is a unique β_u for which maximizing L_IB,u(T;β_u) achieves this solution. Furthermore, β_u is strictly decreasing w.r.t. I(X;T). We call L_IB,u(T;β_u) the convex IB Lagrangian.
The proof of this theorem can be found in Appendix C. Furthermore, by exploiting the IB curve duality (Lemma 10 of Gilad-Bachrach et al. [19]) we were able to derive other families of Lagrangians which allow for the exploration of the IB curve (Appendix G).
Remark 1.
Clearly, if u is the identity function (i.e., u(I(X;T)) = I(X;T)), then we recover the normal IB Lagrangian. However, since the identity function is not strictly convex, it cannot ensure the exploration of the IB curve.
During the proof of this theorem, we observed a relationship between the Lagrange multipliers of the normal IB Lagrangian L_IB(T;β) and the convex IB Lagrangian L_IB,u(T;β_u) and the solutions they obtain. This relationship is formalized in the following corollary.
Corollary 1 (IB Lagrangian and Convex IB Lagrangian Connection).
Let L_IB(T;β) be the IB Lagrangian and L_IB,u(T;β_u) the convex IB Lagrangian. Then, maximizing L_IB(T;β) and L_IB,u(T;β_u) can obtain the same point in the IB curve if β_u = β / u′(I(X;T)), where u′ is the derivative of u.
This corollary allows us to better understand why the addition of u allows for the exploration of the IB curve in deterministic scenarios. If we note that for β = 1 we can obtain any point in the increasing region of the curve, then we clearly see how evaluating u′ at different values of I(X;T) defines different values of β_u that obtain such points. Moreover, it lets us see how, if for β = 0 maximizing the IB Lagrangian could obtain any point (I(X;T), I(X;Y)) with I(X;T) > I(X;Y), then the same happens for the convex IB Lagrangian.

4.2. Aiming for a Specific Compression Level

Let B_u denote the domain of Lagrange multipliers β_u for which we can find solutions in the IB curve with the convex IB Lagrangian. Then, the convex IB Lagrangians not only allow us to explore the IB curve with different β_u; they also allow us to identify the specific β_u that obtains a given point (I(X;T), I(T;Y)), provided we know the IB curve in the information plane. Conversely, the convex IB Lagrangian allows finding the specific point (I(X;T), I(T;Y)) that is obtained by a given β_u.
Proposition 3 (Bijective Mapping between IB Curve Point and Convex IB Lagrange multiplier).
Let the IB curve in the information plane be known; i.e., I(T;Y) = f_IB(I(X;T)) is known. Then there is a bijective mapping from Lagrange multipliers β_u ∈ B_u \ {0} of the convex IB Lagrangian to points (I(X;T), f_IB(I(X;T))) in the IB curve. Furthermore, these mappings are:
β_u = (df_IB(I(X;T))/dI(X;T)) · (1/u′(I(X;T)))   and   I(X;T) = (u′)⁻¹((df_IB(I(X;T))/dI(X;T)) · (1/β_u)),
where u′ is the derivative of u and (u′)⁻¹ is the inverse of u′.
This is especially interesting since in deterministic scenarios we know the shape of the IB curve (Proposition 2) and the convex IB Lagrangians allow for its exploration (Theorem 2). A proof for Proposition 3 can be found in Appendix D.
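In the deterministic case, f′_IB(r) = 1 on the increasing branch of the curve, so the mappings of Proposition 3 reduce to β_u = 1/u′(r) and I(X;T) = (u′)⁻¹(1/β_u). A small sketch with the exponential choice u(r) = exp(ηr) (the particular u and the value of η are assumptions of this illustration):

```python
import numpy as np

eta = 1.0
u_prime = lambda r: eta * np.exp(eta * r)      # u'(r) for u(r) = exp(eta * r)
u_prime_inv = lambda s: np.log(s / eta) / eta  # (u')^{-1}(s)

r = 2.0                          # desired compression level I(X;T), in bits
beta_u = 1.0 / u_prime(r)        # multiplier targeting that point (f_IB'(r) = 1)
r_back = u_prime_inv(1.0 / beta_u)
print(beta_u, r_back)            # the round trip recovers r = 2.0
```

Any other strictly convex, monotonically increasing u works the same way; only the explicit forms of u′ and (u′)⁻¹ change.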
Remark 2.
Note that the definition from Tishby et al. [1], β = df_IB(I(X;T))/dI(X;T), only allows for a bijection between β and I(X;T) if f_IB is a strictly concave, known function, and we have seen this is not the case in deterministic scenarios (Theorem 1).
A direct result derived from this proposition is that, if the shape of the IB curve is known, we know the domain of Lagrange multipliers B_u which allows for its exploration. Furthermore, if the shape is not known, we can at least bound that range.
Corollary 2 (Domain of Convex IB Lagrange Multiplier with Known IB Curve Shape).
Let the IB curve in the information plane be I(T;Y) = f_IB(I(X;T)) and let I_max = I(X;Y). Let also I(X;T) = r_max be the minimum mutual information s.t. f_IB(r_max) = I_max; i.e., r_max = inf{r : f_IB(r) = I_max}. Then, the range of Lagrange multipliers that allow the exploration of the IB curve with the convex IB Lagrangian is B_u = [β_u,min, β_u,max], with
β_u,min = lim_{r→r_max⁻} f′_IB(r)/u′(r)   and   β_u,max = lim_{r→0⁺} f′_IB(r)/u′(r),
where f′_IB(r) and u′(r) are the derivatives of f_IB(I(X;T)) and u(I(X;T)) w.r.t. I(X;T) evaluated at r, respectively. Also, note that there are some scenarios where r_max → ∞ (see, e.g., [39]); in these scenarios, β_u,min = lim_{r→∞} f′_IB(r)/u′(r) ≥ 0.
Corollary 3 (Domain of Convex IB Lagrange Multiplier Bound).
The range of the Lagrange multipliers that allow the exploration of the IB curve is contained in [0, β_u,top], which is in turn contained in [0, β_u,top⁺], where
β_u,top = (inf_{Ω_x⊆𝒳}{β₀(Ω_x)})⁻¹ / lim_{r→0⁺} u′(r)   and   β_u,top⁺ = 1 / lim_{r→0⁺} u′(r),
where u′(r) is the derivative of u(I(X;T)) w.r.t. I(X;T) evaluated at r, 𝒳 is the set of possible realizations of X, and β₀ and Ω_x are defined as in [27]. (Note in [27] they consider the dual problem (see Appendix G), so when they refer to β⁻¹ it translates to β in this article.) That is, B_u ⊆ [0, β_u,top] ⊆ [0, β_u,top⁺].
Corollaries 2 and 3 allow us to reduce the search range for β_u when we want to explore the IB curve. In practice, inf_{Ω_x⊆𝒳}{β₀(Ω_x)} might be difficult to calculate, so Wu et al. [27] derived an algorithm to approximate it. However, we still recommend setting the numerator to 1 for simplicity. The proofs of both corollaries are found in Appendix E and Appendix F.

5. Experimental Support

In order to showcase our claims, we use the MNIST [28] and the TREC-6 [29] datasets. We modify the nonlinear-IB method [26], which is a neural network that minimizes the cross-entropy while also minimizing a differentiable kernel-based estimate of I(X;T) [40]. We then use this technique to maximize a lower bound on the convex IB Lagrangians by applying the functions u to the I(X;T) estimate.
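A sketch of how such a training objective can be assembled (the function names and the scalar stand-ins for the cross-entropy and the I(X;T) estimate are illustrative, not the actual nonlinear-IB implementation):

```python
import numpy as np

def u_pow(ixt, alpha=1.0):
    """u for the power IB Lagrangian; alpha = 1 gives the squared IB Lagrangian."""
    return ixt ** (1.0 + alpha)

def u_exp(ixt, eta=1.0):
    """u for the exponential IB Lagrangian."""
    return np.exp(eta * ixt)

def convex_ib_loss(cross_entropy, ixt_estimate, beta_u, u):
    """Minimizing this is (a bound on) maximizing I(T;Y) - beta_u * u(I(X;T)):
    the cross-entropy term stands in for -I(T;Y) (Proposition 1), and u is
    applied to the differentiable I(X;T) estimate."""
    return cross_entropy + beta_u * u(ixt_estimate)

# Illustrative scalar stand-ins for one training step's estimates:
print(convex_ib_loss(0.3, 2.0, 0.1, u_pow))   # 0.3 + 0.1 * 2^2 ≈ 0.7
```

In the real setup, both inputs are differentiable tensors, so gradient descent on this loss trains the encoder and decoder end-to-end.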
The network structure is the following: first, a stochastic encoder T = f_enc(X;θ) + W with p_W = N(0, I_d) such that T ∈ ℝ^d, where d is the dimension of the bottleneck variable. (Note that the encoder needs to be stochastic to (i) ensure a finite and well-defined mutual information [21,41] and (ii) make gradient-based optimization methods over the IB Lagrangian useful [41].) Second, a deterministic decoder q_Ŷ|T = f_dec(T;θ). For the MNIST dataset, both the encoder and the decoder are fully-connected networks, for a fair comparison with [26]. For the TREC-6 dataset, the encoder is a set of convolutions of word embeddings followed by a fully-connected network, and the decoder is also a fully-connected network. For further details about the experimental setup, additional results for different values of α and η, and supplementary experimental results for different datasets and network architectures, please refer to Appendix H.
In Figure 1 we show our results for two particularizations of the convex IB Lagrangians:
  • the power IB Lagrangians: L_IB,pow(T;β_pow,α) = I(T;Y) − β_pow I(X;T)^(1+α), with α > 0 (note when α = 1 we have the squared IB functional from Kolchinsky et al. [21]);
  • the exponential IB Lagrangians: L_IB,exp(T;β_exp,η) = I(T;Y) − β_exp exp(η I(X;T)), with η > 0.
We can clearly see how both Lagrangians are able to explore the IB curve (first column of Figure 1) and how the theoretical performance trend of the Lagrangians matches the experimental results (second and third columns of Figure 1). There are small mismatches between the theoretical and experimental performance. This is because using the nonlinear-IB, as stated by Kolchinsky et al. [21], does not guarantee that we find optimal representations, due to factors like (i) inaccurate estimation of I(X;T), (ii) restrictions on the structure of T, (iii) use of an estimation of the decoder instead of the real one, and (iv) the typical non-convex optimization issues that arise with gradient-based methods. The main difference comes from the discontinuities in performance for increasing β, whose cause is still unknown (cf. Wu et al. [27]). It has been observed, however, that the bottleneck variable performs an intrinsic clusterization in classification tasks (see, for instance, [21,26,42] or Figure 2b). We observed how this clusterization matches the quantized performance levels observed (e.g., compare Figure 2a with the top center graph in Figure 1), with maximum performance when the number of clusters is equal to the cardinality of Y and decreasing performance as the number of clusters decreases, which is in line with the concurrent work of Wu and Fischer [43]. We do not have a mathematical proof of the exact relationship between these two phenomena; however, we agree with Wu et al. [27] that it is an interesting matter and hope this observation serves as motivation to derive new theory.
In practice, there are different criteria for choosing the function u. For instance, the exponential IB Lagrangian can be more desirable than the power IB Lagrangian when we want to draw the IB curve, since it has a finite range of β u . This is B u = [ ( η exp ( η I max ) ) ^−1 , η ^−1 ] for the exponential IB Lagrangian vs. B u = [ ( ( 1 + α ) I max ^α ) ^−1 , ∞ ) for the power IB Lagrangian. Furthermore, there is a trade-off between (i) how closely the selected u function resembles a linear function in our region of interest (e.g., with α or η close to zero), since it will then suffer from problems similar to those of the original IB Lagrangian; and (ii) how fast it grows in our region of interest (e.g., with higher values of α or η ), since it will then suffer from value convergence; i.e., optimizing for separate values of β u will achieve similar levels of performance (Figure 3). Please refer to Appendix I for a more thorough explanation of these two phenomena.
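These ranges of β u can be computed directly; a small sketch (assuming an estimate I max of the maximal compression is available; the helper names are ours):

```python
import math

def beta_range_exponential(eta, i_max):
    # B_u = [(eta * exp(eta * I_max))^-1, eta^-1] for u(I) = exp(eta * I)
    return 1.0 / (eta * math.exp(eta * i_max)), 1.0 / eta

def beta_range_power(alpha, i_max):
    # B_u = [((1 + alpha) * I_max^alpha)^-1, inf) for u(I) = I^(1 + alpha)
    return 1.0 / ((1 + alpha) * i_max ** alpha), math.inf
```

The exponential choice yields a finite interval to sweep, while the power choice has no finite upper end, which is what makes the former easier when drawing the whole curve.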
In particular, the value convergence phenomenon can be exploited in order to approximately obtain a particular level of compression r , both for known and unknown IB curves (see Appendix I or the example in Figure 4). For known IB curves (e.g., in deterministic scenarios), we also know the achieved predictability I ( T ; Y ) , since it coincides with the level of compression I ( X ; T ) . For this purpose, we can employ the shifted version of the exponential IB Lagrangian (which is also a particular case of the convex IB Lagrangian):
  • the shifted exponential IB Lagrangians:
    L IB , sh exp ( T ; β sh exp , η , r ) = I ( T ; Y ) − β sh exp exp ( η ( I ( X ; T ) − r ) ) , η > 0 , r ∈ [ 0 , ∞ ) .
For this Lagrangian, the optimization procedure converges to representations with approximately the desired compression level r if the hyperparameter η is set to a large value.
In Figure 4 we show the results of aiming for a compression level of r = 2 bits on the MNIST dataset and of r = 16 bits on the TREC-6 dataset, both with η = 200 . We can see how for different values of β sh exp we obtain the same desired compression level, which makes this method robust to variations in the selection of the Lagrange multiplier.
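A sketch of this objective makes the mechanism apparent (names are ours; i_ty and i_xt stand for the mutual-information estimates):

```python
import math

def shifted_exp_ib_lagrangian(i_ty, i_xt, beta, eta, r_target):
    # L_IB,sh-exp = I(T;Y) - beta * exp(eta * (I(X;T) - r)), eta > 0, r >= 0
    return i_ty - beta * math.exp(eta * (i_xt - r_target))

# For large eta the penalty term is negligible when I(X;T) < r and
# enormous when I(X;T) > r, so the optimum is pinned near I(X;T) = r
# for a wide range of beta values.
```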
To sum up, in order to achieve a desired level of performance with the convex IB Lagrangian as an objective one should:
  • In a deterministic or close to a deterministic setting (see the ϵ -deterministic definition in Kolchinsky et al. [21]): use the adequate β u for that performance, obtained via Proposition 3. Then, if the performance is lower than desired (i.e., we are placed in the wrong performance plateau), gradually reduce the value of β u until reaching the previous performance plateau. Alternatively, exploit the value convergence phenomenon with, for instance, the shifted exponential IB Lagrangian.
  • In a stochastic setting: exploit the value convergence phenomenon with, for instance, the shifted exponential IB Lagrangian. Alternatively, draw the IB curve with multiple values of β u on the range defined by Corollary 3 and select the representation that best fits one's interests.

6. Conclusions

The information bottleneck is a widely used and studied technique. However, it is known that the IB Lagrangian cannot be used to achieve varying levels of performance in deterministic scenarios. Moreover, in order to achieve a particular level of performance, multiple optimizations with different Lagrange multipliers must be done to draw the IB curve and select the best traded-off representation.
In this article we introduced a general family of Lagrangians that allows one to (i) achieve varying levels of performance in any scenario, and (ii) pinpoint a specific Lagrange multiplier β u to optimize for a specific performance level in known IB curve scenarios, e.g., deterministic ones. Furthermore, we showed the β u domain when the IB curve is known and a bound on the β u domain for exploring the IB curve when it is unknown. This way we can reduce and/or avoid multiple optimizations and, hence, reduce the computational effort for finding well traded-off representations. Moreover, (iii) we saw how we can exploit the value convergence issue of the convex IB Lagrangian to approximately obtain a specific compression level for both known and unknown IB curve shapes. Finally, (iv) we provided some insight into the discontinuities in the performance levels w.r.t. the Lagrange multipliers by connecting them with the intrinsic clusterization of the bottleneck variable.

Author Contributions

Conceptualization, B.R.G. and R.T.; formal analysis, B.R.G.; funding acquisition, M.S.; methodology, B.R.G. and R.T.; resources, M.S.; software, B.R.G.; supervision, R.T. and M.S.; visualization, B.R.G.; writing—original draft, B.R.G.; writing—review and editing, B.R.G., R.T. and M.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Swedish Research Council.

Acknowledgments

We want to thank the anonymous reviewers for their insightful comments.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proof of Proposition 1

Proof. 
We can prove this statement by showing that I ( T ; Y ) is lower bounded by γ J CE ( p ( X , Y ) ; θ ) + C , where γ < 0 and C does not depend on T. In this way, maximizing this lower bound is equivalent to minimizing J CE ( p ( X , Y ) ; θ ) and, moreover, it implies maximizing I ( T ; Y ) .
We can find such an expression as follows:
I ( T ; Y ) = E ( y , t ) ∼ q Y | T q T [ log ( q Y | T = t ( y | t ; θ ) / p Y ( y ) ) ] = H ( Y ) + E ( y , t ) ∼ q Y | T q T [ log q Y | T = t ( y | t ; θ ) ]
= H ( Y ) + E t ∼ q T [ D KL ( q Y | T = t | | q Y ^ | T = t ) ] + E ( y , t ) ∼ q Y | T q T [ log q Y ^ | T ( y | t ; θ ) ]
≥ H ( Y ) + E ( x , y , t ) ∼ q Y | T q T | X p X [ log q Y ^ | T = t ( y | t ; θ ) ] = H ( Y ) − E ( x , t ) ∼ q T | X p X [ C ( q Y | T = t | | q Y ^ | T = t ) ]
= H ( Y ) − J CE ( p ( X , Y ) ; θ ) .
Here, in Equation (A1) we just used the definition of the mutual information between two random variables and then decoupled it using the definition of the entropy of a variable (Note we used H ( · ) , which is usually employed for discrete variables. However, in this setting H ( · ) could also refer to the differential entropy h ( · ) of a continuous random variable, since we employed the general definition using the expectation). Then, in Equation (A2) we only multiplied and divided by q Y ^ | T inside the logarithm and employed the definition of the Kullback–Leibler divergence. Finally, in Equation (A3) we first used the fact that the Kullback–Leibler divergence is non-negative (Theorem 2.6.3 from Cover and Thomas [20]) and then the properties of the Markov chain T − X − Y .
Therefore, since H ( Y ) does not depend on T and we have a negative multiplicative term on J CE ( p ( X , Y ) ; θ ) the proposition is proved. □
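The bound can be checked numerically on a toy example; the joint distribution and the (suboptimal) decoder below are arbitrary illustrative choices of ours, with the encoder fixed to t = x:

```python
import math

# Arbitrary toy joint p(x, y) over X, Y in {0, 1}, encoder t = x, and an
# arbitrary imperfect decoder q(y | t) (all hypothetical illustration values).
p_xy = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}
q_dec = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.4, 1: 0.6}}

p_y = {y: sum(p for (x, y2), p in p_xy.items() if y2 == y) for y in (0, 1)}
p_t = {t: sum(p for (x, y), p in p_xy.items() if x == t) for t in (0, 1)}

h_y = -sum(p * math.log(p) for p in p_y.values())            # H(Y), in nats

# I(T;Y): with t = x, the joint of (T, Y) is exactly p_xy.
i_ty = sum(p * math.log(p / (p_t[t] * p_y[y])) for (t, y), p in p_xy.items())

# Cross-entropy objective J_CE = E[-log q(y | t)].
j_ce = -sum(p * math.log(q_dec[x][y]) for (x, y), p in p_xy.items())

assert i_ty >= h_y - j_ce        # I(T;Y) >= H(Y) - J_CE, as proved above
```

With the true decoder in place of q_dec the gap would close, since the dropped KL term would vanish.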

Appendix B. Alternative Proof of Theorem 1

Proof. 
We will prove all the enumerated statements sequentially, since the third one requires the first two to be proved.
  • Proposition 2 states that the IB curve in the information plane follows the equation I ( T ; Y ) = I ( X ; T ) if I ( X ; T ) ∈ [ 0 , I ( X ; Y ) ) . Then, since β = d I ( T ; Y ) / d I ( X ; T ) [1], we know β = 1 in all these points. Therefore, for β = 1 all points ( I ( X ; T ) , I ( X ; T ) ) such that I ( X ; T ) ∈ [ 0 , I ( X ; Y ) ) are solutions of optimizing the IB Lagrangian.
  • Similarly, Proposition 2 states that the IB curve follows the equation I ( T ; Y ) = I ( X ; Y ) if I ( X ; T ) ≥ I ( X ; Y ) . Then, since β = d I ( T ; Y ) / d I ( X ; T ) [1], we know β = 0 in all points such that I ( X ; T ) > I ( X ; Y ) . We cannot ensure it at I ( X ; T ) = I ( X ; Y ) since β = 1 for I ( X ; T ) = lim ϵ → 0 + { I ( X ; Y ) − ϵ } .
  • Finally, in order to prove the last statement we will first prove that if β ∈ ( 0 , 1 ) achieves a solution, it is ( I ( X ; Y ) , I ( X ; Y ) ) . Then, we will prove that if the solution ( I ( X ; Y ) , I ( X ; Y ) ) exists, it can be yielded by any β ∈ ( 0 , 1 ) . Hence, the solution ( I ( X ; Y ) , I ( X ; Y ) ) is achieved ∀ β ∈ ( 0 , 1 ) and it is the only solution achievable.
    (a)
    Since the IB curve is concave, we know β is non-increasing in I ( X ; T ) ∈ R + . We also know β = 1 at the points in the IB curve where I ( X ; T ) ≤ lim ϵ → 0 + { I ( X ; Y ) − ϵ } and β = 0 at the points in the IB curve where I ( X ; T ) ≥ lim ϵ → 0 + { I ( X ; Y ) + ϵ } . Hence, if we achieve a solution with β ∈ ( 0 , 1 ) , this solution is I ( X ; T ) = I ( T ; Y ) = I ( X ; Y ) .
    (b)
    We can upper bound the IB Lagrangian by
    L IB ( T ; β ) = I ( T ; Y ) − β I ( X ; T ) ≤ ( 1 − β ) I ( T ; Y ) ≤ ( 1 − β ) I ( X ; Y ) ,
    where the first and second inequalities use the DPI (Theorem 2.8.1 from Cover and Thomas [20]).
    Then, we can consider the point of the IB curve ( I ( X ; Y ) , I ( X ; Y ) ) . Since the function is concave, a tangent line at ( I ( X ; Y ) , I ( X ; Y ) ) exists such that all other points in the curve lie below this line. Let β be the slope of this tangent line (which we know is the Lagrange multiplier from Tishby et al. [1]). Then,
    I ( X ; Y ) − β I ( X ; Y ) = ( 1 − β ) I ( X ; Y ) ≥ F IB , max ( r ) − β r , ∀ r ∈ [ 0 , ∞ ) .
    As we see, by the upper bound on the IB Lagrangian from Equation (A5), if the point ( I ( X ; Y ) , I ( X ; Y ) ) exists, any β ∈ ( 0 , 1 ) can be the slope of the tangent line at ( I ( X ; Y ) , I ( X ; Y ) ) that ensures concavity. □
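The statements of this theorem can also be illustrated numerically on an idealized deterministic IB curve f IB ( r ) = min { r , I ( X ; Y ) } ; the values and grid search below are purely illustrative:

```python
# Idealized deterministic IB curve: f_IB(r) = min(r, i_xy). We evaluate the
# IB Lagrangian I(T;Y) - beta * I(X;T) along the curve on a grid and locate
# its maximizer (illustrative values only).
i_xy = 1.0
grid = [k / 1000 for k in range(0, 2001)]        # r in [0, 2]

def best_r(beta):
    return max(grid, key=lambda r: min(r, i_xy) - beta * r)

# Any beta in (0, 1) selects the corner point (I(X;Y), I(X;Y)), while
# beta > 1 collapses to the trivial solution r = 0; at beta = 1 all of
# [0, I(X;Y)] ties, which is the non-explorable case.
```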

Appendix C. Proof of Theorem 2

Proof. 
We start the proof by remembering the optimization problem at hand (Definition 1):
F IB , max ( r ) = max T ∈ Δ { I ( T ; Y ) } s . t . I ( X ; T ) ≤ r
We can modify the optimization problem by
max T ∈ Δ { I ( T ; Y ) } s . t . u ( I ( X ; T ) ) ≤ u ( r )
iff u is a monotonically non-decreasing function, since otherwise u ( I ( X ; T ) ) ≤ u ( r ) would not necessarily hold. Now, let us assume there exist T ∗ ∈ Δ and β u s.t. T ∗ maximizes L IB , u ( T ; β u ) over all T ∈ Δ , and I ( X ; T ∗ ) ≤ r . Then, we can operate as follows:
max T ∈ Δ , u ( I ( X ; T ) ) ≤ u ( r ) { I ( T ; Y ) } = max T ∈ Δ , u ( I ( X ; T ) ) ≤ u ( r ) { I ( T ; Y ) − β u ( u ( I ( X ; T ) ) − u ( r ) + ξ ) }
≤ max T ∈ Δ { I ( T ; Y ) − β u ( u ( I ( X ; T ) ) − u ( r ) + ξ ) }
= I ( T ∗ ; Y ) − β u ( u ( I ( X ; T ∗ ) ) − u ( r ) + ξ ) = I ( T ∗ ; Y ) .
Here, the equality from Equation (A9) comes from the fact that, since I ( X ; T ∗ ) ≤ r , there exists ξ ≥ 0 s.t. u ( I ( X ; T ∗ ) ) − u ( r ) + ξ = 0 . Then, the inequality from Equation (A10) holds since we have expanded the optimization search space. Finally, in Equation (A11) we use that T ∗ maximizes L IB , u ( T ; β u ) and that I ( X ; T ∗ ) ≤ r .
Now, we can exploit that u ( r ) and ξ do not depend on T and drop them in the maximization in Equation (A10). We can then realize we are maximizing L IB , u ( T ; β u ) ; i.e.,
max T ∈ Δ , u ( I ( X ; T ) ) ≤ u ( r ) { I ( T ; Y ) } ≤ max T ∈ Δ { I ( T ; Y ) − β u ( u ( I ( X ; T ) ) − u ( r ) + ξ ) }
= max T ∈ Δ { I ( T ; Y ) − β u u ( I ( X ; T ) ) } = max T ∈ Δ { L IB , u ( T ; β u ) } .
Therefore, since I ( T ∗ ; Y ) satisfies both the maximization with T ∈ Δ and the constraint I ( X ; T ∗ ) ≤ r , maximizing L IB , u ( T ; β u ) obtains F IB , max ( r ) .
Now, we know if such β u exists, then the solution of the Lagrangian will be a solution for F IB , max ( r ) . Then, if we consider Theorem 6 from the Appendix of Courcoubetis [22] and consider the maximization problem instead of the minimization problem, we know that if both I ( T ; Y ) and − u ( I ( X ; T ) ) are concave functions, then a set of Lagrange multipliers S u with these conditions exists. We can make this consideration because f is concave iff − f is convex and max { f } = − min { − f } . We know I ( T ; Y ) is a concave function of T for T ∈ Δ (Lemma 5 of Gilad-Bachrach et al. [19]) and I ( X ; T ) is convex w.r.t. T given p X is fixed (Theorem 2.7.4 of Cover and Thomas [20]). Thus, if we want − u ( I ( X ; T ) ) to be concave, we need u to be a convex function.
Finally, we will look at the conditions of u so that for every point ( I ( X ; T ) , I ( T ; Y ) ) in the IB curve, there exists a unique β u s.t. L IB , u ( T ; β u ) is maximized. That is, the conditions of u s.t. | S u | = 1 . For this purpose we will look at the solutions of the Lagrangian optimization:
d L IB , u ( T ; β u ) / d T = d ( I ( T ; Y ) − β u u ( I ( X ; T ) ) ) / d T = d I ( T ; Y ) / d T − β u ( d u ( I ( X ; T ) ) / d I ( X ; T ) ) ( d I ( X ; T ) / d T ) = 0
Now, if we integrate both sides of Equation (A14) over all T Δ we obtain
β u = ( d I ( T ; Y ) / d I ( X ; T ) ) ( d u ( I ( X ; T ) ) / d I ( X ; T ) ) ^−1 = β / u ′ ( I ( X ; T ) ) ,
where β is the Lagrange multiplier from the IB Lagrangian [1] and u ′ ( I ( X ; T ) ) is d u ( I ( X ; T ) ) / d I ( X ; T ) . Also, if we want to avoid indeterminations of β u , we need u ′ ( I ( X ; T ) ) not to be 0. Since we already imposed u to be monotonically non-decreasing, we can solve this issue by strengthening this condition. That is, we will require u to be monotonically increasing.
We would like β u to be continuous, so that there is a unique β u for each value of I ( X ; T ) . We know β is a non-increasing function of I ( X ; T ) (Lemma 6 of Gilad-Bachrach et al. [19]). Hence, if we want β u to be a strictly decreasing function of I ( X ; T ) , we will require u ′ to be a strictly increasing function of I ( X ; T ) . Therefore, we will require u to be a strictly convex function.
Thus, if u is a strictly convex and monotonically increasing function, for each point ( I ( X ; T ) , I ( T ; Y ) ) in the IB curve s.t. d I ( T ; Y ) / d I ( X ; T ) > 0 there is a unique β u for which maximizing L IB , u ( T ; β u ) achieves this solution. □

Appendix D. Proof of Proposition 3

Proof. 
In Theorem 2 we showed how each point of the IB curve ( I ( X ; T ) , I ( T ; Y ) ) can be found with a unique β u maximizing L IB , u ( T ; β u ) . Therefore, since we also proved L IB , u ( T ; β u ) is strictly concave w.r.t. T we can find the values of β u that maximize the Lagrangian for fixed I ( X ; T ) .
First, we look at the solutions of the Lagrangian maximization:
d L IB , u ( T ; β u ) / d T = d ( f IB ( I ( X ; T ) ) − β u u ( I ( X ; T ) ) ) / d T = d f IB ( I ( X ; T ) ) / d T − β u ( d u ( I ( X ; T ) ) / d I ( X ; T ) ) ( d I ( X ; T ) / d T ) = 0 .
Then as before we can integrate at both sides for all T Δ and solve for β u :
β u = ( d f IB ( I ( X ; T ) ) / d I ( X ; T ) ) ( 1 / u ′ ( I ( X ; T ) ) ) .
Moreover, since u is a strictly convex function, its derivative u ′ is strictly increasing. Hence, u ′ is an invertible function (since a strictly increasing function is bijective onto its image and a function is invertible iff it is bijective by definition). Now, if we consider β u > 0 to be known and I ( X ; T ) to be the unknown, we can solve for I ( X ; T ) and get:
I ( X ; T ) = ( u ′ ) ^−1 ( ( d f IB ( I ( X ; T ) ) / d I ( X ; T ) ) ( 1 / β u ) ) .
Note we require β u not to be 0 so the mapping is defined. □
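In the deterministic case, where f IB ( r ) = r and hence d f IB / d I ( X ; T ) = 1 , this mapping has simple closed forms for the power and exponential choices of u. A sketch (the function names are ours):

```python
import math

def beta_for_compression_power(r, alpha):
    # u(I) = I^(1 + alpha)  =>  u'(I) = (1 + alpha) * I^alpha,
    # so beta_u = 1 / u'(r) on a deterministic curve (f'_IB = 1).
    return 1.0 / ((1 + alpha) * r ** alpha)

def compression_for_beta_power(beta, alpha):
    # Inverse mapping: r = (u')^{-1}(1 / beta)
    return (1.0 / ((1 + alpha) * beta)) ** (1.0 / alpha)

def beta_for_compression_exp(r, eta):
    # u(I) = exp(eta * I)  =>  u'(I) = eta * exp(eta * I)
    return 1.0 / (eta * math.exp(eta * r))

def compression_for_beta_exp(beta, eta):
    return math.log(1.0 / (eta * beta)) / eta
```

The two directions are mutually inverse, which is the one-to-one property the proposition establishes.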

Appendix E. Proof of Corollary 2

Proof. 
We will start the proof by proving the following useful Lemma.
Lemma A1.
Let L IB , u ( T ; β u ) be a convex IB Lagrangian, then sup T Δ { L IB , u ( T ; 0 ) } = I ( X ; Y ) .
Proof. 
Since L IB , u ( T ; 0 ) = I ( T ; Y ) , maximizing this Lagrangian directly maximizes I ( T ; Y ) . We know I ( T ; Y ) is a concave function of T for T ∈ Δ (Theorem 2.7.4 from Cover and Thomas [20]); hence it has a supremum. We also know I ( T ; Y ) ≤ I ( X ; Y ) . Moreover, we know I ( X ; Y ) can be achieved if, for example, Y is a deterministic function of T (since then the Markov chain X − T − Y is formed). Thus, sup T ∈ Δ { L IB , u ( T ; 0 ) } = I ( X ; Y ) . □
For β u = 0 , we know that by maximizing L IB , u ( T ; 0 ) we can obtain the point in the IB curve ( r max , I max ) (Lemma A1). Moreover, we know that for every point ( I ( X ; T ) , f IB ( I ( X ; T ) ) ) such that d f IB ( I ( X ; T ) ) / d I ( X ; T ) > 0 , ∃ ! β u s.t. max { L IB , u ( T ; β u ) } achieves that point (Theorem 2). Thus, ∃ ! β u , min s.t. lim r → r max − ( r , f IB ( r ) ) is achieved. From Proposition 3 we know this β u , min is given by
β u , min = lim r → r max − { f IB ′ ( r ) / u ′ ( r ) } .
Since we know f IB ( I ( X ; T ) ) is a concave non-decreasing function in ( 0 , r max ) (Lemma 5 of Gilad-Bachrach et al. [19]), we know it is continuous in this interval. In addition, we know β u is strictly decreasing w.r.t. I ( X ; T ) (Theorem 2). Furthermore, by definition of r max and knowing I ( T ; Y ) ≤ I ( X ; Y ) , we know f IB ′ ( r ) = 0 , ∀ r > r max . Therefore, we cannot ensure the exploration of the IB curve for β u s.t. 0 < β u < β u , min .
Then, since u is a strictly increasing function in ( 0 , r max ) , u ′ is positive in that interval. Hence, taking into account that β u is strictly decreasing, we can find a maximum β u when I ( X ; T ) approaches 0. That is,
β u , max = lim r → 0 + { f IB ′ ( r ) / u ′ ( r ) } ,
 □

Appendix F. Proof of Corollary 3

Proof. 
If we use Corollary 2, it is straightforward to see that β u ∈ [ L − , L + ] if β u , min ≥ L − and β u , max ≤ L + for all IB curves f IB and functions u. Therefore, we look at a domain bound dependent on the function choice. That is, if we can find β min ≤ f IB ′ ( r ) and β max ≥ f IB ′ ( r ) for all IB curves and all values of r, then
B u ⊆ [ β min / lim r → r max − { u ′ ( r ) } , β max / lim r → 0 + { u ′ ( r ) } ] .
The region of all possible IB curves, regardless of the relationship between X and Y, is depicted in Figure A1. The hard limits are imposed by the DPI (Theorem 2.8.1 from Cover and Thomas [20]) and the fact that the mutual information is non-negative (Corollary with Equation 2.90 for discrete and first Corollary of Theorem 8.6.1 for continuous random variables from Cover and Thomas [20]). Hence, minimum and maximum values of f IB ′ are given by the minimum and maximum values of the slope of the Pareto frontier, which means
B u ⊆ [ 0 , 1 / lim r → 0 + { u ′ ( r ) } ] .
Note that 0 / ( lim r → r max − { u ′ ( r ) } ) = 0 , since u ′ is monotonically increasing and, thus, u ′ will never be 0.
Then, we can tighten the bound using the results from Wu et al. [27], where, in Theorem 2, they showed that the slope of the Pareto frontier at the origin can be bounded by f IB ′ ≤ ( inf Ω x ⊆ X { β 0 ( Ω x ) } ) ^−1 . Finally, we know that in deterministic classification tasks inf Ω x ⊆ X { β 0 ( Ω x ) } = 1 , which aligns with Kolchinsky et al. [21] and with what we can observe from Figure A1. Therefore,
B u ⊆ [ 0 , ( inf Ω x ⊆ X { β 0 ( Ω x ) } ) ^−1 / lim r → 0 + { u ′ ( r ) } ] ⊆ [ 0 , 1 / lim r → 0 + { u ′ ( r ) } ] .
 □
Figure A1. Graphical representation of the IB curve in the information plane. Dashed lines in orange represent tight bounds confining the region (in light orange) of possible IB curves (delimited by the red line, also known as the Pareto frontier). Black dotted lines are informative values. In blue we show an example of a possible IB curve confining a region (in darker orange) of an IB curve that does not achieve the Pareto frontier. Finally, the yellow star represents the point where the representation keeps the same information about the input and the output.

Appendix G. Other Lagrangian Families

We can use the same ideas we used for the convex IB Lagrangian to formulate new families of Lagrangians that allow the exploration of the IB curve. For that, we will use the duality of the IB curve (Lemma 10 of [19]). That is:
Definition A1 (IB Dual Functional).
Let X and Y be statistically dependent variables. Let also Δ be the set of random variables T obeying the Markov condition Y X T . Then the IB dual functional is
F IB , min ( i ) = min T ∈ Δ { I ( X ; T ) } s . t . I ( T ; Y ) ≥ i , i ∈ [ 0 , I ( X ; Y ) ) .
Theorem A1 (IB Curve Duality).
Let the IB curve be defined by the solutions of F IB , max ( r ) for varying r [ 0 , ) . Then,
∀ r ∃ i s . t . ( r , F IB , max ( r ) ) = ( F IB , min ( i ) , i )
and
∀ i ∃ r s . t . ( F IB , min ( i ) , i ) = ( r , F IB , max ( r ) ) .
From this definition, it follows that minimizing the dual IB Lagrangian, L IB , dual ( T ; β dual ) = I ( X ; T ) − β dual I ( T ; Y ) , for β dual = β ^−1 is equivalent to maximizing the IB Lagrangian. In fact, the original Lagrangian for solving the problem was defined this way [1]. We decided to use the maximization version because the domain of useful β is bounded, while that of β dual is not.
Following the same reasoning as we did in the proof of Theorem 2, we can ensure the IB curve can be explored if:
  • We minimize the concave IB Lagrangian L IB , v ( T ; β v ) = I ( X ; T ) − β v v ( I ( T ; Y ) ) .
  • We maximize the dual concave IB Lagrangian L IB , v , dual ( T ; β v , dual ) = v ( I ( T ; Y ) ) − β v , dual I ( X ; T ) .
  • We minimize the dual convex IB Lagrangian L IB , u , dual ( T ; β u , dual ) = u ( I ( X ; T ) ) − β u , dual I ( T ; Y ) .
Here, u is a monotonically increasing strictly convex function, v is a monotonically increasing strictly concave function, and β v , β v , dual , β u , dual are the Lagrange multipliers of the families of Lagrangians defined above.
In a similar manner, one could obtain relationships between the Lagrange multipliers of the IB Lagrangian and the convex IB Lagrangian with these Lagrangian families. For instance, the convex IB Lagrangian L IB , u ( T ; β u ) is related to the concave IB Lagrangian L IB , v ( T ; β v ) as stated in Proposition A1.
Proposition A1 (Relationship between the convex and concave IB Lagrangians).
Consider the convex and concave IB Lagrangians L IB , u ( T ; β u ) , L IB , v ( T ; β v ) . Let the IB curve defined as in Definition 2 be f IB . Then, if we fix the functions u and v we can obtain the same point in the IB curve ( r , f IB ( r ) ) with both Lagrangians when
β v ^−1 = f IB ′ ( r ) v ′ ( f IB ( ( u ′ ) ^−1 ( f IB ′ ( r ) / β u ) ) ) ,
or equivalently,
β u ^−1 = ( 1 / f IB ′ ( r ) ) u ′ ( f IB ^−1 ( ( v ′ ) ^−1 ( 1 / ( β v f IB ′ ( r ) ) ) ) ) .
Proof. 
If we proceed like we did in the proof of Proposition 3 we can find the mapping between I ( X ; T ) and β u and between I ( T ; Y ) and β v . That is,
I ( X ; T ) = ( u ′ ) ^−1 ( ( d f IB ( I ( X ; T ) ) / d I ( X ; T ) ) ( 1 / β u ) ) and I ( T ; Y ) = ( v ′ ) ^−1 ( ( d f IB ( I ( X ; T ) ) / d I ( X ; T ) ) ^−1 ( 1 / β v ) ) .
Then, if we recall that I ( T ; Y ) = f IB ( I ( X ; T ) ) , we can directly obtain that
f IB ( ( u ′ ) ^−1 ( ( d f IB ( I ( X ; T ) ) / d I ( X ; T ) ) ( 1 / β u ) ) ) = ( v ′ ) ^−1 ( ( d f IB ( I ( X ; T ) ) / d I ( X ; T ) ) ^−1 ( 1 / β v ) ) .
Then, if we solve Equation (A30) with a fixed point ( I ( X ; T ) = r , I ( T ; Y ) = f IB ( r ) ) for β v we obtain Equation (A27), and if we solve it for β u we obtain Equation (A28). □
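Proposition A1 can be checked numerically on a deterministic curve f IB ( r ) = r ; the concrete choices u ( I ) = I ^2 , v ( I ) = √ I and the grid search below are illustrative choices of ours:

```python
import math

# Deterministic curve f_IB(r) = r (so f'_IB = 1), with u(I) = I^2 (strictly
# convex) and v(I) = sqrt(I) (strictly concave); illustrative grid search.
grid = [k / 10000 for k in range(1, 40001)]      # r in (0, 4]

beta_u = 0.25
r_convex = max(grid, key=lambda r: r - beta_u * r * r)   # maximize convex IB Lagrangian

# Proposition A1 with f'_IB = 1: beta_v^{-1} = v'((u')^{-1}(1 / beta_u)).
r_star = 1.0 / (2.0 * beta_u)                    # (u')^{-1}(1/beta_u), u'(I) = 2I
beta_v = 2.0 * math.sqrt(r_star)                 # 1 / v'(r_star), v'(I) = 1/(2 sqrt(I))

# Minimize the concave IB Lagrangian I(X;T) - beta_v * v(I(T;Y)) = r - beta_v * sqrt(r).
r_concave = min(grid, key=lambda r: r - beta_v * math.sqrt(r))
```

Both Lagrangians land on the same curve point, as the proposition predicts.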
Also, one could find a range of values for these Lagrangians that allows for the IB curve exploration and define a bijective mapping between their Lagrange multipliers and the IB curve. However, (i) as mentioned in Section 2.2, I ( T ; Y ) is particularly interesting to maximize without transformations because of its meaning. Moreover, (ii) like β dual , the domain of useful β v and β u , dual is not upper bounded. These two reasons make these other Lagrangians less preferable. We only include them here for completeness. Nonetheless, we encourage the curious reader to explore these families of Lagrangians too. For example, an interesting line of research would be to investigate whether some particularization of the concave IB Lagrangian suffers from an issue like value convergence that can be exploited for approximately obtaining any predictability level I ( T ; Y ) = i for many values of β v .

Appendix H. Experimental Setup Details and Further Experiments

In order to generate empirical support for our claims, we performed several experiments on different datasets with different neural network architectures and different ways of calculating the information bottleneck.

Appendix H.1. Information Bottleneck Calculations

The information bottleneck is calculated modifying either the nonlinear-IB [26]. This method of calculating the information bottleneck uses a neural network that minimizes the cross-entropy while also minimizing an upper-bound estimate I θ ( X ; T ) of the mutual information I ( X ; T ) . The nonlinear-IB relies on a kernel-based estimate of this mutual information [40]. We modify this calculation method by applying the function u to the I ( X ; T ) estimate.
For the nonlinear-IB calculations, we estimated the gradients of both I θ ( X ; T ) and the cross-entropy with the same mini-batch. Moreover, we did not learn the covariance of the mixture of Gaussians used for the kernel density estimation of I θ ( X ; T ) and we set it to ( exp ( − 1 ) ) ^2 .
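As a rough sketch of the kind of kernel-density estimate involved (simplified from the estimator of [40]: we evaluate the mixture at the per-sample means rather than at fresh samples and omit the refinements of the actual nonlinear-IB implementation; the function name and inputs are ours):

```python
import math

def mi_kde_estimate(means, sigma):
    """Kernel-density MI sketch for a Gaussian encoder T = f_enc(X) + W with
    W ~ N(0, sigma^2 I): I(X;T) = h(T) - h(W), where h(T) is estimated from a
    mixture-of-Gaussians KDE centered at the per-sample means f_enc(x_i)."""
    n, d = len(means), len(means[0])
    log_norm = -0.5 * d * math.log(2 * math.pi * sigma ** 2)
    h_t = 0.0
    for mu_i in means:
        # log p_hat(mu_i) via a numerically stable log-sum-exp
        logs = [log_norm - sum((a - b) ** 2 for a, b in zip(mu_i, mu_j))
                / (2 * sigma ** 2) for mu_j in means]
        m = max(logs)
        h_t -= (m + math.log(sum(math.exp(v - m) for v in logs) / n)) / n
    h_w = 0.5 * d * math.log(2 * math.pi * math.e * sigma ** 2)  # h(W)
    return h_t - h_w
```

For well-separated means the estimate approaches log N minus a constant offset (an artifact of evaluating at the means), illustrating how the spread of the representations relative to the encoder noise drives the estimated compression.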
In both methods, and for all the experiments, we assumed a Gaussian stochastic encoder T = f enc ( X ; θ ) + W with p W = N ( 0 , I d ) , where d is the number of dimensions of the representations. We trained the neural networks with the Adam optimization algorithm [46] with a learning rate of 10 ^−4 and a 0.6 decay rate every 10 epochs. We used a batch size of 128 samples and all the weights were initialized according to the method described by Glorot and Bengio [47] using a Gaussian distribution.
Then, we used the DBSCAN algorithm [44,45] for clustering. Particularly, we used the scikit-learn [48] implementation with ϵ = 0.3 and min_samples = 50.
The reader can find the PyTorch [30] implementation in the following link: https://github.com/burklight/convex-IB-Lagrangian-PyTorch.

Appendix H.2. The Experiments

We performed experiments in four different datasets:
  • A Classification Task on the MNIST Dataset [28] (Figure 1, Figure 2, Figure A2, Figure A3 and Figure A4 and top row from Figure 3). This dataset contains 60,000 training samples and 10,000 testing samples of hand-written digits. The samples are 28 × 28 pixels and are labeled from 0 to 9; i.e., X = R 784 and Y = { 0 , 1 , … , 9 } . The data is pre-processed so that the input has zero mean and unit variance. This is a deterministic setting; hence, the experiment is designed to showcase how the convex IB Lagrangians allow us to explore the IB curve in a setting where the normal IB Lagrangian cannot, and the relationship between the performance plateaus and the clusterization phenomena. Furthermore, it intends to showcase the behavior of the power and exponential Lagrangians with different parameters α and η . Finally, it demonstrates how value convergence can be employed to approximately obtain a specific compression value. In this experiment, the encoder f enc is a three fully-connected layer encoder with 800 ReLU units on the first two layers and two linear units on the last layer ( T ∈ R 2 ), and the decoder f dec is a fully-connected layer of 800 ReLU units followed by an output layer with 10 softmax units. The convex IB Lagrangian was calculated using the nonlinear-IB.
    In Figure A2 we show how the IB curve can be explored with different values of α for the power IB Lagrangian and in Figure A3 for different values of η and the exponential IB Lagrangian.
    Finally, in Figure A4 we show the clusterization for the same values of α and η as in Figure A2 and Figure A3. In this way, the connection between the performance discontinuities and the clusterization is more evident. Furthermore, we can also observe how the exponential IB Lagrangian tracks the theoretical performance better than the power IB Lagrangian (see Appendix I for an explanation of why).
  • A Classification Task on the Fashion-MNIST Dataset [49] (Figure A5). As MNIST, this dataset contains 60,000 training and 10,000 testing samples of 28 × 28 pixel images labeled from 0 to 9 and constitutes a deterministic setting. The difference is that this dataset contains fashion products instead of hand-written digits and represents a harder classification task [49]. The data is also pre-processed so that the input has zero mean and unit variance. For this experiment, the encoder f enc is composed of a two-layer convolutional neural network (CNN) with 32 filters on the first layer and 128 filters on the second, with kernels of size 5 and stride 2. This CNN is followed by two fully-connected layers of 128 linear units ( T ∈ R 128 ). After the first convolution and the first fully-connected layer, a ReLU activation is employed. The decoder f dec is a fully-connected 128 ReLU unit layer followed by an output layer with 10 softmax units. The convex IB Lagrangian was calculated using the nonlinear-IB. Therefore, this experiment intends to showcase how the convex IB Lagrangian can explore the IB curve for different neural network architectures and harder datasets.
  • A Regression Task on the California Housing Dataset [50] (Figure A6). This dataset contains 20,640 samples of 8 real-valued input variables, such as the longitude and latitude of the house (i.e., X = R 8 ), and a real-valued task output variable representing the price of the house (i.e., Y = R ). We used the log-transformed house price as the target variable and dropped the 992 samples in which the house price was equal to or greater than $500,000, so that the output distribution was closer to a Gaussian, as was done in [26]. The input variables were processed so that they had zero mean and unit variance, and we randomly split the samples into a 70% training and 30% test dataset. As in [40], for regression tasks we approximate H ( Y ) with the entropy of a Gaussian with variance Var ( Y ) and H ( Y | T ) with the entropy of a Gaussian with variance equal to the mean-squared error (MSE). This leads to the estimate I ( T ; Y ) ≈ 0.5 log ( Var ( Y ) / MSE ) . The encoder f enc is a three fully-connected layer encoder with 128 ReLU units on the first two layers and 2 linear units on the last layer ( T ∈ R 2 ), and the decoder f dec is a fully-connected 128 ReLU unit layer followed by an output layer with 1 linear unit. The convex IB Lagrangian was calculated using the nonlinear-IB. Hence, this experiment was designed to showcase that the convex IB Lagrangian can explore the IB curve in stochastic scenarios for regression tasks.
  • A Classification Task on the TREC-6 Dataset [29] (Figure A7 and bottom row from Figure 3). This dataset is the six-class version of the TREC [51] dataset. It contains 5452 training and 500 test samples of text questions. Each question is labeled within six different semantic categories based on what the answer is; namely: abbreviation, description and abstract concepts, entities, human beings, locations, and numeric values. This dataset does not constitute a deterministic setting, since there are examples that could belong to more than one class and examples which are wrongly labeled (e.g., “What is a fear of parasites?” could belong both to the description and abstract concepts category; however, it is labeled into the entity category), and hence H ( Y | X ) > 0 . Following Ben Trevett’s tutorial on Sentiment Analysis [52], the encoder f enc is composed of a 6-billion-token pre-trained 100-dimensional GloVe word embedding [53], followed by a concatenation of three convolutions with kernel sizes 2–4, respectively, and finalized with a fully-connected 128 linear unit layer ( T ∈ R 128 ). The decoder f dec is a single fully-connected 6 softmax unit layer. The convex IB Lagrangian was calculated using the nonlinear-IB. Thus, this experiment intends to show an example where the classification task does not convey a deterministic scenario, that the convex IB Lagrangian can recover the IB curve in complex stochastic tasks with complex neural network architectures, and that value convergence can be employed to obtain a specific compression value even in stochastic settings where the IB curve is unknown.
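For the regression setting above, the Gaussian approximation I ( T ; Y ) ≈ 0.5 log ( Var ( Y ) / MSE ) is straightforward to compute; a sketch with plain lists (natural logarithm; divide by log 2 for bits; the function name is ours):

```python
import math

def regression_mi_estimate(y_true, y_pred):
    # I(T;Y) ~ 0.5 * log(Var(Y) / MSE): H(Y) is approximated by the entropy of
    # a Gaussian with variance Var(Y), and H(Y|T) by that of a Gaussian with
    # variance equal to the mean-squared error, as in [40].
    n = len(y_true)
    mean = sum(y_true) / n
    var = sum((y - mean) ** 2 for y in y_true) / n
    mse = sum((y - p) ** 2 for y, p in zip(y_true, y_pred)) / n
    return 0.5 * math.log(var / mse)   # undefined for a perfect fit (MSE = 0)
```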
Figure A2. Results for the power IB Lagrangian in the MNIST dataset with α = { 0.5 , 1 , 2 } , from top to bottom. In each row, from left to right it is shown (i) the information plane, where the region of possible solutions of the IB problem is shadowed in light orange and the information-theoretic limits are the dashed orange line; (ii) I ( T ; Y ) as a function of β u ; and (iii) the compression I ( X ; T ) as a function of β u . In all plots, the red crosses joined by a dotted line represent the values computed with the training set, the blue dots the values computed with the validation set and the green stars the theoretical values computed as dictated by Proposition 3. Moreover, in all plots, it is indicated I ( X ; Y ) = H ( Y ) = log 2 ( 10 ) in a dashed, orange line. All values are shown in bits.
Figure A3. Results for the exponential IB Lagrangian in the MNIST dataset with η = { log ( 2 ) , 1 , 1.5 } , from top to bottom. In each row, from left to right it is shown (i) the information plane, where the region of possible solutions of the IB problem is shadowed in light orange and the information-theoretic limits are the dashed orange line; (ii) I ( T ; Y ) as a function of β u ; and (iii) the compression I ( X ; T ) as a function of β u . In all plots, the red crosses joined by a dotted line represent the values computed with the training set, the blue dots the values computed with the validation set and the green stars the theoretical values computed as dictated by Proposition 3. Moreover, in all plots, it is indicated I ( X ; Y ) = H ( Y ) = log 2 ( 10 ) in a dashed, orange line. All values are shown in bits.
Figure A4. Depiction of the clusterization behavior of the bottleneck variable in the MNIST dataset. In the first row, from left to right, the power IB Lagrangian with different values of α = { 0.5 , 1 , 2 } . In the second row, from left to right, the exponential IB Lagrangian with different values of η = { log ( 2 ) , 1 , 1.5 } .
Figure A5. Results for the exponential IB Lagrangian in the Fashion MNIST dataset with η = 1 . From left to right it is shown (i) the information plane, where the region of possible solutions of the IB problem is shadowed in light orange and the information-theoretic limits are the dashed orange line; (ii) I ( T ; Y ) as a function of β u ; and (iii) the compression I ( X ; T ) as a function of β u . In all plots, the red crosses joined by a dotted line represent the values computed with the training set and the blue dots the values computed with the validation set. Moreover, in all plots, it is indicated I ( X ; Y ) = H ( Y ) = log 2 ( 10 ) . All values are shown in bits.
Figure A6. The top row shows the results for the normal IB Lagrangian, and the bottom row for the exponential IB Lagrangian with η = 1 , both in the California housing dataset. In each row, from left to right it is shown (i) the information plane, where the region of possible solutions of the IB problem is shadowed in light orange and the information-theoretic limits are the dashed orange line; (ii) I ( T ; Y ) as a function of β u ; and (iii) the compression I ( X ; T ) as a function of β u . In all plots, the red crosses joined by a dotted line represent the values computed with the training set and the blue dots the values computed with the validation set. Moreover, in all plots, it is indicated I ( X ; Y ) as the empirical value obtained maximizing I ( T ; Y ) without compression limitations as in [26]. All values are shown in bits.
Figure A7. The top row shows the results for the normal IB Lagrangian, and the bottom row for the power IB Lagrangian with α = 0.1 , both in the TREC-6 dataset. In each row, from left to right it is shown (i) the information plane, where the region of possible solutions of the IB problem is shadowed in light orange and the information-theoretic limits are the dashed orange line; (ii) I ( T ; Y ) as a function of β u ; and (iii) the compression I ( X ; T ) as a function of β u . In all plots, the red crosses joined by a dotted line represent the values computed with the training set and the blue dots the values computed with the validation set. Moreover, in all plots, it is indicated H ( Y ) = log 2 ( 6 ) . All values are shown in bits.

Appendix I. Guidelines for Selecting A Proper Function in the Convex IB Lagrangian

When choosing a suitable function u, it is important to strike a balance between avoiding value convergence and aiming for strong convexity. In practice, this balance is assessed by examining how much faster u grows than the identity function.
When the aim is not to draw the IB curve but to reach a specific level of performance, we can instead exploit the value convergence phenomenon to design a stable, performance-targeted function u.

Appendix I.1. Avoiding Value Convergence

In order to explain this issue we are going to use the example of classification on MNIST [28], where I ( X ; Y ) = H ( Y ) = log 2 ( 10 ) , and again the power and exponential IB Lagrangians.
If we use Proposition 3 on both Lagrangians, we obtain the bijective mappings between their Lagrange multipliers and a certain level of compression in the classification setting:
  • Power IB Lagrangian: β_pow = ((1 + α) I(X;T)^α)^(−1) and I(X;T) = ((1 + α) β_pow)^(−1/α).
  • Exponential IB Lagrangian: β_exp = (η exp(η I(X;T)))^(−1) and I(X;T) = −log(η β_exp)/η.
Hence, we can simply plot the curves of I(X;T) vs. β_u for different hyperparameters α and η (see Figure A8). In this way, we can observe how increasing the growth of the function too much (e.g., by increasing α or η) causes many different values of β_u to map to very similar values of I(X;T). This is an issue both for drawing the curve (for obvious reasons) and for aiming at a specific performance level: due to the nature of the estimation of the IB Lagrangian, the theoretical and practical values of β_u that yield a specific I(X;T) may differ slightly (see Figure 1), so if we select a function that grows too fast, a small change in β_u can result in a large change in the performance obtained.
Figure A8. Theoretical bijection between I(X;T) and β_u: for different α, with β_u ranging from β_u,min to 1.5, in the power IB Lagrangian (top); and for different η, with β_u in the domain B_u, in the exponential IB Lagrangian (bottom).
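Taking the deterministic case f′_IB = 1, these two mappings can be verified numerically. Below is a minimal sketch; the helper names and the sampled values of α, η, and β_u are ours, purely for illustration:

```python
import math

# Closed-form mappings from Proposition 3 in the deterministic case (f_IB' = 1).
# a and h play the roles of alpha and eta.
def beta_pow(I, a): return 1.0 / ((1 + a) * I ** a)
def I_pow(beta, a): return ((1 + a) * beta) ** (-1.0 / a)

def beta_exp(I, h): return 1.0 / (h * math.exp(h * I))
def I_exp(beta, h): return -math.log(h * beta) / h

# Round trip: each mapping is invertible, so beta_u <-> I(X;T) is a bijection.
assert abs(I_pow(beta_pow(2.5, 1.0), 1.0) - 2.5) < 1e-12
assert abs(I_exp(beta_exp(2.5, 1.0), 1.0) - 2.5) < 1e-12

# Value convergence: for the exponential Lagrangian a 100x change in beta
# moves I(X;T) by only log(100)/eta, which vanishes as eta grows.
spread = lambda h: I_exp(1e-4, h) - I_exp(1e-2, h)
print(round(spread(1.0), 3), round(spread(50.0), 3))  # 4.605 0.092
```

This is exactly the behavior visible in Figure A8: the faster u grows, the more the curves flatten and the harder it becomes to target a compression level by tuning β_u.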

Appendix I.2. Aiming for Strong Convexity

Definition A2 ( μ -Strong Convexity).
If a function f(r) is twice continuously differentiable and its domain is confined to the real line, then it is μ-strongly convex if f″(r) ≥ μ ≥ 0 for all r.
Experimentally, we observed that when the growth of our function u(r) is small in the domain of interest r > 0, the convex IB Lagrangian does not perform well (see the first rows of Figure A2 and Figure A3). We later realized that this is closely related to the strength of the convexity of the function.
In Theorem 2 we imposed that the function u be strictly convex in order to enforce a unique β_u for each value of I(X;T). Hence, since in practice we do not compute the Lagrangian exactly but only an estimate of it (e.g., with the nonlinear IB [26]), we require strong convexity in order to be able to explore the IB curve.
We now look at the second derivatives of the power and exponential functions: u″(r) = (1 + α)α r^(α−1) and u″(r) = η² exp(ηr), respectively. Both functions are inherently 0-strongly convex for r > 0 and α, η > 0. However, values of α < 1 or η < 1 can lead to low μ-strong convexity in certain regions of r. In particular, the case α < 1 is dangerous because the function approaches 0-strong convexity as r increases, so the power IB Lagrangian performs poorly when low values of α are used to aim for high performance.
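These second derivatives can be evaluated directly to see the loss of strong convexity for α < 1. A short sketch (helper names ours, sample points illustrative):

```python
import math

# Second derivatives of the two u functions used in the text:
#   power:       u''(r) = (1 + a) * a * r**(a - 1)
#   exponential: u''(r) = h**2 * exp(h * r)
def mu_pow(r, a): return (1 + a) * a * r ** (a - 1)
def mu_exp(r, h): return h ** 2 * math.exp(h * r)

# With a < 1 the power Lagrangian approaches 0-strong convexity as r grows,
# while the exponential Lagrangian becomes ever more strongly convex.
for r in (0.5, 2.0, 8.0):
    print(r, round(mu_pow(r, 0.5), 4), round(mu_exp(r, 1.0), 4))
```

The decay of u″(r) for α < 1 at large r is why aiming for high performance (large I(X;T)) with a small α works poorly in practice.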

Appendix I.3. Exploiting Value Convergence

When the aim is not to draw or explore the IB curve but to obtain a specific level of performance, the aforementioned power or exponential IB Lagrangians might not be the best choice due to the problems of value convergence or weak convexity. However, we can exploit value convergence itself in order to design a performance-targeted function u.
For instance, looking at Figure A8, we can see how a modification of the exponential IB Lagrangian can produce such a function. More precisely, a shifted exponential u(r) = exp(η(r − r*)), with η > 0 sufficiently large, converges to the compression level r*. We can see this more clearly if we consider the shifted exponential IB Lagrangian L_IB,sh-exp(T; β_sh-exp, η, r*) = I(T;Y) − β_sh-exp exp(η(I(X;T) − r*)), since then the application of Proposition 3 results in I(X;T) = −log(η β_sh-exp / f′_IB(I(X;T)))/η + r*, where f′_IB(I(X;T)) is the derivative of f_IB evaluated at I(X;T). We know f′_IB = 1 in deterministic scenarios (Theorem 2) and f′_IB < 1 otherwise (see, e.g., [27]). Then, for large enough η, I(X;T) ≈ r* regardless of the value of f′_IB.
For instance, if we consider a deterministic scenario like the MNIST dataset [28] with I(X;Y) = H(Y) = log2(10), for η = 200 and r* = 2 the range of Lagrange multipliers that allows the exploration of the IB curve, according to Corollary 2, is β_sh-exp ∈ [7.54 × 10^(−178), 2.61 × 10^(171)]. Furthermore, I(X;T) is close to 2 for many values of β_sh-exp: for instance, I(X;T) = 1.974 for β_sh-exp = 1 and I(X;T) = 1.963 for β_sh-exp = 8. This ensures stability in the performance level obtained, so that small changes in the choice of β_sh-exp do not result in significant changes in performance (e.g., see the top row of Figure 4).
If we now consider a stochastic scenario like the TREC-6 dataset [29] with H(Y) = log2(6), for η = 200 and r* = 16 the range of Lagrange multipliers that allows the exploration of the IB curve, according to Corollary 3, is β_sh-exp ∈ [0, 2.76 × 10^(1287) (inf_{Ω_x ⊆ X} {β_0(Ω_x)})^(−1)], where β_0 and Ω_x are defined as in [27]. Then, unless (inf_{Ω_x ⊆ X} {β_0(Ω_x)})^(−1) is of the order of 10^(−1287), the range of possible betas is wide. Moreover, I(X;T) is close to 16 for many values of β_sh-exp. For example, for β_sh-exp = 1, I(X;T) = 15.939 if f′_IB = 0.001 at that point and I(X;T) = 15.973 if f′_IB = 0.9; and for β_sh-exp = 8, I(X;T) = 15.929 if f′_IB = 0.001 and I(X;T) = 15.963 if f′_IB = 0.9. Hence, as in the deterministic scenario, the performance level obtained is stable under changes in the choice of β_sh-exp (e.g., see the bottom row of Figure 4).
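The stability claimed above can be checked directly from the fixed-point expression. The sketch below (the helper name is ours; `fp` stands for f′_IB(I(X;T)), treated as a given constant) reproduces the values quoted in this appendix:

```python
import math

# Fixed point of the shifted exponential IB Lagrangian:
#   I(X;T) = -log(eta * beta / fp) / eta + r_star,
# where fp = f_IB'(I(X;T)): 1 in deterministic settings, < 1 otherwise.
def I_shifted_exp(beta, eta, r_star, fp=1.0):
    return -math.log(eta * beta / fp) / eta + r_star

# Deterministic scenario (MNIST), eta = 200, target r* = 2:
print(round(I_shifted_exp(1, 200, 2), 3))             # 1.974
print(round(I_shifted_exp(8, 200, 2), 3))             # 1.963

# Stochastic scenario (TREC-6), eta = 200, target r* = 16, unknown fp:
print(round(I_shifted_exp(1, 200, 16, fp=0.001), 3))  # 15.939
print(round(I_shifted_exp(1, 200, 16, fp=0.9), 3))    # 15.973
```

An 8-fold change in β_sh-exp, or a 900-fold change in f′_IB, moves the achieved compression level by only a few hundredths of a bit, which is the stability that makes the shifted exponential suitable for targeting a specific r*.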

References

  1. Tishby, N.; Pereira, F.C.; Bialek, W. The information bottleneck method. arXiv 2000, arXiv:physics/0004057. [Google Scholar]
  2. Alemi, A.A.; Fischer, I.; Dillon, J.V.; Murphy, K. Deep variational information bottleneck. arXiv 2016, arXiv:1612.00410. [Google Scholar]
  3. Peng, X.B.; Kanazawa, A.; Toyer, S.; Abbeel, P.; Levine, S. Variational Discriminator Bottleneck: Improving Imitation Learning, Inverse RL, and GANs by Constraining Information Flow. In Proceedings of the International Conference on Learning Representations (ICLR), New Orleans, LA, USA, 6–9 May 2019. [Google Scholar]
  4. Achille, A.; Soatto, S. Information dropout: Learning optimal representations through noisy computation. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 2897–2905. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Slonim, N.; Tishby, N. Document clustering using word clusters via the information bottleneck method. In Proceedings of the 23rd annual international ACM SIGIR Conference on Research and Development in Information Retrieval, Athens, Greece, 24–28 July 2000. [Google Scholar]
  6. Slonim, N.; Tishby, N. Agglomerative information bottleneck. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2000. [Google Scholar]
  7. Slonim, N.; Atwal, G.S.; Tkačik, G.; Bialek, W. Information-based clustering. Proc. Natl. Acad. Sci. USA 2005, 102, 18297–18302. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  8. Teahan, W.J. Text classification and segmentation using minimum cross-entropy. In Content-Based Multimedia Information Access; Le Centre de Hautes Études Internationales d’Informatique Documentaire: Paris, France, 2000; pp. 943–961. [Google Scholar]
  9. Strouse, D.; Schwab, D.J. The deterministic information bottleneck. Neur. Comput. 2017, 29, 1611–1630. [Google Scholar] [CrossRef] [Green Version]
  10. Nazer, B.; Ordentlich, O.; Polyanskiy, Y. Information-distilling quantizers. In Proceedings of the 2017 IEEE International Symposium on Information Theory (ISIT), Aachen, Germany, 25–30 June 2017; pp. 96–100. [Google Scholar]
  11. Hassanpour, S.; Wübben, D.; Dekorsy, A. On the equivalence of double maxima and KL-means for information bottleneck-based source coding. In Proceedings of the IEEE Wireless Communications and Networking Conference (WCNC), Barcelona, Spain, 15–18 April 2018; pp. 1–6. [Google Scholar]
  12. Goyal, A.; Islam, R.; Strouse, D.; Ahmed, Z.; Botvinick, M.; Larochelle, H.; Levine, S.; Bengio, Y. Infobot: Transfer and exploration via the information bottleneck. arXiv 2019, arXiv:1901.10902. [Google Scholar]
  13. Yingjun, P.; Xinwen, H. Learning Representations in Reinforcement Learning: An Information Bottleneck Approach. arXiv 2019, arXiv:cs.LG/1911.05695. [Google Scholar]
  14. Sharma, A.; Gu, S.; Levine, S.; Kumar, V.; Hausman, K. Dynamics-Aware Unsupervised Skill Discovery. In Proceedings of the International Conference on Learning Representations (ICLR), Addis Ababa, Ethiopia, 26–30 April 2020. [Google Scholar]
  15. Schulz, K.; Sixt, L.; Tombari, F.; Landgraf, T. Restricting the Flow: Information Bottlenecks for Attribution. In Proceedings of the International Conference on Learning Representations (ICLR), Addis Ababa, Ethiopia, 26–30 April 2020. [Google Scholar]
  16. Li, X.L.; Eisner, J. Specializing Word Embeddings (for Parsing) by Information Bottleneck. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, 3–7 November 2019; pp. 2744–2754. [Google Scholar]
  17. Zaslavsky, N.; Kemp, C.; Regier, T.; Tishby, N. Efficient compression in color naming and its evolution. Proc. Natl. Acad. Sci. USA 2018, 115, 7937–7942. [Google Scholar] [CrossRef] [Green Version]
  18. Chalk, M.; Marre, O.; Tkačik, G. Toward a unified theory of efficient, predictive, and sparse coding. Proc. Natl. Acad. Sci. USA 2018, 115, 186–191. [Google Scholar] [CrossRef] [Green Version]
  19. Gilad-Bachrach, R.; Navot, A.; Tishby, N. An information theoretic tradeoff between complexity and accuracy. In Learning Theory and Kernel Machines; Springer: Berlin, Germany, 2003; pp. 595–609. [Google Scholar]
  20. Cover, T.M.; Thomas, J.A. Elements of Information Theory; John Wiley & Sons: Hoboken, NJ, USA, 2012. [Google Scholar]
  21. Kolchinsky, A.; Tracey, B.D.; Van Kuyk, S. Caveats for information bottleneck in deterministic scenarios. In Proceedings of the International Conference on Learning Representations (ICLR), New Orleans, LA, USA, 6–9 May 2019. [Google Scholar]
  22. Courcoubetis, C. Pricing Communication Networks: Economics, Technology and Modelling; Wiley Online Library: Hoboken, NJ, USA, 2003. [Google Scholar]
  23. Tishby, N.; Slonim, N. Data clustering by Markovian relaxation and the information bottleneck method. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2001; pp. 640–646. [Google Scholar]
  24. Slonim, N.; Friedman, N.; Tishby, N. Unsupervised document classification using sequential information maximization. In Proceedings of the 25th annual international ACM SIGIR Conference on Research and Development in Information Retrieval, Tampere, Finland, 11–15 August 2002. [Google Scholar]
  25. Chalk, M.; Marre, O.; Tkacik, G. Relevant sparse codes with variational information bottleneck. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2016; pp. 1957–1965. [Google Scholar]
  26. Kolchinsky, A.; Tracey, B.D.; Wolpert, D.H. Nonlinear information bottleneck. Entropy 2019, 21, 1181. [Google Scholar] [CrossRef] [Green Version]
  27. Wu, T.; Fischer, I.; Chuang, I.; Tegmark, M. Learnability for the Information Bottleneck. In Proceedings of the International Conference on Learning Representations (ICLR), New Orleans, LA, USA, 6–9 May 2019. [Google Scholar]
  28. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar]
  29. Li, X.; Roth, D. Learning question classifiers. In Proceedings of the 19th International Conference on Computational Linguistics—Volume 1; Association for Computational Linguistics: Stroudsburg, PA, USA, 2002; pp. 1–7. [Google Scholar]
  30. Paszke, A.; Gross, S.; Chintala, S.; Chanan, G.; Yang, E.; DeVito, Z.; Lin, Z.; Desmaison, A.; Antiga, L.; Lerer, A. Automatic differentiation in pytorch. In Proceedings of the NIPS Autodiff Workshop, Long Beach, CA, USA, 9 December 2017. [Google Scholar]
  31. Bishop, C.M. Pattern Recognition and Machine Learning; Springer Science+ Business Media: Berlin, Germany, 2006. [Google Scholar]
  32. Xu, A.; Raginsky, M. Information-theoretic analysis of generalization capability of learning algorithms. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2017; pp. 2524–2533. [Google Scholar]
  33. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2012; pp. 1097–1105. [Google Scholar]
  34. Shore, J.E.; Gray, R.M. Minimum cross-entropy pattern classification and cluster analysis. IEEE Trans. Pattern Anal. Mach. Intell. 1982, 1, 11–17. [Google Scholar] [CrossRef] [PubMed]
  35. Shore, J.; Johnson, R. Properties of cross-entropy minimization. IEEE Trans. Pattern Anal. Mach. Intell. 1981, 27, 472–482. [Google Scholar] [CrossRef] [Green Version]
  36. Vera, M.; Piantanida, P.; Vega, L.R. The role of the information bottleneck in representation learning. In Proceedings of the 2018 IEEE International Symposium on Information Theory (ISIT), Vail, CO, USA, 17–22 June 2018; pp. 1580–1584. [Google Scholar]
  37. Shamir, O.; Sabato, S.; Tishby, N. Learning and generalization with the information bottleneck. Theor. Comput. Sci. 2010, 411, 2696–2711. [Google Scholar] [CrossRef] [Green Version]
  38. Achille, A.; Soatto, S. Emergence of invariance and disentanglement in deep representations. J. Mach. Learn. Res 2018, 19, 1947–1980. [Google Scholar]
  39. Du Pin Calmon, F.; Polyanskiy, Y.; Wu, Y. Strong data processing inequalities for input constrained additive noise channels. IEEE Trans. Inf. Theory 2017, 64, 1879–1892. [Google Scholar] [CrossRef]
  40. Kolchinsky, A.; Tracey, B. Estimating mixture entropy with pairwise distances. Entropy 2017, 19, 361. [Google Scholar] [CrossRef] [Green Version]
  41. Amjad, R.A.; Geiger, B.C. Learning representations for neural network-based classification using the information bottleneck principle. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 1. [Google Scholar] [CrossRef] [Green Version]
  42. Alemi, A.A.; Fischer, I.; Dillon, J.V. Uncertainty in the variational information bottleneck. arXiv 2018, arXiv:1807.00906. [Google Scholar]
  43. Wu, T.; Fischer, I. Phase Transitions for the Information Bottleneck in Representation Learning. In Proceedings of the International Conference on Learning Representations (ICLR), Addis Ababa, Ethiopia, 26–30 April 2020. [Google Scholar]
  44. Ester, M.; Kriegel, H.P.; Sander, J.; Xu, X. A density-based algorithm for discovering clusters in large spatial databases with noise. In Proceedings of the Second International Conference on Knowledge Discovery and Data Mining, Menlo Park, CA, USA, 2–4 August 1996; pp. 226–231. [Google Scholar]
  45. Schubert, E.; Sander, J.; Ester, M.; Kriegel, H.P.; Xu, X. DBSCAN revisited, revisited: Why and how you should (still) use DBSCAN. ACM Trans. Database Syst. TODS 2017, 42, 19. [Google Scholar] [CrossRef]
  46. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
  47. Glorot, X.; Bengio, Y. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, Sardinia, Italy, 13–15 May 2010; pp. 249–256. [Google Scholar]
  48. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar]
  49. Xiao, H.; Rasul, K.; Vollgraf, R. Fashion-MNIST: A Novel Image Dataset for Benchmarking Machine Learning Algorithms. arXiv 2017, arXiv:1708.07747. [Google Scholar]
  50. Pace, R.K.; Barry, R. Sparse spatial autoregressions. Stat. Probab. Lett. 1997, 33, 291–297. [Google Scholar] [CrossRef]
  51. Voorhees, E.M.; Tice, D.M. Building a question answering test collection. In Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Athens, Greece, 24–28 July 2000. [Google Scholar]
  52. Trevett, Ben. Tutorial on Sentiment Analysis: 5—Multi-class Sentiment Analysis. April 2019. Available online: https://github.com/bentrevett/pytorch-sentiment-analysis (accessed on 14 January 2020).
  53. Pennington, J.; Socher, R.; Manning, C. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Doha, Qatar, 25–29 October 2014; pp. 1532–1543. [Google Scholar]
Figure 1. The top row shows the results for the power information bottleneck (IB) Lagrangian with α = 1 , and the bottom row for the exponential IB Lagrangian with η = 1 , both in the MNIST dataset. In each row, from left to right it is shown (i) the information plane, where the region of possible solutions of the IB problem is shadowed in light orange and the information-theoretic limits are the dashed orange line; (ii) I ( T ; Y ) as a function of β u ; and (iii) the compression I ( X ; T ) as a function of β u . In all plots, the red crosses joined by a dotted line represent the values computed with the training set, the blue dots the values computed with the validation set and the green stars the theoretical values computed as dictated by Proposition 3. Moreover, in all plots, it is indicated I ( X ; Y ) = H ( Y ) = log 2 ( 10 ) in a dashed, orange line. All values are shown in bits.
Figure 2. Depiction of the clusterization behavior of the bottleneck variable for the power IB Lagrangian in the MNIST dataset with α = 1 . The clusters were obtained using the DBSCAN algorithm [44,45].
Figure 3. Example of value convergence with the exponential IB Lagrangian with η = 3. We show the intersection of the isolines of L_IB,exp(T; β_exp) for different β_exp ∈ B_exp ≈ [1.56 × 10^(−5), 3^(−1)], computed using Corollary 2.
Figure 4. Example of value convergence exploitation with the shifted exponential Lagrangian with η = 200. In the top row, for the MNIST dataset aiming for a compression level r* = 2 and in the bottom row, for the TREC-6 dataset aiming for a compression level of r* = 16. In each row, from left to right it is shown (i) the information plane, where the region of possible solutions of the IB problem is shadowed in light orange and the information-theoretic limits are the dashed orange line; (ii) I ( T ; Y ) as a function of β u ; and (iii) the compression I ( X ; T ) as a function of β u . In all plots, the red crosses joined by a dotted line represent the values computed with the training set, the blue dots the values computed with the validation set and the green stars the theoretical values computed as dictated by Proposition 3. Moreover, in all plots, it is indicated H ( Y ) in a dashed, orange line. All values are shown in bits.

Share and Cite

MDPI and ACS Style

Rodríguez Gálvez, B.; Thobaben, R.; Skoglund, M. The Convex Information Bottleneck Lagrangian. Entropy 2020, 22, 98. https://0-doi-org.brum.beds.ac.uk/10.3390/e22010098
