Article

Function Computation under Privacy, Secrecy, Distortion, and Communication Constraints †

Onur Günlü

Chair of Communications Engineering and Security, University of Siegen, 57076 Siegen, Germany

† This paper is an extended version of our papers published in the 25th International ITG Workshop on Smart Antennas (WSA 2021), French Riviera, France, 10–12 November 2021, and in the Asilomar Conference on Signals, Systems & Computers, Pacific Grove, CA, USA, 31 October–3 November 2021.
Submission received: 7 December 2021 / Revised: 8 January 2022 / Accepted: 10 January 2022 / Published: 11 January 2022 / Corrected: 23 June 2022
(This article belongs to the Special Issue Information-Theoretic Approach to Privacy and Security)

Abstract: The problem of reliable function computation is extended by imposing privacy, secrecy, and storage constraints on a remote source whose noisy measurements are observed by multiple parties. The main additions to the classic function computation problem include (1) privacy leakage to an eavesdropper is measured with respect to the remote source rather than the transmitting terminals' observed sequences; (2) the information leakage to a fusion center with respect to the remote source is considered as a new privacy leakage metric; (3) the function computed is allowed to be a distorted version of the target function, which allows the storage rate to be reduced compared to a reliable function computation scenario, in addition to reducing the secrecy and privacy leakages; (4) the observations of two transmitting nodes are used to compute a function. Inner and outer bounds on the rate regions are derived for lossless and lossy single-function computation with two transmitting nodes, which recover previous results in the literature. For special cases, including invertible and partially invertible functions and degraded measurement channels, simplified lossless and lossy rate regions are characterized, and one achievable region is evaluated as an example scenario.

1. Introduction

We consider function computation scenarios in a network with multiple nodes. Each node observes a random sequence, and all observed random sequences are modeled as correlated. Recent advancements in network function virtualization [1] and distributed machine learning applications [2] make function computation in a wireless network via software-defined networking an important practical problem that should be tackled to improve the performance of future communication systems. In a classic function computation scenario, the nodes exchange messages through authenticated, noiseless, and public communication links, which results in undesired information leakage about the computed function [3,4,5]. Furthermore, it is possible to reduce the amount of public communication [6,7] by using distributed lossless or lossy source coding methods; see [8,9,10,11,12] for several extensions. The former method uses Slepian-Wolf (SW) coding [13] constructions, and the latter allows the computed function to be a distorted version of the target function and applies Wyner-Ziv (WZ) coding [14] methods, which result in further reductions compared to the former. A decrease in public communication is also important to limit the information about the computed function leaked to an eavesdropper in the same network, i.e., the secrecy leakage. In addition to the public messages, an eavesdropper generally has access to a random sequence correlated with the other sequences; see [15,16,17] for various secure function computation extensions.
An important addition to the secure function computation model is a privacy constraint that measures the amount of information about the observed sequence leaked to an eavesdropper [18]. Providing privacy is necessary to ensure the confidentiality of a private sequence that can be reused for future function computations [19]. An extension of the results in [18] is given in [20], where two privacy constraints are imposed on a remote source whose different noisy measurements are observed by multiple nodes in the same network. The extension in [20] differs from previous secure and private function computation models due to the assumption that there exists a remote source that is the main reason for the correlation between the random sequences observed by the nodes in the same network. It is illustrated via practical examples that considering a remote source prevents unexpected decreases in reliability and unnoticed secrecy leakage [19]. Similarly, such a remote source model is used, e.g., in [21] for biometric secrecy and in [22] for user or device authentication problems. It is shown in [20] that with such a remote source model, two different privacy leakage rate values should be limited, unlike the single constraint considered in [18].
We consider a private remote source whose three noisy versions are used for secure single-function computation. Suppose two nodes transmit public indices to a fusion center to compute one function. In [20], for each function computation, one node sends a public index to a fusion center. In [18], cases with two transmitting nodes for function computation are considered for a visible source model, whose results are improved in this work for a remote source model with an additional privacy leakage constraint. Furthermore, we also consider function computation scenarios where the function computed is allowed to be a distorted version of the target function, which is relevant for various recent function computation applications.

1.1. Models for Function Inputs and Outputs

We consider noisy remote source output measurements that are independent and identically distributed (i.i.d.) according to a fixed probability distribution and that are inputs of a target function. This model is reasonable if, e.g., one uses transform-coding algorithms from [19,23,24,25,26,27,28] to extract almost i.i.d. symbols, as applied in the biometric security, physical unclonable function, and image and video coding literature. Furthermore, the target functions we study are applied per letter; i.e., the same function is applied to each input symbol (see Section 2 below). Such functions are realistic and are used in various recent applications, such as distributed and federated learning applications in which the same loss function is applied to each data example [29].

1.2. Summary of Contributions

We extend the lossless and lossy rate region analysis of the single-function computation model with one transmitting node in [20] to consider two transmitting nodes with joint secrecy and privacy constraints, as well as a distortion constraint on the computed function. A summary of the main contributions is as follows.
  • The lossless single-function computation model with two transmitting nodes is considered, and an inner bound for the rate region that characterizes the optimal trade-off between secrecy, privacy, storage, and distortion constraints is established by using the output statistics of a random binning (OSRB) method [30,31]. An outer bound for the same rate region is also provided by using standard properties of Shannon entropy. Inner and outer bounds are shown to not match in general due to different Markov chains imposed.
  • The proposed inner and outer bounds are extended for the lossy single-function computation model with two transmitting nodes by considering a distortion metric. Furthermore, effects of considering a distortion constraint, rather than a reliability constraint, on the function computation are discussed.
  • For both partially invertible functions, which define a set that is a proper superset of the set of invertible functions, and invertible functions, we characterize simplified lossless and lossy rate regions.
  • The simplified rate regions for invertible functions are further simplified when the eavesdropper’s measurement channel is physically degraded with respect to the fusion center’s channel or vice versa, which results in different bounds on the rates.
  • We evaluate a simplified rate region for a physically degraded case with multiplicative Bernoulli noise components.
We remark that the new contributions in this work as compared to [32,33] include the characterization of the rate region for a physically-degraded channel and the evaluation of the rate region for an example lossless single-function computation scenario, which are mentioned in the last two bullet points above. Furthermore, additional discussions to compare lossless and lossy function computation models are given, the function computation literature review is extended, and discussions about why the considered model is realistic are provided.

1.3. Organization

This paper is organized as follows. In Section 2, we introduce the lossless and lossy single-function computation problems with two transmitting nodes under secrecy, privacy, storage, and reliability or distortion constraints. In Section 3, we present the inner and outer bounds for the rate regions of the introduced problems and show that the bounds differ because of the different Markov chains imposed. In Section 4, we characterize simplified lossless and lossy rate regions for invertible functions, partially invertible functions, and two different degraded measurement channels, and we evaluate the rate region for an example case. In Section 5, we provide proofs of the inner and outer bounds for lossless single-function computation with two transmitting nodes. In Section 6, we conclude the paper.

1.4. Notation

Upper case letters represent random variables and lower case letters their realizations. A superscript denotes a sequence of variables, e.g., $X^n = X_1, X_2, \ldots, X_i, \ldots, X_n$, and a subscript $i$ denotes the position of a variable in the sequence. A random variable $X$ has probability distribution $P_X$. Calligraphic letters such as $\mathcal{X}$ denote sets, and set sizes are written as $|\mathcal{X}|$. Given any $a \in \mathbb{R}$, we define $[a]^- = \min\{a, 0\}$. $H_b(c) = -c\log_2 c - (1-c)\log_2(1-c)$ denotes the binary entropy function for any $c \in [0,1]$.

2. System Model

We consider the single-function computation model with two transmitting nodes illustrated in Figure 1. Noisy measurements $\tilde{X}_1^n$ and $\tilde{X}_2^n$ of an i.i.d. remote source $X^n \sim P_X^n$, obtained through the memoryless channels $P_{\tilde{X}_1|X}$ and $P_{\tilde{X}_2|X}$, respectively, are observed by two legitimate nodes in a network. Similarly, other noisy measurements $Y^n$ and $Z^n$ of the same remote source are observed by the fusion center and the eavesdropper (Eve), respectively, through another memoryless channel $P_{YZ|X}$. The encoders $\mathrm{Enc}_1(\cdot)$ and $\mathrm{Enc}_2(\cdot)$ of the legitimate nodes send the indices $W_1$ and $W_2$, respectively, to the fusion center over public communication links with storage rate constraints. The fusion center decoder $\mathrm{Dec}(\cdot)$ then uses its observed noisy sequence $Y^n$ and the public indices $W_1$ and $W_2$ to estimate a function $f^n(\tilde{X}_1^n, \tilde{X}_2^n, Y^n)$ such that
$$f^n(\tilde{X}_1^n, \tilde{X}_2^n, Y^n) = \{f(\tilde{X}_{1,i}, \tilde{X}_{2,i}, Y_i)\}_{i=1}^n. \tag{1}$$
The source and measurement alphabets are finite sets.
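The following is a minimal simulation sketch of this measurement model (not part of the original paper); the binary alphabets, the crossover probabilities, and the modulo-2 sum target function are assumptions chosen only for illustration, and the channel $P_{YZ|X}$ is specialized here to independent bit flips.

```python
import random

random.seed(0)
n = 10

# Remote source X^n, i.i.d. uniform bits.
X = [random.randint(0, 1) for _ in range(n)]

def bsc(x, p):
    """Pass one symbol through a binary symmetric channel with crossover p."""
    return x if random.random() >= p else 1 - x

# Noisy measurements of the same remote source (hypothetical parameters).
X1t = [bsc(x, 0.10) for x in X]  # node 1 observation via P_{X~1|X}
X2t = [bsc(x, 0.20) for x in X]  # node 2 observation via P_{X~2|X}
Y   = [bsc(x, 0.30) for x in X]  # fusion center side information
Z   = [bsc(x, 0.40) for x in X]  # eavesdropper side information

def f(x1, x2, y):
    """Example per-letter target function (an assumption): modulo-2 sum."""
    return (x1 + x2 + y) % 2

# Per-letter application, as in (1): the same f is applied at every i.
f_n = [f(a, b, c) for a, b, c in zip(X1t, X2t, Y)]
print(f_n)
```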
A natural secrecy leakage constraint is to minimize the information leakage about the function output $f^n(\tilde{X}_1^n, \tilde{X}_2^n, Y^n)$ to the eavesdropper. However, its analysis depends on the specific function $f(\cdot,\cdot,\cdot)$ computed, so we impose below another secrecy leakage constraint that does not depend on the function used and that provides an upper bound on the secrecy leakage for all functions, as considered in [18,20]. Furthermore, we impose two privacy leakage constraints to minimize the information leakage about $X^n$ to the fusion center and to the eavesdropper, because the same remote source would be measured if another function were computed in the same network (see also [19] for motivations to consider privacy leakage with respect to a remote source), as well as public storage constraints that minimize the storage rates of the transmitting nodes.
We next define lossless and lossy single-function computation rate regions.

2.1. Lossless Single-Function Computation

Consider the single-function computation model illustrated in Figure 1. The corresponding lossless rate region is defined as follows.
Definition 1.
A lossless tuple $(R_s, R_{w,1}, R_{w,2}, R_{\ell,\mathrm{Dec}}, R_{\ell,\mathrm{Eve}})$ is achievable if, for any $\delta > 0$, there exist $n \geq 1$, two encoders, and one decoder such that
$$\Pr\big[f^n(\tilde{X}_1^n, \tilde{X}_2^n, Y^n) \neq \widehat{f^n}\big] \leq \delta \qquad (\text{reliability}) \tag{2}$$
$$\frac{1}{n}\, I(\tilde{X}_1^n, \tilde{X}_2^n, Y^n; W_1, W_2 \,|\, Z^n) \leq R_s + \delta \qquad (\text{secrecy}) \tag{3}$$
$$\frac{1}{n} \log|\mathcal{W}_1| \leq R_{w,1} + \delta \qquad (\text{storage 1}) \tag{4}$$
$$\frac{1}{n} \log|\mathcal{W}_2| \leq R_{w,2} + \delta \qquad (\text{storage 2}) \tag{5}$$
$$\frac{1}{n}\, I(X^n; W_1, W_2 \,|\, Y^n) \leq R_{\ell,\mathrm{Dec}} + \delta \qquad (\text{privacy Dec}) \tag{6}$$
$$\frac{1}{n}\, I(X^n; W_1, W_2 \,|\, Z^n) \leq R_{\ell,\mathrm{Eve}} + \delta \qquad (\text{privacy Eve}). \tag{7}$$
The lossless region $\mathcal{R}$ is the closure of the set of all achievable lossless tuples.

2.2. Lossy Single-Function Computation

The corresponding lossy rate region for the single-function computation model illustrated in Figure 1 is defined as follows.
Definition 2.
A lossy tuple $(R_s, R_{w,1}, R_{w,2}, R_{\ell,\mathrm{Dec}}, R_{\ell,\mathrm{Eve}}, D)$ is achievable if, for any $\delta > 0$, there exist $n \geq 1$, two encoders, and one decoder such that (3)–(7) and
$$\mathbb{E}\Big[d\big(f^n(\tilde{X}_1^n, \tilde{X}_2^n, Y^n),\, \widehat{f^n}\big)\Big] \leq D + \delta \qquad (\text{distortion}) \tag{8}$$
where
$$d(f^n, \widehat{f^n}) = \frac{1}{n}\sum_{i=1}^n d(f_i, \hat{f}_i) \tag{9}$$
is a per-letter distortion metric. The lossy region $\mathcal{R}_D$ is the closure of the set of all achievable lossy tuples.
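As a small illustration of the metric in (9), the sketch below computes the average per-letter distortion for the Hamming distortion, which is one common choice of $d(\cdot,\cdot)$ (the choice is ours, not fixed by the paper):

```python
def d_hamming(a, b):
    """Per-letter Hamming distortion: 0 if the symbols agree, 1 otherwise."""
    return 0.0 if a == b else 1.0

def d_per_letter(f_n, f_hat_n):
    """d(f^n, f^n_hat) = (1/n) * sum_i d(f_i, f_hat_i), as in (9)."""
    assert len(f_n) == len(f_hat_n)
    return sum(d_hamming(a, b) for a, b in zip(f_n, f_hat_n)) / len(f_n)

# Toy usage: the lossy constraint (8) requires the expectation of this
# quantity to be at most D + delta.
print(d_per_letter([0, 1, 1, 0], [0, 1, 0, 0]))  # 0.25
```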

3. Inner and Outer Bounds

3.1. Lossless Single-Function Computation

We first extend the notion of admissibility defined in [6] for a single auxiliary random variable to two auxiliary random variables, used in the inner and outer bounds given below for lossless function computation; see also Theorem 3 of [18].
Definition 3.
A pair of (vector) random variables $(U_1, U_2)$ is admissible for a function $f(\tilde{X}_1, \tilde{X}_2, Y)$ if we have
$$H\big(f(\tilde{X}_1, \tilde{X}_2, Y) \,\big|\, U_1, U_2, Y\big) = 0 \tag{10}$$
and
$$U_1 - \tilde{X}_1 - (\tilde{X}_2, Y) \tag{11}$$
$$U_2 - \tilde{X}_2 - (\tilde{X}_1, Y) \tag{12}$$
form Markov chains.
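For deterministic choices $U_1 = u_1(\tilde{X}_1)$ and $U_2 = u_2(\tilde{X}_2)$, the Markov chains (11) and (12) hold trivially, and condition (10) reduces to requiring that $(u_1(\tilde{x}_1), u_2(\tilde{x}_2), y)$ determine $f(\tilde{x}_1, \tilde{x}_2, y)$ uniquely. The following toy checker (our illustration, assuming full-support binary inputs) tests exactly this:

```python
import itertools

def is_admissible(f, u1, u2, alphabet=(0, 1)):
    """Check H(f(X1, X2, Y) | U1, U2, Y) = 0 for deterministic U1 = u1(X1),
    U2 = u2(X2): every (u1, u2, y) combination must determine f uniquely."""
    seen = {}
    for x1, x2, y in itertools.product(alphabet, repeat=3):
        key = (u1(x1), u2(x2), y)
        if seen.setdefault(key, f(x1, x2, y)) != f(x1, x2, y):
            return False  # same (u1, u2, y), different function values
    return True

f_xor = lambda x1, x2, y: (x1 + x2 + y) % 2
print(is_admissible(f_xor, lambda a: a, lambda b: b))  # True: U_i = X~_i
print(is_admissible(f_xor, lambda a: 0, lambda b: b))  # False: constant U_1
```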
We next provide inner and outer bounds for the lossless region $\mathcal{R}$; see Section 5 for a proof sketch.
Theorem 1.
(Inner Bound): An achievable lossless region is the union over all $P_Q$, $P_{V_1|Q}$, $P_{V_2|Q}$, $P_{U_1|V_1}$, $P_{U_2|V_2}$, $P_{\tilde{X}_1|U_1}$, and $P_{\tilde{X}_2|U_2}$ of the rate tuples $(R_s, R_{w,1}, R_{w,2}, R_{\ell,\mathrm{Dec}}, R_{\ell,\mathrm{Eve}})$ such that the pair $(U_1, U_2)$ is admissible for the function $f(\tilde{X}_1, \tilde{X}_2, Y)$ and
$$R_s \geq \big[I(U_1,U_2;Z|V_1,V_2,Q) - I(U_1,U_2;Y|V_1,V_2,Q)\big]^- + I(U_1,U_2;\tilde{X}_1,\tilde{X}_2|Z) \tag{13}$$
$$R_{w,1} \geq I(V_1;\tilde{X}_1|V_2,Y) + I(U_1;\tilde{X}_1|V_1,U_2,Y) \tag{14}$$
$$R_{w,2} \geq I(V_2;\tilde{X}_2|V_1,Y) + I(U_2;\tilde{X}_2|U_1,V_2,Y) \tag{15}$$
$$R_{w,1}+R_{w,2} \geq I(U_2;\tilde{X}_2|U_1,V_2,Y) + I(U_1;\tilde{X}_1|V_1,V_2,Y) + I(V_2;\tilde{X}_2|V_1,Y) + I(V_1;\tilde{X}_1|Y) \tag{16}$$
$$R_{\ell,\mathrm{Dec}} \geq I(U_1,U_2;X|Y) \tag{17}$$
$$R_{\ell,\mathrm{Eve}} \geq \big[I(U_1,U_2;Z|V_1,V_2,Q) - I(U_1,U_2;Y|V_1,V_2,Q)\big]^- + I(U_1,U_2;X|Z) \tag{18}$$
where we have
$$P_{QV_1V_2U_1U_2\tilde{X}_1\tilde{X}_2XYZ} = P_{Q|V_1V_2}\,P_{V_1|U_1}\,P_{U_1|\tilde{X}_1}\,P_{\tilde{X}_1|X}\,P_{V_2|U_2}\,P_{U_2|\tilde{X}_2}\,P_{\tilde{X}_2|X}\,P_X\,P_{YZ|X}. \tag{19}$$
(Outer Bound): An outer bound for the lossless region $\mathcal{R}$ is the union of the rate tuples in (13), (16)–(18), and
$$R_{w,1} \geq I(V_1;\tilde{X}_1|V_2,Y) + I(U_1;\tilde{X}_1|V_1,U_2,Y) - I(V_1;V_2|\tilde{X}_1,Y) - I(U_1;U_2|\tilde{X}_1,Y,V_1) \tag{20}$$
$$R_{w,2} \geq I(V_2;\tilde{X}_2|V_1,Y) + I(U_2;\tilde{X}_2|U_1,V_2,Y) - I(V_2;V_1|\tilde{X}_2,Y) - I(U_2;U_1|\tilde{X}_2,Y,V_2) \tag{21}$$
over all $P_Q$, $P_{V_1|Q}$, $P_{V_2|Q}$, $P_{U_1|V_1}$, $P_{U_2|V_2}$, $P_{\tilde{X}_1|U_1}$, and $P_{\tilde{X}_2|U_2}$ such that the pair $(U_1, U_2)$ is admissible for the function $f(\tilde{X}_1, \tilde{X}_2, Y)$ and
$$(Q, V_1) - U_1 - \tilde{X}_1 - X - (\tilde{X}_2, Y, Z) \tag{22}$$
$$(Q, V_2) - U_2 - \tilde{X}_2 - X - (\tilde{X}_1, Y, Z) \tag{23}$$
form Markov chains. One can limit the cardinalities to $|\mathcal{Q}| \leq 2$, $|\mathcal{V}_1| \leq |\tilde{\mathcal{X}}_1| + 6$, $|\mathcal{V}_2| \leq |\tilde{\mathcal{X}}_2| + 6$, $|\mathcal{U}_1| \leq (|\tilde{\mathcal{X}}_1| + 6)^2$, and $|\mathcal{U}_2| \leq (|\tilde{\mathcal{X}}_2| + 6)^2$.
We remark that if the joint probability distribution in (19) is imposed on the outer bound, (20) and (21) recover (14) and (15), respectively, because then
$$(V_1, U_1) - \tilde{X}_1 - (Y, U_2, V_2) \tag{24}$$
$$(V_2, U_2) - \tilde{X}_2 - (Y, U_1, V_1) \tag{25}$$
form Markov chains for (19). However, the outer bound that satisfies (22) and (23) defines a rate region that is in general larger than the rate region defined by the inner bound that satisfies (19). Thus, the inner and outer bounds differ in general. The results in Theorem 1 recover previous results, including Theorem 3 of [18] and, naturally, other results that are recovered by these previous results, such as the SW coding region.

3.2. Lossy Single-Function Computation

We next provide inner and outer bounds for the lossy region R D ; see below for a proof sketch.
Theorem 2.
(Inner Bound): An achievable lossy region is the union over all $P_Q$, $P_{V_1|Q}$, $P_{V_2|Q}$, $P_{U_1|V_1}$, $P_{U_2|V_2}$, $P_{\tilde{X}_1|U_1}$, and $P_{\tilde{X}_2|U_2}$ of the rate tuples in (13)–(18) and
$$D \geq \mathbb{E}\big[d\big(f(\tilde{X}_1, \tilde{X}_2, Y),\, g(U_1, U_2, Y)\big)\big] \tag{26}$$
for some function $g(\cdot,\cdot,\cdot)$, where $P_{QV_1V_2U_1U_2\tilde{X}_1\tilde{X}_2XYZ}$ is equal to (19).
(Outer Bound): An outer bound for the lossy region $\mathcal{R}_D$ is the union over all $P_Q$, $P_{V_1|Q}$, $P_{V_2|Q}$, $P_{U_1|V_1}$, $P_{U_2|V_2}$, $P_{\tilde{X}_1|U_1}$, and $P_{\tilde{X}_2|U_2}$ of the set of rate tuples $(R_s, R_{w,1}, R_{w,2}, R_{\ell,\mathrm{Dec}}, R_{\ell,\mathrm{Eve}}, D)$ in (13), (16)–(18), (20), (21), and (26) such that (22) and (23) form Markov chains. One can limit the cardinalities to $|\mathcal{Q}| \leq 2$, $|\mathcal{V}_1| \leq |\tilde{\mathcal{X}}_1| + 7$, $|\mathcal{V}_2| \leq |\tilde{\mathcal{X}}_2| + 7$, $|\mathcal{U}_1| \leq (|\tilde{\mathcal{X}}_1| + 7)^2$, and $|\mathcal{U}_2| \leq (|\tilde{\mathcal{X}}_2| + 7)^2$.
Proof Sketch.
The achievability proof for the lossy function computation problem follows from the achievability proof of its lossless version given in Section 5.1 by replacing the admissibility constraint with the constraint that $P_{U_1|\tilde{X}_1}$, $P_{V_1|U_1}$, $P_{U_2|\tilde{X}_2}$, and $P_{V_2|U_2}$ are chosen such that there exists a function $g(U_1, U_2, Y)$ that satisfies
$$g^n(U_1^n, U_2^n, Y^n) = \{g(U_{1,i}, U_{2,i}, Y_i)\}_{i=1}^n \tag{27}$$
$$\mathbb{E}\big[d\big(f^n(\tilde{X}_1^n, \tilde{X}_2^n, Y^n),\, g^n(U_1^n, U_2^n, Y^n)\big)\big] \leq D + \epsilon_n \tag{28}$$
where $\epsilon_n > 0$ is such that $\epsilon_n \to 0$ when $n \to \infty$. Since all $(\tilde{x}_1^n, \tilde{x}_2^n, y^n, u_1^n, u_2^n)$ tuples are in the jointly typical set with high probability, by the typical average lemma ([34], p. 26), the constraint in (8) is satisfied.
The proof of the outer bound applies the standard properties of the Shannon entropy and follows mainly from the outer bound proof for the lossless function computation problem given in Section 5.2. However, the proof for the lossless function computation problem requires the auxiliary random variables to be admissible as defined in Definition 3, unlike the lossy function computation problem. Thus, the outer bound proof for Theorem 2 follows by replacing the admissibility step (96) in the outer bound proof for the lossless function computation problem with the step
$$
\begin{aligned}
n(D+\delta_n) &\overset{(a)}{\geq} \mathbb{E}\Big[\sum_{i=1}^n d\big(f_i(\tilde{X}_{1,i},\tilde{X}_{2,i},Y_i),\,\hat{f}_i(W_1,W_2,Y^n)\big)\Big]\\
&\overset{(b)}{\geq} \mathbb{E}\Big[\sum_{i=1}^n d\big(f_i(\tilde{X}_{1,i},\tilde{X}_{2,i},Y_i),\,g_i(W_1,W_2,Y^n,X^{i-1},Z^{i-1})\big)\Big]\\
&\overset{(c)}{=} \mathbb{E}\Big[\sum_{i=1}^n d\big(f_i(\tilde{X}_{1,i},\tilde{X}_{2,i},Y_i),\,g_i(W_1,W_2,Y_i^n,X^{i-1},Z^{i-1})\big)\Big]\\
&\overset{(d)}{=} \mathbb{E}\Big[\sum_{i=1}^n d\big(f(\tilde{X}_{1,i},\tilde{X}_{2,i},Y_i),\,g(U_{1,i},U_{2,i},Y_i)\big)\Big]
\end{aligned}
$$
where $(a)$ follows from (8) and (9); $(b)$ follows since there exists a function $g_i(\cdot,\cdot,\cdot)$ that achieves a distortion not greater than the distortion achieved by $\hat{f}_i(W_1,W_2,Y^n)$, where the distortion is measured with respect to $f_i(\tilde{X}_{1,i},\tilde{X}_{2,i},Y_i)$, since $g_i(\cdot,\cdot,\cdot)$ has additional inputs; $(c)$ follows from the Markov chain given in (100); and $(d)$ follows from the definitions of $U_{1,i}$ and $U_{2,i}$ given in (91) and (92), respectively. Furthermore, the proof of the cardinality bounds for the lossy case follows from the proof for the lossless case since we preserve the same probability and conditional entropy values as for the lossless function computation problem, with the addition of preserving the value of $g(U_1, U_2, Y) = g(U_1, U_2, V_1, V_2, Y)$, which follows from the Markov chain
$$(V_1, V_2) - (U_1, U_2, Y) - g(U_1, U_2, Y).$$
As for Theorem 1, the inner and outer bounds given in Theorem 2 do not match in general because of the different Markov chains imposed.
Remark 1.
Since all secrecy and privacy rate terms given in the outer bounds in Theorems 1 and 2, i.e., the lower bounds in (13), (17), and (18), are generally strictly positive, strong secrecy or strong privacy constraints cannot be satisfied in general for the lossless and lossy single-function computation problems.
We next provide simplified rate regions for various sets of computed functions $f(\cdot,\cdot,\cdot)$ and measurement channels $P_{YZ|X}$.

4. Rate Regions for Special Sets of Computed Functions and Measurement Channels

The terms that characterize simplified rate regions of the lossless and lossy function computation problems for various sets of functions and channels are the same, except (1) removal of the admissibility requirement; (2) addition of a distortion constraint; and (3) increase in the cardinality bounds on the auxiliary random variables for the lossy case as compared to the lossless case. Thus, we provide simplified rate regions only for the lossless case. However, we remark that the optimal auxiliary random variables for lossless and lossy cases might differ. Therefore, the corresponding lossless and lossy rate regions might look different for the same joint probability distribution P X ˜ 1 X ˜ 2 X Y Z .

4.1. Partially Invertible Functions

We now impose the condition that the function $f(\tilde{X}_1, \tilde{X}_2, Y)$ is partially invertible with respect to $\tilde{X}_1$; i.e., we have [9,35]
$$H\big(\tilde{X}_1 \,\big|\, f(\tilde{X}_1, \tilde{X}_2, Y),\, Y\big) = 0. \tag{31}$$
For such functions, it is straightforward to show that the following simplified rate region holds for the lossless function computation problem with two transmitting nodes. The proof of Lemma 1 follows from Theorem 1 by assigning $U_1 = \tilde{X}_1$ and a constant $V_1$, and then applying the Markov chain (23) to (13). Furthermore, by symmetry, the simplified lossless rate region for a function $f(\tilde{X}_1, \tilde{X}_2, Y)$ that is partially invertible with respect to $\tilde{X}_2$ can be obtained by assigning $U_2 = \tilde{X}_2$ and a constant $V_2$, and then applying (22) to (13).
Lemma 1.
The lossless rate region $\mathcal{R}$ when $f(\tilde{X}_1, \tilde{X}_2, Y)$ is a partially invertible function with respect to $\tilde{X}_1$ includes the set of all tuples $(R_s, R_{w,1}, R_{w,2}, R_{\ell,\mathrm{Dec}}, R_{\ell,\mathrm{Eve}})$ such that $U_2$ is admissible for the function $f(\tilde{X}_1, \tilde{X}_2, Y)$ and
$$R_s \geq \big[I(\tilde{X}_1, U_2; Z|V_2, Q) - I(\tilde{X}_1, U_2; Y|V_2, Q)\big]^- + H(\tilde{X}_1|U_2, Z) + I(U_2; \tilde{X}_2|Z) \tag{32}$$
$$R_{w,1} \geq H(\tilde{X}_1|U_2, Y) \tag{33}$$
$$R_{w,2} \geq I(V_2; \tilde{X}_2|Y) + I(U_2; \tilde{X}_2|\tilde{X}_1, V_2, Y) \tag{34}$$
$$R_{w,1} + R_{w,2} \geq I(U_2; \tilde{X}_2|\tilde{X}_1, V_2, Y) + H(\tilde{X}_1|V_2, Y) + I(V_2; \tilde{X}_2|Y) \tag{35}$$
$$R_{\ell,\mathrm{Dec}} \geq I(\tilde{X}_1, U_2; X|Y) \tag{36}$$
$$R_{\ell,\mathrm{Eve}} \geq \big[I(\tilde{X}_1, U_2; Z|V_2, Q) - I(\tilde{X}_1, U_2; Y|V_2, Q)\big]^- + I(\tilde{X}_1, U_2; X|Z) \tag{37}$$
such that (23) forms a Markov chain. One can limit the cardinalities to $|\mathcal{Q}| \leq 2$, $|\mathcal{V}_2| \leq |\tilde{\mathcal{X}}_2| + 6$, and $|\mathcal{U}_2| \leq (|\tilde{\mathcal{X}}_2| + 6)^2$.

4.2. Invertible Functions

Suppose now that the function $f(\tilde{X}_1, \tilde{X}_2, Y)$ is invertible; i.e., we have [9,35]
$$H\big(\tilde{X}_1, \tilde{X}_2 \,\big|\, f(\tilde{X}_1, \tilde{X}_2, Y),\, Y\big) = 0. \tag{38}$$
We provide in Lemma 2 below a simplified rate region for the lossless function computation problem with two transmitting nodes when the function $f(\tilde{X}_1, \tilde{X}_2, Y)$ is invertible. The proof of Lemma 2 follows from Theorem 1 by assigning $U_1 = \tilde{X}_1$, $U_2 = \tilde{X}_2$, and constant $V_1$ and $V_2$.
Lemma 2.
The lossless rate region $\mathcal{R}$ when $f(\tilde{X}_1, \tilde{X}_2, Y)$ is an invertible function includes the set of all tuples $(R_s, R_{w,1}, R_{w,2}, R_{\ell,\mathrm{Dec}}, R_{\ell,\mathrm{Eve}})$ satisfying
$$R_s \geq \big[I(\tilde{X}_1, \tilde{X}_2; Z|Q) - I(\tilde{X}_1, \tilde{X}_2; Y|Q)\big]^- + H(\tilde{X}_1, \tilde{X}_2|Z) \tag{39}$$
$$R_{w,1} \geq H(\tilde{X}_1|\tilde{X}_2, Y) \tag{40}$$
$$R_{w,2} \geq H(\tilde{X}_2|\tilde{X}_1, Y) \tag{41}$$
$$R_{w,1} + R_{w,2} \geq H(\tilde{X}_1, \tilde{X}_2|Y) \tag{42}$$
$$R_{\ell,\mathrm{Dec}} \geq I(\tilde{X}_1, \tilde{X}_2; X|Y) \tag{43}$$
$$R_{\ell,\mathrm{Eve}} \geq \big[I(\tilde{X}_1, \tilde{X}_2; Z|Q) - I(\tilde{X}_1, \tilde{X}_2; Y|Q)\big]^- + I(\tilde{X}_1, \tilde{X}_2; X|Z) \tag{44}$$
where $Q - (\tilde{X}_1, \tilde{X}_2) - X - (Y, Z)$ forms a Markov chain. One can limit the cardinality to $|\mathcal{Q}| \leq 2$.

4.3. Invertible Functions and Two Different Degraded Channels

The lossless rate region given in Lemma 2 can be further simplified by imposing conditions on the measurement channel $P_{YZ|X}$ in addition to the function $f(\tilde{X}_1, \tilde{X}_2, Y)$ being invertible. We next characterize further simplified lossless rate regions for two different physically degraded channels.

4.3.1. Eve’s Channel Is Physically Degraded

Suppose the measurement channel $P_{YZ|X}$ is physically degraded such that
$$P_{YZ|X} = P_{Y|X}\, P_{Z|Y}. \tag{45}$$
For invertible functions and physically degraded measurement channels $P_{YZ|X}$ as defined in (45), we provide a further simplified lossless rate region in Lemma 3. The proof of Lemma 3 follows from Lemma 2 by using the Markov chain
$$(\tilde{X}_1, \tilde{X}_2) - X - Y - Z \tag{46}$$
which follows from (45).
Lemma 3.
The lossless rate region $\mathcal{R}$ when $f(\tilde{X}_1, \tilde{X}_2, Y)$ is an invertible function and $P_{YZ|X}$ is as given in (45) includes the set of all tuples $(R_s, R_{w,1}, R_{w,2}, R_{\ell,\mathrm{Dec}}, R_{\ell,\mathrm{Eve}})$ satisfying (40)–(43) and
$$R_s \geq H(\tilde{X}_1, \tilde{X}_2|Y) \tag{47}$$
$$R_{\ell,\mathrm{Eve}} \geq I(\tilde{X}_1, \tilde{X}_2; X|Y). \tag{48}$$
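To see why (39) and (44) reduce to (47) and (48), one can take a constant $Q$ (no time-sharing variable is needed for these regions, as noted in Section 6) and use (46); the following short check is ours and not part of the original lemma statement:
$$\big[I(\tilde{X}_1,\tilde{X}_2;Z) - I(\tilde{X}_1,\tilde{X}_2;Y)\big]^- + H(\tilde{X}_1,\tilde{X}_2|Z) = H(\tilde{X}_1,\tilde{X}_2|Y) - H(\tilde{X}_1,\tilde{X}_2|Z) + H(\tilde{X}_1,\tilde{X}_2|Z) = H(\tilde{X}_1,\tilde{X}_2|Y)$$
since (46) and the data processing inequality give $I(\tilde{X}_1,\tilde{X}_2;Z) \leq I(\tilde{X}_1,\tilde{X}_2;Y)$, so the $[\cdot]^-$ clip is inactive. In the reverse degradation order of Section 4.3.2 below, the difference is nonnegative, the clip returns $0$, and (39) reduces directly to $R_s \geq H(\tilde{X}_1,\tilde{X}_2|Z)$, as in (51).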

4.3.2. Fusion Center’s Channel Is Physically Degraded

Suppose the measurement channel $P_{YZ|X}$ is physically degraded such that
$$P_{YZ|X} = P_{Z|X}\, P_{Y|Z}. \tag{49}$$
For invertible functions and physically degraded measurement channels $P_{YZ|X}$ as defined in (49), we provide a simplified lossless rate region in Lemma 4. The proof of Lemma 4 follows from Lemma 2 by using the Markov chain
$$(\tilde{X}_1, \tilde{X}_2) - X - Z - Y \tag{50}$$
which follows from (49).
Lemma 4.
The lossless rate region $\mathcal{R}$ when $f(\tilde{X}_1, \tilde{X}_2, Y)$ is an invertible function and $P_{YZ|X}$ is as given in (49) includes the set of all tuples $(R_s, R_{w,1}, R_{w,2}, R_{\ell,\mathrm{Dec}}, R_{\ell,\mathrm{Eve}})$ satisfying (40)–(43) and
$$R_s \geq H(\tilde{X}_1, \tilde{X}_2|Z) \tag{51}$$
$$R_{\ell,\mathrm{Eve}} \geq I(\tilde{X}_1, \tilde{X}_2; X|Z). \tag{52}$$
Remark 2.
The rate regions given in Lemmas 2–4 can be plotted by computing the terms that characterize the regions, since $P_{\tilde{X}_1\tilde{X}_2XYZ}$ is fixed for the function computation problems considered. However, the rate region given in Lemma 1, similar to the inner bounds given in Theorems 1 and 2, might not be easy to characterize due to the need to optimize auxiliary random variables whose cardinalities are bounded by large terms. Thus, evaluating the rate region for a function computation problem with two transmitting terminals is generally significantly more difficult than characterizing the rate region for function computation with one transmitting terminal; see [20] for an example of an information bottleneck for the latter problem.
We next evaluate an inner bound for the lossless rate region $\mathcal{R}$ by using Lemma 4 for specific measurement channels when $f(\tilde{X}_1, \tilde{X}_2, Y)$ is an invertible function.

4.4. Lossless Rate Region Example

Suppose the measurement channels in Figure 1 have binary input and output alphabets with multiplicative Bernoulli noise components; i.e., we have $\mathcal{X} = \tilde{\mathcal{X}}_1 = \tilde{\mathcal{X}}_2 = \mathcal{Z} = \mathcal{Y} = \mathcal{S}_1 = \mathcal{S}_2 = \mathcal{S}_Z = \mathcal{S}_Y = \{0, 1\}$ and
$$\tilde{X}_1 = S_1 \cdot X, \qquad \tilde{X}_2 = S_2 \cdot X, \qquad Z = S_Z \cdot X, \qquad Y = S_Y \cdot X \tag{53}$$
where $S_1$, $S_2$, $X$, and $(S_Z, S_Y)$ are mutually independent, and we have $P_X(1) = 0.5$, $P_{S_1}(1) = \beta_1$, $P_{S_2}(1) = \beta_2$, $P_{S_ZS_Y}(0,0) = (1-q)$, $P_{S_ZS_Y}(1,1) = q\alpha$, and $P_{S_ZS_Y}(1,0) = q(1-\alpha)$ for fixed $\beta_1, \beta_2, q, \alpha \in [0,1]$, so (49) is satisfied; see also Section IV-A of [36]. Using Lemma 4 for the given probability distributions, we evaluate an inner bound for the lossless rate region $\mathcal{R}$ for an invertible function computation scenario with two transmitting nodes in which, e.g., $\beta_1 = 0.2$, $\beta_2 = 0.11$, $\alpha = 0.3$, and $q = 0.25$, and we obtain the lossless rate region characterized by
$$R_s \geq 0.7579\ \text{bits/symbol}, \qquad R_{w,1} \geq 0.4626\ \text{bits/symbol}, \tag{54}$$
$$R_{w,2} \geq 0.3021\ \text{bits/symbol}, \qquad R_{w,1}+R_{w,2} \geq 0.7686\ \text{bits/symbol}, \tag{55}$$
$$R_{\ell,\mathrm{Dec}} \geq 0.1577\ \text{bits/symbol}, \qquad R_{\ell,\mathrm{Eve}} \geq 0.1469\ \text{bits/symbol} \tag{56}$$
where the sum-storage rate constraint is active since the sum of the bounds on $R_{w,1}$ and $R_{w,2}$ is smaller than the bound on $(R_{w,1}+R_{w,2})$.
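The numbers in (54)–(56) can be reproduced directly from the joint distribution induced by (53). The following is a minimal computation sketch (ours; the variable names are not from the paper) that builds the joint pmf and evaluates the entropy and mutual information terms of Lemma 4:

```python
import itertools, math

b1, b2, a, q = 0.2, 0.11, 0.3, 0.25  # beta_1, beta_2, alpha, q from the example

pX  = {0: 0.5, 1: 0.5}
pS1 = {0: 1 - b1, 1: b1}
pS2 = {0: 1 - b2, 1: b2}
pSzSy = {(0, 0): 1 - q, (1, 1): q * a, (1, 0): q * (1 - a), (0, 1): 0.0}

# Joint pmf of (X~1, X~2, X, Y, Z) under the multiplicative Bernoulli model (53).
joint = {}
for x, s1, s2, (sz, sy) in itertools.product((0, 1), (0, 1), (0, 1), pSzSy):
    p = pX[x] * pS1[s1] * pS2[s2] * pSzSy[(sz, sy)]
    key = (s1 * x, s2 * x, x, sy * x, sz * x)
    joint[key] = joint.get(key, 0.0) + p

def H(idx):
    """Joint entropy (bits) of the coordinates listed in idx."""
    marg = {}
    for key, p in joint.items():
        m = tuple(key[i] for i in idx)
        marg[m] = marg.get(m, 0.0) + p
    return -sum(p * math.log2(p) for p in marg.values() if p > 0)

X1, X2, X, Y, Z = range(5)
Rs    = H([X1, X2, Z]) - H([Z])            # H(X~1, X~2 | Z), bound (51)
Rw1   = H([X1, X2, Y]) - H([X2, Y])        # H(X~1 | X~2, Y), bound (40)
Rw2   = H([X1, X2, Y]) - H([X1, Y])        # H(X~2 | X~1, Y), bound (41)
Rsum  = H([X1, X2, Y]) - H([Y])            # H(X~1, X~2 | Y), bound (42)
RlDec = Rsum - (H([X1, X2, X]) - H([X]))   # I(X~1, X~2; X | Y), bound (43)
RlEve = Rs   - (H([X1, X2, X]) - H([X]))   # I(X~1, X~2; X | Z), bound (52)
print(Rs, Rw1, Rw2, Rsum, RlDec, RlEve)
# approx. 0.7579, 0.4626, 0.3021, 0.7686, 0.1577, 0.1469 bits/symbol
```

The conditional-entropy identities used in the comments rely on $(\tilde{X}_1, \tilde{X}_2)$ being conditionally independent of $(Y, Z)$ given $X$, which holds for the model in (53).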

5. Proof of Theorem 1

5.1. Inner Bound

Proof Sketch.
The OSRB method [30] is used for the achievability proof by applying the steps given in Section 1.6 of [37]. Let
$$(V_1^n, V_2^n, U_1^n, U_2^n, \tilde{X}_1^n, \tilde{X}_2^n, X^n, Y^n, Z^n) \tag{57}$$
be i.i.d. according to $P_{V_1V_2U_1U_2\tilde{X}_1\tilde{X}_2XYZ}$, which can be obtained from (19) with fixed $P_{U_1|\tilde{X}_1}$, $P_{V_1|U_1}$, $P_{U_2|\tilde{X}_2}$, and $P_{V_2|U_2}$ such that the pair $(U_1, U_2)$ is admissible for a function $f(\tilde{X}_1, \tilde{X}_2, Y)$; then $(U_1^n, U_2^n)$ is also admissible since the random variables in (57) are i.i.d.
To each $v_1^n$, assign two random bin indices $(F_{v_1}, W_{v_1})$ such that $F_{v_1} \in [1 : 2^{n\tilde{R}_{v_1}}]$ and $W_{v_1} \in [1 : 2^{nR_{v_1}}]$. Furthermore, to each $u_1^n$, assign two random bin indices $(F_{u_1}, W_{u_1})$ such that $F_{u_1} \in [1 : 2^{n\tilde{R}_{u_1}}]$ and $W_{u_1} \in [1 : 2^{nR_{u_1}}]$. Similarly, random indices $(F_{v_2}, W_{v_2})$ and $(F_{u_2}, W_{u_2})$ are assigned to each $v_2^n$ and $u_2^n$, respectively. The indices $F_1 = (F_{v_1}, F_{u_1})$ and $F_2 = (F_{v_2}, F_{u_2})$ represent the public choice of the two encoders and the decoder, whereas $W_1 = (W_{v_1}, W_{u_1})$ and $W_2 = (W_{v_2}, W_{u_2})$ are the public messages sent by the encoders $\mathrm{Enc}_1(\cdot)$ and $\mathrm{Enc}_2(\cdot)$, respectively, to the fusion center.
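A toy illustration of this double random binning (our sketch; the block length and rates are hypothetical and far too small to be operationally meaningful) is:

```python
import itertools, random

def random_binning(sequences, R_tilde, R, n, seed=0):
    """Independently assign to each sequence a pair of uniform bin indices
    (F, W) with F in [0 : 2^{n*R_tilde}) and W in [0 : 2^{n*R})."""
    rng = random.Random(seed)
    nF, nW = 2 ** round(n * R_tilde), 2 ** round(n * R)
    return {seq: (rng.randrange(nF), rng.randrange(nW)) for seq in sequences}

n = 8
v1_space = list(itertools.product((0, 1), repeat=n))       # all candidate v_1^n
bins = random_binning(v1_space, R_tilde=0.25, R=0.5, n=n)  # (F_{v1}, W_{v1})
F_v1, W_v1 = bins[v1_space[3]]  # indices published for one realization
```

Here $W_{v_1}$ plays the role of the stored public message and $F_{v_1}$ the role of the shared public randomness; the SW decoding step below succeeds when the rates satisfy (58).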
We consider the following decoding order:
  • observing $(Y^n, F_{v_1}, W_{v_1})$, the decoder $\mathrm{Dec}(\cdot)$ estimates $V_1^n$ as $\hat{V}_1^n$;
  • observing $(Y^n, \hat{V}_1^n, F_{v_2}, W_{v_2})$, the decoder estimates $V_2^n$ as $\hat{V}_2^n$;
  • observing $(Y^n, \hat{V}_1^n, \hat{V}_2^n, F_{u_1}, W_{u_1})$, the decoder estimates $U_1^n$ as $\hat{U}_1^n$;
  • observing $(Y^n, \hat{V}_1^n, \hat{V}_2^n, \hat{U}_1^n, F_{u_2}, W_{u_2})$, the decoder estimates $U_2^n$ as $\hat{U}_2^n$.
By swapping indices 1 and 2 in the decoding order, another corner point in the achievable rate region is obtained, so we analyze the given decoding order but also provide the results for the other corner point.
Consider Step 1 in the decoding order given above. Using an SW [13] decoder, one can reliably estimate $V_1^n$ from $(Y^n, F_{v_1}, W_{v_1})$, such that the expected value of the error probability taken over the random bin assignments vanishes when $n \to \infty$, if we have (Lemma 1 of [30])
$$\tilde{R}_{v_1} + R_{v_1} > H(V_1|Y). \tag{58}$$
Similarly, the estimations in Steps 2–4 are reliable if we have
$$\tilde{R}_{v_2} + R_{v_2} > H(V_2|V_1, Y) \tag{59}$$
$$\tilde{R}_{u_1} + R_{u_1} > H(U_1|V_1, V_2, Y) \tag{60}$$
$$\tilde{R}_{u_2} + R_{u_2} > H(U_2|V_1, V_2, U_1, Y) \overset{(a)}{=} H(U_2|V_2, U_1, Y) \tag{61}$$
where $(a)$ follows from the Markov chain $V_1 - U_1 - (U_2, V_2, Y)$. Therefore, (2) is satisfied if (58)–(61) are satisfied.
The public index $F_{v_1}$ is almost independent of $\tilde{X}_1^n$, so it is almost independent of $(\tilde{X}_1^n, \tilde{X}_2^n, X^n, Y^n, Z^n)$, if we have (Theorem 1 of [30])
$$\tilde{R}_{v_1} < H(V_1|\tilde{X}_1) \tag{62}$$
because then the expected value, taken over the random bin assignments, of the variational distance between the joint probability distributions $\mathrm{Unif}[1 : 2^{n\tilde{R}_{v_1}}] \cdot P_{\tilde{X}_1^n}$ and $P_{F_{v_1}\tilde{X}_1^n}$ vanishes when $n \to \infty$. Furthermore, the public index $F_{u_1}$ is almost independent of $(V_1^n, \tilde{X}_1^n)$, so it is almost independent of $(V_1^n, \tilde{X}_1^n, \tilde{X}_2^n, X^n, Y^n, Z^n)$, if we have
$$\tilde{R}_{u_1} < H(U_1|V_1, \tilde{X}_1). \tag{63}$$
Similarly, $F_{v_2}$ is almost independent of $\tilde{X}_2^n$ if we have
$$\tilde{R}_{v_2} < H(V_2|\tilde{X}_2) \tag{64}$$
and $F_{u_2}$ is almost independent of $(V_2^n, \tilde{X}_2^n)$ if we have
$$\tilde{R}_{u_2} < H(U_2|V_2, \tilde{X}_2). \tag{65}$$
To satisfy (58)–(65), for any $\epsilon > 0$ we fix
$$\tilde{R}_{v_1} = H(V_1|\tilde{X}_1) - \epsilon \tag{66}$$
$$R_{v_1} = I(V_1;\tilde{X}_1) - I(V_1;Y) + 2\epsilon \tag{67}$$
$$\tilde{R}_{v_2} = H(V_2|\tilde{X}_2) - \epsilon \tag{68}$$
$$R_{v_2} = I(V_2;\tilde{X}_2) - I(V_2;V_1,Y) + 2\epsilon \tag{69}$$
$$\tilde{R}_{u_1} = H(U_1|V_1,\tilde{X}_1) - \epsilon \tag{70}$$
$$R_{u_1} = I(U_1;\tilde{X}_1|V_1) - I(U_1;V_2,Y|V_1) + 2\epsilon \tag{71}$$
$$\tilde{R}_{u_2} = H(U_2|V_2,\tilde{X}_2) - \epsilon \tag{72}$$
$$R_{u_2} = I(U_2;\tilde{X}_2|V_2) - I(U_2;U_1,Y|V_2) + 2\epsilon. \tag{73}$$
Public Message (Storage) Rates: The rates in (67) and (71) result in a public message (storage) rate $R_{w_1}$ of
$$
\begin{aligned}
R_{w_1} &= R_{v_1} + R_{u_1}\\
&\overset{(a)}{=} I(V_1;\tilde{X}_1|Y) + H(U_1|V_1,V_2,Y) - H(U_1|V_1,\tilde{X}_1) + 4\epsilon\\
&\overset{(b)}{=} I(V_1;\tilde{X}_1|Y) + I(U_1;\tilde{X}_1|V_1,V_2,Y) + 4\epsilon \tag{74}
\end{aligned}
$$
where $(a)$ follows because $V_1 - \tilde{X}_1 - Y$ forms a Markov chain and $(b)$ follows because $U_1 - (V_1,\tilde{X}_1) - (V_2,Y)$ forms a Markov chain. Furthermore, (69) and (73) result in a storage rate $R_{w_2}$ of
$$
\begin{aligned}
R_{w_2} &= R_{v_2} + R_{u_2}\\
&\overset{(a)}{=} I(V_2;\tilde{X}_2|V_1,Y) + H(U_2|U_1,V_2,Y) - H(U_2|V_2,\tilde{X}_2) + 4\epsilon\\
&\overset{(b)}{=} I(V_2;\tilde{X}_2|V_1,Y) + I(U_2;\tilde{X}_2|U_1,V_2,Y) + 4\epsilon \tag{75}
\end{aligned}
$$
where $(a)$ follows from the Markov chain $V_2 - \tilde{X}_2 - (V_1,Y)$ and $(b)$ from the Markov chain $U_2 - (V_2,\tilde{X}_2) - (U_1,Y)$. We remark that if the indices 1 and 2 in the decoding order given above are swapped, the other corner point, with
$$R_{w_1} = I(V_1;\tilde{X}_1|V_2,Y) + I(U_1;\tilde{X}_1|U_2,V_1,Y) + 4\epsilon \tag{76}$$
$$R_{w_2} = I(V_2;\tilde{X}_2|Y) + I(U_2;\tilde{X}_2|V_1,V_2,Y) + 4\epsilon \tag{77}$$
is achieved.
Privacy Leakage to Decoder: We have
$$
\begin{aligned}
I(X^n; W_1,W_2,F_1,F_2|Y^n) &= I(X^n;W_1,W_2|F_1,F_2,Y^n) + I(X^n;F_1,F_2|Y^n)\\
&\overset{(a)}{\leq} H(X^n|Y^n) - H(X^n|W_1,W_2,F_1,F_2,V_1^n,V_2^n,U_1^n,U_2^n,Y^n) + 4\epsilon_n\\
&\overset{(b)}{=} H(X^n|Y^n) - H(X^n|U_1^n,U_2^n,Y^n) + 4\epsilon_n\\
&\overset{(c)}{=} nI(U_1,U_2;X|Y) + 4\epsilon_n \tag{78}
\end{aligned}
$$
where
$(a)$ follows for some $\epsilon_n > 0$ with $\epsilon_n \to 0$ when $n \to \infty$ because
$$I(X^n;F_1,F_2|Y^n) = I(X^n;F_{v_1}|Y^n) + I(X^n;F_{u_1}|F_{v_1},Y^n) + I(X^n;F_{v_2}|F_{v_1},F_{u_1},Y^n) + I(X^n;F_{u_2}|F_{v_1},F_{u_1},F_{v_2},Y^n) \leq 4\epsilon_n \tag{79}$$
since (1) by (62), $F_{v_1}$ is almost independent of $(X^n,Y^n)$; (2) by (63), $F_{u_1}$ is almost independent of $(V_1^n,X^n,Y^n)$, and $V_1^n$ determines $F_{v_1}$; (3) by (64), $F_{v_2}$ is almost independent of $(U_1^n,V_1^n,X^n,Y^n)$, and $(V_1^n,U_1^n)$ determine $(F_{v_1},F_{u_1})$; (4) by (65), $F_{u_2}$ is almost independent of $(V_2^n,U_1^n,V_1^n,X^n,Y^n)$, and $(V_1^n,U_1^n,V_2^n)$ determine $(F_{v_1},F_{u_1},F_{v_2})$;
$(b)$ follows because $(V_1^n,V_2^n,U_1^n,U_2^n)$ determine $(W_1,W_2,F_1,F_2)$ and from the Markov chains $V_1^n - U_1^n - (X^n,Y^n,U_2^n,V_2^n)$ and $V_2^n - U_2^n - (X^n,Y^n,U_1^n)$;
$(c)$ follows because $(X^n,U_1^n,U_2^n,Y^n)$ are i.i.d.
Privacy Leakage to Eve: We have
$$
\begin{aligned}
&I(X^n;W_1,W_2,F_1,F_2|Z^n)\\
&\overset{(a)}{=} H(W_1,W_2,F_1,F_2|Z^n) - H(W_1,W_2,F_1,F_2|X^n)\\
&\overset{(b)}{=} H(W_1,W_2,F_1,F_2|Z^n) - H(W_{u_1},F_{u_1},W_{u_2},F_{u_2},V_1^n,V_2^n|X^n)\\
&\qquad + H(V_1^n|W_1,W_2,F_1,F_2,X^n) + H(V_2^n|V_1^n,W_1,W_2,F_1,F_2,X^n)\\
&\overset{(c)}{\leq} H(W_1,W_2,F_1,F_2|Z^n) - H(W_{u_1},F_{u_1},W_{u_2},F_{u_2},V_1^n,V_2^n|X^n) + 2n\epsilon_n\\
&\overset{(d)}{=} H(W_1,W_2,F_1,F_2|Z^n) - H(U_1^n,U_2^n,V_1^n,V_2^n|X^n) + 2n\epsilon_n\\
&\qquad + H(U_1^n|W_{u_1},F_{u_1},W_{u_2},F_{u_2},V_1^n,V_2^n,X^n) + H(U_2^n|U_1^n,W_{u_1},F_{u_1},W_{u_2},F_{u_2},V_1^n,V_2^n,X^n)\\
&\overset{(e)}{\leq} H(W_1,W_2,F_1,F_2|Z^n) - H(U_1^n,U_2^n,V_1^n,V_2^n|X^n) + 4n\epsilon_n\\
&\overset{(f)}{=} H(W_1,W_2,F_1,F_2|Z^n) - nH(U_1,U_2,V_1,V_2|X) + 4n\epsilon_n \tag{80}
\end{aligned}
$$
where $(a)$ follows because $(W_1,W_2,F_1,F_2) - X^n - Z^n$ form a Markov chain, $(b)$ follows since $(V_1^n,V_2^n)$ determine $(F_{v_1},W_{v_1},F_{v_2},W_{v_2})$, $(c)$ follows for some $\epsilon_n > 0$ such that $\epsilon_n \to 0$ when $n \to \infty$ because $(F_{v_1},W_{v_1},X^n)$ can reliably recover $V_1^n$ by (58), and similarly because $(F_{v_2},W_{v_2},V_1^n,X^n)$ can reliably recover $V_2^n$ by (59), both due to the Markov chain $(V_1^n,V_2^n) - X^n - Y^n$, $(d)$ follows because $(U_1^n,U_2^n)$ determine $(F_{u_1},W_{u_1},F_{u_2},W_{u_2})$, and $(e)$ follows because $(F_{u_1},W_{u_1},V_1^n,V_2^n,X^n)$ can reliably recover $U_1^n$ by (60) and the inequality
$$H(U_1|V_1,V_2,Y) \geq H(U_1|V_1,V_2,X) \tag{81}$$
which follows from
$$I(U_1;V_1,V_2,X) - I(U_1;V_1,V_2,Y) \geq I(U_1;V_1,V_2,X) - I(U_1;V_1,V_2,Y,X) = 0 \tag{82}$$
since $U_1 - (V_1,V_2,X) - Y$ form a Markov chain. Furthermore, $(F_{u_2},W_{u_2},V_1^n,V_2^n,U_1^n,X^n)$ can reliably recover $U_2^n$ by (61) and the inequality
$$H(U_2|V_1,V_2,U_1,Y) \geq H(U_2|V_1,V_2,U_1,X) \tag{83}$$
which can be proved entirely in the same way as (82) by using the Markov chain $U_2 - (V_1,V_2,U_1,X) - Y$, and $(f)$ follows because $(U_1^n,U_2^n,V_1^n,V_2^n,X^n)$ are i.i.d.
In (80), obtaining single-letter bounds on the term $H(W_1,W_2,F_1,F_2|Z^n)$ requires the analysis of numerous decodability cases, whereas only six different decodability cases are analyzed in [20] for secure function computation with a single transmitting node. To simplify our analysis by applying the results in [20], we combine Steps 1 and 2 of the decoding order given above such that $(V_1, V_2)$ are treated jointly, and, similarly, we combine Steps 3 and 4 such that $(U_1, U_2)$ are treated jointly. Using the combined steps, we can consider the six decodability cases analyzed in Section V-A of [20] by replacing $V^n$ with $(V_1^n, V_2^n)$ and $U^n$ with $(U_1^n, U_2^n)$, respectively, in the proof. Since the second term $nH(U_1,U_2,V_1,V_2|X)$ in (80) can be obtained by applying the same replacement to the second term in Equation (54) of [20], we obtain from (80) and these decodability analyses that
$$I(X^n;W_1,W_2,F_1,F_2|Z^n) \leq n\Big(\big[I(U_1,U_2;Z|V_1,V_2) - I(U_1,U_2;Y|V_1,V_2) + \epsilon\big]^- + I(U_1,U_2;X|Z) + 4\epsilon_n + \epsilon_n'\Big) \tag{84}$$
for some $\epsilon_n' > 0$ such that $\epsilon_n' \to 0$ when $n \to \infty$.
Secrecy Leakage (to Eve): We obtain
$$
\begin{aligned}
&I(\tilde{X}_1^n,\tilde{X}_2^n,Y^n;W_1,W_2,F_1,F_2|Z^n)\\
&\overset{(a)}{=} H(W_1,W_2,F_1,F_2|Z^n) - H(W_1,W_2,F_1,F_2|\tilde{X}_1^n,\tilde{X}_2^n)\\
&\overset{(b)}{=} H(W_1,W_2,F_1,F_2|Z^n) - H(W_{u_1},W_{u_2},F_{u_1},F_{u_2},V_1^n,V_2^n|\tilde{X}_1^n,\tilde{X}_2^n)\\
&\qquad + H(V_1^n|W_1,W_2,F_1,F_2,\tilde{X}_1^n,\tilde{X}_2^n) + H(V_2^n|V_1^n,W_1,W_2,F_1,F_2,\tilde{X}_1^n,\tilde{X}_2^n)\\
&\overset{(c)}{\leq} H(W_1,W_2,F_1,F_2|Z^n) - H(W_{u_1},W_{u_2},F_{u_1},F_{u_2},V_1^n,V_2^n|\tilde{X}_1^n,\tilde{X}_2^n) + 2n\epsilon_n\\
&\overset{(d)}{=} H(W_1,W_2,F_1,F_2|Z^n) - H(U_1^n,U_2^n,V_1^n,V_2^n|\tilde{X}_1^n,\tilde{X}_2^n) + 2n\epsilon_n\\
&\qquad + H(U_1^n|W_{u_1},W_{u_2},F_{u_1},F_{u_2},V_1^n,V_2^n,\tilde{X}_1^n,\tilde{X}_2^n) + H(U_2^n|U_1^n,W_{u_1},W_{u_2},F_{u_1},F_{u_2},V_1^n,V_2^n,\tilde{X}_1^n,\tilde{X}_2^n)\\
&\overset{(e)}{\leq} H(W_1,W_2,F_1,F_2|Z^n) - H(U_1^n,U_2^n,V_1^n,V_2^n|\tilde{X}_1^n,\tilde{X}_2^n) + 4n\epsilon_n\\
&\overset{(f)}{=} H(W_1,W_2,F_1,F_2|Z^n) - nH(U_1,U_2,V_1,V_2|\tilde{X}_1,\tilde{X}_2) + 4n\epsilon_n \tag{86}
\end{aligned}
$$
where $(a)$ follows from the Markov chain $(W_1,W_2,F_1,F_2) - (\tilde{X}_1^n,\tilde{X}_2^n) - (Y^n,Z^n)$, $(b)$ follows since $(V_1^n,V_2^n)$ determine $(F_{v_1},W_{v_1},F_{v_2},W_{v_2})$, $(c)$ follows because $(F_{v_1},W_{v_1},\tilde{X}_1^n,\tilde{X}_2^n)$ can reliably recover $V_1^n$ by (58), and similarly because $(F_{v_2},W_{v_2},V_1^n,\tilde{X}_1^n,\tilde{X}_2^n)$ can reliably recover $V_2^n$ by (59), both due to the Markov chain $(V_1^n,V_2^n) - (\tilde{X}_1^n,\tilde{X}_2^n) - Y^n$, $(d)$ follows since $(U_1^n,U_2^n)$ determine $(F_{u_1},W_{u_1},F_{u_2},W_{u_2})$, and $(e)$ follows because $(F_{u_1},W_{u_1},V_1^n,V_2^n,\tilde{X}_1^n,\tilde{X}_2^n)$ can reliably recover $U_1^n$ by (60) and the inequality
$$H(U_1|V_1,V_2,Y) \geq H(U_1|V_1,V_2,\tilde{X}_1,\tilde{X}_2) \tag{87}$$
which can be proved similarly to (82) due to the Markov chain $U_1 - (V_1,V_2,\tilde{X}_1,\tilde{X}_2) - Y$. Furthermore, $(F_{u_2},W_{u_2},V_1^n,V_2^n,U_1^n,\tilde{X}_1^n,\tilde{X}_2^n)$ can reliably recover $U_2^n$ by (61) and the inequality
$$H(U_2|V_1,V_2,U_1,Y) \geq H(U_2|V_1,V_2,U_1,\tilde{X}_1,\tilde{X}_2) \tag{88}$$
which can be proved by using the Markov chain $U_2 - (V_1,V_2,U_1,\tilde{X}_1,\tilde{X}_2) - Y$, and $(f)$ follows because $(U_1^n,U_2^n,V_1^n,V_2^n,\tilde{X}_1^n,\tilde{X}_2^n)$ are i.i.d.
We remark that the terms in (86) are entirely similar to the terms in (80). One can show that all steps of the decodability analysis from Section V-A of [20] that are applied to (80) can also be applied to (86) by replacing $X$ with $(\tilde{X}_1,\tilde{X}_2)$, so we obtain
$$I(\tilde{X}_1^n,\tilde{X}_2^n,Y^n;W_1,W_2,F_1,F_2|Z^n) \leq n\big[I(U_1,U_2;Z|V_1,V_2) - I(U_1,U_2;Y|V_1,V_2) + \epsilon\big]^- + nI(U_1,U_2;\tilde{X}_1,\tilde{X}_2|Z) + 5n\epsilon_n.$$
We consider that the public indices $(F_1, F_2)$ are generated uniformly at random and that the encoders generate $(V_1^n, U_1^n)$ and $(V_2^n, U_2^n)$ according to $P_{V_1^nU_1^nV_2^nU_2^n|\tilde{X}_1^nF_1\tilde{X}_2^nF_2}$, obtained from the binning scheme above. This procedure induces a joint probability distribution that is almost equal to $P_{V_1V_2U_1U_2\tilde{X}_1\tilde{X}_2XYZ}$ fixed as in (19) (Section 1.6 in [37]). Since the privacy and secrecy leakage metrics considered above are expectations over all possible realizations $F = f$, applying the selection lemma (Lemma 2.2 of [38]) to these results proves the achievability of Theorem 1 by choosing an $\epsilon > 0$ such that $\epsilon \to 0$ when $n \to \infty$. We remark that the achievable region is convexified by using a time-sharing random variable $Q$ such that $P_{QV_1V_2} = P_Q P_{V_1|Q} P_{V_2|Q}$, which is required because of the $[\cdot]^-$ operation. □

5.2. Outer Bound

Proof Sketch.
Assume that for some $n \geq 1$ and $\delta_n > 0$, there exist two encoders and a decoder such that (2)–(7) are satisfied for some tuple $(R_s, R_{w,1}, R_{w,2}, R_{\ell,\mathrm{Dec}}, R_{\ell,\mathrm{Eve}})$. Let
$$V_{1,i} = (W_1, Y_{i+1}^n, Z^{i-1}) \tag{89}$$
$$V_{2,i} = (W_2, Y_{i+1}^n, Z^{i-1}) \tag{90}$$
$$U_{1,i} = (X^{i-1}, W_1, Y_{i+1}^n, Z^{i-1}) \tag{91}$$
$$U_{2,i} = (X^{i-1}, W_2, Y_{i+1}^n, Z^{i-1}) \tag{92}$$
which satisfy the Markov chains
$$V_{1,i} - U_{1,i} - \tilde{X}_{1,i} - X_i - (\tilde{X}_{2,i}, Y_i, Z_i) \tag{93}$$
$$V_{2,i} - U_{2,i} - \tilde{X}_{2,i} - X_i - (\tilde{X}_{1,i}, Y_i, Z_i). \tag{94}$$
Admissibility of $(U_1, U_2)$: Define
$$n\epsilon_n = n\delta_n \log\big(|\tilde{\mathcal{X}}_1||\tilde{\mathcal{X}}_2||\mathcal{Y}|\big) + H_b(\delta_n) \tag{95}$$
such that $\epsilon_n \to 0$ if $\delta_n \to 0$. Using Fano's inequality and (2), we obtain
$$
\begin{aligned}
n\epsilon_n &\geq H(f^n|\widehat{f^n}) \overset{(a)}{=} H(f^n|\bar{f}^n) = \sum_{i=1}^n H(f_i|\bar{f}_i) \geq \sum_{i=1}^n H(f_i|\bar{f}^n)\\
&\overset{(b)}{\geq} \sum_{i=1}^n H(f_i|W_1,W_2,Y^n) \geq \sum_{i=1}^n H(f_i|W_1,W_2,Y^n,X^{i-1},Z^{i-1})\\
&\overset{(c)}{=} \sum_{i=1}^n H(f_i|W_1,W_2,Y_{i+1}^n,X^{i-1},Z^{i-1},Y_i) \overset{(d)}{=} \sum_{i=1}^n H(f_i|U_{1,i},U_{2,i},Y_i) \tag{96}
\end{aligned}
$$
where $(a)$ follows from Lemma 2 of [39], which proves that when $n \to \infty$, there exists an i.i.d. random variable $\bar{f}^n$ that satisfies both
$$H(f^n|\widehat{f^n}) = H(f^n|\bar{f}^n) \tag{97}$$
and the Markov chain
$$\widehat{f^n} - \bar{f}^n - (W_1, W_2, Y^n) \tag{98}$$
$(b)$ follows from the data processing inequality because of the Markov chain
$$\bar{f}^n - (W_1, W_2, Y^n) - f^n \tag{99}$$
which also permits randomized decoding, $(c)$ follows from the Markov chain
$$Y^{i-1} - (X^{i-1}, Z^{i-1}, W_1, W_2, Y_i, Y_{i+1}^n) - f_i \tag{100}$$
and $(d)$ follows from the definitions of $U_{1,i}$ and $U_{2,i}$.
Public Message (Storage) Rates: We obtain
$$
\begin{aligned}
n(R_{w,1}+\delta_n) &\overset{(a)}{\geq} \log|\mathcal{W}_1| \geq H(W_1|Y^n) \geq H(W_1|Y^n) - H(W_1|\tilde{X}_1^n,Y^n)\\
&= H(\tilde{X}_1^n|Y^n) - H(\tilde{X}_1^n|W_1,Y^n) = H(\tilde{X}_1^n|Y^n) - \sum_{i=1}^n H(\tilde{X}_{1,i}|\tilde{X}_1^{i-1},W_1,Y^n)\\
&\overset{(b)}{=} H(\tilde{X}_1^n|Y^n) - \sum_{i=1}^n H(\tilde{X}_{1,i}|\tilde{X}_1^{i-1},W_1,Y_{i+1}^n,Y_i)\\
&\overset{(c)}{\geq} H(\tilde{X}_1^n|Y^n) - \sum_{i=1}^n H(\tilde{X}_{1,i}|X^{i-1},Z^{i-1},W_1,Y_{i+1}^n,Y_i)\\
&\overset{(d)}{=} nH(\tilde{X}_1|Y) - \sum_{i=1}^n H(\tilde{X}_{1,i}|U_{1,i},Y_i) = \sum_{i=1}^n I(U_{1,i};\tilde{X}_{1,i}|Y_i)\\
&\overset{(e)}{=} \sum_{i=1}^n \big[I(V_{1,i};\tilde{X}_{1,i}|Y_i) + I(U_{1,i};\tilde{X}_{1,i}|Y_i,V_{1,i})\big]\\
&= \sum_{i=1}^n \big[I(V_{1,i};\tilde{X}_{1,i},V_{2,i}|Y_i) - I(V_{1,i};V_{2,i}|\tilde{X}_{1,i},Y_i) + I(U_{1,i};\tilde{X}_{1,i},U_{2,i}|Y_i,V_{1,i}) - I(U_{1,i};U_{2,i}|\tilde{X}_{1,i},Y_i,V_{1,i})\big]\\
&\geq \sum_{i=1}^n \big[I(V_{1,i};\tilde{X}_{1,i}|Y_i,V_{2,i}) - I(V_{1,i};V_{2,i}|\tilde{X}_{1,i},Y_i) + I(U_{1,i};\tilde{X}_{1,i}|Y_i,V_{1,i},U_{2,i}) - I(U_{1,i};U_{2,i}|\tilde{X}_{1,i},Y_i,V_{1,i})\big] \tag{101}
\end{aligned}
$$
where $(a)$ follows from (4), $(b)$ follows from the Markov chain
$$Y^{i-1} - (\tilde{X}_1^{i-1}, W_1, Y_{i+1}^n, Y_i) - \tilde{X}_{1,i} \tag{102}$$
$(c)$ follows from the data processing inequality applied to the Markov chain
$$(X^{i-1}, Z^{i-1}) - (\tilde{X}_1^{i-1}, W_1, Y_{i+1}^n, Y_i) - \tilde{X}_{1,i} \tag{103}$$
$(d)$ follows from the definition of $U_{1,i}$, and $(e)$ follows from (93). Similarly, one can show by symmetry that we have
$$n(R_{w,2}+\delta_n) \geq \sum_{i=1}^n \big[I(V_{2,i};\tilde{X}_{2,i}|Y_i,V_{1,i}) - I(V_{2,i};V_{1,i}|\tilde{X}_{2,i},Y_i) + I(U_{2,i};\tilde{X}_{2,i}|Y_i,V_{2,i},U_{1,i}) - I(U_{2,i};U_{1,i}|\tilde{X}_{2,i},Y_i,V_{2,i})\big]. \tag{104}$$
Now we consider the sum-rate bound such that
$$
\begin{aligned}
&n(R_{w,1}+\delta_n) + n(R_{w,2}+\delta_n)\\
&\overset{(a)}{\geq} \log(|\mathcal{W}_1|\cdot|\mathcal{W}_2|) \geq H(W_1,W_2) \geq I(W_1,W_2;\tilde{X}_1^n,\tilde{X}_2^n) - I(W_1,W_2;Y^n)\\
&\overset{(b)}{=} \sum_{i=1}^n \big[I(W_1,W_2;\tilde{X}_{1,i},\tilde{X}_{2,i}|\tilde{X}_1^{i-1},\tilde{X}_2^{i-1},Y_{i+1}^n) - I(W_1,W_2;Y_i|\tilde{X}_1^{i-1},\tilde{X}_2^{i-1},Y_{i+1}^n)\big]\\
&\overset{(c)}{=} \sum_{i=1}^n \big[I(W_1,W_2,\tilde{X}_1^{i-1},\tilde{X}_2^{i-1},Y_{i+1}^n;\tilde{X}_{1,i},\tilde{X}_{2,i}) - I(W_1,W_2,\tilde{X}_1^{i-1},\tilde{X}_2^{i-1},Y_{i+1}^n;Y_i)\big]\\
&\overset{(d)}{\geq} \sum_{i=1}^n \big[I(W_1,W_2,X^{i-1},Z^{i-1},Y_{i+1}^n;\tilde{X}_{1,i},\tilde{X}_{2,i}) - I(W_1,W_2,X^{i-1},Z^{i-1},Y_{i+1}^n;Y_i)\big]\\
&\overset{(e)}{=} \sum_{i=1}^n \big[I(U_{1,i},U_{2,i};\tilde{X}_{1,i},\tilde{X}_{2,i}) - I(U_{1,i},U_{2,i};Y_i)\big] \overset{(f)}{=} \sum_{i=1}^n I(U_{1,i},U_{2,i};\tilde{X}_{1,i},\tilde{X}_{2,i}|Y_i)\\
&\overset{(g)}{=} \sum_{i=1}^n \big[I(U_{1,i},U_{2,i};\tilde{X}_{1,i},\tilde{X}_{2,i}|Y_i,V_{1,i},V_{2,i}) + I(V_{1,i},V_{2,i};\tilde{X}_{1,i},\tilde{X}_{2,i}|Y_i)\big]\\
&\overset{(h)}{=} \sum_{i=1}^n \big[I(U_{1,i};\tilde{X}_{1,i},\tilde{X}_{2,i}|Y_i,V_{1,i},V_{2,i}) + I(U_{2,i};\tilde{X}_{1,i},\tilde{X}_{2,i}|Y_i,U_{1,i},V_{2,i})\\
&\qquad\qquad + I(V_{1,i};\tilde{X}_{1,i},\tilde{X}_{2,i}|Y_i) + I(V_{2,i};\tilde{X}_{1,i},\tilde{X}_{2,i}|Y_i,V_{1,i})\big]\\
&\geq \sum_{i=1}^n \big[I(U_{1,i};\tilde{X}_{1,i}|Y_i,V_{1,i},V_{2,i}) + I(U_{2,i};\tilde{X}_{2,i}|Y_i,U_{1,i},V_{2,i}) + I(V_{1,i};\tilde{X}_{1,i}|Y_i) + I(V_{2,i};\tilde{X}_{2,i}|Y_i,V_{1,i})\big] \tag{105}
\end{aligned}
$$
where $(a)$ follows from (4) and (5), $(b)$ follows from Csiszár's sum identity [40], $(c)$ follows because $(\tilde{X}_1^n,\tilde{X}_2^n,Y^n)$ are i.i.d., $(d)$ follows from the data processing inequality applied to the Markov chains
$$(X^{i-1}, Z^{i-1}) - (\tilde{X}_1^{i-1}, \tilde{X}_2^{i-1}, W_1, W_2, Y_{i+1}^n) - (\tilde{X}_{1,i}, \tilde{X}_{2,i}) \tag{106}$$
$$(\tilde{X}_1^{i-1}, \tilde{X}_2^{i-1}) - (X^{i-1}, Z^{i-1}, W_1, W_2, Y_{i+1}^n) - Y_i \tag{107}$$
$(e)$ follows from the definitions of $U_{1,i}$ and $U_{2,i}$, $(f)$ and $(g)$ follow from the Markov chain
$$(V_{1,i}, V_{2,i}) - (U_{1,i}, U_{2,i}) - (\tilde{X}_{1,i}, \tilde{X}_{2,i}) - Y_i \tag{108}$$
and $(h)$ follows from the Markov chain
$$V_{1,i} - (U_{1,i}, Y_i, V_{2,i}) - (U_{2,i}, \tilde{X}_{1,i}, \tilde{X}_{2,i}). \tag{109}$$
Privacy Leakage to Decoder: We have
$$
\begin{aligned}
&n(R_{\ell,\mathrm{Dec}}+\delta_n)\\
&\overset{(a)}{\geq} H(W_1,W_2|Y^n) - H(W_1,W_2|X^n)\\
&\overset{(b)}{=} \sum_{i=1}^n \big[I(W_1,W_2;X_i|X^{i-1},Y_{i+1}^n) - I(W_1,W_2;Y_i|Y_{i+1}^n,X^{i-1})\big]\\
&\overset{(c)}{=} \sum_{i=1}^n \big[I(W_1,W_2;X_i|X^{i-1},Z^{i-1},Y_{i+1}^n) - I(W_1,W_2;Y_i|Y_{i+1}^n,X^{i-1},Z^{i-1})\big]\\
&\overset{(d)}{=} \sum_{i=1}^n \big[I(W_1,W_2,X^{i-1},Z^{i-1},Y_{i+1}^n;X_i) - I(W_1,W_2,Y_{i+1}^n,X^{i-1},Z^{i-1};Y_i)\big]\\
&\overset{(e)}{=} \sum_{i=1}^n \big[I(U_{1,i},U_{2,i};X_i) - I(U_{1,i},U_{2,i};Y_i)\big] \overset{(f)}{=} \sum_{i=1}^n I(U_{1,i},U_{2,i};X_i|Y_i)
\end{aligned}
$$
where $(a)$ follows from (6) and from the Markov chain $(W_1,W_2) - X^n - Y^n$, $(b)$ follows from Csiszár's sum identity, $(c)$ follows from the Markov chain
$$Z^{i-1} - (X^{i-1}, Y_{i+1}^n) - (X_i, Y_i, W_1, W_2) \tag{112}$$
$(d)$ follows because $(X^n,Y^n,Z^n)$ are i.i.d., $(e)$ follows from the definitions of $U_{1,i}$ and $U_{2,i}$, and $(f)$ follows from the Markov chain
$$(U_{1,i}, U_{2,i}) - X_i - Y_i. \tag{113}$$
Privacy Leakage to Eve: We have
$$
\begin{aligned}
&n(R_{\ell,\mathrm{Eve}}+\delta_n)\\
&\overset{(a)}{\geq} \big[H(W_1,W_2|Z^n) - H(W_1,W_2|Y^n)\big] + \big[H(W_1,W_2|Y^n) - H(W_1,W_2|X^n)\big]\\
&\overset{(b)}{=} \sum_{i=1}^n \big[I(W_1,W_2;Y_i|Y_{i+1}^n,Z^{i-1}) - I(W_1,W_2;Z_i|Z^{i-1},Y_{i+1}^n)\big]\\
&\qquad + \sum_{i=1}^n \big[I(W_1,W_2;X_i|X^{i-1},Y_{i+1}^n) - I(W_1,W_2;Y_i|Y_{i+1}^n,X^{i-1})\big]\\
&\overset{(c)}{=} \sum_{i=1}^n \big[I(W_1,W_2;Y_i|Y_{i+1}^n,Z^{i-1}) - I(W_1,W_2;Z_i|Z^{i-1},Y_{i+1}^n)\big]\\
&\qquad + \sum_{i=1}^n \big[I(W_1,W_2;X_i|X^{i-1},Y_{i+1}^n,Z^{i-1}) - I(W_1,W_2;Y_i|Y_{i+1}^n,X^{i-1},Z^{i-1})\big]\\
&\overset{(d)}{=} \sum_{i=1}^n \big[I(W_1,W_2,Y_{i+1}^n,Z^{i-1};Y_i) - I(W_1,W_2,Z^{i-1},Y_{i+1}^n;Z_i)\big]\\
&\qquad + \sum_{i=1}^n \big[I(W_1,W_2,X^{i-1},Y_{i+1}^n,Z^{i-1};X_i) - I(W_1,W_2,Y_{i+1}^n,X^{i-1},Z^{i-1};Y_i)\big]\\
&\overset{(e)}{=} \sum_{i=1}^n \big[I(V_{1,i},V_{2,i};Y_i) - I(V_{1,i},V_{2,i};Z_i) + I(U_{1,i},U_{2,i},V_{1,i},V_{2,i};X_i) - I(U_{1,i},U_{2,i},V_{1,i},V_{2,i};Y_i)\big]\\
&= \sum_{i=1}^n \big[-I(U_{1,i},U_{2,i},V_{1,i},V_{2,i};Z_i) + I(U_{1,i},U_{2,i},V_{1,i},V_{2,i};X_i)\\
&\qquad\qquad + I(U_{1,i},U_{2,i};Z_i|V_{1,i},V_{2,i}) - I(U_{1,i},U_{2,i};Y_i|V_{1,i},V_{2,i})\big]\\
&\overset{(f)}{\geq} \sum_{i=1}^n \big[I(U_{1,i},U_{2,i};X_i|Z_i) + I(U_{1,i},U_{2,i};Z_i|V_{1,i},V_{2,i}) - I(U_{1,i},U_{2,i};Y_i|V_{1,i},V_{2,i})\big]
\end{aligned}
$$
where $(a)$ follows from (7) and from the Markov chain $(W_1,W_2) - X^n - Z^n$, $(b)$ follows from Csiszár's sum identity, $(c)$ follows from the Markov chain in (112), $(d)$ follows because $(X^n,Y^n,Z^n)$ are i.i.d., $(e)$ follows from the definitions of $V_{1,i}$, $V_{2,i}$, $U_{1,i}$, and $U_{2,i}$, and $(f)$ follows from the Markov chain
$$(V_{1,i}, V_{2,i}) - (U_{1,i}, U_{2,i}) - X_i - Z_i.$$
Secrecy Leakage (to Eve): We obtain
$$
\begin{aligned}
&n(R_s+\delta_n)\\
&\overset{(a)}{\geq} \big[H(W_1,W_2|Z^n) - H(W_1,W_2|Y^n)\big] + \big[H(W_1,W_2|Y^n) - H(W_1,W_2|\tilde{X}_1^n,\tilde{X}_2^n,Y^n)\big]\\
&\overset{(b)}{=} \sum_{i=1}^n \big[I(W_1,W_2;Y_i|Y_{i+1}^n,Z^{i-1}) - I(W_1,W_2;Z_i|Z^{i-1},Y_{i+1}^n)\\
&\qquad\quad + H(\tilde{X}_{1,i},\tilde{X}_{2,i}|Y_i) - H(\tilde{X}_{1,i},\tilde{X}_{2,i}|\tilde{X}_1^{i-1},\tilde{X}_2^{i-1},W_1,W_2,Y_{i+1}^n,Y_i)\big]\\
&\overset{(c)}{\geq} \sum_{i=1}^n \big[I(W_1,W_2,Y_{i+1}^n,Z^{i-1};Y_i) - I(W_1,W_2,Z^{i-1},Y_{i+1}^n;Z_i)\\
&\qquad\quad + H(\tilde{X}_{1,i},\tilde{X}_{2,i}|Y_i) - H(\tilde{X}_{1,i},\tilde{X}_{2,i}|X^{i-1},Z^{i-1},W_1,W_2,Y_{i+1}^n,Y_i)\big]\\
&\overset{(d)}{=} \sum_{i=1}^n \big[I(V_{1,i},V_{2,i};Y_i) - I(V_{1,i},V_{2,i};Z_i) + I(U_{1,i},U_{2,i},V_{1,i},V_{2,i};\tilde{X}_{1,i},\tilde{X}_{2,i}|Y_i)\big]\\
&\overset{(e)}{=} \sum_{i=1}^n \big[I(V_{1,i},V_{2,i};Y_i) - I(V_{1,i},V_{2,i};Z_i) + I(U_{1,i},U_{2,i},V_{1,i},V_{2,i};\tilde{X}_{1,i},\tilde{X}_{2,i}) - I(U_{1,i},U_{2,i},V_{1,i},V_{2,i};Y_i)\big]\\
&= \sum_{i=1}^n \big[-I(U_{1,i},U_{2,i},V_{1,i},V_{2,i};Z_i) + I(U_{1,i},U_{2,i},V_{1,i},V_{2,i};\tilde{X}_{1,i},\tilde{X}_{2,i})\\
&\qquad\qquad + I(U_{1,i},U_{2,i};Z_i|V_{1,i},V_{2,i}) - I(U_{1,i},U_{2,i};Y_i|V_{1,i},V_{2,i})\big]\\
&\overset{(f)}{\geq} \sum_{i=1}^n \big[I(U_{1,i},U_{2,i};\tilde{X}_{1,i},\tilde{X}_{2,i}|Z_i) + I(U_{1,i},U_{2,i};Z_i|V_{1,i},V_{2,i}) - I(U_{1,i},U_{2,i};Y_i|V_{1,i},V_{2,i})\big]
\end{aligned}
$$
where $(a)$ follows from (3), $(b)$ follows because $(\tilde{X}_1^n,\tilde{X}_2^n,Y^n)$ are i.i.d. and from Csiszár's sum identity and the Markov chain
$$Y^{i-1} - (\tilde{X}_1^{i-1},\tilde{X}_2^{i-1},W_1,W_2,Y_{i+1}^n,Y_i) - (\tilde{X}_{1,i},\tilde{X}_{2,i})$$
$(c)$ follows because $(Y^n,Z^n)$ are i.i.d. and from the data processing inequality applied to the Markov chain
$$(X^{i-1},Z^{i-1}) - (\tilde{X}_1^{i-1},\tilde{X}_2^{i-1},W_1,W_2,Y_{i+1}^n,Y_i) - (\tilde{X}_{1,i},\tilde{X}_{2,i})$$
$(d)$ follows from the definitions of $V_{1,i}$, $V_{2,i}$, $U_{1,i}$, and $U_{2,i}$, $(e)$ follows from the Markov chain given in (108), and $(f)$ follows from the Markov chain
$$(V_{1,i},V_{2,i}) - (U_{1,i},U_{2,i}) - (\tilde{X}_{1,i},\tilde{X}_{2,i}) - Z_i.$$
Introduce a uniformly distributed time-sharing random variable $Q \sim \mathrm{Unif}[1:n]$ that is independent of all other random variables, and define $X = X_Q$, $\tilde{X}_1 = \tilde{X}_{1,Q}$, $\tilde{X}_2 = \tilde{X}_{2,Q}$, $Y = Y_Q$, $Z = Z_Q$, $V_1 = V_{1,Q}$, $V_2 = V_{2,Q}$, $U_1 = (U_{1,Q}, Q)$, $U_2 = (U_{2,Q}, Q)$, and $f = f_Q$, so that
$$(Q, V_1) - U_1 - \tilde{X}_1 - X - (\tilde{X}_2, Y, Z)$$
$$(Q, V_2) - U_2 - \tilde{X}_2 - X - (\tilde{X}_1, Y, Z)$$
form Markov chains. The proof of the outer bound follows by letting $\delta_n \to 0$.
Cardinality Bounds: We use the support lemma (Lemma 15.4 of [40]) to prove the cardinality bounds and apply steps similar to those in [18,20], so we omit the proof. □

6. Conclusions

We considered the function computation problem, where three nodes observe correlated random variables and aim to compute a target function of their observations at the fusion center node. We modeled the source of the correlation between these nodes by positing that all three random variables are noisy observations of a remote random source. Furthermore, we imposed one secrecy, two privacy, and two storage constraints with operational meanings on this function computation problem to define a lossless rate region by considering an eavesdropper that observes a correlated random variable. The lossless function computation problem was extended by allowing the function computed to be a distorted version of the target function, which defined the lossy function computation problem.
We proposed inner and outer bounds for the lossless and lossy rate regions. The secrecy leakage and privacy leakage rates that are measured with respect to the eavesdropper were shown to be different due to the remote source considered, unlike in the literature. Furthermore, we characterized a simplified rate region for functions that are partially invertible with respect to one of the transmitting node observations as well as for invertible functions. Moreover, we considered two different physical-degradation cases for the measurement channels of the eavesdropper and fusion center when the function computed was invertible. We derived the corresponding simplified rate regions, one of which is evaluated as an example scenario, and proved that no auxiliary or time-sharing random variable is necessary to characterize these regions.
In future work, we will propose inner and outer bounds for the lossless and lossy multi-function computation problems with multiple transmitting nodes and characterize the rate regions for multi-function computations when the function computed is invertible.

Funding

This research was supported by the German Federal Ministry of Education and Research (BMBF) within the national initiative for “Post Shannon Communication (NewCom)” under the Grant 16KIS1004 and by the German Research Foundation (DFG) under the Grant SCHA 1944/9-1.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Acknowledgments

O. Günlü thanks Matthieu Bloch and Rafael F. Schaefer for their contributions to the conference papers used in this work.

Conflicts of Interest

The author declares no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Mijumbi, R.; Serrat, J.; Gorricho, J.L.; Bouten, N.; De Turck, F.; Boutaba, R. Network Function Virtualization: State-of-the-Art and Research Challenges. IEEE Commun. Surv. Tutor. 2016, 18, 236–262.
  2. Predd, J.B.; Kulkarni, S.R.; Poor, H.V. Distributed learning in wireless sensor networks. IEEE Signal Process. Mag. 2006, 23, 56–69.
  3. Yao, A.C. Protocols for secure computations. In Proceedings of the 23rd Annual Symposium on Foundations of Computer Science, Chicago, IL, USA, 3–5 November 1982; pp. 160–164.
  4. Yao, A.C. How to generate and exchange secrets. In Proceedings of the 27th Annual Symposium on Foundations of Computer Science, Toronto, ON, Canada, 27–29 October 1986; pp. 162–167.
  5. Tyagi, H.; Narayan, P.; Gupta, P. When Is a Function Securely Computable? IEEE Trans. Inf. Theory 2011, 57, 6337–6350.
  6. Orlitsky, A.; Roche, J.R. Coding for computing. IEEE Trans. Inf. Theory 2001, 47, 903–917.
  7. Bloch, M.; Günlü, O.; Yener, A.; Oggier, F.; Poor, H.V.; Sankar, L.; Schaefer, R.F. An Overview of Information-Theoretic Security and Privacy: Metrics, Limits and Applications. IEEE J. Sel. Areas Inf. Theory 2021, 2, 5–22.
  8. Ma, N.; Ishwar, P. Some Results on Distributed Source Coding for Interactive Function Computation. IEEE Trans. Inf. Theory 2011, 57, 6180–6195.
  9. Sefidgaran, M.; Tchamkerten, A. Computing a function of correlated sources: A rate region. In Proceedings of the 2011 IEEE International Symposium on Information Theory, St. Petersburg, Russia, 31 July–5 August 2011; pp. 1856–1860.
  10. Nazer, B.; Gastpar, M. Computation Over Multiple-Access Channels. IEEE Trans. Inf. Theory 2007, 53, 3498–3516.
  11. Kowshik, H.; Kumar, P.R. Optimal Function Computation in Directed and Undirected Graphs. IEEE Trans. Inf. Theory 2012, 58, 3407–3418.
  12. Kannan, S.; Viswanath, P. Multi-Session Function Computation and Multicasting in Undirected Graphs. IEEE J. Sel. Areas Commun. 2013, 31, 702–713.
  13. Slepian, D.; Wolf, J. Noiseless coding of correlated information sources. IEEE Trans. Inf. Theory 1973, 19, 471–480.
  14. Wyner, A.D.; Ziv, J. The rate-distortion function for source coding with side information at the decoder. IEEE Trans. Inf. Theory 1976, 22, 1–10.
  15. Goldenbaum, M.; Boche, H.; Poor, H.V. On secure computation over the binary modulo-2 adder multiple-access wiretap channel. In Proceedings of the 2016 IEEE Information Theory Workshop (ITW), Cambridge, UK, 11–14 September 2016; pp. 21–25.
  16. Tyagi, H.; Watanabe, S. Converses for Secret Key Agreement and Secure Computing. IEEE Trans. Inf. Theory 2015, 61, 4809–4827.
  17. Prabhakaran, V.; Ramchandran, K. On Secure Distributed Source Coding. In Proceedings of the 2007 IEEE Information Theory Workshop, Lake Tahoe, CA, USA, 2–6 September 2007; pp. 442–447.
  18. Tu, W.; Lai, L. On Function Computation with Privacy and Secrecy Constraints. IEEE Trans. Inf. Theory 2019, 65, 6716–6733.
  19. Günlü, O. Key Agreement with Physical Unclonable Functions and Biometric Identifiers. Ph.D. Thesis, Technische Universität München, Munich, Germany, 2018.
  20. Günlü, O.; Bloch, M.; Schaefer, R.F. Secure Multi-Function Computation with Private Remote Sources. arXiv 2021, arXiv:2106.09485.
  21. Wang, Y.; Rane, S.; Draper, S.C.; Ishwar, P. A Theoretical Analysis of Authentication, Privacy, and Reusability Across Secure Biometric Systems. IEEE Trans. Inf. Forensics Secur. 2012, 7, 1825–1840.
  22. Günlü, O.; Kittichokechai, K.; Schaefer, R.F.; Caire, G. Controllable Identifier Measurements for Private Authentication With Secret Keys. IEEE Trans. Inf. Forensics Secur. 2018, 13, 1945–1959.
  23. Li, N.; Zhang, Y.; Kuo, C.C.J. Explainable Machine Learning based Transform Coding for High Efficiency Intra Prediction. arXiv 2020, arXiv:2012.11152.
  24. Voloshynovskiy, S.; Koval, O.; Holotyak, T.; Beekhof, F. Privacy enhancement of common randomness based authentication: Key rate maximized case. In Proceedings of the 2009 First IEEE International Workshop on Information Forensics and Security (WIFS), London, UK, 6–9 December 2009; pp. 86–90.
  25. Wayman, J.; Jain, A.; Maltoni, D.; Maio, D. Biometric Systems: Technology, Design and Performance Evaluation; Springer: London, UK, 2005.
  26. De Groot, J.; Škorić, B.; De Vreede, N.; Linnartz, J.P. Information leakage of continuous-source zero secrecy leakage helper data schemes. Citeseer Gen 2012, 1, 1–16.
  27. Voloshynovskiy, S.; Koval, O.; Holotyak, T.; Beekhof, F.; Farhadzadeh, F. Privacy amplification of content identification systems based on fingerprint bit reliability. In Proceedings of the 2010 IEEE International Workshop on Information Forensics and Security, Seattle, WA, USA, 12–15 December 2010; pp. 1–6.
  28. Campisi, P. Security and Privacy in Biometrics; Springer: London, UK, 2013.
  29. McMahan, B.; Moore, E.; Ramage, D.; Hampson, S.; Agüera y Arcas, B. Communication-Efficient Learning of Deep Networks from Decentralized Data. Int. Conf. Artif. Intell. Statist. 2017, 54, 1273–1282.
  30. Yassaee, M.H.; Aref, M.R.; Gohari, A. Achievability Proof via Output Statistics of Random Binning. IEEE Trans. Inf. Theory 2014, 60, 6760–6786.
  31. Holenstein, T.; Renner, R. On the Randomness of Independent Experiments. IEEE Trans. Inf. Theory 2011, 57, 1865–1871.
  32. Günlü, O.; Bloch, M.; Schaefer, R.F. Secure Lossy Function Computation with Multiple Private Remote Source Observations. In Proceedings of the 25th International ITG Workshop on Smart Antennas (WSA 2021), French Riviera, France, 10–12 November 2021.
  33. Günlü, O.; Bloch, M.; Schaefer, R.F. Multiple Noisy Private Remote Source Observations for Secure Function Computation. In Proceedings of the Asilomar Conference on Signals, Systems & Computers, Pacific Grove, CA, USA, 31 October–3 November 2021.
  34. El Gamal, A.; Kim, Y.H. Network Information Theory; Cambridge University Press: Cambridge, UK, 2011.
  35. Ericson, T.; Körner, J. Successive encoding of correlated sources. IEEE Trans. Inf. Theory 1983, 29, 390–395.
  36. Ahmadipour, M.; Wigger, M.; Kobayashi, M. Joint Sensing and Communication over Memoryless Broadcast Channels. In Proceedings of the IEEE Information Theory Workshop, Kanazawa, Japan, 17–21 October 2021; pp. 1–5.
  37. Bloch, M. Lecture Notes in Information-Theoretic Security; The Georgia Institute of Technology: Atlanta, GA, USA, 2018.
  38. Bloch, M.; Barros, J. Physical-Layer Security; Cambridge University Press: Cambridge, UK, 2011.
  39. Günlü, O.; Schaefer, R.F.; Poor, H.V. Biometric and Physical Identifiers with Correlated Noise for Controllable Private Authentication. arXiv 2020, arXiv:2001.00847.
  40. Csiszár, I.; Körner, J. Information Theory: Coding Theorems for Discrete Memoryless Systems, 2nd ed.; Cambridge University Press: Cambridge, UK, 2011.
Figure 1. Single-function computation problem with two transmitting nodes under secrecy, privacy, and storage (or communication) constraints.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
