Review

Distributed Joint Source-Channel Coding in Wireless Sensor Networks

School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, 100876 Beijing, China
* Author to whom correspondence should be addressed.
Submission received: 3 April 2009 / Revised: 15 May 2009 / Accepted: 3 June 2009 / Published: 22 June 2009
(This article belongs to the Special Issue Wireless Sensor Technologies and Applications)

Abstract
Given that sensors in wireless sensor networks are energy-limited and must contend with noisy wireless channels, there is an urgent need for a low-complexity coding method that offers a high compression ratio and noise resistance. This paper reviews the progress made in distributed joint source-channel coding, which can address this need. The main existing deployments, from theory to practice, of distributed joint source-channel coding over independent channels, multiple access channels and broadcast channels are introduced in turn. We also present a practical scheme for compressing multiple correlated sources over independent channels. The simulation results demonstrate the desired efficiency.

1. Introduction

In recent years, growing concern for our environment and society has led to applications ranging from detecting chemical leaks to monitoring underground parking. Wireless Sensor Networks (WSNs) have received significant attention as a means of fulfilling these requirements. WSNs consist of a large number of cheap sensors, e.g., intelligent sensor nodes and micro-cameras, which are densely deployed and self-organized to perform complex tasks efficiently and reliably in inaccessible situations.
While having a significant impact throughout society with their rich application space, WSNs also bring challenges to information and network technologies. The constrained energy of the sensors leads to limited processing capability and transmission power. Furthermore, noise, fading and multi-user channels [1] degrade the data reconstruction. Therefore, for sending the collected data from the sensors to the sink (a more powerful sensor node for preliminary data collection and processing), a robust method with high compression and low complexity is urgently needed in WSNs.
Addressing the above issues, Distributed Joint Source-Channel Coding (DJSCC) has attracted considerable research as an accepted, efficient technique. DJSCC derives from Distributed Source Coding (DSC), an important development in source coding. DSC allows the separate encoding of correlated sources to be as efficient as the joint encoding of traditional Shannon theory, as stated by the Slepian-Wolf theorem [2]. This advantage suits WSNs perfectly: it exploits the large redundancy among the sensors' measurements while saving the energy required for inter-sensor communications. Moreover, the joint source-channel approach is also a natural choice in WSNs. A separate design requires two sets of encoders and decoders, for source coding and channel coding respectively, which aggravates the burden on the sensors; in addition, the joint approach is usually more efficient than, or at least as efficient as, the separate approach. Consequently, by unifying DSC and joint source-channel coding, DJSCC is fully efficient in WSNs.
This paper summarizes the developments of DJSCC over various channels in WSNs. In [3], the authors identify three categories of situations: the asymmetric case, the independent channels case and the Multiple Access Channel (MAC) case. Usually the sensors in a WSN are divided into clusters, each containing a more powerful sensor node that acts as a sink for the preliminary collection and analysis of the data from the member sensors. When only one source is compressed and transmitted to the sink at a time, the sink uses its own full observation as side information for joint decoding; this is known as the asymmetric case. If more than one source is to be compressed, they can be transmitted to the sink through several independent channels or through an MAC. However, reference [3] ignores another important case, the Broadcast Channel (BC) case, which has attracted attention in recent years. We add this fourth case because it enables an adaptive WSN at minimum cost: the member sensors receive a global observation from the sink in addition to their own measurements, allowing them to make reliable decisions or other reactions. For example, the global observation can be a threshold on the temperature of interest, which may change as required. The sink distributes this global information to all the member sensors promptly, so that they can avoid transmitting useless data to the sink and thus save energy.
This paper is meant to generalize a framework for the development of DJSCC in WSNs, and is by no means exhaustive. The organization is as follows. A brief review of DSC is presented in Section 2, especially the noise immunity of the DSC. The previous theoretical results and implementing methods of the DJSCC for the asymmetric case, the independent channels case, the MAC case and the BC case are presented in Section 3. Our proposed scheme is provided as an example showing the efficiency of DJSCC for the independent channels case in Section 4. Section 5 concludes the paper.

2. Distributed Source Coding

2.1. Theory and Practice

The Slepian-Wolf theorem [2] establishes the achievable rate region of DSC for correlated sources X1 and X2:
R1 ≥ H(X1 | X2)
R2 ≥ H(X2 | X1)
R1 + R2 ≥ H(X1, X2)
where H(·) denotes the entropy function and Ri is the encoding rate of Xi, i = 1, 2. This region is called the Slepian-Wolf rate region. In [4], the technique of random binning was introduced to generalize the Slepian-Wolf result to jointly ergodic sources. Similar results were obtained in [5] for lossy coding. In brief, binning refers to deliberately partitioning the source space into disjoint subsets, or bins. The goal is to reduce the number of bits needed to identify a source symbol to the index of its bin.
Reference [6] first introduced a practical strategy, based on channel codes, for exploiting the potential of the Slepian-Wolf theorem. The statistical dependence between the two sources is modeled as a virtual correlation channel: the source X1 and the other source X2 (called the side information) are regarded as the input and the output of the virtual channel, respectively. Thus, DSC can be treated as a channel coding problem. Based on this idea, the first practical DSC scheme, DISCUS [7], was designed, and a flurry of practical DSC schemes has followed [8-14].
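To make the virtual-channel view concrete, here is a toy sketch of ours (not the DISCUS design itself) that compresses a 7-bit block to its 3-bit syndrome under the (7,4) Hamming code; the decoder recovers the block from the side information whenever the two sources differ in at most one position:

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def sw_encode(x1):
    """Compress a 7-bit block to its 3-bit syndrome (bin index)."""
    return H @ x1 % 2

def sw_decode(syndrome, x2):
    """Recover x1 from its syndrome and the side information x2.
    Works whenever x1 and x2 differ in at most one bit."""
    # Syndrome of the difference pattern e = x1 XOR x2.
    diff = (syndrome + H @ x2) % 2
    e = np.zeros(7, dtype=int)
    if diff.any():
        # A single error at position i produces column i of H
        # as its syndrome; locate that column.
        for i in range(7):
            if np.array_equal(H[:, i], diff):
                e[i] = 1
                break
    return (x2 + e) % 2

x1 = np.array([1, 0, 1, 1, 0, 0, 1])
x2 = x1.copy(); x2[4] ^= 1          # side information: one bit flipped
assert np.array_equal(sw_decode(sw_encode(x1), x2), x1)
```

Only 3 bits are sent instead of 7, which is exactly the compression the bin index provides when the correlation keeps the two observations within one bit flip of each other.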

2.2. Syndrome-Based and Parity-Based DSC

The binning/syndrome approach and the parity approach are the major methods for practical DSC, both stemming from the notion in [6]. References [7,9,12,14] use the syndrome approach, while [8,10,11,13] use the parity approach. As analyzed in [15], the syndrome approach outperforms the parity approach in a noiseless environment by optimizing the space partitioning. In a noisy environment, however, the syndrome approach is sensitive to any error in the syndrome sequence (bin index), since such an error shifts the search to the wrong, unforeseeable part of the space. In comparison, the parity approach is much more robust to residual transmission errors. Moreover, the parity approach can directly reuse the encoders and decoders of channel codes. These considerations lead to the conclusion that parity-based DSC is more efficient for distributed compression over noisy communication channels, namely the DJSCC problem.
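A minimal illustration of the parity approach (our sketch, assuming a systematic (7,4) Hamming code chosen for illustration): only the three parity bits are transmitted, and the decoder plugs the side information in as the noisy systematic part of the codeword before running ordinary error correction:

```python
import numpy as np

P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])                    # systematic (7,4) Hamming code
H = np.hstack([P.T, np.eye(3, dtype=int)])   # parity-check matrix [P^T | I]

def parity_encode(x1):
    """Transmit only the 3 parity bits of the systematic codeword."""
    return x1 @ P % 2

def parity_decode(parity, x2):
    """Treat the side information as the (noisy) systematic part,
    then run ordinary single-error correction on [x2 | parity]."""
    word = np.concatenate([x2, parity])
    synd = H @ word % 2
    if synd.any():
        for i in range(7):
            if np.array_equal(H[:, i], synd):
                word[i] ^= 1
                break
    return word[:4]

x1 = np.array([1, 0, 1, 1])
x2 = x1.copy(); x2[2] ^= 1          # side information with one bit flipped
assert np.array_equal(parity_decode(parity_encode(x1), x2), x1)
```

Because the transmitted bits are ordinary parity bits, the same decoder also absorbs residual channel errors on them, which is the robustness advantage discussed above.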

3. Distributed Joint Source-Channel Coding

3.1. DJSCC for the Asymmetric Case

In the asymmetric case, one of the sources, X2, is available at the decoder and can be treated as a noisy version of the source X1 observed through a virtual correlation channel. The goal is to reconstruct X1 losslessly, or within a certain distortion, at the joint decoder with the help of the side information X2. The system model is shown in Figure 1.
In the asymmetric case, the separation theorem is valid. Reference [16] proved that the standard separation theorem [1] still applies when the entropy of the source is replaced by its conditional entropy given the side information. Thus, the source X1 can be reliably transmitted if and only if:
H(X1 | X2) < C
where C is the capacity of the actual noisy channel. Combined with the Slepian-Wolf condition R1 ≥ H(X1 | X2), the encoding rate of X1 for DJSCC in the asymmetric case satisfies:
R1 ≥ H(X1 | X2) / C
An extension to lossy DJSCC in this case is given in [17]. Assuming the sources are independent, identically distributed binary sequences, this code compresses less than a pure Slepian-Wolf scheme, since C < 1; in exchange, it gains the ability to resist the errors of the noisy channel. Therefore, the most direct way to implement DJSCC is to send additional bits beyond a general DSC scheme so as to approach the theoretical bound. Efficient designs based on this idea are proposed in [18-23].
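As a numerical illustration of bound (3), suppose (purely for illustration) that the inter-source correlation is a BSC with crossover 0.1 and the actual channel is a BSC with crossover 0.05; the required rate then grows by the factor 1/C relative to the noiseless Slepian-Wolf bound:

```python
from math import log2

def h2(p):
    """Binary entropy function in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

# Correlation: X2 = X1 XOR E with P(E = 1) = 0.1, so H(X1|X2) = h2(0.1).
# Actual channel: BSC with crossover 0.05, so C = 1 - h2(0.05).
p_corr, eps = 0.10, 0.05
rate_dsc   = h2(p_corr)                      # Slepian-Wolf bound, noiseless link
rate_djscc = h2(p_corr) / (1 - h2(eps))      # bound (3) with C < 1

print(f"Slepian-Wolf bound : {rate_dsc:.4f} bit/sample")
print(f"DJSCC bound        : {rate_djscc:.4f} bit/sample")
```

The DJSCC rate exceeds the pure Slepian-Wolf rate, and the excess is exactly the redundancy spent on protecting the compressed bits against the channel.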
Reference [18] addresses the case in which decompression must be performed on compressed data corrupted by Additive White Gaussian Noise (AWGN). Turbo codes are used, with the matrices of the two constituent encoders carefully designed. The design of [19] exploits systematic Irregular Repeat-Accumulate (IRA) codes [24] for DJSCC; its main advantage is the ability to design for different channel conditions on the systematic and parity parts separately, and the simulation results confirm its superiority over the turbo-code scheme. Using IRA codes for precoding, a Raptor code [25] was designed for DJSCC [20] over packet erasure channels; the rateless property guarantees successful decoding regardless of the packet loss ratio. A bias towards selecting IRA parity symbols over systematic symbols in forming the bipartite graph of the LT code is introduced, but no computable method for determining the optimal design is given. In [21], the authors propose a scheme for compressing sources whose correlation has memory, following a Hidden Markov Model (HMM); the scheme performs well while requiring no prior knowledge of the HMM parameters at the encoder or decoder.
The syndrome-based approach can still be used in DJSCC by treating the syndromes as a special type of parity bits. Reference [22] designs an error-resilient syndrome-based DJSCC scheme using convolutional codes, based on the Asymmetric Syndrome-former Inverse-syndrome-former Framework (ASIF) proposed in [28]. Inspired by [22], reference [23] considers syndrome-based LDPC decoding in the asymmetric DJSCC problem.

3.2. DJSCC for the Independent Channels Case

Instead of compressing only one source, in the independent channels case, all the sources are independently encoded and transmitted over their respective noisy channels, as shown in Figure 2.
In the independent channels case, the separation theorem still holds [29]. It has been shown in [30] that a non-empty intersection between the channel capacity region and the Slepian-Wolf region is the necessary and sufficient condition for reliable communication, a result known as the independent channels theory. As illustrated in Figure 3, the conditions for two correlated sources X1 and X2 can be formulated as:
H(X1 | X2) < C1
H(X2 | X1) < C2
H(X1, X2) < C1 + C2
where Ci is the channel capacity for Xi, i = 1, 2. Assuming C1 = C2 = C, results similar to Formula (3) can be obtained:
R1 ≥ H(X1 | X2) / C
R2 ≥ H(X2 | X1) / C
R1 + R2 ≥ H(X1, X2) / C
where Ri is the encoding rate of Xi.
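A small helper of ours (with illustrative numbers, not values from any cited design) that checks whether a candidate rate pair satisfies these three conditions:

```python
def achievable(R1, R2, H1g2, H2g1, H12, C=1.0):
    """Check the independent-channels rate conditions of formula (5):
    each rate must cover the conditional entropy scaled by 1/C,
    and the sum rate must cover the joint entropy scaled by 1/C."""
    return (R1 >= H1g2 / C and
            R2 >= H2g1 / C and
            R1 + R2 >= H12 / C)

# Binary symmetric pair with crossover 0.1, sent over channels of capacity C:
H1g2 = H2g1 = 0.4690          # h2(0.1)
H12  = 1 + 0.4690             # H(X1) + H(X2 | X1)
C    = 0.7136                 # e.g. 1 - h2(0.05) for a BSC(0.05)
print(achievable(1.0, 1.2, H1g2, H2g1, H12, C))   # True: inside the region
print(achievable(0.5, 0.5, H1g2, H2g1, H12, C))   # False: per-source bound fails
```

The same membership test, run over a grid of rate pairs, traces out the corner-point region sketched in Figure 3 scaled by 1/C.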
Typical designs [31-34] of DJSCC for the independent channels case follow the same strategy as in the asymmetric case. In [31], Low-Density Generator Matrix (LDGM) codes [35] are used for DJSCC, with a concatenated scheme proposed to avoid error floors. Combining the sparseness of the generator matrix and the parity check matrix, LDGM codes have a complexity advantage over standard LDPC and turbo codes, and the performance of the proposed DJSCC scheme is very close to the theoretical limits in practical situations. A DJSCC scheme for known/unknown correlated sources using a "super" turbo code is proposed in [32]. The so-called "super" turbo code consists of a turbo code for each source and an additional spread interleaver acting on the sequence associated with the second source; if the correlation parameter is unknown at the decoder, it can still be estimated within the decoding iterations. Abstracting the common features of [31] and [32], turbo-like codes for DJSCC are discussed in [33]. In [34], the authors consider the DJSCC problem using LDPC codes in a setting where no side information is communicated to the decoder, and they also provide analytical performance bounds for joint iterative LDPC decoding of correlated sources.

3.3. DJSCC for the Multiple Access Channel Case

Most work on DJSCC over the MAC concentrates on the necessary and sufficient conditions for sending correlated sources with arbitrarily small probability of error; to the best of our knowledge, practical algorithms remain rare.
Reference [36] provides the theorem developing so-called single-letter characterizations of an achievable rate region for correlated sources sent over a MAC. The system model is shown in Figure 4. Considering discrete alphabet sources (U1, U2) ∼ p(u1, u2), the sources transmit their codewords Xi, i ∈ {1, 2}, to a single decoder through a memoryless MAC (X1 × X2, Y, p(y | x1, x2)). The achievable region for sending U1 and U2 with arbitrarily small error is:
H(U1 | U2) < I(X1; Y | X2, U2)
H(U2 | U1) < I(X2; Y | X1, U1)
H(U1, U2) < I(X1, X2; Y)
for some p(u1, u2, x1, x2, y) = p(u1, u2) p(x1 | u1) p(x2 | u2) p(y | x1, x2). The MAC region and the Slepian-Wolf region are special cases of the region in (6). Generalizing the results to sources with a common part W = f(U1) = g(U2), the corresponding achievable rate region is given in Theorem 1 of [36].
Although an instructive counterexample presented in [37] shows that the coding strategy of [36] is not optimal, the problem of determining the true capacity region remains open. Thus, many follow-up works are still based on the conditions of [36]. A graph-based modular framework is proposed in [38] to further reduce the performance gap. Taking the schemes in [36] as building blocks, the transmission of correlated sources over MACs with receiver side information Z is discussed in [39]. The conditions for lossless reconstruction are:
H(U1 | U2, Z) < I(X1; Y | X2, U2, Z)
H(U2 | U1, Z) < I(X2; Y | X1, U1, Z)
H(U1, U2 | Z) < I(X1, X2; Y | Z)
for some joint distribution p(u1, u2, z, x1, x2, y) = p(u1, u2, z) p(x1 | u1) p(x2 | u2) p(y | x1, x2). Note that the correlation among the sources condenses the region. Recently, [40] proposed a generalized theorem covering the side information scenario and various channel conditions; its most important conclusion is that, in the MAC case, joint designs outperform the Shannon separation limit.
Using strategies similar to those of [31], a DJSCC scheme with LDGM codes over the MAC is proposed in [41] and [42]. By utilizing different structures at the encoder side, the proposed system is able to outperform the separation limit over a wide correlation range.
One important special case of the MAC should be added here: the discrete memoryless (dm) Asymmetric MAC (AMAC), over which the separation theorem can be applied [43]. In the dm AMAC situation, the messages of one source are encoded by both encoders, whereas the messages of the other source are encoded by only one of the encoders. Reference [43] derives the necessary and sufficient conditions for reliable transmission over the dm AMAC. Similar conclusions are presented in [44] for the case where the part common to the two sources is transmitted losslessly in the Shannon sense.

3.4. DJSCC for the Broadcast Channel Case

The problem of transmitting correlated sources over the BC was investigated as early as 1987 [45]. However, that scenario differs from the one discussed here, because the correlated sources are all available at the encoder; many other results have been proposed in the same setting [46,47]. We consider instead the situation where the side information is known only at the decoders. As shown in Figure 5, a memoryless source X is transmitted over a memoryless BC PV1,…,VK|U(v1,…,vK|u), where each receiver's own observation Yi, i = 1,…,K, acts as side information about the source X.
In [48], the author characterizes the achievable rates for lossless coding under both separate and joint designs. The analysis shows that joint source-channel coding permits a much simpler strategy for any K ≥ 2 and admits a single-letter characterization. A rate of κ channel uses per source symbol is achievable if and only if there exists PU(u) such that:
H(X | Yi) < κ I(U; Vi),  i = 1, …, K
The necessary and sufficient condition for joint source-channel coding over the BC is then presented in Theorem 6 of [48], which states that reliable communication is possible at rate κ if and only if:
(H(X | Y1), H(X | Y2), …, H(X | YK)) ∈ κR
is satisfied, where R is a tight bound on the total rates. The proof employs “virtual binning” and “operational separation” exhibited by the optimal joint source-channel coding strategy.
Using ideas from [48] and dirty paper coding [49], a design for lossy coding is proposed in [50]. The most direct extension adds quantization before transmission; its sufficient condition is given in Theorem 1 of [50]. A dirty-paper version is obtained in Theorem 2 of [50] by additionally handling channel state information (CSI) at the encoder. Their distortion performance is analyzed through numerical comparisons for the quadratic Gaussian and binary Hamming cases; both schemes are always at least as good as separate coding, and in fact better except in one particular case. In the subsequent paper [51], the two digital schemes of [50] are combined with analog (uncoded) transmission to extract the benefits of both methods.
A brief summary of this section is in order. The first two cases, the asymmetric case and the independent channels case, are comparatively simple and have been adequately studied. In the MAC case and the BC case, however, a number of problems remain open; for example, the optimal rate regions of the general DJSCC settings and the corresponding code design methods are still undetermined. If the interference of adjacent channels can be treated as a kind of noise in WSNs, the MAC case reduces to the independent channels case, so research on the independent channels case plays a fundamental role in the study of DJSCC. We therefore propose a DJSCC scheme for multiple correlated sources over independent channels in the following section.

4. Proposed Scheme of DJSCC for the Independent Channels Case

A practical DJSCC scheme for multiple correlated sources is proposed for the independent channels case. Compared with the practical designs listed in subsection 3.2, the proposed scheme stresses two points. Firstly, DJSCC for three correlated sources is investigated; intuitively, this exploits the redundancy more efficiently than considering pairs. In contrast, the approaches in [31-34] all process two correlated sources, and although [52-54] consider the compression of three correlated sources, actual noisy channels are absent from all of them. Secondly, noisy side information is taken into account both in determining the theoretical limits and in the optimal code design; moreover, an instructive extension of the previous independent channels theory [29,30] is derived.
In the proposed scheme, an efficient coding strategy with a single LDPC code is designed to handle both source and channel coding, and the theoretical limits of DJSCC for multiple sources are derived. The simulation results demonstrate the desired efficiency of the proposed DJSCC scheme.

4.1. Proposed Scheme

The proposed scheme is inspired by the parallel channel model introduced in [54]. The basic idea is to divide the encoded codeword into several fractions and process them separately. Consider three memoryless binary sources X1, X2 and X3 that are statistically dependent on each other with cross-over probability P[Xj ≠ Xi | Xi] = Pij, ∀ i, j ∈ {1,2,3}, i ≠ j. The k-bit sequence of source Xi is encoded independently using a common systematic (k, n) LDPC code; thus, the codeword of Xi consists of the information bits Xi and the parity bits Pi, where |Pi| = n − k. Because of the correlation, however, it is unnecessary to transmit the whole codeword to the decoder. For source Xi, only ai·k information bits and bi·|Pi| parity bits are transmitted through the noisy channel, where ai and bi are proportional coefficients with 0 ≤ ai, bi ≤ 1 and Σi ai = Σi bi = 1. The structures for transmission are illustrated in Figure 6. The DJSCC encoding rate Ri for Xi is calculated as:
Ri = (ai·k + bi·|Pi|) / k
Since Ri is constrained by the independent channels theory, the selection of ai and bi must also follow rules, which are discussed in the next subsection.
At the joint decoder, the received fragments from the different encoders are reassembled into a complete codeword for decoding, and they act as side information for one another. The initial LLRs of the different fractions in the message-passing decoding algorithm [9] therefore differ according to the corresponding channel models.
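The splitting step of the encoder can be sketched as follows; the contiguous slice boundaries, the toy code dimensions and the coefficient values are our illustrative assumptions, and the real LDPC parity bits are replaced by random placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 9, 15                      # toy systematic (k, n) code dimensions
a = [0.28, 0.34, 0.38]            # info-bit fractions per source (sum to 1)
b = [0.36, 0.33, 0.31]            # parity-bit fractions per source (sum to 1)

def transmit_fraction(info, parity, i):
    """Source X_i sends only its assigned slice of information bits
    and parity bits; the rest is recovered from side information.
    Contiguous slices keep the toy example simple; a real design
    would fix the slice boundaries jointly across the three sources."""
    ia = [0] + list(np.cumsum(np.round(np.array(a) * k).astype(int)))
    ib = [0] + list(np.cumsum(np.round(np.array(b) * (n - k)).astype(int)))
    return info[ia[i]:ia[i + 1]], parity[ib[i]:ib[i + 1]]

info   = rng.integers(0, 2, k)
parity = rng.integers(0, 2, n - k)    # stand-in for real LDPC parity bits
for i in range(3):
    s_info, s_par = transmit_fraction(info, parity, i)
    R_i = (len(s_info) + len(s_par)) / k      # encoding rate, formula (10)
    print(f"X{i+1}: {len(s_info)} info + {len(s_par)} parity bits, R = {R_i:.2f}")
```

Across the three sources the transmitted slices cover each complete codeword exactly once, which is what lets the joint decoder reassemble full codewords from the fragments.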

4.2. Correlation Model and Theoretical Limits

Suppose the actual noisy channel for source Xi is an AWGN channel with variance σi², and assume σi² = σ² for clarity. Consider X1 first: to obtain a complete codeword for decoding, the a2 and a3 fractions are filled in with the side information received from X2 and X3, respectively. Since this side information is itself corrupted by AWGN, the correlation between the side information and the original information is modeled as a combined channel, the virtual BSC concatenated with the AWGN channel. The same method applies to the decoding of X2 and X3.
Determining the correlation models for some fractions of X2 and X3 is more involved. Taking X2 as an example, there are two kinds of side information for its a3 fraction: the decoded version X̂1 of X1 and the received version of X3. As introduced in [52], this dependency can be modeled by a BSC with a non-fixed cross-over probability Q, denoted BSC(Q); the value of Q is determined by whether the decoded version X̂1 and the received version of X3 agree. Figure 7 shows the parallel channel model of source X2, which reveals the correlation model for each of its fractions. The other sources can be modeled in the same manner.
The remainder of this subsection gives the constraints on Ri for this model. Assuming Pij = p, the channel capacities of the actual noisy channel, the combined BSC+AWGN channel and the combined BSC(Q)+AWGN channel are denoted by C(σ²), C(p, σ²) and C(Q, σ²), respectively.
Theorem: The encoding rates of X1, X2, X3 and the total encoding rate follow the constraints:
R1 ≥ [1 − (1 − a1)·C(p, σ²)] / C(σ²)
R2 ≥ [1 − a1·C(p, σ²) − a3·C(Q, σ²)] / C(σ²)
R3 ≥ [1 − (1 − a3)·C(Q, σ²)] / C(σ²)
R1 + R2 + R3 ≥ [3 − C(p, σ²) − C(Q, σ²)] / C(σ²)
Proof: Taking source X2 as an example, the overall channel capacity of X2 is:
C2sum = a2·R2c·C(σ²) + (1 − R2c)·C(σ²) + a1·R2c·C(p, σ²) + a3·R2c·C(Q, σ²)
where R2c is the code rate of this channel and R2c ≤ C2sum, which results in:
R2c ≤ C(σ²) / [1 − a2·C(σ²) + C(σ²) − a1·C(p, σ²) − a3·C(Q, σ²)]
Since R2c = k / (k + |P2|), the number of parity bits |P2| has the lower bound:
|P2| ≥ [1 − a1·C(p, σ²) − a3·C(Q, σ²)]·k / C(σ²) − a2·k
Substituting (17) into (10) yields formula (12). The other results of the theorem follow in the same way.
In order to compare with the previous theoretical results in subsection 3.2, we consider the case of two correlated sources X1 and X2. This is a special case of our proposed scheme which can be obtained by setting a1 = a, a2 = 1 - a, a3 = 0, b1 = 1-a, b2 = a and b3 = 0, where 0≤a≤1. The total encoding rate can be obtained as:
R1 + R2 ≥ [2 − C(p, σ²)] / C(σ²)
Neglecting the noise on the side information bits gives C(p, σ²) = C(p) = 1 − H(p). Substituting this into formula (18) recovers formula (5) exactly. Thus, the independent channels theory of subsection 3.2 is a special case of our theorem, which constitutes a novel and instructive extension of the previous theory. Accounting for the noise on the side information bits, as our scheme does, is more realistic in practical applications.
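This reduction can be checked numerically. Assuming illustrative values p = 0.1 and C(σ²) = 0.7 (chosen here only for the check), formula (18) and the Slepian-Wolf-based bound H(X1, X2)/C of formula (5) coincide exactly once C(p, σ²) is replaced by 1 − H(p):

```python
from math import log2

def h2(p):
    """Binary entropy function in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

p, C = 0.1, 0.7          # crossover of the virtual BSC; actual channel capacity
Cp = 1 - h2(p)           # noiseless side information: C(p, sigma^2) -> 1 - H(p)

bound_theorem = (2 - Cp) / C         # formula (18)
bound_sw      = (1 + h2(p)) / C      # H(X1, X2)/C, i.e. formula (5)
assert abs(bound_theorem - bound_sw) < 1e-12
print(bound_theorem)                 # identical: (18) contains (5) as a special case
```

Algebraically, (2 − (1 − H(p)))/C = (1 + H(p))/C = H(X1, X2)/C for a binary symmetric pair, which is the identity the code verifies.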

4.3. Simulation Results and Analysis

To evaluate the performance of the proposed DJSCC scheme, we simulate it for the two-source and three-source cases. The length of the original source sequence is fixed at k = 10^8. The LDPC code we use is defined by the degree distribution (19) from [55], where λ(x) denotes the variable-node degree distribution of the LDPC code and the check-node degree is approximately uniform.
λ(x) = 0.170031x + 0.160460x^2 + 0.112837x^5 + 0.047489x^6 + 0.011481x^9 + 0.091537x^10 + 0.152978x^25 + 0.036131x^26 + 0.217056x^99

4.3.1. Two Sources Case

It is assumed that the cross-over probability between the two sources is p = 0.1. The code rate is Rc = k/(k + |P|) = 1/3, and the total encoding rate is R = R1 + R2 = 1/Rc = 3. For the given p and R, the theoretical bound on Eb/N0 for lossless recovery is 0.9691 dB. We consider a symmetric-rate case with a = 0.5 and two arbitrary-rate cases with a = 0.3 and a = 0.7. Figure 8 shows that as a increases, the error-correcting capability decreases. Our scheme is superior to the turbo-code scheme proposed in [33].
We also simulate two separate designs with the same total encoding rate R = 3, where the source code and channel code rate pairs (Rs, Rc) are (2/3, 1/2) and (4/7, 7/12). In a separate design, the convergence value is limited by the source code once the channel code has removed all channel errors, whereas in the DJSCC scheme the surplus parity bits of the channel code also help to correct the source errors. A separate design can outperform DJSCC (see the curve of the separate (4/7, 7/12) case), but its complexity increases significantly.

4.3.2. Three Sources Case

In our paradigm, it is assumed that pij = p = 0.1 and Rc = 1/3; the theoretical bound here is 2.9243 dB. We first consider a symmetric setting with ai = bi = 1/3; then a symmetric-rate setting (Ri = 1 = R/3) with a1 = 0.28, a2 = 0.34, a3 = 0.38, b1 = 0.36, b2 = 0.33 and b3 = 0.31 is simulated. Finally, an asymmetric setting satisfying the rate limits in formulas (11-14) is considered, in which we calculate a1 = 0.28, a2 = 0.34, a3 = 0.38, b1 = 0.373, b2 = 0.315 and b3 = 0.312 at the Eb/N0 value of the theoretical limit. As shown in Figure 9, the asymmetric setting, with the fewest parity bits, converges slowest even though it has more information bits than the other settings. With the same number of parity bits, the symmetric-rate setting performs better than the symmetric setting because it carries more information bits. The BER performance of all three sources X1, X2 and X3 in the symmetric-rate setting is shown in Figure 10. As expected, from X1 to X3 the convergence values move closer to the theoretical bound, because more side information is available when decoding the later sources.

5. Conclusions and Future Work

In this paper, we surveyed the theories and implementation approaches of DJSCC for the asymmetric case, the independent channels case, the multiple access channel case and the broadcast channel case, and proposed an efficient DJSCC framework for the independent channels case. Our results extend the existing theoretical limits of DJSCC to a more practical scenario by considering noisy side information and multiple correlated sources. The simulation results verify the limit-approaching performance of the proposed scheme.
The problem of finding the necessary and sufficient conditions for the most general settings of the MAC case and the BC case remains open to this day, and the corresponding coding strategies leave substantial room for further development. One of the main issues for the practical deployment of DJSCC is the correlation model: in practical WSNs it is usually hard to determine the exact joint distribution, and the correlation may be time-varying, so adaptive DJSCC is a meaningful and challenging problem. Another future direction is introducing multiterminal source coding into DJSCC. Since the multiple access channels and the broadcast channels are both multi-user channels, the distributed compression of multiple correlated sources should clearly be considered; however, existing DJSCC designs are mainly constrained to processing two sources. The scheme presented here represents a first step towards this goal.

Acknowledgments

This work is partially supported by the National Science Foundation of China under grants 60702049 and 60502036, the National Science & Technology Pillar Program (2008BAH37B04), and the 111 Project (No. B08004). The authors would like to thank all of the reviewers for their detailed comments, which have certainly improved the quality of our paper.

Figure 1. The system model of DJSCC for the asymmetric case.
Figure 2. The system model of DJSCC for the independent channels case.
Figure 3. The DJSCC rate region for reliable transmission in the independent channels case: the intersection between the capacity region and the Slepian-Wolf rate region.
Figure 4. The system model of DJSCC for the MAC case.
Figure 5. The system model of DJSCC for the BC case.
Figure 6. Structures of the outputs of the encoders: only the gray squares are transmitted for each encoder.
Figure 7. The parallel channel model of the source X2 for three-source DJSCC: the dashed box shows an alternative situation in which X3 rather than X1 is taken as the input of the virtual combined channel BSC(Q).
Figure 8. BER of X1 in the two-source DJSCC scheme.
Figure 9. BER of X2 in the three-source DJSCC scheme.
Figure 10. BER of the sources X1, X2 and X3 in the symmetric-rate case.

Zhu, X.; Liu, Y.; Zhang, L. Distributed Joint Source-Channel Coding in Wireless Sensor Networks. Sensors 2009, 9, 4901-4917. https://0-doi-org.brum.beds.ac.uk/10.3390/s90604901