
IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 4, APRIL 2006

Reliable Channel Regions for Good Binary Codes Transmitted Over Parallel Channels

Ruoheng Liu, Student Member, IEEE, Predrag Spasojević, Member, IEEE, and Emina Soljanin, Senior Member, IEEE

Abstract—We study the average error probability performance of binary linear code ensembles when each codeword is divided into subcodewords with each being transmitted over one of $J$ parallel channels. This model is widely accepted for a number of important practical channels and signaling schemes including block-fading channels, incremental redundancy retransmission schemes, and multicarrier communication techniques for frequency-selective channels. Our focus is on ensembles of good codes whose performance in a single-channel model is characterized by a threshold behavior, e.g., turbo and low-density parity-check (LDPC) codes. For a given good code ensemble, we investigate reliable channel regions which ensure reliable communications over parallel channels under maximum-likelihood (ML) decoding. To construct reliable regions, we study a modified 1961 Gallager bound for parallel channels. By allowing codeword bits to be randomly assigned to each component channel, the average parallel-channel Gallager bound is simplified to be a function of code weight enumerators and channel assignment rates. Special cases of this bound, the average union-Bhattacharyya (UB), Shulman–Feder (SF), simplified-sphere (SS), and modified Shulman–Feder (MSF) parallel-channel bounds, allow for describing reliable channel regions using simple functions of channel and code spectrum parameters. Parameters describing the channel are the average parallel-channel Bhattacharyya noise parameter, the average channel mutual information, and parallel Gaussian channel signal-to-noise ratios (SNRs). Code parameters include the union-Bhattacharyya noise threshold and the weight spectrum distance to the random binary code ensemble. Reliable channel regions of repeat–accumulate (RA) codes for parallel binary erasure channels (BECs) and of turbo codes for parallel additive white Gaussian noise (AWGN) channels are numerically computed and compared with simulation results based on iterative decoding. In addition, an example transmission over a block-fading Gaussian channel is considered.

Index Terms—Block-fading channel, coding theorems, hybrid automatic repeat request (HARQ), maximum-likelihood (ML) decoding, parallel channels, repeat–accumulate (RA) codes, turbo codes.

I. INTRODUCTION

We study the average error probability performance of good binary code ensembles when each codeword is divided into subsets of symbols and each subset is transmitted over (assigned to) one of $J$ (independent) parallel channels. Following MacKay [1], we say that a binary code (sequence) is good if it achieves arbitrarily small word error probability when transmitted over a noisy channel at or below a nonzero threshold rate that may be lower than the channel capacity. Parallel-channel coding is highly pertinent to a number of important communication schemes including transmission over block-fading channels [2], [3],¹ incremental redundancy retransmission schemes [4], [5], communication over block interference channels [6], [7], coding for slow-fading packet-based transmissions [8]–[10], user cooperation diversity coding [11], [12], and multicarrier communications.

Manuscript received October 14, 2004; revised November 11, 2005. This work was supported by the National Science Foundation under Grants CCR-0205362 and SPN-0338805. The material in this paper was presented in part at the DIMACS Workshop on Network Information Theory, Piscataway, NJ, March 2003 and at the IEEE International Symposium on Information Theory, Chicago, IL, June/July 2004.
R. Liu and P. Spasojević are with WINLAB, Department of Electrical and Computer Engineering, Rutgers University, North Brunswick, NJ 08902 USA (e-mail: liurh@winlab.rutgers.edu; spasojev@winlab.rutgers.edu).
E. Soljanin is with the Mathematical Sciences Research Center, Bell Labs, Lucent, Murray Hill, NJ 07974 USA (e-mail: emina@lucent.com).
Communicated by Ø. Ytrehus, Associate Editor for Coding Techniques.
Digital Object Identifier 10.1109/TIT.2006.871615

¹Channel models with dependent (e.g., quasi-static) fading across channels can be considered within the independent channel coding framework by first assuming that the fading is known for each channel and, next, averaging over the fading distribution.
²Note that this definition includes all regions based on upper bounds on codeword error probability and their union.

In practice, codes are often designed based on the average or the worst case condition of an otherwise unknown or time-varying channel. Given a code structure (and the corresponding code ensemble), an important practical question is: under what channel conditions will the communication be reliable? For (single) channel models described by a single parameter, the answer requires evaluation of a noise threshold defined as the worst case channel parameter value at which the word error probability decays to zero as the codeword length increases. Recent work studies noise thresholds associated with good codes and the corresponding maximum-likelihood (ML), "typical pair," and iterative decoding algorithms [13]–[18]. In fact (e.g., for the applications listed above), a transmitted codeword can experience a number of different (parallel/independent) channels and, thus, instead of noise thresholds and the corresponding reliable communication channel intervals, we here consider noise boundaries and the corresponding reliable channel regions. More generally, for a given ensemble of good codes and a codeword symbol-to-channel assignment rule, a reliable channel region is the union of channel transition probabilities for which the decoding word error probability approaches zero as the codeword length approaches infinity.² In particular, if each component channel is described by a single parameter, a reliable parallel-channel region is a set of reliable parallel-channel parameter $J$-tuples. The shape of the parallel-channel region is a function of the channel model, the code structure, the codeword symbol-to-channel assignment rule, and the decoding algorithm.

In this paper, we study reliable channel regions for a class of good binary codes transmitted over a set of parallel binary-input symmetric-output (BISO) memoryless channels. Prescribed

Fig. 1. System model.

regions correspond to sufficient conditions for reliable communication when ML decoding is employed. The parallel-channel Gallager bound on ML decoding word error probability, averaged over all possible code symbol-to-channel assignments, is a function of only code weight enumerators and channel assignment rates. In order to illustrate reliable channel regions with a small number of channel and code descriptors, we focus on the following special cases of the average parallel-channel Gallager bound: the union-Bhattacharyya (UB), simplified-sphere (SS), Shulman–Feder (SF), and modified Shulman–Feder (MSF) bounds. The UB reliable channel region³ is characterized by a linear combination of the parallel-channel Bhattacharyya noise parameters and the UB noise threshold of the code ensemble. For the parallel-Gaussian channel, the SS reliable channel region is characterized by the channel signal-to-noise ratios (SNRs) and the weight spectrum of the code ensemble. In general, the SF reliable channel region is characterized by a linear combination of the parallel-channel mutual informations and the maximum weight spectrum distance to the random binary code ensemble, termed the SF distance. In addition, we study a modified Shulman–Feder (MSF) reliable channel region characterized by a two-set code weight partition, and both the UB noise threshold and the SF distance of the respective code restrictions. The restriction-code UB noise threshold is computed over a set of "small" and "large" code weights, whereas the restriction SF distance is computed over the complementary "medium" set of code weights. The largest MSF reliable channel region is the union of regions corresponding to all possible partitions. Reliable channel regions of repeat–accumulate (RA) codes for parallel binary erasure channels (BECs) and of turbo codes for parallel additive white Gaussian noise (AWGN) channels are numerically computed and compared with simulation results based on iterative decoding. An application of the results to the analysis of communications over a block-fading Gaussian channel is also described.

The remainder of this paper is organized as follows: we introduce and discuss the system model and the notation in Section II. Based on the random assignment argument, an average parallel-channel Gallager bound is derived in Section III. A number of special cases of the average parallel-channel Gallager bound and the corresponding reliable channel regions are characterized in Section IV. Examples and an application are given in Section V. We summarize our results in Section VI.

II. SYSTEM MODEL

In this section, we introduce and discuss the channel model and basic assumptions used throughout this paper. We begin with a description of the parallel-channel model. Next, we introduce a random channel assignment technique which plays an important role in evaluating the average error probability performance of a code ensemble transmitted over parallel channels. Finally, we define the linear binary code ensemble of interest, and review the spectral characterization of good ensembles.

³Here and hereafter, for simplicity, we say UB, SS, SF, and MSF reliable channel regions for the regions derived based on the average UB, SS, SF, and MSF parallel-channel bounds, respectively. Similar definitions apply to the single-channel noise thresholds.

A. Parallel Channel Model

The channel model consists of $J$ parallel discrete memoryless channels (DMCs). As shown in Fig. 1, input symbols are binary, $x \in \{0,1\}$, and output symbols belong to an alphabet $\mathcal{Y}$ which may be either finite or continuous. Under the assumption that the channel is known at the receiver, the parallel channels are independent, and the transition probability characterizing Channel $j$ is $p_j(y|x)$, for $j = 1, \ldots, J$. For the derivation of all error-rate bounds except for the union bound, it is necessary to assume that each component channel is output symmetric, i.e., $p_j(y|0) = p_j(-y|1)$. The following two channel characteristics are used in our analysis. The Bhattacharyya noise parameter of Channel $j$ is

  $\gamma_j = \sum_{y} \sqrt{p_j(y|0)\, p_j(y|1)}$.   (1)

The mutual information of Channel $j$ under the assumption of equiprobable binary inputs is

  $I_j = \sum_{y} p_j(y|0) \log_2 \frac{2\, p_j(y|0)}{p_j(y|0) + p_j(y|1)}$   (2)

where the sums are replaced by integrals for continuous output alphabets. Although the analysis is quite general, in this work we frequently focus on parallel-channel models in which each component channel is described by a single parameter as, for example, the BEC described by its erasure rate or the AWGN channel described by its SNR.

Example 1 (Parallel BECs): For parallel BECs with a set of erasure probabilities $\epsilon_j$, for $j = 1, \ldots, J$, the Bhattacharyya noise parameter and the mutual information of Channel $j$ are

  $\gamma_j = \epsilon_j$ and $I_j = 1 - \epsilon_j$.   (3)

Example 2 (Parallel AWGN Channels): We here consider parallel binary-input AWGN channels. The transition probability of Channel $j$ is

  $p_j(y|x) = \frac{1}{\sqrt{2\pi}} \exp\!\left(-\frac{\left(y - (1-2x)\sqrt{2\,\mathrm{SNR}_j}\right)^2}{2}\right)$ for $x \in \{0,1\}$   (4)
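The channel descriptors (1) and (2) are straightforward to evaluate numerically. The following sketch is an illustration added in this transcription, not code from the paper, and all function names are mine: it returns the Bhattacharyya noise parameter and the equiprobable-input mutual information of a BEC directly from (3), and of a binary-input AWGN channel by trapezoidal integration of (1) and (2) under the BPSK model in (4).

```python
import math

def bec_params(eps):
    """BEC with erasure probability eps (Example 1):
    Bhattacharyya parameter gamma = eps, mutual information I = 1 - eps."""
    return eps, 1.0 - eps

def awgn_params(snr, lo=-30.0, hi=30.0, n=20000):
    """Binary-input AWGN channel at received SNR = Es/N0 (Example 2),
    modeled as y = +/- sqrt(2*snr) + noise with unit-variance noise.
    Returns (gamma, I) by trapezoidal integration of (1) and (2)."""
    a = math.sqrt(2.0 * snr)
    h = (hi - lo) / n
    gamma, mi = 0.0, 0.0
    for i in range(n + 1):
        y = lo + i * h
        w = h * (0.5 if i in (0, n) else 1.0)  # trapezoidal weight
        p0 = math.exp(-(y - a) ** 2 / 2.0) / math.sqrt(2.0 * math.pi)
        p1 = math.exp(-(y + a) ** 2 / 2.0) / math.sqrt(2.0 * math.pi)
        gamma += w * math.sqrt(p0 * p1)
        if p0 > 0.0:
            mi += w * p0 * math.log2(2.0 * p0 / (p0 + p1))
    return gamma, mi
```

As a consistency check, the integrated Bhattacharyya parameter of the Gaussian channel should agree with the closed form $e^{-\mathrm{SNR}_j}$ quoted in (5).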

where $\mathrm{SNR}_j$ denotes the received SNR of Channel $j$. The Bhattacharyya noise parameter and the mutual information of Channel $j$ are

  $\gamma_j = e^{-\mathrm{SNR}_j}$, with $I_j$ obtained by evaluating (2) as an integral over the Gaussian outputs.   (5)

B. Random Channel Assignment

Consider an $(n, nR)$ binary linear code transmitted over $J$ parallel channels. Let the elements of the set $\{1, \ldots, n\}$ index bit positions in a codeword. As shown in Fig. 1, a mapping device partitions this set into $J$ subsets, for $j = 1, \ldots, J$. A bit at a position in the $j$th subset is transmitted on Channel $j$ with the transition probability $p_j(y|x)$.

We note (and argue later) that, for any given assignment, the error probability performance analysis would be exceedingly complicated. To overcome this problem, we introduce a probabilistic mapping called random assignment [5], [19], [20], described as follows. For a given codeword, we assume that each bit is randomly assigned to Channel $j$ independently and with probability $t_j$, for $j = 1, \ldots, J$, where $\sum_{j=1}^{J} t_j = 1$. We will also refer to $t_1, \ldots, t_J$ as assignment rates. Several applications which can be analyzed based on the parallel-channel model with random assignments are discussed below.

Example 3 (Block-Fading Channel): The block-fading channel model is especially suitable for wireless communication systems with low mobility [2], [3], [6]. The Doppler spread of the channel and the slot duration are assumed to be such that their product is small, which implies that the fading coefficient is essentially invariant during one slot. A typical assumption is that it is also different from one slot to another. Each source message is encoded, interleaved, and partitioned into slots. For a given fading channel realization, this problem can be described using a $J$-parallel channel model.

Example 4 (Multicarrier System): In the case of a frequency-selective channel, a multicarrier signaling scheme allows for dividing the available spectrum into several nonoverlapping and orthogonal channels, in such a way that each channel experiences an almost-flat fading. Hence, for a given channel realization, we can describe the multicarrier transmission system as a set of parallel channels in the frequency domain.

C. Code Ensemble and its Weight Spectrum

Since the performance of a given code is hard to analyze, the performance of an appropriate code ensemble is usually considered.

Definition 1: A binary linear code ensemble $\mathcal{C}$ is a sequence of binary linear code sets $\mathcal{C}_n$, where $\mathcal{C}_n$ is a set of $(n, nR)$ codes with common rate $R$ for all $n$.

We shall use the symbol $C$ to denote a binary code, $\mathcal{C}_n$ to denote a set of binary codes of length $n$, and $\mathcal{C}$ to denote a binary code ensemble. For example, turbo and RA code ensembles can be defined as follows.

Example 5 (Turbo Code Ensemble): For a given set of recursive convolutional encoders, an $(n, nR)$ turbo code [21] allows for a number of possible choices for each of the random interleavers. The set of different $(n, nR)$ turbo codes is denoted by $\mathcal{C}_n$. And, a turbo code ensemble $\mathcal{C}$ will refer to a set of turbo code sequences with a common rate $R$.

Example 6 (RA Code Ensemble): An RA code [22] is defined as follows. The information block of length $k$ is repeated $q$ times, reordered by a random interleaver of size $n = qk$, and, finally, encoded by a rate-one accumulator, i.e., a truncated rate-one recursive convolutional encoder with transfer function $1/(1+D)$. Here $\mathcal{C}_n$ is the set of possible RA codes of length $n$, each corresponding to a permutation of the interleaver. An RA code ensemble $\mathcal{C}$ is a sequence of RA code sets with a common rate $R = 1/q$.

For a given $(n, nR)$ linear code $C$, let $A_d(C)$ denote the number of codewords of Hamming weight $d$, termed the weight enumerator (WE), and let the number of codewords with prescribed Hamming weight over each (assignment) index set, for $j = 1, \ldots, J$, be termed the split weight enumerator (SWE). Then, the average weight enumerator (AWE) and average split weight enumerator (ASWE) for the code set $\mathcal{C}_n$ are given by averaging over the code set, e.g.,

  $\bar{A}_d = \frac{1}{|\mathcal{C}_n|} \sum_{C \in \mathcal{C}_n} A_d(C)$   (6)

and analogously for the ASWE. Let $\delta = d/n$, for $\delta \in \{0, 1/n, \ldots, 1\}$, be the normalized weight (relative distance); the normalized weight spectrum is defined as

  $r_n(\delta) = \frac{1}{n} \ln \bar{A}_{\delta n}$.   (7)

The asymptotic normalized weight spectrum can be written as

  $r(\delta) = \lim_{n \to \infty} r_n(\delta)$.   (8)

Example 7 (Weight Spectrum of Random Binary Codes): For the random binary code ensemble of rate $R$, closed-form AWE, normalized weight spectrum, and asymptotic normalized weight spectrum expressions exist as follows:

  $\bar{A}_d = \binom{n}{d} 2^{-n(1-R)}$, $r_n(\delta) = \frac{1}{n} \ln\!\left[\binom{n}{\delta n} 2^{-n(1-R)}\right]$, and $r(\delta) = H(\delta) - (1-R)\ln 2$

where $H(x) = -x \ln x - (1-x)\ln(1-x)$ is the (natural-logarithm) entropy function.
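The random assignment of Section II-B is easy to simulate. The sketch below is my own illustration (none of these names come from the paper): it draws an independent channel index for each of $n$ bit positions according to the assignment rates $t_j$.

```python
import random

def random_assignment(n, rates, seed=0):
    """Independently assign each of n codeword bit positions to one of
    len(rates) parallel channels; channel j is drawn with probability
    rates[j] (the assignment rates t_j, which must sum to 1)."""
    rng = random.Random(seed)
    # cumulative distribution over channel indices
    cum, s = [], 0.0
    for t in rates:
        s += t
        cum.append(s)
    cum[-1] = 1.0  # guard against floating-point round-off
    assignment = []
    for _ in range(n):
        u = rng.random()
        assignment.append(next(j for j, c in enumerate(cum) if u <= c))
    return assignment
```

For large $n$ the empirical fraction of positions mapped to Channel $j$ concentrates around $t_j$, which is the sense in which the expected (and asymptotic) number of bits assigned to Channel $j$ is $n t_j$.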

D. Good Binary Code Ensembles

A good code ensemble as defined by MacKay [1] achieves arbitrarily small average word error probability up to some maximum rate that may be lower than the channel capacity. In particular, we consider a family of good code ensembles whose weight spectra satisfy the following condition.

Condition 1: For a given binary code ensemble $\mathcal{C}$:
a. there exists a sequence of integers $d_n$ (with $d_n \to \infty$ and $d_n/n \to 0$ as $n \to \infty$) such that

  $\lim_{n \to \infty} \sum_{d=1}^{d_n} \bar{A}_d = 0$;   (9)

b. for all sequences of normalized weights $\delta_n$ such that $\delta_n \to 0$ and $n\delta_n \to \infty$,

  $\limsup_{n \to \infty} \frac{1}{n\delta_n} \ln \bar{A}_{n\delta_n}$   (10)

is finite.

This condition was introduced in [16] (see also [23]), where the authors showed that a binary linear code ensemble $\mathcal{C}$ is good if it satisfies Condition 1. Condition 1a states that the probability that a code in the ensemble $\mathcal{C}_n$ has minimum distance smaller than $d_n$ approaches zero as the code length $n \to \infty$, and Condition 1b implies that the "low" normalized weight spectrum satisfies $r_n(\delta) = O(\delta)$. The proof and discussion regarding the necessity and sufficiency of conditions on code and code ensemble goodness can be found in [23].

We observe that most binary random-like code ensembles satisfy Condition 1. In his thesis [24], Gallager has shown that the minimum distance of most codes in the low-density parity-check (LDPC) code ensemble increases linearly with the block length $n$. Divsalar et al. showed in [22] that an RA code ensemble with code rate $1/q$ satisfies a spectral bound of the form (11), so that the quantity in (10) is bounded. For an ensemble of turbo codes with parallel branches, Kahale and Urbanke [25] proved a lower bound, holding with high probability, on the minimum distance of a turbo code with a randomly chosen interleaver. For the same ensemble, Jin and McEliece have shown [16] that there exists a sequence $d_n$ such that (12) holds. In related research on the minimum distance of turbo codes with two parallel branches, Breiling in [26] has shown that the minimum distance of such turbo codes is upper-bounded by a logarithmic function of the block length; moreover, the authors in [27]–[29] showed that, for some specially constructed interleavers, the minimum distance of turbo codes grows as $\log n$. Hence, we conclude that the ensemble of two-branch turbo codes with good interleavers satisfies Condition 1. The rest of this paper will consider only such good binary code ensembles.

III. PARALLEL-CHANNEL GALLAGER BOUNDS

In 1961 [24], Gallager introduced a general upper bound on the ML decoding error probability for block codes transmitted over a BISO channel as a function of the weight spectrum of the code (ensemble). Recently, Shamai and Sason proposed a series of variations on the Gallager bound [30]. In this section, we consider the parallel-channel Gallager bound. We first summarize the general parallel-channel Gallager bound based on ASWEs for a particular assignment rule. Next, for simplicity of performance analysis, we derive a parallel-channel Gallager bound averaged over the random channel assignment.

Lemma 1: For BISO memoryless parallel channels and a codeword symbol-to-channel assignment rule, the Gallager bound can be written as in (13) at the bottom of the page, where the exponent is given in (14) and $g(y)$ is an arbitrary even nonnegative function of $y$, i.e., $g(y) = g(-y) \geq 0$ for all $y$.

Proof: See Appendix A.

Further analysis of (13) is difficult, and the ASWEs are not available in the general case. Hence, instead of considering a given assignment rule, we resort to the random assignment technique and find the average performance over all possible assignments, where each bit of a codeword is independently assigned to Channel $j$ with probability $t_j$ and $\sum_{j=1}^{J} t_j = 1$. The expected (and asymptotic, as $n \to \infty$) number of bits assigned to Channel $j$ is $n t_j$. The (average) parallel-channel Gallager bound is given in the following theorem.

Theorem 1: For BISO memoryless parallel channels with a set of assignment rates $t_1, \ldots, t_J$ and a code ensemble $\mathcal{C}$, the Gallager bound averaged over all possible channel assignments

is as in (15) at the bottom of the page.

Proof: See Appendix B.

Compared with the bound (13), the average Gallager bound (15) does not depend on ASWEs, but only on AWEs of the code set $\mathcal{C}_n$, which significantly reduces the computational complexity. Inequality (15) is a powerful bound, and many other simple upper bounds can be derived from it. In the next section, we will address several bounds which induce simple descriptions of the respective reliable channel regions.

IV. RELIABLE CHANNEL REGIONS

In this section, we study the asymptotic parallel-channel error probability performance of a code ensemble $\mathcal{C}$. The performance analysis is based on special cases of the average parallel-channel Gallager bound (15) and the corresponding reliable channel regions. We define the reliable channel region and the noise boundary as follows.

Definition 2: Consider a code ensemble $\mathcal{C}$ and a $J$-tuple of codeword-symbol to channel assignment rates $t_1, \ldots, t_J$. A parallel-channel transition probability $J$-tuple is said to be reliable if the corresponding average ML decoding word error probability (bound) approaches zero with the codeword length. A reliable channel region of $\mathcal{C}$ is a set of reliable $J$-tuples of parallel-channel transition probabilities. The boundary of the reliable channel region is called the noise boundary for $\mathcal{C}$. For the special single-channel case, the noise boundary reduces to the noise threshold of $\mathcal{C}$.

In particular, if each component channel is described by a single parameter, a reliable channel region is defined as a set of reliable channel parameter values. Note that the reliable channel region, as defined here, is an achievable region under ML decoding. This implies that one can enlarge the reliable channel region by tightening the parallel-channel bound. However, most tight bounds are complicated. In the following, we focus on a set of simple and instructive reliable channel regions based on the average UB, SS, SF, and MSF parallel-channel bounds.

A. Union-Bhattacharyya (UB) Reliable Channel Region

We first study the reliable channel region based on the UB parallel-channel bound. Although there are more powerful bounding techniques, we start with the simple UB bound to illustrate the main ideas and present a simple form of a reliable channel region. The UB parallel-channel bound is a special case of the bound (15), namely

  $\bar{P}_e \leq \sum_{d=1}^{n} \bar{A}_d\, \bar{\gamma}^d$   (16)

where

  $\bar{\gamma} = \sum_{j=1}^{J} t_j \gamma_j$   (17)

denotes the Bhattacharyya noise parameter averaged over parallel-channel assignments. This bound leads to the characterization of the UB reliable channel region as illustrated in the following theorem.

Theorem 2: Let symbols of a good binary code ensemble $\mathcal{C}$ be randomly assigned to binary-input arbitrary-output parallel DMCs with Bhattacharyya noise parameters $\gamma_1, \ldots, \gamma_J$, where the assignment rates are $t_1, \ldots, t_J$. If

  $\bar{\gamma} < \gamma^*$   (18)

where

  $\gamma^* = \exp\!\left(-\sup_{0 < \delta \leq 1} \frac{r(\delta)}{\delta}\right)$   (19)

is the UB noise threshold of $\mathcal{C}$, then the average ML decoding word error probability satisfies $\bar{P}_e \to 0$.

Proof: The UB noise threshold definition (19) implies that the AWE of weight $d$ in $\mathcal{C}_n$ can be bounded by a function of $\gamma^*$ as

  $\bar{A}_d \leq (\gamma^*)^{-d} e^{d \epsilon_n}$   (20)

where $\epsilon_n \to 0$ implies that the inequality holds for large enough $n$. Thus, the bound (16) can be rewritten as

  $\bar{P}_e \leq \sum_{d=1}^{d_n - 1} \bar{A}_d\, \bar{\gamma}^d + \sum_{d=d_n}^{n} \bar{A}_d\, \bar{\gamma}^d$   (21)

where $d_n$ is the sequence of integers defined in Condition 1. Since $\bar{\gamma} < 1$, the average error probability can be upper-bounded as

  $\bar{P}_e \leq \sum_{d=1}^{d_n - 1} \bar{A}_d + \sum_{d=d_n}^{n} \left(e^{\epsilon_n}\, \frac{\bar{\gamma}}{\gamma^*}\right)^{d}$   (22)
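For channels each described by a single parameter, the UB region (18) is easy to check numerically: the assignment-rate-weighted average Bhattacharyya parameter (17) is compared against the ensemble's UB noise threshold (19). The sketch below is my own illustration; the threshold value is taken as a given input rather than computed from the weight spectrum.

```python
def avg_bhattacharyya(rates, gammas):
    """Average parallel-channel Bhattacharyya noise parameter, eq. (17):
    the linear combination of per-channel parameters gamma_j weighted
    by the assignment rates t_j."""
    return sum(t * g for t, g in zip(rates, gammas))

def in_ub_region(rates, gammas, ub_threshold):
    """UB reliable-region test, eq. (18): reliable if the average
    Bhattacharyya parameter falls strictly below the ensemble's UB
    noise threshold (a weight-spectrum property, eq. (19))."""
    return avg_bhattacharyya(rates, gammas) < ub_threshold
```

For two parallel BECs, $\gamma_j = \epsilon_j$ by (3); e.g., erasure rates $(0.2, 0.4)$ with equal assignment rates give $\bar{\gamma} = 0.3$, which lies in the UB region of any ensemble whose UB threshold exceeds $0.3$.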

where the second sum is a geometric series with ratio $\bar{\gamma}/\gamma^* < 1$ for large enough $n$. By combining (9) and (22), we obtain $\bar{P}_e \to 0$.

Inequality (18) describes the UB reliable channel region of the code ensemble $\mathcal{C}$, where the average Bhattacharyya noise parameter $\bar{\gamma}$ is a function of both the channel transition probabilities and the assignment rates, and the UB noise threshold $\gamma^*$ characterizes the weight spectrum properties of the code ensemble $\mathcal{C}$.

B. Simplified-Sphere (SS) Reliable Channel Region for Parallel AWGN Channels

We consider binary-input parallel AWGN channels. The channel transition probabilities and related channel characteristics are described in Example 2. We describe how the parallel-AWGN-channel SS bound (see [13] for the single-channel version) follows from the average parallel-channel Gallager bound. By a suitable choice of the function $g$, and using (4), (5), and (14), we obtain (23); now (23) and (15) imply that a parallel-AWGN-channel SS bound is given by (24), where the second inequality in (24) holds by the standard estimate (25) (see [31, Appendix 3A]). The following theorem describes the SS reliable channel region of $\mathcal{C}$ for parallel binary-input AWGN channels.

Theorem 3: Let symbols of a good binary code ensemble $\mathcal{C}$ be randomly assigned to parallel binary-input AWGN channels, where the set of assignment rates is $t_1, \ldots, t_J$. If the SNR condition (26) holds, then the average ML decoding word error probability satisfies $\bar{P}_e \to 0$.

Proof: See Appendix C.

Note that by specializing (26) to a single channel, the noise boundary reduces to the simplified-sphere noise threshold (27) for the single-channel case of the code ensemble $\mathcal{C}$. Thus, the average ML word error probability decays to zero with the codeword length whenever the received SNR exceeds this threshold. The reliable channel region (26) contains the UB reliable channel region (18), and the two regions coincide for a particular choice of the free parameter in the bound.

C. Shulman–Feder (SF) Reliable Channel Region

The SF parallel-channel bound (see [32] for the single-channel counterpart) is a special case of the average parallel-channel Gallager bound (15), obtained for the particular choice of tilting measure in (28).

Lemma 2: For BISO memoryless parallel channels, a set of assignment rates $t_1, \ldots, t_J$, and a code ensemble $\mathcal{C}$ of rate $R$, the average SF parallel-channel bound can be written as in (29), where the exponent in (30) is a Gallager function with uniform input, and (31), the maximum over code weights of the normalized log-ratio of the ensemble AWE to the AWE of the random binary code ensemble, is the SF distance for the code ensemble $\mathcal{C}$.

Proof: See Appendix D.

It can be easily verified that the identity (32) holds, where

  $\bar{I} = \sum_{j=1}^{J} t_j I_j$   (33)

denotes the average binary channel mutual information for a set of parallel channels. Following a parallel approach to the one in [33, pp. 141–143], we obtain the following theorem.

Theorem 4: For BISO memoryless parallel channels with a set of assignment rates $t_1, \ldots, t_J$ and a code ensemble $\mathcal{C}$ of rate $R$, if the SF-distance condition (34) holds, where the quantity in (35) is the gap between the average mutual information and the code rate, then the average ML decoding word error probability satisfies $\bar{P}_e \to 0$.
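The SF-type conditions are driven by the average mutual information (33) and by its gap to the code rate, the quantity appearing in (35). The sketch below (my own illustration, with my own names) computes both; the full condition (34) additionally involves the SF distance (31), a weight-spectrum property that would be supplied separately.

```python
def avg_mutual_info(rates, infos):
    """Average parallel-channel mutual information, eq. (33): the
    linear combination of per-channel mutual informations I_j
    weighted by the assignment rates t_j."""
    return sum(t * i for t, i in zip(rates, infos))

def capacity_gap(rates, infos, R):
    """Gap between the average binary-input parallel-channel mutual
    information and the code rate R; Theorem 4 requires the SF
    distance of the ensemble to be small relative to (a function of)
    this gap."""
    return avg_mutual_info(rates, infos) - R
```

For two parallel BECs with erasure rates $(0.2, 0.4)$ and equal assignment rates, (3) gives $I_j = 1 - \epsilon_j$, so $\bar{I} = 0.7$ and the gap to a rate-$1/2$ ensemble is $0.2$.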

Proof: Following a parallel approach to the one in [33, pp. 141–143].

We note that the asymptotic SF distance, defined through (38)–(40), measures the weight spectrum distance between the random binary code ensemble and the code ensemble $\mathcal{C}$. Hence, (34) shows that we can achieve the average binary-input parallel-channel capacity by employing the random binary code ensemble and the random assignment rule. The parameter in (35) is the gap between the binary-input parallel-channel capacity and the (achievable) rate of $\mathcal{C}$.

D. Modified Shulman–Feder (MSF) Reliable Channel Region

The SF bound is not tight for some practical codes for which there is a large gap between the weight spectrum of the code and that of the random binary code ensemble for "small" or "large" code weights. See Figs. 2 and 3 which, respectively, compare the spectra of RA and turbo codes with the spectrum of the random binary code ensemble of the same rate (also see [34, Fig. 4] for the asymptotic weight spectrum of turbo code ensembles). A simple way to tighten this bound has been proposed in [35], where the authors considered the LDPC code ensemble transmitted over a single channel and suggested to first partition the code weights into two subsets, and then to employ the UB bound for the "large" and "small" code weight subsets, and the SF bound for the "medium" code weight subset. We denote the partition of the Hamming weight set by a pair of disjoint subsets, one handled by the UB bounding technique and the other by the SF bounding technique. Thus, by combining the UB and SF bounding techniques, we obtain the following lemma, which describes a tighter parallel-channel bound.

Fig. 2. Comparison of normalized weight spectrum r(δ): RA code ensemble versus random binary code ensemble (R = 1/5).

Fig. 3. Comparison of normalized weight spectrum r(δ): turbo code ensemble versus random binary code ensemble (R = 1/3).

Lemma 3: For BISO memoryless parallel channels with a set of assignment rates $t_1, \ldots, t_J$ and a code ensemble $\mathcal{C}$ of rate $R$, an MSF bound is given by (36), where the terms are defined in (37).

Proof: See Appendix E.

Starting from (38), we define in (39) and (40) the restriction-code UB noise threshold and the restriction-code SF distance, corresponding to the UB-bounded and SF-bounded weight subsets, respectively. Based on the MSF parallel-channel bound, we can derive the following reliable channel region.

Theorem 5: Let symbols of a good code ensemble $\mathcal{C}$ of rate $R$ be randomly assigned to BISO parallel memoryless channels using a set of assignment rates $t_1, \ldots, t_J$. If the pair of conditions in (41) holds, then the average ML decoding word error probability satisfies $\bar{P}_e \to 0$.

Proof: See Appendix E.

Theorem 5 suggests partitioning the set of normalized weights into a pair of mutually exclusive subsets

whose restriction-code parameters, defined over the respective subsets, characterize an MSF reliable channel region in (41). Thus, for a given fixed-weight partition, we can find the corresponding MSF reliable channel region. Moreover, the UB reliable channel region is a special case of the MSF region for the partition in which the SF subset is empty. The largest MSF (LMSF) reliable channel region is the union of regions corresponding to all possible partitions. More precisely, we can write the LMSF reliable channel region as in (42). Thus, for a given pair of channel and code parameters, the optimum weight partition assigns each normalized weight to the UB or SF subset according to the criterion illustrated in Fig. 4. To illustrate the LMSF reliable channel region and the optimum weight partition, we consider the following example.

Fig. 4. Comparison of the UB and SF exponents determining the optimum weight partition.

Fig. 5. Optimum weight partition versus r(δ).

Example 8: Consider an RA code ensemble transmitted over three parallel-AWGN channels using a given triple of assignment rates and receiver SNRs. Using (5), (17), and (33), we can calculate the channel parameters $\bar{\gamma}$ and $\bar{I}$. As shown in Fig. 4, there exist two solutions, $\delta_1 < \delta_2$, at which the two exponent curves cross, as in (43); thus, the optimum weight partition is given by (44). The asymptotic normalized weight spectrum of the RA code ensemble is given by (45) at the bottom of the page (see [36]). As shown in Fig. 5, since the resulting spectral condition holds, the given SNR set is in the LMSF reliable channel region, i.e., by employing the RA code ensemble, the transmission over three parallel-AWGN channels is reliable under ML decoding.

E. Numerical Calculation of Noise Boundaries for Parallel-AWGN Channels

We now focus on the computation of the UB, SS, MSF, and LMSF noise boundaries for parallel-AWGN channels with $J$ component channels. As described in Example 2, both the Bhattacharyya noise parameter and the channel mutual information are functions of the received SNRs of the component AWGN channels. Thus, for a given $J$-tuple of assignment rates, a boundary point corresponds to a received SNR $J$-tuple $(\mathrm{SNR}_1, \ldots, \mathrm{SNR}_J)$.

First, we consider a class of boundary points at which only one channel, say Channel 1 (without loss of generality), has a nonzero received SNR, i.e.,

  $\mathrm{SNR}_1 > 0$ and $\mathrm{SNR}_j = 0$ for $j = 2, \ldots, J$.   (46)

Hence, $\gamma_j = 1$ and $I_j = 0$ for $j = 2, \ldots, J$. Our goal is to find the required received SNR for Channel 1 such that $\bar{P}_e \to 0$. Inequalities (18), (26), (41), and (42) imply (47)–(50) at the bottom of the next page, where the inverse function of the relevant channel characteristic appears. Clearly, the resulting SNR set is a noise boundary point of the UB, SS, MSF, and LMSF reliable channel regions whenever the corresponding $\mathrm{SNR}_1$ is well defined. Furthermore, for any larger $\mathrm{SNR}_1$, the SNR $J$-tuple is in the reliable channel region. Condition (46) is equivalent to assuming that the ensemble $\mathcal{C}$ is (randomly) punctured, and that the punctured code ensemble (wit
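The boundary computation of Section IV-E can be illustrated for the UB region: with $\gamma_j = e^{-\mathrm{SNR}_j}$ from (5), fix the SNRs of Channels $2, \ldots, J$ and solve (18) with equality for $\mathrm{SNR}_1$. The sketch below is my own closed-form illustration rather than the paper's inverse-function machinery in (47)–(50), and the names are mine.

```python
import math

def ub_boundary_snr(rates, fixed_snrs, ub_threshold):
    """Boundary point of the UB reliable region for parallel binary-input
    AWGN channels: given the SNRs of Channels 2..J, find the smallest
    SNR_1 for which the rate-weighted average Bhattacharyya parameter
    (with gamma_j = exp(-SNR_j) for BPSK) equals the UB noise threshold.
    Returns None if no finite SNR_1 suffices, 0.0 if any SNR_1 works."""
    rest = sum(t * math.exp(-s) for t, s in zip(rates[1:], fixed_snrs))
    target = ub_threshold - rest   # rates[0] * exp(-SNR_1) must equal this
    if target <= 0.0:
        return None                # other channels are already too noisy
    if target >= rates[0]:
        return 0.0                 # reliable even with SNR_1 = 0
    return -math.log(target / rates[0])
```

Any $\mathrm{SNR}_1$ above the returned value keeps the average Bhattacharyya parameter below the threshold, so the corresponding SNR tuple lies inside the UB reliable region.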
