Question 1 |

Consider communication over a memoryless binary symmetric channel using a
(7, 4) Hamming code. Each transmitted bit is received correctly with probability (1-\epsilon ) , and flipped with probability \epsilon . For each codeword transmission, the receiver
performs minimum Hamming distance decoding, and correctly decodes the message
bits if and only if the channel introduces at most one bit error.

For \epsilon =0.1 , the probability that a transmitted codeword is decoded correctly is _________ (rounded off to two decimal places).

0.25 | |

0.65 | |

0.85 | |

0.94 |

Question 1 Explanation:

A (7, 4) Hamming code is used over a binary symmetric channel, so

P(0/1) = P(1/0) = \epsilon = 0.1

When n bits are transmitted, the probability of error in exactly r bits is ^nC_r\,p^r(1-p)^{n-r}

where p is the bit error probability.

P_c={}^7C_0(0.1)^0(0.9)^{7}+{}^7C_1(0.1)^1(0.9)^{6}=(0.9)^7+7 \times 0.1 \times (0.9)^6 \approx 0.85

Here P_c is the probability of at most one bit error, which is exactly the event of correct decoding.

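The binomial sum above can be checked numerically; a minimal Python sketch (variable names are illustrative):

```python
from math import comb

eps = 0.1  # BSC crossover probability
n = 7      # block length of the (7, 4) Hamming code

# Decoding succeeds iff the channel flips at most one of the 7 bits.
p_correct = sum(comb(n, r) * eps**r * (1 - eps)**(n - r) for r in (0, 1))

print(round(p_correct, 2))
```

The exact value is 0.8503, which rounds to 0.85.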

Question 2 |

The transition diagram of a discrete memoryless channel with three input symbols
and three output symbols is shown in the figure. The transition probabilities are as
marked.

The parameter \alpha lies in the interval [0.25, 1]. The value of \alpha for which the capacity of this channel is maximized, is ________ (rounded off to two decimal places).

-1 | |

-2 | |

2 | |

1 |

Question 2 Explanation:

Channel capacity,

C_s=\underset{P(X)}{\text{Max}}\left [ I(X;Y) \right ]

I(X;Y)=H(Y)-H\left ( \frac{Y}{X} \right )

Where,

H\left ( \frac{Y}{X} \right )=-\sum_{i=1}^{3}\sum_{j=1}^{3}P(x_i,y_j) \log_2P\left ( \frac{y_j}{x_i} \right )

P\left ( \frac{Y}{X} \right )=\begin{matrix} &y_1 & y_2 & y_3\\ x_1 &1-\alpha & \alpha & 0\\ x_2 &0 &1-\alpha &\alpha \\ x_3 & \alpha & 0 & 1-\alpha \end{matrix}

For simplicity of calculation, let [P(X)]=[1\;\;0\;\;0]

\begin{aligned} [P(X,Y)]&=[P(X)]_d\cdot \left [ P\left ( \frac{Y}{X} \right ) \right ]\\ [P(X,Y)]&=\begin{bmatrix} 1-\alpha & \alpha &0 \\ 0& 0&0 \\ 0 & 0 &0 \end{bmatrix}\\ H\left ( \frac{Y}{X} \right )&=-((1-\alpha ) \log_2 (1-\alpha )+\alpha \log _2 \alpha )\\ I(X;Y)&=H(Y)+(1-\alpha ) \log_2 (1-\alpha )+ \alpha\log _2 \alpha \\ C_s&=\text{Max}\left \{ I(X;Y) \right \}\\ &=\text{Max}\left \{ H(Y)+(1-\alpha ) \log_2 (1-\alpha )+ \alpha\log _2 \alpha \right \}\\ &=\log_2 3+(1-\alpha ) \log_2 (1-\alpha )+\alpha \log _2 \alpha \end{aligned}

The term (1-\alpha ) \log_2 (1-\alpha )+\alpha \log _2 \alpha \leq 0, and it equals zero only at \alpha =0 and \alpha =1, so C_s is maximum at \alpha =0 and \alpha =1.

Given \alpha \in [0.25,1]

Hence, \alpha =1 is the correct answer.
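The maximization over the given interval can be verified numerically with the closed form derived above; a sketch:

```python
import math

# C_s(a) = log2(3) + (1-a)log2(1-a) + a*log2(a), from the derivation above
def cap(a):
    def xlog2x(x):                 # x*log2(x) -> 0 as x -> 0
        return 0.0 if x == 0 else x * math.log2(x)
    return math.log2(3) + xlog2x(1 - a) + xlog2x(a)

alphas = [0.25 + k / 100 for k in range(76)]   # grid over [0.25, 1]
best = max(alphas, key=cap)
print(best, cap(best))
```

On [0.25, 1] the capacity rises back to its maximum log2(3) only at the right end point, confirming alpha = 1.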

Question 3 |

A digital transmission system uses a (7,4) systematic linear Hamming code for transmitting data over a noisy channel. If three of the message-codeword pairs in this code (\text{m}_{i} ; \text{c}_{i}), where \text{c}_{i} is the codeword corresponding to the i^{th} message \text{m}_{i}
, are known to be (1100; 0101100), (1110; 0011110) and (0110; 1000110), then which of the following is a \text{valid codeword} in this code?

1101001 | |

1011010 | |

0001011 | |

0110100 |

Question 3 Explanation:

Given codewords: \left.\begin{array}{l} C_{1}=0101100 \\ C_{2}=0011110 \\ C_{3}=1000110 \end{array}\right\}

In the codewords above, the first n - k = 3 bits are parity bits and the last k = 4 bits are message bits.

A (7, 4) systematic linear Hamming code is given. For a linear code, the modulo-2 sum of any two codewords is itself a codeword.

\begin{aligned} C_{1} \oplus C_{2}&=0110010=C_{4} \\ C_{2} \oplus C_{3}&=1011000=C_{5} \\ C_{1} \oplus C_{3}&=1101010=C_{6} \\ C_{3} \oplus C_{4}&=1110100=C_{7} \end{aligned}

The minimum distance of a Hamming code is 3, but option (a) is at distance 2 from C_{6} and option (d) is at distance 1 from C_{7}; hence options (a) and (d) cannot be valid codewords.

Given codewords in the form of \Rightarrow \underbrace{P_{1} \quad P_{2} \quad P_{3}}_{\text {Parity bits }} \;\underbrace{d_{1}\quad d_{2} \quad d_{3} \quad d_{4} }_{\text {Message bits }}

\begin{aligned} \text { From observation } \Rightarrow & P_{1}=d_{1} \oplus d_{2} \oplus d_{4} \\ & P_{2}=d_{2} \oplus d_{3} \oplus d_{4} \\ & P_{3}=d_{1} \oplus d_{2} \oplus d_{3} \end{aligned}

Among the given options, only option (C) satisfies the above relations.

Hence, option (C) is correct.

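The parity-check relations read off above can be verified programmatically against the known codewords and all four options; a sketch:

```python
def satisfies_checks(cw):
    # Systematic layout inferred above: p1 p2 p3 d1 d2 d3 d4
    p1, p2, p3, d1, d2, d3, d4 = (int(b) for b in cw)
    return (p1 == d1 ^ d2 ^ d4 and
            p2 == d2 ^ d3 ^ d4 and
            p3 == d1 ^ d2 ^ d3)

known = ["0101100", "0011110", "1000110"]        # C1, C2, C3
options = {"a": "1101001", "b": "1011010",
           "c": "0001011", "d": "0110100"}
valid = [label for label, cw in options.items() if satisfies_checks(cw)]
print(valid)
```

All three given codewords pass the checks, and only option (c) among the candidates does.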

Question 4 |

A speech signal, band limited to 4\:\text{kHz}, is sampled at 1.25 times the Nyquist rate. The speech samples, assumed to be statistically independent and uniformly distributed in the range -5\:V to +5\:V, are subsequently quantized in an 8-bit uniform quantizer and then transmitted over a voice-grade \text{AWGN} telephone channel. If the ratio of transmitted signal power to channel noise power is 26\:\text{dB}, the minimum channel bandwidth required to ensure reliable transmission of the signal with arbitrarily small probability of transmission error (rounded off to two decimal places) is _____ \text{kHz}.

9.25 | |

3.65 | |

8.65 | |

4.56 |

Question 4 Explanation:

\begin{aligned} f_{m}&=4 \mathrm{kHz} \\ f_{s}&=1.25 \mathrm{NR}=1.25 \times\left(2 f_{m}\right)=10 \mathrm{kHz} \end{aligned}

n=8 bits/sample and \frac{S}{N}=26 \mathrm{~dB}=10^{2.6}

For arbitrarily small probability of transmission error,

\begin{aligned} C & \geq R_{b} \Rightarrow B \log _{2}\left(1+\frac{S}{N}\right) \geq n f_{s} \\ B\log_{2}\left(1+10^{2.6}\right) & \geq 8 \times 10 \mathrm{~k} \Rightarrow B \geq 9.25 \mathrm{~kHz} \\ B_{\min } &=9.25 \mathrm{~kHz} \end{aligned}

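The bandwidth calculation above is a direct application of the Shannon capacity formula; a quick numerical check:

```python
import math

fm = 4e3                   # signal bandwidth, Hz
fs = 1.25 * 2 * fm         # 1.25 x Nyquist rate = 10 kHz
n_bits = 8                 # bits per sample
snr = 10 ** (26 / 10)      # 26 dB as a linear power ratio

rb = n_bits * fs                        # source bit rate, 80 kbit/s
b_min = rb / math.log2(1 + snr)         # from C = B log2(1 + S/N) >= Rb
print(b_min / 1e3)
```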

Question 5 |

A digital communication system transmits a block of N bits. The probability of error in
decoding a bit is \alpha. The error event of each bit is independent of the error events of
the other bits. The received block is declared erroneous if at least one of its bits
is decoded wrongly. The probability that the received block is erroneous is

N(1-\alpha ) | |

\alpha ^N | |

1-\alpha ^N | |

1-(1-\alpha )^N |

Question 5 Explanation:

The probability of error in decoding a single bit is \alpha,

so the probability of no error in a bit is 1-\alpha.

With N bits transmitted, the probability of no error in the received block is

=(1-\alpha )(1-\alpha )\ldots N \text{ times}

=(1-\alpha) ^{N}

Hence, the probability that the received block is erroneous is 1-(1-\alpha) ^{N}

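The complement argument can be sanity-checked with a Monte Carlo simulation; the block length and error probability below are illustrative choices, not from the question:

```python
import random

random.seed(2024)
N, alpha = 8, 0.1          # illustrative block length and bit-error probability
trials = 200_000

# A block is erroneous if at least one of its N bits is decoded wrongly.
bad = sum(any(random.random() < alpha for _ in range(N))
          for _ in range(trials))
estimate = bad / trials
exact = 1 - (1 - alpha) ** N
print(estimate, exact)
```

The simulated frequency should agree with 1-(1-\alpha)^N to within sampling noise.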

Question 6 |

A linear Hamming code is used to map 4-bit messages to 7-bit codewords. The encoder mapping is linear. If the message 0001 is mapped to the codeword 0000111, and the message 0011 is mapped to the codeword 1100110, then the message 0010 is mapped to

10011 | |

1100001 | |

1111000 | |

1111111 |

Question 6 Explanation:

Since the encoder is linear and 0010 = 0001 \oplus 0011, the codeword for 0010 is the modulo-2 sum of the corresponding codewords:

0000111 \oplus 1100110 = 1100001

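By linearity of the encoder, 0010 = 0001 XOR 0011, so its codeword is the XOR of the two given codewords; a quick check:

```python
def xor_bits(a, b):
    # bitwise modulo-2 sum of two equal-length bit strings
    return "".join(str(int(x) ^ int(y)) for x, y in zip(a, b))

assert xor_bits("0001", "0011") == "0010"      # messages
codeword = xor_bits("0000111", "1100110")      # corresponding codewords
print(codeword)
```

This matches option (b), 1100001.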
Question 7 |

Consider a binary channel code in which each codeword has a fixed length of 5 bits. The
Hamming distance between any pair of distinct codewords in this code is at least 2. The
maximum number of codewords such a code can contain is _________.

15 | |

16 | |

17 | |

18 |

Question 7 Explanation:

According to the Plotkin bound,

\begin{aligned} d_{\min } &\leq \frac{n 2^{k-1}}{2^{k}-1}\\ n&= \text{ length of each codeword }\\ k&= \text{ length of each message word} \\ \text{Given that }\; d_{\min }&=2 \text { and } n=5\\ \text{So, } \frac{2^{k}}{2\left(2^{k}-1\right)} &\geq \frac{2}{5} \\ \frac{2^{k}}{2^{k}-1} &\geq \frac{4}{5} \end{aligned}

This bound is satisfied for every value of k. But since n > k and n is fixed at 5, the maximum value of k that can be selected is 4.

So, the maximum number of codewords possible is 2^{4} = 16.

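The bound is achieved by the length-5 single parity-check code (all even-weight words), since the distance between two even-weight words is even and hence at least 2; a brute-force check:

```python
from itertools import product

# All 5-bit words of even weight: the (5, 4) single-parity-check code.
code = [w for w in product((0, 1), repeat=5) if sum(w) % 2 == 0]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

dmin = min(hamming(a, b) for i, a in enumerate(code) for b in code[i + 1:])
print(len(code), dmin)
```

This exhibits 16 codewords with pairwise distance at least 2, matching the answer.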

Question 8 |

Consider a binary memoryless channel characterized by the transition probability diagram shown in the figure.

The channel is

Lossless | |

Noiseless | |

Useless | |

Deterministic |

Question 8 Explanation:

Given that

\left[P\left(\frac{Y}{X}\right)\right]=\left[\begin{array}{ll} 0.25 & 0.75 \\ 0.25 & 0.75 \end{array}\right]

If the mutual information I(X;Y)=0 for every possible input distribution, then the channel is called a useless (or zero-capacity) channel.

\begin{aligned} \text{Let } [P(X)]&=[\alpha \quad (1-\alpha)] \\ \text{Then } H(X)&=-\alpha \log _{2} \alpha-(1-\alpha) \log _{2}(1-\alpha)\text{ bits/symbol}\\ [P(Y)]&=[P(X)]\left[P\left(\frac{Y}{X}\right)\right]=[0.25\; \; 0.75] \end{aligned}

\begin{array}{c} [P(X, Y)]=\left[\begin{array}{cc} \frac{\alpha}{4} & \frac{3 \alpha}{4} \\ \frac{(1-\alpha)}{4} & \frac{3(1-\alpha)}{4} \end{array}\right] \\ \left[P\left(\frac{X}{Y} \right ) \right ]=\frac{[P(X,Y)]}{[P(Y)]_{d}}=\left[\begin{array}{cc} \alpha & \alpha \\ (1-\alpha) & (1-\alpha ) \end{array} \right ]\\ H\left(\frac{X}{Y}\right)=-\sum_{i} \sum_{j} P\left(x_{i}, y_{j}\right) \log _{2} P\left(\frac{x_{i}}{y_{j}}\right) \text { bits/symbol } \\ =-\alpha \log _{2} \alpha -(1-\alpha) \log _{2}(1-\alpha) \text { bits/symbol } \\ I(X ; Y)=H(X)-H\left(\frac{X}{Y}\right)=0 \end{array}

So, the given binary memoryless channel is "useless" channel.

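Because both rows of the transition matrix are identical, p(y|x) = p(y) for every input, so I(X;Y) = 0 for any input distribution. A numerical sketch over a few input distributions:

```python
import math

def mutual_info(px, p_y_given_x):
    # I(X;Y) = sum_{x,y} p(x) p(y|x) log2( p(y|x) / p(y) )
    py = [sum(px[i] * p_y_given_x[i][j] for i in range(len(px)))
          for j in range(len(p_y_given_x[0]))]
    total = 0.0
    for i, pxi in enumerate(px):
        for j, pyx in enumerate(p_y_given_x[i]):
            if pxi > 0 and pyx > 0:
                total += pxi * pyx * math.log2(pyx / py[j])
    return total

channel = [[0.25, 0.75],
           [0.25, 0.75]]   # identical rows: the output ignores the input
vals = [mutual_info([a, 1 - a], channel) for a in (0.1, 0.5, 0.9)]
print(vals)
```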

Question 9 |

Which one of the following graphs shows the Shannon capacity (channel capacity) in bits of a memoryless binary symmetric channel with crossover probability p?

A | |

B | |

C | |

D |

Question 9 Explanation:

The channel capacity of a memoryless binary symmetric channel can be expressed as,

C=1+p \log _{2} p+(1-p) \log _{2}(1-p)

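The shape of the correct graph follows from evaluating C(p) at the end points and the midpoint: C = 1 bit at p = 0 and p = 1, and C = 0 at p = 0.5. A quick check:

```python
import math

def bsc_capacity(p):
    # C(p) = 1 + p*log2(p) + (1-p)*log2(1-p), with x*log2(x) -> 0 as x -> 0
    def xlog2x(x):
        return 0.0 if x == 0 else x * math.log2(x)
    return 1 + xlog2x(p) + xlog2x(1 - p)

caps = {p: bsc_capacity(p) for p in (0.0, 0.5, 1.0)}
print(caps)
```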

Question 10 |

Let (X_{1},X_{2}) be independent random variables. X_{1} has mean 0 and variance 1, while X_{2} has mean 1 and variance 4. The mutual information I(X_{1};X_{2}) between X_{1} and X_{2} in bits is

1 | |

2 | |

3 | |

0 |

Question 10 Explanation:

Mutual information of two random variables is a measure of the mutual dependence between them.

Given that X_{1} and X_{2} are independent, I(X_{1};X_{2})=0.

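For independent variables the joint pmf factorizes, which forces every log term in the mutual information to vanish regardless of the means and variances given. A small numeric illustration with hypothetical discrete pmfs:

```python
import math

px = [0.2, 0.8]            # illustrative pmf for X1 (hypothetical values)
py = [0.5, 0.3, 0.2]       # illustrative pmf for X2 (hypothetical values)

# Independence: p(x, y) = p(x) p(y), so log2(p(x,y)/(p(x)p(y))) = log2(1) = 0
info = sum(px[i] * py[j] * math.log2((px[i] * py[j]) / (px[i] * py[j]))
           for i in range(len(px)) for j in range(len(py)))
print(info)
```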
