Information Theory

Description: This quiz covers the fundamental concepts and principles of Information Theory, including entropy, mutual information, channel capacity, and coding theorems.
Number of Questions: 15
Tags: information theory, Shannon entropy, mutual information, channel capacity, coding theorems

What is the unit of information in Information Theory?

  1. Bit

  2. Byte

  3. Hertz

  4. Decibel


Correct Option: A
Explanation:

The unit of information in Information Theory is the bit, which represents the basic unit of information content.

The entropy of a random variable $X$ is defined as:

  1. $H(X) = -\sum_x p(x) \log p(x)$

  2. $H(X) = -\sum_x p(x) \log_2 p(x)$

  3. $H(X) = -\sum_x p(x)^2 \log p(x)$

  4. $H(X) = -\sum_x p(x)^2 \log_2 p(x)$


Correct Option: B
Explanation:

The entropy of a random variable $X$ is defined as $H(X) = -\sum_x p(x) \log_2 p(x)$, where $p(x)$ is the probability of each possible value of $X$. The minus sign makes the entropy non-negative, since $\log_2 p(x) \le 0$ whenever $p(x) \le 1$.
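
For illustration (not part of the quiz itself), a minimal Python sketch that computes the entropy of a made-up distribution directly from this definition:

    import math

    def entropy(probs):
        """Shannon entropy in bits: -sum p(x) * log2 p(x); zero-probability outcomes contribute nothing."""
        return -sum(p * math.log2(p) for p in probs if p > 0)

    # Hypothetical distribution over four outcomes.
    p = [0.5, 0.25, 0.125, 0.125]
    print(entropy(p))  # 1.75 bits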

The mutual information between two random variables $X$ and $Y$ is defined as:

  1. $I(X;Y) = H(X) + H(Y)$

  2. $I(X;Y) = H(X) - H(Y)$

  3. $I(X;Y) = H(X,Y) - H(X)$

  4. $I(X;Y) = H(X) + H(Y) - H(X,Y)$


Correct Option: D
Explanation:

The mutual information between two random variables $X$ and $Y$ is defined as $I(X;Y) = H(X) + H(Y) - H(X,Y)$, where $H(X,Y)$ is the joint entropy of $X$ and $Y$, $H(X)$ is the entropy of $X$, and $H(Y)$ is the entropy of $Y$. Equivalently, $I(X;Y) = H(X) - H(X|Y)$: the reduction in uncertainty about $X$ obtained by observing $Y$.
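
As an illustrative sketch (the joint distribution below is made up), the mutual information of two binary variables can be computed from their joint and marginal entropies:

    import math

    def H(probs):
        """Entropy in bits of a collection of probabilities."""
        return -sum(p * math.log2(p) for p in probs if p > 0)

    # Hypothetical joint distribution p(x, y) over two binary variables.
    joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

    px = [sum(v for (x, _), v in joint.items() if x == xv) for xv in (0, 1)]  # marginal p(x)
    py = [sum(v for (_, y), v in joint.items() if y == yv) for yv in (0, 1)]  # marginal p(y)

    print(H(px) + H(py) - H(joint.values()))  # about 0.278 bits of shared information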

The channel capacity of a communication channel is defined as:

  1. $C = \max_{p(x)} I(X;Y)$

  2. $C = \max_{p(x)} H(X)$

  3. $C = \max_{p(x)} H(Y)$

  4. $C = \max_{p(x)} H(X,Y)$


Correct Option: A
Explanation:

The channel capacity of a communication channel is defined as $C = \max_{p(x)} I(X;Y)$, where the maximum is taken over all input distributions $p(x)$ and $I(X;Y)$ is the mutual information between the channel input $X$ and output $Y$.
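
As a concrete illustration (the crossover probability below is an example value), for a binary symmetric channel the maximizing input distribution is uniform and the capacity reduces to $C = 1 - H_b(p)$, which a short sketch can evaluate:

    import math

    def binary_entropy(p):
        """Binary entropy H_b(p) = -p*log2(p) - (1 - p)*log2(1 - p)."""
        if p in (0.0, 1.0):
            return 0.0
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    def bsc_capacity(p):
        """Capacity of a binary symmetric channel with crossover probability p, in bits per channel use."""
        return 1.0 - binary_entropy(p)

    print(bsc_capacity(0.11))  # about 0.5 bits per channel use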

The Shannon-Hartley theorem states that the channel capacity of a band-limited additive white Gaussian noise (AWGN) channel is given by:

  1. $C = B \log_2 (1 + \frac{S}{N})$

  2. $C = B \log_2 (1 + \frac{N}{S})$

  3. $C = B \log_2 (1 + \frac{S}{N^2})$

  4. $C = B \log_2 (1 + \frac{N^2}{S})$


Correct Option: A
Explanation:

The Shannon-Hartley theorem states that the channel capacity of a band-limited AWGN channel is given by $C = B \log_2 (1 + \frac{S}{N})$, where $B$ is the bandwidth of the channel, $S$ is the average signal power, and $N$ is the average noise power.
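
For a quick numeric illustration (the bandwidth and SNR are made-up values typical of a voice-grade line), note that the formula takes the signal-to-noise ratio as a linear ratio, not in decibels:

    import math

    def awgn_capacity(bandwidth_hz, snr_linear):
        """Shannon-Hartley capacity C = B * log2(1 + S/N), in bits per second."""
        return bandwidth_hz * math.log2(1 + snr_linear)

    snr = 10 ** (30 / 10)              # 30 dB converted to a linear ratio of 1000
    print(awgn_capacity(3000, snr))    # roughly 29.9 kbit/s for a 3 kHz channel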

The source coding theorem states that the minimum number of bits required to represent a source with entropy $H$ is:

  1. $R = H$

  2. $R = H + 1$

  3. $R = H - 1$

  4. $R = 2H$


Correct Option: A
Explanation:

The source coding theorem states that the entropy $H$ is the fundamental limit of lossless compression: the rate $R$ (average number of bits per source symbol) can be made arbitrarily close to $H$ by coding long blocks of symbols, but no lossless code can achieve $R < H$.
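
As a small worked example (the source below is hypothetical), a dyadic source with probabilities $(1/2, 1/4, 1/4)$ has entropy 1.5 bits, and the prefix code $\{0, 10, 11\}$ achieves exactly that average length:

    import math

    probs = [0.5, 0.25, 0.25]       # hypothetical dyadic source
    code_lengths = [1, 2, 2]        # lengths of the codewords 0, 10, 11

    H = -sum(p * math.log2(p) for p in probs)
    R = sum(p * l for p, l in zip(probs, code_lengths))
    print(H, R)  # both 1.5 bits per symbol, so the code meets the entropy bound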

The channel coding theorem states that it is possible to achieve reliable communication over a noisy channel with a capacity of $C$ by using a code with a rate:

  1. $R < C$

  2. $R = C$

  3. $R > C$

  4. $R \ge C$


Correct Option: A
Explanation:

The channel coding theorem states that reliable communication (arbitrarily small probability of error) over a noisy channel of capacity $C$ is possible with any code rate $R < C$; conversely, reliable communication is impossible at rates above $C$.

The Huffman coding algorithm is a:

  1. Prefix-free code

  2. Variable-length code

  3. Fixed-length code

  4. Non-unique code


Correct Option: A
Explanation:

Huffman coding produces a prefix-free code, meaning that no codeword is a prefix of any other codeword, so the encoded bit stream can be decoded unambiguously without separators.
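
A minimal sketch of the algorithm (the input string is an arbitrary example; a production encoder would also emit the code table so a decoder can reverse it):

    import heapq
    from collections import Counter

    def huffman_code(symbol_freqs):
        """Build a prefix-free code from symbol frequencies by repeatedly merging the two lightest subtrees."""
        # Each heap entry: (weight, tie-breaker, partial codeword table).
        heap = [(freq, i, {sym: ""}) for i, (sym, freq) in enumerate(symbol_freqs.items())]
        heapq.heapify(heap)
        tie = len(heap)
        while len(heap) > 1:
            f1, _, c1 = heapq.heappop(heap)
            f2, _, c2 = heapq.heappop(heap)
            merged = {s: "0" + w for s, w in c1.items()}
            merged.update({s: "1" + w for s, w in c2.items()})
            heapq.heappush(heap, (f1 + f2, tie, merged))
            tie += 1
        return heap[0][2]

    codes = huffman_code(Counter("abracadabra"))
    print(codes)  # the frequent 'a' gets a short codeword, the rare 'c' and 'd' get longer ones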

The Lempel-Ziv-Welch (LZW) algorithm is a:

  1. Lossless data compression algorithm

  2. Lossy data compression algorithm

  3. Huffman coding algorithm

  4. Arithmetic coding algorithm


Correct Option: A
Explanation:

The LZW algorithm is a lossless data compression algorithm, meaning the original data can be reconstructed exactly from the compressed output.
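
A toy sketch of the encoding side (real LZW starts from a fixed alphabet, e.g. all 256 byte values, so the decoder can rebuild the same dictionary; here the initial dictionary is derived from the input for brevity):

    def lzw_compress(data):
        """Minimal LZW encoder sketch: returns a list of dictionary indices."""
        dictionary = {ch: i for i, ch in enumerate(sorted(set(data)))}
        result, current = [], ""
        for ch in data:
            candidate = current + ch
            if candidate in dictionary:
                current = candidate                      # keep extending the current match
            else:
                result.append(dictionary[current])
                dictionary[candidate] = len(dictionary)  # learn the new string
                current = ch
        if current:
            result.append(dictionary[current])
        return result

    print(lzw_compress("TOBEORNOTTOBEORTOBEORNOT"))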

The JPEG image compression standard uses:

  1. Discrete cosine transform (DCT)

  2. Discrete Fourier transform (DFT)

  3. Walsh-Hadamard transform (WHT)

  4. Haar wavelet transform


Correct Option: A
Explanation:

The JPEG image compression standard uses the discrete cosine transform (DCT) to convert the image into a frequency domain representation, which is then quantized and compressed.
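
A rough sketch of the idea (assumes SciPy is available; the 8x8 block and the threshold are arbitrary example values, and real JPEG uses a quantization matrix rather than simple thresholding):

    import numpy as np
    from scipy.fft import dctn, idctn

    # Hypothetical 8x8 pixel block, the unit JPEG operates on.
    block = np.arange(64, dtype=float).reshape(8, 8)

    # Forward 2-D DCT (orthonormal), concentrating energy in low-frequency coefficients.
    coeffs = dctn(block, norm="ortho")

    # Crude stand-in for quantization: drop small high-frequency coefficients.
    coeffs[np.abs(coeffs) < 1.0] = 0.0

    # Inverse DCT reconstructs a close approximation of the original block.
    reconstructed = idctn(coeffs, norm="ortho")
    print(np.max(np.abs(block - reconstructed)))  # small error relative to the 0-63 pixel range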

The MP3 audio compression standard uses:

  1. Perceptual audio coding (PAC)

  2. Linear predictive coding (LPC)

  3. Adaptive differential pulse-code modulation (ADPCM)

  4. Transform coding


Correct Option: A
Explanation:

The MP3 audio compression standard uses perceptual audio coding (PAC), which exploits the psychoacoustic properties of the human ear to remove inaudible information from the audio signal.

The H.264 video compression standard uses:

  1. Block-based motion compensation

  2. Discrete cosine transform (DCT)

  3. Quantization

  4. Entropy coding


Correct Option:
Explanation:

The H.264 video compression standard uses a combination of block-based motion compensation, discrete cosine transform (DCT), quantization, and entropy coding to achieve high compression ratios.

The information content of a message is measured in:

  1. Bits

  2. Bytes

  3. Hertz

  4. Decibels


Correct Option: A
Explanation:

The information content of a message is measured in bits, which represent the basic unit of information.

The rate of a source code is defined as:

  1. The number of bits required to represent a single source symbol

  2. The number of bits required to represent a block of source symbols

  3. The average number of bits required to represent a source symbol

  4. The maximum number of bits required to represent a source symbol


Correct Option: C
Explanation:

The rate of a source code is defined as the average number of bits required to represent a source symbol.

The efficiency of a source code is defined as:

  1. The ratio of the entropy of the source to the rate of the code

  2. The ratio of the rate of the code to the entropy of the source

  3. The ratio of the average length of a codeword to the entropy of the source

  4. The ratio of the entropy of the source to the average length of a codeword


Correct Option: A
Explanation:

The efficiency of a source code is defined as the ratio of the entropy of the source to the rate of the code, $\eta = H/R$; it is at most 1, with equality when the average codeword length meets the entropy bound.
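
A short illustration (the source probabilities and codeword lengths below are made up; the lengths satisfy the Kraft inequality, e.g. codewords 0, 10, 110, 111):

    import math

    probs = [0.4, 0.3, 0.2, 0.1]
    lengths = [1, 2, 3, 3]

    H = -sum(p * math.log2(p) for p in probs)        # source entropy, bits per symbol
    R = sum(p * l for p, l in zip(probs, lengths))   # code rate = average codeword length
    print(H, R, H / R)  # about 1.846, 1.9, and an efficiency of about 0.97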
