User:Ajraymond/PolarCodes
In information theory, a polar code is a linear block error-correcting code developed by Erdal Arıkan[1]. It is the first code to provably achieve the channel capacity for symmetric binary-input, discrete, memoryless channels (B-DMC).
History
Capacity-approaching codes such as LDPC codes and Turbo codes have garnered a large body of research due to their excellent error-correction performance, throughput, and latency characteristics.
Comparatively, polar codes are still in their infancy, having been introduced in 2008. They differ from capacity-approaching schemes by provably achieving the capacity of several communication channels in the limit of infinite code length. Although initially only defined for symmetric B-DMC channels, polar codes were later expanded to all discrete memoryless channels[2].
Description
Linear block codes work by encoding an information vector u into a codeword x by multiplying it with a generator matrix G, using x = uG. This resulting vector x is then transmitted over a communication channel. A decoder receives a noisy version of x, called y, from the output of the channel. This decoder then estimates the original information vector, yielding û.
An (N, K) polar code is a code whose codewords contain K bits of information represented in a vector x of length N. Each of the N bits of x encodes the result of a parity function of the generator matrix G linking one or more bits of the information vector u. Alternatively, polar codes can also be represented using a systematic approach[3], where K bits of x are used to store the information vector u while the remaining N − K bits contain parity information.
Polar codes are based on the observation that specific bits in the input data tend to be better protected from noise than others. Arıkan observed that as the code length grows larger, individual bits in the input word tend to become either very well or very poorly protected. Polar codes are constructed by identifying those well-protected bit indices in the information vector and using them to transmit information. Those indices are named the information set, while the remaining positions form the frozen set, which is usually set to a predetermined value known by both the encoder and the decoder.
Polar codes were initially constructed for the binary erasure channel through the use of the erasure probability of the decoded bits. Later research provided approximate construction methods for other communication channels[4].
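For the binary erasure channel, this construction can be carried out exactly: a bit-channel with erasure probability z splits into a degraded bit-channel with erasure probability 2z − z² and an upgraded one with z², and the K most reliable indices form the information set. A minimal sketch in Python (function names are illustrative, and the index ordering shown here ignores the bit-reversal permutation used in some formulations):

```python
def bec_bit_channel_erasure(n, eps):
    """Erasure probabilities of the 2**n synthesized bit-channels of a
    polar code over a BEC with erasure probability eps."""
    z = [eps]
    for _ in range(n):
        # Each bit-channel splits into a degraded ("minus") channel with
        # erasure probability 2z - z**2 and an upgraded ("plus") one with z**2.
        z = [w for zi in z for w in (2 * zi - zi * zi, zi * zi)]
    return z

def information_set(n, k, eps):
    """Indices of the k most reliable bit-channels (smallest erasure prob.)."""
    z = bec_bit_channel_erasure(n, eps)
    return sorted(sorted(range(2 ** n), key=lambda i: z[i])[:k])
```

For n = 3 and ε = 0.5, for instance, the most reliable index is 7 (erasure probability 0.5⁸) and index 0 is the least reliable, illustrating how quickly the bit-channels polarize.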
Although their capacity-achieving characteristic was initially proved under successive cancellation decoding, the belief propagation algorithm was also shown to provide good error-correction performance[citation needed].
Code Construction
Originally, polar codes were constructed using a generator matrix G = F^(⊗n), the n-th Kronecker power of the base matrix

F = | 1 0 |
    | 1 1 |

This construction method yields polar codes whose lengths N = 2^n are powers of two.
For example, the generator matrix of a length-4 polar code (n = 2) is:

G = F ⊗ F = | 1 0 0 0 |
            | 1 1 0 0 |
            | 1 0 1 0 |
            | 1 1 1 1 |
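The Kronecker-power construction is straightforward to sketch in code; the following pure-Python helper (illustrative, not optimized) builds F^(⊗n):

```python
def kron(A, B):
    """Kronecker product of two 0/1 matrices given as lists of lists."""
    return [[a * b for a in row_a for b in row_b]
            for row_a in A for row_b in B]

def polar_generator(n):
    """Generator matrix G = F^(kron n) of a length-2**n polar code."""
    F = [[1, 0], [1, 1]]
    G = F
    for _ in range(n - 1):
        G = kron(G, F)
    return G
```

Because F is lower triangular with a unit diagonal, every G produced this way is as well, which makes the mapping u → uG invertible.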
Polar codes can also be illustrated using a graph representation. In this case, the information vector u is presented on the left-hand side of the graph and the resulting codeword x is obtained on the right-hand side. The ⊕ symbols represent XOR operations.
Later research proved that polarization could be obtained by constructing polar codes using any lower triangular base matrix[citation needed].
Example
Consider a (4, 2) polar code with frozen indices {0, 1} set to (0, 0). The information bits (1, 1) are stored in the information vector u = (0, 0, 1, 1). Using G as defined above, the corresponding codeword can be calculated as x = uG = (0, 1, 0, 1). (TODO: check order)
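A worked encoding can be checked numerically. The sketch below assumes a length-4 code with generator matrix G = F ⊗ F and frozen bits u0 = u1 = 0 (illustrative values; the bit ordering here takes u as-is, without any bit-reversal permutation):

```python
def encode(u, G):
    """Multiply information vector u by generator matrix G over GF(2)."""
    cols = len(G[0])
    return [sum(u[i] * G[i][j] for i in range(len(u))) % 2 for j in range(cols)]

# G = F (kron) F for a length-4 code
G = [[1, 0, 0, 0],
     [1, 1, 0, 0],
     [1, 0, 1, 0],
     [1, 1, 1, 1]]

u = [0, 0, 1, 1]  # frozen bits u0 = u1 = 0, information bits u2 = u3 = 1
x = encode(u, G)  # XOR of rows 2 and 3 of G
```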
Decoding
Successive Cancellation Decoding
The successive cancellation decoding algorithm is based on the sum-product algorithm. It makes use of the equality and parity constraints introduced by the encoding graph to carry out a soft estimation of the information vector.
Successive cancellation decoding requires O(N log N) operations in order to decode a received vector. However, the data dependencies of the decoding graph severely limit the parallelism which can be exploited in the decoding process; even a fully-parallel implementation would still require 2N − 2 time steps. This property means that the throughput achievable by this decoding algorithm is not dependent on the signal-to-noise ratio of the underlying channel.
The following figure describes the graph used to decode a polar code. A received vector y is presented on the left-hand side of the graph, and the resulting estimated information vector û is obtained on the right-hand side. The ⊕ symbols represent the parity check function f, whereas = represents the equality constraint function g.
Let the likelihood ratio of a soft value a be λ_a = Pr(a = 0) / Pr(a = 1), and the log-likelihood ratio be L_a = ln λ_a.
Equations
The following equations are used in the successive cancellation decoding of polar codes, provided inputs are expressed as likelihood ratios:

λ_f = (1 + λ_a λ_b) / (λ_a + λ_b)
λ_g = λ_a^(1 − 2β_f) · λ_b

where β_f is the hard decision previously made on the output of the parity-check node f.
Equivalent equations can be derived in the logarithmic domain:

L_f = 2 tanh⁻¹(tanh(L_a / 2) · tanh(L_b / 2))
L_g = (1 − 2β_f) · L_a + L_b
The equation for L_f can in turn be simplified through the use of the min-sum approximation[5] used in LDPC codes:

L_f ≈ sign(L_a) · sign(L_b) · min(|L_a|, |L_b|)
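In the log-likelihood-ratio domain these update rules amount to a few lines of code; the sketch below implements f with the min-sum approximation and g exactly (function names are illustrative):

```python
import math

def f_minsum(la, lb):
    """Parity-check update, min-sum approximation:
    sign(la) * sign(lb) * min(|la|, |lb|)."""
    return math.copysign(1.0, la) * math.copysign(1.0, lb) * min(abs(la), abs(lb))

def g_llr(la, lb, bf):
    """Equality-node update given the hard decision bf of f:
    (1 - 2*bf) * la + lb."""
    return (1 - 2 * bf) * la + lb
```

The exact f rule, 2·tanh⁻¹(tanh(L_a/2)·tanh(L_b/2)), differs from the min-sum value only in magnitude, never in sign, which is why the approximation costs little error-correction performance.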
Derivation
Assume a binary phase-shift keying (BPSK) modulation scheme where binary value 0 is encoded as +1 and binary value 1, by −1. A threshold detector is used in this scheme to determine the binary value associated with a soft information value a according to the following rule:

β(a) = 0 if a ≥ 0; β(a) = 1 otherwise

To simplify notation, let β_a refer to the binary value associated with soft information value a, i.e. β_a = β(a).
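Under this BPSK mapping the threshold detector is a simple sign test. The channel LLR formula in the sketch below additionally assumes an AWGN channel with noise variance σ², a standard result not stated in this section:

```python
def hard_decision(a):
    """Threshold detector: beta(a) = 0 if a >= 0, 1 otherwise."""
    return 0 if a >= 0 else 1

def channel_llr(y, sigma2):
    """LLR ln(Pr(y|0) / Pr(y|1)) of a received value y for BPSK
    (0 -> +1, 1 -> -1) over an AWGN channel with noise variance sigma2."""
    return 2.0 * y / sigma2
```

Note that positive soft values and positive LLRs both map to the binary value 0, so the same threshold rule applies in either domain.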
The following derivations are based on the following graph, where ⊕ is a parity operator and =, an equality operator. Soft information values a and b are presented on the left-hand side whereas the results f and g of both operators are presented on the right-hand side of the graph.
a ----⊕---- f
      |
b ----=---- g
Parity-Check Constraint
Let the likelihood ratio of function f be:

λ_f = Pr(f = 0) / Pr(f = 1)
According to the parity constraint (⊕), the hard decisions on both inputs and the output must satisfy β_f = β_a ⊕ β_b.
There are two possibilities for a and b to satisfy this constraint for f = 0: (β_a, β_b) = (0, 0) or (β_a, β_b) = (1, 1). Therefore, the probability that f = 0 is

Pr(f = 0) = Pr(a = 0) Pr(b = 0) + Pr(a = 1) Pr(b = 1)
Similarly for f = 1:

Pr(f = 1) = Pr(a = 0) Pr(b = 1) + Pr(a = 1) Pr(b = 0)
Returning to the definition of λ_f:

λ_f = [Pr(a = 0) Pr(b = 0) + Pr(a = 1) Pr(b = 1)] / [Pr(a = 0) Pr(b = 1) + Pr(a = 1) Pr(b = 0)]

Dividing both the numerator and the denominator by Pr(a = 1) Pr(b = 1) yields

λ_f = (λ_a λ_b + 1) / (λ_a + λ_b)
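This closed form can be sanity-checked against the defining probabilities with arbitrary values (the numbers below are illustrative):

```python
def lr_f_from_probs(pa0, pb0):
    """lambda_f computed directly from Pr(f=0)/Pr(f=1)."""
    pa1, pb1 = 1 - pa0, 1 - pb0
    return (pa0 * pb0 + pa1 * pb1) / (pa0 * pb1 + pa1 * pb0)

def lr_f_closed_form(la, lb):
    """lambda_f = (la*lb + 1) / (la + lb)."""
    return (la * lb + 1) / (la + lb)

pa0, pb0 = 0.8, 0.6                      # illustrative input probabilities
la, lb = pa0 / (1 - pa0), pb0 / (1 - pb0)  # corresponding likelihood ratios
assert abs(lr_f_from_probs(pa0, pb0) - lr_f_closed_form(la, lb)) < 1e-9
```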
Equality Constraint
Similarly, the equality equation is derived as follows. Let the likelihood ratio of function g be:

λ_g = Pr(g = 0) / Pr(g = 1)

According to the equality constraint (=), the hard decisions on both sides of the node must match, while the parity constraint must hold with the previously-estimated hard decision β_f. For β_f = 0, the conditions (β_a, β_b) = (0, 0) and (β_a, β_b) = (1, 1) satisfy these constraints for g = 0 and g = 1 respectively, so that:

λ_g = [Pr(a = 0) Pr(b = 0)] / [Pr(a = 1) Pr(b = 1)] = λ_a λ_b
Similarly, for β_f = 1:

λ_g = [Pr(a = 1) Pr(b = 0)] / [Pr(a = 0) Pr(b = 1)] = λ_b / λ_a

Both equations can be combined, yielding:

λ_g = λ_a^(1 − 2β_f) · λ_b
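The combined expression for λ_g can be verified the same way (illustrative values):

```python
def lr_g_from_probs(pa0, pb0, bf):
    """lambda_g = Pr(g=0)/Pr(g=1) given the hard decision bf on f."""
    pa1, pb1 = 1 - pa0, 1 - pb0
    if bf == 0:
        return (pa0 * pb0) / (pa1 * pb1)
    return (pa1 * pb0) / (pa0 * pb1)

def lr_g_closed_form(la, lb, bf):
    """lambda_g = la**(1 - 2*bf) * lb."""
    return la ** (1 - 2 * bf) * lb

pa0, pb0 = 0.8, 0.6                      # illustrative input probabilities
la, lb = pa0 / (1 - pa0), pb0 / (1 - pb0)  # corresponding likelihood ratios
for bf in (0, 1):
    assert abs(lr_g_from_probs(pa0, pb0, bf) - lr_g_closed_form(la, lb, bf)) < 1e-9
```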
Relation to Reed-Muller codes
Polar codes can be viewed as a specific case of the Reed-Muller codes with parameters ..., where the frozen bits are not chosen according to their weight, but rather ... [citation needed].
See also
References
- ^ E. Arikan, "Channel Polarization: A Method for Constructing Capacity-Achieving Codes for Symmetric Binary-Input Memoryless Channels," IEEE Transactions on Information Theory, vol.55, no.7, pp.3051-3073, July 2009.
- ^ E. Sasoglu, E. Telatar, and E. Arikan, “Polarization for Arbitrary Discrete Memoryless Channels,” in Proceedings of IEEE Information Theory Workshop (ITW), pp. 144–148, 2009.
- ^ E. Arikan, "Systematic Polar Coding," IEEE Communications Letters, vol.15, no.8, pp.860-862, August 2011.
- ^ I. Tal, A. Vardy, "How to Construct Polar Codes," arXiv:1105.6164v2.
- ^ M.P.C. Fossorier, M. Mihaljevic, and H. Imai, “Reduced Complexity Iterative Decoding of Low-Density Parity Check Codes Based on Belief Propagation,” IEEE Trans. on Comm., vol. 47, no. 5, May 1999.
TODO
- Modify Channel coding to add ref to polar codes