Introducing Low-Density Parity-Check Codes
Sarah J. Johnson
School of Electrical Engineering and Computer Science
The University of Newcastle
Australia
email: sarah.johnson@newcastle.edu.au
Topic 1: Low-Density Parity-Check Codes
1.1 Introduction
Low-density parity-check (LDPC) codes are forward error-correction codes,
first proposed in the 1962 PhD thesis of Gallager at MIT. At the time, their
incredible potential remained undiscovered due to the computational demands
of simulation in an era when vacuum tubes were only just being replaced by
the first transistors. They remained largely neglected for over 35 years. In the
meantime the field of forward error correction was dominated by highly structured
algebraic block and convolutional codes. Despite the enormous practical
success of these codes, their performance fell well short of the theoretically
achievable limits set down by Shannon in his seminal 1948 paper. By the late
1980s, despite decades of attempts, researchers were largely resigned to this
seemingly insurmountable theory–practice gap.
The relative quiescence of the coding field was utterly transformed by the
introduction of “turbo codes,” proposed by Berrou, Glavieux and Thitimajshima
in 1993, wherein all the key ingredients of successful error correction codes
were replaced: turbo codes involve very little algebra, employ iterative, distributed
algorithms, focus on average (rather than worst-case) performance, and
rely on soft (or probabilistic) information extracted from the channel. Overnight,
the gap to the Shannon limit was all but eliminated, using decoders with
manageable complexity.
As researchers struggled through the 1990s to understand just why turbo
codes worked as well as they did, two researchers, MacKay and Neal, introduced
a new class of block codes designed to possess many of the features of the
new turbo codes. It was soon recognized that these block codes were in fact a
rediscovery of the LDPC codes developed years earlier by Gallager. Indeed, the
algorithm used to decode turbo codes was subsequently shown to be a special
case of the decoding algorithm for LDPC codes presented by Gallager so many
years before.
New generalizations of Gallager’s LDPC codes by a number of researchers
including Luby, Mitzenmacher, Shokrollahi, Spielman, Richardson and
Urbanke produced new irregular LDPC codes which easily outperform the best
turbo codes, as well as offering certain practical advantages and an arguably
cleaner setup for theoretical results. Today, design techniques for LDPC codes
exist which enable the construction of codes which approach the Shannon
capacity to within hundredths of a decibel.
So rapid has progress been in this area that coding theory today is in many
ways unrecognizable from its state just a decade ago. In addition to the strong
theoretical interest in LDPC codes, such codes have already been adopted in
satellite-based digital video broadcasting and long-haul optical communication
standards, are highly likely to be adopted in the IEEE wireless local area
network standard, and are under consideration for the long-term evolution of
third-generation mobile telephony.
1.2 Error correction using parity-checks
Here we will only consider binary messages and so the transmitted messages
consist of strings of 0s and 1s. The essential idea of forward error control
coding is to augment these message bits with deliberately introduced redundancy
in the form of extra check bits to produce a codeword for the message. These
check bits are added in such a way that codewords are sufficiently distinct from
one another that the transmitted message can be correctly inferred at the
receiver, even when some bits in the codeword are corrupted during transmission
over the channel.
The simplest possible coding scheme is the single parity check code (SPC).
The SPC involves the addition of a single extra bit to the binary message, the
value of which depends on the bits in the message. In an even parity code, the
additional bit added to each message ensures an even number of 1s in every
codeword.
Example 1.1.
The 7-bit ASCII string for the letter S is 1010011, and a parity bit is to be added
as the eighth bit. The string for S already has an even number of ones (namely
four) and so the value of the parity bit is 0, and the codeword for S is 10100110.
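The parity computation in this example is simple enough to sketch in code. The following Python fragment is our own illustration (not part of the original text); it reproduces the codeword for S:

```python
# Even parity: append a bit that makes the total number of 1s even.
message = format(ord('S'), '07b')      # 7-bit ASCII for 'S' -> '1010011'
parity_bit = message.count('1') % 2    # 0, since the message already has four 1s
codeword = message + str(parity_bit)
print(codeword)                        # 10100110
```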
More formally, for the 7-bit ASCII plus even parity code we define a code-
word c to have the following structure:
c = [c_1 c_2 c_3 c_4 c_5 c_6 c_7 c_8],

where each c_i is either 0 or 1, and every codeword satisfies the constraint

c_1 ⊕ c_2 ⊕ c_3 ⊕ c_4 ⊕ c_5 ⊕ c_6 ⊕ c_7 ⊕ c_8 = 0.   (1.1)
Equation (1.1) is called a parity-check equation, in which the symbol ⊕ repre-
sents modulo-2 addition.
Example 1.2.
A 7-bit ASCII letter is encoded with the single parity check code from Example 1.1.
The resulting codeword was sent through a noisy channel and the string
y = [1 0 0 1 0 0 1 0] was received. To check if y is a valid codeword we test y
with (1.1).
y_1 ⊕ y_2 ⊕ y_3 ⊕ y_4 ⊕ y_5 ⊕ y_6 ⊕ y_7 ⊕ y_8 = 1 ⊕ 0 ⊕ 0 ⊕ 1 ⊕ 0 ⊕ 0 ⊕ 1 ⊕ 0 = 1.
Since the sum is 1, the parity-check equation is not satisfied and y is not a
valid codeword. We have detected that at least one error occurred during the
transmission.
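The same test can be sketched in Python (a small illustration of (1.1); the variable names are our own):

```python
# Received string from Example 1.2.
y = [1, 0, 0, 1, 0, 0, 1, 0]

# Modulo-2 sum of the bits, equivalent to XOR-ing them all together.
syndrome = sum(y) % 2
print(syndrome)   # 1, so y fails the parity check and an error is detected
```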
Introducing Low-Density Parity-Check Codes, Sarah Johnson, ACoRN Spring School, version 1.1
While the inversion of a single bit due to channel noise can easily be detected
with a single parity check code, this code is not sufficiently powerful to
indicate which bit, or indeed bits, were inverted. Moreover, since any even
number of bit inversions produces a string satisfying the constraint (1.1), patterns of
even numbers of errors go undetected by this simple code. Detecting more than
a single bit error calls for increased redundancy in the form of additional parity
bits: more sophisticated codes contain multiple parity-check equations, and
each codeword must satisfy every one of them.
Example 1.3.
A code C consists of all length six strings
c = [c_1 c_2 c_3 c_4 c_5 c_6],
which satisfy all three parity-check equations:
c_1 ⊕ c_2 ⊕ c_4 = 0
c_2 ⊕ c_3 ⊕ c_5 = 0
c_1 ⊕ c_2 ⊕ c_3 ⊕ c_6 = 0
(1.2)
Codeword constraints are often written in matrix form and so the constraints
of (1.2) become
[ 1 1 0 1 0 0 ] [ c_1 ]   [ 0 ]
[ 0 1 1 0 1 0 ] [ c_2 ]   [ 0 ]
[ 1 1 1 0 0 1 ] [ c_3 ] = [ 0 ] .   (1.3)
       H        [ c_4 ]
                [ c_5 ]
                [ c_6 ]
The matrix H is called a parity-check matrix. Each row of H corresponds
to a parity-check equation and each column of H corresponds to a bit in the
codeword. Thus for a binary code with m parity-check constraints and length n
codewords the parity-check matrix is an m × n binary matrix. In matrix form a
string y = [y_1 y_2 y_3 y_4 y_5 y_6] is a valid codeword for the code with parity-check
matrix H if and only if it satisfies the matrix equation

Hyᵀ = 0.   (1.4)
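The test in (1.4) amounts to multiplying y by each row of H and reducing modulo 2. A minimal Python sketch, using the H of Example 1.3 (the function name is our own):

```python
# Parity-check matrix H from (1.3): each row is one parity-check equation.
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 1, 1, 0, 0, 1],
]

def is_codeword(y):
    """Return True if H y^T = 0 over GF(2), i.e. y satisfies every check."""
    return all(sum(h * b for h, b in zip(row, y)) % 2 == 0 for row in H)

print(is_codeword([0, 0, 1, 0, 1, 1]))   # True: all three checks are satisfied
print(is_codeword([1, 0, 0, 1, 0, 0]))   # False: the third check fails
```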
1.2.1 Encoding
To distinguish between the message bits and parity bits in the codeword in
Example 1.3 we rewrite the code parity-check constraints so that each one solves
for a different codeword bit.