Universal Coding (Part II)

Last time, we presented universal codes and two simple codes. This week, let’s revisit the topic and present Elias codes, which are much better universal codes than Chaitin’s and the modified Chaitin codes.

We will use the idea of recursion introduced before, but push it further, getting codes with lengths O(\lg n + \lg\lg n + \lg\lg\lg n + \cdots) bits, which is pretty much as good as it gets.

Elias (in 1975) proposed to use recursion to code the length of the number, that is, we code the length of the length of the length … of the number, rather than the length directly. It’s an unusual “bottom-up” recursive scheme, where we start at the end. But since we have to know how large the end is, we will decide on a minimum code size, d, say d=2.

The Elias code for n is

\displaystyle C_E(n)=\begin{cases}0&\mathrm{if~}n=1\\H_E(n):0&\mathrm{otherwise}\end{cases}

where : denotes simple bit-string concatenation, C_b(n) is the plain binary representation of n, and where

\displaystyle H_E(n)=\begin{cases}0&\mathrm{if~}n=1\\10&\mathrm{if~}n=2\\11&\mathrm{if~}n=3\\H_E(\lfloor\lg n\rfloor):C_b(n)&\mathrm{otherwise}\end{cases}

This generates, for d=2, codes such as:

n   C_E(n)
1   0
2   10 0
3   11 0
4   10 100 0
5   10 101 0
6   10 110 0
7   10 111 0
8   11 1000 0
…
14  11 1110 0
15  11 1111 0
16  10 100 10000 0
17  10 100 10001 0
…
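To make the definitions concrete, here is a minimal Python sketch of the encoder. H_E and C_E mirror the equations above; bin(n) plays the role of C_b(n), and n.bit_length() - 1 computes \lfloor\lg n\rfloor. It reproduces the table (the spaces in the table are only for readability):

```python
def H_E(n):
    """Recursive part: code the length of the length of ... of n (d = 2)."""
    if n == 1:
        return "0"
    if n <= 3:                       # base cases: 2 -> "10", 3 -> "11"
        return bin(n)[2:]
    lg = n.bit_length() - 1          # floor(lg n)
    return H_E(lg) + bin(n)[2:]      # H_E(floor(lg n)) : C_b(n)

def C_E(n):
    """Complete codeword: the recursive part, then the terminating 0."""
    return "0" if n == 1 else H_E(n) + "0"

for n in (1, 2, 3, 4, 8, 16, 17):
    print(n, C_E(n))                 # e.g., 16 -> 10100100000
```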

The length of the code is \approx \lg n + \lg\lg n + \lg\lg\lg n + \cdots bits, which is much better than the Chaitin codes’ \approx 2\lg n bits.
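Decoding also shows where that length comes from. We first read the d=2-bit group; then, each time we see a 1, it begins a group of v+1 bits, where v is the value decoded so far, while a 0 instead ends the codeword. The successive groups are therefore about \lg\lg n, then \lg n, bits long, plus the final 0. A sketch, assuming the same bit-string representation (and the C_E above):

```python
def decode_one(bits, pos=0):
    """Decode one C_E codeword in `bits` starting at `pos`.
    Returns (n, position of the next codeword)."""
    if bits[pos] == "0":               # the lone 0 codes n = 1
        return 1, pos + 1
    v = int(bits[pos:pos + 2], 2)      # first group: d = 2 bits, "10" or "11"
    pos += 2
    while bits[pos] == "1":            # a 1 starts a group of v+1 bits,
        v = int(bits[pos:pos + v + 1], 2)  # which is C_b of the next value
        pos += v + 1
    return v, pos + 1                  # the 0 ends the codeword

# The code is self-delimiting: codewords decode one after the other,
# with no external length information.
stream = C_E(4) + C_E(17) + C_E(1)
pos = 0
while pos < len(stream):
    n, pos = decode_one(stream, pos)
    print(n)                           # prints 4, 17, 1
```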

*
* *

So why is universal coding interesting at all? If you know a little bit about compression, you know that an efficient code will be close to the entropy, an information measure introduced by Shannon in the 1940s. Roughly speaking, the entropy is a lower bound on the average code length one can reach with a uniquely decodable encoding. Here, we pretty much give that up and limit ourselves to codes that eventually yield O(\lg n) bits for n.

But it happens often that we have to code numbers that are a priori unbounded, and that we do not know much about the distribution, except that small numbers are more likely than large numbers (something that is trivially true, pretty much independently of what we call “small”). In this case, universal codes are ideal. They are in general relatively simple, and we can create all kinds of variations, including ones that explicitly optimize for a given (approximate) distribution on the small numbers while still allowing arbitrarily large numbers in the tail. One can also sacrifice finesse for machine-efficiency (a euphemism for “simple programming”), as in the case of the MIDI VLQ, or include error-checking and resynchronization capabilities, as in UTF-8.
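As an aside, the MIDI-style VLQ just mentioned is easy to sketch: the number is cut into 7-bit groups, most significant group first, and the high bit of each byte flags whether more bytes follow. A minimal sketch (the function names are mine, for illustration only):

```python
def vlq_encode(n):
    """MIDI-style VLQ: 7 bits per byte, most significant group first,
    high bit set on every byte except the last."""
    groups = [n & 0x7F]                # last byte: high bit clear
    n >>= 7
    while n:
        groups.append((n & 0x7F) | 0x80)  # continuation flag set
        n >>= 7
    return bytes(reversed(groups))

def vlq_decode(data):
    """Inverse: accumulate 7 bits per byte until the high bit is clear."""
    n = 0
    for b in data:
        n = (n << 7) | (b & 0x7F)
        if not (b & 0x80):
            break
    return n

assert vlq_encode(128) == b"\x81\x00"
assert vlq_decode(vlq_encode(123456)) == 123456
```

The byte alignment makes the code trivial to read and write, at the fixed cost of spending one bit in eight on the continuation flag.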
