Semigraphics Compression

November 7, 2017

ANSI art and coarse resolution may appeal to the nostalgic, those longing for the days when BBSes were still the thing and the IBM PC’s programmable character set was the nec plus ultra of semigraphics, but they’re not really useful. At best, they let us do without ncurses while still getting some colors and effects.

However, “semigraphics” may have their uses in lossy data compression, where we allow some data to be lost to gain some more compression. That may be especially true when we have very little computing power, or when we want many simple CPUs doing the decoding in parallel.



The 1 bit = 6 dB Rule of Thumb, Revisited.

March 28, 2017

Almost ten years ago I wrote an entry about the “1 bit = 6 dB” rule of thumb. This rule states that for each bit you add to a signal, you gain 6 dB of signal-to-noise ratio.

The first derivation I gave then focused on the noise, whose maximal amplitude is proportional to the amplitude represented by the last bit of the (encoded) signal. Let’s now derive it from the most significant bit of the signal to its least significant.
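For reference, the usual back-of-the-envelope version goes as follows: an n-bit quantizer distinguishes 2^n levels, so the ratio of the full-scale amplitude to the quantization step is 2^n, and, in decibels,

20\log_{10}2^n = n\,20\log_{10}2 \approx 6.02\,n\:\mathrm{dB},

which is where the 6 dB per bit comes from.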



New Block Order

March 21, 2017

The idea of reorganizing data before compression isn’t new. Almost twenty-five years ago, Burrows and Wheeler proposed a block-sorting transform to reorder data to help compression. The idea is to create repetitions that can be exploited by a second compression engine.

But the Burrows-Wheeler transform isn’t the only possible one. There are a few other techniques to generate (reversible) permutations of the input.
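As a reminder of what block sorting does, here is a deliberately naive sketch of the forward Burrows-Wheeler transform: sort all rotations of the input explicitly and keep the last column (real implementations use suffix arrays, and the example string is mine):

    // Naive forward BWT, O(n^2 log n): sort all rotations of the
    // input, output the last column and the row index needed to invert.
    #include <algorithm>
    #include <iostream>
    #include <numeric>
    #include <string>
    #include <utility>
    #include <vector>

    std::pair<std::string, std::size_t> bwt(const std::string& s)
    {
        const std::size_t n = s.size();
        std::vector<std::size_t> rot(n); // rotation start offsets
        std::iota(rot.begin(), rot.end(), 0);

        // sort rotations lexicographically, indices taken modulo n
        std::sort(rot.begin(), rot.end(),
                  [&](std::size_t a, std::size_t b)
                  {
                      for (std::size_t i = 0; i < n; i++)
                      {
                          char ca = s[(a + i) % n], cb = s[(b + i) % n];
                          if (ca != cb) return ca < cb;
                      }
                      return false;
                  });

        std::string last(n, 0);
        std::size_t primary = 0;
        for (std::size_t i = 0; i < n; i++)
        {
            last[i] = s[(rot[i] + n - 1) % n]; // last column character
            if (rot[i] == 0) primary = i;      // row holding the original
        }
        return {last, primary};
    }

    int main()
    {
        auto [t, p] = bwt("banana_bandana");
        std::cout << t << " (" << p << ")\n";
    }

Characters that are followed by similar contexts end up adjacent in the last column, which is exactly the repetition-creating effect a second compression engine feeds on.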



8-bit Audio Companding (part II)

February 28, 2017

A few weeks back, I presented a heuristic for audio companding, making the vague assumption that the distribution of values—sound samples—is somewhat exponential. But is that the case?


Let’s then find out the distribution of the samples. As before, I will use the Toréador file and a new one, Jean Michel Jarre’s Electronica 1: Time Machine (the whole CD). The two are very different. One is classical music, the other electronic music. One is adjusted in loudness so that we can hear the very quiet notes as well as the very loud ones; the other is adjusted mostly for loudness, for maximum effect.

So I ran both through a sampler. For display as well as histogram-smoothing purposes, I down-sampled both channels from 16 to 8 bits (therefore from 0 to 255). In the following plots, green is the left channel and (dashed) red the right.
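That step looks something like the following sketch (my reconstruction, not the actual code; it assumes raw interleaved 16-bit little-endian stereo PCM on standard input):

    // Per-channel histograms of 16-bit stereo samples, down-sampled
    // to 8 bits (0..255) by keeping only the most significant byte.
    #include <cstdint>
    #include <cstdio>

    int main()
    {
        static uint64_t hist[2][256] = {};
        int16_t frame[2]; // one left and one right sample

        while (std::fread(frame, sizeof frame, 1, stdin) == 1)
            for (int c = 0; c < 2; c++)
                hist[c][((int)frame[c] + 32768) >> 8]++; // 16 -> 8 bits

        for (int v = 0; v < 256; v++)
            std::printf("%3d %llu %llu\n", v,
                        (unsigned long long)hist[0][v],
                        (unsigned long long)hist[1][v]);
    }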

Toréador shows the following distribution:

[Figure: sample histogram for Toréador]

or, in log-plot,

[Figure: the same histogram, in log-plot]

Turns out, the samples are Laplace-distributed. Indeed, fitting a mean \mu=127 and a scale parameter \beta\approx 7.4 agrees with the plot (the ideal Laplacian is drawn in solid blue):

[Figure: log-plot of the Toréador histogram with the best-fit Laplacian in solid blue]
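For reference, the Laplace density with location \mu and scale \beta is

f(x)=\frac{1}{2\beta}\,e^{-\frac{|x-\mu|}{\beta}},

and since \ln f(x) is piecewise linear in x, a Laplacian appears in a log-plot as a tent: two straight lines meeting at \mu, which is exactly the shape above.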

Now, what about the other file? Let’s see the plots:

[Figure: sample histogram for Time Machine]

and in log-plot,

[Figure: the same histogram, in log-plot]

and with the best-fit Laplacian superimposed:

[Figure: log-plot of the Time Machine histogram with the best-fit Laplacian superimposed]

Now, to fit a Laplacian, the best parameters seem to be \mu=127 and \beta\approx 27. While the fit is pretty good on most of the values, it kind of sucks at the edges. That’s the effect of dynamic range compression, a technique used to limit a signal’s dynamic range, often in a non-uniform way (the signal values near or beyond the target maximum get squished the most). This explains the “ears” seen in the log-plot, also visible in the (not log-)plot.

*
* *

Making the hypothesis that the samples are Laplace-distributed will allow us to devise an efficient quantization scheme for both the limits of the bins and the reconstruction values. In S-law, if we remember, the reconstruction value used is the value in the center of the interval. But if the distribution is not uniform over that interval, the most representative value isn’t its center: it’s the value that minimizes the expected squared error. Since the expressions for the moments of a Laplace-distributed random variable aren’t unwieldy, we should arrive at a very good, and parametric, quantization scheme for the signal.
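To spell out the standard result at play here (the centroid condition from Lloyd-Max quantization): the reconstruction value for a bin [a,b) that minimizes the expected squared error is the centroid

\hat{x}=E[X\mid a\le X<b]=\frac{\int_a^b x\,f(x)\,dx}{\int_a^b f(x)\,dx},

and with f a Laplace density, both integrals have closed forms in a, b, \mu, and \beta.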


8-bit Audio Companding

February 7, 2017

Computationally inexpensive sound compression is always difficult, at least if you want some quality. One could think, for example, that keeping only the 8 most significant bits of 16-bit samples would give us 2:1 (lossy) compression without too much loss. However, cutting the 8 least significant bits leads to noticeable hissing. But we do not have to quantize linearly: we can apply some transformation, say a vaguely exponential one, to reconstruct the sound.


That’s the idea behind μ-law encoding, or “logarithmic companding”. Instead of quantizing uniformly, we space the levels widely for large (original) values and finely for small ones, the assumption being that the signal variation is small when the amplitude is small and large when the amplitude is great. ITU standard G.711 specifies the encoding as a table of piecewise-linear segments.
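The table approximates a smooth curve; here is a minimal sketch of the continuous μ-law transform and its inverse (my illustration, with \mu=255 as in G.711):

    // Continuous mu-law companding: maps a sample in [-1,1] to [-1,1],
    // spending more of the output range on small amplitudes.
    #include <cmath>
    #include <cstdio>

    const double MU = 255.0; // mu = 255, as in G.711

    double mu_compress(double x) // x in [-1,1]
    {
        return std::copysign(std::log1p(MU * std::fabs(x))
                             / std::log1p(MU), x);
    }

    double mu_expand(double y) // inverse transform
    {
        return std::copysign((std::pow(1.0 + MU, std::fabs(y)) - 1.0)
                             / MU, y);
    }

    int main()
    {
        // quantizing the companded value to 8 bits spends most codes
        // on quiet samples, where hissing would be most audible
        for (double x : {0.001, 0.01, 0.1, 0.5, 1.0})
            std::printf("%6.3f -> %6.3f -> %6.3f\n",
                        x, mu_compress(x), mu_expand(mu_compress(x)));
    }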



Fractional Bits (Part III)

October 18, 2016

A long time ago, we discussed the problem of using fractions of bits to encode values jointly without wasting too many bits. We considered only special cases then, but what if we need to shift precisely by, say, half a bit?

But what does shifting by half a bit even mean?
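As a reminder of the basic trick, with numbers of my choosing: a value drawn from an alphabet of size M costs \log_2 M bits, which need not be an integer. For M=5, that’s \log_2 5\approx 2.32 bits. Stored separately, each value still costs 3 whole bits, but two values a and b packed together as 5a+b\in\{0,\ldots,24\} fit in \lceil\log_2 25\rceil=5 bits, that is, 2.5 bits each: a saving of half a bit per value.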



Of Colorspaces and Image Compression (Part III)

November 11, 2014

Last week, we continued on color spaces, and we showed—although handwavingly—that we’re not very good at seeing differences in hue and saturation, and that we’re only good at seeing differences in brightness. In this last part, we’ll have a look at how JPEG exploits that for image compression.
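To give away a bit of the punchline: the first thing JPEG does is rotate RGB into a luma/chroma space, the JFIF flavor of YCbCr, so that brightness lands in its own plane and the two chroma planes can be stored with less precision. A sketch of that transform (my code, with the standard JFIF coefficients):

    // RGB -> YCbCr (JFIF / BT.601, full range): the rotation JPEG
    // applies before subsampling the chroma planes more than the luma.
    #include <cstdio>

    struct YCbCr { double y, cb, cr; };

    YCbCr rgb_to_ycbcr(double r, double g, double b) // components in [0,255]
    {
        return {
             0.299    * r + 0.587    * g + 0.114    * b,         // luma
            -0.168736 * r - 0.331264 * g + 0.5      * b + 128.0, // blue chroma
             0.5      * r - 0.418688 * g - 0.081312 * b + 128.0  // red chroma
        };
    }

    int main()
    {
        YCbCr p = rgb_to_ycbcr(200, 50, 50); // a reddish pixel
        std::printf("Y=%.1f Cb=%.1f Cr=%.1f\n", p.y, p.cb, p.cr);
    }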

