I’ve visited the median problem a couple of times already. The median is the value in the middle of a sorted list of values. If the list is already sorted, that’s not much of a problem. If it isn’t, then we must use some efficient way to find it.
Quite a while ago, while discussing Monte Carlo Integration with my students, the topic of choosing sample locations came up, and we discussed low-discrepancy sequences (a.k.a. quasi-random sequences). In a low-discrepancy sequence, the values generated look kind of uniform-random, but avoid clumping. A closer examination reveals that they are suspiciously well-spaced. That’s exactly what we want in Monte Carlo integration.
But how do we generate such sequences? Well, there are many ways to do so, some more amusing than others, some more structured than others. One of the early examples, the Halton sequence (c. 1964), is particularly well behaved: it generates 0, 0.5, then 0.25 and 0.75, then 0.125, 0.375, 0.625, and 0.875, etc. It does so with a rather simple binary trick.
Looking for something else in old notebooks, I found a diagram with no other indication, but clearly a low-cost random generator.
So, why not test it?
The idea of reorganizing data before compression isn’t new. Almost twenty-five years ago, Burrows and Wheeler proposed a block-sorting transform to reorder data to help compression. The idea is to create repetitions that can be exploited by a second compression engine.
But the Burrows-Wheeler transform isn’t the only possible one. There are a few other techniques to generate (reversible) permutations of the input.
A rather long time ago, I wrote a blog entry on branchless equivalents of simple functions such as sex, abs, min, max. The Sign EXtension instruction propagates the sign bit into the upper bits, and is typically used in the promotion of, say, a 16-bit signed value into a 32-bit variable.
But this time, I needed something a bit different: I only wanted the sign-extended part. Could I do much better than last time? Turns out, the compiler has a mind of its own.
A few weeks back, I presented a heuristic for audio companding, making the vague assumption that the distribution of values—sound samples—is somewhat exponentially distributed. But is that the case?
Let’s then find out the distribution of the samples. As before, I will use the Toréador file and a new one, Jean Michel Jarre’s Electronica 1: Time Machine (the whole CD). The two are very different. One is classical music, the other electronic music. One is adjusted in loudness so that we can hear the very quiet notes as well as the very loud ones; the other is adjusted mostly for loudness, for maximum effect.
So I ran both through a sampler. For display as well as histogram smoothing purposes, I down-sampled both channels from 16 to 8 bits (therefore from 0 to 255). In the following plots, green is the left channel and (dashed) red the right. Toréador shows the following distribution:
or, in log-plot,
Turns out, the samples are Laplace distributed. Indeed, fitting a mean and a scale parameter agrees with the plot (the ideal Laplacian is drawn in solid blue):
Now, what about the other file? Let’s see the plots:
and in log-plot,
and with the best-fit Laplacian superimposed:
Now, to fit a Laplacian, the best parameters seem to be and . While the fit is pretty good on most of the values, it kind of sucks at the edges. That’s the effect of dynamic range compression, a technique used to limit a signal’s dynamic range, often in a non-uniform way: signal values near or beyond the target maximum get squished the most. This explains the “ears” seen in the log-plot, also visible in the (not log-)plot.
Making the hypothesis that the samples are Laplace-distributed will allow us to devise an efficient quantization scheme, for both the limits of the bins and the reconstruction values. In S-law, if we remember, the reconstruction value used is the value in the center of the interval. But if the distribution is not uniform over this interval, the most representative value isn’t its center: it’s the value that minimizes the expected squared error. Since the expression for the moments of a Laplace-distributed random variable isn’t unwieldy, we should arrive at a very good, and parametric, quantization scheme for the signal.
Computationally inexpensive sound compression is always difficult, at least if you want some quality. One could think, for example, that taking the 8 most significant bits of 16-bit samples will give us 2:1 (lossy) compression without too much loss. However, cutting off the 8 least significant bits leads to noticeable hissing. But we do not have to quantize linearly: we can apply some transformation, say, a vaguely exponential one, to reconstruct the sound.
That’s the idea behind μ-law encoding, or “logarithmic companding”. Instead of quantizing uniformly, we space large (original) values widely and small (original) values finely, the assumption being that the signal variation is small when the amplitude is small and large when the amplitude is great. ITU standard G.711 proposes the following table: