Halton Sequences (Generating Random Sequences VII)

September 7, 2017

Quite a while ago, while discussing Monte Carlo integration with my students, the topic of choosing sample locations came up, and we discussed low-discrepancy sequences (a.k.a. quasi-random sequences). In a low-discrepancy sequence, the values generated look kind of uniform-random, but avoid clumping. A closer examination reveals that they are suspiciously well spaced. That’s exactly what we want in Monte Carlo integration.

But how do we generate such sequences? Well, there are many ways to do so, some more amusing than others, some more structured than others. One of the early examples, the Halton sequence (c. 1964), is particularly well behaved: it generates 0, 0.5, then 0.25 and 0.75, then 0.125, 0.375, 0.625, and 0.875, etc. It does so with a rather simple binary trick.
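To make the binary trick concrete, here is a minimal sketch (my code, not necessarily the exact one from the full entry) of the base-2 radical inverse, which reverses the bits of the index around the binary point:

#include <cstdint>
#include <cstdio>

// base-2 radical inverse: the bits of n, reversed around the binary
// point, so that 1 -> 0.5, 2 -> 0.25, 3 -> 0.75, 4 -> 0.125, ...
double radical_inverse_2(std::uint32_t n)
{
    double x = 0;
    for (double p = 0.5; n; n >>= 1, p *= 0.5)
        if (n & 1)
            x += p;
    return x;
}

int main()
{
    for (std::uint32_t i = 0; i < 8; i++)
        std::printf("%.4f ", radical_inverse_2(i));
    // prints 0.0000 0.5000 0.2500 0.7500 0.1250 0.6250 0.3750 0.8750
    std::printf("\n");
    return 0;
}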

Read the rest of this entry »


Much Ado About Nothing

March 7, 2017

A rather long time ago, I wrote a blog entry on branchless equivalents of simple functions such as sex, abs, min, max. The Sign EXtension instruction propagates the sign bit into the upper bits, and is typically used in the promotion of, say, a 16-bit signed value into a 32-bit variable.

But this time, I needed something a bit different: I only wanted the sign-extended part. Could I do much better than last time? Turns out, the compiler has a mind of its own.
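For reference, here is a minimal sketch of the usual mask-based trick from that family. It assumes 32-bit ints and an arithmetic right shift on signed values, which the standard leaves implementation-defined but which mainstream compilers all provide:

#include <cstdint>

// all ones (-1) if x is negative, all zeros otherwise; relies on >>
// being an arithmetic shift on signed values, which is
// implementation-defined but what gcc, clang, and msvc all do
inline std::int32_t sign_mask(std::int32_t x)
{
    return x >> 31;
}

// the mask in action: branchless abs as (x ^ m) - m
inline std::int32_t branchless_abs(std::int32_t x)
{
    std::int32_t m = sign_mask(x);
    return (x ^ m) - m;
}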

Read the rest of this entry »


8-bit Audio Companding

February 7, 2017

Computationally inexpensive sound compression is always difficult, at least if you want some quality. One could think, for example, that keeping only the 8 most significant bits of a 16-bit sample gives us 2:1 (lossy) compression without too much loss. However, cutting the 8 least significant bits leads to noticeable hissing. But we do not have to compress linearly: we can apply some transformation, say, a vaguely exponential one, to reconstruct the sound.


That’s the idea behind μ-law encoding, or “logarithmic companding”. Instead of quantizing uniformly, we space large (original) values widely apart but small (original) values closely together, the assumption being that the signal variation is small when the amplitude is small, and large when the amplitude is large. ITU standard G.711 proposes the following table:
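Before the table, it may help to see the continuous curve it approximates. Here is a sketch (with the usual μ = 255; note that G.711 actually specifies a segmented, piecewise-linear table rather than this exact formula):

#include <cmath>
#include <cstdint>

// continuous mu-law companding of samples in [-1, 1], with mu = 255
// as in G.711 (the standard itself uses a piecewise-linear table
// that approximates this curve)
const double mu = 255.0;

double mu_compress(double x)
{
    double s = (x < 0) ? -1.0 : 1.0;
    return s * std::log(1.0 + mu * std::fabs(x)) / std::log(1.0 + mu);
}

double mu_expand(double y)
{
    double s = (y < 0) ? -1.0 : 1.0;
    return s * (std::pow(1.0 + mu, std::fabs(y)) - 1.0) / mu;
}

// 16-bit sample -> 8-bit code -> reconstructed 16-bit sample
std::int16_t round_trip(std::int16_t sample)
{
    double y = mu_compress(sample / 32768.0);
    std::int8_t code = static_cast<std::int8_t>(y * 127.0);
    return static_cast<std::int16_t>(mu_expand(code / 127.0) * 32767.0);
}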

Read the rest of this entry »


Stretching samples

January 31, 2017

So for an experiment I ended up needing conversions between 8-bit and 16-bit samples. To upscale an 8-bit sample to 16 bits, it is not enough to simply shift it left by 8 bits (or multiply it by 256, same difference), because the largest value you get isn’t 65535 but merely 65280. Fortunately, stretching correctly from 8 to 16 bits isn’t too difficult; in fact, it’s quite straightforward.
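Here is a sketch of the standard bit-replication trick (whether the full entry does exactly this is behind the cut, but the idea is the same): replicate the byte, which amounts to multiplying by 257 instead of 256, so 255 maps exactly to 65535.

#include <cstdint>

// 8 -> 16 bits: replicating the byte is the same as multiplying by
// 257, so 0 -> 0 and 255 -> 65535, exactly filling the range
// (a plain x << 8 tops out at 65280)
inline std::uint16_t stretch(std::uint8_t x)
{
    return static_cast<std::uint16_t>((x << 8) | x);
}

// 16 -> 8 bits: keeping the high byte undoes the stretch
inline std::uint8_t squeeze(std::uint16_t x)
{
    return static_cast<std::uint8_t>(x >> 8);
}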


Read the rest of this entry »


Strings in C++ Switch/Case statements

January 10, 2017

Something that used to bug me—used to, because I am so accustomed to working around it that I rarely notice the problem—is that in neither C nor C++ can you use strings (const char * or std::string) in a switch/case statement. Indeed, the switch/case statement works only on integral values (an enum, an integral type such as char or int, or an object type with an implicit cast to an integral type). But strings aren’t integral types!

In pure C, we’re pretty much done for. The C preprocessor is too weak to help us build compile-time expressions out of strings (or, more exactly, const char *), and there isn’t much else in the language to help us. However, things are a bit different in C++.
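One classic C++ work-around is to hash the string with a constexpr function, so that case labels become integral constant expressions. A sketch (hash and names are my choices; a collision between two labels would at least be caught at compile time as a duplicate case value, but a stray runtime string colliding with a label would match silently):

#include <cstdint>
#include <string>

// FNV-1a, constexpr so that it can appear in case labels
constexpr std::uint64_t fnv1a(const char *s,
                              std::uint64_t h = 14695981039346656037ull)
{
    return *s ? fnv1a(s + 1,
                      (h ^ static_cast<unsigned char>(*s)) * 1099511628211ull)
              : h;
}

int dispatch(const std::string &word)
{
    switch (fnv1a(word.c_str()))
    {
    case fnv1a("yes"): return 1;
    case fnv1a("no"): return 0;
    default: return -1;
    }
}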

Read the rest of this entry »


Logarithms (Part I?)

January 3, 2017

The traditional—but certainly not the best—way to compute the value of the logarithm of some number x is to use a Taylor series, for example

\displaystyle \ln x = (x-1)-\frac{1}{2}(x-1)^2+\frac{1}{3}(x-1)^3-\frac{1}{4}(x-1)^4+\cdots

but that expansion is only valid for 0<x\leqslant{2}, because it is the Taylor expansion of \ln x "around 1", and the convergence radius of this particular expression isn't very large. Furthermore, it needs a great many terms before converging.
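To see just how slowly, here is a small sketch that sums the series term by term until the terms fall below a tolerance; near x=2, it takes well over a hundred terms to reach ten digits or so:

#include <cmath>
#include <cstdio>

// ln x as its Taylor series around 1: (x-1) - (x-1)^2/2 + (x-1)^3/3 - ...
// Converges only for 0 < x <= 2, and very slowly near x = 2.
double ln_taylor(double x, int &terms, double tol = 1e-10)
{
    double u = x - 1, term = u, sum = 0;
    int n = 1;
    for (; std::fabs(term / n) > tol; n++, term *= -u)
        sum += term / n;
    terms = n - 1;
    return sum;
}

int main()
{
    int terms;
    double approx = ln_taylor(1.9, terms);
    std::printf("ln(1.9) ~ %.10f (true: %.10f) after %d terms\n",
                approx, std::log(1.9), terms);
    return 0;
}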

Read the rest of this entry »


Padé Approximants

September 20, 2016

Sometimes, you need to compute a function that’s complex, cumbersome, or which your favorite language doesn’t quite provide for. We may be interested in an exact implementation, or in a sufficiently good approximation. For the latter, I often turn to Abramowitz and Stegun’s Handbook of Mathematical Functions, a treasure trove of tables and approximations to hard functions. But how do they get all these approximations? The rational function approximations seemed the most mysterious of them all. Indeed, how can one come up with

e^{-x} \approx 1-0.9664x+0.3536x^2

for e^{-x}, for 0\leqslant x\leqslant\ln 2?
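As a quick sanity check, here is a small sketch (mine, not from the handbook) that measures how far that quadratic strays from std::exp over the stated interval; the worst error comes out to a couple of thousandths.

#include <algorithm>
#include <cmath>
#include <cstdio>

// the Abramowitz & Stegun quadratic for e^{-x} on [0, ln 2]
double exp_neg(double x)
{
    return 1 - 0.9664 * x + 0.3536 * x * x;
}

int main()
{
    double worst = 0;
    for (double x = 0; x <= std::log(2.0); x += 0.0001)
        worst = std::max(worst, std::fabs(exp_neg(x) - std::exp(-x)));
    std::printf("max abs error: %g\n", worst); // on the order of 2e-3
    return 0;
}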

Well, turns out, it’s hard work, but there’s a twist to it. Let’s see how it’s done!

Read the rest of this entry »