## Rational Approximations (Part II)

October 10, 2017

Last week, we noticed a fun connection between lattices and fractions, which helped us get rational approximations to real numbers. Since only the lattice points close to the (real-sloped) line are of interest, only the fractions those points represent are candidates for rational approximation, and the closer a point is to the line, the better the approximation.

But what if we find a point real close to the line? What information can we use to refine our guess?

## Rational Approximations

October 3, 2017

Finding rational approximations to real numbers may help us simplify calculations in everyday life, because using

$\displaystyle \pi\approx\frac{355}{113}$

makes back-of-the-envelope estimations much easier. It may also have applications in programming, when your CPU is kind of weak and doesn't deal well with floating point numbers. Floating point arithmetic emulated in software is very slow, so if we can dispense with it and use integer arithmetic instead, all the better.
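As a minimal sketch of that last idea, here is what "π in integer arithmetic" might look like with 355/113 (the function name is mine, for illustration):

```python
# Sketch: pi as 355/113 in pure integer arithmetic, as one might do
# on a CPU without hardware floating point (hypothetical example).
def circle_circumference(diameter: int) -> int:
    # Multiply first, divide last, to preserve intermediate precision.
    return diameter * 355 // 113

print(circle_circumference(100))  # 314, i.e. about 100*pi
```

Only integer multiplications and divisions are involved, which even the weakest CPU handles natively.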

However, finding good rational approximations to an arbitrary constant is not quite as trivial as it may seem. Indeed, we may think that using something like

$\displaystyle a=\frac{\lfloor 1000000 c\rfloor}{1000000}$

will be quite sufficient, as it gives six digits of precision, but why use 3141592/1000000 when 355/113 gives better precision? Certainly, we must find a better way of obtaining approximations that are simultaneously precise and… well, let's say cute. Let's see what we can do.
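To see the gap concretely, Python's standard library can already find such "cute" approximations: `Fraction.limit_denominator` uses continued fractions to find the best fraction under a denominator bound, and it recovers 355/113 directly:

```python
from fractions import Fraction
from math import pi

naive = Fraction(3141592, 1000000)          # the "obvious" 6-digit approximation
cute = Fraction(pi).limit_denominator(1000) # best fraction with denominator <= 1000

print(cute)             # 355/113
print(abs(pi - naive))  # error of the obvious approximation
print(abs(pi - cute))   # smaller error, with a much smaller denominator
```

The error of 355/113 is about 2.7e-7, beating the million-denominator fraction's 6.5e-7.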

## Halton Sequences (Generating Random Sequences VII)

September 7, 2017

Quite a while ago, while discussing Monte Carlo integration with my students, the topic of choosing sample locations came up, and we discussed low-discrepancy sequences (a.k.a. quasi-random sequences). In a low-discrepancy sequence, the values generated look kind of uniform-random, but avoid clumping. A closer examination reveals that they are suspiciously well spaced. That's what we want in Monte Carlo integration.

But how do we generate such sequences? Well, there are many ways to do so, some more amusing than others, some more structured than others. One of the early examples, the Halton sequence (c. 1964), is particularly well behaved: it generates 0, 0.5, then 0.25 and 0.75, then 0.125, 0.375, 0.625, and 0.875, etc. It does so with a rather simple binary trick.
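The binary trick can be sketched as bit reversal: take the index in binary and mirror its bits across the binary point. This is my reconstruction of the base-2 case (the van der Corput sequence), assuming that is the trick the post describes:

```python
def halton2(n: int) -> float:
    """Base-2 Halton (van der Corput) value for index n:
    reverse the bits of n and read them after the binary point."""
    x, denom = 0.0, 1.0
    while n:
        denom *= 2.0
        x += (n & 1) / denom  # the low bit of n becomes the next bit of x
        n >>= 1
    return x

print([halton2(i) for i in range(8)])
# [0.0, 0.5, 0.25, 0.75, 0.125, 0.625, 0.375, 0.875]
```

Each power-of-two block fills in the midpoints of the gaps left by the previous block, which is exactly the well-spaced behavior described above.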

## In an Old Notebook (Generating Random Sequences VI)

April 4, 2017

Looking for something else in old notebooks, I found a diagram with no other indication, but clearly a low-cost random generator.

So, why not test it?

## The 1 bit = 6 dB Rule of Thumb, Revisited.

March 28, 2017

Almost ten years ago, I wrote an entry about the "1 bit = 6 dB" rule of thumb. This rule states that for each bit you add to a signal, you gain 6 dB of signal-to-noise ratio.

The first derivation I gave then focused on the noise, whose maximal amplitude was proportional to the amplitude represented by the last bit of the (encoded) signal. Let's now derive the rule from the most significant bit of the signal down to its least significant.
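A quick numeric check of the rule, in its simplest amplitude-ratio form (full scale over one quantization step; this sketch ignores the finer corrections a full derivation would add):

```python
from math import log10

def snr_db(bits: int) -> float:
    # An n-bit code spans 2**n levels, so the ratio of the full-scale
    # amplitude to one quantization step is 2**n; in decibels,
    # that is 20*log10(2**n) = n * 20*log10(2) ~ n * 6.02 dB.
    return 20.0 * log10(2.0 ** bits)

for b in (8, 12, 16):
    print(b, "bits ->", round(snr_db(b), 2), "dB")
```

The per-bit increment is 20·log₁₀(2) ≈ 6.02 dB, which is where the rule of thumb comes from.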

## Choosing the Right Pseudoinverse

January 17, 2017

On a number of previous occasions, I have used the pseudoinverse of a matrix to solve systems of equations, and to do other things such as channel mixing. However, the demonstration I gave before isn't entirely correct. Let's see now why it's important to distinguish between a left and a right pseudoinverse.
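To make the distinction concrete, here is a small NumPy sketch (my own example, not the post's): a tall matrix has a left pseudoinverse, a wide matrix a right one, and each inverts its matrix only on the corresponding side:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])  # tall: more equations than unknowns

# Left pseudoinverse (A^T A)^{-1} A^T, defined when A has full column rank.
left = np.linalg.inv(A.T @ A) @ A.T
print(np.allclose(left @ A, np.eye(2)))   # True: inverts A on the left

B = A.T  # wide: more unknowns than equations
# Right pseudoinverse B^T (B B^T)^{-1}, defined when B has full row rank.
right = B.T @ np.linalg.inv(B @ B.T)
print(np.allclose(B @ right, np.eye(2)))  # True: inverts B on the right
```

`left @ A` is the identity but `A @ left` is not (it is only a projection), and symmetrically for `right`; that asymmetry is exactly why the two must not be confused.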

## Logarithms (Part I?)

January 3, 2017

The traditional—but certainly not the best—way to compute the value of the logarithm of some number $x$ is to use a Taylor series, for example

$\displaystyle \ln x = (x-1)-\frac{1}{2}(x-1)^2+\frac{1}{3}(x-1)^3-\frac{1}{4}(x-1)^4+\cdots$

but that expansion is only valid for $0 < x \leq 2$, or so, because it is the Taylor expansion of $\ln x$ "around 1", and the convergence radius of this particular expression isn't very large. Furthermore, it needs a great many terms before converging.
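Both defects are easy to observe with the partial sums of the series above (a small sketch of mine, comparing against the library logarithm):

```python
from math import log

def ln_taylor(x: float, terms: int) -> float:
    # Partial sum of ln x = sum_{k>=1} (-1)^{k+1} (x-1)^k / k, around 1.
    return sum((-1) ** (k + 1) * (x - 1) ** k / k
               for k in range(1, terms + 1))

# Near 1, convergence is fast; near the edge of (0, 2], it is very slow.
print(abs(ln_taylor(1.1, 10) - log(1.1)))  # essentially converged
print(abs(ln_taylor(1.9, 50) - log(1.9)))  # still visibly off after 50 terms
# Beyond x = 2 the series simply diverges.
```

At $x=1.9$, fifty terms still leave an error around $10^{-5}$, nowhere near machine precision, which is why better schemes than the raw Taylor series are worth looking for.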