Last week, we noticed a fun connection between lattices and fractions, which helped us get rational approximations to real numbers. Since only points close to the (real-sloped) line are of interest, only the points representing fractions close to that line are candidates for rational approximation, and the closer they are to the line, the better.

But what if we find a point real close to the line? What information can we use to refine our guess?

Geometrically, the gap between a point and the line also gives you the angle between the real number (seen as a line) and the rational approximation (also seen as a line, shown in blue in the figure).

So we can use this information to correct our estimate. But since the error is itself irrational (an irrational minus a rational is still irrational), we kind of end up with the same problem: to get a corrected rational approximation, we must get a rational approximation of the error!

Well, it’s not as bad as it seems. Since the error is (hopefully) tiny, the approximation of the error should also have itself a tiny error. So we can guess somewhat roughly, and add that correction to the current approximation. How do we get a rough approximation rapidly?

If $\epsilon$ is the (rather small) error, we can choose some denominator $d$ such that

$$\frac{1}{d}\leqslant\epsilon,$$

the smallest such denominator being $d=\lceil 1/\epsilon\rceil$. Turns out, if we put

$$\frac{n}{d}=\frac{1}{\lceil 1/\epsilon\rceil},$$

we just have found an acceptable fraction.
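A quick sanity check shows why this rough guess is good enough. Writing $\epsilon$ for the current error and $\delta=\lceil 1/\epsilon\rceil-1/\epsilon\in[0,1)$ for the rounding slack, the error left after the correction is

$$
\epsilon-\frac{1}{\lceil 1/\epsilon\rceil}
=\epsilon-\frac{1}{1/\epsilon+\delta}
=\epsilon-\frac{\epsilon}{1+\epsilon\delta}
=\frac{\epsilon^2\delta}{1+\epsilon\delta}
<\epsilon^2.
$$

So each correction replaces an error of $\epsilon$ by an error smaller than $\epsilon^2$, which is the digit-doubling behavior observed below.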

Then the algorithm is pretty simple. We start at some initial guess (that doesn’t have to be very good), say something like

$$a_0=\frac{\lfloor 10x\rfloor}{10},$$

for the constant of interest, $x$. We set $i=0$, find the error $\epsilon_i=x-a_i$, and compute

$$a_{i+1}=a_i+\frac{1}{\lceil 1/\epsilon_i\rceil}.$$

We iterate until we reach machine precision.
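A minimal sketch of the iteration in Python, using exact `Fraction` arithmetic in place of Mathematica’s arbitrary precision; the names `PI` and `refine`, and the number of digits kept for $\pi$, are my own choices, not the original code:

```python
from fractions import Fraction
import math

# A rational stand-in for pi with plenty of correct leading digits,
# so the iteration is not limited by float precision.
PI = Fraction("3.14159265358979323846264338327950288")

def refine(x, a, iterations):
    """Refine the rational approximation a of x: each step adds the
    rough rational approximation 1/ceil(1/eps) of the current error."""
    for _ in range(iterations):
        eps = x - a                       # exact; stays positive here
        if eps <= 0:
            break
        a += Fraction(1, math.ceil(1 / eps))
    return a

a0 = Fraction(31, 10)                     # initial guess: 3.1
print(refine(PI, a0, 1))                  # → 157/50, i.e. 3.14
print(refine(PI, a0, 2))                  # → 49323/15700 ≈ 3.1415923566
```

Since each correction $1/\lceil 1/\epsilon\rceil$ is at most $\epsilon$, the approximations approach the constant from below and the error stays positive throughout.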

* * *

The surprise is how fast this simple algorithm converges. Using Mathematica (for its arbitrary-precision arithmetic), for $\pi$, we get the first few iterations:

| Iter | Fraction | Numerically | Error |
|------|----------|-------------|-------|
| 0 | 31/10 | 3.1 | 0.0415926535 |
| 1 | 157/50 | 3.14 | 0.0015926535 |
| 2 | 49323/15700 | 3.1415923566… | 2.97 × 10^{-7} |
| 3 | … | 3.141592653589… | 6.97 × 10^{-15} |
| 4 | … | 3.141592653589… | 2.25 × 10^{-29} |

* * *

So now we have an algorithm that more or less doubles the number of significant digits per iteration. That’s fine, but it still doesn’t solve the really interesting problem: how do we get “cute” and precise rational approximations?

This entry was posted on Tuesday, October 10th, 2017 at 18:39 and is filed under algorithms, Mathematics.
