Finding rational approximations to real numbers may help us simplify calculations in everyday life, because using a simple fraction in place of a long decimal expansion makes back-of-the-envelope estimations much easier. It may also have some applications in programming, when your CPU is kind of weak and doesn't deal well with floating-point numbers. Floating-point numbers emulated in software are very slow, so if we can dispense with them and use integer arithmetic instead, all the better.

However, finding good rational approximations to an arbitrary constant is not quite as trivial as it may seem. Indeed, we may think that using something like π ≈ 3141592/1000000 will be quite sufficient, as it gives six digits of precision, but why use 3141592/1000000 when 355/113 gives you *better* precision? Clearly, we must find a way of finding approximations that are simultaneously precise and… well, let's say *cute*. Let's see what we could do.

The first approach, just scaling by a large number, say 1000000 as above, then reducing the fraction, may or may not yield interesting results. For example,

π ≈ 3141592/1000000 = 392699/125000

isn't terribly sexy. It *undershoots* π by about 6.5×10⁻⁷, while Zǔ Chōngzhī's approximation, 355/113, is better with an *overshoot* of only about 2.7×10⁻⁷.

But how do we find these good, cute approximations? Of course, we could go with a special, ad hoc procedure for each constant. Just as Archimedes used a 96-sided polygon to arrive at

223/71 < π < 22/7.

I guess we could do the same for other constants, like φ, e, ln 2, etc. But that would be very tedious.

* * *

There seems to be no obvious relation between π and 355/113, yet the approximation is very good. How do we find this type of approximation without resorting to some complex ad hoc procedure? One way of looking at the problem is to consider all fractions laid out on a plane. Let's say the xs are the denominators and the ys the numerators. Each point is associated with a fraction, and all points lying on the same line through the origin represent the same fraction… Indeed, (2,1), (4,2), and (6,3) all lie on the same line, and all represent the same fraction, 1/2.

The straight line associated with an arbitrary irrational number will never pass exactly through a point of that grid, but it will come close to some of the points. These are the points that are of interest.

So let's take an example. Say, φ = (1+√5)/2 ≈ 1.6180339887…, the so-called golden ratio.

Tracing φ's line against the grid:

With 1/1 = (1,1) in the lower-left corner. While the line doesn't go right through any point, we see that it comes *very close* to some of them. But how do we find them? We see that the only points we should consider are those immediately on either side of the line:

(Yes, the line is still straight… the optical illusion is a bonus.) All of these points give you approximations with errors that are upper-bounded by 1/q, where q is the denominator of the fraction: taking the nearest numerator p on either side of the line means |qφ − p| < 1, and therefore |φ − p/q| < 1/q. That's an upper bound, and while it says that the larger the denominator, the better the approximation, it doesn't say much about how small the error can be. Say we limit ourselves to a particular range. Let's say we search for a denominator less than 25 (for some reason). We only have a few points to consider:

For each denominator, we only look at the two points immediately below and above the line. Ultimately, we pick the one with the smallest (absolute) error.

* * *

But how does that help us find good rational approximations? Well, we now know that for each denominator we have exactly two numerators to check. We just have to try them all, I guess.

Fortunately, the program isn’t too complicated, and since we only check two points per denominator, it’s also relatively fast.

```cpp
#include <iostream>
#include <iomanip>
#include <cmath> // for crummy #defines like M_PI
#include <cstdint>
#include <limits>

template <typename T> T sqr(const T & a) { return a*a; }
template <typename T> T abs(const T & a) { return (a<0) ? -a : a; }

const double phi=1.6180339887498948482;

int main()
{
    const int64_t limit=1000000000;
    const double constante= phi; //M_LN2; //M_E; //M_PI;

    double best_error=std::numeric_limits<double>::max();

    std::cout << std::setprecision(12);
    for (int64_t denum=1; denum<limit; denum++)
    {
        for (int64_t dessus=0; dessus<2; dessus++)
        {
            int64_t numer=constante*denum+dessus; // with truncation
            double approx=numer/(double)denum;
            double e=abs(constante-approx);
            if (e<best_error)
            {
                best_error=e/10; // the next hit must gain a digit
                std::cout
                    << std::noshowpos // hides +
                    << numer << '/' << denum << "\t "
                    << approx << "\t "
                    << std::showpos // shows +
                    << constante-approx
                    << std::endl;

                if (best_error==0)
                    // we have reached machine-precision!
                    // we're done!
                    return 0;
            }
        }
    }
    return 0;
}
```

Since there isn't any point in printing all good approximations, just the very best, I made it so that each time a good approximation is found, the next one must add at least one digit of precision. For φ, the program outputs:

```
1/1                  1              +0.61803398875
5/3                  1.66666666667  -0.0486326779168
21/13                1.61538461538  +0.00264937336528
89/55                1.61818181818  -0.000147829431923
377/233              1.61802575107  +8.23767693348e-06
1597/987             1.61803444782  -4.59071787029e-07
6765/4181            1.61803396317  +2.55831884566e-08
28657/17711          1.61803399018  -1.42570222295e-09
121393/75025         1.61803398867  +7.94517784897e-11
514229/317811        1.61803398875  -4.42756942221e-12
2178309/1346269      1.61803398875  +2.46691556072e-13
9227465/5702887      1.61803398875  -1.37667655054e-14
39088169/24157817    1.61803398875  +8.881784197e-16
165580141/102334155  1.61803398875  +0
```

For π, we get:

```
3/1                 3              +0.14159265359
22/7                3.14285714286  -0.00126448926735
333/106             3.14150943396  +8.32196275291e-05
355/113             3.14159292035  -2.66764189405e-07
94763/30164         3.14159262697  +2.66172430763e-08
103283/32876        3.14159265117  +2.41568454129e-09
208341/66317        3.14159265347  +1.22356347276e-10
833719/265381       3.14159265358  +8.71525074331e-12
4272943/1360120     3.14159265359  +4.04121180964e-13
5419351/1725033     3.14159265359  -2.22044604925e-14
80143857/25510582   3.14159265359  +4.4408920985e-16
245850922/78256779  3.14159265359  +0
```

We find Zǔ Chōngzhī's approximation, 355/113, but also some that are much better. Notice also that the program may output a different number of approximations: in the first case, we get more than in the second. Also, you've probably noticed that suspicious 0 error at the end. Well, that's where we hit machine precision: we can't do better with `double`s.

* * *

Finding rational approximations of real constants is interesting on its own. Indeed, it makes us think about numbers a bit differently. Above, I used a *plane* and a *line* to represent a constant. That's not our usual way of thinking about numbers, but it proved fruitful.

You can find π ≈ 355/113 by use of continued fractions; see https://en.wikipedia.org/wiki/Continued_fraction

You start with π, subtract its integral part, 3, then compute the reciprocal of what remains, which gives you 7.06251330… Subtract the integral part, 7, again and compute the reciprocal, which gives you nearly 16. Continue if you need higher precision.

Then the continued fraction can be simplified into a single fraction, i.e. 3+1/(7+1/16) = 355/113.