While reading about the rather precise approximation 355/113 for π, I wondered how many useful approximations we could find.

First, we need to define what “useful” means. Certainly precision would be part of the equation, but precision is modulated by the actual need for accuracy. If you’re working in 16 bits, the best you can need is an error of about one part in 65536. We could also impose conditions on the size of the integers involved in the approximations, the smaller the better (a justified requirement if we suppose that integer division and multiplication are faster with smaller operands). However, we must understand that precision is linked to the number of digits in the numerator and denominator: we expect “larger” fractions to give better precision.

*

* *

So, let’s have a look at a few rational approximations of π I found (most are already known, I’m sure). The following table shows the four best approximations found by the program in each range, grouped by the number of digits in the denominator and listed from most to least precise.

| Range | Fraction | Approx | Error |
|---|---|---|---|
| 1—9 | 22/7 | 3.142857142857143 | 0.001264489267349678 |
| | 25/8 | 3.125 | -0.01659265358979312 |
| | 28/9 | 3.111111111111111 | -0.03048154247868196 |
| | 3/1 | 3 | -0.1415926535897931 |
| 10—99 | 311/99 | 3.141414141414141 | -0.0001785121756516794 |
| | 289/92 | 3.141304347826087 | -0.0002883057637061981 |
| | 267/85 | 3.141176470588235 | -0.0004161830015578794 |
| | 245/78 | 3.141025641025641 | -0.0005670125641521473 |
| 100—999 | 355/113 | 3.141592920353983 | 2.667641894049666e-07 |
| | 2818/897 | 3.141583054626533 | -9.598963260248894e-06 |
| | 2862/911 | 3.141602634467618 | 9.980877824666834e-06 |
| | 2463/784 | 3.141581632653061 | -1.102093673210902e-05 |
| 1000—9999 | 355/113 | 3.141592920353983 | 2.667641894049666e-07 |
| | 31218/9937 | 3.141592029787662 | -6.238021308391239e-07 |
| | 30863/9824 | 3.141592019543974 | -6.340458189590947e-07 |
| | 30508/9711 | 3.141592009061889 | -6.445279043809649e-07 |
| 10000—99999 | 312689/99532 | 3.141592653618936 | 2.914335439641036e-11 |
| | 208341/66317 | 3.141592653467437 | -1.22356347276309e-10 |
| | 312334/99419 | 3.141592653315765 | -2.740283555624501e-10 |
| | 104348/33215 | 3.141592653921421 | 3.316280583476328e-10 |
| 100000—999999 | 3126535/995207 | 3.14159265358865 | -1.142641536944211e-12 |
| | 1146408/364913 | 3.141592653591404 | 1.610711564126177e-12 |
| | 1980127/630294 | 3.141592653587056 | -2.736921800305936e-12 |
| | 2813846/895675 | 3.141592653585285 | -4.508393658397836e-12 |

*

* *

These are rather easy to find if you *already* know π to a high precision, as we do today. A quick visit to Wikipedia will get you π to 50 or so decimals, but in earlier times, the needed computation was so tedious that we had to wait until around the year 1400 to get to 10 digits of precision.

The C math header defines `M_PI` (strictly speaking a POSIX extension rather than standard C), good to about 16 significant digits once stored in a `double`. We can use that to brute-force search for “useful” fractions. I wrote a simple program to do just that:

```cpp
#include <cmath>     // for M_PI
#include <utility>
#include <vector>
#include <algorithm>
#include <iostream>
#include <iomanip>

typedef std::pair<int,int> fraction;

double sqr(double x) { return x*x; }

double as_double(const fraction & a) { return a.first/(double)a.second; }

// squared error, used only for comparing candidates
double fraction_error(const fraction & a) { return sqr(as_double(a)-M_PI); }

int gcd(int a, int b)
 {
  while (b) { int t=b; b=a%b; a=t; }
  return a;
 }

fraction simplify(fraction a)
 {
  int d;
  while ((d=gcd(a.first,a.second))!=1) { a.first/=d; a.second/=d; }
  return a;
 }

class fraction_with_error
 {
 public:
  fraction fract;
  double error;

  bool operator==(const fraction_with_error & other) const { return fract==other.fract; }

  fraction_with_error(int a, int b)
   : fract(simplify(fraction(a,b))),
     error(fraction_error(fract)) {}
 };

bool compare(const fraction_with_error & a, const fraction_with_error & b)
 { return a.error<b.error; }

void by_decades(int max_range=3, size_t nb_bests=10)
 {
  for (int d=1,low=1,high=10; d<=max_range; d++,low=high,high*=10)
   {
    std::vector<fraction_with_error> bests;

    for (int denum=low; denum<high; denum++)
     // only test numerators that put the fraction between 3.14 and 3.16
     for (int numer=(314*denum)/100, max_numer=(316*denum)/100;
          numer<=max_numer;
          numer++)
      {
       fraction_with_error this_fraction=fraction_with_error(numer,denum);
       if ((bests.size()==0) || (this_fraction.error<bests.back().error))
        if (std::find(bests.begin(),bests.end(),this_fraction)==bests.end())
         {
          // keep bests sorted by increasing error
          bests.insert(std::upper_bound(bests.begin(),bests.end(),
                                        this_fraction,compare),
                       this_fraction);
          if (bests.size()>nb_bests) bests.pop_back();
         }
      }

    std::cout << low << "--" << high-1 << std::endl;
    for (const fraction_with_error & this_fraction : bests)
     std::cout << '\t' << this_fraction.fract.first
               << '/' << this_fraction.fract.second
               << '\t' << as_double(this_fraction.fract)
               << '\t' << as_double(this_fraction.fract)-M_PI
               << std::endl;
   }
 }

int main()
 {
  std::cout << std::setprecision(16);
  by_decades(8,4);
  return 0;
 }
```

*

* *

The definition of “useful” may vary, of course. Maybe it’d be interesting to find fractions that involve numbers with very few ones in their binary representation. Why? Well, I’ve seen microcontrollers and CPUs that had fast shifts and adds, but a very slow multiplication instruction, or none at all. In that case, we must emulate multiplication using shifts and adds: the fewer shifts, the faster the operation.