Scanning for π

In a previous episode, we looked at how we could use random sampling to get a good estimate of \pi, and found that we couldn’t, really, unless we used zillions of samples. The imprecision of the estimate had nothing to do with the “machine precision”, the precision at which the machine represents numbers. The method essentially counted (using size_t, a 64-bit unsigned integer, at least on my box) the number of (random) points inside the circle versus the total number of points drawn.


Can we increase the precision of the estimate by using a better method? Maybe something like numerical integration?

The first idea that comes to mind (and not a very good one, as we will see) is to divide the circle—or a quadrant—into a discrete n\times{}n grid, with n rather large, or, conversely, with very small tile size. A straightforward implementation of this idea yields something like:

void pitify(float_x step, size_t & total, size_t & in)
 {
  for (float_x x=0; x<=1.0; x+=step)
    for (float_x y=0; y<=1.0; y+=step,total++)
     if (sqr(x)+sqr(y)<=1)
       in++;
 }

where step is something like 1e-10, or any suitably small magnitude, and float_x is a typedef to either float or double. The function reports the number of points inside the quarter circle (up to machine precision and step size); the ratio in/total approximates \frac{\pi}{4}, so you multiply by four to get an approximation of \pi. Then you choose the step. One possible strategy is to start at, say, 1, and refine by making step smaller, either by a factor of two, or a factor of 10. Our “real number” intuition tells us that the factor shouldn’t make a difference. Ha! Ha! It does!

Refining precision using (negative) powers of 2 or 10 doesn’t give the same precision:


Using (negative) powers of 2, the error goes smoothly down to about 2.99e-6 with a step size of 9.5e-7. With (negative) powers of 10, the error decreases, then increases, then decreases again; at a step size of 1e-6, the error is 4e-9, much smaller than with the powers of two. But we see from the above plot that while the powers of 2 series is less precise, it is stable; the powers of 10 series eventually yields a smaller error, but is unstable.

* *

We want to approximate the following integral:

\displaystyle \int_0^1 \sqrt{1-x^2}\,dx

which we know evaluates to \frac{\pi}{4} (and has antiderivative \frac{1}{2}\left(x\sqrt{1-x^2}+\sin^{-1}(x)\right)+c). If we want to evaluate it numerically, we can replace it by

\displaystyle \int_0^1 \sqrt{1-x^2}\,dx \approx \Delta{x}\sum_{i=0}^{\Delta{x}^{-1}} \sqrt{1-(i\Delta{x})^2}

Graphically, that means we approximate the integral by a series of rectangles, starting with a coarse subdivision of the quadrant:


with more


and more


rectangles. In principle, as \Delta{x} goes to zero, the sum converges to the integral. But since we can’t afford the ghost of the departed \Delta{x}, we will settle for some small step. A rather simple implementation gives:

float_x integral_pitify(float_x step)
 {
  float_x total=0;

  for (float_x x=0; x<=1.0; x+=step)
    total+=sqrt(1-sqr(x));

  return step*total*4;
 }

Launching the (numerical) integration with steps that are (again) (negative) powers of two and (negative) powers of ten tells a similar story as before:


except that this time, the powers of two are both more precise and more stable than the powers of ten. The best precision for powers of two is obtained with a step of size 7.2e-12 and an error of -1.3e-11, while the powers of ten, without exactly diverging, yield the rather disappointing error of -1.2e-8 at a step size of only 1e-9.

* *

So what goes wrong? Well, the simple answer is: floats. Why? Because floats are only approximations of the reals, and since the underlying representation is binary, they’re not very good with things that aren’t powers of two. For example, 0.1 is rather elegant in base 10, but in binary, it’s represented as 0.00011001100110011…, and therefore it gets rounded to machine precision. So you’ll have 0.099609375 or something if you truncate it to 0.000110011. Powers of two, however, are machine friendly: 0.5 is 0.1 in base 2, and 0.25 is 0.01, exactly, and it gets represented as 1\times2^{-2} by the float standard; so multiplying or dividing by powers of two only affects the exponent and does not introduce any other kind of rounding error.


If you endeavor to compute \pi to a very high precision, you can’t possibly rely on float or double. In the bestest case ever, you’ll end up with a machine-precision \pi, that is, \pi to 7 or 16 digits, which is more or less subphrasmotic. No, we would have to use an arbitrary precision library. In C, that would be quite cumbersome, but in C++ it might actually be rather elegant.
