Scanning for π

27/08/2013

In a previous episode, we looked at how we could use random sampling to get a good estimate of \pi, and found that we couldn’t, really, unless we used zillions of samples. The imprecision of the estimate had nothing to do with the “machine precision”, the precision at which the machine represents numbers. The method essentially counted (using size_t, a 64-bit unsigned integer, at least on my box) the number of (random) points inside the circle versus the total number of points drawn.

Can we increase the precision of the estimate by using a better method? Maybe something like numerical integration?
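
For instance, since \int_0^1\sqrt{1-x^2}\,dx=\frac{\pi}{4}, we could integrate \sqrt{1-x^2} numerically. A minimal sketch using the midpoint rule (the names, and the rule itself, are my assumptions, not necessarily what the full post uses):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>

// Midpoint-rule integration of 4*sqrt(1-x^2) over [0,1], whose exact
// value is pi: cut [0,1] into n slices and sum the areas of rectangles
// whose heights are taken at each slice's midpoint.
double pi_by_integration(std::size_t n)
{
    const double h = 1.0 / double(n);       // width of each slice
    double sum = 0.0;
    for (std::size_t i = 0; i < n; i++)
    {
        double x = (double(i) + 0.5) * h;   // midpoint of the i-th slice
        sum += std::sqrt(1.0 - x * x);
    }
    return 4.0 * h * sum;
}
```

Unlike sampling, the error here shrinks deterministically as n grows, so every extra slice buys precision.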

Read the rest of this entry »


Amicable Numbers (part I)

20/08/2013

I know of no practical use for amicable numbers, but they’re rather fun. And rare. But let’s start with a definition. Let n be a natural number (a positive integer), and let

\displaystyle \sigma(n)=\sum_{d|n}d

with d \in \mathbb{N} and 1\leqslant{d}\leqslant{n}, be the sum of the divisors of n. We’re in fact interested in the sum of the proper divisors of n, that is,

s(n)=\sigma(n)-n

Now we’re ready to define amicable numbers!

Amicable numbers: Two distinct numbers, n, m \in \mathbb{N}, are amicable if, and only if, s(n)=m and s(m)=n.

Given n, we can find m=s(n) and test whether n=s(m)=s(s(n)). But to do that efficiently, we need to compute s(n) (or \sigma(n)) very rapidly. The first expression above requires O(n) operations, but we can do much better. Let’s see how.
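
One easy improvement over summing all d from 1 to n is trial division up to \sqrt{n}, collecting divisors in pairs. An O(\sqrt{n}) sketch (the full post may well do better still, say, from the factorization of n):

```cpp
#include <cassert>

// Sum of the proper divisors of n by trial division up to sqrt(n):
// each divisor d <= sqrt(n) is paired with its cofactor n/d, so the
// loop runs sqrt(n) times instead of n.
unsigned long s(unsigned long n)
{
    if (n < 2) return 0;
    unsigned long sum = 1;               // 1 is a proper divisor of any n > 1
    for (unsigned long d = 2; d * d <= n; d++)
        if (n % d == 0)
        {
            sum += d;
            if (d * d != n)
                sum += n / d;            // the cofactor, counted once
        }
    return sum;
}

// n and m = s(n) are amicable iff they differ and s(m) == n.
bool amicable(unsigned long n)
{
    unsigned long m = s(n);
    return m != n && s(m) == n;
}
```

The classic pair (220, 284) makes a quick sanity check: s(220)=284 and s(284)=220.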

Read the rest of this entry »


Computing Binomials (Part I)

13/08/2013

We often start thinking about something, make some progress, and eventually realize it’s not going anywhere, or, anyway, the results aren’t very phrasmotic—certainly not what you hoped for. Well, this week’s post is one of those experiments.

So I was wondering, how can I compute the binomial coefficient, \binom{n}{k}, and minimize the size of the integers involved?
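
As a baseline for comparison (not the post’s conclusion), there’s the multiplicative formula evaluated left to right, dividing as we go, so that every intermediate value is itself a binomial coefficient:

```cpp
#include <cassert>
#include <cstdint>

// C(n, k) via c_i = c_{i-1} * (n-k+i) / i. The division is exact at
// every step because c_i = C(n-k+i, i), so intermediates stay close
// to the final answer instead of blowing up to n! territory.
std::uint64_t binomial(std::uint64_t n, std::uint64_t k)
{
    if (k > n) return 0;
    if (k > n - k) k = n - k;        // C(n, k) == C(n, n-k): use the smaller k
    std::uint64_t c = 1;
    for (std::uint64_t i = 1; i <= k; i++)
        c = c * (n - k + i) / i;     // exact: c is C(n-k+i, i) afterwards
    return c;
}
```

The largest value ever held is c*(n-k+i) just before a division, which is what a cleverer scheme would try to shrink further.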

Read the rest of this entry »


Of Drunkards

06/08/2013

In a city with orthogonal streets and regular city blocks, a party-goer leaves a bar and walks back home. His home is north-east of the bar, N blocks north and E blocks east. At each intersection, he decides to go either north or east at random, but whatever’s left of his sobriety keeps him from backtracking: he always moves closer to home. How many ways can the drunkard get back home?
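
Each walk is some arrangement of N norths and E easts, and we can count them with Pascal’s rule, p(i,j)=p(i-1,j)+p(i,j-1). A one-row sketch of that recurrence:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Number of monotone paths to a home N blocks north and E blocks
// east: paths(i, j) = paths(i-1, j) + paths(i, j-1), filled one row
// at a time. The answer equals the binomial C(N+E, N).
unsigned long long count_paths(std::size_t N, std::size_t E)
{
    std::vector<unsigned long long> row(E + 1, 1);  // row 0: one way, straight east
    for (std::size_t i = 1; i <= N; i++)
        for (std::size_t j = 1; j <= E; j++)
            row[j] += row[j - 1];                   // old row[j] was paths(i-1, j)
    return row[E];
}
```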

Read the rest of this entry »


Best. Number Base. Ever.

30/07/2013

One of the first things we learn when we begin programming is that there are different number bases: base 10, the usual one, but also binary, octal, and hexadecimal. Their expressive power is strictly equivalent, but we also notice that hexadecimal numbers are written much more compactly than binary, and so, hexadecimal is “more efficient.”

But “efficiency” raises a question: how do we define it, and, once efficiency is defined, which base is the best? Let’s find out!
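
One classical way to define efficiency is radix economy: the cost of writing N in base b is the number of digits times the b distinct symbols each digit can take. Whether that’s the measure the post settles on is another matter, but it’s easy to sketch:

```cpp
#include <cassert>
#include <cmath>

// Radix economy of base b for a number N: (digits needed) * (symbols
// per digit). The continuous relaxation, b * ln(N)/ln(b), is minimized
// at b = e, which is why base 3 edges out base 2 under this measure.
double radix_economy(double b, double N)
{
    return b * std::ceil(std::log(N) / std::log(b));
}
```

For N = 10^6, base 2 costs 2 × 20 = 40, base 3 costs 3 × 13 = 39, and base 16 a hefty 16 × 5 = 80.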

Read the rest of this entry »


The conjecture of 8

23/07/2013

The other day I found an amusing short from Numberphile about “happy numbers”. Not unlike the Collatz problem, a number is happy if, through successive transformations, it reaches 1. The transformation is rather numerological in nature (i.e., it’s arbitrary, and so why not): to get the next number in the series to happiness, you take the current number and sum the squares of its digits.

The example in the video is that 7 is a happy number:

7 \to 7^2=49

49 \to 4^2+9^2=97

97 \to 9^2+7^2=130

130 \to 1^2+3^2+0^2=10

10 \to 1^2+0^2=1

Therefore 7 is a happy number. That’s cool, but are all numbers happy, and, if not, what happens when a number is unhappy?
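
We can let the machine chase that question without presuming the answer by pairing the digit-squaring step with Floyd’s cycle detection, a sketch:

```cpp
#include <cassert>

// One step toward happiness: the sum of the squares of n's digits.
unsigned digit_square_sum(unsigned n)
{
    unsigned s = 0;
    for (; n; n /= 10)
    {
        unsigned d = n % 10;
        s += d * d;
    }
    return s;
}

// Floyd's tortoise-and-hare: iterate the step at two speeds until the
// two meet. The sequence always ends up in a cycle (values stay
// bounded), and n is happy iff that cycle is the fixed point 1.
bool happy(unsigned n)
{
    unsigned slow = n, fast = n;
    do
    {
        slow = digit_square_sum(slow);
        fast = digit_square_sum(digit_square_sum(fast));
    } while (slow != fast);
    return slow == 1;
}
```

The tortoise-and-hare trick costs O(1) memory, which matters if you plan to interrogate zillions of numbers about their mood.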

Read the rest of this entry »


π by rejection

16/07/2013

In the 18th century, Georges-Louis Leclerc, Count of Buffon, proposed an experiment to estimate \pi. The experiment (now somewhat famous as it appears in almost all probability textbooks) consists in randomly dropping matches on a floor made of parallel floorboards and using the number of times the matches land across a junction to estimate \pi.

To perform the Count’s experiment, you do not need a lot of math. You only need to test if

\displaystyle x\leqslant\frac{w}{2}\sin\alpha

with x\sim\mathcal{U}(0,w) and \alpha\sim\mathcal{U}(0,\frac{\pi}{2}) both uniform random variables, and w the width of the floorboards. You may remark that we use \frac{\pi}{2}, and that it looks like a circular definition, until you realize that \frac{\pi}{2} radians is just 90°, a right angle you can construct without knowing \pi. Then you start throwing zillions of virtual matches, count the number of intersections, and use a laboriously obtained probability distribution to estimate \pi.

Luckily for us, there’s a variant that requires neither calls to sine nor complicated integrals. Just squaring, counting, and a division. Let’s have a look.
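
That variant, sketched: draw points uniformly in the unit square and count the fraction landing inside the quarter disc x^2+y^2\leqslant 1, whose area is \frac{\pi}{4}:

```cpp
#include <cassert>
#include <cstddef>
#include <random>

// Rejection-style estimate of pi: the fraction of uniform points in
// [0,1)^2 satisfying x^2 + y^2 <= 1 approaches pi/4, so squaring,
// counting, and one division suffice. No sines, no integrals.
double pi_by_rejection(std::size_t samples, unsigned seed = 1)
{
    std::mt19937_64 gen(seed);
    std::uniform_real_distribution<double> u(0.0, 1.0);
    std::size_t inside = 0;
    for (std::size_t i = 0; i < samples; i++)
    {
        double x = u(gen), y = u(gen);
        if (x * x + y * y <= 1.0)
            inside++;
    }
    return 4.0 * double(inside) / double(samples);
}
```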

Read the rest of this entry »


Strange Change

08/07/2013

A classical example of a greedy algorithm is the algorithm that returns change given a set of face values. In Canada (and the US) the usual coins are the 25¢, 10¢, 5¢, and 1¢ coins (there’s also a 50¢ coin, but they’re rare nowadays). The algorithm proceeds as follows. You are given a number of cents to return, and you start with the highest face value, v_1. You subtract v_1 from the number of cents to return while that number is greater than or equal to v_1. If the number of cents still to return is smaller than v_1 but not zero, you repeat with the next denomination, v_2, then, if there’s still some, with v_3, etc. Eventually, you return all the change, using a minimal number of coins.
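
The procedure above, in code. The repeated subtraction collapses to one integer division and one remainder per denomination (face values assumed sorted, largest first):

```cpp
#include <cassert>
#include <vector>

// Greedy change-making: for each face value v, largest first, return
// as many coins of value v as fit, then carry on with the remainder.
std::vector<int> make_change(int cents,
                             const std::vector<int>& values = {25, 10, 5, 1})
{
    std::vector<int> counts;
    for (int v : values)
    {
        counts.push_back(cents / v);    // coins of value v returned
        cents %= v;                     // cents still to return
    }
    return counts;
}
```

For 68¢, the greedy answer is two quarters, a dime, a nickel, and three pennies.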

So the next interesting question is whether our current coinage is optimal, as measured by the (average) quantity of returned change. No. It’s not. Let’s see why.

Read the rest of this entry »


Euclid and Primality Testing (III)

02/07/2013

So in previous installments, we looked at how to use the Euclidean algorithm to devise a Las Vegas-type algorithm for primality testing. We also found that, in general, simply testing factors one after the other is much more efficient (but that doesn’t mean there aren’t better algorithms to test for primality!).

We also considered only relatively small primorials (the products of the first n prime numbers) since they rapidly exceeded 2^{32}. But just how fast do primorials grow?
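
We can watch them grow with a short brute-force sketch. Already the 10th primorial bursts past 2^{32}, and the 16th overflows even 64 bits:

```cpp
#include <cassert>
#include <cstdint>

// The n-th primorial: the product of the first n primes, the primes
// being found by plain trial division. The first 9 fit under 2^32;
// the 10th already exceeds it.
std::uint64_t primorial(unsigned n)
{
    std::uint64_t product = 1;
    unsigned found = 0;
    for (std::uint64_t c = 2; found < n; c++)
    {
        bool is_prime = true;
        for (std::uint64_t d = 2; d * d <= c; d++)
            if (c % d == 0)
            {
                is_prime = false;
                break;
            }
        if (is_prime)
        {
            product *= c;
            found++;
        }
    }
    return product;
}
```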

Read the rest of this entry »


Euclid, Prime Numbers, and a Bit of Algorithm Analysis

25/06/2013

Last time, we had a look at using the greatest common divisor and Euclid’s algorithm to devise a Las Vegas algorithm for primality testing. We also had a look at how the inclusion-exclusion principle helps us determine the proportion of the numbers correctly tested.
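
The kernel of that method, sketched here with the primorial of the first 9 primes (which primorials the posts actually use is their own business):

```cpp
#include <cassert>
#include <cstdint>
#include <numeric>   // std::gcd

// One gcd against a primorial tests many small factors at once: for
// n > 23, gcd(n, 2*3*5*...*23) > 1 proves n composite, while a gcd
// of 1 is merely inconclusive (n could still have larger factors).
bool proven_composite(std::uint64_t n)
{
    const std::uint64_t P = 223092870;   // 9# = 2*3*5*7*11*13*17*19*23
    return n > 23 && std::gcd(n, P) > 1;
}
```

Note the Las Vegas flavor: a positive answer is always correct, but a negative one, as with 899 = 29 × 31, only says the test didn’t catch it.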

However, we finished by asking ourselves if the method is actually efficient compared to, say, simply testing small divisors, one by one. Let us now find out.

Read the rest of this entry »