Best. Number Base. Ever.

July 30, 2013

One of the first things we learn when we begin programming is that there are different number bases: base 10, the usual one, but also binary, octal, and hexadecimal. They are strictly equivalent in expressive power, but we also notice that hexadecimal numbers are written much more compactly than binary numbers, and so hexadecimal is “more efficient.”

[Image: coin obverse]

But “efficiency” raises the question: how do we define it, and, once efficiency is defined, which base is the best? Let’s find out!
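To get a feel for the compactness claim before efficiency is defined properly, here’s a minimal C++ sketch (mine, not necessarily the measure used in the full post) that counts how many digits a number needs in each base; the count is \lfloor\log_b n\rfloor+1:

#include <cstdint>
#include <iostream>

// Number of digits of n written in base b (for n > 0).
int digits(uint64_t n, int b)
{
    int d = 0;
    for (; n; n /= b)
        d++;
    return d;
}

int main()
{
    const uint64_t n = 1000000;
    const int bases[] = {2, 8, 10, 16};
    for (int b : bases)
        std::cout << "base " << b << ": " << digits(n, b) << " digits\n";
    // Prints 20, 7, 7, and 5 digits respectively: hexadecimal is
    // four times as compact as binary.
    return 0;
}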



The conjecture of 8

July 23, 2013

The other day I found an amusing short from Numberphile about “happy numbers”. Not unlike the Collatz problem, a number is happy if, through successive transformations, it reaches one. The transformation is rather numerological in nature (i.e., it’s arbitrary, and so why not): to get the next number in the series to happiness, you take the current number and sum the squares of its digits.

The example in the video is that 7 is a happy number:

7 \to 7^2=49

49 \to 4^2+9^2=97

97 \to 9^2+7^2=130

130 \to 1^2+3^2+0^2=10

10 \to 1^2+0^2=1

Therefore 7 is a happy number. That’s cool, but are all numbers happy, and, if not, what happens when a number is unhappy?
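The transformation is easy to replay in code. Here is a minimal C++ sketch (digit_square_sum is my name for the step) that traces the chain above; note that the loop terminates only because 7 is happy:

#include <cstdint>
#include <iostream>

// Sum of the squares of the decimal digits of n.
uint64_t digit_square_sum(uint64_t n)
{
    uint64_t s = 0;
    for (; n; n /= 10)
    {
        uint64_t d = n % 10;
        s += d * d;
    }
    return s;
}

int main()
{
    uint64_t n = 7;
    while (n != 1)  // careful: loops forever if n is unhappy!
    {
        uint64_t next = digit_square_sum(n);
        std::cout << n << " -> " << next << '\n';
        n = next;
    }
    return 0;
}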



π by rejection

July 16, 2013

In the 18th century, Georges-Louis Leclerc, Count of Buffon, proposed an experiment to estimate \pi. The experiment (now somewhat famous, as it appears in almost all probability textbooks) consists of randomly dropping matches on a floor made of parallel floorboards and using the number of times the matches land across a junction between boards to estimate \pi.

[Image: pi pie]

To perform the Count’s experiment, you do not need a lot of math. You only need to test if

\displaystyle x\leqslant\frac{w}{2}\sin\alpha

with x\sim\mathcal{U}(0,w) and \alpha\sim\mathcal{U}(0,\frac{\pi}{2}) both uniform random variables, and w the width of the floorboards. You may remark that we use \frac{\pi}{2} and that it looks like a circular definition, until you realize that \frac{\pi}{2} radians is just a right angle, 90°, which you can obtain by other means. Then you throw zillions of virtual matches, count the number of intersections, and use a laboriously obtained probability distribution to estimate \pi.
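As a sketch of what the virtual experiment could look like: with the distributions above, the crossing probability works out to \frac{1}{2}E[\sin\alpha]=\frac{1}{\pi} (the laboriously obtained distribution, compressed to one line), so \pi is estimated by the number of throws over the number of crossings.

#include <cmath>
#include <cstdint>
#include <iostream>
#include <random>

int main()
{
    const double w = 1.0;                    // floorboard width, arbitrary units
    const double half_pi = std::acos(0.0);   // a right angle, in radians
    const uint64_t n = 10000000;             // number of virtual matches

    std::mt19937_64 rng(12345);
    std::uniform_real_distribution<double> x(0.0, w);
    std::uniform_real_distribution<double> alpha(0.0, half_pi);

    uint64_t crossings = 0;
    for (uint64_t i = 0; i < n; i++)
        if (x(rng) <= (w / 2) * std::sin(alpha(rng)))
            crossings++;

    // P[crossing] = 1/pi, so pi is about n/crossings.
    std::cout << "pi ~ " << double(n) / crossings << '\n';
    return 0;
}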

Luckily for us, there’s a variant that requires neither calls to sines nor complicated integrals. Just squaring, counting, and a division. Let’s have a look.
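The variant alluded to is presumably the classic rejection method: draw points uniformly in the unit square and count how many fall within distance 1 of a corner (the squaring and counting); since that quarter disc has area \frac{\pi}{4}, the proportion, times 4, estimates \pi. A minimal sketch:

#include <cstdint>
#include <iostream>
#include <random>

int main()
{
    const uint64_t n = 10000000;   // number of random points
    std::mt19937_64 rng(12345);
    std::uniform_real_distribution<double> u(0.0, 1.0);

    uint64_t inside = 0;
    for (uint64_t i = 0; i < n; i++)
    {
        double x = u(rng), y = u(rng);
        if (x * x + y * y <= 1.0)  // inside the quarter disc?
            inside++;
    }

    // Quarter disc area is pi/4, square area is 1.
    std::cout << "pi ~ " << 4.0 * inside / n << '\n';
    return 0;
}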



Strange Change

July 8, 2013

A classical example of a greedy algorithm is the algorithm that returns change given a set of face values. In Canada (and the US) the usual coins are the 25¢, 10¢, 5¢, and 1¢ coins (there’s also a 50¢ coin, but they’re rare nowadays). The algorithm proceeds as follows. You are given a number of cents to return, and you start with the highest face value, v_1. While the number of cents still to return is greater than or equal to v_1, you subtract v_1 from it. If the number of cents still to return is smaller than v_1 but not zero, you repeat with the next denomination, v_2, then, if change still remains, with v_3, etc. Eventually, you return all the change, using a minimal number of coins.
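In code, the greedy procedure is just a few lines. A minimal C++ sketch (make_change is my name for it), with the face values given in decreasing order:

#include <iostream>
#include <vector>

// Greedy change-making; values must be in decreasing order.
std::vector<int> make_change(int cents, const std::vector<int>& values)
{
    std::vector<int> coins;
    for (int v : values)
        while (cents >= v)
        {
            cents -= v;
            coins.push_back(v);
        }
    return coins;
}

int main()
{
    const std::vector<int> coinage = {25, 10, 5, 1};
    for (int c : make_change(67, coinage))
        std::cout << c << "¢ ";  // 25¢ 25¢ 10¢ 5¢ 1¢ 1¢
    std::cout << '\n';
    return 0;
}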

[Image: coin obverse]

So the next interesting question is whether our current coinage is optimal, as measured by the average quantity of returned change. No. It’s not. Let’s see why.
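To make the metric concrete, suppose every amount of change from 0¢ to 99¢ is equally likely (an assumption for this sketch, not necessarily the model used in the full post). Averaging the greedy counts over those hundred amounts gives 4.7 coins:

#include <iostream>

int main()
{
    int total = 0;
    for (int a = 0; a < 100; a++)
        // quarters + dimes + nickels + pennies, greedily:
        total += a / 25 + a % 25 / 10 + a % 25 % 10 / 5 + a % 5;
    std::cout << "average: " << total / 100.0 << " coins\n";  // 4.7
    return 0;
}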



Euclid and Primality Testing (III)

July 2, 2013

So in previous installments, we looked at how to use the Euclidean algorithm to devise a Las Vegas-type algorithm for primality testing. We also found that, in general, simply testing factors one after the other is much more efficient (but that doesn’t mean there are no better algorithms to test for primality!).

[Image: Euclid of Alexandria]

We also considered only relatively small primorials (the products of the first n prime numbers), since they rapidly exceed 2^{32}. But just how fast do primorials grow?
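A quick sketch that multiplies out the first few primorials shows the blow-up: the product of the first ten primes already exceeds 2^{32} (64-bit arithmetic holds comfortably up to the twelfth):

#include <cstdint>
#include <iostream>

int main()
{
    const int primes[] = {2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37};
    const uint64_t two_32 = uint64_t(1) << 32;

    uint64_t primorial = 1;
    int n = 0;
    for (int p : primes)
    {
        primorial *= p;
        n++;
        std::cout << "first " << n << " primes: " << primorial
                  << (primorial > two_32 ? "  > 2^32" : "") << '\n';
    }
    return 0;
}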
