Computing Binomials (Part I)

13/08/2013

We often start thinking about something, make some progress, and eventually realize it’s not going anywhere, or, anyway, the results aren’t very phrasmotic—certainly not what you hoped for. Well, this week’s post is one of those experiments.


So I was wondering, how can I compute the binomial coefficient, \binom{n}{k}, and minimize the size of the integers involved?
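Minimizing the intermediate values suggests something like the multiplicative formula, evaluated so that every intermediate value is itself a binomial coefficient, and therefore no larger than the final answer. A minimal sketch (not necessarily the solution this post explores):

```cpp
#include <cstdint>

// Computes C(n, k) with the multiplicative formula. After the i-th
// iteration, c holds C(n-k+i, i), so the division is always exact.
uint64_t binomial(uint64_t n, uint64_t k)
{
    if (k > n - k)
        k = n - k; // exploit the symmetry C(n, k) = C(n, n-k)

    uint64_t c = 1;
    for (uint64_t i = 1; i <= k; i++)
        c = c * (n - k + i) / i; // c becomes C(n-k+i, i), an integer
    return c;
}
```

Even so, the product c * (n - k + i) can overflow before the division brings it back down, which is precisely the kind of problem the question above is about.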

Read the rest of this entry »


The conjecture of 8

23/07/2013

The other day I found an amusing short video from Numberphile about “happy numbers”. Not unlike the Collatz problem, the question is whether a number, through successive transformations, eventually reaches one; if it does, the number is happy. The transformation is rather numerological in nature (i.e., it’s arbitrary, and so why not): to get the next number in the series to happiness, you take the current number and sum the squares of its digits.

The example in the video is that 7 is a happy number:

7 \to 7^2=49

49 \to 4^2+9^2=97

97 \to 9^2+7^2=130

130 \to 1^2+3^2+0^2=10

10 \to 1^2+0^2=1

Therefore 7 is a happy number. That’s cool, but are all numbers happy, and, if not, what happens when a number is unhappy?
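The transformation is easy to code; a minimal sketch, with a set to detect when the sequence falls into a cycle instead of reaching 1:

```cpp
#include <set>

// Sum of the squares of the digits of n.
int digit_square_sum(int n)
{
    int s = 0;
    for (; n > 0; n /= 10)
    {
        int d = n % 10;
        s += d * d;
    }
    return s;
}

// A number is happy if iterating digit_square_sum reaches 1; an
// unhappy number instead revisits a value it has already produced,
// which the set detects.
bool is_happy(int n)
{
    std::set<int> seen;
    while (n != 1 && seen.insert(n).second)
        n = digit_square_sum(n);
    return n == 1;
}
```

is_happy(7) follows exactly the chain shown above; an unhappy number such as 4 loops forever through the same values, and the set catches the repeat.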

Read the rest of this entry »


π by rejection

16/07/2013

In the 18th century, Georges-Louis Leclerc, Count of Buffon, proposed an experiment to estimate \pi. The experiment (now somewhat famous, as it appears in almost all probability textbooks) consists in randomly dropping matches on a floor made of parallel floorboards and using the number of times the matches land across a junction to estimate \pi.


To perform the Count’s experiment, you do not need a lot of math. You only need to test if

\displaystyle x\leqslant\frac{w}{2}\sin\alpha

with x\sim\mathcal{U}(0,w) and \alpha\sim\mathcal{U}(0,\frac{\pi}{2}) both uniform random variables, and w the width of the floorboards. You may remark that we use \frac{\pi}{2}, and that it looks like a circular definition, until you remember that \frac{\pi}{2} radians is just 90°, a right angle, which you can obtain by other means. Then you throw zillions of virtual matches, count the number of intersections, and use a laboriously obtained probability distribution to estimate \pi.
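In code, the experiment is only a few lines. A sketch of the simulation as set up above; with these distributions the test succeeds with probability \frac{1}{2}E[\sin\alpha]=\frac{1}{\pi}, so \pi is estimated by the number of throws divided by the number of hits (w cancels out of the test, so we take w=1):

```cpp
#include <cmath>
#include <random>

// Throws n virtual matches and counts how many satisfy
// x <= (w/2) sin(alpha), with x ~ U(0,w) and alpha ~ U(0,pi/2).
// Note the circularity pointed out above: we need a right angle,
// asin(1), to generate alpha in the first place.
double buffon_pi(int n, unsigned seed = 42)
{
    std::mt19937 gen(seed);
    std::uniform_real_distribution<double> x(0.0, 1.0);
    std::uniform_real_distribution<double> alpha(0.0, std::asin(1.0));

    int hits = 0;
    for (int i = 0; i < n; i++)
        if (x(gen) <= 0.5 * std::sin(alpha(gen)))
            hits++;
    return double(n) / hits;
}
```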

Luckily for us, there’s a variant that requires neither calls to sines nor complicated integrals. Just squaring, counting, and a division. Let’s have a look.
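Presumably the variant in question is the classic rejection method: draw points uniformly in the unit square and count the fraction that falls inside the quarter disc, which estimates \frac{\pi}{4}. A sketch:

```cpp
#include <random>

// Draws n points uniformly in the unit square and counts those inside
// the quarter disc x^2 + y^2 <= 1; the proportion estimates pi/4.
// Just squaring, counting, and a division: no sines needed.
double rejection_pi(int n, unsigned seed = 42)
{
    std::mt19937 gen(seed);
    std::uniform_real_distribution<double> u(0.0, 1.0);

    int inside = 0;
    for (int i = 0; i < n; i++)
    {
        double x = u(gen), y = u(gen);
        if (x * x + y * y <= 1.0)
            inside++;
    }
    return 4.0 * inside / n;
}
```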

Read the rest of this entry »


Strange Change

08/07/2013

A classical example of a greedy algorithm is the algorithm to return change given a set of face values. In Canada (and the US) the usual coins are 25¢, 10¢, 5¢, and 1¢ coins (there’s also a 50¢ coin, but they’re rare nowadays). The algorithm proceeds as follows. You are given a number of cents to return, and you start with the highest face value, v_1. You subtract v_1 from the number of cents to return while that number is greater than or equal to v_1. If the number of cents still to return is smaller than v_1 but not zero, you repeat with the next denomination, v_2, then, if change still remains, with v_3, and so on. Eventually, you return all the change, using a minimal number of coins.
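In code, the repeated subtractions for each denomination collapse into a single division; a minimal sketch of the greedy algorithm just described:

```cpp
#include <cstddef>
#include <vector>

// Greedy change-making: denominations are assumed sorted in
// decreasing order; for each, take as many coins as fit in what
// remains. Returns how many coins of each denomination to give back.
std::vector<int> make_change(int cents, const std::vector<int> &denoms)
{
    std::vector<int> counts(denoms.size(), 0);
    for (std::size_t i = 0; i < denoms.size() && cents > 0; i++)
    {
        counts[i] = cents / denoms[i]; // all the subtractions at once
        cents -= counts[i] * denoms[i];
    }
    return counts;
}
```

For 68¢ with {25, 10, 5, 1}, this returns two quarters, one dime, one nickel, and three pennies: seven coins.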


So the next interesting question is whether our current coinage is optimal, as measured by the average number of coins returned as change. No. It’s not. Let’s see why.

Read the rest of this entry »


Euclid and Primality Testing (III)

02/07/2013

So in previous installments, we looked at how to use the Euclidean algorithm to devise a Las Vegas-type algorithm for primality testing. We also found that, in general, simply testing divisors one after the other is much more efficient (but that doesn’t mean there aren’t better algorithms to test for primality!).


We also considered only relatively small primorials (the products of the first n prime numbers) since they rapidly exceeded 2^{32}. But just how fast do primorials grow?
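To get a feel for their growth, we can hard-code the first few primes and multiply (a quick sketch):

```cpp
#include <cstdint>

// The n-th primorial, the product of the first n primes, for n up to
// 10. The 9th, 223092870, still fits in 32 bits; the 10th,
// 6469693230, already exceeds 2^32.
uint64_t primorial(int n)
{
    static const uint64_t primes[] = {2, 3, 5, 7, 11, 13, 17, 19, 23, 29};
    uint64_t p = 1;
    for (int i = 0; i < n && i < 10; i++)
        p *= primes[i];
    return p;
}
```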

Read the rest of this entry »


Euclid, Prime numbers, and a Bit of Algorithm Analysis

25/06/2013

Last time we had a look at using the greatest common divisor and Euclid’s algorithm to devise a Las Vegas algorithm for primality testing. We also had a look at how the inclusion-exclusion principle helps us determine the proportion of the numbers correctly tested.
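As a reminder, the test itself is tiny. A minimal sketch, assuming n is larger than every prime in the product (here the primorial of the first nine primes, which still fits in 32 bits):

```cpp
#include <cstdint>
#include <numeric> // std::gcd (C++17)

// gcd-based Las Vegas primality test (sketch, for n > 23): if n
// shares a factor with the product of the first nine primes, it is
// certainly composite; otherwise it has no prime factor <= 23 and
// is only declared "probably prime".
bool probably_prime(uint64_t n)
{
    const uint64_t P = 223092870ull; // 2*3*5*7*11*13*17*19*23
    return std::gcd(n, P) == 1;
}
```

Composites whose smallest factor exceeds 23, such as 899 = 29 × 31, slip through: that is where the inclusion-exclusion count comes in.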


However, we finished by asking ourselves if the method is actually efficient compared to, say, simply testing small divisors, one by one. Let us now find out.
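The baseline competitor is plain trial division, stopping at \sqrt{n}; a sketch:

```cpp
#include <cstdint>

// Trial division: test small divisors one by one. We only need to go
// up to sqrt(n), since a composite n has a divisor no larger than its
// square root.
bool is_prime_trial(uint64_t n)
{
    if (n < 2)
        return false;
    if (n % 2 == 0)
        return n == 2;
    for (uint64_t d = 3; d * d <= n; d += 2)
        if (n % d == 0)
            return false;
    return true;
}
```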

Read the rest of this entry »


Fast Path Finding (part II)

18/06/2013

Last week we had a first look at a simple path-finding algorithm with applications to games. This week, let us have a look at the relative performance of the implementations considered: a naïve C# implementation that scans the whole map multiple times; a C# implementation that uses lists to maintain and visit only the border of the solved region (affording massive speed-ups, one hopes); and C++ versions of these two variants, plus a C++ variant that uses sets (as bitmaps) to maintain the border list.


In any quantitative study, the first thing to do is to collect data. The best way is to automate the collection by, for example, adding a function that puts the player in all legal positions on the map, computes all the shortest paths, and measures how long it takes. One may also consider disabling the on-demand power policy, as the CPU may (progressively?) jump into high gear as the data collection progresses, introducing bias.
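The harness need not be fancy; a minimal sketch of the timing part (names illustrative), subject to the caveats above about warm-up and CPU frequency scaling:

```cpp
#include <chrono>

// Returns the wall-clock time, in seconds, taken by a callable f.
// steady_clock is monotonic, so the difference is never negative.
// (A sketch: repeated runs and a warm-up pass would reduce bias.)
template <typename F>
double time_it(F f)
{
    auto start = std::chrono::steady_clock::now();
    f();
    auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration<double>(stop - start).count();
}
```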

Read the rest of this entry »


Fast Path Finding (part I)

11/06/2013

The other day in class I was telling my students that sometimes simple strategies simply do not cut it. As an easy-to-understand example, I discussed the behavior of baddies in video games. If your game asks for the baddies to chase the player, the first thing you might try is to move the baddies in the direction of the player, without any other sophistication.


So the algorithm in this case is that you move the baddie closer to the player (or try to, as there might be obstacles), and this translates to something like two lines of code:
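Something like this (a sketch; the names and the grid representation are illustrative, and obstacle tests are omitted):

```cpp
// Naive chase: step one cell toward the player on each axis. The
// expression (a > b) - (a < b) is the sign of a - b: -1, 0, or 1.
struct position { int x, y; };

position chase(position baddie, const position &player)
{
    baddie.x += (player.x > baddie.x) - (player.x < baddie.x);
    baddie.y += (player.y > baddie.y) - (player.y < baddie.y);
    return baddie; // returned by value to keep the sketch testable
}
```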

Read the rest of this entry »


Euclid, Prime numbers, and the Inclusion-Exclusion Principle

28/05/2013

The Euclidean algorithm, the algorithm to compute the greatest common divisor, or \gcd, of two numbers, dates back to antiquity (obviously). We can use it to make a fast test for primality, at least up to some confidence—and that’s where the inclusion-exclusion principle comes in.


Let us begin with the Euclidean algorithm. Originally, the algorithm proceeded by successive subtractions (which become quickly tedious as the numbers grow), but the modern version, at least the one I know of, uses divisions and remainders (modulo) and computes \gcd(a,b) of two numbers a and b in O(\lg\min(a,b)) steps (counting division as an O(1) operation), which is very fast.
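In code, the modern version is a short loop; a minimal sketch:

```cpp
#include <cstdint>

// Euclid's algorithm, division version: replace (a, b) by (b, a mod b)
// until b reaches zero; the last nonzero value is the gcd. The number
// of iterations is O(lg min(a, b)).
uint64_t gcd(uint64_t a, uint64_t b)
{
    while (b != 0)
    {
        uint64_t r = a % b;
        a = b;
        b = r;
    }
    return a;
}
```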

Read the rest of this entry »


Average node depth in a Full Tree

14/05/2013

While doing something else, I stumbled upon the interesting problem of computing the average depth of the nodes in a tree. The depth of a node is the distance that separates it from the root. You can either decide that the root is at depth 1, or that it is at depth 0, but let’s settle on depth 1. So an immediate child of the root is at depth 2, its children are at depth 3, and so on, until you reach the leaves, nodes with no children.


So the calculation of the average node depth (including leaves) in a tree becomes interesting when we want to know how far a constructed tree is from the ideal full tree, as a measure of (application-specific) performance. After searching a bit on the web, I found only incomplete or incorrect formulas, or formulas stated without proof. This week, let us see how we can derive the result without (too much) pain.
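For the full binary tree itself, the average is easy to compute by direct summation, since there are 2^{d-1} nodes at depth d and 2^h-1 nodes in total; a sketch to check any closed-form formula against:

```cpp
#include <cstdint>

// Average node depth in a full binary tree of height h, with the root
// at depth 1: sum d * 2^(d-1) over depths d = 1..h, divided by the
// total number of nodes, 2^h - 1.
double average_depth(int h)
{
    uint64_t sum = 0, nodes = 0;
    for (int d = 1; d <= h; d++)
    {
        uint64_t at_depth = uint64_t(1) << (d - 1); // 2^(d-1) nodes at depth d
        sum += uint64_t(d) * at_depth;
        nodes += at_depth;
    }
    return double(sum) / double(nodes);
}
```

For h=3, the sum is 1+4+12=17 over 7 nodes. As h grows, the average approaches h-1: in a full tree, most nodes sit near the bottom.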

Read the rest of this entry »