13/08/2013
We often start thinking about something, make some progress, and eventually realize it’s not going anywhere, or, anyway, that the results aren’t very phrasmotic, certainly not what we hoped for. Well, this week’s post is one of those experiments.

So I was wondering, how can I compute the binomial coefficient, $\binom{n}{k}$, and minimize the size of the integers involved?
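As a point of comparison (not necessarily the approach taken in the full post), here is a minimal sketch that uses the multiplicative formula and divides before multiplying, so that no intermediate product ever grows larger than the binomial coefficient being built; the function name and types are mine:

#include <cstdint>
#include <iostream>
#include <numeric>   // std::gcd (C++17)

// C(n,k) = prod_{i=1..k} (n-k+i)/i. At each step we strip gcd(num, den)
// first; what remains of den is then coprime to num, so it must divide the
// partial product c (itself a binomial coefficient), and the division is exact.
uint64_t binomial(uint64_t n, uint64_t k)
{
    if (k > n) return 0;
    if (k > n - k) k = n - k;                 // C(n,k) == C(n,n-k)

    uint64_t c = 1;
    for (uint64_t i = 1; i <= k; i++)
    {
        uint64_t num = n - k + i, den = i;
        uint64_t g = std::gcd(num, den);
        num /= g;
        den /= g;
        c = (c / den) * num;                  // exact division, no remainder
    }
    return c;
}

int main()
{
    std::cout << binomial(50, 25) << std::endl;   // 126410606437752
    return 0;
}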
algorithms, Mathematics, programming | Tagged: Binomial, Factorial, numerical method |
Posted by Steven Pigeon
23/07/2013
The other day I found an amusing short from Numberphile about “happy numbers”. Not unlike the Collatz problem, a number is happy if, through successive transformations, it reaches one. The transformation is rather numerological in nature (i.e., it’s arbitrary, so why not): to get the next number in the series to happiness, you take the current number and sum the squares of its digits.
The example in the video is that 7 is a happy number:

$7^2 = 49$
$4^2 + 9^2 = 97$
$9^2 + 7^2 = 130$
$1^2 + 3^2 + 0^2 = 10$
$1^2 + 0^2 = 1$

Therefore 7 is a happy number. That’s cool, but are all numbers happy, and, if not, what happens when a number is unhappy?
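As an aside, here is a minimal sketch of a happiness test (not necessarily the post’s implementation): iterate the digit-squares step and detect repetition with Floyd’s tortoise-and-hare, since an unhappy number never reaches 1 but eventually cycles:

#include <iostream>

// Sum of the squares of the (base-10) digits of n: one "happiness" step.
unsigned digit_square_sum(unsigned n)
{
    unsigned s = 0;
    for (; n; n /= 10)
    {
        unsigned d = n % 10;
        s += d * d;
    }
    return s;
}

// Happy numbers reach the fixed point 1; unhappy numbers fall into a cycle.
// Floyd's cycle detection avoids storing the values already visited.
bool is_happy(unsigned n)
{
    unsigned slow = n, fast = n;
    do
    {
        slow = digit_square_sum(slow);
        fast = digit_square_sum(digit_square_sum(fast));
    }
    while (slow != fast);
    return slow == 1;
}

int main()
{
    for (unsigned i = 1; i <= 50; i++)
        if (is_happy(i))
            std::cout << i << ' ';          // 1 7 10 13 19 23 28 31 32 44 49
    std::cout << std::endl;
    return 0;
}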
algorithms, C, C-plus-plus, Mathematics, programming | Tagged: 8, Conjecture, Happy Numbers, number theory, numberphile |
Posted by Steven Pigeon
16/07/2013
In the 18th century, Georges-Louis Leclerc, Count of Buffon, proposed an experiment to estimate $\pi$. The experiment (now somewhat famous, as it appears in almost all probability textbooks) consists in randomly dropping matches on a floor made of parallel floorboards and using the number of times the matches land across a junction to estimate $\pi$.

To perform the Count’s experiment, you do not need a lot of math. You only need to test if

$x \leq \frac{t}{2} \sin \theta$

where $x$ (the distance from a match’s center to the nearest junction) and $\theta$ (the match’s angle) are both uniform random variables, and $t$ is the width of the floorboards. You may remark that we use $\pi$ (to draw the angle $\theta$) and that it looks like a circular definition until you think that $\pi/4$ radians is 45°, and you can estimate it using other means. Then you start throwing zillions of virtual matches and count the number of intersections, then use a laboriously obtained probability distribution to estimate $\pi$.
Lucky for us, there’s a variant that does not require calls to sines nor complicated integrals. Just squaring, counting, and a division. Let’s have a look.
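Presumably the variant in question is the dart-throwing estimator suggested by the tags; assuming so, a minimal sketch looks like this: draw points uniformly in the unit square and count how many fall inside the quarter disk, a fraction that converges to $\pi/4$.

#include <iostream>
#include <random>

// "Darts" estimator: uniform points in the unit square, counting those inside
// the quarter disk x^2 + y^2 <= 1; the hit fraction tends to pi/4.
int main()
{
    std::mt19937_64 gen(12345);                         // fixed seed for reproducibility
    std::uniform_real_distribution<double> u(0.0, 1.0);

    const long long throws = 10000000;
    long long hits = 0;
    for (long long i = 0; i < throws; i++)
    {
        double x = u(gen), y = u(gen);
        if (x * x + y * y <= 1.0)
            hits++;
    }
    std::cout << 4.0 * hits / throws << std::endl;      // close to 3.14159...
    return 0;
}

The error of such an estimator shrinks only as $1/\sqrt{n}$, so millions of throws buy just a handful of correct digits.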
algorithms, Mathematics | Tagged: 3.141592, BBP, Buffon's needle, darts, π, Plouffe, spigot algorithm |
Posted by Steven Pigeon
08/07/2013
A classical example of a greedy algorithm is the algorithm to return change given a set of face values. In Canada (and the US), the usual coins are the 25¢, 10¢, 5¢, and 1¢ pieces (there’s also a 50¢ coin, but they’re rare nowadays). The algorithm proceeds as follows. You are given a number of cents to return, and you start with the highest face value, 25¢. You subtract 25¢ from the number of cents to return while that number is greater than or equal to 25¢. If the number of cents still to return is smaller than 25¢ but not zero, you repeat using the next denomination, 10¢, then, if there’s still some change to return, with 5¢, and so on. Eventually, you return all the change, using a minimal number of coins.
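As a quick illustration, the greedy procedure just described translates into a few lines of code (the function and names below are mine, not the post’s):

#include <iostream>
#include <vector>

// Greedy change-making for the denominations described above: repeatedly
// subtract the largest face value that still fits.
std::vector<int> make_change(int cents)
{
    const int faces[] = { 25, 10, 5, 1 };   // largest to smallest
    std::vector<int> coins;
    for (int f : faces)
        while (cents >= f)
        {
            coins.push_back(f);
            cents -= f;
        }
    return coins;
}

int main()
{
    for (int c : make_change(68))           // 25 25 10 5 1 1 1
        std::cout << c << ' ';
    std::cout << std::endl;
    return 0;
}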

So the next interesting question is whether our current coinage is optimal, as measured by the (average) number of coins returned as change. No. It’s not. Let’s see why.
algorithms, Mathematics, wtf | Tagged: cent, cents, change, coin, dollar, face value, greedy algorithms, strange |
Posted by Steven Pigeon
02/07/2013
So in previous installments, we looked at how to use the Euclidean algorithm to devise a Las Vegas-type algorithm for primality testing. We also found that, in general, simply testing factors one after the other is much more efficient (but that doesn’t mean that there are not better algorithms to test for primality!).

We also considered only relatively small primorials (the products of the first $n$ prime numbers), since they rapidly exceeded the range of native 64-bit integers. But just how fast do primorials grow?
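A quick sketch (not from the post) multiplies successive primes until the product no longer fits in an unsigned 64-bit integer; the primorial of the first 15 primes still fits, but the 16th prime, 53, pushes it past $2^{64}$:

#include <cstdint>
#include <iostream>

// Simple-minded primality test, good enough for the small primes we multiply.
bool is_prime(uint64_t n)
{
    if (n < 2) return false;
    for (uint64_t d = 2; d * d <= n; d++)
        if (n % d == 0)
            return false;
    return true;
}

int main()
{
    uint64_t primorial = 1;
    int n = 0;                                   // primes multiplied so far
    for (uint64_t p = 2; ; p++)
        if (is_prime(p))
        {
            if (primorial > UINT64_MAX / p)      // next multiplication would overflow
            {
                std::cout << "prime #" << (n + 1) << " (" << p
                          << ") overflows 64 bits" << std::endl;
                break;
            }
            primorial *= p;
            n++;
            std::cout << n << "\t" << p << "\t" << primorial << std::endl;
        }
    return 0;
}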
algorithms, Mathematics, programming | Tagged: GCD, prime, primorials, relatively prime, SAGE |
Posted by Steven Pigeon
25/06/2013
Last time we had a look at using the greatest common divisor and Euclid’s algorithm to devise a Las Vegas algorithm for primality testing. We also had a look at how the inclusion-exclusion principle helps us determine the proportion of the numbers correctly tested.

However, we finished by asking ourselves if the method is actually efficient compared to, say, simply testing small divisors, one by one. Let us now find out.
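For reference, the baseline in question, trial division by small candidate divisors tested one by one, might be sketched like this (the post’s exact variant may differ in details):

#include <cstdint>
#include <iostream>

// Trial division: test candidate divisors up to sqrt(n). After checking 2 and
// 3, only numbers of the form 6k +/- 1 can be prime divisors, so step by 6.
bool is_prime(uint64_t n)
{
    if (n < 2) return false;
    if (n < 4) return true;                       // 2 and 3
    if (n % 2 == 0 || n % 3 == 0) return false;
    for (uint64_t d = 5; d * d <= n; d += 6)
        if (n % d == 0 || n % (d + 2) == 0)
            return false;
    return true;
}

int main()
{
    const uint64_t tests[] = { 1, 2, 91, 97 };
    for (uint64_t n : tests)
        std::cout << n << (is_prime(n) ? " is prime" : " is not prime") << std::endl;
    return 0;
}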
algorithms, Mathematics, programming | Tagged: algorithm analysis, Euclid, GCD, prime, relatively prime |
Posted by Steven Pigeon
18/06/2013
Last week we had a first look at a simple path-finding algorithm with application to games. This week, let us have a look at the relative performance of the implementations considered: a naïve C# implementation that scans the whole map multiple times; a C# implementation that uses lists to maintain and visit only the border of the solved region (affording massive speed-ups, one hopes); and C++ versions of these two variants, plus a C++ variant that uses sets (as bitmaps) to maintain the border list.

In any quantitative study, the first thing to do is to collect data. The best way is to automate the collection by, for example, adding a function that puts the player in all legal positions on the map, computes all the shortest paths, and measures how long it takes. One may also consider disabling the on-demand power policy, as the CPU may jump (progressively?) into high gear as the data collection progresses, introducing bias.
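A minimal sketch of such a timing harness, where solve_all_positions() is a hypothetical stand-in for the real workload of visiting every legal position and computing all shortest paths:

#include <algorithm>
#include <chrono>
#include <iostream>

// Placeholder workload: burns a predictable amount of CPU so the harness has
// something to time. Replace with the real "solve every position" routine.
void solve_all_positions()
{
    volatile unsigned long sink = 0;
    for (unsigned long i = 0; i < 50000000UL; i++)
        sink += i;
}

int main()
{
    using clock = std::chrono::steady_clock;

    double best = 1e300;
    for (int run = 0; run < 5; run++)
    {
        auto start = clock::now();
        solve_all_positions();
        std::chrono::duration<double> elapsed = clock::now() - start;
        best = std::min(best, elapsed.count());   // best-of-n damps system noise
    }
    std::cout << "best of 5 runs: " << best << " s" << std::endl;

    return 0;
}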
algorithms, C-plus-plus, csharp, embedded programming, hacks, programming | Tagged: Comparing speed-ups, Map, optimizations, quantitative approach, shortest path, speed-up |
Posted by Steven Pigeon
11/06/2013
The other day in class I was telling my students that sometimes simple strategies simply do not cut it. As an easy-to-understand example, I discussed the behavior of baddies in video games. If your game asks for the baddies to chase the player, the first thing you might try is to move the baddies in the direction of the player, without any other sophistication.

So the algorithm in this case is that you move the baddie closer to the player (or try to, as there might be obstacles), and this translates to something like two lines of code:
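A minimal sketch of what those two lines might look like, assuming a tile map with integer coordinates; the position struct and function names below are illustrative, not the post’s:

// The baddie steps one tile toward the player on each axis and ignores
// obstacles entirely, which is exactly why the strategy eventually breaks down.
struct position { int x, y; };

int sign(int v) { return (v > 0) - (v < 0); }

void chase_naively(position &baddie, const position &player)
{
    baddie.x += sign(player.x - baddie.x);   // the promised "two lines":
    baddie.y += sign(player.y - baddie.y);   // step toward the player on x and y
}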
algorithms, Artificial Intelligence, C-plus-plus, csharp, embedded programming, hacks, programming | Tagged: baddies, Dynamic Programming, Map, maps, real time, tiles, video games |
Posted by Steven Pigeon
28/05/2013
The Euclidean algorithm, the algorithm to compute the greatest common divisor, or $\gcd$, of two numbers, dates back to antiquity (obviously). We can use it to make a fast test for primality, at least up to some confidence, and that’s where the inclusion-exclusion principle comes in.

Let us begin with the Euclidean algorithm. Originally, the algorithm was done by successive subtraction (and becomes quickly tedious as numbers grow), but the modern version, at least the one that I know of, uses divisions and remainders (modulo), and computes the $\gcd$ of two numbers $a$ and $b$ in $O(\log \min(a,b))$ divisions (counting division as an $O(1)$ operation), which is very fast.
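For reference, the division-based version is only a few lines; a minimal sketch:

#include <cstdint>
#include <iostream>

// Euclid's algorithm with remainders: replace (a, b) by (b, a mod b) until the
// remainder is zero; the last non-zero value is the gcd.
uint64_t gcd(uint64_t a, uint64_t b)
{
    while (b != 0)
    {
        uint64_t r = a % b;
        a = b;
        b = r;
    }
    return a;
}

int main()
{
    std::cout << gcd(462, 1071) << std::endl;   // 21
    return 0;
}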
algorithms, Mathematics | Tagged: Euclid, GCD, Inclusion Exclusion Principle |
Posted by Steven Pigeon
14/05/2013
While doing something else I stumbled upon the interesting problem of computing the average depth of nodes in a tree. The depth of a node is the distance that separates that node from the root. You can either decide that the root is at depth 1, or you can decide that it is at depth zero, but let’s decide on depth 1. So an immediate child of the root is at depth 2, its children are at depth 3, and so on until you reach leaves, nodes with no children.
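Before deriving anything, the average depth can always be measured directly by a traversal; a minimal sketch using the root-at-depth-1 convention above (the node type is illustrative, not the post’s):

#include <cstddef>
#include <iostream>
#include <memory>

struct node
{
    std::unique_ptr<node> left, right;
};

// Accumulates the number of nodes visited and the sum of their depths.
void walk(const node *n, std::size_t depth, std::size_t &count, std::size_t &total)
{
    if (!n) return;
    count++;
    total += depth;
    walk(n->left.get(),  depth + 1, count, total);
    walk(n->right.get(), depth + 1, count, total);
}

double average_depth(const node &root)
{
    std::size_t count = 0, total = 0;
    walk(&root, 1, count, total);                    // root is at depth 1
    return double(total) / count;
}

int main()
{
    // a three-node tree: root plus two children; depths 1, 2, 2 -> average 5/3
    node root;
    root.left  = std::make_unique<node>();
    root.right = std::make_unique<node>();
    std::cout << average_depth(root) << std::endl;   // 1.66667
    return 0;
}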

So the calculation of the average node depth (including leaves) in a tree becomes interesting when we want to know how far a constructed tree is from the ideal full tree, as a measure of (application-specific) performance. After searching a bit on the web, I found only incomplete or incorrect formulas, or formulas stated without proof. This week, let us see how we can derive the result without (too much) pain.
algorithms, data structures, Mathematics | Tagged: average depth, Binary Tree, full tree, path, path depth, Tree |
Posted by Steven Pigeon