28/05/2013
The Euclidean algorithm, the algorithm to compute the greatest common divisor (GCD) of two numbers, dates back to antiquity (obviously). We can use it to make a fast test for primality, at least up to some confidence, and that's where the inclusion-exclusion principle comes in.

Let us begin with the Euclidean algorithm. Originally, the algorithm was done by successive subtraction (which becomes quickly tedious as numbers grow), but the modern version, at least the one that I know of, uses divisions and remainders (modulo), and computes the GCD of two numbers $a$ and $b$ in $O(\log \min(a,b))$ steps (counting division as an $O(1)$ operation), which is very fast.
algorithms, Mathematics | Tagged: Euclid, GCD, Inclusion Exclusion Principle
Posted by Steven Pigeon
14/05/2013
While doing something else I stumbled upon the interesting problem of computing the average depth of nodes in a tree. The depth of a node is the distance that separates that node from the root. You can either decide that the root is at depth 1, or you can decide that it is at depth 0, but let's decide on depth 1. So an immediate child of the root is at depth 2, its children are at depth 3, and so on until you reach leaves, nodes with no children.

So the calculation of the average node depth (including leaves) in a tree becomes interesting when we want to know how far a constructed tree is from the ideal full tree, as a measure of (application-specific) performance. After searching a bit on the web, I found only incomplete or incorrect formulas, or formulas stated without proof. This week, let us see how we can derive the result without (too much) pain.
algorithms, data structures, Mathematics | Tagged: average depth, Binary Tree, full tree, path, path depth, Tree
Posted by Steven Pigeon
12/03/2013
Last week we looked at an alternative series to compute $e$, and this week we will have a look at the computation of $e^x$. The usual series we learn in calculus textbooks is given by

$e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!}.$

We can factorize the expression as

$e^x = 1 + x\left(1 + \frac{x}{2}\left(1 + \frac{x}{3}\left(1 + \cdots\right)\right)\right).$
algorithms, C, C-plus-plus, C99, Mathematics | Tagged: convergence, exp, series
Posted by Steven Pigeon
05/03/2013
Numerical methods are generally rather hard to get right because of error propagation due to the limited precision of floats (or even doubles). This seems to be especially true with methods involving series, where a usually large number of ever diminishing terms must be added to attain something that looks like convergence. Even fairly simple series such as

$e = \sum_{n=0}^{\infty} \frac{1}{n!}$

may be complex to evaluate. First, $n!$ is cumbersome, and $\frac{1}{n!}$ becomes small exceedingly rapidly.
algorithms, bit twiddling, hacks, Mathematics | Tagged: convergence, e, Euler, series
Posted by Steven Pigeon
19/02/2013
While reading Cryptography: The Science of Secret Writing [1], I came across a remark by the author (p. 36, offered without further substantiation) that the number of possible distinct rectangles composed of $n$ squares (a regular grid) is quite limited.

Unsubstantiated (or, more exactly, undemonstrated) claims usually bug me if the demonstration is not immediate. In this case, I determined the solution fairly rapidly, but I guess I would have liked something less vague than "increased in size does not necessarily provide greater possibilities for combination." Let's have a look at that problem.
Mathematics | Tagged: combinatorics, Cryptography, Integer factorization, number theory
Posted by Steven Pigeon
15/01/2013
Briggs' logarithm (often mistakenly understood to be Napier's logarithm) is such a useful function that most of us don't really think about it, until we have to. Everyone's familiar with its properties:

$\log xy = \log x + \log y,\quad \log \frac{x}{y} = \log x - \log y,\quad \log x^y = y \log x,$

(1)

but suddenly,

$\log (x + y) = \;?$

What can we do with this last one?
algorithms, Mathematics | Tagged: Briggs, Logarithm, Napier, Numerical Approximation
Posted by Steven Pigeon
03/07/2012
In the last four installments of this series, we have seen linear interpolation, cubic interpolation, Hermite splines, and lastly cardinal splines.

In this installment (which should be the last of the series, at least for a while), let us have a look at how we can implement these interpolations efficiently.
algorithms, Mathematics, programming | Tagged: ATLAS, Cubic Interpolation, Matrix, OpenBlas, Vector
Posted by Steven Pigeon
26/06/2012
In the last installment of this series, we left off with Hermite splines, asking how we should choose the derivatives at the end points so that patches line up nicely, in a pleasing way, visually or by any other context-specific criterion.

Cardinal splines solve part of this problem quite elegantly. I say part of the problem because they address only the first derivative, ensuring that the curves resulting from neighboring patches are $C^0$-continuous, that is, the patches line up at the same point, and $C^1$-continuous, that is, the first derivatives line up as well. We can imagine splines that are (up to) $C^n$-continuous, that is, patches lining up, with derivatives up to the $n$-th lining up as well.
algorithms, Mathematics, programming | Tagged: C-Continuous, Cardinal Spline, Hermite Spline, interpolation, Spline
Posted by Steven Pigeon