Euclid, Prime numbers, and the Inclusion-Exclusion Principle

28/05/2013

The Euclidean algorithm, the algorithm to compute the greatest common divisor, or \gcd, of two numbers, dates back to antiquity (obviously). We can use it to make a fast test for primality, at least up to some confidence—and that’s where the inclusion-exclusion principle comes in.

[Image: Euclid of Alexandria]

Let us begin with the Euclidean algorithm. Originally, the algorithm was performed by successive subtraction (which quickly becomes tedious as numbers grow), but the modern version, at least the one that I know of, uses division and remainders (modulo), and computes the \gcd(a,b) of two numbers a and b in O(\lg\min(a,b)) steps (counting division as an O(1) operation), which is very fast.
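The division-based version fits in a few lines. A quick sketch in Python (the function name is mine):

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm with remainders: gcd(a, b) = gcd(b, a mod b),
    repeated until the second argument reaches zero."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(1071, 462))  # 21
```

Each iteration at least halves the smaller argument every two steps, which is where the O(\lg\min(a,b)) bound comes from.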

Read the rest of this entry »


Average node depth in a Full Tree

14/05/2013

While doing something else I stumbled upon the interesting problem of computing the average depth of nodes in a tree. The depth of a node is the distance that separates that node from the root. You can either decide that the root is at depth 1, or you can decide that it is at depth zero, but let's decide on depth 1. So an immediate child of the root is at depth 2, and its children at depth 3, and so on until you reach the leaves, nodes with no children.
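As a concrete warm-up (my own sketch, taking a full *binary* tree of height h for concreteness), a full tree has 2^{d-1} nodes at depth d, so the average depth can be computed by direct summation:

```python
def average_depth(h: int) -> float:
    """Average node depth in a full binary tree of height h,
    with the root at depth 1 (so 2**(d-1) nodes at depth d)."""
    nodes = 2**h - 1                                   # total node count
    total = sum(d * 2**(d - 1) for d in range(1, h + 1))
    return total / nodes

print(average_depth(3))  # 17/7, about 2.43
```

For h=3 that is (1·1 + 2·2 + 3·4)/7 = 17/7; as h grows, the average approaches h-1, since most nodes sit near the leaves.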

[Image: tree diagram]

So the calculation of the average node depth (including leaves) in a tree becomes interesting when we want to know how far a constructed tree is from the ideal full tree, as a measure of (application-specific) performance. After searching a bit on the web, I found only incomplete or incorrect formulas, or formulas stated without proof. This week, let us see how we can derive the result without (too much) pain.

Read the rest of this entry »


A Special Case…

26/03/2013

Expressions with floors and ceilings (\lfloor x \rfloor and \lceil y \rceil) are usually troublesome to work with. There are cases where you can essentially remove them by a change of variables.

[Image: stairs]

Turns out, one form that regularly comes up in my calculations is \lfloor \lg x \rfloor, and it bugged me for a while before I figured out the right way of getting rid of it (sometimes).
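Incidentally, for positive integers \lfloor \lg x \rfloor needs no floating point at all: it is just the position of the highest set bit. A small Python note (the function name is mine):

```python
def floor_lg(x: int) -> int:
    """floor(lg x) for a positive integer x, computed exactly:
    the index of the highest set bit, with no floating point."""
    if x <= 0:
        raise ValueError("x must be positive")
    return x.bit_length() - 1

print(floor_lg(1), floor_lg(1023), floor_lg(1024))  # 0 9 10
```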

Read the rest of this entry »


Compressed Series (Part II)

12/03/2013

Last week we looked at an alternative series to compute e, and this week we will have a look at the computation of e^x. The usual series we learn in calculus textbooks is given by

\displaystyle e^x=\sum_{n=0}^\infty \frac{x^n}{n!}

We can factorize the expression as
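The excerpt cuts off here, but one plausible factorization (my guess at where the post is headed) is the Horner-style nested form 1 + x(1 + \frac{x}{2}(1 + \frac{x}{3}(\cdots))), which needs no factorials or powers. A sketch:

```python
import math

def exp_series(x: float, n_terms: int = 30) -> float:
    """e**x via the Taylor series in nested (Horner) form:
    1 + x(1 + x/2(1 + x/3(...))) -- no factorials, no powers,
    evaluated from the innermost term outward."""
    acc = 1.0
    for k in range(n_terms, 0, -1):
        acc = 1.0 + acc * x / k
    return acc

print(abs(exp_series(1.0) - math.e) < 1e-12)  # True
```

Unrolling two terms gives 1 + x(1 + x/2) = 1 + x + x^2/2, matching the truncated series, so the nesting is term-by-term equivalent.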

Read the rest of this entry »


Compressed Series (Part I)

05/03/2013

Numerical methods are generally rather hard to get right because of error propagation due to the limited precision of floats (or even doubles). This seems to be especially true with methods involving series, where a usually large number of ever-diminishing terms must be added to attain something that looks like convergence. Even fairly simple series such as

\displaystyle e=\sum_{n=0}^\infty \frac{1}{n!}

may be complex to evaluate. First, n! is cumbersome, and \displaystyle \frac{1}{n!} becomes small exceedingly rapidly.
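One common way around the cumbersome n! (a standard trick, not necessarily the one the post develops) is to build each term from the previous one, since \frac{1}{n!} = \frac{1}{(n-1)!}\cdot\frac{1}{n}:

```python
import math

def approx_e(n_terms: int = 20) -> float:
    """Sum 1/n! term by term: each term is the previous one divided
    by n, so n! is never formed explicitly."""
    total, term = 1.0, 1.0          # the n = 0 term
    for n in range(1, n_terms):
        term /= n                   # 1/n! from 1/(n-1)!
        total += term
    return total

print(abs(approx_e() - math.e) < 1e-12)  # True
```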

Read the rest of this entry »


On rectangles

19/02/2013

While reading Cryptography: The Science of Secret Writing [1], I came across the author's remark (p. 36, made without substantiating it further) that the number of possible distinct rectangles composed of n squares (a regular grid) is quite limited.

[Image: Mondrian, Composition with Red, Yellow, and Blue]

Unsubstantiated (or, more exactly, undemonstrated) claims usually bug me if the demonstration is not immediate. In this case, I found the solution fairly rapidly, but I would have liked something less vague than “increased in size does not necessarily provide greater possibilities for combination.” Let’s have a look at that problem.
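The count itself is just a divisor-pairing argument: an a \times b rectangle of n unit squares needs ab = n, so distinct rectangles (with a \leqslant b) correspond to divisors a of n with a^2 \leqslant n. A quick check (my own sketch):

```python
def rectangle_count(n: int) -> int:
    """Distinct a-by-b rectangles (a <= b) tiled by exactly n unit
    squares: one per divisor a of n with a*a <= n."""
    return sum(1 for a in range(1, n + 1) if n % a == 0 and a * a <= n)

print([rectangle_count(n) for n in (12, 36, 37)])  # [3, 5, 1]
```

So n = 12 admits only 1×12, 2×6, and 3×4, and a prime n admits a single degenerate strip, which is indeed "quite limited".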

Read the rest of this entry »


Crates of Bottles

05/02/2013

The other day I was looking at a crate of 12 water bottles, and noticed the wasted space between them. Why not use a honeycomb? Or even other bottle shapes?

Well, let us have a mathematical look at bottle packing. A round bottle occupies a surface (at the bottom of the box) of S=\pi r^2 (for some radius r) in a square of 4r^2. The occupancy is therefore
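Completing the arithmetic the excerpt begins (standard values, not taken from the post): dividing the disc's area by its bounding square's area gives

\displaystyle \frac{\pi r^2}{4r^2} = \frac{\pi}{4} \approx 0.7854,

that is, roughly 21% of the crate's floor is wasted; for comparison, hexagonal (honeycomb) packing of discs achieves a density of \displaystyle \frac{\pi}{2\sqrt{3}} \approx 0.9069.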

Read the rest of this entry »


Damn you, Napier!

15/01/2013

Briggs‘ logarithm (often mistakenly understood to be Napier‘s logarithm) is such a useful function that most of us don’t really think about it, until we have to. Everyone’s familiar with its properties:

\displaystyle\log_b a = \frac{\log a}{\log b}

\log b^x = x \log b

\log ab = \log a + \log b

\displaystyle\log \frac{a}{b} = \log a - \log b

but suddenly,

\log (a+b) = ?

What can we do with this last one?
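One standard answer (my note here, assuming a, b > 0 with a the larger term) factors the larger term out of the sum:

\displaystyle \log(a+b) = \log\left(a\left(1+\frac{b}{a}\right)\right) = \log a + \log\left(1 + \frac{b}{a}\right)

so only the logarithm of a value close to 1 remains to be evaluated.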

Read the rest of this entry »


Fast Interpolation (Interpolation, part V)

03/07/2012

In the last four installments of this series, we have seen linear interpolation, cubic interpolation, Hermite splines, and lastly cardinal splines.

In this installment (which should be the last of the series, at least for a while), let us have a look at how we can implement these interpolations efficiently.
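One basic efficiency device (a guess at the post's direction, but standard practice) is evaluating the interpolating cubic in Horner form, using three multiplications and no explicit powers:

```python
def cubic_horner(t: float, a: float, b: float, c: float, d: float) -> float:
    """Evaluate a + b*t + c*t**2 + d*t**3 in Horner form:
    three multiplies and three adds, no explicit powers of t."""
    return a + t * (b + t * (c + t * d))

print(cubic_horner(2.0, 1.0, 2.0, 3.0, 4.0))  # 1 + 4 + 12 + 32 = 49.0
```

The coefficients a..d can be precomputed once per patch, so evaluating many points along the same patch costs only the Horner step each time.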

Read the rest of this entry »


Cardinal Splines (Interpolation, part IV)

26/06/2012

In the last installment of this series, we left off with Hermite splines, asking how we should choose the derivatives at the end points so that patches line up nicely, in a visually (or any other context-specific criterion) pleasing way.

Cardinal splines solve part of this problem quite elegantly. I say part of the problem because they address only the first derivative, ensuring that the curves resulting from neighboring patches are C^0-continuous, that is, the patches line up at the same point, and C^1-continuous, that is, the first derivatives line up as well. We can imagine splines that are (up to) C^k-continuous, that is, patches lining up, with up to the k-th derivatives lining up as well.
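Concretely, a cardinal spline takes the tangent at each point from its two neighbors, scaled by a tension parameter c: m_i = (1-c)\frac{p_{i+1}-p_{i-1}}{2}, with c = 0 giving the well-known Catmull-Rom spline. A scalar sketch (one coordinate; apply per component for points), plugging these tangents into the Hermite basis:

```python
def cardinal_patch(p0, p1, p2, p3, t, tension=0.0):
    """Hermite patch between p1 and p2, tangents taken from the
    neighbors; tension 0 gives the Catmull-Rom spline."""
    m1 = (1 - tension) * (p2 - p0) / 2   # tangent at p1
    m2 = (1 - tension) * (p3 - p1) / 2   # tangent at p2
    t2, t3 = t * t, t * t * t
    return ((2 * t3 - 3 * t2 + 1) * p1 + (t3 - 2 * t2 + t) * m1
            + (-2 * t3 + 3 * t2) * p2 + (t3 - t2) * m2)

# The endpoints are interpolated exactly:
print(cardinal_patch(0.0, 1.0, 2.0, 3.0, 0.0))  # 1.0
print(cardinal_patch(0.0, 1.0, 2.0, 3.0, 1.0))  # 2.0
```

Because consecutive patches share the same tangent at their common point, the pieces join with matching first derivatives, which is exactly the C^1 continuity described above.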

Read the rest of this entry »