Hermite Splines (Interpolation, part III)

12/06/2012

In previous posts, we discussed linear and cubic interpolation. So let us continue where we left off in the last entry: cubic interpolation does not guarantee that neighboring patches join with the same derivative, and that may introduce unwanted artifacts.

Well, the importance of those artifacts may vary; but we seem to be rather sensitive to curves that change too abruptly, or in unexpected ways. One way to ensure that cubic patches meet gracefully is to add the constraint that the derivatives be equal on both sides of a joint. Hermite splines do just that.
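To make this concrete, here is a minimal sketch of a single cubic Hermite patch built from the standard Hermite basis polynomials (the function and variable names are mine, not the post's):

```python
def hermite(p0, p1, m0, m1, t):
    """Cubic Hermite patch on t in [0,1]: interpolates p0 at t=0 and
    p1 at t=1, with prescribed derivatives m0 and m1 at the endpoints.
    Sharing m1 of one patch as m0 of the next makes joints C^1."""
    h00 = 2 * t**3 - 3 * t**2 + 1   # weight of p0
    h10 = t**3 - 2 * t**2 + t       # weight of m0
    h01 = -2 * t**3 + 3 * t**2      # weight of p1
    h11 = t**3 - t**2               # weight of m1
    return h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1
```

Because neighboring patches reuse the same endpoint derivative, the slope cannot jump at a joint.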

Read the rest of this entry »


Cubic Interpolation (Interpolation, part II)

29/05/2012

In a previous entry, we had a look at linear interpolation and concluded that we should prefer some kind of smooth interpolation, maybe a polynomial.

However, we must use a polynomial of sufficient degree so that neighboring patches do not exhibit very different slopes on either side of known points. This pretty much rules out quadratic polynomials, because polynomials of the form ax^2+bx+c can only express (strictly) convex or concave patches. A quadratic piecewise function would look something like:
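A quick way to see the limitation: the second derivative of ax^2+bx+c is the constant 2a, so a single quadratic patch is convex everywhere or concave everywhere; it cannot flex. A tiny check (the sample values here are arbitrary, chosen only for illustration):

```python
import numpy as np

# Fit a quadratic through three sample points and inspect its curvature.
xs = np.array([0.0, 1.0, 2.0])
ys = np.array([0.0, 1.0, 0.5])       # arbitrary sample values
a, b, c = np.polyfit(xs, ys, 2)      # y = a*x^2 + b*x + c

# The second derivative is 2a everywhere: the whole patch is convex
# (a > 0) or concave (a < 0) -- it cannot change curvature sign.
second_derivative = 2 * a
```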

Read the rest of this entry »


Fast Fibonacci

22/05/2012

The Fibonacci numbers are very interesting for many reasons, from seed heads to universal coding (such as Taboo Codes). Just figuring out how to compute them efficiently is plenty of fun.

The classical formulation of the Fibonacci numbers is given by the following recurrence:
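The recurrence in question is the usual F(0)=0, F(1)=1, F(n)=F(n-1)+F(n-2). As a teaser, here is a fast-doubling sketch, one of several ways to get F(n) in O(log n) multiplications (the post's own approach may differ):

```python
def fib_doubling(n):
    """Compute F(n) with the fast-doubling identities:
    F(2k)   = F(k) * (2*F(k+1) - F(k))
    F(2k+1) = F(k)^2 + F(k+1)^2
    """
    def go(k):
        # Returns the pair (F(k), F(k+1)).
        if k == 0:
            return (0, 1)
        f, g = go(k // 2)        # f = F(k//2), g = F(k//2 + 1)
        a = f * (2 * g - f)      # F(2 * (k//2))
        b = f * f + g * g        # F(2 * (k//2) + 1)
        if k % 2 == 0:
            return (a, b)
        return (b, a + b)
    return go(n)[0]
```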

Read the rest of this entry »


Linear Interpolation (Interpolation, part I)

15/05/2012

On a couple of occasions I discussed the topic of interpolation without really going into the details. Lately, I had to interpolate data, and that got me interested (again) in interpolation; I think I should share some of the things I learned.

In this first post (of a series), let us begin with the simplest interpolation of all (after constant interpolation): linear interpolation.
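For reference, linear interpolation blends the two nearest known samples; a minimal sketch (the helper names are mine):

```python
import bisect

def lerp(y0, y1, t):
    """Linearly interpolate between y0 (at t=0) and y1 (at t=1)."""
    return (1.0 - t) * y0 + t * y1

def interp_linear(xs, ys, x):
    """Piecewise-linear interpolation of the samples (xs[i], ys[i]);
    xs is assumed sorted and x within [xs[0], xs[-1]]."""
    i = bisect.bisect_right(xs, x) - 1
    i = min(max(i, 0), len(xs) - 2)   # clamp to a valid segment
    t = (x - xs[i]) / (xs[i + 1] - xs[i])
    return lerp(ys[i], ys[i + 1], t)
```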

Read the rest of this entry »


Faster Collatz

01/05/2012

Quite a while ago, I presented the Collatz conjecture and I was then interested in the graphical representation of the problem—and not really going anywhere with it.

In this entry, let us have a look at the implementation of the Collatz function.
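As a starting point, here is the straightforward implementation one would speed up (a sketch; the names are mine):

```python
def collatz(n):
    """One application of the Collatz function."""
    return n // 2 if n % 2 == 0 else 3 * n + 1

def total_stopping_time(n):
    """Number of applications of collatz() until n reaches 1
    (this terminates for every n anyone has ever tried)."""
    steps = 0
    while n != 1:
        n = collatz(n)
        steps += 1
    return steps
```

One easy observation for speed-ups: when n is odd, 3n+1 is necessarily even, so the following halving can be merged into the odd step.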

Read the rest of this entry »


Introduction to Gradient Descent

24/04/2012

In a previous post, I presented (generalized) linear regression from a linear algebra point of view (namely, the “normal equations”) and in this post, I discuss yet another take on the problem, this time using gradient descent.

Gradient descent isn’t particularly fascinating for this task (as we know closed, analytical expressions for obtaining the parameters), but linear regression is the simplest non-trivial example of gradient descent I could come up with.
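A minimal sketch of the idea, assuming a squared-error cost (the learning rate and iteration count below are arbitrary choices for the demo, not values from the post):

```python
import numpy as np

def gd_linreg(X, y, lr=0.1, iters=1000):
    """Fit y ~ X @ w by gradient descent on the squared error
    J(w) = (1/2m) * ||X @ w - y||^2, whose gradient is
    (1/m) * X.T @ (X @ w - y)."""
    m, n = X.shape
    w = np.zeros(n)
    for _ in range(iters):
        grad = X.T @ (X @ w - y) / m
        w -= lr * grad           # step against the gradient
    return w

# Tiny demo: recover intercept 1 and slope 2 from noiseless data.
xs = np.linspace(0.0, 1.0, 20)
X = np.column_stack([np.ones_like(xs), xs])   # design matrix [1, x]
y = 1.0 + 2.0 * xs
w = gd_linreg(X, y, lr=0.5, iters=5000)
```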

Read the rest of this entry »


New Way of Computing Square Roots?

17/04/2012

I sometimes have quite nerdy readings. As of late, I’ve been reading Le fabuleux destin de \sqrt{2} (The Fabulous Destiny of \sqrt{2}, one might translate) by Benoît Rittaud. This book is a treasure trove of \sqrt{2} facts, and one caught my eye more than the others so far: an iterative method to compute square roots of any (positive) number.

When the method is first presented, he leaves it to the reader to find a proof (though he gives one several chapters later), but let’s see what we can find.
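For comparison, the best-known iterative scheme is Heron's (Babylonian) method; note this is a classic baseline, not necessarily the method Rittaud presents:

```python
def heron_sqrt(n, iters=20):
    """Heron's (Babylonian) iteration for sqrt(n), n > 0:
    x <- (x + n/x) / 2, i.e., Newton's method on x^2 - n = 0.
    NOTE: a classic baseline, not necessarily the book's method."""
    x = n if n >= 1 else 1.0   # any positive starting guess works
    for _ in range(iters):
        x = 0.5 * (x + n / x)
    return x
```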

Read the rest of this entry »


Finding Collisions

30/03/2012

Some time ago, a friend was trying to find an efficient way (storage and time complexity) to find collisions for a (secure) hash function. Hashing random keys until you get a collision is reminiscent of von Mises’ birthday paradox.

In its simplest form, the birthday paradox states that, amongst (randomly) gathered people, the probability that (at least) two share a birthday increases counter-intuitively fast with the number of gathered people. This means that even if your hash function is random (i.e., strong), you may not have to try a very large number of random keys before finding collisions. Or does it?
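The standard approximation makes "counter-intuitively fast" quantitative: after k draws from N equally likely values, the collision probability is roughly 1 - exp(-k(k-1)/(2N)), so a 50% chance needs only about 1.18·sqrt(N) draws. A sketch:

```python
import math

def collision_probability(k, N):
    """Approximate probability that k values drawn uniformly from N
    possibilities contain at least one collision (birthday bound)."""
    return 1.0 - math.exp(-k * (k - 1) / (2.0 * N))

def draws_for_half(N):
    """Roughly how many draws give a 50% chance of a collision:
    k ~ sqrt(2 * ln(2) * N), about 1.18 * sqrt(N)."""
    return math.sqrt(2.0 * math.log(2.0) * N)
```

For birthdays (N = 365) this gives the famous 23 people; for a 64-bit hash it is only a few billion draws.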

Read the rest of this entry »


Drawing Random Numbers from Disjoint Ranges (Generating Random Sequences V)

20/03/2012

In a number of previous installments of this series, I’ve discussed permutations, generating uniform random points in a triangle or on a sphere, the inversion method, and even recycling expensive random bits. In this entry, I will use the rejection method to generate random numbers from disjoint ranges.

For simplicity, let us consider only the uniform case, where all values are equally likely to be drawn. So instead of drawing a number x from a single range, say [1,10], we’re interested in the case where the set is composed of several intervals, something like [1,10] \cup [20,30] \cup [40,50]. We may think of drawing uniformly from [1,50] and retrying if we “fall in the gaps”, that is, if we draw a number in [11,19] or [31,39].

At first, it doesn’t seem like a bad plan, especially if the “holes” are rather small and easy to miss—that is, if we have a good chance of hitting an allowable range in very few tries. But if the ranges we’re interested in look like [1,10] and [95,100], drawing from [1,100] really doesn’t sound like a good idea.
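The retry scheme just described can be sketched in a few lines (using the example ranges from above; here about 32 of every 50 draws are accepted):

```python
import random

RANGES = [(1, 10), (20, 30), (40, 50)]   # inclusive intervals

def draw_rejection(ranges=RANGES):
    """Draw uniformly over the union of the ranges by sampling the
    full span and retrying on a miss (the 'fall in the gaps' case)."""
    lo, hi = ranges[0][0], ranges[-1][1]
    while True:
        x = random.randint(lo, hi)        # inclusive on both ends
        if any(a <= x <= b for a, b in ranges):
            return x
```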

But what if we compacted the ranges?

Read the rest of this entry »


Quadrature Encoding

06/03/2012

The nice thing about the trigonometric circle is that it is, indeed, a circle. But that is also a disadvantage when you consider the implementation of functions manipulating angles, especially \mathrm{mod}~2\pi.

This is particularly a problem when you’re not so much interested in a specific angle (which is always well-defined) as in a series of angles varying in time. If you’re certain that your angle always stays within \pm\pi or within [0,2\pi], you do not expect problems. What if, on the contrary, the angle wraps around?
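The classic symptom is comparing two angles on either side of the wrap point: their raw difference is nearly 2\pi when the true angular difference is tiny. A standard fix (a sketch, not necessarily the post's solution) wraps the difference back into (-\pi, \pi]:

```python
import math

def angle_diff(a, b):
    """Signed difference a - b, wrapped to (-pi, pi]; the safe way to
    compare two angles that may have wrapped around 2*pi."""
    d = math.fmod(a - b, 2.0 * math.pi)
    if d <= -math.pi:
        d += 2.0 * math.pi
    elif d > math.pi:
        d -= 2.0 * math.pi
    return d
```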

Read the rest of this entry »