Whatever sums your floats

24/01/2017

While flipping the pages of a “Win this interview” book—just being curious, not looking for a new job—I saw this seemingly simple question: how would you compute the sum of a series of floats contained in an array? The book proceeded with the simple, obvious answer. But… is it that obvious?
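For reference, the “obvious” answer is a plain accumulation loop; Kahan’s compensated summation is one of the better-behaved alternatives. A minimal sketch of both (the function names are mine, not the book’s):

#include <cstddef>

// Naïve summation: rounding error accumulates, especially once the
// running sum grows much larger than the next term.
float naive_sum(const float *x, std::size_t n)
{
    float s = 0;
    for (std::size_t i = 0; i < n; i++)
        s += x[i];
    return s;
}

// Kahan (compensated) summation: keeps the error of each addition in a
// separate variable and re-injects it into the next term.
float kahan_sum(const float *x, std::size_t n)
{
    float s = 0, c = 0;
    for (std::size_t i = 0; i < n; i++)
    {
        float y = x[i] - c;
        float t = s + y;
        c = (t - s) - y;   // what was lost when adding y to s
        s = t;
    }
    return s;
}

(Beware that aggressive floating-point optimizations, -ffast-math and friends, may optimize the compensation away.)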



Choosing the Right Pseudoinverse

17/01/2017

On a number of previous occasions, I have used the pseudoinverse of a matrix to solve systems of equations and to do other things, such as channel mixing. However, the demonstration I gave before isn’t entirely correct. Let’s now see why it’s important to distinguish between a left and a right pseudoinverse.
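For concreteness, here are the usual formulas (my notation; the post may use another). For a matrix A with full column rank, the left pseudoinverse is

\displaystyle A^+_{\text{left}}=(A^TA)^{-1}A^T, \qquad A^+_{\text{left}}A=I,

while for a matrix with full row rank, the right pseudoinverse is

\displaystyle A^+_{\text{right}}=A^T(AA^T)^{-1}, \qquad AA^+_{\text{right}}=I.

The two coincide only when A is square and invertible; for a rectangular matrix, only one of the two products can be the identity, which is exactly why the distinction matters.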




Strings in C++ Switch/Case statements

10/01/2017

Something that used to bug me—used to, because I am so accustomed to working around it that I rarely notice the problem—is that in neither C nor C++ can you use strings (const char * or std::string) in a switch/case statement. Indeed, the switch/case statement works only on integral values (an enum, an integral type such as char or int, or an object type with an implicit cast to an integral type). But strings aren’t of integral types!

In pure C, we’re pretty much done for. The C preprocessor is too weak to help us build compile-time expressions out of strings (or, more exactly, const char *), and there isn’t much else in the language to help us. However, things are a bit different in C++.
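To give a taste of what C++ allows (a minimal sketch, not necessarily the technique the post settles on): a constexpr hash turns a string literal into an integral constant, which can then serve as a case label. The hash (FNV-1a) and the names below are mine:

#include <cstdint>
#include <string>

// FNV-1a, written so it can be evaluated at compile-time.
constexpr std::uint64_t fnv1a(const char *s,
                              std::uint64_t h = 14695981039346656037ull)
{
    return *s ? fnv1a(s + 1, (h ^ static_cast<unsigned char>(*s)) * 1099511628211ull)
              : h;
}

void dispatch(const std::string &command)
{
    switch (fnv1a(command.c_str()))
    {
        case fnv1a("start"): /* ... */ break;
        case fnv1a("stop"):  /* ... */ break;
        default:             /* unknown command */ break;
    }
}

The compiler will reject two labels that hash to the same value, but a runtime string that accidentally hashes to one of the labels would be mis-dispatched, so a final comparison against the expected string is prudent.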



Logarithms (Part I?)

03/01/2017

The traditional—but certainly not the best—way to compute the value of the logarithm of some number x is to use a Taylor series, for example

\displaystyle \ln x = (x-1)-\frac{1}{2}(x-1)^2+\frac{1}{3}(x-1)^3-\frac{1}{4}(x-1)^4+\cdots

but that expansion is only valid for 0<x\leqslant 2, because it is the Taylor expansion of \ln x "around 1", and the radius of convergence of this particular expression isn't very large. Furthermore, it needs a great many terms before it converges.
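To see how slowly, here is a small illustrative routine (mine, not the post’s) that sums the series term by term; near the edge of the interval, say x = 1.9, it takes dozens of terms to get even a few correct digits:

#include <cmath>
#include <cstdio>

// Sums (x-1) - (x-1)^2/2 + (x-1)^3/3 - ... up to max_terms terms.
double ln_series(double x, int max_terms)
{
    double u = x - 1;
    double power = u;      // current power (x-1)^k
    double sum = 0;
    for (int k = 1; k <= max_terms; k++)
    {
        sum += (k % 2 ? power : -power) / k;
        power *= u;
    }
    return sum;
}

int main()
{
    int terms_list[] = {10, 50, 100};
    for (int terms : terms_list)
        std::printf("%3d terms: %.10f (log: %.10f)\n",
                    terms, ln_series(1.9, terms), std::log(1.9));
    return 0;
}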



Pretty Printing a Tree in a Terminal

06/12/2016

More often than I’d like, simple tasks turn out to be not that simple. For example, displaying (beautifully) a binary tree in a terminal for debugging purposes. Of course, one could still use lots of parentheses, but that does not lend itself to a very fast assessment of the tree. We could, however, use box-drawing characters, as in DOS’s goode olde tymes, since they’re now part of the Unicode standard.
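To give an idea of the ingredients (a small sketch of my own, with a made-up node type, certainly not the post’s final rendering): a recursive printer that grows a prefix of box-drawing characters as it descends.

#include <iostream>
#include <string>

struct node
{
    int value;
    node *left, *right;
};

// Prints one node per line; the prefix accumulates the vertical bars
// needed to connect a node to its ancestors.
void print_tree(const node *t, const std::string &prefix = "", bool last = true)
{
    if (!t) return;
    std::cout << prefix << (last ? "└── " : "├── ") << t->value << '\n';
    std::string child_prefix = prefix + (last ? "    " : "│   ");
    print_tree(t->left,  child_prefix, t->right == nullptr);
    print_tree(t->right, child_prefix, true);
}

int main()
{
    node c{3, nullptr, nullptr}, d{4, nullptr, nullptr}, e{5, nullptr, nullptr};
    node b{2, &c, &d};
    node a{1, &b, &e};
    print_tree(&a);
    return 0;
}

which prints something like

└── 1
    ├── 2
    │   ├── 3
    │   └── 4
    └── 5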




The infinite radicals of Ramanujan

29/11/2016

What if I asked you to find the numerical value of

\displaystyle \sqrt{1+2\sqrt{1+3\sqrt{1+4\sqrt{1+5\sqrt{1+6\sqrt{\cdots}}}}}} ?

If you have difficulty figuring out what this thing is, don’t worry. You’re not the only one. This question is one of the problems posed by Srinivasa Ramanujan, one of the most brilliant and, well, mysterious mathematicians of all time. In fact, it was hard enough that Ramanujan himself had to give the answer a few months later.
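One way to at least get a feel for the value (numerically, which is of course not Ramanujan’s argument) is to truncate the radical at some depth and evaluate it from the inside out; a small sketch, with names of my own:

#include <cmath>
#include <cstdio>

// Evaluates sqrt(1 + 2 sqrt(1 + 3 sqrt(1 + ... + depth * sqrt(1)))),
// innermost radical first.
double nested_radical(int depth)
{
    double v = 1;
    for (int k = depth; k >= 2; k--)
        v = std::sqrt(1 + k * v);
    return v;
}

int main()
{
    int depths[] = {5, 10, 20, 40};
    for (int d : depths)
        std::printf("depth %2d: %.12f\n", d, nested_radical(d));
    return 0;
}

The successive values settle very quickly onto the same number, a strong hint that the expression converges—and to something suspiciously simple.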



Square Roots (Part V)

15/11/2016

Last week we examined the complexity of obtaining k in the decomposition n=2^k+b for some integer n. This week, we’ll have a look at how we can use it to speed up square root extraction. Recall that we’re interested in k because

2^k \leqslant 2^k+b < 2^{k+1},

with 0 \leqslant b < 2^k, which allows us to get easy bounds on \sqrt{n}. Better, we also have that

\sqrt{2^k} \leqslant \sqrt{2^k+b} \leqslant \sqrt{2^{k+1}},

and we know how to compute \sqrt{2^k}=2^{\frac{k}{2}} (somewhat efficiently!). Let’s combine all this and see how it speeds things up.
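As a rough sketch of the idea (my own illustrative code, not the post’s): find k, take 2^{\lceil(k+1)/2\rceil} as an over-estimate of \sqrt{n}, and let Newton’s iteration finish the job.

#include <cstdint>

// Integer square root: the highest set bit gives a first guess of the
// right order of magnitude, Newton's iteration does the rest.
std::uint32_t isqrt(std::uint64_t n)
{
    if (n < 2) return static_cast<std::uint32_t>(n);

    int k = 63;                                // find k such that 2^k <= n < 2^(k+1)
    while (!(n >> k)) k--;

    std::uint64_t x = 1ull << ((k + 2) / 2);   // 2^ceil((k+1)/2) >= sqrt(n)

    for (;;)                                   // monotonically decreasing Newton steps
    {
        std::uint64_t y = (x + n / x) / 2;
        if (y >= x) return static_cast<std::uint32_t>(x);
        x = y;
    }
}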



Square Roots (part IV)

08/11/2016

In a previous post, we noticed that

\sqrt{2^k} \leqslant \sqrt{n} = \sqrt{2^k+b} \leqslant \sqrt{2^{k+1}},

where 0 \leqslant b < 2^k, could be used to kick-start Newton's (or another) square root finding algorithm. But how fast can we find k and b in this decomposition?
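For the impatient, a minimal sketch (mine) of the two obvious candidates: a shift-and-count loop, and the single-instruction route through std::bit_width (C++20) or a compiler intrinsic; b then falls out as n-2^k.

#include <bit>        // std::bit_width, C++20
#include <cstdint>

// Portable: O(k) shifts.
int highest_bit_slow(std::uint64_t n)     // n > 0
{
    int k = 0;
    while (n >>= 1) k++;
    return k;
}

// Usually a single machine instruction.
int highest_bit_fast(std::uint64_t n)     // n > 0
{
    return std::bit_width(n) - 1;
}

// With k in hand, the decomposition n = 2^k + b is immediate:
//   int k = highest_bit_fast(n);
//   std::uint64_t b = n - (1ull << k);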



Regula Falsi

01/11/2016

The regula falsi, or method of false position, is a method for solving an equation in one unknown, starting from an initial “guess”. The guess is progressively refined, by a rather simple rule as we will see, until the exact answer is found or the approximation has converged. The method is useful when you don’t really know how to divide by arbitrary values (as was the case in ancient times) or when the equation is cumbersome.
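A minimal sketch of the method, under my own naming (the post’s presentation may differ): keep a bracket [a,b] with f(a) and f(b) of opposite signs, and repeatedly replace one endpoint by the x-intercept of the secant through (a,f(a)) and (b,f(b)).

#include <cmath>
#include <cstdio>

// Method of false position; assumes f(a) and f(b) have opposite signs.
template <typename F>
double regula_falsi(F f, double a, double b, double tol = 1e-12, int max_iter = 100)
{
    double fa = f(a), fb = f(b), c = a;
    for (int i = 0; i < max_iter; i++)
    {
        c = (a * fb - b * fa) / (fb - fa);   // secant's x-intercept
        double fc = f(c);
        if (std::fabs(fc) < tol) break;
        if (fa * fc < 0) { b = c; fb = fc; } // root lies in [a, c]
        else             { a = c; fa = fc; } // root lies in [c, b]
    }
    return c;
}

int main()
{
    // the positive root of x^2 - 2, i.e., sqrt(2)
    std::printf("%.12f\n", regula_falsi([](double x) { return x * x - 2; }, 1.0, 2.0));
    return 0;
}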



Horner’s Method

25/10/2016

It’s not like evaluating polynomials is something that comes up that frequently in the average programmer’s day. In fact, almost never. But it brings up a number of interesting questions, such as: how do we evaluate a polynomial very efficiently, and how much of a simplification in the computation is actually a simplification?

The generic polynomial has the form

a_0+a_1x+a_2x^2+a_3x^3+\cdots+a_nx^n.

If we evaluate this naïvely, we end up doing O(n^2) multiplies and O(n) additions. Even using the fast exponentiation algorithm, we still use O(n \lg n) multiplies. But we can, very easily, get down to O(n).
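The O(n) trick is Horner’s rule: rewrite the polynomial as a_0+x(a_1+x(a_2+\cdots+x\,a_n)) and evaluate it from the inside out, one multiply and one add per coefficient. A minimal sketch:

#include <cstddef>

// Evaluates a[0] + a[1] x + ... + a[n] x^n with n multiplies and n adds.
double horner(const double a[], std::size_t n, double x)
{
    double r = a[n];
    for (std::size_t i = n; i-- > 0; )
        r = r * x + a[i];
    return r;
}

// e.g., double a[] = {1, 2, 3};  horner(a, 2, x) computes 1 + 2x + 3x^2.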
