More often than I’d like, simple tasks turn out to be not that simple. For example, displaying (beautifully) a binary tree for debugging purposes in a terminal. Of course, one could still use lots of parentheses, but that does not lend itself to a very fast assessment of the tree. We could, however, use box drawing characters, as in DOS’s goode olde tymes, since they’re now part of the Unicode standard.
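A minimal sketch of the idea, in Python and with a made-up Node class rather than anything from the post, could look like this:

```python
# A sketch (not the post's code): print a binary tree with Unicode
# box-drawing characters instead of nested parentheses.
class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def show(node, prefix="", is_last=True):
    """Print node and its descendants, one per line, with box-drawing connectors."""
    if node is None:
        return
    print(prefix + ("└─ " if is_last else "├─ ") + str(node.value))
    child_prefix = prefix + ("   " if is_last else "│  ")
    children = [c for c in (node.left, node.right) if c is not None]
    for i, child in enumerate(children):
        show(child, child_prefix, i == len(children) - 1)

if __name__ == "__main__":
    tree = Node(8, Node(3, Node(1), Node(6)), Node(10, None, Node(14)))
    show(tree)
```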
The infinite radicals of Ramanujan
29/11/2016
What if I asked you to find the numerical value of

$\sqrt{1+2\sqrt{1+3\sqrt{1+4\sqrt{1+\cdots}}}}$ ?

If you have difficulty figuring out what this thing is, don’t worry. You’re not the only one. This question is one of the problems posed by Srinivasa Ramanujan, one of the most brilliant and, well, mysterious mathematicians of all time. In fact, it was hard enough that Ramanujan had to give the answer a few months later.
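Assuming that is indeed the radical in question (the classic one Ramanujan posed in the Journal of the Indian Mathematical Society), its value follows from the telescoping identity $n+2=\sqrt{1+(n+1)(n+3)}$, applied one level at a time:

$3=\sqrt{1+2\cdot 4}=\sqrt{1+2\sqrt{1+3\cdot 5}}=\sqrt{1+2\sqrt{1+3\sqrt{1+4\cdot 6}}}=\cdots=\sqrt{1+2\sqrt{1+3\sqrt{1+4\sqrt{1+\cdots}}}}$.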
Tweet time!
22/11/2016I’ve been using twitter for about five years, and I wondered if my use of it changed over time, and more precisely, linked to my wake/sleep cycle. That’s fortunately kind of simple to check because you can simply request your whole Twitter archive, delivered as a plain CSV File! Let’s see how we can juice it.
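A minimal sketch of the kind of juicing involved; the file name (tweets.csv), the timestamp column, and its format are assumptions about the export, not taken from the post:

```python
# A sketch (not the post's code): bucket tweets by hour of day from a
# Twitter archive CSV. File name, column name, and timestamp format
# are assumptions about the export.
import csv
from collections import Counter
from datetime import datetime

def tweets_per_hour(path="tweets.csv"):
    hours = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # assumed format: "2016-11-22 13:37:00 +0000"; keep only the date and time
            when = datetime.strptime(row["timestamp"][:19], "%Y-%m-%d %H:%M:%S")
            hours[when.hour] += 1
    return hours

if __name__ == "__main__":
    hours = tweets_per_hour()
    for hour in range(24):
        print(f"{hour:02d}h {hours[hour]}")
```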
Square Roots (Part V)
15/11/2016
Last week we examined the complexity of obtaining $k$ in the decomposition $n = m\,4^k$, for some integer $k$. This week, we’ll have a look at how we can use it to speed up square root extraction. Recall, we’re interested in $k$ because

$\sqrt{n} = \sqrt{m\,4^k} = 2^k\sqrt{m}$,

with $1 \leq m < 4$, which allows us to get easy bounds on $\sqrt{n}$. Better, we also have that

$2^k \leq \sqrt{n} < 2^{k+1}$,

and we know how to compute $k$ (somewhat) efficiently! Let’s combine all this and see how it speeds up things.
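A sketch of how such bounds can kick-start Newton’s method for the integer square root; the decomposition above is my reconstruction, so take the names and details as illustrative:

```python
# A sketch (not the post's code): with 4^k <= n < 4^(k+1), we get
# 2^k <= sqrt(n) < 2^(k+1), and the upper bound seeds Newton's method.
def isqrt_newton(n: int) -> int:
    if n < 2:
        return n
    k = (n.bit_length() - 1) // 2      # largest k with 4^k <= n
    x = 1 << (k + 1)                   # start at the upper bound 2^(k+1)
    while True:
        y = (x + n // x) // 2          # Newton/Heron step
        if y >= x:
            return x
        x = y

if __name__ == "__main__":
    for n in (1, 2, 15, 16, 17, 123456789, 10**18):
        s = isqrt_newton(n)
        assert s * s <= n < (s + 1) * (s + 1)
        print(n, s)
```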
Square Roots (part IV)
08/11/2016
In a previous post, we noticed that

$\sqrt{n} = 2^k\sqrt{m}$,

where $n = m\,4^k$ with $1 \leq m < 4$, could be used to kick-start Newton’s (or another) square root finding algorithm. But how fast can we find $m$ and $k$ in this decomposition?
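A sketch of one way to find $m$ and $k$, assuming the decomposition is the $n = m\,4^k$ one reconstructed above; the position of the leading bit does most of the work:

```python
# A sketch (not the post's code): k comes from the position of the
# leading bit of n, and m is what remains once 4^k is divided out.
def decompose(n: int):
    """Return (m, k) such that n = m * 4**k and 1 <= m < 4."""
    k = (n.bit_length() - 1) // 2   # a count-leading-zeros in hardware
    return n / 4**k, k

if __name__ == "__main__":
    for n in (1, 7, 16, 1000, 123456789):
        m, k = decompose(n)
        assert 1 <= m < 4
        print(f"{n} = {m} * 4^{k}")
```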
Regula Falsi
01/11/2016
The regula falsi, or method of false position, is a method to solve an equation in one unknown from an initial “guess”. Guesses are progressively refined, by a rather simple method as we will see, until the exact answer is reached, or until convergence. The method is useful when you don’t really know how to divide by arbitrary values (as was the case in Ancient Times) or when the equation is cumbersome.
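A minimal sketch of the method, not the post’s code:

```python
# A sketch of the method of false position: solve f(x) = 0 on [a, b],
# assuming f(a) and f(b) have opposite signs.
def regula_falsi(f, a, b, tol=1e-12, max_iter=100):
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        # the secant through (a, f(a)) and (b, f(b)) crosses zero at c
        c = b - fb * (b - a) / (fb - fa)
        fc = f(c)
        if abs(fc) < tol:
            return c
        if fa * fc < 0:         # keep the half that still brackets the root
            b, fb = c, fc
        else:
            a, fa = c, fc
    return c

if __name__ == "__main__":
    # root of x^2 - 2 on [1, 2], i.e., sqrt(2)
    print(regula_falsi(lambda x: x * x - 2, 1.0, 2.0))
```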
Horner’s Method
25/10/2016
It’s not like evaluating a polynomial is something that comes up that frequently in the average programmer’s day. In fact, almost never, but it brings up a number of interesting questions, such as: how do we evaluate it very efficiently, and how much of a simplification in the computation is actually a simplification?

The generic polynomial has the form

$p(x) = a_n x^n + a_{n-1} x^{n-1} + \cdots + a_1 x + a_0$.

If we evaluate this naïvely, we end up doing $O(n^2)$ multiplies and $n$ additions. Even using the fast exponentiation algorithm, we still use $O(n \log n)$ multiplies. But we can, very easily, get down to $n$.
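A minimal sketch of Horner’s scheme, which reaches that count of $n$ multiplies and $n$ additions:

```python
# A sketch of Horner's method: evaluate
# p(x) = a_n x^n + ... + a_1 x + a_0 with n multiplies and n additions.
def horner(coeffs, x):
    """coeffs = [a_n, a_{n-1}, ..., a_1, a_0], highest degree first."""
    result = 0
    for a in coeffs:
        result = result * x + a
    return result

if __name__ == "__main__":
    # p(x) = 2x^3 - 6x^2 + 2x - 1, evaluated at x = 3
    print(horner([2, -6, 2, -1], 3))   # 5
```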
Fractional Bits (Part III)
18/10/2016
A long time ago, we discussed the problem of using fractions of bits to encode values jointly without wasting too many bits. We considered only special cases then, though: what if we need to shift precisely by, say, half a bit?

But what does shifting by half a bit even mean?
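A sketch of the joint-encoding idea from the earlier posts, as I read them (the example values are mine): values that can take three states carry about 1.585 bits each, so five of them fit in 8 bits instead of the 10 bits needed with two bits apiece:

```python
# A sketch (my illustration, not the post's code) of fractional bits
# via joint encoding: five base-3 digits fit in 8 bits (3^5 = 243 <= 256).
import math

def pack(digits, base=3):
    """Pack digits (each in [0, base)) into a single integer."""
    code = 0
    for d in digits:
        code = code * base + d
    return code

def unpack(code, count, base=3):
    digits = []
    for _ in range(count):
        code, d = divmod(code, base)
        digits.append(d)
    return digits[::-1]

if __name__ == "__main__":
    digits = [2, 0, 1, 1, 2]
    code = pack(digits)
    print(code, f"{math.log2(3):.3f} bits/value", unpack(code, 5))
    assert unpack(code, 5) == digits
```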
Count like an Egyptian (Part III)
11/10/2016
In previous posts, I’ve looked into unit fractions, also known as “Egyptian fractions”, fractions where the numerator is 1 and the denominator $n$, some integer. We have noticed that finding a good representation for a fraction $a/b$ as a sum of unit fractions is not as trivial as it may first seem. Indeed, we noticed that the greedy algorithm, the one that always picks the largest unit fraction that does not exceed the rest as the next term, may lead to long series of unit fractions, sometimes with very large denominators, even for simple fractions.

In a previous post, I suggested decomposing fractions in an “easy way”, which was often better than the greedy algorithm. Surely, we can do better, but how?
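A minimal sketch of the greedy algorithm described above (not the post’s code):

```python
# A sketch of the greedy (Fibonacci-Sylvester) decomposition:
# repeatedly take the largest unit fraction not exceeding what remains.
from fractions import Fraction
from math import ceil

def greedy_egyptian(a, b):
    """Decompose a/b (with 0 < a/b < 1) into distinct unit fractions."""
    rest = Fraction(a, b)
    denominators = []
    while rest > 0:
        d = ceil(1 / rest)          # smallest d with 1/d <= rest
        denominators.append(d)
        rest -= Fraction(1, d)
    return denominators

if __name__ == "__main__":
    # denominators can blow up quickly, even for simple-looking fractions
    print(greedy_egyptian(5, 121))
```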
Yet Another Square Root Algorithm (part III)
04/10/2016
Finding good, fast, and stable numerical algorithms is usually hard, even for simple problems like the square root, a subject we’ve visited a number of times. One method we haven’t explored yet is using the Taylor series. The Taylor series for $\sqrt{x}$ around a point $a > 0$ is given by

$\sqrt{x} = \sqrt{a} + \dfrac{x-a}{2\sqrt{a}} - \dfrac{(x-a)^2}{8\sqrt{a^3}} + \dfrac{(x-a)^3}{16\sqrt{a^5}} - \cdots$

Can we exploit that to compute square roots efficiently, and with some precision?
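A sketch of the idea, under the assumption that the series above is the one meant: pick an expansion point $a$ whose square root is already known, and sum a few terms:

```python
# A sketch (my illustration, not necessarily the post's algorithm):
# truncated Taylor series of sqrt(x) around a point a with known sqrt(a),
# written as sqrt(a) * sqrt(1 + u) with u = (x - a)/a.
def taylor_sqrt(x: float, a: float, sqrt_a: float, terms: int = 8) -> float:
    """Approximate sqrt(x) with `terms` Taylor terms around a (needs |x - a| < a)."""
    u = (x - a) / a
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= u * (0.5 - n) / (n + 1)   # next binomial term C(1/2, n+1) * u^(n+1)
    return sqrt_a * total

if __name__ == "__main__":
    import math
    # expand around a = 64, whose square root (8) we know exactly
    for x in (50.0, 60.0, 70.0, 80.0):
        print(x, taylor_sqrt(x, 64.0, 8.0), math.sqrt(x))
```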

Posted by Steven Pigeon