Bresenham’s Pie


Approximating \pi is always fun. Some approximations are well known, like Zu Chongzhi’s \frac{355}{113}, which is fairly precise (it gives 6 correct digits).

We can derive a good approximation using continued fractions, series, or some spigot algorithm. We can also use Monte Carlo methods: draw uniformly distributed points inside a square, count those that fall inside the inscribed circle, then use the ratio of inside points to total points to estimate \pi. This converges slowly. How do we evaluate the area of the circle, then? Well, we could do it exhaustively, but in a smart way.

Read the rest of this entry »

Binary Trees are optimal… except when they’re not.


The best case depth for a search tree is O(\log_b n), if b is the arity (or branching) of the tree. Intuitively, we know that if we increase b, the depth goes down, but is that all there is to it? And if not, how do we choose the optimal branching b?


While it’s true that an optimal search tree will have depth O(\log_b n), we can’t just increase b indefinitely, because we also increase the number of comparisons we must do in each node. A b-ary tree node will have (in the worst case) b-1 keys, and for each key we must test for lower-than and equal (the right-most child will be searched without comparison; it is the “else” of all the comparisons). We must first understand how these comparisons affect average search time.

Read the rest of this entry »



Preparing lecture notes (or videos) sometimes brings you to revisit things you’ve known for a while, but haven’t really taken time to formalize properly. One of those things is fixed-point arithmetic.

Read the rest of this entry »

Mirror, Mirror on the Tripod, Who’s the Fairest of them All?


This week, I’ll show my mirror assembly to reverse the image “in hardware” for the lightboard.

Read the rest of this entry »

Møre Lïtbørd


Last time, I gave the instructions on how to build a lightboard, but not much in terms of how you actually use it. Now that I’ve been giving lectures with it (with graduate students as test subjects) and have started recording for the undergrad courses, I’ve tweaked my setup and learned a few tricks. This week, I’ll discuss some of them.

Read the rest of this entry »



The C99 <stdint.h> header provides a plethora of type definitions for platform-independent, safe code: int_fast16_t, for example, provides an integer that plays well with the machine but has at least 16 bits. The int_fastxx_t and int_leastxx_t defines don’t guarantee a tight fit; they provide a machine-efficient fit: they find the fastest integer type for that machine that respects the constraint.

But let’s take the problem the other way around: what about defines that give you the smallest integer, not for a number of bits (because that’s trivial with intxx_t) but for the maximum value you need to represent?

Read the rest of this entry »

Lïtbørd (more than some assembly required)


As you may have noticed, a global pandemic got many of us working from home. While one can argue that you can do accounting from home, it’s a lot more complicated to teach from home. Pretty much everyone is trying to figure that one out. For my part, I decided that Zoom and its virtual whiteboard are not very interesting. Like many, I decided to use a lightboard.

So, the problem is, where do you get a lightboard on short notice? Well, you can build one. Let’s see how:

Read the rest of this entry »

Evaluating polynomials


Evaluating polynomials is not a thing I do very often. When I do, it’s for interpolation and splines; and traditionally those are done with relatively low degree polynomials—cubic at most. There are a few rather simple tricks you can use to evaluate them efficiently, and we’ll have a look at them.

Read the rest of this entry »

Factorial Approximations


n! (and its logarithm) keep showing up in the analysis of algorithms. Unfortunately, n! is very often unwieldy, and we use approximations of n! (or \log n!) to simplify things. Let’s examine a few!

Read the rest of this entry »

How many bits?


In this quarantine week, let’s answer a (not that) simple question: how many bits do you need to encode sound and images with a satisfying dynamic range?

Let’s see what hypotheses are useful, and how we can use them to get a good idea on the number of bits needed.

Read the rest of this entry »