This week, let’s discuss dithering, a technique based on error diffusion used in print and computer graphics to soften the effects of harsh color quantization. There are many types of dithering (sometimes all lumped together as halftoning, even though the techniques differ considerably), some optimized for monochrome or color screen printing, some merely convenient to use in computer graphics.


When we quantize an image to a very small number of colors, all kinds of artifacts appear: false contours where two very different colors ought to fade smoothly into one another, noticeable color banding, etc. Dithering tries to alleviate this problem by mixing colors using random-looking dot patterns, trading color resolution for sampling aliasing. Even though the image is composed of dots of very different colors, we still perceive the color mix, and that effect, while not adding any objective quality to the image, does augment its perceived quality.
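The post's own code isn't shown in this excerpt, but the idea of error diffusion is easy to sketch. Below is a minimal, hypothetical implementation of Floyd–Steinberg dithering (the classic error-diffusion scheme) for a grayscale image stored as a list of rows of values in 0–255; the function name and the grayscale assumption are mine, not the post's.

```python
def floyd_steinberg(img, levels=2):
    """Quantize a grayscale image (rows of values in 0..255) to
    `levels` evenly spaced levels, diffusing each pixel's
    quantization error to its unvisited neighbors."""
    h, w = len(img), len(img[0])
    out = [[float(v) for v in row] for row in img]
    step = 255 / (levels - 1)
    for y in range(h):
        for x in range(w):
            old = out[y][x]
            new = round(old / step) * step   # nearest allowed level
            out[y][x] = new
            err = old - new
            # Floyd-Steinberg weights: 7/16 right, 3/16 down-left,
            # 5/16 down, 1/16 down-right (errors at edges are dropped)
            if x + 1 < w:
                out[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x - 1 >= 0:
                    out[y + 1][x - 1] += err * 3 / 16
                out[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    out[y + 1][x + 1] += err * 1 / 16
    return [[int(v) for v in row] for row in out]
```

On a flat mid-gray image quantized to two levels, the output alternates black and white pixels in roughly equal proportion, which is exactly the "perceived mix" described above.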

Read the rest of this entry »

Suggested Reading: Demystifying Machine Intelligence


Piero Scaruffi — Demystifying Machine Intelligence (Why the Singularity is not coming anytime soon and other Meditations on the Post-Human Condition and the Future of Intelligence) — ISBN 978-0-9765531-9-9

(Buy from author)


The title of the book is as explicit as it gets: it’s a collection of meditations on the nature of artificial intelligence, on what it is not, and on what it may mean for us in the near future. It especially makes the case that intelligent machines (for whatever definition we want to give that term) aren’t making us smarter, but may actually make us more stupid, more dependent, and more easily influenced; and that the singularity isn’t coming anytime soon because there is a deep disconnect between raw computing power and the understanding necessary for it to happen.

Fast exponentiation


A while ago, we looked at the computation of the GCD and found that the simplest implementation was, by far, the fastest. This week, we’ll have a look at another numerical problem: fast exponentiation (modulo machine-size integers).


The exponentiation a^b looks like an expensive operation because, at first, we might think it needs b multiplications. Fortunately for us, thanks to the arithmetic of exponents, we can compute a^b in O(\lg b) steps, which is much, much faster, even when b is smallish. Let’s see how.
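The O(\lg b) bound comes from repeated squaring: write b in binary, square at every bit, and multiply in a when the bit is set. Here is a minimal sketch (the function name and the choice of reducing modulo 2^64 to model machine-size integers are my assumptions, not necessarily the post's exact formulation):

```python
def fast_pow(a, b, mod=2**64):
    """Compute a**b (mod `mod`) with O(lg b) multiplications.
    Scans b's bits from least to most significant, squaring the
    base at each step and multiplying it into the result when
    the current bit is set."""
    result = 1
    base = a % mod
    while b > 0:
        if b & 1:                      # this bit of b is set
            result = (result * base) % mod
        base = (base * base) % mod     # square for the next bit
        b >>= 1
    return result
```

For example, b = 20 is 10100 in binary, so only five squarings and two multiplies are needed instead of nineteen multiplications.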

Read the rest of this entry »

Launch of the Frances Wilson Scholarship for the Advancement of Science


The Fondation de l’UQAR is pleased to launch the brand-new competition for the Frances Wilson Scholarship for the Advancement of Science, worth $2500, made possible by the generosity of Steven Pigeon, professor of computer science at UQAR. The scholarship, awarded annually, recognizes the quality of publications by UQAR students, across all disciplines.

For more details:




Let’s consider a rather simple puzzle:


where each letter is assigned a value from 0 to 9. How would you solve it?

Read the rest of this entry »

The Speed of GCD


The subject of computing the GCD was brought up a couple of times lately, and we assumed that the straightforward divide-and-remainder implementation was the most efficient. But is it?


Before writing this post, I knew of basically two versions: one due to Euclid, invented sometime in Antiquity of course, and one that uses the remainder (that is, the modulo) to do basically the same thing (and which can be implemented either recursively or iteratively). It turns out that there are other algorithms, for example the so-called binary GCD, which tries to speed up the process by removing factors of two. But which one is the most efficient?
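To make the comparison concrete, here is a minimal sketch of two of the contenders mentioned above: the remainder-based Euclidean algorithm and the binary GCD. These are textbook formulations, not necessarily the exact implementations benchmarked in the post.

```python
def gcd_mod(a, b):
    """Remainder-based Euclid: repeatedly replace (a, b)
    by (b, a mod b) until b reaches zero."""
    while b:
        a, b = b, a % b
    return a

def gcd_binary(a, b):
    """Binary GCD: avoids division by stripping factors of two
    (cheap shifts) and using subtraction for the odd parts."""
    if a == 0:
        return b
    if b == 0:
        return a
    shift = 0
    while (a | b) & 1 == 0:   # both even: 2 divides the gcd
        a >>= 1
        b >>= 1
        shift += 1
    while a & 1 == 0:         # make a odd
        a >>= 1
    while b:
        while b & 1 == 0:     # 2 no longer divides the gcd
            b >>= 1
        if a > b:
            a, b = b, a
        b -= a                # difference of two odds is even
    return a << shift         # restore the common factors of 2
```

Whether the shifts and subtractions of the binary version beat a hardware divide is exactly the empirical question the post investigates.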

Read the rest of this entry »

Universal Coding (Part II)


Last time, we presented universal codes and two simple examples. This week, let’s revisit the topic and present Elias codes, which are much better universal codes than Chaitin’s and the modified Chaitin codes.

We will use the idea of recursion introduced before, but we will push the idea further, getting codes with lengths O(\lg n + \lg\lg n + \lg\lg\lg n + \cdots), which are pretty much as good as it gets.
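To give a flavor of the construction, here is a sketch of the two simplest Elias codes, gamma and delta, as bit strings; delta already shows the recursive idea, since it encodes the length of n with a gamma code, giving lengths of O(\lg n + \lg\lg n). The function names are mine; the codes themselves are the standard Elias constructions.

```python
def elias_gamma(n):
    """Elias gamma code for n >= 1: floor(lg n) zero bits, then
    the binary representation of n (which begins with a 1).
    Total length: 2*floor(lg n) + 1 bits."""
    assert n >= 1
    binary = bin(n)[2:]
    return '0' * (len(binary) - 1) + binary

def elias_delta(n):
    """Elias delta code for n >= 1: gamma-code the length of n's
    binary representation, then append n's bits minus the
    (implicit) leading 1. Length is O(lg n + lg lg n) bits."""
    assert n >= 1
    binary = bin(n)[2:]
    return elias_gamma(len(binary)) + binary[1:]
```

Iterating the same trick once more (encoding the length of the length, and so on) is what yields the O(\lg n + \lg\lg n + \lg\lg\lg n + \cdots) codes discussed in the post.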

Read the rest of this entry »