Fixed-Points

25/08/2020

Preparing lecture notes (or videos) sometimes leads you to revisit things you’ve known for a while but never really took the time to formalize properly. One of those things is fixed-point arithmetic.
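
To give a taste of what there is to formalize, here is a minimal Q16.16 fixed-point sketch; the format choice and helper names are mine, purely for illustration:

    #include <cstdint>

    // A Q16.16 fixed-point number: 16 integer bits, 16 fractional bits,
    // stored in a plain 32-bit integer scaled by 2^16.
    using fix16 = int32_t;

    constexpr fix16 to_fix(double x) { return static_cast<fix16>(x * 65536.0); }
    constexpr double to_double(fix16 x) { return x / 65536.0; }

    // Sums are ordinary integer additions; products need a wider
    // intermediate and a shift to restore the scale.
    constexpr fix16 fix_mul(fix16 a, fix16 b)
    {
        return static_cast<fix16>((static_cast<int64_t>(a) * b) >> 16);
    }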


Evaluating polynomials

05/05/2020

Evaluating polynomials is not a thing I do very often. When I do, it’s for interpolation and splines, and those are traditionally done with relatively low-degree polynomials, cubic at most. There are a few rather simple tricks you can use to evaluate them efficiently, and we’ll have a look at them.
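
The classic such trick is Horner’s scheme, which rewrites a_0+a_1x+\cdots+a_nx^n as a_0+x(a_1+x(a_2+\cdots)) and evaluates it in n multiplies and n adds. A minimal sketch, with the coefficient ordering an assumption of mine:

    #include <vector>

    // Horner's scheme: scan the coefficients from the highest degree down,
    // multiplying by x and adding the next coefficient at each step.
    // a[0] is the constant term, a[n] the leading coefficient.
    double horner(const std::vector<double>& a, double x)
    {
        double r = 0.0;
        for (auto it = a.rbegin(); it != a.rend(); ++it)
            r = r * x + *it;
        return r;
    }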


Factorial Approximations

31/03/2020

n! (and its logarithm) keeps showing up in the analysis of algorithms. Unfortunately, it’s very often unwieldy, and we use approximations of n! (or \log n!) to simplify things. Let’s examine a few!
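
The best known of these is Stirling’s approximation, \ln n! \approx n\ln n-n+\frac{1}{2}\ln(2\pi n); a quick sketch:

    #include <cmath>

    // Stirling's approximation to ln n!, already accurate to a fraction
    // of a percent for modest n.
    double log_factorial_stirling(double n)
    {
        const double pi = 3.141592653589793;
        return n * std::log(n) - n + 0.5 * std::log(2.0 * pi * n);
    }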


YoU CanT MaKE BuBBleSorT FaSTER With ASseMbLY

14/01/2020

In one of the classes I teach, we end up writing assembly language programs. And while I explain the (sometimes very relative) benefits of writing assembly language, I use bubble sort as an example where even carefully crafted assembly language doesn’t mean much: it’s a bad algorithm to start with.

YoU CanT MaKE BuBBleSorT FaSTER With ASseMbLY

Except that it’s not quite true.
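
For context, the textbook algorithm in question, as a quick sketch: no matter how tightly the inner loop is coded, it still performs O(n^2) comparisons and swaps.

    #include <cstddef>
    #include <utility>

    // Textbook bubble sort: repeatedly sweep the array, swapping adjacent
    // out-of-order elements; each pass sinks the largest remaining element.
    void bubble_sort(int a[], std::size_t n)
    {
        for (std::size_t i = 0; i + 1 < n; ++i)
            for (std::size_t j = 0; j + 1 < n - i; ++j)
                if (a[j] > a[j + 1])
                    std::swap(a[j], a[j + 1]);
    }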


Fast Exponentiation, revisited

12/11/2019

Quite a while ago, I presented a fast exponentiation algorithm that uses the binary decomposition of the exponent n to perform O(\log_2 n) products to compute x^n.

While discussing this algorithm in class, a student asked a very interesting question: what’s special about base 2? Couldn’t we use another base? Well, yes, yes we can.
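
As a refresher, the base-2 version, square-and-multiply, as a sketch. Generalizing to base b replaces the bit test with a base-b digit and the squaring with a b-th power.

    #include <cstdint>

    // Square-and-multiply: consume the bits of n from least significant to
    // most, squaring x at each step and multiplying it into the result
    // whenever the current bit is set: O(log2 n) products in all.
    uint64_t power(uint64_t x, uint64_t n)
    {
        uint64_t r = 1;
        while (n)
        {
            if (n & 1) r *= x;  // this bit contributes the current power of x
            x *= x;             // x, x^2, x^4, x^8, ...
            n >>= 1;
        }
        return r;
    }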


(Sub)bit-fields (Coding with fractions of bits, Part II)

13/08/2019

Last week, we used the 6×7×6 palette as an example of very simple fraction-of-a-bit coding. However, we can generalize still a bit more to allow single-field extraction and modification.
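
The idea, in a minimal sketch of my own (the posts’ exact generalization may differ): in a mixed-radix code, a field is extracted by dividing by its weight and reducing modulo its range, and modified by swapping the old digit for the new one.

    #include <cstdint>

    // Mixed-radix fields, using the 6x7x6 palette as the example:
    // v = (r * 7 + g) * 6 + b, i.e., weights 42, 6 and 1 for r, g, b.
    constexpr uint32_t pack_676(uint32_t r, uint32_t g, uint32_t b)
    {
        return (r * 7 + g) * 6 + b;  // 252 codes, fits in 8 bits
    }

    // Extract the g field: divide by its weight, reduce by its range.
    constexpr uint32_t get_g(uint32_t v) { return (v / 6) % 7; }

    // Modify the g field: remove the old digit, add in the new one.
    constexpr uint32_t set_g(uint32_t v, uint32_t g)
    {
        return v - get_g(v) * 6 + g * 6;
    }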


The 6×7×6 palette (Coding with fractions of bits, Part I)

06/08/2019

Remember ye olde dayes when we had to be mindful of the so-called “web-safe palette”? Once upon a time, screens could display 24-bit color, but only 256 colors at a time in some “hi-res” modes. But that’s not what I’m going to tell you about: I’d rather tell you about the encoding of the palette, and about a somewhat better palette. And also about using fractions of bits for more efficient encodings.
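
The fraction-of-a-bit angle, in short: a 6-level channel costs \log_2 6 \approx 2.585 bits, so 6×7×6 = 252 levels fit in one byte while wasting less of it than the web-safe 6×6×6 = 216. A plausible quantizer, assuming a straightforward scaling (the post’s exact mapping may differ):

    #include <cstdint>

    // Quantize 8-bit RGB to a 6x7x6 palette index; green gets the extra
    // level since the eye is most sensitive to it.
    constexpr uint8_t to_676(uint8_t r, uint8_t g, uint8_t b)
    {
        uint32_t ri = (r * 6u) / 256;  // 0..5
        uint32_t gi = (g * 7u) / 256;  // 0..6
        uint32_t bi = (b * 6u) / 256;  // 0..5
        return static_cast<uint8_t>((ri * 7 + gi) * 6 + bi);
    }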


Discrete Inversion (Generating Random Sequences XII)

30/07/2019

While this sounds like a shameful family secret, discrete inversion is just the finite-valued variation on the method of inversion for generating random numbers with a given distribution (as I discussed quite a while ago here). The case we’ll consider is a random variable with a few possible outcomes, each with different odds.
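
The method in a nutshell, as a sketch: draw u uniformly in [0,1), then walk the cumulative distribution until it exceeds u; the index where that happens is the sampled outcome.

    #include <random>
    #include <vector>

    // Discrete inversion: invert the cumulative distribution of p by a
    // linear scan. Fine for a handful of outcomes; larger supports would
    // call for a binary search over precomputed partial sums.
    int sample_discrete(const std::vector<double>& p, std::mt19937& gen)
    {
        std::uniform_real_distribution<double> unif(0.0, 1.0);
        const double u = unif(gen);
        double cum = 0.0;
        for (std::size_t i = 0; i < p.size(); ++i)
        {
            cum += p[i];
            if (u < cum) return static_cast<int>(i);
        }
        return static_cast<int>(p.size()) - 1;  // guard against rounding
    }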


Mœud deux

07/08/2018

Pairing functions are fun. Last week, we had a look at the Cantor/Hopcroft and Ullman function, and this week, we’ll have a look at the Rosenberg-Strong function—and we’ll modify it a bit.
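
For reference, the usual statement of the Rosenberg-Strong function, which numbers pairs shell by shell, a shell being all pairs sharing the same \max(x,y); a sketch:

    #include <algorithm>
    #include <cstdint>

    // Rosenberg-Strong pairing: the pairs in shell max(x, y) = m receive
    // the numbers m^2 .. m^2 + 2m, walking up one side of the shell and
    // down the other, so nearby pairs get nearby numbers.
    uint64_t rosenberg_strong(uint64_t x, uint64_t y)
    {
        const uint64_t m = std::max(x, y);
        return m * m + m + x - y;
    }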


Mœud

31/07/2018

Pairing functions are used to reversibly map a pair of numbers onto a single number: think of a number-theoretical version of std::pair. Cantor was the first (or so I think) to propose one such function. His goal wasn’t data compression but to show that there are as many rationals as natural numbers.

Cantor’s function associates pairs (i,j) with a single number:

\pi(i,j) = \frac{(i+j)(i+j+1)}{2} + j

…but that’s not the only way of doing this. A much more fun, and spatially coherent, alternative is the boustrophedonic pairing function.
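
In code, Cantor’s function is a one-liner: the pair lies on anti-diagonal d = i+j, and the d previous diagonals hold d(d+1)/2 pairs, which gives the offset.

    #include <cstdint>

    // Cantor's pairing function: number the pairs along anti-diagonals.
    // Diagonal d = i + j starts at index d(d+1)/2; j is the offset along it.
    uint64_t cantor(uint64_t i, uint64_t j)
    {
        const uint64_t d = i + j;
        return d * (d + 1) / 2 + j;
    }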
