Suggested Reading: The Information


James Gleick — The Information: A History, A Theory, A Flood — Pantheon Books, 2011, 544 pp. ISBN 978-0375423727

This book is written for a non-technical audience. It introduces the reader to information theory, from ancient times to quantum computers. There is very little math—well, there are two or three formulas—but the text focuses on giving the reader the essential gist of information theory: the nature of information itself, and how information necessarily uses energy to be processed, or even to exist.

It’s not a book that will change your life forever, but it’s still worth the read. A good summer book.

Surrogate Functions


In some optimization problems, the objective function is just too complicated to evaluate directly at every iteration. In such cases, we use surrogate functions: functions that mimic most of the properties of the true objective function, but that are much simpler analytically and/or computationally.

Lately, I have done some thinking about what properties a surrogate function (or surrogate model) should have to be practical.
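To make this concrete, here is a minimal sketch (my own illustration, not code from the post) of one classic surrogate technique: fitting a parabola through three samples of an expensive function, then jumping to the minimum of the parabola instead of minimizing the true function directly. The objective `expensive_f` and the step size `h` are hypothetical.

```python
def parabolic_surrogate_step(f, x, h):
    """One step of successive parabolic interpolation.

    Fit a parabola (the surrogate) through f at x-h, x, x+h,
    and return the abscissa of that parabola's minimum.
    """
    f_lo, f_mid, f_hi = f(x - h), f(x), f(x + h)
    denom = 2.0 * (f_hi - 2.0 * f_mid + f_lo)
    if denom == 0.0:
        return x  # locally flat or linear: no parabolic minimum
    return x - h * (f_hi - f_lo) / denom

# Hypothetical "expensive" objective; since it is itself quadratic,
# a single surrogate step lands exactly on the true minimum at x = 3.
def expensive_f(x):
    return (x - 3.0) ** 2

x_new = parabolic_surrogate_step(expensive_f, 0.0, 1.0)
print(x_new)  # 3.0
```

The appeal is that each iteration costs only three evaluations of the true function plus trivial arithmetic on the surrogate.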

Read the rest of this entry »

Wallpaper: 5:59


(5:59, 1920×1200)

Building a Personal Library (Part II)


Quite a while ago, I blogged about how to find used books to fill your personal library on a budget. But used books have a major drawback compared to new books: they’re used. Well, yes, of course, but that means they may be in less than perfect condition: they can be scratched, have a damaged cover, or be missing a few pages.

Fortunately, minor defects are rather easy to fix with a little creativity and surprisingly little material.

Read the rest of this entry »

Learning Python


When you come from programming languages like C and C++, where you are used to controlling very finely what the machine does, learning Python is quite disconcerting, especially when it comes to performance.

Without being exactly an optimization freak, I rather like to use the best algorithms, favoring an O(n) algorithm over a naïve O(n^2) implementation. So you can guess my surprise when, after replacing the initial O(n^2) implementation with an O(n) one, I did not observe the expected speed-up.
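As a hedged illustration (a toy example of my own, not the code from the post), here is the kind of replacement I mean: a naïve O(n^2) duplicate check versus an O(n) set-based one. In CPython, interpreter overhead and constant factors can blur the asymptotic advantage, especially on small inputs.

```python
def has_duplicates_quadratic(xs):
    # Naive O(n^2): compare every pair of elements.
    n = len(xs)
    for i in range(n):
        for j in range(i + 1, n):
            if xs[i] == xs[j]:
                return True
    return False

def has_duplicates_linear(xs):
    # O(n) expected: remember what we've seen in a hash set.
    seen = set()
    for x in xs:
        if x in seen:
            return True
        seen.add(x)
    return False
```

Both return the same answers; timing them with `timeit` on inputs of growing size shows where the crossover actually happens on your interpreter.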

Read the rest of this entry »

Implicit Coding


I’m always playing around with data compression ideas, and I recently came across a particular problem that inspired this week’s entry about implicit coding.

The idea of implicit coding is to exploit some constraint in the data that lets you reconstruct one (or more) value(s) unambiguously from the others. The textbook example of implicit coding is to use a constraint such as a known sum. For example, if you have a right triangle, you necessarily have that

Read the rest of this entry »
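The known-sum constraint mentioned above can be sketched as follows (a minimal Python illustration of my own, not code from the post): if the decoder knows the total, the last value never needs to be stored at all.

```python
def encode_with_known_sum(values, total):
    # The last value is implicit: the decoder can recover it
    # from the known total, so we simply drop it.
    assert sum(values) == total
    return values[:-1]

def decode_with_known_sum(stored, total):
    # Reconstruct the implicit value from the sum constraint.
    return stored + [total - sum(stored)]

vals = [3, 1, 4, 1, 5]
stored = encode_with_known_sum(vals, 14)   # [3, 1, 4, 1]
print(decode_with_known_sum(stored, 14))   # [3, 1, 4, 1, 5]
```

One value is transmitted for free, at the cost of the encoder and decoder agreeing on the constraint beforehand.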

Wallpaper: Red/Blue


(Red/Blue, 1920×1200)