Ad Hoc Compression Methods: RLE (part II)

15/04/2009

Yestermorning I presented an ad hoc compression method known as RLE, or run-length encoding, and I got quite a few more reactions than I thought I would. Many suggested doing plenty of other things, despite the fact that the post was about RLE, not about <insert your favorite compression method>. But something very nice happened: people got interested and many suggestions came forth.

I decided to implement the median-based prediction method as suggested by Dark Shikari here. Let us see the results!
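For readers who did not follow the link: the predictor in question is, as far as I can tell, the median edge detector (MED) used in LOCO-I/JPEG-LS, which predicts each pixel from its left, above, and upper-left neighbours. Here is a minimal Python sketch of that predictor applied to an image of palette indices; the bit-level details of the codec discussed in the post may differ.

```python
def med_predict(left, above, upper_left):
    """Median edge detector (MED) predictor, as used in JPEG-LS/LOCO-I.

    Leans on the horizontal or vertical neighbour when an edge is
    detected, and on a planar estimate (left + above - upper_left)
    otherwise.
    """
    if upper_left >= max(left, above):
        return min(left, above)
    if upper_left <= min(left, above):
        return max(left, above)
    return left + above - upper_left


def predict_residuals(image):
    """Replace each pixel by its difference with the MED prediction.

    `image` is a list of rows of palette indices; out-of-image
    neighbours are treated as 0.  The residual image tends to contain
    long runs of zeros, which RLE then compresses much better.
    """
    height, width = len(image), len(image[0])
    residuals = [[0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            left = image[y][x - 1] if x > 0 else 0
            above = image[y - 1][x] if y > 0 else 0
            upper_left = image[y - 1][x - 1] if x > 0 and y > 0 else 0
            residuals[y][x] = image[y][x] - med_predict(left, above, upper_left)
    return residuals
```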

Read the rest of this entry »


Ad Hoc Compression Methods: RLE

14/04/2009

Suppose you have been sub-contracted by a video game company, say Nintendo, to come up with a low-power, low-memory implementation of a popular video game, say an installment of the infamous Pokémon series. The envisaged device is very slow (it has a 2 MHz processor) but has a 220×176 pixel color screen capable of displaying 32768 different colors. It also has a hardware sprite mode, but sprites are limited to 4 distinct colors. You have to encode the complete database of Pokémons, of which there are several hundred, in as little space as possible. Each image varies somewhat in size, but can only use 4 distinct colors from the possible palette of 32768 colors.

A typical Pokémon would look like this:

A typical Pokémon image, shown enlarged and in original size.


Since the artists put a lot of effort into drawing the critters in 4 colors, and because of the hardware requirements, you cannot use a compression algorithm that modifies colors or creates more colors than the original image had, which excludes JPEG. Moreover, the JPEG codec would be too complex to port to the platform considered. You must therefore use a simple lossless codec.
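To get a sense of the baseline these codecs must beat: with only 4 colors, 2 bits per pixel suffice, plus a tiny palette of 4 entries taken from the 15-bit (32768-color) master palette. A quick back-of-the-envelope calculation, using a hypothetical 64×64 sprite since the actual images vary in size:

```python
from math import ceil, log2

def raw_sprite_bytes(width, height, colors=4, palette_bits=15):
    """Uncompressed storage for a paletted sprite, rounded up to bytes.

    Each pixel stores a palette index (2 bits for 4 colors), and the
    palette itself stores `colors` entries from the 15-bit master palette.
    """
    bits_per_pixel = ceil(log2(colors))              # 2 bits for 4 colors
    pixel_bits = width * height * bits_per_pixel     # the index plane
    palette_bytes = colors * ceil(palette_bits / 8)  # 4 entries x 2 bytes
    return ceil(pixel_bits / 8) + palette_bytes

# Hypothetical 64x64 sprite: 64*64*2/8 = 1024 bytes of indices + 8 bytes of palette.
print(raw_sprite_bytes(64, 64))  # 1032
```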

We will be comparing 5 codecs on the Pokémon image database. We will be looking at a “block” codec, which will serve as a point of comparison for a non-RLE codec, and at 4 variations on the theme of run-length encoding, or RLE, namely All Counts, Bit Select, Auto-Trigger, and predicted Auto-Trigger RLE. The last variant helps Auto-Trigger RLE by using predictors, transforming the input image into one that is easier to compress using RLE.
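To give an idea of what these variants look like, here is a rough Python sketch of the auto-trigger idea, operating on a flat list of palette indices: a run is signalled implicitly by two consecutive identical symbols, and only then is a count emitted, so isolated symbols cost nothing extra. The actual bit-level formats compared in the post differ in their details, and the `max_count` limit here is just a placeholder.

```python
def auto_trigger_rle_encode(symbols, max_count=255):
    """Auto-trigger RLE: two identical consecutive symbols signal a run,
    and only then is a count of *additional* repetitions emitted.

    Output is a flat list mixing symbols and counts; a real codec would
    pack these into a bit stream.
    """
    out = []
    i, n = 0, len(symbols)
    while i < n:
        out.append(symbols[i])
        if i + 1 < n and symbols[i + 1] == symbols[i]:
            out.append(symbols[i])          # second copy triggers the run
            run, i = 0, i + 2
            while i < n and symbols[i] == out[-1] and run < max_count:
                run += 1
                i += 1
            out.append(run)                 # how many extra repetitions follow
        else:
            i += 1
    return out


def auto_trigger_rle_decode(codes):
    """Inverse of the encoder above."""
    out, i = [], 0
    while i < len(codes):
        out.append(codes[i])
        if i + 1 < len(codes) and codes[i + 1] == codes[i]:
            out.append(codes[i])
            out.extend([codes[i]] * codes[i + 2])   # replay the counted run
            i += 3
        else:
            i += 1
    return out


data = [1, 1, 1, 1, 2, 3, 3, 3, 3, 3, 0]
assert auto_trigger_rle_decode(auto_trigger_rle_encode(data)) == data
```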

Read the rest of this entry »


Compact Tree Storage

07/04/2009

Implementing data structures in a way that uses memory efficiently should always be on your mind. I do not mean going overboard and micro-optimizing memory allocation right down to the bit. I mean organizing data structures in memory so that you can avoid pointers, use contiguous memory segments, etc. Normally, minimizing storage by avoiding extra pointers whenever possible will benefit your program in at least two ways.

First, the reduced memory requirement will make your data structure fit in cache more easily. Remember that while pointers are 4 bytes long in 32-bit programs, they are 8 bytes long in 64-bit environments. This yields better run-time performance because you maximize your chances of having the data you need in cache.

Second, contiguous memory layouts also allow for efficient scans of data structures. For example, if you have a classical binary tree, implemented using nodes with two pointers each, you will have to use a tree traversal algorithm, possibly recursive, to enumerate the tree’s contents, which is quite cumbersome if you don’t really care about the order in which the nodes are visited.

It turns out that for a special class of trees, complete trees, there is a contiguous, and quite simple, layout.
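The layout alluded to is presumably the classic implicit (heap-style) array layout: store the nodes in level order, and the parent/child relations become simple index arithmetic, with no pointers at all. A minimal Python sketch:

```python
class CompleteBinaryTree:
    """Complete binary tree stored in a contiguous array (heap-style layout).

    Node i's children live at 2*i + 1 and 2*i + 2, and its parent at
    (i - 1) // 2, so no pointers are needed and a full scan of the tree
    is just a linear pass over the array.
    """
    def __init__(self, values):
        self.nodes = list(values)   # level-order (breadth-first) storage

    def left(self, i):
        j = 2 * i + 1
        return j if j < len(self.nodes) else None

    def right(self, i):
        j = 2 * i + 2
        return j if j < len(self.nodes) else None

    def parent(self, i):
        return (i - 1) // 2 if i > 0 else None


# The complete tree with root 1, children 2 and 3, then 4, 5, 6
# is stored simply as [1, 2, 3, 4, 5, 6].
tree = CompleteBinaryTree([1, 2, 3, 4, 5, 6])
assert tree.nodes[tree.left(0)] == 2 and tree.nodes[tree.right(1)] == 5
```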

Read the rest of this entry »


Suggested Reading: The Race for a New Game Machine

04/04/2009

David Shippy, Mickie Phipps — The Race for a New Game Machine — Citadel Press, 2009, 256 pp. ISBN 978-0-8065-3101-4

(buy at Amazon.com)

This book, strongly reminiscent of Tracy Kidder’s Pulitzer-winning Soul of a New Machine, relates the history of the development of the Cell, Xenon, and Broadway processors, the hearts of Sony’s PS3, Microsoft’s Xbox 360, and Nintendo’s Wii game machines, respectively.

Read the rest of this entry »


Back from Nashville

04/04/2009

I am just back from the IEEE Symposium Series on Computational Intelligence 2009 in Nashville, Tennessee. That was quite an adventurous trip (complete with tornado alerts and all) but more importantly, I got to present a paper accepted at the CIMSVP conference (see here for the paper).


How many eyeballs to make a bug shallow?

31/03/2009

In The Cathedral and the Bazaar, Eric Raymond said that “Given enough eyeballs, all bugs are shallow.” Of course, what he meant is that when you have a large enough tester base and lots of developers, almost all bugs are found, characterized (i.e., made reproducible and their effects understood), and fixed by someone in the developer group. This strikes me as somewhat optimistic and raises the question… how many eyeballs does it take to make a bug shallow?
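One simple way to make the question concrete, a back-of-the-envelope framing rather than the model developed in the full post: if each tester independently triggers and reports a given bug with probability p, then n testers catch it with probability 1 − (1 − p)^n, so reaching confidence c takes roughly n ≥ ln(1 − c)/ln(1 − p) testers.

```python
from math import ceil, log

def eyeballs_needed(p, confidence=0.99):
    """Testers needed so that a bug each tester independently finds with
    probability p is caught with the given confidence:
    1 - (1 - p)**n >= confidence  =>  n >= log(1 - confidence) / log(1 - p).
    """
    return ceil(log(1.0 - confidence) / log(1.0 - p))

# A bug that only 1 tester in 1000 ever triggers needs a lot of watchers:
print(eyeballs_needed(0.001))  # about 4603 testers (double that for eyeballs)
```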

Read the rest of this entry »


Soft-Ignore: Filter buggers in XChat

24/03/2009

If you, like me, hang out once in a while on IRC to chat with fellow programmers (or with fellow practitioners of your favorite hobby), you may find that some individuals are just not worth your full attention. One easy, and rather definitive, way to deal with the problem is to use the /ignore command, which lets your IRC client filter out incoming messages from those people, and you just never see them again… quite literally.

However, just /ignoring someone is rude, and may prevent you from keeping an eye on them. You know, the “keep your friends close, your enemies closer” kind of thing.

A long time ago, with a friend, I wrote a mIRC script that shaded the “ignored” people’s text so that it was hard to read (say, dark gray on blue), but still available. To view the text, you could either squint or select it. This week, I present a Python version of that script, for XChat, based on the work of Albert W. Hopkins, a.k.a. Marduk, released under the GPL.
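To illustrate the general idea rather than the released script itself, here is a minimal sketch of such a plugin using XChat’s Python interface: it hooks channel messages and re-emits those coming from “soft-ignored” nicks with a mIRC color code for dark gray prepended, letting everything else through untouched. The nick list and color choice are placeholders; this is only a sketch of the idea, not the script presented in the post.

```python
__module_name__ = "soft-ignore-sketch"
__module_version__ = "0.1"
__module_description__ = "Dim messages from selected nicks instead of hiding them"

import xchat

# Placeholder list; a real script would load this from a configuration file.
SOFT_IGNORED = {"annoying_nick", "other_nick"}

DIM = "\00314"  # mIRC color code: foreground 14 is dark gray


def on_channel_message(word, word_eol, userdata):
    nick, text = word[0], word[1]
    # Already dimmed (re-emitted by us), or nick not on the list: leave it alone.
    if text.startswith(DIM) or nick not in SOFT_IGNORED:
        return xchat.EAT_NONE
    # Swallow the original event and re-emit the same message, dimmed.
    xchat.emit_print("Channel Message", nick, DIM + text)
    return xchat.EAT_ALL


xchat.hook_print("Channel Message", on_channel_message)
```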

Read the rest of this entry »


Suggested Reading: Short-Cut Math

23/03/2009

Gerard W. Kelly — Short-Cut Math — Dover, 1984, 112 pp. ISBN 978-0-486-24611-6

(Buy at Amazon.com)

This little book—not even 120 pages—will help you sharpen your mental math skills using a series of tricks, stratagems, and special cases that will get you to the exact result, or at least to an accurate approximation, with greater speed. The book is not exactly mind-shattering, but it may still teach you a few new tricks.


What the Happy Hacking Keyboard should have been

17/03/2009

The Quest for the Perfect Hacking Keyboard is indeed an eternal one. Over the years, I have had a great many keyboards, and most of them went the way of the dodo. Recently, despite not owning other Apple products, I tried the (wired) thin Apple aluminum keyboard. As I prefer very thin keyboards over thick ones, and laptop-style keys to the big M-type keys, it was a very good match. However, after a while, I got unnerved by the extra, useless keypad. In short, the keyboard is too long: as I am right-handed, it’s always somehow in the way of the mouse, so I started looking for a better keyboard. Again.

So I got the Apple aluminum keyboard, wireless version.

Read the rest of this entry »


Suggested Reading: The Myths of Innovation

16/03/2009

Scott Berkun — The Myths of Innovation — O’Reilly, 2007, 178 pp. ISBN 978-0-596-52705-1

(buy at Amazon.com)

In this book, Berkun debunks the myths surrounding the new: the innovation, the invention that will change the world; and the myths surrounding the inventor, lonely and strange, superman or genius. In ten chapters, he systematically demolishes a series of myths, starting with the myth of epiphany, that is, the idea that an innovation is “revealed” to its inventor. He explains how and why such epiphanies do not exist and, if they do, how they cannot be anything but the final step of long, tedious work.

Read the rest of this entry »