Fast Path Finding (part II)

June 18, 2013

Last week we had a first look at a simple path finding algorithm with application to games. This week, let us look at the relative performance of the implementations considered: a naïve C# implementation that scans the whole map multiple times; a C# implementation that uses lists to maintain and visit only the border of the solved region (affording massive speed-ups, one hopes); and C++ versions of these two variants, plus a C++ variant that uses sets (as bitmaps) to maintain the border list.
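To make the "border of the solved region" idea concrete before reading on, here is a minimal sketch (mine, not the code from the posts): a breadth-first flood fill in C++ that computes the distance from the player to every reachable cell while keeping only the frontier cells in a queue, so the whole map is never rescanned.

    #include <vector>
    #include <queue>
    #include <limits>

    // Sketch of the "frontier" idea: expand only the border of the already
    // solved region instead of rescanning the whole map on every pass.
    // The map is a width x height grid where 'true' marks a walkable cell.
    std::vector<int> flood_fill_distances(const std::vector<bool>& walkable,
                                          int width, int height,
                                          int player_x, int player_y)
    {
        const int unreached = std::numeric_limits<int>::max();
        std::vector<int> dist(walkable.size(), unreached);

        std::queue<int> frontier;                 // cells on the border, by index
        int start = player_y * width + player_x;
        dist[start] = 0;
        frontier.push(start);

        const int dx[4] = { 1, -1, 0, 0 };
        const int dy[4] = { 0, 0, 1, -1 };

        while (!frontier.empty())
        {
            int cell = frontier.front();
            frontier.pop();
            int x = cell % width, y = cell / width;

            for (int k = 0; k < 4; k++)           // visit the four neighbors
            {
                int nx = x + dx[k], ny = y + dy[k];
                if (nx < 0 || nx >= width || ny < 0 || ny >= height) continue;
                int n = ny * width + nx;
                if (walkable[n] && dist[n] == unreached)
                {
                    dist[n] = dist[cell] + 1;     // one step farther than its parent
                    frontier.push(n);
                }
            }
        }
        return dist;
    }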


In any quantitative study, the first thing to do is to collect data. The best way is to automate the collection, by, for example, adding a function that puts the player in every legal position on the map, computes all the shortest paths, and measures how long it takes. One may also consider disabling the on-demand power policy, as the CPU may jump (progressively?) into high gear as the data collection progresses, introducing bias.
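As a rough illustration of that kind of harness (a sketch only, not the post's benchmarking code; it reuses the hypothetical flood_fill_distances function from the previous snippet):

    #include <chrono>
    #include <cstdio>
    #include <vector>

    // Sketch of the data-collection loop described above: put the player on
    // every legal cell, solve all shortest paths from there, and time the lot.
    void benchmark_all_positions(const std::vector<bool>& walkable,
                                 int width, int height)
    {
        using clock = std::chrono::steady_clock;
        auto start = clock::now();

        int positions = 0;
        for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++)
                if (walkable[y * width + x])      // only legal player positions
                {
                    flood_fill_distances(walkable, width, height, x, y);
                    positions++;
                }

        std::chrono::duration<double> elapsed = clock::now() - start;
        std::printf("%d positions in %.3f s (%.3f ms each)\n",
                    positions, elapsed.count(),
                    1000.0 * elapsed.count() / positions);
    }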



Fast Path Finding (part I)

June 11, 2013

The other day in class I was telling my students that sometimes simple strategies simply do not cut it. As an easy-to-understand example, I discussed the behavior of baddies in video games. If your game calls for the baddies to chase the player, the first thing you might try is simply to move each baddie in the direction of the player, without any further sophistication.


So the algorithm in this case is simply to move the baddie closer to the player (or try to, as there might be obstacles in the way), and this translates to something like two lines of code.
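Something along these lines, say (a sketch only, assuming integer grid coordinates and ignoring obstacles; the post's actual code may differ):

    // Sketch of the naive chase step: move the baddie one cell toward the
    // player on each axis (obstacle checks omitted).
    void chase_step(int player_x, int player_y, int& baddie_x, int& baddie_y)
    {
        baddie_x += (player_x > baddie_x) - (player_x < baddie_x);
        baddie_y += (player_y > baddie_y) - (player_y < baddie_y);
    }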



Trigonometric Tables Reconsidered

February 28, 2012

A couple of months ago (already!) 0xjfdube produced an excellent piece on table-based trigonometric function computations. The basic idea was that you can look up the value of a trigonometric function rather than actually computing it, on the premise that computing such functions directly is inherently slow. Turns out that’s actually the case, even on fancy CPUs. He discusses the precision of the estimate based on the size of the table and shows that you don’t need a terribly large number of entries to get decent precision. He muses over the possibility of using interpolation to improve precision, speculating that it might be too slow anyway.
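To fix ideas, a table-based sine with linear interpolation between entries might look something like this (a sketch, not 0xjfdube's implementation; the table size and wrapping convention here are arbitrary choices):

    #include <cmath>

    // Sketch of a table-based sine: precompute N samples over [0, 2*pi), then
    // look up and linearly interpolate between the two nearest entries.
    const int N = 1024;
    static float sine_table[N];

    void init_sine_table()
    {
        const double two_pi = 6.283185307179586;
        for (int i = 0; i < N; i++)
            sine_table[i] = float(std::sin(two_pi * i / N));
    }

    float table_sin(float x)                       // x in radians
    {
        const float two_pi = 6.2831853f;
        float t = x / two_pi;                      // map to table units
        t -= std::floor(t);                        // wrap into [0, 1)
        float pos = t * N;
        int   i   = int(pos);
        float f   = pos - i;                       // fraction used for interpolation
        return sine_table[i] + f * (sine_table[(i + 1) % N] - sine_table[i]);
    }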

I started thinking about how to interpolate efficiently between table entries, but then I realized that it’s not fundamentally more complicated to compute a polynomial approximation of the function directly than to search the table and then use polynomial interpolation between entries.
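For comparison, here is the shape a direct polynomial approximation can take (a truncated Taylor-style polynomial on a reduced range, shown only to illustrate the approach; it is not necessarily the polynomial discussed in the post):

    // Sketch: approximate sin(x) on roughly [-pi/2, pi/2] with a 7th-order odd
    // polynomial (truncated Taylor series), evaluated with Horner's rule.
    // A minimax (Chebyshev-style) fit would give better worst-case error for
    // the same degree; this only shows the structure of the computation.
    float poly_sin(float x)
    {
        float x2 = x * x;
        // x - x^3/6 + x^5/120 - x^7/5040, factored for Horner evaluation
        return x * (1.0f + x2 * (-1.0f/6.0f + x2 * (1.0f/120.0f - x2 / 5040.0f)));
    }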



Lossless Coding of CD Audio

January 17, 2012

Once upon a time, I discussed how to pick a bit-rate for MP3 while considering re-ripping all my CDs. But if I’m to re-rip everything, I might as well do it one last time and use lossless compression.

In this post, we’ll discuss the simple script I cooked up to do just that, say a bit about how FLAC works, and see how it compares to MP3.



Coding Maps (Part I)

August 30, 2011

Quite a while ago, I discussed map generation in the classic 80s-era game Tunnels of Doom. After a short correspondence with Kevin Kenney himself (who kindly answered my questions; I hope he is aware that he helped spark a fascination with computer science in a great many kids), I managed, not exactly to reproduce his algorithm, but to create a number of fun variations.

Tunnels of Doom Combat Screen.

This raises the question of how to encode maps efficiently in the computer’s memory, not only for Tunnels of Doooooom but also for any number of other games.
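As a point of reference for what "efficient" can mean here, one obvious encoding packs one bit per cell; the sketch below is only an illustration, and not necessarily where this series is headed:

    #include <vector>
    #include <cstdint>

    // Sketch of one possible compact encoding: one bit per map cell
    // (wall or floor), packed eight cells per byte.
    class BitMap
    {
    public:
        BitMap(int width, int height)
            : width_(width), bits_((width * height + 7) / 8, 0) {}

        void set(int x, int y, bool wall)
        {
            int i = y * width_ + x;
            if (wall) bits_[i / 8] |=  (1u << (i % 8));
            else      bits_[i / 8] &= uint8_t(~(1u << (i % 8)));
        }

        bool wall(int x, int y) const
        {
            int i = y * width_ + x;
            return (bits_[i / 8] >> (i % 8)) & 1u;
        }

    private:
        int width_;
        std::vector<uint8_t> bits_;
    };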



Programming Challenge: Luminance

August 9, 2011

A few years ago, I posted on my personal web page a number of programming challenges for my friends. This week, I present one of the old challenges: computing luma from RGB triplets using integer arithmetic only.
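For the curious, one well-known approach (not necessarily the intended answer to the challenge) scales the Rec. 601 weights 0.299, 0.587, and 0.114 by 256 and rounds, so the whole computation reduces to three integer multiplies, two adds, and a shift:

    #include <cstdint>

    // One classic sketch: approximate luma Y = 0.299 R + 0.587 G + 0.114 B
    // in 8-bit fixed point.  The weights 77 + 150 + 29 sum to 256, so the
    // result stays in [0, 255] after the shift.
    inline uint8_t luma(uint8_t r, uint8_t g, uint8_t b)
    {
        return uint8_t((77u * r + 150u * g + 29u * b) >> 8);
    }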



RetroComputing

February 1, 2011

There are plenty of web sites and museums dedicated to the computers of yore. While most of them now seem quaint and delightfully obsolete, there are probably a lot of lessons we could re-learn and apply today, with our modern computers.

If you have followed my blog for some time, you know that I am concerned with the efficient computation and representation of just about everything, applied to workstations, servers, and embedded systems. I do think that retro-computing (computing using old computers, or using the techniques of old computers) has a lot to teach us, and not only from an historical perspective.
