Quite a while ago, I discussed map generation in the classic 80s-era game Tunnels of Doom. After a short correspondence with Kevin Kenney himself (who kindly answered my questions, and who I hope is aware that he sparked the fascination of a great many kids with computer science), I managed, not exactly to reproduce his algorithm, but to create a number of fun variations.
This raises the question of how we encode maps efficiently in the computer's memory, not only for Tunnels of Doom but also for any number of other games.
In the case of the Tunnels of Doom map, I used a rather simple bit-field coding where one bit selected the type of tile, either a room or a tunnel, and, depending on this bit, the four least significant bits encoded either tunnel connectivity or room type. Although it needs only five bits per tile, for simplicity of implementation (and, to some extent, of exposition), let us consider using whole bytes. The tunnel/room selection bit is placed in bit 5:
If the T/R bit is set to T (that is, 0), the four least significant bits encode connectivity, with one bit for each direction:
If the T/R bit is set to R (that is, 1), then three bits encode the room type:
The room types are “normal room”, “stairs up”, “stairs down”, “fountain”, and “inn”—each corresponding to game-specific functions on which we needn’t dwell.
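The encoding above can be sketched in a few lines. This is my reading of the layout, not the original code: the exact bit positions and the numeric room-type codes are assumptions.

```python
# Sketch of the byte-per-tile encoding described above. The T/R flag is
# taken here to be the fifth bit (mask 0x10); adjust if the layout differs.
TR_BIT = 1 << 4

# For tunnels, bits 0-3 give connectivity, one bit per direction.
N, E, S, W = 1, 2, 4, 8

# Hypothetical codes for the three room-type bits.
NORMAL, STAIRS_UP, STAIRS_DOWN, FOUNTAIN, INN = range(5)

def tunnel(connectivity):
    """Encode a tunnel tile from a N/E/S/W bit mask (T/R bit left clear)."""
    return connectivity & 0x0F

def room(room_type):
    """Encode a room tile from a 3-bit room type."""
    return TR_BIT | (room_type & 0x07)

def is_room(tile):
    return bool(tile & TR_BIT)

# A tunnel open to the north and east:
t = tunnel(N | E)
assert not is_room(t) and t == 0b00011

# A fountain room:
r = room(FOUNTAIN)
assert is_room(r) and (r & 0x07) == FOUNTAIN
```

Decoding is symmetric: test the T/R bit, then mask off either the connectivity bits or the room-type bits.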
In the original post, this representation was not actually used for storage, because rematerialization (though not this kind of rematerialization) was entirely possible: from a seed (say, 16 bits), the map could be regenerated in something like 10 µs.
A similar encoding could be used for a game such as Hunt the Wumpus. While the game is not terribly interesting in itself, encoding its map could be.
But there’s a small difference. Notice the winding, but non-intersecting, tunnels. Despite the apparent complexity, the tunnels connect in a very simple way: either N with E and S with W, or N with W and S with E. This requires only one bit to encode. There are also rooms (but only two types, I think: normal and with a pit), so two bits suffice per tile. Something as simple as Huffman coding would probably not get us much further, because the distributions aren’t skewed enough to yield a much shorter code on average.
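With two bits per tile, four Wumpus tiles fit in a byte. A minimal packing sketch, with code names and bit layout of my own invention:

```python
# Hypothetical 2-bit tile codes: one bit distinguishes tunnel from room,
# the other encodes the tunnel orientation or the room type.
TUNNEL_NE_SW = 0b00   # N joins E, S joins W
TUNNEL_NW_SE = 0b01   # N joins W, S joins E
ROOM_NORMAL  = 0b10
ROOM_PIT     = 0b11

def pack(tiles):
    """Pack 2-bit tile codes, four per byte, low bits first."""
    out = bytearray()
    for i in range(0, len(tiles), 4):
        b = 0
        for j, t in enumerate(tiles[i:i + 4]):
            b |= (t & 0b11) << (2 * j)
        out.append(b)
    return bytes(out)

def unpack(data, n):
    """Recover n tile codes from the packed bytes."""
    return [(data[i // 4] >> (2 * (i % 4))) & 0b11 for i in range(n)]

tiles = [TUNNEL_NE_SW, ROOM_PIT, TUNNEL_NW_SE, ROOM_NORMAL]
packed = pack(tiles)
assert len(packed) == 1
assert unpack(packed, 4) == tiles
```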
For other types of maps, for example the “overworld” of the original Dragon Warrior IV game, other techniques are needed. The map is large (for this type of game console—256×256 tiles) but does not show a lot of complexity.
Indeed, if we look at the map, there are large patches of constant values, and the detail lies essentially in the borders of the patches. A long time ago, I tried compressing the map to a really tiny file, and got fairly good results: the 64K map was compressed to about 1.5K using a quad-tree in which leaves correspond to uniform patches and internal nodes to patches that are split in four. The split/leaf encoding requires only one bit, and the tile type for a leaf can itself be Huffman-coded in less than three or four bits on average (there are 38 different types of tiles, but the distribution is really skewed).
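The cost of that scheme is easy to estimate recursively: one split/leaf bit per node, plus a Huffman code for each leaf. A rough sketch, where the constant 4 stands in for the "three or four bits on average" per leaf (an assumption, not a measured figure):

```python
LEAF_BITS = 4  # assumed average Huffman code length for a tile type

def quadtree_bits(tiles, x, y, size):
    """Return the encoded size, in bits, of the size-by-size patch at (x, y)."""
    first = tiles[y][x]
    if all(tiles[y + dy][x + dx] == first
           for dy in range(size) for dx in range(size)):
        return 1 + LEAF_BITS          # leaf bit + tile-type code
    half = size // 2
    return 1 + sum(quadtree_bits(tiles, x + dx, y + dy, half)
                   for dy in (0, half) for dx in (0, half))

# An 8x8 map: all water (0) except one 4x4 patch of grass (1).
world = [[1 if x < 4 and y < 4 else 0 for x in range(8)] for y in range(8)]
bits = quadtree_bits(world, 0, 0, 8)
assert bits == 1 + 4 * (1 + LEAF_BITS)   # root splits into four uniform leaves
```

The same recursion, emitting bits instead of counting them, gives the actual encoder; the decoder mirrors it, reading one split/leaf bit per node.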
The depth of the quad-tree is shown in the following picture:
I had first set out to write something about how choosing the right map representation lets you conceive fast algorithms with efficient implementations, especially for large-scale simulations, but somehow the interesting ideas haven’t quite matured yet. I have been working with spherical mapping quite a bit lately—for purposes other than simulation—and I may write some more about this later on.