RetroComputing

There are plenty of web sites and museums dedicated to the computers of yore. While most of them now seem quaint and delightfully obsolete, there are probably a lot of lessons we could re-learn and apply today with our modern computers.

If you have followed my blog for some time, you know that I am concerned with the efficient computation and representation of just about everything, whether on workstations, servers, or embedded systems. I do think that retro-computing (computing using old computers, or using the techniques of old computers) has a lot to teach us, and not only from a historical perspective.

The microprocessors of the era were a lot simpler than current CPUs. For one thing, processors such as the MOS Technology 6502, the Motorola 6809 (on which I learnt assembly language for the first time), or the Zilog Z80 all had very simple instruction sets, with most, if not all, opcodes encoded in a single byte (a complete instruction being that byte plus at most an operand or two).

While complex instruction sets like the x86’s let you write a given routine using relatively few instructions (because most of the complexity, address calculations for example, is handled by the instructions themselves), the average instruction length is rather high. You just have to run objdump -D -M intel (or the equivalent on your operating system) on any object or executable file on your computer to convince yourself of this fact. These small computers, on the other hand, let you manage code complexity by devolving complexities such as address calculations to the programmer rather than to the instruction set.
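
If you want to see this for yourself, disassembling even a trivial routine will do. The file and function below are just an illustration of mine; any code you have lying around works as well.

    /* addr_demo.c -- build with:  cc -O2 -c addr_demo.c
       then inspect:  objdump -D -M intel addr_demo.o     */
    long sum(const long *a, long n)
    {
        long total = 0;
        for (long i = 0; i < n; i++)
            total += a[i];   /* typically a scaled-index load, e.g. [rdi+rax*8] */
        return total;
    }

On x86-64, encoded instructions range anywhere from one to fifteen bytes; compare that with the one-byte opcodes above.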

This is a different set of trade-offs, where you think about memory organization a bit more carefully so as to avoid complex address calculations, something there’s no real reason to do when you can have automagic address calculations such as [eax+4*edx+offset]. And since you’re even more starved for registers, you also think about register allocation much more carefully.
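
To make that layout-conscious mindset concrete, here is a small C sketch (the names and the frame-buffer setup are mine, not from any specific machine): pick a power-of-two row width so that stepping through a row needs only a pointer increment rather than a multiply per pixel.

    #include <stdint.h>

    #define W 256   /* power-of-two row width: y*W is just a shift */

    /* General case: an arbitrary width w forces a multiply on every access. */
    uint8_t get_pixel(const uint8_t *fb, int w, int x, int y)
    {
        return fb[y * w + x];
    }

    /* Layout-conscious case: with 256-byte rows, walking a row is a bare
       pointer bump, and on an 8-bit machine the high byte of the address
       is essentially the row number. */
    void fill_row(uint8_t *fb, int y, uint8_t color)
    {
        uint8_t *p = fb + ((unsigned)y << 8);    /* y * 256 as a shift */
        for (int x = 0; x < W; x++)
            *p++ = color;                        /* no per-pixel address math */
    }

A modern optimizer will often compile both forms to similar code; the point is the habit of choosing layouts that make addressing cheap, not micro-optimizing this toy.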

The graphics on these machines were also quite crude by today’s standards: two bits per color, if you were lucky. Still, sprites, textures, and background images somehow had to fit in the available memory, and you have to realize that the total memory was something like 8, 16, or 32 KB. Yet people managed to fit quite a lot in there, making games and applications possible. Managing very small amounts of memory, down to the bit, meant reusing the same data over and over again, possibly with a different interpretation: draw it with palette 1, it’s a cloud; with palette 2, it’s a bush.
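
Here is a minimal sketch of that palette trick in C (the sprite, the palettes, and the character “colors” are all made up for illustration): a single 8-byte, 1-bit-per-pixel sprite, drawn twice under two palettes.

    #include <stdint.h>
    #include <stdio.h>

    /* One 8x8, 1-bpp sprite: 8 bytes total. */
    static const uint8_t blob[8] = {
        0x3C, 0x7E, 0xFF, 0xFF, 0xFF, 0xFF, 0x7E, 0x3C
    };

    /* Two palettes mapping bit values 0/1 to characters
       (stand-ins for colors): same data, two interpretations. */
    static const char cloud_pal[2] = { ' ', '*' };
    static const char bush_pal[2]  = { ' ', '#' };

    static void draw(const uint8_t *spr, const char pal[2])
    {
        for (int y = 0; y < 8; y++) {
            for (int x = 7; x >= 0; x--)
                putchar(pal[(spr[y] >> x) & 1]);
            putchar('\n');
        }
    }

    int main(void)
    {
        draw(blob, cloud_pal);  /* draw it as a cloud... */
        draw(blob, bush_pal);   /* ...and the same bytes as a bush */
        return 0;
    }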

Trading storage for computation was another trick used extensively. Textures would be generated using a (poor) pseudo-random generator. Objects would be drawn using different palettes. I remember a TRS-80 (or maybe it was the original IBM PC?) solitaire card game that rendered the cards at launch using the DRAW command, which offered a turtle-graphics-like set of primitives. It was, in essence, a vector representation of graphics, 20-ish years before SVG.
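
And a hedged sketch of the storage-for-computation trade, not taken from any particular game: a texture regenerated on demand from a 4-byte seed by a small linear congruential generator, instead of being stored pixel by pixel.

    #include <stdint.h>

    /* A deliberately simple LCG: the texture *is* the seed plus this loop,
       so a 64x64 pattern costs 4 bytes of storage instead of 4096. */
    static uint32_t lcg(uint32_t *state)
    {
        *state = *state * 1664525u + 1013904223u;  /* Numerical Recipes constants */
        return *state >> 24;                       /* keep the better high bits */
    }

    void make_texture(uint8_t *out, int w, int h, uint32_t seed)
    {
        uint32_t s = seed;
        for (int i = 0; i < w * h; i++)
            out[i] = (uint8_t)lcg(&s);   /* same seed -> same texture, every time */
    }

Call make_texture twice with the same seed and you get the same bytes back; the seed, in effect, is the asset.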

So, what can we learn from all this that applies to modern computers? Of course, 32 KB is now merely the size of a fat temporary variable. Pointers are 32 or 64 bits wide rather than 16. The instruction sets are complex rather than simple. Operating systems are complex and feature-rich rather than minimal and ROM-based. Programming languages are also much more complex. Still, the economy with which we used computing resources, memory and computation alike, can, I believe, be applied to today’s software to great extent, with real performance benefits.

I’m not saying we should use flat data structures and write everything in assembly language; I’m saying that we should exercise judgement when we write code. There are questions we must ask ourselves every single time we write code. Will this data structure splurge on memory? Will this piece of code be slow because it uses an inefficient data representation or a bad algorithm for the task at hand? Can we reorganize memory so that we stop wasting it on pointers and other bookkeeping information, and so that our algorithms can scan it in a cache-friendly way? Will this code be slow because I chose the wrong expressions for my ideas, so that the code generated by the compiler is necessarily inadequate?
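
To make the bookkeeping question concrete, a sketch of my own (not a prescription): the same values held in a pointer-chasing linked list and in a flat array. The flat layout spends nothing on next pointers and scans contiguously, which is exactly what the cache likes.

    #include <stddef.h>
    #include <stdint.h>

    /* Pointer-heavy layout: 8 bytes of payload plus 8 bytes of
       bookkeeping per node, scattered across the heap. */
    struct node { int64_t value; struct node *next; };

    int64_t sum_list(const struct node *n)
    {
        int64_t total = 0;
        for (; n != NULL; n = n->next)   /* each hop is a likely cache miss */
            total += n->value;
        return total;
    }

    /* Flat layout: zero bookkeeping, and sequential access
       the hardware prefetcher can follow. */
    int64_t sum_flat(const int64_t *v, size_t n)
    {
        int64_t total = 0;
        for (size_t i = 0; i < n; i++)
            total += v[i];
        return total;
    }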

*
* *

The Dynalogic Hyperion computer photo is (c) www.old-computers.com and is used with permission.

*
* *

Further reading:

  • www.6502.org has tons of info on the MOS Technology 6502, along with source code repositories.
  • The 6809 emulation page has information on emulators, the instruction set, and code repositories for the 6809.
  • www.z80.info has the same kind of information about the Zilog Z80.
