My first true computer, a TI 99/4a (which I also discussed in a previous entry), had 16 or 48 KB of RAM, depending on whether or not you had one of the memory expansion cartridges, and that limited quantity of memory severely curbed the complexity of the programs one could write on the machine. However, the limited complexity of the programs, the relative crudeness of the development environment (a BASIC shell), and the slow execution speeds weren’t very obvious to me back then. They were somewhat mitigated by the novelty of the computer itself as a machine, and by the perpetual, intense excitement of discovery. The arduous work of designing programs to save memory, to fit in more graphics or more code, or even to get them to work at all, felt less like a constraint than a challenge.
The same kind of constraints—or challenge—followed me over the years as I moved on to different computers. Despite their being more powerful, both faster and sporting more memory, the challenge remained, because as the computers got better, so did I at programming. I kept asking more out of the machine, writing increasingly complex programs needing either more memory or more speed, often both. That meant better algorithms, better data structures, and better implementations1.
In addition to resorting to nasty machine-specific hacks, I often had recourse to what one could call frugal data structures: structures that store information in a space-efficient way, sometimes exchanging computation for storage, using alternative encodings, but always avoiding wasted memory. The practice of writing frugal programs is a demanding one, but it brings its just rewards when done correctly: increased usability, enhanced portability, lower system load, and often better throughput. Writing programs that are frugal by design isn’t so much premature optimisation as a different mindset altogether.
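As a minimal sketch of the idea—exchanging a little computation for storage—consider a hypothetical bit-set (my own illustration, not from the text above) that packs boolean flags one per bit instead of one per byte, paying a shift and a mask on each access for an eight-fold reduction in memory:

```c
#include <stdint.h>
#include <string.h>

/* A frugal bit-set for N boolean flags: one bit per flag rather
   than one byte (or one int), at the cost of a shift and a mask
   per access. N is a hypothetical example size. */
#define N 1000

typedef struct
 {
  uint8_t bits[(N + 7) / 8]; /* 125 bytes instead of 1000 */
 } bitset_t;

static void bitset_init(bitset_t * s) { memset(s->bits, 0, sizeof s->bits); }

static void bitset_set(bitset_t * s, int i)
 { s->bits[i >> 3] |= (uint8_t)(1u << (i & 7)); }

static void bitset_clear(bitset_t * s, int i)
 { s->bits[i >> 3] &= (uint8_t)~(1u << (i & 7)); }

static int bitset_get(const bitset_t * s, int i)
 { return (s->bits[i >> 3] >> (i & 7)) & 1; }
```

The shifts and masks cost a few cycles per access, but the structure occupies an eighth of the memory of the naïve byte-per-flag array—exactly the computation-for-storage trade described above.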
But let us not lose perspective here. If frugal programs are more often than not to be preferred over memory hogs, sometimes using available memory quite liberally is the way to go. Not only can it very often lead to better run-time performance by avoiding more complex data structures, it can also, consequently, greatly simplify code. Indeed, wouldn’t you rather use a simple array for a look-up table offering constant-time access than an intricate data structure that uses barely more memory than strictly necessary but increases the run-time significantly because it has a higher algorithmic complexity?
While it is very obviously a trick question, where the answer lies in the delicate balance between run-time and resource constraints (which entirely depends on your computing ecosystem), one must keep in mind that, if conditions are right, memory is expendable.
If there was a time when a few megabytes counted as mass storage, today, given the current state of high-end workstations and servers, a few megabytes are hardly more than a somewhat fat temporary variable. Basically, a few megabytes are worth, quite literally, only a pocketful of spare change.2
I am not advocating the careless use of memory, bad design, or laziness—quite the contrary, in fact. But I am saying that one should have the wisdom to weigh the pros and cons in memory usage design decisions, freed from that nagging feeling that using more memory than would be theoretically possible with some complicated alternative implementation is tantamount to wasting gasoline at $10 a gallon.
1 Not better in absolute terms; just relative to a more naïve approach.
2 Let’s say American coins, because Canadian money has $1 and $2 coins.