In a recent blog entry, Jeff Atwood makes the case that “Hardware is Cheap, Programmers are Expensive”, arguing that in many cases it’s more cost-efficient to throw cheap, fast hardware at a performance problem than to invest programmer time in actually improving the code at the algorithmic and implementation level.
I disagree with him, however, on several points.
First, throwing in more hardware makes sense only up to a certain point. For exactly one project and one installation, it may make perfect sense to just add an extra blade to keep the system running without major tweaks. After a number of blades, though, it stops making economic sense for your customers. Clearly, the more blades you ask them to buy to run your system, the harder they’ll look at the competition. Why buy the solution that needs 32 server blades when there’s one that runs on only 4? Moreover, you (and your customer) will quickly run into power consumption and heat dissipation problems. This all sounds cute and funny when you have a five-blade rack; it’ll be much less so when your local hydro company has to install a 500 kW transformer in your backyard, half of which goes to powering the cooling systems.
Second, adding extra hardware (or waiting for Moore’s law to provide faster computers) may not be realistic at all for your customer. If your software stops being cost-effective for your customers, they’ll turn to other solutions (unless you are already in a dominant position and keep them in some kind of lock-in). It’s simply not realistic to ask your customers to buy a quad-core computer just to use your spreadsheet or word processor. Your only solution is to provide better software.
Providing superior software means having superior design from the start. I’m not talking about premature optimization, where all kinds of wacky micro-optimization hacks are pointlessly included in early development. I am talking about committing to better programming: taking time to think out the right abstractions, taking time to think about how your tasks can exploit thread- or core-level parallelism (or even outright distribution), taking time to carefully select the right data structures and algorithms for your use-cases, making room for hardware acceleration and advanced instruction sets, fighting feature creep and over-engineering, and even resorting to use-case-directed minimalism.
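To make the data-structure point concrete, here is a minimal, hypothetical sketch (my own illustration, not from Atwood’s post): the same filtering task written twice, once against a list and once against a set. The only change is the choice of data structure, yet the asymptotic cost drops from O(n·m) to roughly O(n + m) — a win that no extra blade can match as inputs grow.

```python
def filter_words_list(document, stopwords):
    """Naive version: `stopwords` is a list, so every `in` test
    scans the whole list -- O(m) per word, O(n*m) overall."""
    return [w for w in document if w not in stopwords]

def filter_words_set(document, stopwords):
    """Better version: one O(m) conversion to a set, then each
    membership test is expected O(1) -- roughly O(n + m) overall."""
    stop = set(stopwords)
    return [w for w in document if w not in stop]

if __name__ == "__main__":
    doc = ["the", "quick", "brown", "fox", "and", "the", "dog"]
    stops = ["the", "and", "a"]
    # Both versions return the same result; only the cost differs.
    print(filter_words_list(doc, stops))  # ['quick', 'brown', 'fox', 'dog']
    print(filter_words_set(doc, stops))   # ['quick', 'brown', 'fox', 'dog']
```

On a few words the difference is invisible; on millions of words against thousands of stopwords, it is the difference between a program that needs a bigger server and one that doesn’t.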