Cargo Cult Programming (part 1)

Programmers aren’t always the rational beings they like to believe they are. Very often, we close our eyes and make decisions based on what we think we know, and on what we have been told by more or less reliable sources. For example, we pick red-black trees rather than AVL trees because they are faster, without being able to justify in any detail why that must be so. Programming by this kind of decision is what I call cargo cult programming.


Originally, I wanted to talk about red-black vs. AVL trees and how they compare, but instead I’ll talk about the STL std::map, which is implemented using red-black trees in G++ 4.2, and std::unordered_map, a hash-table-based container introduced in TR1.

TR1’s std::unordered_map is a map that does not maintain any particular order among the keys it contains; it is therefore implemented as a hash table. The std::map is implemented, at least in G++ 4.2, as a red-black tree. Self-balancing trees change their shape with each insertion in order to keep most leaves at an equal depth, ensuring an O(\lg n) access time. The difference between an AVL tree and a red-black tree lies in how the tree is rebalanced; the red-black tree does about half as many operations as an AVL tree on insertion, making it somewhat faster, though how much faster remains to be quantified.
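Concretely, the two containers under test are declared as follows (the typedef names tree_map and hash_map are mine; with G++ 4.2 the hash map lives in <tr1/unordered_map> under namespace std::tr1, while modern compilers provide <unordered_map> directly):

    #include <map>
    #include <string>
    #include <unordered_map> // G++ 4.2: <tr1/unordered_map>, std::tr1::unordered_map

    typedef std::map<std::string, int>           tree_map; // red-black tree
    typedef std::unordered_map<std::string, int> hash_map; // hash table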

So I ran a few simple experiments to compare std::unordered_map and std::map. First, I got hold of the Zingarelli word list, containing some 585,000 Italian words (I can’t find a working URL, but I got the list a few years ago on a Scrabble-related page). I ran three tests: insertion, successful search, and failed search. The list was broken into two parts: 90% of the words were to be inserted, and the remaining 10% were kept for the unsuccessful search test.

The insertion test consisted of inserting all of Zingarelli’s list into both data structures in the simplest possible way: scanning the list sequentially and inserting the items one by one.
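Here is a minimal sketch of the insertion loop, assuming the word list is already loaded into a std::vector<std::string> (the function name time_insertions is mine, and I use C++11’s <chrono> for timing, which postdates the original test):

    #include <chrono>
    #include <map>
    #include <string>
    #include <unordered_map>
    #include <vector>

    // Returns the wall time, in seconds, taken to insert every word into
    // the container, one by one, in the order given. The same function
    // works for std::map and std::unordered_map since both share the
    // same insertion interface.
    template <typename Map>
    double time_insertions(const std::vector<std::string> & words, Map & m)
    {
        auto start = std::chrono::steady_clock::now();
        for (const std::string & w : words)
            m[w] = 1; // the mapped value is irrelevant; only the keys matter
        std::chrono::duration<double> elapsed =
            std::chrono::steady_clock::now() - start;
        return elapsed.count();
    }

Called once with each container type, it gives the timings below.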

                         wall time   op/s
    std::map             0.75 s      ~699,000
    std::unordered_map   0.76 s      ~690,000

Insertion times are about the same, so we cannot conclude anything special here, except maybe that insertion time is largely dominated by memory allocation and copying. For the successful search (10,000 tries in each case):

                         wall time   op/s
    std::map             0.047 s     ~209,000
    std::unordered_map   0.006 s     ~1.6×10⁶

Map look-up is immensely faster with std::unordered_map, a ratio of about 8:1! Failed searches exhibit the same behavior. For ~60,000 failed searches:

                         wall time   op/s
    std::map             0.263 s     ~37,700
    std::unordered_map   0.037 s     ~268,000

We see the same kind of difference here again, but the failed searches are much slower than the successful ones.
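The search tests follow the same pattern as the insertion sketch above; here is a minimal version (hits is accumulated so the compiler cannot discard the look-ups; add <cstddef> for std::size_t):

    #include <cstddef>

    // Returns the wall time, in seconds, for looking up every key in
    // 'keys'. For the successful-search test, 'keys' holds inserted
    // words; for the failed-search test, the held-out 10%. find() is
    // used rather than operator[], which would insert the missing keys.
    template <typename Map>
    double time_searches(const Map & m,
                         const std::vector<std::string> & keys,
                         std::size_t & hits)
    {
        auto start = std::chrono::steady_clock::now();
        for (const std::string & k : keys)
            if (m.find(k) != m.end())
                ++hits;
        std::chrono::duration<double> elapsed =
            std::chrono::steady_clock::now() - start;
        return elapsed.count();
    }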

When we repeat the experiment with integers (with the same kind of numbers: 500,000 integers, of which 10% are randomly chosen for the failed searches), we get essentially the same picture, but with a massive speedup, as strings are rather costly to copy. Indeed, for the insertion:

                         wall time   op/s
    std::map             0.19 s      2.3×10⁶
    std::unordered_map   0.10 s      4.5×10⁶

Now we see the algorithmic difference between the two data structures, as the time spent allocating and copying strings is eliminated. For the successful and failed searches, we get (again for 10,000 look-ups):

                         wall time   op/s
    std::map             0.015 s     0.7×10⁶
    std::unordered_map   0.005 s     2.0×10⁶

and:

                         wall time   op/s
    std::map             0.082 s     0.1×10⁶
    std::unordered_map   0.009 s     1.1×10⁶

The std::unordered_map therefore seems to be much faster than std::map. What have we lost in order to gain this speed? First, a lot of memory, as a hash table must retain a certain sparseness to sport its average constant-time look-ups; second, the ordering of keys. Enumerating the items of a hash map won’t produce an ordered list, but rather a scrambled version of it. std::map, on the other hand, allows lexicographic enumeration of its contents.
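A minimal, self-contained sketch of what is lost (the Italian words are arbitrary examples):

    #include <iostream>
    #include <map>
    #include <string>
    #include <unordered_map>

    int main()
    {
        const char * words[] = { "pesto", "aglio", "zucchina", "basilico" };

        std::map<std::string, int> ordered;
        std::unordered_map<std::string, int> unordered;
        for (const char * w : words)
        {
            ordered[w] = 1;
            unordered[w] = 1;
        }

        // Prints aglio basilico pesto zucchina: the in-order traversal of
        // the red-black tree yields the keys in lexicographic order.
        for (const auto & kv : ordered) std::cout << kv.first << ' ';
        std::cout << '\n';

        // Prints the same keys in some bucket-dependent, seemingly random
        // order that may change from one library implementation to another.
        for (const auto & kv : unordered) std::cout << kv.first << ' ';
        std::cout << '\n';
    }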

*
* *

So you’re reading this and thinking, mmppfh, of course it’s a hash table, you nitwit. Well, yes. Maybe so.

But here’s what prompted me to do the test. Red-black trees offer O(\lg n) access, but that assumes (quite wrongly) that comparing keys can be performed in constant time. This may be true for simple keys, such as machine-sized integers (known in C and C++ as int), but not so for more complex data, like strings and other structures. For strings, comparison may still be very fast because the cost of comparing two strings depends only on the length of their longest common prefix; if two long strings differ in their first few characters, comparison terminates rapidly and the cost is moderate. If, on the other hand, the strings share a very long prefix, the comparison algorithm must scan both strings until the end is reached or a difference is found, which can take a long time. So let p be the average common prefix length. The expected search time is now O(p \lg n), which can grow large if p is large.

For a hash table look-up, you must first compute the hash key, which can at best be done in time linear in the key length. In our first case, this length is the average string length. Let this average length be h; the cost of computing the hash is therefore proportional to h. The number of probes is c_n, a small constant that depends on the sparseness of the table and the number of items, n, it contains. For each of those c_n probes, a string comparison is performed at cost p, leading to an expected complexity of O(h + c_n p), assuming that secondary hashing is constant time.

Now, it is not clear when p \lg n \geqslant h + c_n p. Dividing by p leads to \lg n \geqslant \frac{h}{p} + c_n. With our word list, for example, \lg n \approx 19, so the tree can only appear faster than the hash table when \frac{h}{p} + c_n reaches about 19, that is, when keys are expensive to hash but cheap to compare. So, under some conditions, it may be possible to make the tree appear faster than the hash table.

On average, however, the hash map should win, because we expect c_n + \frac{h}{p} \ll \lg n.
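One way to probe this trade-off experimentally is to rerun the look-up tests with adversarial keys sharing a long common prefix, which makes p large and should hurt the tree far more than the hash table. A minimal sketch of such a key generator (the function name long_prefix_keys is mine):

    #include <cstddef>
    #include <string>
    #include <vector>

    // Builds n keys of the form <prefix><number>, where the prefix is a
    // long run of 'x' characters. Every pair of keys shares at least
    // prefix_length leading characters, so each comparison made along
    // the tree's search path must scan past the prefix before it can
    // decide which branch to take.
    std::vector<std::string> long_prefix_keys(std::size_t n,
                                              std::size_t prefix_length)
    {
        std::string prefix(prefix_length, 'x');
        std::vector<std::string> keys;
        keys.reserve(n);
        for (std::size_t i = 0; i < n; i++)
            keys.push_back(prefix + std::to_string(i));
        return keys;
    }

Conversely, long keys that differ in their first few characters make h large but p small, which is the regime where the tree could catch up with the hash table.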

*
* *

A cargo cult is described as:

A cargo cult is a type of religious practice that may appear in primitive tribal societies in the wake of interaction with technologically advanced, non-native cultures. The cults are focused on obtaining the material wealth of the advanced culture through magical thinking, religious rituals and practices, believing that the wealth was intended for them by their deities and ancestors.

Except for the part about ancestor spirits, and especially when considering magical thinking, cargo culting applies very often to how programmers write code and make decisions about data structures and algorithms. Choosing a red-black tree over an AVL tree, or over a splay tree, because we think it is somehow always better (sometimes because Ancestor X, a more or less reliable source or authority such as a more experienced programmer, a teacher, or some Internet dude, said so) is a form of cargo cult in which the programmer does not use rationality to its full extent to make a decision.

When choosing data structures, one must be fully aware of their dual cost. The first cost is run time: theoretical complexity and actual implementation-dependent run times may be quite different. The second cost is memory usage: memory is large on modern systems, but not infinite. A particularly wasteful method that offers constant-time access to the data may use as much as, say, ten times the memory of a method that gives you O(\lg n) access. If for a small data set this 10× memory usage poses no particular problem, it may be quite different with a largish data set. It’s all a very delicate balancing act between run-time performance and scalability.

It’s very hard to trust one’s gut feelings about a data structure and the data put into it. Combined with the access patterns, the data structure may yield very counter-intuitive performance. Rather than giving in to magical thinking and cargo cult programming, you should always take a little time to validate your assumptions and hypotheses about the data, the data structure, and the access patterns, as a data structure’s behavior is clearly not independent of the data and the access patterns.

Consider this very simple example. We have a simple binary tree and a list of strings. Binary trees offer O(\lg n) insertion and access times, so a binary tree is, a priori, a good choice. Now, the list of strings is composed of words, but it so happens that it is already sorted. If we insert the strings in the order they appear in the list, we get a degenerate binary tree that is in fact a list! Indeed, insertions are always performed at the far right of the tree, causing the tree to degenerate into something we could call a vine, and all operations degenerate to linear time. If we randomize the list and insert the words into the tree in that randomized order, we get a ragged tree, but one about equally deep everywhere, giving the expected O(\lg n) access and insertion times.
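To make this concrete, here is a minimal sketch with a naive, non-balancing binary search tree (std::map itself rebalances and does not suffer from this; the example shows what happens with a hand-rolled tree, and the nodes are intentionally leaked to keep the sketch short):

    #include <algorithm>
    #include <iostream>
    #include <random>
    #include <string>
    #include <vector>

    struct node
    {
        std::string key;
        node * left = nullptr, * right = nullptr;
        node(const std::string & k) : key(k) {}
    };

    // Naive, non-balancing insertion: each new key walks down the tree
    // until an empty spot is found; nothing is ever rebalanced.
    node * insert(node * root, const std::string & k)
    {
        if (!root) return new node(k);
        if (k < root->key) root->left = insert(root->left, k);
        else root->right = insert(root->right, k);
        return root;
    }

    int depth(const node * root)
    {
        if (!root) return 0;
        return 1 + std::max(depth(root->left), depth(root->right));
    }

    int main()
    {
        std::vector<std::string> words = { "a", "b", "c", "d", "e", "f", "g", "h" };

        node * sorted_tree = nullptr;
        for (const auto & w : words) sorted_tree = insert(sorted_tree, w);
        // Depth 8: every node hangs to the right, a vine.
        std::cout << "sorted order:   depth " << depth(sorted_tree) << '\n';

        std::mt19937 gen(42); // arbitrary fixed seed
        std::shuffle(words.begin(), words.end(), gen);
        node * shuffled_tree = nullptr;
        for (const auto & w : words) shuffled_tree = insert(shuffled_tree, w);
        // Typically much shallower, close to the optimal depth of lg 8 + 1.
        std::cout << "shuffled order: depth " << depth(shuffled_tree) << '\n';
    }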

The sorted list / binary tree example is an over-simplistic one, I admit, but it makes the point: the programmer used a data structure cargo-culted to offer O(\lg n) access time, but because of his incomplete comprehension of the data (the sorted list) and of the access pattern (inserting items sequentially from the list), the result was disastrous.

But the thing is, we all do that to a certain extent!

3 Responses to Cargo Cult Programming (part 1)

  1. Nadav says:

    I enjoyed this blog post. Until now, I had not heard of cargo cults, but I have experienced the phenomenon.

    I believe that algorithm implementations can be hidden behind interfaces, especially in C++. I would consider using a typedef to define std::*map as my own map name. Then, when the application is complete, I would be able to benchmark both algorithms.
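    Something like this minimal sketch (the dictionary name is only for illustration):

        typedef std::map<std::string, int> dictionary;
        // typedef std::unordered_map<std::string, int> dictionary;

        dictionary d; // the rest of the code only ever names 'dictionary'

    Swapping the two typedefs switches the whole program between containers, as long as no order-dependent operation is used.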

    • Steven Pigeon says:

      We have all experienced this in some fashion, I think. It is especially true of things that are incompletely understood, and hash tables are a good example: I hear again and again that the hash table size must be a prime number. Well, OK, but only if you’re using quadratic residue secondary search (and it has to be a 4j+3 prime, a result due to Maurer, I think). Otherwise the size doesn’t really matter; the table just has to be sparse enough.

      Yes, you should be able to hide such details as the actual container behind an interface. Better yet, in C++, you should be able to use templates/metaprogramming to switch between containers. Alas, the STL isn’t quite container-agnostic (a major shortcoming, in my opinion), so it proves rather tricky to get this right. Indeed, not all containers implement operator[], and not all iterators return the same thing; some return a pointer, some a pair.

  2. […] a number of different occasions, I briefly discussed Hash Functions, saying that if a hash function needn’t be very […]
