The best-case depth for a search tree is $\log_b n$, if $b$ is the arity (or branching factor) of the tree. Intuitively, we know that if we increase $b$, the depth goes down, but is that all there is to it? And if not, how do we choose the optimal branching $b$?
While it’s true that an optimal search tree will have depth $\log_b n$, we can’t just increase $b$ indefinitely, because we also increase the number of comparisons we must do in each node. A $b$-ary tree will have (in the worst case) $b-1$ keys per node, and for each, we must do comparisons for less-than and equality (the rightmost child will be searched without comparison; it is the “else” of all the comparisons). We must first understand how these comparisons affect average search time.
If we use a naïve comparison scheme, each key asks for two (potentially expensive) comparisons: one to test whether the sought value is smaller than the key, and one to test whether they’re equal. If neither test is true, we move on to the next key. If none of the tests are true, it is an “else” case and we go down the rightmost branch. That scheme does way more work than necessary and would ask for $2(b-1)$ comparisons per node (because there are $b-1$ keys per node).
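To make the counting concrete, here is a minimal sketch of that naïve per-node scan; the node layout and the use of std::string keys are illustrative assumptions, not the tree implementation discussed here.

#include <cstddef>
#include <string>
#include <vector>

// Illustrative internal node: b-1 sorted keys and b children, where children[i]
// leads to values smaller than keys[i] and children.back() is the "else" branch.
struct node
{
    std::vector<std::string> keys;
    std::vector<node*> children;
};

// Naïve scan: up to two expensive string comparisons per key, 2(b-1) per node.
const node* naive_step(const node& nd, const std::string& sought, bool& found)
{
    found = false;
    for (std::size_t i = 0; i < nd.keys.size(); i++)
    {
        if (sought < nd.keys[i]) return nd.children[i];              // expensive test #1
        if (sought == nd.keys[i]) { found = true; return nullptr; }  // expensive test #2
    }
    return nd.children.back();  // "else": go down the rightmost branch
}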
The main problem with this straightforward approach is that comparisons can be very expensive. While comparing two integral types (int and the like) is often thought of as “free”, comparing strings is expensive. So you clearly do not want to compare two strings twice, once for < and once for =. The much better approach is to use a three-way comparison, also known as the “spaceship operator”.
The spaceship operator, <=>, is C++20’s version of C’s old-timey qsort comparison function (it is in fact much better, because it also automagically provides all the comparison operators). Basically, a<=>b returns a result that is <0 if a<b, equal to 0 if they’re equal, and >0 if a>b. We can therefore store the result of the one expensive comparison, then do inexpensive comparisons against that result. That reduces the number of expensive comparisons to $b-1$ per node.
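A sketch of the same scan with the spaceship operator (same illustrative node layout as above): the expensive comparison happens once per key, followed only by cheap tests against zero, for at most $b-1$ expensive comparisons per node.

#include <compare>
#include <cstddef>
#include <string>
#include <vector>

struct node
{
    std::vector<std::string> keys;   // b-1 sorted keys
    std::vector<node*> children;     // b children; children.back() is the "else" branch
};

const node* spaceship_step(const node& nd, const std::string& sought, bool& found)
{
    found = false;
    for (std::size_t i = 0; i < nd.keys.size(); i++)
    {
        const auto cmp = sought <=> nd.keys[i];   // one expensive three-way comparison
        if (cmp < 0) return nd.children[i];       // cheap test on the stored result
        if (cmp == 0) { found = true; return nullptr; }
    }
    return nd.children.back();
}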
The search complexity, counting the number of comparisons in the worst case for the best-case tree, is

$(b-1)\log_b n = (b-1)\dfrac{\ln n}{\ln b}$,

a strictly increasing function in $b$ (as we can treat $n$ as a constant, since we’re not optimizing for $n$). We therefore conclude that $b=2$ is the optimal solution.
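For instance, evaluating at $n = 10^6$ (so $\ln n \approx 13.8$):

$(b-1)\dfrac{\ln n}{\ln b} \approx 19.9,\ 25.2,\ 29.9,\ 46.5 \quad\text{for } b = 2, 3, 4, 8$,

so every increase in $b$ buys a shallower tree but costs more comparisons overall.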
Except when it isn’t
We have neglected, or rather abstracted away, something important: the cost of accessing a node. In main memory, it’s very convenient to suppose that reading a node is free, that is, incurs no cost. That’s of course false, because even reading from cache incurs a delay, fortunately a very small one. It is even more false if we read the node from disk (yes, even NVMe). A classical spinning-rust disk reading a block will cause a delay of a few milliseconds, which may be really, really expensive relative to the comparison of two keys (once they are in memory).
The new cost function to optimize will be

$\big(c(b-1) + d\big)\log_b n = \big(c(b-1)+d\big)\dfrac{\ln n}{\ln b}$,

with $c$ the comparison cost and $d$ the cost of accessing the node. We can set $c$ to 1 and make $d$ relative to $c$; for nodes on disk, $d$ will be orders of magnitude larger than $c$. Now, luckily for us, this function is convex in $b$, so we can solve for the optimal $b$ given $n$ and $d$.
To do so, we take the derivative and find where it is zero, that is, solve

$\dfrac{d}{db}\left[\big(c(b-1)+d\big)\dfrac{\ln n}{\ln b}\right] = \ln n\,\dfrac{c\,b\ln b - c(b-1) - d}{b\,(\ln b)^2} = 0$

for $b$. Since the denominator is strictly increasing in $b$, we must solve for a zero numerator. However, there’s a slight complication: the solution isn’t algebraic. We must turn to the Lambert W function, which is itself evaluated numerically. It gives us, with $c=1$,

$b^{*} = e^{W\left(\frac{d-1}{e}\right) + 1}$.
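Since standard C++ has no Lambert W, here is a minimal sketch that recovers the same optimum by applying Newton’s method to the zero-numerator condition $b\ln b - b + 1 - d = 0$ (with $c=1$); the access cost d = 1000 below is just an illustrative value.

#include <cmath>
#include <cstdio>

// Solve the zero-numerator condition b*ln(b) - b + 1 - d = 0 (with c = 1) by
// Newton's method; the closed form is b = exp(W((d-1)/e) + 1).
double optimal_b(double d)
{
    double b = 2.0;                             // any starting point > 1 will do
    for (int i = 0; i < 100; ++i)
    {
        const double f  = b * std::log(b) - b + 1.0 - d;
        const double fp = std::log(b);          // derivative of the left-hand side
        const double next = b - f / fp;
        if (std::fabs(next - b) < 1e-9) return next;
        b = next;
    }
    return b;
}

int main()
{
    // Illustrative value: accessing a node costs as much as 1000 comparisons.
    std::printf("optimal b for d = 1000: %.1f\n", optimal_b(1000.0));
    return 0;
}

With d = 1000 this converges to b ≈ 226: when node accesses dominate, nodes should be very wide indeed.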
The following graph shows the function’s surface with $c$ and $d$ fixed, one axis being $b$, the block size (in keys), and the other being $n$, the number of items in the tree. The green plane shows the optimal $b$ found with the Lambert W function.
*
* *
To conclude, binary trees are optimal when we ignore the cost of accessing a node, but they aren’t when accessing a node becomes costly. When we access nodes on disk, at a high cost, it becomes interesting to bundle many keys in a node, and we gave the optimal solution. However, the problem is often solved much more directly: we just fit as many keys as possible in an I/O block. Disks operate on small blocks of (typically) 512 bytes, but file systems operate on somewhat larger, fixed-size blocks, something like 4 or 8 KB. A good strategy is therefore to fit as many keys as possible in that block, since even if the number of comparisons per node is large, their cost remains small compared to bringing the block into main memory.
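As a back-of-the-envelope sketch of that last strategy (the 4 KiB block, 64-bit keys, and 64-bit child references below are assumptions for illustration, not measurements):

#include <cstddef>
#include <cstdio>

int main()
{
    // Illustrative sizes: 4 KiB filesystem block, 64-bit keys, 64-bit child block numbers.
    const std::size_t block_size     = 4096;
    const std::size_t key_size       = 8;
    const std::size_t child_ref_size = 8;

    // A node with b children stores b-1 keys and b child references:
    // (b-1)*key_size + b*child_ref_size <= block_size, solved for the largest b.
    const std::size_t b = (block_size + key_size) / (key_size + child_ref_size);
    std::printf("branching: %zu, keys per node: %zu\n", b, b - 1);
    return 0;
}

With those sizes a node holds 255 keys, so a tree over a million keys is traversed in at most three block reads.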
Hello,
It seems like you can do way better than 2(b-1) comparisons even if you insist on comparing to every element. Can you fully specify what the search should return? In particular, if many elements of the tree compare equal to the query, which should be returned?
Assuming that it’s ok to find any of the elements that compare equal to the query, if you have xs[1] <= needle (equivalently, !(needle < xs[1])) and xs[2] <= needle and xs[1] <= xs[2] (this is sort of a given), you don't need to check for xs[1] == needle. You could afford to check only the greatest element of the node that is <= query for equality.
I think that multiplicities (many items with the same keys) are a somewhat different thing. I would probably manage that by having a node with the key and a list of objects or a counter. If they are distinguishable, then it means the key used for the tree is partial?
But otherwise, I think that would work: it would be (b-1) per node, and O((b-1)lg n) in the average case (not just worst case) even for perfect trees. But it would be a bigger O(lg n). We should work out the specifics, but it’s probably something like (multiplicities*lg n).
Your article is on reddit (I don’t think posted by you) https://www.reddit.com/r/cpp/comments/op79gh/binary_trees_are_optimal_except_when_theyre_not/
I made some comments there.
Then I’ll comment here:
2 a) SIMD compares are cute for bytes, words, or 64-bit integers, but they do not work as well for an abstract type T with an overloaded comparison operator. Think of just strings.
3 a) disk blocks are 512 bytes; the typical IO block IS 512 bytes. But *filesystems* group them and manage larger blocks. That’s what actually gets written.
For 3: we moved to 4k disk blocks a number of years back. All SSDs use 4k pages, and “AF” (Advanced Format) hard disks do too. 512-byte sectors are legacy.
…or not. On my box, with nvme storage:
/home/steven> sudo blockdev --getsize64 /dev/nvme0n1p3
65861058560
/home/steven> sudo blockdev --getsize /dev/nvme0n1p3
128634880
The first gives total size, the second, the number of sectors. Ratio: 512.
But anyway, the point was mainly that the basic device IO block and filesystem blocks do not necessarily match, and usually, FS blocks are larger.
Compatibility causes funny things: --getsize is in 512-byte sectors even if that’s not accurate for the device! Try --getpbsz (get physical sector size)
512 ¯\_(ツ)_/¯
Huh.
The other options that can sometimes hint at it are --getiomin and --getioopt
But… maybe you do have a 512 byte sector SSD ¯\_(ツ)_/¯
> sudo nvme list
[sudo] password for steven:
Node SN Model Namespace Usage Format FW Rev
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1 1928823000012856535F Force MP600 1 2.00 TB / 2.00 TB 512 B + 0 B EGFM11.1
yup.
Just noticed that’s a Force MP600 – we have those at work. This is what Windows fsutil reports for it for us:
4k physical sectors.
Something seems broken with the data reported by your system. The MP600 emulates 512 byte sectors (like most modern 4k drives) for compatibility with systems that don’t understand 4k sectors, but it ends up turning 512 byte sector writes into a 4k read+modify operation so it’s less than ideal.
Can you provide clarification on “A good strategy is therefore to fit as many keys as possible in that block”? I’m guessing you mean: fit the top branches into one block until you can fit the remaining sub-tree into one or two.
If accessing a single key causes you to read from disk, you might as well have this expensive read fetch a bunch of keys. And no, not necessarily just the top branches, but at all levels of the tree. I guess there are many ways to go about it (say, have a bunch of nodes in the same block, or nodes with a great many branches (and therefore a great many keys)).
Why do we need b-1 comparisons for each node? Can’t we do a binary search through the b-1 keys in each node? That should give optimal performance if b is any power of two.
Absolutely. That’s a bonus question ;)