The usual way of forming a search tree from a list is to scan the list and insert each of its elements, one by one, into the tree, leading to a(n expected) run-time of O(n lg n).

However, if the list is sorted (in ascending order, say) and the tree is not one of the self-balancing varieties, insertion is O(n²), because the “tree” created by the successive insertions of sorted keys is in fact a degenerate tree: a list. So, what if the list is already sorted and we don’t really want a self-balancing tree? Well, it turns out that you can build a(n almost perfectly) balanced tree in O(n).
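To see the degenerate case concretely, here is a minimal sketch (names and structure are mine, not the post’s) of a plain, non-balancing BST insertion fed with already-sorted keys; every key ends up on the right spine, so the “tree” has depth n:

```python
# A plain, non-balancing BST insert, to show why inserting
# already-sorted keys degenerates into a linked list.

class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    # standard BST insertion, no rebalancing
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def depth(root):
    # number of nodes on the longest root-to-leaf path
    return 0 if root is None else 1 + max(depth(root.left), depth(root.right))

root = None
for k in range(1, 9):      # sorted keys 1..8
    root = insert(root, k)

print(depth(root))         # 8: every key hangs off the right spine
```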

Let us make the simplifying assumption that we can have two types of nodes in the tree: leaves, which contain the actual data, and internal nodes (or just nodes for the remainder of this post), which hold only a key.

The first strategy that comes to mind, using this assumption, is a method reminiscent of how Huffman Codes are constructed. That is, we scan the original list from left to right: we take two nodes (if there are at least two left), remove them from the list, and insert back in their place a new node containing the key of the second of the two (remember, the list is supposed to be sorted in ascending order), with the two original nodes (or leaves) as children. That is, the operation goes something like this:

(Notice the metaphor: leaves are green, internal nodes brown.) We simply re-scan the list until we have a single remaining node, which will be the root of the tree. The procedure is simple enough, indeed. Let us work out a full example with 5 leaves:

Then we do one pass of merges:

Then another:

…and finally:

What do you notice? The tree isn’t all that balanced. In fact, it’s *really* unbalanced. What went wrong? Well, while this algorithm is optimal for Huffman Codes, for search trees it clearly is not. For one thing, Huffman’s method wants to build trees with average path lengths as close as possible to the source’s entropy; here our goal is to have path lengths as equal as possible.
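The repeated merge passes described above can be sketched as follows (a rough rendering under my own naming; nodes are tuples, not a real tree class), reproducing the 5-leaf example and its uneven leaf depths:

```python
# A sketch of the Huffman-like pass: repeatedly scan the list,
# replacing adjacent pairs by an internal node keyed on the second
# item, until one node remains.

def key_of(n):
    # both ('leaf', key) and ('node', key, l, r) keep the key at index 1
    return n[1]

def merge_pass(nodes):
    out = []
    i = 0
    while i + 1 < len(nodes):
        left, right = nodes[i], nodes[i + 1]
        out.append(('node', key_of(right), left, right))
        i += 2
    if i < len(nodes):         # odd node left over, carried as-is
        out.append(nodes[i])
    return out

def build_naive(leaves):
    nodes = [('leaf', k) for k in leaves]
    while len(nodes) > 1:
        nodes = merge_pass(nodes)
    return nodes[0]

def leaf_depths(n, d=0):
    if n[0] == 'leaf':
        return [d]
    return leaf_depths(n[2], d + 1) + leaf_depths(n[3], d + 1)

print(leaf_depths(build_naive([1, 2, 3, 4, 5])))   # [3, 3, 3, 3, 1]
```

With 5 leaves, four of them end up at depth 3 and the last sits alone at depth 1, which is exactly the lopsidedness the post complains about.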

Fortunately, we can “repair” the algorithm quite easily, and it’s another code that will come to our aid: Phase-In Codes, which have average code lengths close to lg n (as I show here).

The key observation is that if the length of the list is a power of two, then the list can be successively merged together by the above algorithm and yield a tree that has *exactly* depth lg n. Therefore, if we somehow modify the list to have exactly 2^⌊lg n⌋ items in it, we’ll be able to have our optimal depth. Phase-In Codes provide the means to do so. If n = 2^k + b,

with 2^k the largest power of two not exceeding n and 0 ≤ b < 2^k (thus imposing the uniqueness of the solution, with k = ⌊lg n⌋), we can merge the first 2b nodes, and it will result in a list of 2^k nodes (b of which are now internal nodes). Once we have merged the first 2b nodes, we stop the iteration there and restart from the beginning of the list, which is now of length 2^k, and it will yield the most-equal-depth tree we wanted.

Again, with n = 5: 5 = 2^2 + 1, so b = 1, and we take the two first nodes

and merge them:

and *voilà*, we proceed with the successive merges:

and then

and we have a tree of almost equal depth, with an average depth of 12/5 = 2.4 (while the optimum lg 5 ≈ 2.32) rather than 13/5 = 2.6, a considerable improvement, even for this trivial tree.
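The repaired construction can be sketched like this (again with my own helper names): compute b from n = 2^k + b, pre-merge the first 2b nodes pairwise so the list shrinks to 2^k items, then run full merge passes:

```python
# Repaired construction: with n = 2**k + b, first merge the leading
# 2*b nodes pairwise (leaving 2**k list items), then merge full passes.

def build_balanced(leaves):
    nodes = [('leaf', k) for k in leaves]
    n = len(nodes)
    k = n.bit_length() - 1          # 2**k is the largest power of two <= n
    b = n - (1 << k)
    # pre-pass: merge the first 2*b nodes pairwise
    head = [('node', nodes[i + 1][1], nodes[i], nodes[i + 1])
            for i in range(0, 2 * b, 2)]
    nodes = head + nodes[2 * b:]
    # now len(nodes) == 2**k: full passes keep the depths (almost) equal
    while len(nodes) > 1:
        nodes = [('node', nodes[i + 1][1], nodes[i], nodes[i + 1])
                 for i in range(0, len(nodes), 2)]
    return nodes[0]

def leaf_depths(node, d=0):
    if node[0] == 'leaf':
        return [d]
    return leaf_depths(node[2], d + 1) + leaf_depths(node[3], d + 1)

depths = leaf_depths(build_balanced([1, 2, 3, 4, 5]))
print(depths, sum(depths) / len(depths))   # [3, 3, 2, 2, 2] 2.4
```

Each node is created exactly once and each pass halves the list, so the whole build is O(n), and the leaf depths differ by at most one.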

* * *

While it is not the customary way of representing a tree in memory, the segregation of leaves and internal nodes may be justified in contexts where the leaves and the tree itself live in different memory locations. For example, if the leaves are rather large compared to just the key, they may have to reside in external memory—that is, the disk—while you would still like to keep the index, the tree structure itself, in memory.

One could also argue that this scheme is cache-friendlier, as one can use some kind of compact tree storage to maintain the keys and the structure of the tree in a small amount of memory, with keys that are nearby in the tree being laid out nearby in memory, thus reducing cache misses quite a lot.

In all cases, we have still reduced the complexity of building the initial tree from O(n lg n) to essentially O(n).

I think if you have a list of sorted items, you can probably create an AVL-tree in O(N), which is actually self-balancing. The trick is an O(1) method that picks the root item from a list of length N; then you can recursively create the left and right subtrees too.

If you have an array-list, you indeed have O(1) access to the pivot and the problem simplifies quite a lot. I suppose we could do the same with a skip-list. If you have an ordinary linked-list, you cannot do much better without using extra memory… basically building only the top levels of the skip-list. The resulting algorithm would still be something of O(n lg(n/k)), where k is the size of a chunk that uses only a regular linked list (the structure would allow access to entry points in the list in at worst O(lg(n/k))).

Well, if the subtree-building function not only builds subtrees, but also tells the last link it used to do that, and you can calculate the pivot index in O(1), then you can still do it in O(N) with linked lists: the recursive function should process the left subtree first, then extract pivot, then process right subtree. With this approach extracting pivot element also takes O(1), and the whole algorithm runs in O(N) for linked lists too.
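The commenter’s idea can be sketched as follows (all names are mine): build the tree in-order, advancing a single cursor through the linked list, so each cell is consumed exactly once and the whole build is O(n) even without random access:

```python
# In-order construction of a balanced BST from a sorted linked list.
# A single cursor walks the list once; the recursion shape is fixed
# by the element count, so no pivot search is needed.

class Cell:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def sorted_list_to_bst(head, n):
    cursor = [head]                 # boxed so the nested function can advance it

    def build(count):
        if count == 0:
            return None
        left = build(count // 2)    # build left subtree first...
        node = {'key': cursor[0].value, 'left': left, 'right': None}
        cursor[0] = cursor[0].next  # ...then consume the pivot cell in O(1)...
        node['right'] = build(count - count // 2 - 1)   # ...then the right subtree
        return node

    return build(n)

def height(t):
    return 0 if t is None else 1 + max(height(t['left']), height(t['right']))

# build the list 1 -> 2 -> ... -> 7
head = None
for v in range(7, 0, -1):
    head = Cell(v, head)

t = sorted_list_to_bst(head, 7)
print(height(t))   # 3: perfectly balanced for 7 keys
```

Note that this sketch stores the keys in every node rather than segregating leaves from internal nodes as the post does; it illustrates the comment’s O(n) claim, not the post’s exact structure.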


Very interesting research! Just one concern: at the least, we have to know the length of the source list before processing. In most cases, we can compute the length in O(n) time (this will be the first pass). So, keeping this fact in mind, instead of calculating the length of the list we can convert it into an array and then end up with a very simple and straightforward algorithm (with random access to the array). Anyway, this is still an interesting approach for functional settings, where we don’t have arrays.

Yes, I explored that type of addressing in a previous post. However, while that scheme is interesting for heaps (where the ordering invariant is different), it is not that useful for a tree that will have items added or deleted, because 1) the array will have to be resized and 2) insertions will require large portions of the array to be copied.