## Average node depth in a Full Tree

May 14, 2013

While doing something else I stumbled upon the interesting problem of computing the average depth of the nodes in a tree. The depth of a node is the distance that separates it from the root. You can either decide that the root is at depth 1, or you can decide that it is at depth 0, but let's decide on depth 1. So an immediate child of the root is at depth 2, its children are at depth 3, and so on until you reach the leaves, the nodes with no children.

So the calculation of the average node depth (including leaves) in a tree becomes interesting when we want to know how far a constructed tree is from the ideal full tree, as a measure of (application-specific) performance. After searching a bit on the web, I found only incomplete or incorrect formulas, or formulas stated without proof. This week, let us see how we can derive the result without (too much) pain.
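To make the target concrete, here is a minimal sketch (function names are mine) for the special case of a perfect binary tree of height $h$, with the root at depth 1. It uses the standard identity $\sum_{d=1}^{h} d\,2^{d-1} = (h-1)2^h + 1$, cross-checked against direct summation:

```python
def avg_depth_full(h):
    """Average node depth (root at depth 1) of a perfect binary tree of height h."""
    n = 2**h - 1                  # total number of nodes
    total = (h - 1) * 2**h + 1    # closed form of sum_{d=1}^{h} d * 2^(d-1)
    return total / n

def avg_depth_brute(h):
    """Same quantity by direct summation, for cross-checking."""
    return sum(d * 2**(d - 1) for d in range(1, h + 1)) / (2**h - 1)
```

For $h=3$, for instance, the average depth is $17/7 \approx 2.43$, already close to the maximum depth 3, since most nodes sit in the bottom levels.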

## A Special Case…

March 26, 2013

Expressions with floors and ceilings ($\lfloor x \rfloor$ and $\lceil y \rceil$) are usually troublesome to work with. There are cases where you can essentially remove them by a change of variables.

Turns out, one form that regularly comes up in my calculations is $\lfloor \lg x \rfloor$, and it bugged me a while before I figured out the right way of getting rid of it (sometimes).
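For positive integers, $\lfloor \lg x \rfloor$ can at least be *evaluated* exactly, without floating point, through the binary length of $x$. A small sketch (the function name is mine):

```python
def floor_lg(x):
    """floor(lg x) for a positive integer x: the index of its highest set bit."""
    assert x > 0
    return x.bit_length() - 1
```

This sidesteps the rounding hazards of `math.log2` near exact powers of two.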

## Compressed Series (Part II)

March 12, 2013

Last week we looked at an alternative series to compute $e$, and this week we will have a look at the computation of $e^x$. The usual series we learn in calculus textbooks is given by

$\displaystyle e^x=\sum_{n=0}^\infty \frac{x^n}{n!}$
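As a point of comparison before manipulating the series, the plain textbook form is usually summed incrementally, updating each term from the previous one, $t_n = t_{n-1}\,x/n$, so that neither $x^n$ nor $n!$ is ever formed. A sketch (names are mine, and this is not the compressed form discussed here):

```python
def exp_series(x, terms=30):
    """e^x by the plain Taylor series, with each term derived
    from the previous one: term_n = term_{n-1} * x / n."""
    total, term = 1.0, 1.0
    for n in range(1, terms):
        term *= x / n
        total += term
    return total
```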

We can factorize the expression as

## Compressed Series (Part I)

March 5, 2013

Numerical methods are generally rather hard to get right because of error propagation due to the limited precision of floats (or even doubles). This seems to be especially true with methods involving series, where a usually large number of ever-diminishing terms must be added to attain something that looks like convergence. Even a fairly simple series such as

$\displaystyle e=\sum_{n=0}^\infty \frac{1}{n!}$

may be complex to evaluate. First, $n!$ is cumbersome, and $\displaystyle \frac{1}{n!}$ becomes small exceedingly rapidly.
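One standard workaround (a sketch under my own naming, not necessarily the approach developed in the post) is to never form $n!$ at all, updating the term incrementally and stopping once it falls below float resolution; the test below also shows how rapidly the terms vanish, with fewer than 25 needed:

```python
def e_by_series():
    """Sum 1/n! until the next term no longer changes the float total;
    the factorial is never formed explicitly (term_n = term_{n-1} / n)."""
    total, term, n = 0.0, 1.0, 0
    while total + term != total:  # stop once the term is below float resolution
        total += term
        n += 1
        term /= n
    return total, n
```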

## On rectangles

February 19, 2013

While reading Cryptography: The Science of Secret Writing [1], the author remarks (p. 36), without substantiating it further, that the number of possible distinct rectangles composed of $n$ squares (on a regular grid) is quite limited.

Unsubstantiated (or, more exactly, undemonstrated) claims usually bug me if the demonstration is not immediate. In this case, I determined the solution fairly rapidly, but I guess I would have liked something less vague than “increased in size does not necessarily provide greater possibilities for combination.” Let’s have a look at that problem.
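The count is easy to make concrete: an $a \times b$ rectangle of $n$ unit squares satisfies $ab = n$, so distinct rectangles (up to swapping sides) correspond to divisors $a \le \sqrt{n}$. A minimal sketch (the function name is mine):

```python
def rectangle_count(n):
    """Number of distinct a x b rectangles (a <= b) made of exactly n unit
    squares: one per divisor a of n with a <= sqrt(n)."""
    return sum(1 for a in range(1, int(n**0.5) + 1) if n % a == 0)
```

For $n = 12$ there are only three: $1\times 12$, $2\times 6$, and $3\times 4$; a prime $n$ allows just one, however large it is, which is the book's point.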

## Crates of Bottles

February 5, 2013

The other day I was looking at a crate of 12 water bottles, and noticed the wasted space between them. Why not use a honeycomb arrangement? Or even other bottle shapes?

Well, let us have a mathematical look at bottle packing. A round bottle occupies an area (at the bottom of the crate) of $S=\pi r^2$ (for some radius $r$) within a square of area $4r^2$. The occupancy is therefore
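Carrying the computation out (a small sketch; the variable names are mine): a circle in its bounding square fills $\pi/4 \approx 0.785$ of it, while the densest circle packing, the honeycomb arrangement, reaches $\pi/(2\sqrt{3}) \approx 0.907$:

```python
import math

# circle of radius r in its 2r x 2r cell: pi r^2 / (4 r^2)
square_occupancy = math.pi / 4

# hexagonal (honeycomb) packing, the densest packing of equal circles
hex_occupancy = math.pi / (2 * math.sqrt(3))
```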

## Damn you, Napier!

January 15, 2013

Briggs’ logarithm (often mistakenly understood to be Napier’s logarithm) is such a useful function that most of us don’t really think about it, until we have to. Everyone’s familiar with its properties:

$\displaystyle\log_b a = \frac{\log a}{\log b}$

$\log b^x = x \log b$

$\log a b = \log a + \log b$

$\displaystyle\log \frac{a}{b} = \log a - \log b$

but suddenly,

$\log (a+b) = ?$

What can we do with this last one?
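One standard answer (a sketch in natural logarithms; the function name is mine) rewrites $\log(a+b) = \log a + \log\left(1 + \frac{b}{a}\right)$ for $a \ge b > 0$, which lets us stay in log space:

```python
import math

def log_add(la, lb):
    """Given la = log a and lb = log b, return log(a + b) without
    recovering a and b at full scale: la + log1p(exp(lb - la)), la >= lb."""
    if la < lb:
        la, lb = lb, la               # make la the larger logarithm
    return la + math.log1p(math.exp(lb - la))
```

Pulling out the larger logarithm keeps `exp(lb - la)` in $(0, 1]$, so the computation behaves even when $a$ and $b$ themselves would overflow or underflow a float.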

## Fast Interpolation (Interpolation, part V)

July 3, 2012

In the last four installments of this series, we have seen linear interpolation, cubic interpolation, Hermite splines, and lastly cardinal splines.

In this installment (which should be the last of the series; at least for a while), let us have a look at how we can implement these interpolations efficiently.
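As a taste of what "efficiently" can mean here: once a cubic patch is reduced to power form, it is typically evaluated with Horner's rule, three multiplies and three adds per point. A sketch (names are mine):

```python
def cubic_horner(c0, c1, c2, c3, t):
    """Evaluate c3*t^3 + c2*t^2 + c1*t + c0 with three multiplies
    (Horner's rule) instead of forming the powers of t separately."""
    return ((c3 * t + c2) * t + c1) * t + c0
```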

## Cardinal Splines (Interpolation, part IV)

June 26, 2012

In the last installment of this series, we left off Hermite splines asking how we should choose the derivatives at end points so that patches line up nicely, in a visually (or any other context-specific criterion) pleasing way.

Cardinal splines solve part of this problem quite elegantly. I say part of the problem because they address only the first derivative, ensuring that the curve resulting from neighboring patches is $C^0$-continuous, that is, the patches line up at the same point, and $C^1$-continuous, that is, the first derivatives line up as well. We can imagine splines that are (up to) $C^k$-continuous, that is, patches lining up, with derivatives up to the $k$-th lining up as well.
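The rule that makes this work is simple: the tangent at an interior control point is taken from its two neighbors, scaled by a tension parameter $c$, with $c=0$ giving the familiar Catmull-Rom spline. A sketch (names are mine):

```python
def cardinal_tangent(p_prev, p_next, c=0.0):
    """Tangent at an interior control point of a cardinal spline with
    tension c: m = (1 - c) * (p_next - p_prev) / 2; c = 0 is Catmull-Rom."""
    return (1.0 - c) * (p_next - p_prev) / 2.0
```

Because both patches meeting at a point use this same tangent, their first derivatives agree there by construction.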

## Hermite Splines (Interpolation, part III)

June 12, 2012

In previous posts, we discussed linear and cubic interpolation. So let us continue where we left off in the last entry: cubic interpolation does not guarantee that neighboring patches join with the same derivative, and that may introduce unwanted artifacts.

Well, the importance of those artifacts may vary; but we seem to be rather sensitive to curves that change too abruptly, or in unexpected ways. One way to ensure that cubic patches meet gracefully is to add the constraint that the derivative should be equal on both sides of a joint. Hermite splines do just that.
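Concretely, a cubic Hermite patch on $[0,1]$ combines the four standard Hermite basis polynomials so that it hits the endpoint values $p_0, p_1$ and the prescribed endpoint derivatives $m_0, m_1$ exactly. A minimal sketch (names are mine):

```python
def hermite(p0, m0, p1, m1, t):
    """Cubic Hermite patch on [0,1]: value p0 with derivative m0 at t=0,
    value p1 with derivative m1 at t=1."""
    t2, t3 = t * t, t * t * t
    return ((2*t3 - 3*t2 + 1) * p0     # h00: 1 at t=0, flat at both ends
            + (t3 - 2*t2 + t) * m0     # h10: slope 1 at t=0
            + (-2*t3 + 3*t2) * p1      # h01: 1 at t=1, flat at both ends
            + (t3 - t2) * m1)          # h11: slope 1 at t=1
```

Two neighboring patches that share a joint's value and derivative therefore meet with matching first derivatives, which is exactly the constraint above.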