In the course of analyzing an algorithm, I used the simplifying hypothesis that the cost function is

$$\lg n!$$
That expression is cumbersome, but we can find a really good simplified function to use as a proxy. Let’s see how.
First, let’s start with Stirling’s approximation (which is in fact due to de Moivre, according to Dutka):

$$n! = \sqrt{2\pi n}\left(\frac{n}{e}\right)^n\left(1+\frac{1}{12n}+\frac{1}{288n^2}-\frac{139}{51840n^3}-\cdots\right).$$
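As a quick sanity check (this snippet is mine, not part of the original derivation), we can compare $n!$ against the formula, with and without the first few correction terms of the series:

```python
import math

def stirling(n, corrected=False):
    """de Moivre/Stirling approximation of n!.

    With corrected=True, the first three correction terms of the
    asymptotic series (coefficients from A001163/A001164) are applied.
    """
    base = math.sqrt(2 * math.pi * n) * (n / math.e) ** n
    if not corrected:
        return base
    series = 1 + 1/(12*n) + 1/(288*n**2) - 139/(51840*n**3)
    return base * series

for n in (5, 10, 20):
    exact = math.factorial(n)
    print(n, exact / stirling(n), exact / stirling(n, corrected=True))
```

Even at $n=10$ the uncorrected formula is already within about 1%, and the corrected one within about $10^{-7}$.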
The coefficients of the series are given by A001163 and A001164 (in the On-Line Encyclopedia of Integer Sequences), but we needn’t worry too much about them: we’ll show that they can pretty much be neglected anyway.
Let’s take logs (base 2) on both sides:

$$\lg n! = \frac{1}{2}\lg 2\pi n + n\lg\frac{n}{e} + \lg\!\left(1+\frac{1}{12n}+\frac{1}{288n^2}-\cdots\right).$$
Now, the term $\lg\!\left(1+\frac{1}{12n}+\frac{1}{288n^2}-\cdots\right)$ is almost zero as soon as $n$ is “kinda” large, and so can be neglected. We must also remark that this truncation has only an additive effect on $\lg n!$, while it would have had a multiplicative effect on $n!$ itself. Therefore, we will lose precision, but only additively. So no biggie. Also, let’s reorder terms a bit:

$$\lg n! \approx n\lg n - n\lg e + \frac{1}{2}\lg n + \frac{1}{2}\lg 2\pi.$$
Now, the neglected term is roughly $\frac{1}{12n\ln 2}\approx\frac{0.12}{n}$, so with an error of about a tenth of a bit at $n=1$, and shrinking from there, it’s not that bad of an approximation. Finally, we can write:

$$\lg n! \approx n\lg n - n\lg e + \frac{1}{2}\lg n + \frac{1}{2}\lg 2\pi,$$

with $\lg e\approx 1.443$ and $\frac{1}{2}\lg 2\pi\approx 1.326$.
So, how good is it? Let’s have a look. The first 20 values show that the estimate undershoots the real value by less than 1% as soon as $n\geqslant 4$. The three columns are $n$, $\lg n!$, and the approximation.
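A table of that shape can be regenerated with a short script; the exact column sums $\lg k$ rather than taking the log of the (huge) factorial, and the layout here is mine:

```python
import math

LG_E = math.log2(math.e)                    # ≈ 1.4427
HALF_LG_2PI = 0.5 * math.log2(2 * math.pi)  # ≈ 1.3257

def lg_factorial(n):
    """Exact lg n!, computed as a sum of logs to dodge huge integers."""
    return sum(math.log2(k) for k in range(2, n + 1))

def approx(n):
    """The truncated estimate: n lg n - n lg e + (lg n)/2 + (lg 2π)/2."""
    return n * math.log2(n) - n * LG_E + 0.5 * math.log2(n) + HALF_LG_2PI

print(f"{'n':>3} {'lg n!':>9} {'approx':>9} {'under %':>8}")
for n in range(2, 21):
    t, a = lg_factorial(n), approx(n)
    print(f"{n:>3} {t:>9.4f} {a:>9.4f} {100 * (t - a) / t:>8.3f}")
```

The undershoot column stays positive throughout, since we only dropped a positive term, and falls under 1% at $n=4$.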
And the difference diminishes as $n$ grows.
Therefore the truncation isn’t all that bad. In fact, in an algorithm analysis, we might even chop it down to $n\lg n - n\lg e$ without being too wrong. At that point, we can rewrite it as $n\lg\frac{n}{e}$, which is probably the form easiest to manipulate.
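To see what that extra chop costs, here is a quick check (again mine) of the bare $n\lg\frac{n}{e}$ proxy against the exact $\lg n!$:

```python
import math

def lg_factorial(n):
    # exact lg n!, as a sum of logs
    return sum(math.log2(k) for k in range(2, n + 1))

def proxy(n):
    # the drastically chopped estimate: n lg(n/e)
    return n * math.log2(n / math.e)

for n in (10, 100, 1000):
    t = lg_factorial(n)
    print(f"n={n:>4}  lg n! = {t:10.2f}  proxy = {proxy(n):10.2f}  "
          f"off by {100 * (t - proxy(n)) / t:.3f}%")
```

At $n=10$ the proxy is still about 14% short, but by $n=100$ it is within 1%, so for asymptotic arguments it is indeed not too wrong.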
Jacques Dutka, “The Early History of the Factorial Function”, Archive for History of Exact Sciences, vol. 45, no. 3 (Sept. 1993), pp. 225–249.