(and its logarithm) keep showing up in the analysis of algorithms. Unfortunately, it is very often unwieldy, and we use approximations of it (or of its logarithm) to simplify things. Let’s examine a few!
Finding rational approximations to real numbers may help us simplify calculations in everyday life, because using a simple fraction makes back-of-the-envelope estimations much easier. It may also have applications in programming, when your CPU is rather weak and does not deal well with floating-point numbers. Floating-point numbers emulated in software are very slow, so if we can dispense with them and use integer arithmetic instead, all the better.
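To make the integer-arithmetic idea concrete, here is a minimal sketch (not from the original post, and the function name `times_pi` is my own): multiplying by π using only integers, via the rational approximation 355/113, with a half-denominator bias so the division rounds instead of truncating.

```python
# A sketch, assuming we want round(x * pi) without any floating point:
# replace pi by the rational approximation 355/113 and use integer division.
import math

def times_pi(x: int) -> int:
    """Approximate round(x * pi) using integer arithmetic only."""
    # Adding 56 (about half of 113) makes the floor division behave
    # like rounding to nearest rather than truncation.
    return (x * 355 + 56) // 113

# Compare against the floating-point result for a few values.
for x in (10, 1000, 123456):
    print(x, times_pi(x), round(x * math.pi))
```

Since 355/113 agrees with π to about 2.7 × 10⁻⁷, the integer result matches the rounded floating-point product for inputs well into the hundreds of thousands.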
However, finding good rational approximations to an arbitrary constant is not quite as trivial as it may seem. Indeed, we may think that using something like 3141592/1000000 for π will be quite sufficient, as it gives six digits of precision, but why use 3141592/1000000 when 355/113 gives better precision? Certainly, we must find a better way of producing approximations that are simultaneously precise and … well, let’s say cute. Let’s see what we can do.
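The standard tool for this (a sketch on my part; the post has not yet named its method) is the continued-fraction expansion: its convergents are, in a precise sense, the best rational approximations to a number, and for π they include both the famous 22/7 and 355/113.

```python
# A sketch of continued-fraction convergents: each step peels off the
# integer part, inverts the remainder, and updates the numerator and
# denominator with the classic recurrence h_n = a_n*h_{n-1} + h_{n-2}.
from fractions import Fraction
import math

def convergents(x: float, n: int):
    """Yield the first n continued-fraction convergents of x."""
    h0, h1 = 1, int(x)   # previous and current numerators
    k0, k1 = 0, 1        # previous and current denominators
    yield Fraction(h1, k1)
    for _ in range(n - 1):
        frac = x - int(x)
        if frac == 0:    # x was exactly rational; expansion terminates
            return
        x = 1.0 / frac
        a = int(x)       # next partial quotient
        h0, h1 = h1, a * h1 + h0
        k0, k1 = k1, a * k1 + k0
        yield Fraction(h1, k1)

for c in convergents(math.pi, 5):
    print(c, float(c))
```

Running this on π yields 3, 22/7, 333/106, 355/113, … — so 355/113 is not a fluke but the fourth convergent, and no fraction with a smaller denominator approximates π better.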
In the course of analyzing an algorithm, I used the simplifying hypothesis that the cost function is
That expression is cumbersome, but we can find a really good simplified function to use as a proxy. Let’s see how.
Cutting corners is generally thought of as a bad thing, and it generally is, I agree. But on some occasions, optimally cutting corners is the right thing to do. Let me show you what I mean. Using the Logo programming language (specifically the KTurtle implementation), I devised the following experiment. Consider the following image: