Some numbers are easier to work with than others, especially for computer arithmetic—and even more so with weak processors. So let’s have a look at easy numbers that we can sometimes exploit to get faster code.
Initializing arrays, or any variable for that matter, is always kind of a problem. Most of the time, you can get away with a default value, typically zero in C or C++, but not always. For floats, for example, NaN makes much more sense: a variable set to NaN clearly states that it is initialized, consciously, to not-a-value. That's neat. What about integers? Clearly, there's no way to encode a NaI (not an integer); maybe std::numeric_limits<int>::min() would do, which is still better than zero. What about bools?
Bool is trickier. In C++, bool is either false or true, and the implicit conversion rules make every nonzero value true. However, if you assign 3 to a bool, it will be "normalized" to true, that is, exactly 1. Therefore, and not that surprisingly, you can't have true, false, and maybe. Well, let's fix that.
In Graphics Gems, Paeth proposes a fast (but quite approximate) method for the rapid computation of the hypotenuse,

$h = \sqrt{x^2 + y^2}$.
The goal here is to get rid of the big bad square root because it is deemed "too expensive" (I wonder if that's still actually true). First, he transforms the above equation, assuming $x \geqslant y > 0$:

$\sqrt{x^2 + y^2} = x\sqrt{1 + \left(\frac{y}{x}\right)^2}$.
I’ve discussed algorithms for computing square roots a couple of times already, and then some. While sorting notes, I came across something interesting: Archytas’ method for computing square roots.
Archytas’ method is basically the old Babylonian method, where you first set $x_0$ to some initial guess for $\sqrt{a}$, then repeatedly compute

$x_{n+1} = \frac{1}{2}\left(x_n + \frac{a}{x_n}\right)$
until desired precision is achieved (or the scribe is exhausted).
Recently on the Freenode channel ##cpp, I saw some code using an include-all-you-can header. The idea was to help beginners to the language: let them start programming without having to remember which header is which, and which headers are needed.
Is that really a good idea?
A Taylor series for a function $f$, around $a$, that is $n$ times differentiable, is given by

$f(x) = \sum_{k=0}^{n} \frac{f^{(k)}(a)}{k!}(x-a)^k + R_n(x),$

where $f^{(k)}$ is the $k$th derivative of $f$ at $a$.
Have you ever wondered where the coefficients in a Taylor series come from? Well, let’s see!
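As a quick sketch of the argument: write $f$ as a power series with unknown coefficients, differentiate term by term, and evaluate at $a$; every term but one vanishes.

```latex
f(x) = \sum_{k=0}^{\infty} c_k (x-a)^k
\quad\Longrightarrow\quad
f^{(j)}(x) = \sum_{k=j}^{\infty} \frac{k!}{(k-j)!}\, c_k (x-a)^{k-j}.

% Evaluating at x = a kills every term except k = j:
f^{(j)}(a) = j!\, c_j
\quad\Longrightarrow\quad
c_j = \frac{f^{(j)}(a)}{j!}.
```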
Getting good text data for language-model training isn’t as easy as it sounds. First, you have to find a large corpus. Second, you must clean it up!