We often start thinking about something, make some progress, and eventually realize it’s not going anywhere, or, anyway, the results aren’t very phrasmotic—certainly not what you hoped for. Well, this week’s post is one of those experiments.
So I was wondering, how can I compute the binomial coefficient, $\binom{n}{k}$, and minimize the size of the integers involved?
Let’s first define the binomial coefficient. In its simplest form:

$\displaystyle \binom{n}{k}=\frac{n!}{k!\,(n-k)!},$

where $n!$ denotes the factorial:

$\displaystyle n!=\prod_{i=1}^{n} i = 1\times 2\times\cdots\times n.$
If we proceed directly with the formulation, we compute $n!$, a potentially huge number (by Stirling’s approximation, it is roughly $\sqrt{2\pi n}\,(n/e)^n$!), then $k!$, $(n-k)!$, and finally perform the division. Computing $n!$ directly performs $n-1$ multiplications with increasingly large integers. We can assume that operations on machine-size integers have constant-time cost, but we can’t do that with large integers. Even with a fast multiplication algorithm, say Karatsuba’s or Toom–Cook, that cost grows rapidly. So I figured that since some terms cancel out, we might be able to reduce the number of multiplies and the maximum size of the integers involved.
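As a sketch of this direct approach (in Python, whose integers are arbitrary-precision; the function names are my own):

```python
# Direct formulation: compute n!, k!, and (n - k)! in full, then divide.
# All three intermediates can get very large before the final division.
def factorial(m):
    result = 1
    for i in range(2, m + 1):  # m - 1 multiplications on ever-growing integers
        result *= i
    return result

def binomial_naive(n, k):
    return factorial(n) // (factorial(k) * factorial(n - k))
```

The final `//` is exact, but only after having built the full $n!$ in memory.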
Let’s have a look at what this thing actually computes:

$\displaystyle \binom{n}{k}=\frac{n!}{k!\,(n-k)!}=\frac{1\times 2\times\cdots\times n}{(1\times 2\times\cdots\times k)\,(1\times 2\times\cdots\times (n-k))},$

which becomes, after canceling a few terms:

$\displaystyle \binom{n}{k}=\frac{(n-k+1)(n-k+2)\cdots n}{1\times 2\times\cdots\times k},$

where the numerator and the denominator have the same number of terms, $k$. We can rewrite the above as:
$\displaystyle \binom{n}{k}=\prod_{i=1}^{k}\frac{n-k+i}{i},$

or, if we insist on using integers only, as the recurrence

$\displaystyle c_0=1,\qquad c_i=\frac{c_{i-1}\,(n-k+i)}{i},\qquad \binom{n}{k}=c_k,$

where every division is exact because $c_i=\binom{n-k+i}{i}$ is itself a binomial coefficient.

(We can always have at most $\lfloor n/2\rfloor$ terms because we can use the identity $\binom{n}{k}=\binom{n}{n-k}$ to keep $k$ in the product formula as small as possible!)
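A minimal sketch of the integer-only product (names mine), using the symmetry identity to keep the loop short; the key point is that the integer division at each step is exact:

```python
def binomial_product(n, k):
    k = min(k, n - k)              # C(n, k) == C(n, n - k): keep k small
    c = 1
    for i in range(1, k + 1):
        # Multiply before dividing; the division is exact because after
        # this step c equals C(n - k + i, i), an integer.
        c = c * (n - k + i) // i
    return c
```

Note the order of operations: `c * (n - k + i)` first, then `// i`. Dividing first would not be exact in general.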
So the number of multiplications drops from roughly $2n$ (regardless of $k$) to $\min(k,\,n-k)$, which is a huge improvement. Further, the largest number computed goes from $n!$ to at most $n\binom{n}{k}$, an improvement as well.
If we take the formula

$\displaystyle \binom{n}{k}=\prod_{i=1}^{k}\frac{n-k+i}{i}$

to compute an approximation to $\binom{n}{k}$ using just floats, how does it compare to the naive factorial formulation? Surely using smaller numbers also implies smaller errors? Well, I do not know yet.
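A quick sketch of the float variant I would use for that comparison (again, names and setup are my own; the error question itself stays open):

```python
def binomial_float(n, k):
    # Same product formula, but in double precision: only min(k, n - k)
    # multiplications and as many divisions, each one rounded.
    k = min(k, n - k)
    c = 1.0
    for i in range(1, k + 1):
        c *= (n - k + i) / i
    return c
```

With so few rounded operations one might hope the relative error stays small, but that is exactly the question left open above.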
To be continued…