Compressed Series (Part I)

Numerical methods are generally rather hard to get right because of error propagation due to the limited precision of floats (or even doubles). This seems to be especially true with methods involving series, where a usually large number of ever diminishing terms must be added to attain something that looks like convergence. Even a fairly simple series such as

\displaystyle e=\sum_{n=0}^\infty \frac{1}{n!}

may be tricky to evaluate. First, n! is cumbersome to compute, and second, \displaystyle \frac{1}{n!} becomes small exceedingly rapidly.
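How rapidly? A rough back-of-the-envelope check (the numbers here are mine, added for illustration):

\displaystyle \frac{1}{18!}=\frac{1}{6402373705728000}\approx 1.6\times 10^{-16},

which is already below half an ulp of a double near 2.7 (about 2.2\times 10^{-16}), so adding it to a partial sum close to e changes nothing at all. This foreshadows the table below, where the naive sum stops moving after about that many terms.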

So what can we do about it? The first idea is to use doubles instead of floats (or whatever floating-point type offers the most precision on the platform), but a better reflex would be to look for alternative representations that are better behaved. For example,

\displaystyle \sum_{k=0}^\infty \frac{2k+2}{(2k+1)!}

also computes e, but converges about twice as fast. You are probably asking yourself where that expression comes from. Fair enough, let’s see.

First, expanding the original expression, we get

\displaystyle e=\sum_{n=0}^\infty \frac{1}{n!}=1+1+\frac{1}{2}+\frac{1}{6}+\frac{1}{24}+\cdots

Let us now group the terms two by two. Somewhere in the series, we find pairs of the form

\displaystyle\cdots+\left(\frac{1}{(2k)!}+\frac{1}{(2k+1)!}\right)+\cdots

“Simplifying” the pair, we get

\displaystyle\left(\frac{1}{(2k)!}+\frac{1}{(2k+1)!}\right)=\frac{2k+2}{(2k+1)!}

which you can readily verify using simple algebra. This yields the alternate series above, which bears no obvious resemblance to the first while being strictly equivalent, with half the number of terms (still a countably infinite number of them, of course, but the sum converges twice as fast when counted in iterations).
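Spelling out that simple algebra (it is one step): put the first fraction over the denominator (2k+1)!, using (2k+1)!=(2k+1)\,(2k)!:

\displaystyle \frac{1}{(2k)!}+\frac{1}{(2k+1)!}=\frac{2k+1}{(2k+1)!}+\frac{1}{(2k+1)!}=\frac{2k+2}{(2k+1)!}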

So, even if it should converge twice as fast, does it? And does it do so with increased precision, or does it suffer worse error propagation? Well, let’s experiment! A C version gives:

// float_x is presumably a typedef for whatever precision you want, e.g.:
typedef double float_x;

float_x naive_e(int nb_terms)
 {
  // ok, not so naive
  float_x s=1; // partial sum; starts with the n=0 term, 1/0! = 1
  float_x n=1; // holds i! as the loop progresses

  for (int i=1;i<nb_terms;i++) // nb_terms terms in total, counting the initial one
   {
    n*=i;   // n is now i!
    s+=1/n; // add the term 1/i!
   }

  return s;
 }

////////////////////////////////////////
float_x compressed_e(int nb_terms)
 {
  float_x s=2;      // the k=0 pair: 1/0! + 1/1! = 2
  float_x k_bang=1; // holds (2k+1)! as the loop progresses
  for (int k=1;k<nb_terms;k++) // nb_terms pairs in total, counting the initial one
   {
    float_x t=2*k;
    k_bang*=t*(t+1); // k_bang is now (2k+1)!
    s+=(t+2)/k_bang; // add the pair (2k+2)/(2k+1)!
   }

  return s;
 }
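
To reproduce the comparison, here is a minimal driver (a sketch of mine, not part of the original code; it assumes the typedef and the two functions above):

#include <stdio.h>

int main(void)
 {
  // print both approximations for an increasing iteration budget
  for (int it=1;it<=20;it++)
   printf("%2d  %.15f  %.15f\n",it,naive_e(it),compressed_e(it));
  return 0;
 }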

Both functions are parametrized by the number of iterations one is willing to give them to work their magic. The actual value of e is

2.71828182845904523536028747135266249775724709369995\ldots

Running both functions with an increasing number of iterations yields the following table:


it. Naive              Compressed

 1  1                  2
 2  2                  2.666666666666667
 3  2.5                2.716666666666666
 4  2.666666666666667  2.718253968253968
 5  2.708333333333333  2.718281525573192
 6  2.716666666666666  2.718281826198492
 7  2.718055555555555  2.718281828446758
 8  2.718253968253968  2.718281828458994
 9  2.718278769841270  2.718281828459045
10  2.718281525573192  2.718281828459045
11  2.718281801146385  2.718281828459045
12  2.718281826198493  2.718281828459045
13  2.718281828286169  2.718281828459045
14  2.718281828446759  2.718281828459045
15  2.718281828458230  2.718281828459045
16  2.718281828458995  2.718281828459045
17  2.718281828459043  2.718281828459045
18  2.718281828459046  2.718281828459045
19  2.718281828459046  2.718281828459045
20  2.718281828459046  2.718281828459045

where we see that the compressed version 1) reaches its final value at iteration 9 (while the naive version reaches its at iteration 18) and 2) gets a better result than the naive version, with all significant digits right. OK, so the convergence being twice as fast is not exactly a surprise, but the increased precision might be. For one thing, the compressed version has a smaller magnitude difference between the numerators and denominators of its terms, making it better conditioned.

It is not clear that we always want an alternate expression for a series in which the ratio of the magnitudes of numerators and denominators is close to 1, but it is clear that it helps prevent terms from going to zero (in a machine-specific way) too fast. n doesn’t have to be very large for \frac{1}{n!} to be zero as represented by a float, and O\left(\frac{n}{n!}\right) goes to zero a bit more slowly.
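As a quick illustration (again a sketch of mine, not from the original post), the following finds the first n for which 1/n! rounds to exactly zero in single precision:

#include <stdio.h>

int main(void)
 {
  float term=1.0f; // term holds 1/n!
  for (int n=1;term>0.0f;n++)
   {
    term/=n; // 1/n! = (1/(n-1)!) / n
    if (term==0.0f)
     printf("1/%d! underflows to zero as a float\n",n);
   }
  return 0;
 }

On an IEEE 754 machine with gradual underflow, this happens when n is in the high thirties; with doubles, not until n is somewhere around 180.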

We could have a look at another series, for example, e^x, to see how compression affects it.

To be continued…

2 Responses to Compressed Series (Part I)

  1. Gui13 says:

    Hello,

    That’s interesting. Now, performance-wise, you have for each iteration:
    – for the naive: 1 integer multiplication, 1 float division, 1 float addition
    – for the compressed: 3 integer multiplications, 2 integer additions, 1 float division, 1 float addition

    Which means the compressed version has 2 more integer multiplications and 2 more integer additions than the naive one.

    So, taking this into account, would the compressed version still be faster?

    • “Faster” here means the number of iterations, not really the number of operations. Of course, what you say is right; it does need more operations per iteration, and on an in-order, non-superscalar CPU, it would probably be much slower per iteration. However (and fortunately), most CPUs are superscalar and out-of-order, so as many instructions as possible are executed in parallel, and it’s quite possible that the timings are comparable when we reach comparable precision.

      And precision should be more important than speed (most of the time). The key point here is that 1/n! vanishes below machine precision much faster than, say, (2n)/n!, and that’s what buys us the extra precision.
