Sometimes, you need to compute a function that’s complex, cumbersome, or that your favorite language doesn’t quite provide for. We may be interested in an exact implementation, or in a sufficiently good approximation. For the latter, I often turn to Abramowitz and Stegun’s *Handbook of Mathematical Functions*, a treasure trove of tables and approximations to hard functions. But *how* do they get all these approximations? The rational function approximations seemed the most mysterious of them all. Indeed, how can one come up with a quotient of two polynomials that approximates a given function well on a given interval?

Well, turns out, it’s hard work, but there’s a twist to it. Let’s see how it’s done!

The first step is to find the Maclaurin (around zero) or the Taylor (around some number $a$) expansion of the function, $f(x)$, you want to approximate. The Taylor expansion of a function around $a$ is

$$f(x)=\sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}\,(x-a)^n.$$

First, you must remark that the terms $\frac{f^{(n)}(a)}{n!}$ are constants, because they do not depend on $x$. Let’s call them the $t_n$. Then, the $(x-a)^n$ are only polynomials in $x$, because $a$ is also a constant. With much work, you can expand the whole thing, gather together all the constants that go with an $x^n$, call them the $c_n$, and write the series as

$$f(x)=\sum_{n=0}^{\infty} c_n x^n.$$

So basically: every function for which you can find a Taylor series can be rewritten as a power series. It is this power series that we will approximate by a polynomial or, more exactly, a rational function: a quotient of two polynomials.
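In code, the coefficients $c_n$ can be obtained mechanically with a symbolic package. Here is a minimal sketch with SymPy, using $e^x$ purely as a stand-in function of my own choosing:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.exp(x)          # stand-in function; substitute your own
m = 4                  # keep powers x^0 .. x^m

# Maclaurin expansion up to (and including) x^m, with the O(x^(m+1)) term dropped.
poly = sp.series(f, x, 0, m + 1).removeO()

# c[n] is the coefficient of x^n in the power series.
c = [poly.coeff(x, n) for n in range(m + 1)]
# for exp(x): c == [1, 1, 1/2, 1/6, 1/24]
```

For a Taylor expansion around some $a \neq 0$, you would expand around $a$ instead and then collect powers of $x$, exactly as described above.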

Infinite sums and series are rather cumbersome, and we will want to truncate them to a certain precision. Therefore, we choose some $m$ that will get us good precision (that will obviously depend on the function) and write

$$f(x) \approx \sum_{n=0}^{m} c_n x^n.$$
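To get a feel for how large $m$ must be, a quick numerical experiment helps. A sketch, again with $e^x$ as a stand-in example of my own (not necessarily the function you care about):

```python
import math

# How many Maclaurin terms of exp(x) do we need before the
# truncation error at x = 1 drops below 1e-3?
def truncated_series(x, m):
    # For exp, c_n = 1/n!, so the truncated series is sum_{n=0}^{m} x^n / n!
    return sum(x**n / math.factorial(n) for n in range(m + 1))

for m in range(1, 20):
    err = abs(math.exp(1.0) - truncated_series(1.0, m))
    if err < 1e-3:
        break
# here m == 6: seven terms already give three good digits at x = 1
```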

If $m$ is well chosen and $f$ somewhat well behaved, the approximation error due to the truncation of the series will be small, or at least not objectionable for our application (you may need to experiment to find the $m$ that’s good for you and your function). We will approximate the truncated series by

$$\sum_{n=0}^{m} c_n x^n \approx \frac{\sum_{i=0}^{p} a_i x^i}{\sum_{j=0}^{q} b_j x^j},$$

where the $a_i$ and the $b_j$ are for us to find. $p$, $q$, and $m$ are constrained by $m = p + q$. This will give us $p+q+1$ equations in $p+q+2$ unknowns, but we will be able to solve the system anyway! Rewriting the above as

$$\left(\sum_{j=0}^{q} b_j x^j\right)\left(\sum_{n=0}^{m} c_n x^n\right) = \sum_{i=0}^{p} a_i x^i, \qquad (*)$$

we will, by gathering the terms by powers of $x$, be able to get equations of the form

$$\sum_{j=0}^{\min(n,q)} b_j\, c_{n-j} = a_n, \qquad n = 0, 1, \dots, m,$$

with $a_n = 0$ if $n > p$. Then we use these equations to solve for all the $a_i$ and the $b_j$. This will get us our solution. OK, let’s go through a worked-out example.

* * *

Let’s choose an easy function, say . We find that, around zero, the Taylor series is

If we want smallish $p$ and $q$ (otherwise we’ll still be here next week!), then we keep $m+1 = p+q+1$ terms. We write:

.

By rearranging terms as in eq. (*), and carrying out the product on the left-hand side, we get

.

Gathering powers together, we arrive at the set of equations

and all the remaining equations involve only powers above $x^m$, which we don’t try to match, so we ignore them. We know all the $c_n$; they’re given by the truncated Taylor series. Then the fun begins. We have $m+1$ equations but $m+2$ unknowns, so we know that the solution won’t be unique, unless we decide on the value of one of the unknowns, or decide to express the solution in terms of this one unknown. We could choose, say, $b_0$ to be the “known unknown” and go through the solution as if $b_0$ were a constant rather than an unknown. This process is tedious, but it’s only normal substitution/elimination.

We arrive at:

Letting $b_0 = 1$ (because, why not), we get the approximation:

We’re done!
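The substitution/elimination above can also be mechanized. Here is a sketch in Python with NumPy, assuming the normalization $b_0 = 1$ used above; the function name is mine, not from any particular library:

```python
import numpy as np

def pade_coefficients(c, p, q):
    """From power-series coefficients c[0]..c[p+q], compute the a_i and b_j
    of the rational approximant, normalizing b_0 = 1."""
    # The equations for n = p+1 .. p+q have a_n = 0, which gives a linear
    # system for b_1..b_q:  sum_{j=0}^{q} b_j c_{n-j} = 0.
    A = np.array([[(c[n - j] if n - j >= 0 else 0.0) for j in range(1, q + 1)]
                  for n in range(p + 1, p + q + 1)])
    rhs = -np.array([c[n] for n in range(p + 1, p + q + 1)], dtype=float)
    b = np.concatenate(([1.0], np.linalg.solve(A, rhs)))
    # The equations for n = 0 .. p then give the a_n directly.
    a = np.array([sum(b[j] * c[n - j] for j in range(min(n, q) + 1))
                  for n in range(p + 1)])
    return a, b
```

As a sanity check, feeding it the first five Maclaurin coefficients of $e^x$, namely $(1, 1, \frac{1}{2}, \frac{1}{6}, \frac{1}{24})$, with $p = q = 2$ recovers the well-known $(1 + \frac{x}{2} + \frac{x^2}{12})/(1 - \frac{x}{2} + \frac{x^2}{12})$ approximant.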

* * *

How does it look? We didn’t use very many terms, and the Taylor series is centered around zero. Let’s use, say, *Mathematica*, and plot the difference:

We see that on the interval ]-1,1[ the error is very small. Of course, you can’t tell from the graph, but the maximum error between -0.5 and 0.5 is 0.0012, and it grows a bit on ]-1,1[, to about 0.016. With very few terms, that’s rather good. Want better precision? Just pick a larger $p$ and $q$ and have fun.
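If *Mathematica* isn’t at hand, the same error check takes a few lines of Python. Here I use the $p = q = 2$ approximant of $e^x$, $(1 + \frac{x}{2} + \frac{x^2}{12})/(1 - \frac{x}{2} + \frac{x^2}{12})$, purely as a stand-in example, so the error figures differ from the ones quoted above:

```python
import math

def rational_approx(x):
    # p = q = 2 rational approximant of exp(x), b_0 = 1.
    num = 1.0 + x / 2.0 + x * x / 12.0
    den = 1.0 - x / 2.0 + x * x / 12.0
    return num / den

# Maximum absolute error on ]-1, 1[, sampled on a fine grid.
xs = [i / 1000.0 for i in range(-999, 1000)]
max_err = max(abs(math.exp(x) - rational_approx(x)) for x in xs)
# for this stand-in, max_err is on the order of a few 1e-3
```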