New Way of Computing Square Roots?

I sometimes indulge in quite nerdy reading. Lately, I’ve been reading Le fabuleux destin de \sqrt{2} (The Fabulous Destiny of \sqrt{2}, one might translate) by Benoît Rittaud. This book is a treasure trove of \sqrt{2} facts, and one caught my eye more than the others so far: an iterative method to compute the square root of any positive number.

When the method is first presented, he leaves it to the reader to find a proof (though he gives one several chapters later), but let’s see what we can find on our own.

The iterative method has two components. At first:

a_0=1

and

b_0=n

if we’re interested in finding \sqrt{n}. We then have the recurrence:

a_{t+1}=\displaystyle\frac{2}{\frac{1}{a_t}+\frac{1}{b_t}},

b_{t+1}=\displaystyle\frac{1}{2}(a_t+b_t).

Rittaud (somewhat handwavingly) remarks that since a_{t+1} and b_{t+1} always lie between a_t and b_t, the two sequences eventually converge to a common value, which turns out to be the square root of n. Indeed, the product is invariant: a_{t+1}b_{t+1}=a_tb_t=n (the harmonic mean of two numbers times their arithmetic mean gives back their product), so if both sequences converge to a common limit L, then L^2=n.
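To make the iteration concrete, here is a minimal Python sketch; the function name rittaud_sqrt and the stopping tolerance are my own choices, not from the book:

```python
def rittaud_sqrt(n, tol=1e-12):
    """Approximate sqrt(n): a is the harmonic mean and b the
    arithmetic mean of the previous pair, starting from (1, n)."""
    a, b = 1.0, float(n)
    while abs(b - a) > tol:
        a, b = 2.0 / (1.0 / a + 1.0 / b), 0.5 * (a + b)
    return b

print(rittaud_sqrt(2))  # 1.4142135623730951
```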

*
* *

Let us take what looks like a digression but really isn’t. Newton’s method for finding roots (or more exactly zeroes; the vocabulary varies from author to author) of functions can be applied to the computation of \sqrt{n}, and is in fact the usual textbook example for this computation.

Newton’s method is fundamentally simple: from an initial guess x_0 for the position of the zero of the function, we use the slope of the function at x_0 to guess where the zero might actually be. If f(x) is the function, its slope at a point x is denoted f'(x), and is in fact its derivative at x. Using a little algebra, we find that the next guess for the zero is given by:

x_{t+1}=x_t-\displaystyle\frac{f(x_t)}{f'(x_t)}.
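In code, the general scheme might look like this sketch, where f and fprime are the function and its derivative, and the fixed iteration count is an arbitrary choice of mine:

```python
def newton(f, fprime, x0, iterations=20):
    """Generic Newton's method: repeatedly follow the tangent line
    at the current guess down to where it crosses zero."""
    x = x0
    for _ in range(iterations):
        x -= f(x) / fprime(x)
    return x

# For example, finding sqrt(2) as the zero of f(x) = x^2 - 2:
print(newton(lambda x: x * x - 2, lambda x: 2 * x, 2.0))
```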

For the square root of n, we put:

f(x)=x^2-n,

which has its zeroes at x=\pm\sqrt{n}, and since it is also convex, it should behave well with Newton’s method (any positive initial guess converges to the positive root). Replacing f(x)=x^2-n in Newton’s iteration, we get:

x_{t+1}=x_t-\displaystyle\frac{x_t^2-n}{2x_t},

since \displaystyle f'(x)=\frac{d f(x)}{dx}=2x. It can be rewritten as:

\displaystyle x_{t+1}=\frac{1}{2}\left(x_t+\frac{n}{x_t}\right)

just by simplifying the original expression somewhat.
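Specialized to the square root, the update no longer needs f or f' explicitly; a minimal sketch (function name and iteration count mine):

```python
def newton_sqrt(n, iterations=6):
    """Heron/Newton iteration for sqrt(n), started at x0 = n."""
    x = float(n)
    for _ in range(iterations):
        x = 0.5 * (x + n / x)
    return x

print(newton_sqrt(2))  # 1.4142135623730951
```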

*
* *

Let us compare the two methods, Newton’s and Rittaud’s (I don’t think the method is actually his; the text suggests it was known since antiquity, but let’s call it Rittaud’s for now to distinguish it from Newton’s). The following table shows both methods, iteration by iteration, for n=2, with Newton’s method started at x_0=n:

t   \sqrt{2}    x_t         a_t         b_t
0   1.4142136   2           1           2
1   1.4142136   1.5         1.3333333   1.5
2   1.4142136   1.4166667   1.4117647   1.4166667
3   1.4142136   1.4142157   1.4142114   1.4142157
4   1.4142136   1.4142136   1.4142136   1.4142136
5   1.4142136   1.4142136   1.4142136   1.4142136

So what do we notice? That Newton’s method converges exceedingly fast? Sure, but the other method converges just as fast. In fact, they converge at exactly the same rate, because they are the same algorithm: b_t is identical to x_t!
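You can reproduce the table and check the coincidence directly; this little script (variable names mine) prints both iterations side by side:

```python
n = 2.0
x = n          # Newton's method, started at x0 = n
a, b = 1.0, n  # Rittaud's recurrence
for t in range(6):
    print(f"{t}  {x:.7f}  {a:.7f}  {b:.7f}")  # b always equals x
    x = 0.5 * (x + n / x)
    a, b = 2.0 / (1.0 / a + 1.0 / b), 0.5 * (a + b)
```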

*
* *

If you’re tenacious, you can unravel the expression for b_t and see that, algebraically, it is identical to Newton’s method: since a_tb_t=n at every step, we have a_t=n/b_t, and the recurrence becomes b_{t+1}=\frac{1}{2}(b_t+n/b_t), which is exactly Newton’s update. So, using either Newton’s or Hero’s method (we eventually find out that a variant of the method is due to Hero of Alexandria), we find:

b_0=n

b_1=\displaystyle\frac{1+n}{2}

b_2=\displaystyle\frac{1+6n+n^2}{4+4n}

b_3=\displaystyle\frac{1+28n+70n^2+28n^3+n^4}{8+56n+56n^2+8n^3}

and it continues to get much worse after that. We can show that the limit of this sequence is indeed \sqrt{n}. Therefore the recurrence presented by Rittaud, which simplifies to Hero’s method, is in fact the same as Newton’s method.
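If you would rather let the computer do the unraveling, a quick symbolic check (here with the sympy library, my choice of tool) confirms that the iterates coincide at every step:

```python
import sympy as sp

n = sp.symbols('n', positive=True)

a, b = sp.Integer(1), n  # Rittaud's sequences
x = n                    # Newton's sequence, started at x0 = n
for t in range(1, 5):
    a, b = sp.cancel(2 / (1 / a + 1 / b)), sp.cancel((a + b) / 2)
    x = sp.cancel((x + n / x) / 2)
    print(t, sp.simplify(b - x) == 0)  # True at every step
```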

As far as the number of operations is concerned (always important when implementing an algorithm), Newton’s fully simplified method is more efficient. In its simplified version, it does one addition and two divisions, one of which is the trivial division by 2. The original recurrence with a_t and b_t needs a lot more: three divisions plus one addition for the a_t term, and one more division and one more addition for the b_t term.
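As a rough (and admittedly naive) way to see the difference, one can time a single update step of each form with timeit; the absolute numbers will of course vary by machine:

```python
import timeit

setup = "a, b, x, n = 1.3, 1.5, 1.5, 2.0"
hero = "a, b = 2.0 / (1.0 / a + 1.0 / b), 0.5 * (a + b)"  # 3 divisions
newton = "x = 0.5 * (x + n / x)"                          # 1 division

print(timeit.timeit(hero, setup=setup))
print(timeit.timeit(newton, setup=setup))
```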

*
* *

When I first read of this method in Rittaud’s book, I wondered whether the recurrence using a_t and b_t would be faster or more interesting than Newton’s method. It turns out it’s an interesting but overly complicated way of computing the same thing. Newton wins again.
