Numerical methods are generally rather hard to get right because of error propagation due to the limited precision of floats (or even doubles). This seems to be especially true with methods involving series, where a usually large number of ever diminishing terms must be added to attain something that looks like convergence. Even fairly simple series such as

$e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!}$

may be complex to evaluate. First, $n!$ is cumbersome, and $\frac{x^n}{n!}$ becomes small exceedingly rapidly.
Posted by Steven Pigeon