Cutting corners is generally thought of as a bad thing, and I agree that it generally is. But on some occasions, cutting corners optimally is exactly the right thing to do. Let me show you what I mean. Using the Logo programming language (specifically the KTurtle implementation), I devised the following experiment. Consider the following image:
The two circles do look the same, aside from color and position, that is. The blue circle is drawn using the classic child-like approach: the turtle repeats the commands forward 1 turnright 1 360 times. Superficially, at that resolution, the red circle does look quite a lot like the blue one. Or does it? Shifting both circles back to the same position:
We see that there are minute differences between the two. But the red circle doesn't have 360 sides, only 36: it is drawn by repeating the commands forward 10 turnright 10 36 times. It is only by superimposing the two that we actually see the difference.
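Why the two look so similar is easy to quantify. Here is a rough sketch in Python rather than Logo (the function name is mine, and I assume the turtle's steps are in pixels and that the 36 forward 10 steps trace a regular polygon inscribed in a circle): the worst gap between a regular n-gon and its circumscribed circle is the circumradius minus the apothem.

```python
import math

def polygon_deviation(n_sides, step):
    """Circumradius of a regular n-gon with side length `step`,
    and the worst radial gap between the polygon and that circle."""
    R = step / (2 * math.sin(math.pi / n_sides))  # circumradius
    a = R * math.cos(math.pi / n_sides)           # apothem (center to edge midpoint)
    return R, R - a

# The red circle: 36 steps of "forward 10 turnright 10".
R, gap = polygon_deviation(36, 10)
print(f"radius ~ {R:.2f} px, worst gap from the true circle ~ {gap:.3f} px")
```

For this circle of roughly 57 pixels radius, the worst gap works out to about a fifth of a pixel, which is why the 36-sided "circle" passes for the real thing until you superimpose them.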
Granted, Logo programming is not exactly a major concern for modern graphics rendering or game programming, but this short example does illustrate my point: sometimes a great speedup can be obtained from a short-cut evaluation of a more complex mathematical expression, such as the Bézier curves often used to compute smooth camera trajectories in scenes. Using a simple method (de Casteljau's algorithm), one can compute a sufficiently precise approximation without exceeding a given number of computations, or at least balance the precision against the number of points computed. Does a Bézier curve approximated by 10 or 20 segments look smooth under your rendering conditions? Does the roller-coaster ride still look realistic? That's up to you to decide.
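De Casteljau's algorithm is nothing more than repeated linear interpolation between control points, which makes the "pick your segment count" trade-off explicit. A minimal sketch in Python (the control points and function names are mine, for illustration):

```python
def de_casteljau(points, t):
    """Evaluate a Bezier curve at parameter t by repeated linear interpolation."""
    pts = list(points)
    while len(pts) > 1:
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

def approximate(points, n_segments):
    """Approximate the curve by a polyline of n_segments straight segments."""
    return [de_casteljau(points, i / n_segments) for i in range(n_segments + 1)]

# A cubic Bezier: endpoints (0,0) and (3,0), two control points pulling upward.
ctrl = [(0.0, 0.0), (1.0, 2.0), (2.0, 2.0), (3.0, 0.0)]
poly = approximate(ctrl, 20)   # 20 segments: smooth enough? That's your call.
```

Whether 10, 20, or 50 segments suffice depends entirely on how large the curve appears on screen, exactly the same question as for the turtle's circle.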
Returning to the circle example: how precise does it have to be, really? We would probably all agree that it should be precise to the nearest pixel. Some will argue that anti-aliasing calls for sub-pixel precision. Is splitting each pixel into four enough? Into 16? Answering this question first will help guide your efforts toward the development of a good enough algorithm.
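You can even turn the question around and compute the cheapest polygon that meets a given tolerance. A sketch under my own assumptions (an inscribed regular polygon, the worst deviation being the sagitta of each chord, and the function name being mine):

```python
import math

def segments_for_tolerance(radius, tol):
    """Fewest sides an inscribed regular polygon needs so that its worst
    deviation from the circle (the sagitta, r*(1 - cos(pi/n))) stays below tol."""
    return math.ceil(math.pi / math.acos(1 - tol / radius))

r = 57.3  # roughly the turtle circle's radius, in pixels
for tol in (0.5, 0.25, 0.125):  # nearest pixel, 2x2 split, 4x4 split
    print(f"tolerance {tol} px -> {segments_for_tolerance(r, tol)} sides")
```

For this radius, nearest-pixel precision needs only about two dozen sides; each halving of the tolerance costs roughly a factor of the square root of two more sides, so the price of extra precision grows slowly but never stops.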
This reasoning doesn’t apply only to graphics applications; it holds for a large number of other applications and contexts. Just because someone came up with a complex algorithm that offers the best precision, and it became the accepted method, doesn’t mean there is no room to cut corners. You may get away with a much faster prediction algorithm that is right 95% of the time instead of 97%, because being wrong 2% more often doesn’t cost all that much when errors occur comparatively rarely.
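That trade-off is just expected-cost arithmetic. A toy sketch, with every number hypothetical (the per-call costs, the error penalty, and the function name are all mine, purely for illustration):

```python
def total_cost(n, compute_cost, error_rate, error_cost):
    """Expected total cost of running a predictor n times:
    pay the compute cost every call, the error cost only when wrong."""
    return n * (compute_cost + error_rate * error_cost)

n = 1_000_000
# Hypothetical: the precise model is 20x slower per call but errs 2% less often.
precise = total_cost(n, compute_cost=20.0, error_rate=0.03, error_cost=100.0)
fast    = total_cost(n, compute_cost=1.0,  error_rate=0.05, error_cost=100.0)
print(f"precise: {precise:,.0f}  fast: {fast:,.0f}")
```

With these made-up numbers the fast predictor wins by a wide margin; with a large enough error penalty the precise one would. The point is that the comparison is a one-line calculation, not a matter of principle.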
Balancing the amount of computation against the precision of the result is one of the eternal struggles in computing. You have to choose between the usine à gaz, an over-engineered contraption that provides precision up to the umpteenth bit, and much simpler devices that will give you acceptable precision, often at a much reduced computational cost. You have to fully understand the trade-offs of the precision vs. accuracy dilemma for your specific problem. Increasing the computation time does not guarantee a significant gain in accuracy or precision; and even when the difference is significant mathematically, that doesn’t mean it’s pragmatically relevant.
As with everything else, nothing is as simple as it first seems.
An Introduction to Error Analysis: The Study of Uncertainties in Physical Measurements (Google Books). Although this book does not discuss numerical approximation, it is still a must-read to understand the precision vs. accuracy dilemma from the more mathematical angle of error propagation. Or you can get it from Amazon.com.