In the article "A compact Logarithm Algorithm" (DDJ, February 1996), John DeVos describes an approximation to log2(x) using interpolation. This note, based on a letter to DDJ (May 1996), details another route: computing the natural logarithm quickly with a simple but accurate formula.

Here is another useful solution to the same problem, one that is more general and easily extended to many other functions. It is based on a Pade approximation, which has the somewhat magical property of converting a poorly converging power series into a usually much better behaved rational polynomial. The general method is a little more complex, but an approximation built from three terms will do for this case.

Given the first three terms of a truncated power series:

f(x) = ax + bx² + cx³ + ...

The equivalent Pade approximation can be derived as:

P{f(x)} = x(ab + (b² - ac)x)/(b - cx)

This rational expression exactly matches the known coefficients for x, x² and x³ but has generally much better convergence. Dividing out (b-cx) gives:

P{ f(x) } = ax + bx² + cx³ + (cx²)²/(b - cx)
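As a quick check, the general three-term form can be coded directly (the function name pade3 is mine, not from the article):

```c
#include <math.h>

/* The general three-term Pade form from the text:
   P{f(x)} = x(ab + (b^2 - ac)x)/(b - cx)
   for f(x) = ax + bx^2 + cx^3 + ... */
double pade3(double a, double b, double c, double x)
{
    return x * (a*b + (b*b - a*c) * x) / (b - c*x);
}
```

With the ln(1+x) coefficients a = 1, b = -1/2, c = 1/3 this reproduces x(6+x)/(6+4x), giving 0.7 at x = 1.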

The problem of interpolating the logarithm function is equivalent to requiring the value of ln(1+x) in the range 0 <= x <= 1. The common series for this has very poor convergence:

ln( 1+x ) = x - x²/2 + x³/3 - ...

ln(1) = 0, ln(2) = 0.833333; max error = 0.14 (20%)

Hence the Pade approximation is:

P{ ln( 1+x ) } = x(6+x)/(6+4x)

ln(1) = 0, ln(2) = 0.7; max error = 0.00685 (< 1%); rms error = 0.00258

This is already a good, fast approximation to ln(1+x), and in many applications, such as realtime displays, it can serve as the basis for logarithmic scaling with a few minor modifications for integer arithmetic. To keep this note a reasonable length I won't detail the changes needed for proper truncation in a scaled integer version.
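For illustration, here is one way the approximation might be wrapped into a full-range natural log in C; the use of frexp() and the function name are my sketch, not the author's code:

```c
#include <math.h>

/* Sketch of a full-range natural log built on the article's
   P{ ln(1+x) } = x(6+x)/(6+4x).  frexp() splits v > 0 into m * 2^e
   with 0.5 <= m < 1; doubling m puts the fraction in [1,2), so the
   Pade form covers it with 0 <= x < 1. */
double fast_ln(double v)
{
    int e;
    double m = 2.0 * frexp(v, &e);       /* v = m * 2^(e-1), 1 <= m < 2 */
    double x = m - 1.0;                  /* 0 <= x < 1                  */
    return (e - 1) * 0.6931471805599453  /* exponent times ln 2         */
         + x * (6.0 + x) / (6.0 + 4.0 * x);
}
```

The error is bounded by the 0.00685 quoted above, plus nothing: the exponent contribution is exact.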

Where greater accuracy is required, the expression can be optimised by least-squares fitting of the coefficients over the range 0-1. This yields:

P'{ ln( 1+x ) } = x(6 + 0.7662x)/(5.9897 + 3.7658x)

ln(1) = 0, ln(2) = 0.69358; max error = 4.3E-4 (< 0.1%); rms error = 1.5E-4

This is about as good as you can get with this simple formula, and these
coefficients may be scaled up to suitable integers for use.

Much higher accuracy is possible by starting from a better series:

Define: y = x/(2+x)

then ln( (1+y)/(1-y) ) = 2y + 2y³/3 + 2y⁵/5 + ...

ln(1) = 0, ln(2) = 0.69300; max error = -0.00014

From which the Pade approximation yields:

P{ ln( (1+y)/(1-y) ) } = 2y(15 - 4y²)/(15 - 9y²)

ln(1) = 0, ln(2) = 0.693122; max error = -0.000025
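Coded up, the substitution and the rational form together are only a few operations (a sketch; the function name is mine):

```c
#include <math.h>

/* Higher-accuracy ln(1+x): with y = x/(2+x), ln(1+x) = ln((1+y)/(1-y)),
   approximated by the Pade form 2y(15 - 4y^2)/(15 - 9y^2). */
double ln1p_pade(double x)
{
    double y  = x / (2.0 + x);
    double y2 = y * y;
    return 2.0 * y * (15.0 - 4.0 * y2) / (15.0 - 9.0 * y2);
}
```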

Again, the coefficients of this expression can be tweaked to improve accuracy over a narrow range still further.

A similar approach, combined with bit shifting of the exponent, yields a method for calculating approximate square roots quickly, based on:

P{ sqrt(1+x) } = (4 + 3x)/(4 + x)
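A sketch of that square-root method, with the exponent halving done portably via frexp()/ldexp() rather than literal bit shifts (the structure and name are my assumptions, not the author's code):

```c
#include <math.h>

/* Approximate square root: halve the binary exponent, then correct the
   mantissa with P{ sqrt(1+x) } = (4+3x)/(4+x).  After forcing the
   exponent even, the mantissa m lies in [0.5, 2), i.e. x = m-1 is in
   [-0.5, 1). */
double fast_sqrt(double v)
{
    int e;
    double m = frexp(v, &e);        /* v = m * 2^e, 0.5 <= m < 1 */
    if (e & 1) { m *= 2.0; e--; }   /* make the exponent even    */
    double x = m - 1.0;
    return ldexp((4.0 + 3.0*x) / (4.0 + x), e / 2);
}
```

The worst-case error is roughly 1% at the ends of the mantissa range, which is often adequate for the realtime uses mentioned above.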

Fast rational expressions can also be derived for sin, cos, tan and their inverses, with rms accuracies typically around 0.03%. Restricting arguments to 0 < x < pi/4 gives an rms error below 0.003%.

Here are first- and second-order rational versions of a few common transcendental functions:

Function               | Pade Approximation                    | Taylor Series
-----------------------|---------------------------------------|--------------------------------
Log( 1+x )             | x(6+x)/(6+4x)                         | x - x²/2 + x³/3 - ...
Log( 1+x )             | x - x²(12+x)/(24+18x)                 | x - x²/2 + x³/3 - x⁴/4 + ...
Log( (1+x)/(1-x) )     | 2x(15-4x²)/(15-9x²)                   | x + x³/3 + x⁵/5 + ...
Sqrt( 1+x )            | (4+3x)/(4+x)                          | 1 + x/2 - x²/8 + ...
Sqrt( 1+x )            | 1 + x(4+x)/(8+4x)                     | 1 + x/2 - x²/8 + x³/16 - ...
Sqrt( x²+y² )          | x + 2xy²/(4x²+y²)                     | x >= y
Sin(x)                 | x(60-7x²)/(60+3x²)                    | x - x³/3! + x⁵/5! - ...
Sin(x)                 | x - x³(420-11x²)/(2520+60x²)          | x - x³/3! + x⁵/5! - x⁷/7! + ...
Cos(x)                 | (12-5x²)/(12+x²)                      | 1 - x²/2! + x⁴/4! - ...
Cos(x)                 | 1 - x²(60-3x²)/(120+4x²)              | 1 - x²/2! + x⁴/4! - x⁶/6! + ...
Tan(x)                 | x(15-x²)/(15-6x²)                     | x + x³/3 + 2x⁵/15 + ...
Tan(x)                 | x + x³(210-x²)/(630-255x²)            | x + x³/3 + 2x⁵/15 + 17x⁷/315 + ...
Arctan(x)              | x(15+4x²)/(15+9x²)                    | x - x³/3 + x⁵/5 - ...
Arctan(x)              | x - x³(35+4x²)/(105+75x²)             | x - x³/3 + x⁵/5 - x⁷/7 + ...
Exp(x)                 | (2+x)/(2-x)                           | 1 + x + x²/2! + ...
Exp(x)                 | 1 + x(6+x)/(6-2x)                     | 1 + x + x²/2! + x³/3! + ...
(1+x)^(1/n)            | (2n + (1+n)x)/(2n + (n-1)x)           | 1 + x/n + (1-n)x²/(2n²) + ...
(1+1/n)^n              | e(1 - 6/(12n+11))                     | 1 + 1 + (n-1)/(2n) + ...
x.Log(x)               | -20x(1-x)/(7+15x)                     | 0 <= x <= 1 (impure)
x.Log(x)               | -x(1-x)(1 + (1-x)(79+71x)/(24+248x))  | 0 <= x <= 1 (impure)
x.Log(x)+(1-x)Log(1-x) | -x(1-x)(11+33x(1-x))/(2+20x(1-x))     | 0 <= x <= 1 (impure)
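The first-order trig rows can be transcribed and spot-checked against the C library over 0 <= x <= pi/4 (a verification sketch, not part of the original article):

```c
#include <math.h>

/* First-order Pade forms for sin, cos and tan, taken from the table,
   written with t = x*x so each costs only a few multiplies and one divide. */
double sin_pade(double x) { double t = x*x; return x * (60.0 - 7.0*t) / (60.0 + 3.0*t); }
double cos_pade(double x) { double t = x*x; return (12.0 - 5.0*t) / (12.0 + t); }
double tan_pade(double x) { double t = x*x; return x * (15.0 - t) / (15.0 - 6.0*t); }
```

At x = pi/4 all three agree with the library functions to better than 1E-3 absolute.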

The rational approximation is particularly good for series with alternating terms and poor polynomial convergence, like log(1+x). The polynomial series for exp(x) converges too well to gain much benefit from Pade approximation using a first-order rational polynomial. Longer versions of optimised rational polynomials are frequently used for the computation of transcendental functions on larger machines. They are worth considering on microcontrollers too, because they provide a good compromise between accuracy, size and speed.

Simple Pade approximants can also be used to obtain novel and sometimes useful analytical solutions to a range of classical problems. One such is the solution of Kepler's equation using the lowest-order Pade approximation for Sin(x) to yield a cubic polynomial.