Solving the Quadratic Equation

This note originated in a thread on comp.software-eng on the topic of robust algorithms. I was trying to illustrate the need for algorithm designers to be very careful in enumerating all the possible special cases in routines which are intended to be robust. It is my view that such routines should cope sensibly with all valid inputs and give meaningful error reports for any invalid inputs. Unless speed is absolutely of the essence I do not subscribe to the garbage in, garbage out maxim: I much prefer garbage in, error messages out, an ideal which is not always attainable.

This is a fairly trivial example, but manages to encompass enough complexity to illustrate the hazards without having to resort to a long and detailed analysis.

As an example of how deceptively simple-looking problems can cause trouble: is anyone brave enough to offer a solution to the quadratic equation?
0 = ax² + bx + c
Developing robust algorithms that cope with all the special cases correctly is a specialised business.

So I would have to guess that your point is that the roots can be complex. Anyhow, I'll bite:
x = (-b ± √(b² - 4ac)) / 2a

OK - that is the formal high school algebra solution. I'll post a fairly complete solution here in about a week.

A moderately long thread ensued, with some interesting and highly inaccurate claims made about how to solve this simple equation. I was also sent a large number of programs which people claimed were complete solutions of the problem, asserting that there were absolutely no problems with their programs.

Most of the solution cases were identified, but one important special case, #6, was missed. I think this shows quite clearly why well-tested and documented library routines are to be preferred to home-brew concoctions.

We can clearly identify the following special cases (sketched in code after the list):
#0 a = 0, b = 0 -> equation is inconsistent unless c = 0, and x is undetermined
#1 a = 0 -> solve 0 = bx + c -> x = -c/b (one root)
#2 a != 0 -> define D = b²-4ac then
#3 D < 0 -> complex roots (need complex arithmetic)
#4 D = 0 -> identical roots x = -b/2a (rounding problems with D)
#5 D > 0 -> real distinct roots (as described above)
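
As a sketch of the case analysis above, here is a minimal C routine following the numbering #0 to #5. The function name and return convention are my own, and case #5 deliberately uses the naive textbook formula, whose weakness is the subject of case #6 below:

    #include <math.h>

    /* Returns the number of roots found: 0 (inconsistent), 1 (linear),
       2 (quadratic), or -1 when x is undetermined (0 = 0).  Complex
       conjugate roots come back as *x1 +/- (*im)i. */
    int solve_quadratic(double a, double b, double c,
                        double *x1, double *x2, double *im)
    {
        *im = 0.0;
        if (a == 0.0) {
            if (b == 0.0)
                return (c == 0.0) ? -1 : 0;   /* #0: 0 = c               */
            *x1 = -c / b;                     /* #1: linear, one root    */
            return 1;
        }
        double D = b * b - 4.0 * a * c;       /* #2: the discriminant    */
        if (D < 0.0) {                        /* #3: complex roots       */
            *x1 = *x2 = -b / (2.0 * a);
            *im = sqrt(-D) / (2.0 * a);
            return 2;
        }
        if (D == 0.0) {                       /* #4: identical roots     */
            *x1 = *x2 = -b / (2.0 * a);
            return 2;
        }
        double s = sqrt(D);                   /* #5: distinct real roots */
        *x1 = (-b + s) / (2.0 * a);           /* naive textbook formula; */
        *x2 = (-b - s) / (2.0 * a);           /* see case #6 below       */
        return 2;
    }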

However, we are still not done. In the spirit of robustness we should also check that substituting the roots back into the equation does give (a close approximation to) zero. It may even be possible to "polish" the roots using the time-honoured Newton-Raphson method: xnew = x - f(x)/f'(x)
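
For the quadratic itself f(x) = ax² + bx + c and f'(x) = 2ax + b, so a single polishing step might look like the following sketch (the guard against a vanishing derivative is my own addition, and anticipates trap #7 below):

    double polish_root(double a, double b, double c, double x)
    {
        double f  = (a * x + b) * x + c;   /* f(x) in Horner form        */
        double df = 2.0 * a * x + b;       /* f'(x)                      */
        if (df != 0.0)                     /* N-R fails at a double root */
            x -= f / df;
        return x;
    }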

In order of seriousness the following traps remain:

#6 in case #5 b² >> 4ac -> roots tend towards -b/a and 0
The latter is the difference of two large and nearly equal numbers, and so is very prone to difficulties of numerical roundoff. The usual solution is to use the standard expression for the larger magnitude root, x1, and derive the other root accurately as x2 = (c/a) / x1 (sketched in code after this list).

#7 in case #4 the identical roots prevent N-R root polishing (f'(x) is zero at the root).
#8 in case #5 b² << |4ac| -> slight risk of rounding errors (not usually requiring special solution)
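
The stable form for case #6 might be written as follows (a sketch; copysign picks the sign of the square root that adds to b rather than cancelling, and x2 = c/q is algebraically identical to (c/a) / x1):

    #include <math.h>

    /* Stable pair of roots for cases #5/#6.  Assumes a != 0 and
       D = b*b - 4*a*c > 0 have already been established. */
    void stable_roots(double a, double b, double c, double D,
                      double *x1, double *x2)
    {
        double q = -0.5 * (b + copysign(sqrt(D), b)); /* no cancellation  */
        *x1 = q / a;    /* larger-magnitude root                          */
        *x2 = c / q;    /* smaller root, the same as (c/a) / *x1          */
    }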

That's the lot - excuse me for not treating the complex roots case, but there are no other interesting special cases occurring there. In more complex situations, even with skilled analysts, it is not uncommon for the odd special case, or an unexpected consequence of rounding error in unusual circumstances, to escape scrutiny.

There are also a few end effects which need careful consideration when the magnitudes of the coefficients are either very large or very small, to ensure that the best accuracy is preserved. It is possible to rescale the problem to a canonical form which avoids some of this, but there is still some risk of overflow in intermediate calculations.

b² or 4ac may overflow even though sqrt(b² - 4ac) is a reasonable value.
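
A small demonstration of this trap, with a workaround that factors b out of the square root (a sketch; the coefficients are my own illustrative choice, and this simple rescaling is only one of several possible):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double a = 1.0, b = 1e200, c = 1.0;   /* roots near -1e200 and -1e-200 */
        double D = b * b - 4.0 * a * c;       /* b*b overflows to +infinity    */
        printf("naive sqrt(D)  = %g\n", sqrt(D));          /* inf   */

        /* Factor |b| out: sqrt(D) = |b| * sqrt(1 - (4a/b)*(c/b)).
           Here (a/b)*(c/b) underflows harmlessly to zero, since it is
           negligible compared with 1 anyway. */
        double s = fabs(b) * sqrt(1.0 - 4.0 * (a / b) * (c / b));
        printf("scaled sqrt(D) = %g\n", s);                /* 1e200 */
        return 0;
    }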

There were quite a few postings in the thread which highlighted a range of misconceptions about the nature of the problem and the difficulties inherent in manipulating floating point numbers on a computer.

Please do not send me any more programs which solve the equation, together with "proofs" that your program is perfectly OK and there is no problem here worth worrying about, based on solving a=1, b=2, c=1. Also note that because intermediate results in Intel floating point processors are held in temporary registers of 80 bits, this will tend to mask any numerical instability for single precision coefficients. Even for double precision 64-bit real coefficients the extra 16 bits of the temporary reals are very helpful in maintaining numerical accuracy.

I can still hear you saying: yes, OK, but this is a theoretical fault and it could never happen in practice. Well, if a thing can go wrong it will, and here is a practical example where hardware improvements, together with the naive solution of a quadratic equation, caused real trouble in an industrial environment.

High quality instrumentation DACs were available in 16-bit and 18-bit accuracy and were guaranteed to be monotonically increasing and accurate locally to better than half a bit. However, as the early ones were a bit less than perfect, they tended to have a slight DC offset and some nonlinearity droop as the output voltage increased. This was generally calibrated out against a reference DVM of high resolution and accuracy (6 significant figures). The systematic error was very slight, but important in some high precision applications.
Values of a = 1e-10, b = 38 and c = y were fairly typical, and were solved for the root nearest to zero. Note that this is very close to a linear equation with a very small quadratic correction: almost Case #1, or more accurately Case #6.
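
A short demonstration with these typical coefficients shows the failure directly (a sketch; single precision is used to mirror the kind of arithmetic involved, and the value c = 1 is my own illustrative choice for y):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        float a = 1e-10f, b = 38.0f, c = 1.0f;  /* c = y, illustrative value */
        float D = b * b - 4.0f * a * c;         /* 4ac is lost: D == 1444    */

        /* Naive formula for the root nearest zero: total cancellation */
        float naive = (-b + sqrtf(D)) / (2.0f * a);

        /* Stable form: larger root first, then x2 = (c/a) / x1 */
        float x1 = (-b - sqrtf(D)) / (2.0f * a);
        float x2 = (c / a) / x1;

        printf("naive  root = %g\n", naive);    /* prints 0          */
        printf("stable root = %g\n", x2);       /* about -0.0263158  */
        return 0;
    }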

Time passed and the DAC manufacturers improved their process, so that the DC offset was lower and, more importantly, the main source of nonlinearity was largely eliminated by better analogue designs. This reduced the value of "a" to the point where it was no longer possible to get correct 18-bit DAC values for some instruments. The fault looked remarkably like a hardware fault, as the least significant DAC bits would be erratic (and the software had previously been known to work flawlessly for several years).

It was finally traced to inaccurate solution of the calibration polynomial only after the hardware had been exonerated.

The example of solving the quadratic equation in single precision floating point is also covered in the Intel Pentium Architecture Software Developer's Manual, Volume 1, Chapter 7 (Floating Point). The crucial point here is that the x87 intermediate results are held in wide 80-bit registers, and the loss of significance will normally be hidden from the end user provided all of the calculation is done on the FPU stack. However, if, as may well be the case, the partial result sqrt(b² - 4ac) is saved to external single precision storage and reloaded, then all bets are off. It is very generous of the floating point hardware designers to build in these extra guard digits to help programmers.



Send feedback, suggestions and comments to Martin Brown
Last modified 8th February 1998