# How and How Not to Compute a Relative Error

The relative error in a scalar $y$ as an approximation to a scalar $x$ is the absolute value of $e = (x-y)/x$. I recently came across a program in which $e$ had been computed as $1 - y/x$. It had never occurred to me to compute it this way. The second version is slightly easier to type, requiring no parentheses, and it has the same cost of evaluation: one division and one subtraction. Is there any reason not to use this parenthesis-free expression?

Consider the accuracy of the evaluation, using the standard model of floating point arithmetic, which says that $fl(x \mathbin{\mathrm{op}} y) = (x \mathbin{\mathrm{op}} y)(1+\delta)$ with $|\delta| \le u$, where $\mathrm{op}$ is any one of the four elementary arithmetic operations and $u$ is the unit roundoff. For the expression $e_1 = (x-y)/x$ we obtain, with a hat denoting a computed quantity, $\widehat{e_1} = \displaystyle\left(\frac{x-y}{x}\right) (1+\delta_1)(1+\delta_2), \qquad |\delta_i| \le u, \quad i = 1, 2.$

It follows that $\left| \displaystyle\frac{e - \widehat{e_1}}{e} \right| \le 2u + u^2.$
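This bound follows on multiplying out the rounding factors: $\widehat{e_1} = e\,(1 + \delta_1 + \delta_2 + \delta_1\delta_2)$, so that $\left|\displaystyle\frac{e - \widehat{e_1}}{e}\right| = |\delta_1 + \delta_2 + \delta_1\delta_2| \le 2u + u^2.$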

Hence $e_1$ is computed very accurately.

For the alternative expression, $e_2 = 1 - y/x$, we have $\widehat{e_2} = \left(1 - \displaystyle\frac{y}{x}(1+\delta_1)\right) (1+\delta_2), \qquad |\delta_i| \le u, \quad i = 1, 2.$

After a little manipulation we obtain the bound $\left| \displaystyle\frac{e - \widehat{e_2}}{e} \right| \le u + \left|\displaystyle\frac{1-e}{e}\right|(u + u^2).$
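To spell out the manipulation, write $y/x = 1 - e$. Then $\widehat{e_2} = \left(1 - (1-e)(1+\delta_1)\right)(1+\delta_2) = e + e\delta_2 - (1-e)\delta_1(1+\delta_2)$, so $e - \widehat{e_2} = -e\delta_2 + (1-e)\delta_1(1+\delta_2)$, and the bound follows on taking absolute values and dividing by $|e|$.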

The bound on the relative error in $\widehat{e_2}$ is of order $|(1-e)/e|u$, and hence is very large when $|e| \ll 1$.
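This effect is easy to reproduce without the full experiment. The following minimal sketch (my own illustration, not the script from the post) evaluates both formulas in single precision for a $y$ four ulps away from $x = 3$ and compares each with the error computed in double precision:

```python
import numpy as np

x = np.float32(3.0)
y = np.float32(3.0) + np.float32(2.0 ** -20)  # y - x = 4 ulps of 3

# Reference value of |e| = |x - y|/|x|, computed in double precision.
exact = abs((np.float64(x) - np.float64(y)) / np.float64(x))

e1 = abs((x - y) / x)              # standard formula, all in float32
e2 = abs(np.float32(1.0) - y / x)  # parenthesis-free formula, in float32

print(abs(e1 - exact) / exact)  # ~3e-8: accurate to working precision
print(abs(e2 - exact) / exact)  # ~0.125: not even one correct digit beyond the order of magnitude
```

Here $x - y$ and the final subtraction in $1 - y/x$ are both exact, so the only rounding errors are those the analysis predicts: a single division for $e_1$, but a division whose error is magnified by $1/|e|$ for $e_2$.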

To check these bounds we carried out a MATLAB experiment. For 500 single precision floating point numbers $y$ centered on $x = 3$, we evaluated the relative error of $y$ as an approximation to $x$ using the two formulas. The results are shown in this figure, where an ideal error is of order $u \approx 6\times 10^{-8}$. (The MATLAB script that generates the figure is available as this gist.) As expected from the error bounds, the formula $1-y/x$ is very inaccurate when $y$ is close to $x$, whereas $(x-y)/x$ retains its accuracy as $y$ approaches $x$.

Does this inaccuracy matter? Usually, we are concerned only with the order of magnitude of an error and do not require an approximation with many correct significant figures. However, as the figure shows, for the formula $|1-y/x|$ even the order of magnitude is incorrect for $y$ very close to $x$. The standard formula $|x-y|/|x|$ should be preferred.
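Following this conclusion, a small helper based on the preferred formula might look like the sketch below (the function name and the array handling are my own additions, not from the post):

```python
import numpy as np

def relative_error(x, y):
    """Relative error of y as an approximation to x, as |x - y|/|x|.

    Uses the standard formula (x - y)/x rather than 1 - y/x, which
    can lose accuracy when y is very close to x.
    """
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    return np.abs(x - y) / np.abs(x)

print(relative_error(3.0, 3.0000003))  # ≈ 1e-7
```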

## 2 thoughts on “How and How Not to Compute a Relative Error”

1. Raúl Martínez says:

Nick,

Thank you for delving into this. Your result is certainly far from obvious and I find it very helpful. In my own work, I don’t recall computing the relative error using 1–y/x (though I cannot be certain), but I have sometimes used the latter expression in writing. After reading your note, I can see that such usage could mislead a reader into using that expression for computation, so I’ll be mindful not to use it in the future.

Sincerely,

Raúl Martínez

2. Kyle Mandli says:

For interested parties I converted Nick’s gist above to Python:

```python
# Based on the code by Nick Higham
# Compares relative error formulations using single precision and
# compared to double precision
import numpy
import matplotlib.pyplot as plt

N = 501  # Note: Use 501 instead of 500 to avoid the zero value
d = numpy.finfo(numpy.float32).eps * 1e4
a = 3.0
x = a * numpy.ones(N, dtype=numpy.float32)
y = numpy.array([x[i] + numpy.multiply(i - numpy.divide(N, 2.0, dtype=numpy.float32),
                                       d, dtype=numpy.float32)
                 for i in range(N)], dtype=numpy.float32)

# Compute errors and "true" error (the latter in double precision)
relative_error = numpy.empty((2, N), dtype=numpy.float32)
relative_error[0, :] = numpy.abs(x - y) / x
relative_error[1, :] = numpy.abs(1.0 - y / x)
exact = numpy.abs((x.astype(numpy.float64) - y.astype(numpy.float64))
                  / x.astype(numpy.float64))

# Compute differences between error calculations
error = numpy.empty((2, N))
for i in range(2):
    error[i, :] = numpy.abs((relative_error[i, :] - exact) / numpy.abs(exact))

fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.semilogy(y, error[0, :], '.', markersize=10, label="$|x-y|/|x|$")
axes.semilogy(y, error[1, :], '.', markersize=10, label="$|1-y/x|$")
axes.grid(True)
axes.set_xlabel("y")
axes.set_ylabel("Relative Error")
axes.set_xlim((numpy.min(y), numpy.max(y)))
axes.set_ylim((5e-9, numpy.max(error[1, :])))
axes.set_title("Relative Error Comparison")
axes.legend()
plt.show()
```