Tiny Relative Errors

Let x\ne0 and y be distinct floating point numbers. How small can the relative difference |x-y|/|x| between x and y be? For IEEE double precision arithmetic the answer is u = 2^{-53} \approx 1.1 \times 10^{-16}, which is called the unit roundoff.
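As a quick sanity check (my own illustration, not from the paper), the following Python snippet looks at a double just below a power of two, where the gap between adjacent numbers is smallest relative to the number itself. It assumes math.nextafter, available from Python 3.9.

```python
import math

u = 2.0 ** -53  # unit roundoff for IEEE double precision

# The relative gap between adjacent doubles is smallest just below a
# power of two; math.nextafter steps to the neighbouring double.
x = math.nextafter(2.0, 0.0)  # largest double below 2
y = math.nextafter(x, 0.0)    # the next double below x
print(abs(x - y) / abs(x))    # approximately 1.11e-16, i.e. about u
```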

What if we now let x and y be vectors and define the relative difference as d(x,y) = \|x-y\|_{\infty}/\|x\|_{\infty}, where the infinity norm is \|x\|_{\infty} = \max_i |x_i|? It does not seem to be well known that d(x,y) can be much less than u. I have observed this phenomenon numerous times over the years but had not previously stopped to wonder how it could happen. An example is

x = \begin{bmatrix}1 \\ 10^{-22} \end{bmatrix}, \quad  y = \begin{bmatrix}1 \\ 2\times 10^{-22} \end{bmatrix}, \quad d(x,y) = 10^{-22}.
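This example is easy to reproduce; here is a minimal sketch assuming NumPy:

```python
import numpy as np

u = 2.0 ** -53
x = np.array([1.0, 1e-22])
y = np.array([1.0, 2e-22])

# d(x,y) = ||x - y||_inf / ||x||_inf
d = np.linalg.norm(x - y, np.inf) / np.linalg.norm(x, np.inf)
print(d, d < u)  # approximately 1e-22, True
```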

Notice that every element of x of largest magnitude agrees with the corresponding element of y. This is, in fact, the only way that d(x,y) < u is possible.

Theorem (Dingle & Higham, 2013).
Let x\ne0 and y be n-vectors of normalized floating point numbers.
If d(x,y) < u then x_k = y_k for all k such that |x_k| = \|x\|_{\infty}.
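The contrapositive is easy to test numerically: if we perturb a largest-magnitude element of x by one unit in the last place, then d(x,y) cannot drop below u. A sketch of such a check (my own, again assuming NumPy and math.nextafter):

```python
import math
import numpy as np

u = 2.0 ** -53
rng = np.random.default_rng(0)

for _ in range(1000):
    x = rng.standard_normal(8)
    y = x.copy()
    k = int(np.argmax(np.abs(x)))
    # One-ulp change to a largest-magnitude element of x.
    y[k] = math.nextafter(x[k], math.inf)
    d = np.linalg.norm(x - y, np.inf) / np.linalg.norm(x, np.inf)
    assert d >= u  # consistent with the theorem: d < u requires agreement
```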

Both this result and the scalar case mentioned above are proved in

Nicholas J. Dingle and Nicholas J. Higham, Reducing the Influence of Tiny Normwise Relative Errors on Performance Profiles, MIMS EPrint 2011.90; to appear in ACM Trans. Math. Software.

Performance profiles, introduced by Dolan and Moré in 2002, are a popular way to compare competing algorithms according to particular measures of performance, such as relative error or execution time. In the paper we show that tiny relative errors can result in misleading performance profiles and that a simple transformation of the data leads to much more useful profiles. For more, see the paper or the talk Numerical Issues in Testing Linear Algebra Algorithms that I gave at the SIAM Conference on Computational Science and Engineering in Boston last week.
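For readers who have not met them, here is a minimal sketch of the standard Dolan-Moré construction. It is my own illustration of the profile itself, with hypothetical data; it is not the transformation proposed in the paper.

```python
import numpy as np

def performance_profile(T, taus):
    """Dolan-More performance profiles.

    T: (problems x algorithms) array of a "smaller is better" measure,
       e.g. relative errors or execution times.
    Returns rho[s, i] = fraction of problems on which algorithm s is
    within a factor taus[i] of the best algorithm for that problem.
    """
    r = T / T.min(axis=1, keepdims=True)  # performance ratios
    return np.array([[np.mean(r[:, s] <= tau) for tau in taus]
                     for s in range(T.shape[1])])

# Hypothetical data: two algorithms on three test problems, including a
# problem where both errors are tiny (well below u) yet their ratio is 3,
# which is what can distort the profile.
T = np.array([[1e-16, 2e-16],
              [1e-22, 3e-22],
              [4e-16, 1e-16]])
print(performance_profile(T, np.array([1.0, 2.0, 4.0])))
```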