The Rise of Mixed Precision Arithmetic

For the last 30 years, most floating point calculations in scientific computing have been carried out in 64-bit IEEE double precision arithmetic, which provides the elementary operations of addition, subtraction, multiplication, and division at a relative accuracy of about 10^{-16}. We are now seeing growing use of mixed precision, in which different floating point precisions are combined in order to deliver a result of the required accuracy at minimal cost.

Single precision arithmetic (32 bits) is an attractive alternative to double precision because it halves the costs of storing and transferring data, and on Intel chips the SSE extensions allow single precision arithmetic to run twice as fast as double.
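To make these figures concrete, here is a small NumPy sketch (the array size is an arbitrary choice of mine) showing the machine epsilon of each format and the halved storage cost of single precision:

```python
import numpy as np

# Machine epsilon: the gap between 1.0 and the next representable number.
print(np.finfo(np.float64).eps)   # about 2.2e-16, roughly 16 significant decimal digits
print(np.finfo(np.float32).eps)   # about 1.2e-07, roughly 7 significant decimal digits

# Storage: a million-element array occupies half the memory in single precision.
x64 = np.zeros(10**6, dtype=np.float64)
x32 = x64.astype(np.float32)
print(x64.nbytes, x32.nbytes)     # 8000000 versus 4000000 bytes
```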

[Figure: The Mandelbrot set computed in double and quadruple precision. Image taken from https://www.thasler.com/blog/blog/glsl-part5.]

Quadruple precision arithmetic, which was included in the 2008 revision of the IEEE standard, is supported by some compilers, and it can be implemented in terms of double precision arithmetic via double-double arithmetic. Arbitrary precision floating point arithmetic is available through, for example, the GNU MPFR library, the mpmath library for Python, the core data type BigFloat in the new language Julia, VPA arithmetic in the MATLAB Symbolic Math Toolbox, or the Advanpix Multiprecision Computing Toolbox for MATLAB.
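As a small illustration of working at higher precision with one of these libraries, here is a sketch using mpmath (the choice of 34 significant digits, roughly matching IEEE quadruple precision, and the test expression are mine):

```python
import math
from mpmath import mp, mpf

# Set the working precision to about 34 significant decimal digits,
# roughly the precision of IEEE quadruple precision.
mp.dps = 34

# The same expression evaluated in double precision and at higher precision.
x_double = math.sqrt(2) ** 2 - 2        # not exactly zero: about 4.4e-16
x_high   = mp.sqrt(mpf(2)) ** 2 - 2     # residual of order 1e-34 or smaller

print(x_double)
print(x_high)
```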

Half precision arithmetic, in which a number occupies 16 bits, is supported by the IEEE standard for storage but not for computation. It has been argued that, for deep learning, half precision, with its relative accuracy of about 10^{-4}, is good enough for training and running neural networks. Here are some of the ways in which extra precision is currently being used.

  • Iterative refinement, in the traditional form that first became popular in the 1960s, improves the quality of an approximate solution to a linear system via updates obtained from residuals computed in extra precision (a minimal sketch appears after this list).
  • When an algorithm suffers instability it may be possible to overcome it by using extra precision in just a few, key places. This has been done recently in eigensolvers and for matrix orthonormalization.
  • Any iterative algorithm that accepts an arbitrary starting point can be run once at a given precision and the solution used to “warm start” a second run of the same algorithm at higher precision. This idea has been used recently in linear programming.
  • Numerical integration of differential equations over long time periods may need higher precision in order to allow the phenomena of interest to be observed. A recent example is in the study of Kerr (rotating) black holes, where the underlying hyperbolic partial differential equation is solved using quadruple precision arithmetic running on GPUs.
  • When one is developing error bounds or testing algorithms, one needs in principle the exact solution. In practice, a solution computed at high precision and rounded to the working precision is usually adequate, and this is an approach I frequently use in my work in numerical linear algebra.
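To make the first item above concrete, here is a minimal sketch of mixed precision iterative refinement in NumPy: the solves are done in single precision while the residuals are computed in double precision. The test matrix, right-hand side, and iteration count are illustrative choices of mine, not taken from the post; a production code would also factorize the matrix once and reuse the factors.

```python
import numpy as np

def mixed_precision_refinement(A, b, iterations=5):
    """Solve Ax = b using single precision solves and double precision residuals.

    A minimal sketch of traditional iterative refinement: the expensive solve is
    done in the lower precision, and only the residual r = b - A x is computed
    in the higher precision.
    """
    A32 = A.astype(np.float32)
    b32 = b.astype(np.float32)

    # Initial solve entirely in single precision.
    x = np.linalg.solve(A32, b32).astype(np.float64)

    for _ in range(iterations):
        # Residual in double precision.
        r = b - A @ x
        # Correction equation solved in single precision.
        d = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
        x = x + d
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 100))
x_true = rng.standard_normal(100)
b = A @ x_true

x = mixed_precision_refinement(A, b)
print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))  # typically near 1e-15
```

For a reasonably well-conditioned matrix the refined solution reaches roughly double precision accuracy, even though every solve was carried out in single precision.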

As the relative costs and ease of computing at different precisions evolve, due to changing architectures and software, as well as the disruptive influence of accelerators such as GPUs, we will see an increasing development and use of mixed precision algorithms. In some ways this is analogous to the increasing interoperability of programming languages (illustrated by C++, Julia, and Python, for example): one uses the main tool (precision) one would like to work with and brings in other tools (precisions) as necessary in order to complete the task.


3 thoughts on "The Rise of Mixed Precision Arithmetic"

  1. I really enjoyed this blog post. Thank you.

    Multiple precision arithmetic is also employed in homotopy continuation methods for numerical solutions to multidimensional rootfinding. This is an interesting application of multiple precision because it may be needed for two reasons: (1) Two homotopy paths may get numerically close and extra precision in the key places can alleviate the short-lived algorithmic instability, and (2) An isolated root may itself be ill-conditioned and extra precision can sometimes be used to get just a few extra digits in the final solution. Bertini is just one excellent rootfinder that employs adaptive multiple precision arithmetic: https://bertini.nd.edu/.

  2. Are you aware of any work on mixed-precision Krylov methods? It would be highly interesting to explore their convergence rates when single precision is used.
