Gaussian elimination is the process of reducing an $n\times n$ matrix $A$ to upper triangular form by elementary row operations. It consists of $n-1$ stages, in the $k$th of which multiples of row $k$ are added to later rows to eliminate elements below the diagonal in the $k$th column. The result of Gaussian elimination (assuming it succeeds) is a factorization $A = LU$, where $L$ is unit lower triangular (lower triangular with ones on the diagonal) and $U$ is upper triangular. This factorization reduces the solution of a linear system $Ax = b$ to the solution of two triangular systems.
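As a minimal illustration (my own sketch, not from the original post), the following MATLAB commands solve $Ax = b$ by computing an LU factorization with the built-in lu function, which uses partial pivoting, and then performing the two triangular solves.

% Solve A*x = b via an LU factorization and two triangular solves.
n = 100;
A = randn(n); b = randn(n,1);
[L,U,P] = lu(A);   % P*A = L*U with L unit lower triangular, U upper triangular
y = L\(P*b);       % forward substitution: L*y = P*b
x = U\y;           % back substitution:    U*x = y
norm(A*x - b)      % small residual indicates a backward stable solve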
James Wilkinson showed in the early 1960s that the numerical stability of Gaussian elimination depends on the growth factor

$$\rho_n = \frac{\max_{i,j,k} |a_{ij}^{(k)}|}{\max_{i,j} |a_{ij}|} \ge 1,$$

where the $a_{ij}^{(k)}$ are the elements at the start of the $k$th stage of Gaussian elimination, with $a_{ij}^{(1)} = a_{ij}$. Specifically, he obtained a bound for the backward error of the computed solution that is proportional to $\rho_n u$, where $u$ is the unit roundoff of the floating-point arithmetic. Wilkinson’s analysis focused attention on the size of the growth factor.
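The definition translates directly into code. The following MATLAB function is a sketch of my own (the name growth_nopiv is hypothetical and is not part of any toolbox mentioned in this post): it performs Gaussian elimination without pivoting and records the largest element encountered at the start of each stage.

function rho = growth_nopiv(A)
%GROWTH_NOPIV Growth factor of Gaussian elimination without pivoting.
%   Sketch only: assumes all pivots are nonzero.
maxA = max(abs(A(:)));   % max_{i,j} |a_ij|
maxAk = maxA;            % running maximum of |a_ij^(k)| over all stages
n = size(A,1);
for k = 1:n-1
    r = k+1:n;
    A(r,k) = A(r,k)/A(k,k);            % multipliers
    A(r,r) = A(r,r) - A(r,k)*A(k,r);   % update the active submatrix
    maxAk = max(maxAk, max(max(abs(A(r,r)))));
end
rho = maxAk/maxA;
end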
If no row interchanges are done during the elimination then $\rho_n$ can be arbitrarily large, as is easily seen for $n = 2$ from the matrix $\begin{bmatrix} \epsilon & 1 \\ 1 & 1 \end{bmatrix}$ with $0 < \epsilon \ll 1$, for which $\rho_2 \approx \epsilon^{-1}$. For some specific types of matrix more can be said.
- If $A$ is symmetric positive definite then $\rho_n = 1$. In this case one would normally use Cholesky factorization instead of LU factorization.
- If $A$ is nonsingular and totally nonnegative (every submatrix has nonnegative determinant) then $\rho_n = 1$.
- If $A$ is diagonally dominant by rows or columns then $\rho_n \le 2$.
- If $A$ is complex and its real and imaginary parts are both symmetric positive definite then $\rho_n < 3$.
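These claims are easy to probe numerically with the growth_nopiv sketch above (again, my own illustration rather than anything from the post):

% Arbitrarily large growth without pivoting.
e = 1e-8;
growth_nopiv([e 1; 1 1])    % approximately 1/e = 1e8

% No growth for a symmetric positive definite matrix.
B = gallery('lehmer',100);  % symmetric positive definite
growth_nopiv(B)             % equals 1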
The entry in the $(k,k)$ position at the start of stage $k$ of Gaussian elimination is called the pivot. By swapping rows and columns, any element with row and column indices greater than or equal to $k$ can be brought into the pivot position and used as the pivot. A pivoting strategy is a strategy for interchanging rows and columns at the start of each stage in order to choose a good sequence of pivots. For dense matrices the main criterion is that the pivoting strategy keeps the growth factor small, while for sparse matrices minimizing fill-in (a zero entry becoming nonzero) is also important.
Partial Pivoting
Partial pivoting interchanges rows to ensure that the pivot element is the largest in magnitude in its column. Wilkinson showed that partial pivoting ensures $\rho_n \le 2^{n-1}$ and that equality is attained for matrices of the form illustrated for $n = 4$ by

$$A = \begin{bmatrix} 1 & 0 & 0 & 1 \\ -1 & 1 & 0 & 1 \\ -1 & -1 & 1 & 1 \\ -1 & -1 & -1 & 1 \end{bmatrix}.$$

This matrix is a special case of a larger class of matrices for which equality is attained (Higham and Higham, 1989). Exponential growth can occur for matrices arising in practical applications, as shown by Wright for the multiple shooting method for two-point boundary value problems and Foster for a quadrature method for solving Volterra integral equations. Remarkably, though, $\rho_n$ is usually of modest size in practice. Explaining this phenomenon is one of the outstanding problems in numerical analysis.
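The worst-case matrix is easy to reproduce for general $n$. The following MATLAB sketch (mine, not from the post) builds it and checks the growth under partial pivoting; the ratio of the largest element of $U$ to the largest element of $A$ is a lower bound on $\rho_n$ that is attained for this matrix.

n = 20;
A = eye(n) - tril(ones(n),-1);   % 1 on the diagonal, -1 below it
A(:,n) = 1;                      % last column of ones
[L,U,P] = lu(A);                 % LU factorization with partial pivoting
max(abs(U(:)))/max(abs(A(:)))    % 2^(n-1), i.e. 524288 for n = 20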
Rook Pivoting
Rook pivoting interchanges rows and columns to ensure that the pivot element is the largest in magnitude in both its row and its column. Foster has shown that

$$\rho_n \le 1.5\, n^{\frac{3}{4}\log n}.$$

This bound grows much more slowly than the bound for partial pivoting, but it can still be large. In practice, the bound is pessimistic.
Complete Pivoting
Complete pivoting chooses as pivot the largest element in magnitude in the remaining part of the matrix. Wilkinson (1961) showed that

$$\rho_n \le n^{1/2} \bigl( 2\cdot 3^{1/2} \cdot 4^{1/3} \cdots n^{1/(n-1)} \bigr)^{1/2}.$$

This bound grows much more slowly than that for rook pivoting, but again it is pessimistic. Wilkinson noted in 1965 that no matrix had been discovered for which $\rho_n > n$. Much interest has focused on the question of whether growth of $n$ can be exceeded and what is the largest possible growth. For a Hadamard matrix, $\rho_n \ge n$, but such matrices do not exist for all $n$. Evidence suggests that the growth factor for complete pivoting on a Hadamard matrix is exactly $n$, and this has been proved for $n = 12$ and $n = 16$.
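As a quick check of the Hadamard matrix statement, one can apply complete pivoting to hadamard(16) using the gep function from the Matrix Computation Toolbox that appears in the Experimentation section below (a sketch of my own; by the Kravvaritis and Mitrouli result the computed rho should be 16).

A = hadamard(16);             % Hadamard matrix of order 16
[L,U,P,Q,rho] = gep(A,'c');   % Gaussian elimination with complete pivoting
rho                           % expected value: 16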
That $\rho_n > n$ is possible was shown in the early 1990s by Gould (in floating-point arithmetic) and Edelman (in exact arithmetic), through a $13\times 13$ matrix with growth $13.02$.
Growth Factor Bounds for Any Pivoting Strategy
While much attention has focused on investigating the growth factor for particular pivoting strategies, some results are known that provide growth factor lower bounds for any pivoting strategy. Higham and Higham (1989) obtained a result that implies that for the symmetric orthogonal matrix

$$A = \left( \frac{2}{\sqrt{2n+1}}\, \sin\frac{2ij\pi}{2n+1} \right)_{i,j=1}^n$$

the lower bound $\rho_n \ge \frac{2n+1}{4}$ holds for any pivoting strategy. Recently, Higham, Higham, and Pranesh (2020) have shown that $\rho_n \ge \frac{n}{4\log n}$ holds with high probability for large $n$ when $A$ is a random orthogonal matrix from the Haar distribution. They show that the same bound holds for matrices that have only one singular value different from $1$ and have singular vector matrices that are random orthogonal matrices from the Haar distribution. These “randsvd matrices” can be generated as
gallery('randsvd',n,kappa,2)
in MATLAB.
Experimentation
If you want to experiment with growth factors in MATLAB, you can use the function gep
from the Matrix Computation Toolbox. Examples:
>> rng(0); A = randn(1000);
>> [L,U,P,Q,rho] = gep(A,'p'); rho % Partial pivoting.
rho =
   1.9437e+01
>> [L,U,P,Q,rho] = gep(A,'r'); rho % Rook pivoting.
rho =
   9.9493e+00
>> [L,U,P,Q,rho] = gep(A,'c'); rho % Complete pivoting.
rho =
   6.8363e+00
>> A = gallery('randsvd',1000,1e1,2);
>> [L,U,P,Q,rho] = gep(A,'c'); rho % Complete pivoting.
rho =
   5.4720e+01
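To explore the lower bound for random orthogonal matrices mentioned above, one can generate a Haar-distributed orthogonal matrix by the standard QR-based recipe and pass it to gep. This is a sketch of my own (the seed rng(1) is arbitrary and the output is not reproduced from the post):

>> rng(1); n = 1000;
>> [Q,R] = qr(randn(n));
>> A = Q*diag(sign(diag(R)));      % Haar-distributed orthogonal matrix
>> [L,U,P,Q,rho] = gep(A,'c'); rho % growth under complete pivoting
>> n/(4*log(n))                    % lower bound of Higham, Higham & Pranesh (2020), about 36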
References
This is a minimal set of references, which contain further useful references within.
- Alan Edelman, The complete pivoting conjecture for Gaussian elimination is false, The Mathematica Journal 2, 58–61, 1992.
- Leslie V. Foster, The growth factor and efficiency of Gaussian elimination with rook pivoting, J. Comput. Appl. Math. 86, 177–194, 1997.
- Nicholas J. Higham and Desmond J. Higham, Large growth factors in Gaussian elimination with pivoting, SIAM J. Matrix Anal. Appl. 10 (2), 155–164, 1989.
- Desmond J. Higham, Nicholas J. Higham, and Srikara Pranesh, Random Matrices Generating Large Growth in LU Factorization with Pivoting, MIMS EPrint 2020.13, The University of Manchester, UK, May 2020.
- Nicholas J. Higham, Accuracy and Stability of Numerical Algorithms, second edition, Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 2002. Sections 9.4, 10.4.
- Christos Kravvaritis and Marilena Mitrouli, The growth factor of a Hadamard matrix of order 16 is 16, Numer. Linear Algebra Appl. 16, 715–743, 2009.
- J. H. Wilkinson, Error analysis of direct methods of matrix inversion, J. Assoc. Comput. Mach. 8, 281–330, 1961.
Related Blog Posts
- Randsvd Matrices with Large Growth Factors (2020)
- What Is a Hadamard Matrix? (2020)
- What Is a Random Orthogonal Matrix? (2020)
This article is part of the “What Is” series, available from https://nhigham.com/category/what-is and in PDF form from the GitHub repository https://github.com/higham/what-is.
Hi Sirs.
There must be some bug in the “gep” function when rook pivoting is chosen.
In the example
rng(0); A = randn(1000);
[L,U,P,Q,rho] = gep(A,'r'); rho % Rook pivoting.
a warning “too many FOR loop iterations” is issued.
There was a change in MATLAB a while back to the allowed limits in for loops. I hope to do a major update of the toolbox later this year. For now you can get an updated gep.m from https://www.dropbox.com/s/pqwilyyjg66v9y1/gep.m?dl=0
Thanks Nick. That would be great. I’ll use this version in the meantime.