An LU factorization of an $n\times n$ matrix $A$ is a factorization $A = LU$, where $L$ is unit lower triangular and $U$ is upper triangular. "Unit" means that $L$ has ones on the diagonal. Example:
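As a small illustrative example (this matrix is my own stand-in, not the one from the original post; it is chosen to be row diagonally dominant, which ties in with the discussion of diagonal dominance later), the factors can be verified by multiplying them back together:

```python
import numpy as np

# Illustrative 3x3 example (not the original post's matrix): A = L U with
# L unit lower triangular and U upper triangular.
A = np.array([[4.0, 1.0, 1.0],
              [2.0, 5.0, 1.0],
              [1.0, 1.0, 3.0]])
L = np.array([[1.0,  0.0,     0.0],
              [0.5,  1.0,     0.0],
              [0.25, 1.0/6.0, 1.0]])
U = np.array([[4.0, 1.0, 1.0],
              [0.0, 4.5, 0.5],
              [0.0, 0.0, 8.0/3.0]])
residual = np.max(np.abs(L @ U - A))  # of order machine precision
```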

An LU factorization simplifies the solution of many problems associated with linear systems. In particular, solving a linear system $Ax = b$ reduces to solving the triangular systems $Ly = b$ and $Ux = y$, since then $b = Ly = L(Ux) = Ax$.
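The two triangular solves can be sketched in a few lines of NumPy (the matrix and its factors here are illustrative choices, and no pivoting is considered):

```python
import numpy as np

def forward_sub(L, b):
    """Solve L y = b for unit lower triangular L by forward substitution."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):
        y[i] = b[i] - L[i, :i] @ y[:i]   # diagonal of L is 1
    return y

def back_sub(U, y):
    """Solve U x = y for nonsingular upper triangular U by back substitution."""
    n = len(y)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

# Illustrative factors of a 3x3 matrix A = L @ U.
L = np.array([[1.0, 0.0, 0.0], [0.5, 1.0, 0.0], [0.25, 1.0/6.0, 1.0]])
U = np.array([[4.0, 1.0, 1.0], [0.0, 4.5, 0.5], [0.0, 0.0, 8.0/3.0]])
A = L @ U
b = np.array([1.0, 2.0, 3.0])
x = back_sub(U, forward_sub(L, b))  # solves A x = b via L y = b, U x = y
err = np.max(np.abs(A @ x - b))
```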

For a given $A$, an LU factorization may or may not exist, and if it does exist it may not be unique. Conditions for existence and uniqueness are given in the following result (see Higham, 2002, Thm. 9.1 for a proof). Denote by $A_k = A(1\colon k, 1\colon k)$ the leading principal submatrix of $A$ of order $k$.

Theorem 1. The matrix $A \in \mathbb{R}^{n\times n}$ has a unique LU factorization if and only if $A_k$ is nonsingular for $k = 1\colon n-1$. If $A_k$ is singular for some $k \le n-1$ then the factorization may exist, but if so it is not unique.

Note that the (non)singularity of $A$ plays no role in Theorem 1. However, if $A$ is nonsingular and has an LU factorization then the factorization is unique. Indeed if $A$ has LU factorizations $A = L_1U_1 = L_2U_2$ then the $U_i$ are necessarily nonsingular and so $L_2^{-1}L_1 = U_2U_1^{-1}$. The left side of this equation is unit lower triangular and the right side is upper triangular; therefore both sides must equal the identity matrix, which means that $L_1 = L_2$ and $U_1 = U_2$, as required.

Equating leading principal submatrices in $A = LU$ gives $A_k = L_kU_k$, which implies that $\det(A_k) = \det(U_k) = u_{11}u_{22}\cdots u_{kk}$. Hence $u_{kk} = \det(A_k)/\det(A_{k-1})$, with $\det(A_0) = 1$. In fact, such determinantal formulas hold for all the elements of $L$ and $U$:

$$\ell_{ij} = \frac{\det\bigl(A([1\colon j-1,\; i],\, 1\colon j)\bigr)}{\det(A_j)}, \quad i > j, \qquad
u_{ij} = \frac{\det\bigl(A(1\colon i,\, [1\colon i-1,\; j])\bigr)}{\det(A_{i-1})}, \quad i \le j.$$

Here, $A(u, v)$, where $u$ and $v$ are vectors of subscripts, denotes the submatrix formed from the intersection of the rows indexed by $u$ and the columns indexed by $v$.
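These formulas can be checked numerically; here is a sketch (the test matrix is my own choice, made diagonally dominant so that no pivoting is needed):

```python
import numpy as np

def lu_nopivot(A):
    """Gaussian elimination without pivoting: returns unit lower L, upper U."""
    A = A.astype(float).copy()
    n = A.shape[0]
    for k in range(n - 1):
        A[k+1:, k] /= A[k, k]                              # multipliers
        A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k, k+1:])  # rank-1 update
    return np.tril(A, -1) + np.eye(n), np.triu(A)

A = np.array([[5.0, 1.0, 2.0, 0.0],
              [1.0, 6.0, 1.0, 2.0],
              [2.0, 1.0, 7.0, 1.0],
              [0.0, 2.0, 1.0, 8.0]])
L, U = lu_nopivot(A)

# u_kk = det(A_k) / det(A_{k-1}), with det(A_0) = 1.
dets = [1.0] + [np.linalg.det(A[:k, :k]) for k in range(1, 5)]
u_diag = np.array([dets[k] / dets[k - 1] for k in range(1, 5)])
err_diag = np.max(np.abs(u_diag - np.diag(U)))

# l_32 = det(A([1,3], [1,2])) / det(A_2)   (1-based indices, as in the text)
l32 = np.linalg.det(A[np.ix_([0, 2], [0, 1])]) / np.linalg.det(A[:2, :2])
err_l32 = abs(l32 - L[2, 1])
```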

## Relation with Gaussian Elimination

LU factorization is intimately connected with Gaussian elimination. Recall that Gaussian elimination transforms a matrix $A^{(1)} = A$ to upper triangular form $A^{(n)} = U$ in $n-1$ stages. At the $k$th stage, multiples of row $k$ are added to the later rows to eliminate the elements below the diagonal in column $k$, using the formulas

$$a_{ij}^{(k+1)} = a_{ij}^{(k)} - m_{ik}\,a_{kj}^{(k)}, \quad i = k+1\colon n, \quad j = k+1\colon n,$$

where the quantities $m_{ik} = a_{ik}^{(k)}/a_{kk}^{(k)}$ are the multipliers. Of course each pivot $a_{kk}^{(k)}$ must be nonzero for these formulas to be defined, and this is connected with the conditions of Theorem 1, since $a_{kk}^{(k)} = \det(A_k)/\det(A_{k-1})$. The final $A^{(n)}$ is the upper triangular LU factor, $U$, with $u_{ij} = a_{ij}^{(i)}$ for $j \ge i$, and $\ell_{ij} = m_{ij}$ for $i > j$, that is, the multipliers make up the $L$ factor (for a proof of these properties see any textbook on numerical linear algebra).

The matrix factorization viewpoint is well established as a powerful paradigm for thinking and computing. Separating the computation of LU factorization from its application is beneficial. For example, given $A = LU$ we saw above how to solve $Ax = b$. If we need to solve for another right-hand side $b_2$ we can just solve $Ly = b_2$ and $Ux = y$, re-using the LU factorization. Similarly, solving $A^Tx = b$ reduces to solving the triangular systems $U^Ty = b$ and $L^Tx = y$.
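Re-use of the factors can be sketched with one generic triangular solver; the transposed factors $U^T$ and $L^T$ are themselves lower and upper triangular, so the same routine handles $A^Tx = b$ (the matrices below are illustrative choices):

```python
import numpy as np

def tri_solve(T, b, lower):
    """Solve T x = b for nonsingular triangular T (lower or upper)."""
    n = len(b)
    x = np.zeros(n)
    order = range(n) if lower else range(n - 1, -1, -1)
    for i in order:
        if lower:
            x[i] = (b[i] - T[i, :i] @ x[:i]) / T[i, i]
        else:
            x[i] = (b[i] - T[i, i+1:] @ x[i+1:]) / T[i, i]
    return x

L = np.array([[1.0, 0.0, 0.0], [0.5, 1.0, 0.0], [0.25, 1.0/6.0, 1.0]])
U = np.array([[4.0, 1.0, 1.0], [0.0, 4.5, 0.5], [0.0, 0.0, 8.0/3.0]])
A = L @ U

# Second right-hand side: same factors, two more triangular solves.
b2 = np.array([3.0, -1.0, 2.0])
x2 = tri_solve(U, tri_solve(L, b2, lower=True), lower=False)

# A^T x = b: U^T is lower triangular, L^T is upper triangular.
b = np.array([1.0, 0.0, 1.0])
xt = tri_solve(L.T, tri_solve(U.T, b, lower=True), lower=False)

err2 = np.max(np.abs(A @ x2 - b2))
errt = np.max(np.abs(A.T @ xt - b))
```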

## Computation

An LU factorization can be computed by directly solving for the components of $L$ and $U$ in the equation $A = LU$. Indeed because $L$ has unit diagonal the first row of $U$ is the same as the first row of $A$, and then $a_{i1} = \ell_{i1}u_{11}$ determines the first column of $L$. One can go on to determine the $k$th row of $U$ and the $k$th column of $L$, for $k = 2\colon n$. This leads to the Doolittle method, which involves inner products of partial rows of $L$ and partial columns of $U$.
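A sketch of the Doolittle method in NumPy (no pivoting, so nonzero pivots are assumed; the test matrix is my own choice):

```python
import numpy as np

def doolittle(A):
    """Doolittle's method: build the kth row of U and the kth column of L
    from inner products with previously computed rows and columns."""
    n = A.shape[0]
    L = np.eye(n)
    U = np.zeros((n, n))
    for k in range(n):
        # Row k of U: u_kj = a_kj - sum_{s<k} l_ks u_sj.
        U[k, k:] = A[k, k:] - L[k, :k] @ U[:k, k:]
        # Column k of L: l_ik = (a_ik - sum_{s<k} l_is u_sk) / u_kk.
        L[k+1:, k] = (A[k+1:, k] - L[k+1:, :k] @ U[:k, k]) / U[k, k]
    return L, U

A = np.array([[5.0, 1.0, 2.0, 0.0],
              [1.0, 6.0, 1.0, 2.0],
              [2.0, 1.0, 7.0, 1.0],
              [0.0, 2.0, 1.0, 8.0]])
L, U = doolittle(A)
err = np.max(np.abs(L @ U - A))
```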

Given the equivalence between LU factorization and Gaussian elimination we can also employ the Gaussian elimination equations:

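In Python-style code that mirrors the Fortran loop nest (loops written out deliberately rather than vectorized; a sketch, not an efficient implementation), the kji ordering is:

```python
import numpy as np

def lu_kji(A):
    """kji loop ordering of Gaussian elimination: A is overwritten by the
    multipliers (strictly lower part, giving L) and the factor U."""
    A = A.astype(float).copy()
    n = A.shape[0]
    for k in range(n - 1):
        for i in range(k + 1, n):          # multipliers for column k
            A[i, k] /= A[k, k]
        for j in range(k + 1, n):          # update each remaining column...
            for i in range(k + 1, n):      # ...inner loop runs down column j
                A[i, j] -= A[i, k] * A[k, j]
    return np.tril(A, -1) + np.eye(n), np.triu(A)

A = np.array([[5.0, 1.0, 2.0],
              [1.0, 6.0, 1.0],
              [2.0, 1.0, 7.0]])
L, U = lu_kji(A)
err = np.max(np.abs(L @ U - A))
```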
This kji ordering of the loops is the basis of early Fortran implementations of LU factorization, such as that in LINPACK. The inner loop travels down the columns of $A$, accessing contiguous elements, since Fortran stores arrays by column. Interchanging the two inner loops gives the kij ordering, which updates the matrix a row at a time, and is appropriate for a language such as C that stores arrays by row.

The ijk and jik orderings correspond to the Doolittle method. The last two of the $3! = 6$ orderings are the ikj and jki orderings, to which we will return later.

## Schur Complements

For $A \in \mathbb{R}^{n\times n}$ with $\alpha = a_{11} \ne 0$ we can write

$$A = \begin{bmatrix} \alpha & b^T \\ c & D \end{bmatrix}
    = \begin{bmatrix} 1 & 0 \\ c/\alpha & I_{n-1} \end{bmatrix}
      \begin{bmatrix} \alpha & b^T \\ 0 & D - cb^T\!/\alpha \end{bmatrix}
    \equiv L_1U_1. \qquad (2)$$

The matrix $S = D - cb^T\!/\alpha$ is called the *Schur complement* of $a_{11}$ in $A$.

The first column of $L_1$ and the first row of $U_1$ have the correct forms for a unit lower triangular matrix and an upper triangular matrix, respectively. If we can find an LU factorization $S = \tilde{L}\tilde{U}$ of the Schur complement then

$$A = \begin{bmatrix} 1 & 0 \\ c/\alpha & \tilde{L} \end{bmatrix}
     \begin{bmatrix} \alpha & b^T \\ 0 & \tilde{U} \end{bmatrix}$$

is an LU factorization of $A$. Note that this is simply another way to express the Gaussian elimination algorithm above.
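The induction behind (2) translates directly into a short recursive sketch (no pivoting, so all pivots are assumed nonzero; the test matrix is my own choice):

```python
import numpy as np

def lu_schur(A):
    """Recursive LU via the Schur complement: peel off the first row and
    column, then factorize S = D - c b^T / alpha."""
    n = A.shape[0]
    if n == 1:
        return np.eye(1), A.astype(float).copy()
    alpha, b, c, D = A[0, 0], A[0, 1:], A[1:, 0], A[1:, 1:]
    Ls, Us = lu_schur(D - np.outer(c, b) / alpha)   # factorize Schur complement
    L = np.eye(n)
    L[1:, 0] = c / alpha
    L[1:, 1:] = Ls
    U = np.zeros((n, n))
    U[0, 0], U[0, 1:] = alpha, b
    U[1:, 1:] = Us
    return L, U

A = np.array([[5.0, 1.0, 2.0, 0.0],
              [1.0, 6.0, 1.0, 2.0],
              [2.0, 1.0, 7.0, 1.0],
              [0.0, 2.0, 1.0, 8.0]])
L, U = lu_schur(A)
err = np.max(np.abs(L @ U - A))
```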

For several matrix structures it is immediate that $a_{11} \ne 0$. If we can show that the Schur complement inherits the same structure then it follows by induction that we can carry out the factorization (2) to completion, and so an LU factorization of $A$ exists. Classes of matrix for which $a_{11} \ne 0$ and $A$ being in the class implies the Schur complement is also in the class include

- symmetric positive definite matrices,
- $M$-matrices,
- matrices (block) diagonally dominant by rows or columns.

(The proofs of these properties are nontrivial.) Note that a row diagonally dominant matrix has a row diagonally dominant $U$ factor, as must be the case since the rows of $U$ are contained in Schur complements.

We say that $A$ has *upper bandwidth* $q$ if $a_{ij} = 0$ for $j > i + q$ and *lower bandwidth* $p$ if $a_{ij} = 0$ for $i > j + p$. Another use of (2) is to show that $L$ and $U$ inherit the bandwidths of $A$.

Theorem 2. Let $A \in \mathbb{R}^{n\times n}$ have lower bandwidth $p$ and upper bandwidth $q$. If $A$ has an LU factorization then $L$ has lower bandwidth $p$ and $U$ has upper bandwidth $q$.

Proof. In (2), the first column of $L_1$ and the first row of $U_1$ have the required structure, and the Schur complement $S = D - cb^T\!/\alpha$ has upper bandwidth $q$ and lower bandwidth $p$, since $c$ and $b$ have only $p$ and $q$ nonzero components, respectively. The result follows by induction.
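Theorem 2 can be illustrated numerically; the banded, diagonally dominant test matrix below is my own choice (dominance ensures the pivots are nonzero without pivoting):

```python
import numpy as np

def lu_nopivot(A):
    """Gaussian elimination without pivoting."""
    A = A.astype(float).copy()
    n = A.shape[0]
    for k in range(n - 1):
        A[k+1:, k] /= A[k, k]
        A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k, k+1:])
    return np.tril(A, -1) + np.eye(n), np.triu(A)

# Banded matrix with lower bandwidth p = 1 and upper bandwidth q = 2,
# made diagonally dominant so that no pivoting is needed.
n, p, q = 8, 1, 2
A = np.zeros((n, n))
for i in range(n):
    for j in range(max(0, i - p), min(n, i + q + 1)):
        A[i, j] = 1.0
    A[i, i] = 10.0

L, U = lu_nopivot(A)
l_band_ok = np.allclose(np.tril(L, -(p + 1)), 0)  # L has lower bandwidth p
u_band_ok = np.allclose(np.triu(U, q + 1), 0)     # U has upper bandwidth q
```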

## Block Implementations

In order to achieve high performance on modern computers with their hierarchical memories, LU factorization is implemented in a block form expressed in terms of matrix multiplication and the solution of multiple right-hand side triangular systems. We describe two block forms of LU factorization. First, consider a block form of (2) with block size $r$, where $A_{11}$ is $r\times r$:

$$A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}
    = \begin{bmatrix} L_{11} & 0 \\ L_{21} & I_{n-r} \end{bmatrix}
      \begin{bmatrix} U_{11} & U_{12} \\ 0 & S \end{bmatrix}.$$

Here, $S$ is the Schur complement of $A_{11}$ in $A$, given by $S = A_{22} - A_{21}A_{11}^{-1}A_{12}$. This leads to the following algorithm:

1. Factor $A_{11} = L_{11}U_{11}$.
2. Solve $L_{11}U_{12} = A_{12}$ for $U_{12}$.
3. Solve $L_{21}U_{11} = A_{21}$ for $L_{21}$.
4. Form $S = A_{22} - L_{21}U_{12}$.
5. Repeat steps 1–4 on $S$ to obtain its LU factorization.

The factorization on step 1 can be done by any form of LU factorization. This algorithm is known as a *right-looking* algorithm, since it accesses data to the right of the block being worked on (in particular, at each stage steps 2 and 4 access the columns to the right of the current block).
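A sketch of the right-looking block algorithm in NumPy (for brevity, `np.linalg.solve` stands in for the triangular multiple right-hand side solves, and the matrix, block size, and random seed are illustrative choices):

```python
import numpy as np

def lu_nopivot(A):
    """Unblocked Gaussian elimination without pivoting (used on diagonal blocks)."""
    A = A.astype(float).copy()
    n = A.shape[0]
    for k in range(n - 1):
        A[k+1:, k] /= A[k, k]
        A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k, k+1:])
    return np.tril(A, -1) + np.eye(n), np.triu(A)

def lu_block_right(A, r):
    """Right-looking block LU: factorize the diagonal block, solve for a block
    row of U and a block column of L, then update the Schur complement."""
    A = A.astype(float).copy()
    n = A.shape[0]
    L, U = np.eye(n), np.zeros((n, n))
    for k in range(0, n, r):
        e = min(k + r, n)
        L11, U11 = lu_nopivot(A[k:e, k:e])                # step 1
        L[k:e, k:e], U[k:e, k:e] = L11, U11
        if e < n:
            U12 = np.linalg.solve(L11, A[k:e, e:])        # step 2: L11 U12 = A12
            L21 = np.linalg.solve(U11.T, A[e:, k:e].T).T  # step 3: L21 U11 = A21
            A[e:, e:] -= L21 @ U12                        # step 4: Schur update
            L[e:, k:e], U[k:e, e:] = L21, U12
    return L, U

rng = np.random.default_rng(1)
A = rng.standard_normal((9, 9)) + 10 * np.eye(9)  # shifted to keep pivots safe
L, U = lu_block_right(A, r=4)
err = np.max(np.abs(L @ U - A)) / np.max(np.abs(A))
```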

An alternative algorithm can be derived by considering a block $3\times 3$ partitioning, in which we assume we have already computed the first block column of $L$ and $U$:

$$A = \begin{bmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \end{bmatrix}
    = \begin{bmatrix} L_{11} & & \\ L_{21} & L_{22} & \\ L_{31} & L_{32} & L_{33} \end{bmatrix}
      \begin{bmatrix} U_{11} & U_{12} & U_{13} \\ & U_{22} & U_{23} \\ & & U_{33} \end{bmatrix}.$$

We now compute the middle block column of $L$ and $U$, comprising $r$ columns, as follows.

1. Solve $L_{11}U_{12} = A_{12}$ for $U_{12}$.
2. Factor $A_{22} - L_{21}U_{12} = L_{22}U_{22}$.
3. Solve $L_{32}U_{22} = A_{32} - L_{31}U_{12}$ for $L_{32}$.
4. Repartition so that the first two block columns become a single block column and repeat steps 1–3.

This algorithm corresponds to the jki ordering. Note that the Schur complement is updated only a block column at a time. Because the algorithm accesses data only to the left of the block column being worked on, it is known as a *left-looking* algorithm.
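A sketch of the left-looking variant; at each block column it touches only the already-computed columns of $L$ to its left (again `np.linalg.solve` stands in for triangular solves, and the test matrix is an illustrative choice):

```python
import numpy as np

def lu_nopivot(A):
    """Unblocked Gaussian elimination without pivoting."""
    A = A.astype(float).copy()
    n = A.shape[0]
    for k in range(n - 1):
        A[k+1:, k] /= A[k, k]
        A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k, k+1:])
    return np.tril(A, -1) + np.eye(n), np.triu(A)

def lu_block_left(A, r):
    """Left-looking block LU: the update is applied one block column at a
    time, using only previously computed columns of L and rows of U."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L, U = np.eye(n), np.zeros((n, n))
    for k in range(0, n, r):
        e = min(k + r, n)
        if k:  # top part of the block column of U (step 1)
            U[:k, k:e] = np.linalg.solve(L[:k, :k], A[:k, k:e])
        # delayed update of the rest of the block column
        T = A[k:, k:e] - L[k:, :k] @ U[:k, k:e]
        # factorize the diagonal block (step 2)
        L[k:e, k:e], U[k:e, k:e] = lu_nopivot(T[:e - k])
        if e < n:  # solve for the subdiagonal block of L (step 3)
            L[e:, k:e] = np.linalg.solve(U[k:e, k:e].T, T[e - k:].T).T
    return L, U

rng = np.random.default_rng(2)
A = rng.standard_normal((9, 9)) + 10 * np.eye(9)
L, U = lu_block_left(A, r=4)
err = np.max(np.abs(L @ U - A)) / np.max(np.abs(A))
```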

Our description of these block algorithms emphasizes the mathematical ideas. The implementation details, especially for the left-looking algorithm, are not trivial. The optimal choice of block size $r$ will depend on the machine, but $r$ is typically in the range 64–256.

An important point is that all these different forms of LU factorization, no matter which loop ordering or which value of $r$, carry out the same operations. The only difference is the order in which the operations are performed (and the order in which the data is accessed). Even the rounding errors are the same for all versions (assuming the use of "plain vanilla" floating-point arithmetic).

## Rectangular Matrices

Although it is most commonly used for square matrices, LU factorization is defined for rectangular matrices, too. If $A \in \mathbb{R}^{m\times n}$ with $m \le n$ then the factorization has the form $A = LU$ with $L \in \mathbb{R}^{m\times m}$ unit lower triangular and $U \in \mathbb{R}^{m\times n}$ upper trapezoidal. The conditions for existence and uniqueness of an LU factorization of $A$ are the same as those for $A(1\colon p, 1\colon p)$, where $p = \min(m,n)$.

## Block LU Factorization

Another form of LU factorization relaxes the structure of $L$ and $U$ from triangular to block triangular, with $L$ having identity matrices on the diagonal:

$$A = \begin{bmatrix} I & & & \\ L_{21} & I & & \\ \vdots & & \ddots & \\ L_{m1} & \cdots & L_{m,m-1} & I \end{bmatrix}
      \begin{bmatrix} U_{11} & U_{12} & \cdots & U_{1m} \\ & U_{22} & & \vdots \\ & & \ddots & \vdots \\ & & & U_{mm} \end{bmatrix}.$$

Note that $U$ is not, in general, upper triangular.

As an example, consider a $4\times 4$ matrix $A$, partitioned into $2\times 2$ blocks, for which $a_{11} = 0$ but the leading principal $2\times 2$ submatrix $A_{11}$ is nonsingular. LU factorization fails on $A$ because of the zero pivot, but a block LU factorization with block size $2$ exists. The block LU factorization corresponds to using $A_{11}$ to eliminate the elements in the $(2,1)$ block. In the context of a linear system $Ax = b$, we have effectively solved for the variables $x_1$ and $x_2$ in terms of $x_3$ and $x_4$ and then substituted for $x_1$ and $x_2$ in the last two equations.
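A concrete instance (this particular matrix is my own illustration, not the one from the original post):

```python
import numpy as np

# a_11 = 0, so ordinary LU factorization breaks down at the first pivot,
# but the leading 2x2 block A11 is nonsingular, so a block LU factorization
# with block size 2 exists.
A = np.array([[0.0, 1.0, 1.0, 0.0],
              [1.0, 0.0, 0.0, 1.0],
              [1.0, 1.0, 2.0, 0.0],
              [0.0, 1.0, 0.0, 2.0]])
A11, A12 = A[:2, :2], A[:2, 2:]
A21, A22 = A[2:, :2], A[2:, 2:]

L21 = A21 @ np.linalg.inv(A11)   # eliminate the (2,1) block using A11
S = A22 - L21 @ A12              # Schur complement of A11 in A
L = np.block([[np.eye(2), np.zeros((2, 2))], [L21, np.eye(2)]])
U = np.block([[A11, A12], [np.zeros((2, 2)), S]])
err = np.max(np.abs(L @ U - A))
```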

Conditions for the existence of a block LU factorization are analogous to, but less stringent than, those for LU factorization in Theorem 1.

Theorem 3. The matrix $A \in \mathbb{R}^{n\times n}$, partitioned into $m$ block rows and columns, has a unique block LU factorization if and only if the first $m-1$ leading principal block submatrices of $A$ are nonsingular.

The conditions in Theorem 3 can be shown to be satisfied if $A$ is block diagonally dominant by rows or columns.

Note that to solve a linear system $Ax = b$ using a block LU factorization we need to solve $Ly = b$ and $Ux = y$, but the latter system is not triangular and requires the solution of systems involving the diagonal blocks $U_{ii}$, which would normally be done by (standard) LU factorization.

## Sensitivity

If $A$ has a unique LU factorization $A = LU$ then for a small enough perturbation $\Delta A$ an LU factorization $A + \Delta A = (L + \Delta L)(U + \Delta U)$ exists. To first order, this equation is $\Delta A = \Delta L\,U + L\,\Delta U$, which gives

$$L^{-1}\Delta A\,U^{-1} = L^{-1}\Delta L + \Delta U\,U^{-1}.$$

Since $L^{-1}\Delta L$ is strictly lower triangular and $\Delta U\,U^{-1}$ is upper triangular, we have, to first order,

$$\Delta L = L\,\mathrm{tril}_s\bigl(L^{-1}\Delta A\,U^{-1}\bigr), \qquad
  \Delta U = \mathrm{triu}\bigl(L^{-1}\Delta A\,U^{-1}\bigr)\,U,$$

where $\mathrm{tril}_s(\cdot)$ denotes the strictly lower triangular part and $\mathrm{triu}(\cdot)$ the upper triangular part. Clearly, the sensitivity of the LU factors depends on the inverses of $L$ and $U$. However, in most situations, such as when we are solving a linear system $Ax = b$, it is the backward stability of the LU factors, not their sensitivity, that is relevant.
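The first-order formulas can be checked numerically; in the sketch below (matrix and perturbation are illustrative choices) the error in the prediction is of second order in the perturbation:

```python
import numpy as np

def lu_nopivot(A):
    """Gaussian elimination without pivoting."""
    A = A.astype(float).copy()
    n = A.shape[0]
    for k in range(n - 1):
        A[k+1:, k] /= A[k, k]
        A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k, k+1:])
    return np.tril(A, -1) + np.eye(n), np.triu(A)

A = np.array([[5.0, 1.0, 2.0],
              [1.0, 6.0, 1.0],
              [2.0, 1.0, 7.0]])
L, U = lu_nopivot(A)

eps = 1e-6
dA = eps * np.array([[ 1.0, -2.0,  1.0],
                     [ 2.0,  1.0, -1.0],
                     [-1.0,  1.0,  2.0]])
Lp, Up = lu_nopivot(A + dA)                     # exact factors of A + dA

X = np.linalg.solve(L, dA) @ np.linalg.inv(U)   # L^{-1} dA U^{-1}
dL_pred = L @ np.tril(X, -1)                    # strictly lower part
dU_pred = np.triu(X) @ U                        # upper part (incl. diagonal)

# Prediction error should be O(eps^2), far smaller than the O(eps) change.
err = max(np.max(np.abs(Lp - L - dL_pred)),
          np.max(np.abs(Up - U - dU_pred)))
```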

## Pivoting and Numerical Stability

Since not all matrices have an LU factorization, we need the option of applying row and column interchanges to ensure that each pivot is nonzero, unless the relevant column is already zero below the diagonal.

In finite precision computation it is important that the computed LU factors $\widehat{L}$ and $\widehat{U}$ are numerically stable in the sense that $\widehat{L}\widehat{U} = A + \Delta A$ with $\|\Delta A\| \le c_n u \|A\|$, where $c_n$ is a constant and $u$ is the unit roundoff. For certain matrix properties, such as diagonal dominance by rows or columns, numerical stability is guaranteed, but in general it is necessary to incorporate row interchanges, or row and column interchanges, in order to obtain a stable factorization.
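A backward error check with partial pivoting can be sketched as follows (random test matrix of my choosing; the acceptance threshold is a loose illustrative bound, not a theorem):

```python
import numpy as np

def lu_partial_pivot(A):
    """LU with partial pivoting: P A = L U, with |l_ij| <= 1."""
    A = A.astype(float).copy()
    n = A.shape[0]
    piv = np.arange(n)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))   # choose largest pivot in column
        A[[k, p]] = A[[p, k]]                 # row interchange
        piv[[k, p]] = piv[[p, k]]
        A[k+1:, k] /= A[k, k]
        A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k, k+1:])
    P = np.eye(n)[piv]
    return P, np.tril(A, -1) + np.eye(n), np.triu(A)

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
P, L, U = lu_partial_pivot(A)

u = np.finfo(float).eps / 2                   # unit roundoff
backward_err = np.linalg.norm(L @ U - P @ A, 1) / np.linalg.norm(A, 1)
# backward_err is a modest multiple of u for this well-behaved example
```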

See What Is the Growth Factor for Gaussian Elimination? for details of pivoting strategies and see Randsvd Matrices with Large Growth Factors for some recent research on growth factors.

## References

This is a minimal set of references, which contain further useful references within.

- Jack J. Dongarra, Iain S. Duff, Danny C. Sorensen, and Henk A. Van der Vorst, Numerical Linear Algebra for High-Performance Computers, Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 1998. (For different implementations of LU factorization.)
- Nicholas J. Higham, Accuracy and Stability of Numerical Algorithms, second edition, Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 2002.

## Related Blog Posts

- Randsvd Matrices with Large Growth Factors (2020)
- What Is a Block Matrix? (2020)
- What is a Diagonally Dominant Matrix? (2021)
- What Is the Growth Factor for Gaussian Elimination? (2020)

This article is part of the “What Is” series, available from https://nhigham.com/category/what-is and in PDF form from the GitHub repository https://github.com/higham/what-is.