Six matrix factorizations dominate in numerical linear algebra and matrix analysis: for most purposes one of them is sufficient for the task at hand. We summarize them here.

For each factorization we give the cost in flops for the standard method of computation, stating only the highest order terms. We also state the main uses of each factorization.

For full generality we state factorizations for complex matrices. Everything translates to the real case with “Hermitian” and “unitary” replaced by “symmetric” and “orthogonal”, respectively.

The terms “factorization” and “decomposition” are synonymous and it is a matter of convention which is used. Our list comprises three factorizations and three decompositions.

Recall that an upper triangular matrix is a matrix of the form

$$R = \begin{bmatrix} r_{11} & r_{12} & \dots & r_{1n}\\ & r_{22} & \dots & r_{2n}\\ & & \ddots & \vdots\\ & & & r_{nn} \end{bmatrix}$$

and a lower triangular matrix is the transpose of an upper triangular one.

## Cholesky Factorization

Every Hermitian positive definite matrix $A \in \mathbb{C}^{n\times n}$ has a unique *Cholesky factorization* $A = R^*R$, where $R$ is upper triangular with positive diagonal elements.

*Cost*: $n^3/3$ flops.

*Use*: solving positive definite linear systems.
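As a quick illustration (not part of the discussion above), here is how the Cholesky factorization can be computed and used to solve a positive definite system in Python; note that NumPy's `cholesky` returns the lower triangular factor $L = R^*$:

```python
import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B @ B.T + 4 * np.eye(4)        # symmetric positive definite by construction

L = np.linalg.cholesky(A)          # NumPy returns the lower triangular factor
R = L.T                            # so A = R^T R with R upper triangular
assert np.allclose(A, R.T @ R)

# Solve Ax = b via the two triangular systems R^T y = b, then R x = y.
b = rng.standard_normal(4)
y = solve_triangular(R.T, b, lower=True)
x = solve_triangular(R, y)
```

Solving via the two triangular systems costs only $O(n^2)$ flops once the factorization is available.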

## LU Factorization

Any matrix $A \in \mathbb{C}^{n\times n}$ has an *LU factorization* $PA = LU$, where $P$ is a permutation matrix, $L$ is unit lower triangular (lower triangular with 1s on the diagonal), and $U$ is upper triangular. We can take $P = I$ if the leading principal submatrices $A(1\colon k, 1\colon k)$, $k = 1\colon n-1$, of $A$ are nonsingular, but to guarantee that the factorization is numerically stable we need $A$ to have particular properties, such as diagonal dominance. In practical computation we normally choose $P$ using the partial pivoting strategy, which almost always ensures that the factorization is numerically stable.

*Cost*: $2n^3/3$ flops.

*Use*: solving general linear systems.
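For illustration, SciPy exposes the factorization with partial pivoting directly (these function names are SciPy's, not part of the text above); `lu_factor`/`lu_solve` let one reuse a single factorization for many right-hand sides:

```python
import numpy as np
from scipy.linalg import lu, lu_factor, lu_solve

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))

# SciPy returns P, L, U such that A = P @ L @ U.
P, L, U = lu(A)
assert np.allclose(A, P @ L @ U)
assert np.allclose(np.diag(L), 1.0)    # L is unit lower triangular

# In practice one keeps the factorization and reuses it per right-hand side.
lu_piv = lu_factor(A)
b = rng.standard_normal(5)
x = lu_solve(lu_piv, b)
```

Each extra solve costs $O(n^2)$ flops, versus $2n^3/3$ for refactorizing.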

## QR Factorization

Any matrix $A \in \mathbb{C}^{m\times n}$ with $m \ge n$ has a *QR factorization* $A = QR$, where $Q \in \mathbb{C}^{m\times m}$ is unitary and $R$ is upper trapezoidal, that is, $R = \begin{bmatrix} R_1 \\ 0 \end{bmatrix}$, where $R_1 \in \mathbb{C}^{n\times n}$ is upper triangular.

Partitioning $Q = [Q_1~Q_2]$, where $Q_1 \in \mathbb{C}^{m\times n}$ has orthonormal columns, gives $A = Q_1R_1$, which is the *reduced*, *economy size*, or *thin* QR factorization.

*Cost*: $2n^2(m - n/3)$ flops for Householder QR factorization. The explicit formation of $Q$ (which is not usually necessary) requires a further $4(m^2n - mn^2 + n^3/3)$ flops.

*Use*: solving least squares problems, computing an orthonormal basis for the range space of $A$, orthogonalization.
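A short sketch of the thin factorization in NumPy and its use for least squares (function names are NumPy's): the full-rank least squares problem $\min_x \|Ax - b\|_2$ is solved via $R_1 x = Q_1^* b$.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 3))            # m = 6, n = 3

# Thin factorization: Q1 is 6x3 with orthonormal columns, R1 is 3x3 upper triangular.
Q1, R1 = np.linalg.qr(A, mode="reduced")
assert np.allclose(A, Q1 @ R1)
assert np.allclose(Q1.T @ Q1, np.eye(3))   # orthonormal columns

# Least squares min ||Ax - b||_2 via the triangular system R1 x = Q1^T b.
b = rng.standard_normal(6)
x = np.linalg.solve(R1, Q1.T @ b)
```

For a full-rank $A$ this agrees with the solution computed by `np.linalg.lstsq`.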

## Schur Decomposition

Any matrix $A \in \mathbb{C}^{n\times n}$ has a *Schur decomposition* $A = QTQ^*$, where $Q$ is unitary and $T$ is upper triangular. The eigenvalues of $A$ appear on the diagonal of $T$. For each $k$, the leading $k$ columns of $Q$ span an invariant subspace of $A$.

For real matrices, a special form of this decomposition exists in which all the factors are real. An *upper quasi-triangular matrix* is a block upper triangular matrix whose diagonal blocks are either $1\times 1$ or $2\times 2$. Any $A \in \mathbb{R}^{n\times n}$ has a *real Schur decomposition* $A = \widetilde{Q}\widetilde{T}\widetilde{Q}^T$, where $\widetilde{Q}$ is real orthogonal and $\widetilde{T}$ is real upper quasi-triangular with any $2\times 2$ diagonal blocks having complex conjugate eigenvalues.

*Cost*: $25n^3$ flops for $Q$ and $T$ (or $\widetilde{Q}$ and $\widetilde{T}$) by the QR algorithm; $10n^3$ flops for $T$ (or $\widetilde{T}$) only.

*Use*: computing eigenvalues and eigenvectors, computing invariant subspaces, evaluating matrix functions.
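Both forms can be illustrated with SciPy's `schur` (the `output` keyword, which selects the complex or real form, is SciPy's, not part of the discussion above):

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 5))

# Complex Schur form: T upper triangular, eigenvalues of A on its diagonal.
T, Q = schur(A, output="complex")
assert np.allclose(A, Q @ T @ Q.conj().T)
assert np.allclose(np.tril(T, -1), 0)      # strictly lower part is zero

# Real Schur form: Tr is real quasi-triangular, with 2x2 diagonal blocks
# corresponding to complex conjugate eigenvalue pairs.
Tr, Qr = schur(A, output="real")
assert np.allclose(A, Qr @ Tr @ Qr.T)
```

SciPy returns the pair $(T, Q)$ in that order, so the reconstruction is $A = QTQ^*$.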

## Spectral Decomposition

Every Hermitian matrix $A \in \mathbb{C}^{n\times n}$ has a *spectral decomposition* $A = QDQ^*$, where $Q$ is unitary and $D = \mathrm{diag}(\lambda_i)$. The $\lambda_i$ are the eigenvalues of $A$, and they are real. The spectral decomposition is a special case of the Schur decomposition but is of interest in its own right.

*Cost*: $9n^3$ flops for $Q$ and $D$ by the QR algorithm, or $4n^3/3$ flops for $D$ only.

*Use*: any problem involving eigenvalues of Hermitian matrices.
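As an illustration in the real (symmetric) case, NumPy's `eigh` exploits symmetry and returns the real eigenvalues in ascending order together with an orthogonal $Q$:

```python
import numpy as np

rng = np.random.default_rng(4)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2                  # real symmetric ("Hermitian" in the real case)

# eigh is the symmetric/Hermitian eigensolver: A = Q diag(lam) Q^T.
lam, Q = np.linalg.eigh(A)
assert np.allclose(A, Q @ np.diag(lam) @ Q.T)
assert np.allclose(Q.T @ Q, np.eye(4))     # Q is orthogonal
assert np.all(np.diff(lam) >= 0)           # eigenvalues returned ascending
```

Using `eigh` rather than the general `eig` guarantees real eigenvalues and an orthogonal eigenvector matrix.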

## Singular Value Decomposition

Any matrix $A \in \mathbb{C}^{m\times n}$ has a *singular value decomposition* (SVD)

$$A = U\Sigma V^*, \quad \Sigma = \mathrm{diag}(\sigma_1, \sigma_2, \dots, \sigma_p) \in \mathbb{R}^{m\times n}, \quad p = \min(m,n),$$

where $U \in \mathbb{C}^{m\times m}$ and $V \in \mathbb{C}^{n\times n}$ are unitary and $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_p \ge 0$. The $\sigma_i$ are the *singular values* of $A$, and they are the nonnegative square roots of the $p$ largest eigenvalues of $A^*A$. The columns of $U$ and $V$ are the *left and right singular vectors* of $A$, respectively. The rank of $A$ is equal to the number of nonzero singular values. If $A$ is real, $U$ and $V$ can be taken to be real. The essential SVD information is contained in the *compact* or *economy size* SVD $A = U_1\Sigma_1V_1^*$, where $U_1 \in \mathbb{C}^{m\times r}$, $\Sigma_1 = \mathrm{diag}(\sigma_1, \dots, \sigma_r)$, $V_1 \in \mathbb{C}^{n\times r}$, and $r = \mathrm{rank}(A)$.

*Cost*: $14mn^2 + 8n^3$ flops for $U$, $\Sigma$, and $V$ ($m \ge n$) by the Golub–Reinsch algorithm, or $6mn^2 + 20n^3$ flops with a preliminary QR factorization.

*Use*: determining matrix rank, solving rank-deficient least squares problems, computing all kinds of subspace information.
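A sketch of rank determination via the SVD in NumPy (the tolerance below is a crude choice for this example; NumPy's `matrix_rank` applies a more principled default):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((6, 4))
A[:, 3] = A[:, 0] + A[:, 1]        # force rank deficiency: rank(A) = 3

# Full SVD: U is 6x6, Vh = V^T is 4x4, s holds the p = min(m, n) singular values.
U, s, Vh = np.linalg.svd(A)
Sigma = np.zeros((6, 4))
np.fill_diagonal(Sigma, s)
assert np.allclose(A, U @ Sigma @ Vh)

# Numerical rank: count singular values above a tolerance.
tol = 1e-10 * s[0]                 # crude tolerance, for illustration only
rank = int(np.sum(s > tol))
```

In floating-point arithmetic the "zero" singular values come out as tiny nonzero numbers, which is why a tolerance is needed.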

## Discussion

Pivoting can be incorporated into both Cholesky factorization and QR factorization, giving $\Pi^TA\Pi = R^*R$ (complete pivoting) and $A\Pi = QR$ (column pivoting), respectively, where $\Pi$ is a permutation matrix. These pivoting strategies are useful for problems that are (nearly) rank deficient as they force $R$ to have a zero (or small) $(2,2)$ block.
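Column-pivoted QR is available in SciPy (the `pivoting` keyword is SciPy's); with an exactly dependent column, the rank deficiency shows up as a negligible trailing diagonal entry of $R$:

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(6)
A = rng.standard_normal((6, 4))
A[:, 3] = A[:, 0]                  # rank deficient: only 3 independent columns

# Column-pivoted QR: A[:, piv] = Q @ R, with |diag(R)| nonincreasing.
Q, R, piv = qr(A, pivoting=True)
assert np.allclose(A[:, piv], Q @ R)

# The rank deficiency appears as a (near-)zero trailing diagonal entry of R.
d = np.abs(np.diag(R))
assert d[3] < 1e-10 * d[0]
```

The permutation moves the "most independent" columns to the front, so the small entries collect in the trailing block of $R$.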

The big six factorizations can all be computed by numerically stable algorithms. Another important factorization is that provided by the Jordan canonical form, but while it is a useful theoretical tool it cannot in general be computed in a numerically stable way.

For further details of these factorizations see the articles below.

These factorizations are precisely those discussed by Stewart (2000) in his article *The Decompositional Approach to Matrix Computation*, which explains the benefits of matrix factorizations in numerical linear algebra.

How about $LDL^*$, where $L$ is lower triangular and $D$ is block diagonal with $1\times 1$ or $2\times 2$ blocks? Useful for symmetric indefinite problems. The matrix inertia can be computed from $D$.

Yes, that’s also an important factorization. It’s mentioned in the article https://nhigham.com/2020/12/22/what-is-a-modified-cholesky-factorization/