# The Big Six Matrix Factorizations

Six matrix factorizations dominate numerical linear algebra and matrix analysis: for most purposes one of them is sufficient for the task at hand. We summarize them here.

For each factorization we give the cost in flops for the standard method of computation, stating only the highest order terms. We also state the main uses of each factorization.

For full generality we state factorizations for complex matrices. Everything translates to the real case with “Hermitian” and “unitary” replaced by “symmetric” and “orthogonal”, respectively.

The terms “factorization” and “decomposition” are synonymous and it is a matter of convention which is used. Our list comprises three factorizations and three decompositions.

Recall that an upper triangular matrix is a matrix of the form

$\notag R = \begin{bmatrix} r_{11} & r_{12} & \dots & r_{1n}\\ & r_{22} & \dots & r_{2n}\\ & & \ddots& \vdots\\ & & & r_{nn} \end{bmatrix},$

and a lower triangular matrix is the transpose of an upper triangular one.

## Cholesky Factorization

Every Hermitian positive definite matrix $A\in\mathbb{C}^{n\times n}$ has a unique Cholesky factorization $A = R^*R$, where $R\in\mathbb{C}^{n\times n}$ is upper triangular with positive diagonal elements.

Cost: $n^3/3$ flops.

Use: solving positive definite linear systems.
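As a concrete illustration (assuming NumPy and SciPy are available; neither is part of the original text), here is a sketch of computing a Cholesky factorization and using it to solve a positive definite system by two triangular solves. Note that `np.linalg.cholesky` returns the lower triangular factor $L$ with $A = LL^*$, so $R = L^*$ in the notation above.

```python
import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B @ B.T + 4 * np.eye(4)        # symmetric positive definite by construction

L = np.linalg.cholesky(A)          # lower triangular with positive diagonal
R = L.T                            # upper triangular Cholesky factor: A = R^T R

# Solve A x = b via R^T (R x) = b: one forward and one back substitution.
b = rng.standard_normal(4)
y = solve_triangular(L, b, lower=True)
x = solve_triangular(R, y, lower=False)
```

Solving with the triangular factors costs only $O(n^2)$ flops once the factorization is computed.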

## LU Factorization

Any matrix $A\in\mathbb{C}^{n\times n}$ has an LU factorization $PA = LU$, where $P$ is a permutation matrix, $L$ is unit lower triangular (lower triangular with 1s on the diagonal), and $U$ is upper triangular. We can take $P = I$ if the leading principal submatrices $A(1\colon k, 1\colon k)$, $k = 1\colon n-1$, of $A$ are nonsingular, but to guarantee that the factorization is numerically stable we need $A$ to have particular properties, such as diagonal dominance. In practical computation we normally choose $P$ using the partial pivoting strategy, which almost always ensures numerical stability.

Cost: $2n^3/3$ flops.

Use: solving general linear systems.
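A sketch using SciPy (an assumption, not part of the original text): `scipy.linalg.lu` returns the factors in the form $A = PLU$ (equivalently $P^TA = LU$), and `lu_factor`/`lu_solve` store the partially pivoted factorization compactly for solving linear systems.

```python
import numpy as np
from scipy.linalg import lu, lu_factor, lu_solve

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))

# Partial pivoting: A = P @ L @ U with L unit lower triangular, U upper triangular.
P, L, U = lu(A)

# To solve A x = b, reuse the factorization rather than refactorizing.
b = rng.standard_normal(4)
x = lu_solve(lu_factor(A), b)
```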

## QR Factorization

Any matrix $A\in\mathbb{C}^{m\times n}$ with $m\ge n$ has a QR factorization $A = QR$, where $Q\in \mathbb{C}^{m\times m}$ is unitary and $R$ is upper trapezoidal, that is, $R = \left[\begin{smallmatrix} R_1 \\ 0\end{smallmatrix}\right]$, where $R_1\in\mathbb{C}^{n\times n}$ is upper triangular.

Partitioning $Q = [Q_1~Q_2]$, where $Q_1\in\mathbb{C}^{m\times n}$ has orthonormal columns, gives $A = Q_1R_1$, which is the reduced, economy size, or thin QR factorization.

Cost: $2(n^2m-n^3/3)$ flops for Householder QR factorization. The explicit formation of $Q$ (which is not usually necessary) requires a further $4(m^2n-mn^2+n^3/3)$ flops.

Use: solving least squares problems, computing an orthonormal basis for the range space of $A$, orthogonalization.
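To illustrate the full versus thin forms and the least squares use (assuming NumPy, which is not part of the original text), the sketch below computes both factorizations and solves $\min_x\|Ax-b\|_2$ via $R_1 x = Q_1^*b$.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 3))

# Full QR: Q is 6x6 unitary, R is 6x3 upper trapezoidal.
Q, R = np.linalg.qr(A, mode="complete")

# Reduced (thin) QR: Q1 is 6x3 with orthonormal columns, R1 is 3x3 upper triangular.
Q1, R1 = np.linalg.qr(A)           # mode="reduced" is the default

# Least squares solution of min ||Ax - b||_2 for full-rank A.
b = rng.standard_normal(6)
x = np.linalg.solve(R1, Q1.T @ b)
```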

## Schur Decomposition

Any matrix $A\in\mathbb{C}^{n\times n}$ has a Schur decomposition $A = QTQ^*$, where $Q$ is unitary and $T$ is upper triangular. The eigenvalues of $A$ appear on the diagonal of $T$. For each $k$, the leading $k$ columns of $Q$ span an invariant subspace of $A$.

For real matrices, a special form of this decomposition exists in which all the factors are real. An upper quasi-triangular matrix $R$ is a block upper triangular matrix whose diagonal blocks $R_{ii}$ are either $1\times1$ or $2\times2$. Any $A\in\mathbb{R}^{n\times n}$ has a real Schur decomposition $A = Q R Q^T$, where $Q$ is real orthogonal and $R$ is real upper quasi-triangular with any $2\times2$ diagonal blocks having complex conjugate eigenvalues.

Cost: $25n^3$ flops for $Q$ and $T$ (or $R$) by the QR algorithm; $10n^3$ flops for $T$ (or $R$) only.

Use: computing eigenvalues and eigenvectors, computing invariant subspaces, evaluating matrix functions.
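A sketch with SciPy (an assumption; not part of the original text): `scipy.linalg.schur` computes either the real Schur form, with $1\times1$ and $2\times2$ diagonal blocks, or the complex Schur form, which is genuinely triangular.

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))

# Real Schur form: A = Q @ R @ Q.T with R upper quasi-triangular.
R, Q = schur(A, output="real")

# Complex Schur form: A = Z @ T @ Z^* with T upper triangular,
# eigenvalues of A on the diagonal of T.
T, Z = schur(A, output="complex")
```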

## Spectral Decomposition

Every Hermitian matrix $A\in\mathbb{C}^{n\times n}$ has a spectral decomposition $A = Q\Lambda Q^*$, where $Q$ is unitary and $\Lambda = \mathrm{diag}(\lambda_i)$. The $\lambda_i$ are the eigenvalues of $A$, and they are real. The spectral decomposition is a special case of the Schur decomposition but is of interest in its own right.

Cost: $9n^3$ flops for $Q$ and $\Lambda$ by the QR algorithm, or $4n^3\!/3$ flops for $\Lambda$ only.

Use: any problem involving eigenvalues of Hermitian matrices.
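A minimal sketch, assuming NumPy (not part of the original text): `np.linalg.eigh` is the standard routine for the Hermitian eigenproblem, returning the eigenvalues in ascending order and orthonormal eigenvectors.

```python
import numpy as np

rng = np.random.default_rng(4)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2                  # real symmetric (Hermitian in the real case)

# Spectral decomposition A = Q @ diag(lam) @ Q^T, with real eigenvalues lam.
lam, Q = np.linalg.eigh(A)
```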

## Singular Value Decomposition

Any matrix $A\in\mathbb{C}^{m\times n}$ has a singular value decomposition (SVD)

$\notag A = U\Sigma V^*, \quad \Sigma = \mathrm{diag}(\sigma_1,\sigma_2,\dots,\sigma_p) \in \mathbb{R}^{m\times n}, \quad p = \min(m,n),$

where $U\in\mathbb{C}^{m\times m}$ and $V\in\mathbb{C}^{n\times n}$ are unitary and $\sigma_1\ge\sigma_2\ge\cdots\ge\sigma_p\ge0$. The $\sigma_i$ are the singular values of $A$, and they are the nonnegative square roots of the $p$ largest eigenvalues of $A^*A$. The columns of $U$ and $V$ are the left and right singular vectors of $A$, respectively. The rank of $A$ is equal to the number of nonzero singular values. If $A$ is real, $U$ and $V$ can be taken to be real. The essential SVD information is contained in the compact or economy size SVD $A = U\Sigma V^*$, where $U\in\mathbb{C}^{m\times r}$, $\Sigma = \mathrm{diag}(\sigma_1,\dots,\sigma_r)$, $V\in\mathbb{C}^{n\times r}$, and $r = \mathrm{rank}(A)$.

Cost: $14mn^2+8n^3$ flops for $U(:,1\colon n)$, $\Sigma$, and $V$ by the Golub–Reinsch algorithm, or $6mn^2+20n^3$ flops with a preliminary QR factorization.

Use: determining matrix rank, solving rank-deficient least squares problems, computing all kinds of subspace information.
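The rank-determination use can be sketched as follows (assuming NumPy, which is not part of the original text): the numerical rank is taken as the number of singular values above a tolerance proportional to $\sigma_1$ and the unit roundoff.

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((6, 3))
A[:, 2] = A[:, 0] + A[:, 1]        # force rank 2 by construction

U, s, Vh = np.linalg.svd(A)        # full SVD: U is 6x6, Vh is 3x3
Ue, se, Vhe = np.linalg.svd(A, full_matrices=False)  # economy size SVD

# Numerical rank: count singular values above a tolerance.
tol = max(A.shape) * np.finfo(A.dtype).eps * s[0]
rank = int(np.sum(s > tol))
```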

## Discussion

Pivoting can be incorporated into both Cholesky factorization and QR factorization, giving $\Pi^T A \Pi = R^*R$ (complete pivoting) and $A\Pi = QR$ (column pivoting), respectively, where $\Pi$ is a permutation matrix. These pivoting strategies are useful for problems that are (nearly) rank deficient as they force $R$ to have a zero (or small) $(2,2)$ block.
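Column-pivoted QR can be sketched with SciPy (an assumption; not part of the original text): for a rank-deficient matrix the pivoting pushes the small elements to the trailing diagonal entries of $R$.

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(6)
A = rng.standard_normal((6, 3))
A[:, 2] = A[:, 0] - A[:, 1]        # rank-deficient: rank 2 by construction

# Column pivoting: A[:, piv] = Q @ R, with the diagonal of R
# nonincreasing in magnitude, exposing the rank deficiency.
Q, R, piv = qr(A, pivoting=True)
```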

The big six factorizations can all be computed by numerically stable algorithms. Another important factorization is that provided by the Jordan canonical form, but while it is a useful theoretical tool it cannot in general be computed in a numerically stable way.

For further details of these factorizations, see Stewart (2000), The Decompositional Approach to Matrix Computation, which treats precisely these six factorizations and explains the benefits of matrix factorizations in numerical linear algebra.

## Comments

1. Pieter Ghysels says: How about LDL*, where L is lower triangular and D is block diagonal with 1×1 or 2×2 blocks? Useful for symmetric indefinite problems. The matrix inertia can be computed from D.