A singular value decomposition (SVD) of a matrix $A \in \mathbb{R}^{m\times n}$ is a factorization

$$A = U\Sigma V^T,$$

where $U \in \mathbb{R}^{m\times m}$ and $V \in \mathbb{R}^{n\times n}$ are orthogonal, $\Sigma = \mathrm{diag}(\sigma_1, \sigma_2, \dots, \sigma_p) \in \mathbb{R}^{m\times n}$, where $p = \min(m,n)$, and $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_p \ge 0$.

Partition $U = [u_1, u_2, \dots, u_m]$ and $V = [v_1, v_2, \dots, v_n]$. The $\sigma_i$ are called the *singular values* of $A$ and the $u_i$ and $v_i$ are the left and right *singular vectors*. We have $Av_i = \sigma_i u_i$, $i = 1\colon p$. The matrix $\Sigma$ is unique but $U$ and $V$ are not. The form of $\Sigma$ is

$$\Sigma = \begin{bmatrix} \mathrm{diag}(\sigma_1,\dots,\sigma_n) \\ 0 \end{bmatrix} \ (m \ge n), \qquad
\Sigma = \begin{bmatrix} \mathrm{diag}(\sigma_1,\dots,\sigma_m) & 0 \end{bmatrix} \ (m \le n).$$
Here is an example, in which the entries of $A$ have been chosen to give simple forms for the elements of the factors:

$$A = \begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix}
= \underbrace{\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}}_{U}
\underbrace{\begin{bmatrix} \sqrt{2} & 0 \\ 0 & 0 \end{bmatrix}}_{\Sigma}
\underbrace{\frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}}_{V^T}.$$
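Such a factorization can be computed and checked numerically; here is a minimal NumPy sketch (the matrix is an illustrative choice, not part of the article):

```python
import numpy as np

# A small matrix whose SVD has simple factors: sigma_1 = sqrt(2), sigma_2 = 0.
A = np.array([[1.0, 1.0],
              [0.0, 0.0]])

U, s, Vt = np.linalg.svd(A)  # s holds the singular values in decreasing order

# Reconstruct A from the computed factors: A = U * diag(s) * V^T.
A_rebuilt = U @ np.diag(s) @ Vt

print(s)          # singular values
print(A_rebuilt)  # should match A
```

The factors returned by `numpy.linalg.svd` agree with any hand-computed SVD only up to signs of the singular vectors, since $U$ and $V$ are not unique.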
The power of the SVD is that it reveals a great deal of useful information about norms, rank, and subspaces of a matrix and it enables many problems to be reduced to a trivial form.

Since $U$ and $V$ are nonsingular, $\mathrm{rank}(A) = \mathrm{rank}(\Sigma) = r$, where $r$ is the number of nonzero singular values. Since the $2$-norm and Frobenius norm are invariant under orthogonal transformations, $\|A\| = \|U\Sigma V^T\| = \|\Sigma\|$ for both norms, giving

$$\|A\|_2 = \sigma_1, \qquad \|A\|_F = \biggl( \sum_{i=1}^{p} \sigma_i^2 \biggr)^{1/2},$$
and hence $\|A\|_2 \le \|A\|_F \le r^{1/2} \|A\|_2$. The range space and null space of $A$ are given in terms of the columns of $U$ and $V$ by

$$\mathrm{null}(A) = \mathrm{span}\{ v_{r+1}, \dots, v_n \}, \qquad \mathrm{range}(A) = \mathrm{span}\{ u_1, \dots, u_r \}.$$
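These norm and rank identities are easy to confirm numerically (a NumPy sketch on a random matrix, not from the original article):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))              # full rank with probability 1
s = np.linalg.svd(A, compute_uv=False)       # singular values, decreasing

r = int(np.sum(s > 1e-12))                   # numerical rank = number of nonzero sigma_i

norm2 = np.linalg.norm(A, 2)                 # ||A||_2 = sigma_1
normF = np.linalg.norm(A, 'fro')             # ||A||_F = sqrt(sum of sigma_i^2)

print(r, norm2, normF)
```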
We can write the SVD as

$$A = \sum_{i=1}^{r} \sigma_i u_i v_i^T,$$
which expresses $A$ as a sum of $r$ rank-$1$ matrices, the $i$th of which has $2$-norm $\sigma_i$. The famous Eckart–Young theorem (1936) says that

$$\min_{\mathrm{rank}(B) = k} \|A - B\|_q =
\begin{cases}
\sigma_{k+1}, & q = 2, \\
\bigl( \sum_{i=k+1}^{r} \sigma_i^2 \bigr)^{1/2}, & q = F,
\end{cases}$$
and that the minimum is attained at

$$A_k = \sum_{i=1}^{k} \sigma_i u_i v_i^T.$$
In other words, truncating the sum after $k$ terms gives the best rank-$k$ approximation to $A$ in both the $2$-norm and the Frobenius norm. In particular, this result implies that when $A$ has full rank the distance from $A$ to the nearest rank-deficient matrix is $\sigma_p$.
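The truncated sum is straightforward to form from the computed factors; a NumPy sketch (my own illustration, with a random test matrix):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4))
U, s, Vt = np.linalg.svd(A)

k = 2
# Best rank-k approximation: keep the k largest singular triplets.
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Eckart-Young: the 2-norm error equals the (k+1)st singular value.
err = np.linalg.norm(A - A_k, 2)
print(err, s[k])
```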

## Relations with Symmetric Eigenvalue Problem

The SVD is not directly related to the eigenvalues and eigenvectors of $A$. However, for $m \ge n$, $A = U\Sigma V^T$ implies

$$A^TA = V\,\mathrm{diag}(\sigma_1^2, \dots, \sigma_n^2)\,V^T, \qquad
AA^T = U\,\mathrm{diag}(\sigma_1^2, \dots, \sigma_n^2, \underbrace{0, \dots, 0}_{m-n})\,U^T,$$
so the singular values of $A$ are the square roots of the eigenvalues of the symmetric positive semidefinite matrices $A^TA$ and $AA^T$ (modulo $m - n$ zeros in the latter case), and the singular vectors are eigenvectors. Moreover, the eigenvalues of the matrix

$$\begin{bmatrix} 0 & A \\ A^T & 0 \end{bmatrix}$$
are plus and minus the singular values of $A$, together with $|m - n|$ additional zeros if $m \ne n$, and the eigenvectors of this matrix and the singular vectors of $A$ are also related.

Consequently, by applying results or algorithms for the eigensystem of a symmetric matrix to $A^TA$, $AA^T$, or $\bigl[\begin{smallmatrix} 0 & A \\ A^T & 0 \end{smallmatrix}\bigr]$ one obtains results or algorithms for the singular value decomposition of $A$.
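The eigenvalue relation for the block matrix can be checked directly (a NumPy sketch of my own, using a random rectangular matrix):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 5, 3
A = rng.standard_normal((m, n))
s = np.linalg.svd(A, compute_uv=False)

# Symmetric block matrix [[0, A], [A^T, 0]] of size (m + n).
J = np.block([[np.zeros((m, m)), A],
              [A.T, np.zeros((n, n))]])
eigs = np.sort(np.linalg.eigvalsh(J))

# Its eigenvalues are +/- the singular values of A, plus |m - n| zeros.
expected = np.sort(np.concatenate([s, -s, np.zeros(abs(m - n))]))
print(eigs)
```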

## Connections with Other Problems

The pseudoinverse $A^+ \in \mathbb{R}^{n\times m}$ of a matrix $A \in \mathbb{R}^{m\times n}$ can be expressed in terms of the SVD as

$$A^+ = V\,\mathrm{diag}(\sigma_1^{-1}, \dots, \sigma_r^{-1}, 0, \dots, 0)\,U^T.$$
The least squares problem $\min_x \|b - Ax\|_2$, where $A \in \mathbb{R}^{m\times n}$ with $m \ge n$, is solved by $x = A^+ b$, and when $A$ is rank-deficient this is the solution of minimum $2$-norm. For $m < n$, $Ax = b$ is an underdetermined system and $x = A^+ b$ gives the minimum $2$-norm solution.
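Both cases can be exercised with NumPy's SVD-based `pinv` and `lstsq` routines (an illustrative sketch, not from the article):

```python
import numpy as np

rng = np.random.default_rng(3)

# Overdetermined least squares (m > n): x = A^+ b solves min ||b - Ax||_2.
A = rng.standard_normal((6, 3))
b = rng.standard_normal(6)
x = np.linalg.pinv(A) @ b                     # pseudoinverse solution
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]   # reference LS solution

# Underdetermined system (m < n): A^+ b is the minimum 2-norm solution.
B = rng.standard_normal((2, 4))
c = rng.standard_normal(2)
y = np.linalg.pinv(B) @ c

print(x, y)
```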

We can write $A = U\Sigma V^T = (UV^T)(V\Sigma V^T) \equiv PH$, where (for square $A$) $P = UV^T$ is orthogonal and $H = V\Sigma V^T$ is symmetric positive semidefinite. This decomposition is the polar decomposition; $H$ is unique, and $P$ is unique when $A$ is nonsingular. This connection between the SVD and the polar decomposition is useful both theoretically and computationally.
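Obtaining the polar factors from the SVD takes two lines (a NumPy sketch of my own, for a square random matrix):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4))

U, s, Vt = np.linalg.svd(A)
P = U @ Vt                      # orthogonal polar factor P = U V^T
H = Vt.T @ np.diag(s) @ Vt      # symmetric positive semidefinite factor H = V Sigma V^T

print(np.linalg.norm(A - P @ H))  # should be at roundoff level
```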

## Applications

The SVD is used in a very wide variety of applications—too many and varied to attempt to summarize here. We just mention two.

The SVD can be used to help identify to which letters vowels and consonants have been mapped in a substitution cipher (Moler and Morrison, 1983).

An inverse use of the SVD is to construct test matrices by forming a diagonal matrix of singular values from some distribution then pre- and post-multiplying by random orthogonal matrices. The result is matrices with known singular values and 2-norm condition number that are nevertheless random. Such “randsvd” matrices are widely used to test algorithms in numerical linear algebra.
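A minimal sketch of this construction in NumPy (the helper name is my own, not from any particular library): draw Haar-distributed orthogonal matrices via the QR factorization of Gaussian matrices, then form $U\Sigma V^T$ with prescribed singular values.

```python
import numpy as np

def random_orthogonal(n, rng):
    """Haar-distributed random orthogonal matrix via QR of a Gaussian matrix."""
    Q, R = np.linalg.qr(rng.standard_normal((n, n)))
    return Q * np.sign(np.diag(R))  # sign-fix columns to get the Haar distribution

rng = np.random.default_rng(5)
n = 4
sigma = np.array([10.0, 5.0, 1.0, 0.1])   # prescribed singular values

# "randsvd"-style test matrix: random orthogonal factors, known singular values.
A = random_orthogonal(n, rng) @ np.diag(sigma) @ random_orthogonal(n, rng).T

s = np.linalg.svd(A, compute_uv=False)
print(s)                   # recovers sigma
print(np.linalg.cond(A))   # 2-norm condition number sigma_1/sigma_n
```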

## History and Computation

The SVD was introduced independently by Beltrami in 1873 and Jordan in 1874. Golub popularized the SVD as an essential computational tool and developed the first reliable algorithms for computing it. The Golub–Reinsch algorithm, dating from the late 1960s and based on bidiagonalization and the QR algorithm, is the standard way to compute the SVD. Various alternatives are available; see the references.

## References

This is a minimal set of references, which contain further useful references within.

- Jack Dongarra, Mark Gates, Azzam Haidar, Jakub Kurzak, Piotr Luszczek, Stanimire Tomov, and Ichitaro Yamazaki, The Singular Value Decomposition: Anatomy of Optimizing an Algorithm for Extreme Scale, SIAM Rev. 60(4), 808–865, 2018.
- Gene Golub and Charles F. Van Loan, Matrix Computations, fourth edition, Johns Hopkins University Press, Baltimore, MD, USA, 2013.
- Roger Horn and Charles Johnson, Topics in Matrix Analysis, Cambridge University Press, 1991. Chapter 3.
- Cleve B. Moler and Donald Morrison, Singular Value Analysis of Cryptograms, Amer. Math. Monthly 90, 78–87, 1983.
- Yuji Nakatsukasa and Nicholas J. Higham, Stable and Efficient Spectral Divide and Conquer Algorithms for the Symmetric Eigenvalue Decomposition and the SVD, SIAM J. Sci. Comput. 35(3), A1325–A1349, 2013.

## Related Blog Posts

- Faster SVD via Polar Decomposition (2015)
- Professor SVD by Cleve Moler (2006)
- What Is a Random Orthogonal Matrix? (2020)
- What Is the Polar Decomposition? (2020)

This article is part of the “What Is” series, available from https://nhigham.com/category/what-is and in PDF form from the GitHub repository https://github.com/higham/what-is.
