The pseudoinverse is an extension of the concept of the inverse of a nonsingular square matrix to singular and rectangular matrices. It is one of many generalized inverses, but it is the one most useful in practice, as it has a number of special properties.
The pseudoinverse of a matrix $A \in \mathbb{C}^{m\times n}$ is an $n\times m$ matrix $X$ that satisfies the Moore–Penrose conditions

$$AXA = A, \quad XAX = X, \quad (AX)^* = AX, \quad (XA)^* = XA.$$

Here, the superscript $*$ denotes the conjugate transpose. It can be shown that there is a unique $X$ satisfying these equations. The pseudoinverse is denoted by $A^+$; some authors write $A^\dagger$.
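The four conditions are easy to check numerically. Here is a minimal MATLAB sketch, with a randomly chosen rectangular matrix of my own, confirming that `pinv` returns a matrix satisfying them to rounding error:

```matlab
% Check the four Moore-Penrose conditions for X = pinv(A).
A = randn(4,3); X = pinv(A);
norm(A*X*A - A)        % A*X*A = A
norm(X*A*X - X)        % X*A*X = X
norm(A*X - (A*X)')     % A*X is Hermitian (symmetric here, as A is real)
norm(X*A - (X*A)')     % X*A is Hermitian
```

All four norms should be of the order of the unit roundoff.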
The most important property of the pseudoinverse is that for any system of linear equations $Ax = b$ (overdetermined or underdetermined) $x = A^+b$ minimizes $\|Ax - b\|_2$ and has the minimum $2$-norm over all minimizers. In other words, the pseudoinverse provides the minimum $2$-norm least squares solution to $Ax = b$.
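As an illustration, with an underdetermined system of my own choosing, the following MATLAB sketch compares $A^+b$ with the basic solution returned by backslash: both have the same residual, but $A^+b$ has the smaller $2$-norm.

```matlab
% Minimum 2-norm least squares solution via the pseudoinverse.
A = [1 2 3; 4 5 6];    % 2-by-3: underdetermined, infinitely many solutions
b = [1; 1];
x = pinv(A)*b;         % minimum 2-norm solution
y = A\b;               % a basic solution (at most rank(A) nonzeros)
[norm(A*x - b), norm(A*y - b)]   % equal residuals
[norm(x), norm(y)]               % norm(x) <= norm(y)
```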
The pseudoinverse can be expressed in terms of the singular value decomposition (SVD). If $A = U\Sigma V^*$ is an SVD, where the $m\times m$ matrix $U$ and $n\times n$ matrix $V$ are unitary and $\Sigma = \mathrm{diag}(\sigma_1,\sigma_2,\dots,\sigma_p) \in \mathbb{R}^{m\times n}$ with $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_r > \sigma_{r+1} = \cdots = \sigma_p = 0$ (so that $\mathrm{rank}(A) = r$), where $p = \min(m,n)$, then

$$A^+ = V \Sigma^+ U^*, \qquad (1)$$

where the diagonal matrix $\Sigma^+ \in \mathbb{R}^{n\times m}$ is $\mathrm{diag}(\sigma_1^{-1},\dots,\sigma_r^{-1},0,\dots,0)$. This formula gives an easy way to prove many identities satisfied by the pseudoinverse. In MATLAB, the function pinv computes $A^+$ using this formula.
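Formula (1) translates directly into code. Here is a minimal MATLAB sketch; the example matrix and the tolerance for deciding which singular values count as zero are my own choices, the latter mimicking pinv's default.

```matlab
% Pseudoinverse from the SVD, formula (1).
A = [1 2; 3 4; 5 6];                    % 3-by-2 example
[U,S,V] = svd(A);
s = diag(S);
tol = max(size(A))*eps(norm(A));        % singular values below tol treated as zero
idx = s > tol;                          % the nonzero singular values
s(idx) = 1./s(idx);
s(~idx) = 0;
Splus = zeros(size(A'));                % n-by-m
Splus(1:numel(s),1:numel(s)) = diag(s);
X = V*Splus*U';                         % A^+ = V*Sigma^+*U^*
norm(X - pinv(A))                       % of order eps
```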
From the Moore–Penrose conditions or (1) it can be shown that $(A^+)^+ = A$ and $(A^*)^+ = (A^+)^*$.
For full rank $A$ we have the concise formulas

$$A^+ = (A^*A)^{-1}A^* \quad \text{if } \mathrm{rank}(A) = n, \qquad A^+ = A^*(AA^*)^{-1} \quad \text{if } \mathrm{rank}(A) = m. \qquad (2)$$

Consequently, $A^+A = I_n$ if $\mathrm{rank}(A) = n$ and $AA^+ = I_m$ if $\mathrm{rank}(A) = m$.
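A quick MATLAB check of the full column rank case of (2), with an example matrix of my own:

```matlab
% Full column rank case of formula (2): A^+ = (A'*A)\A'.
A = [1 0; 1 1; 1 2];          % 3-by-2, rank 2 = n
X = (A'*A)\A';
norm(X - pinv(A))             % of order eps
norm(X*A - eye(2))            % A^+ * A = I_n
```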
Some special cases are worth noting.
- The pseudoinverse of a zero $m\times n$ matrix is the zero $n\times m$ matrix.
- The pseudoinverse of a nonzero vector $x \in \mathbb{C}^{n}$ is $x^*/(x^*x)$.
- For $x \in \mathbb{C}^{m}$ and $y \in \mathbb{C}^{n}$, $(xy^*)^+ = (x^*x)^+(y^*y)^+\,yx^*$, and if $x$ and $y$ are nonzero then $(xy^*)^+ = yx^*/\bigl((x^*x)(y^*y)\bigr)$.
- The pseudoinverse of a Jordan block with eigenvalue $0$ is the transpose (see the check after this list):
  $$\begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}^+ = \begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}.$$
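Here is the promised check, a minimal MATLAB sketch of the vector and Jordan block cases; the particular vector and block size are my own choices.

```matlab
% Check two of the special cases numerically.
x = [3; 4];
norm(pinv(x) - x'/(x'*x))      % pseudoinverse of a nonzero vector
J = diag(ones(2,1),1);         % 3-by-3 Jordan block with eigenvalue 0
norm(pinv(J) - J')             % pseudoinverse of the block is its transpose
```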
The pseudoinverse differs from the usual inverse in various respects. For example, the pseudoinverse of a triangular matrix is not necessarily triangular (here we are using MATLAB with the Symbolic Math Toolbox):
    >> A = sym([1 1 1; 0 0 1; 0 0 1]), X = pinv(A)
    A =
    [1, 1, 1]
    [0, 0, 1]
    [0, 0, 1]
    X =
    [1/2, -1/4, -1/4]
    [1/2, -1/4, -1/4]
    [  0,  1/2,  1/2]
Furthermore, it is not generally true that $(AB)^+ = B^+A^+$ for $A \in \mathbb{C}^{m\times n}$ and $B \in \mathbb{C}^{n\times p}$. A sufficient condition for this equality to hold is that $\mathrm{rank}(A) = \mathrm{rank}(B) = n$.
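A tiny MATLAB counterexample (my own choice of $A$ and $B$, for which the rank condition fails) shows the reverse order law breaking down:

```matlab
% (A*B)^+ and B^+*A^+ can differ when rank(A) = rank(B) = n does not hold.
A = [1 0];  B = [1; 1];        % inner dimension n = 2, but rank(A) = rank(B) = 1
pinv(A*B)                      % equals 1
pinv(B)*pinv(A)                % equals 1/2
```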
It is not usually necessary to compute the pseudoinverse, but if it is required it can be obtained using (1) or (2) or from the Newton–Schulz iteration

$$X_{k+1} = 2X_k - X_k A X_k, \quad k = 0, 1, 2, \dots,$$

for which $X_k \to A^+$ as $k \to \infty$ if $X_0 = \alpha A^*$ with $0 < \alpha < 2/\|A\|_2^2$. The convergence is at an asymptotically quadratic rate. However, about $2\log_2\kappa_2(A)$ iterations are required to reach the asymptotic phase, where $\kappa_2(A) = \|A\|_2 \|A^+\|_2$, so the iteration is slow to converge when $A$ is ill conditioned.
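A minimal MATLAB sketch of the iteration; the example matrix, the choice $\alpha = 1/\|A\|_2^2$, and the stopping test are my own illustrative choices.

```matlab
% Newton-Schulz iteration for the pseudoinverse.
A = [1 0; 1 1; 1 2];
X = A'/norm(A)^2;                 % X0 = alpha*A' with alpha = 1/norm(A,2)^2
for k = 1:100
    Xnew = 2*X - X*A*X;           % X_{k+1} = 2*X_k - X_k*A*X_k
    if norm(Xnew - X,1) <= 10*eps*norm(Xnew,1), X = Xnew; break, end
    X = Xnew;
end
norm(X - pinv(A))                 % of order eps after convergence
```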
Notes and References
The pseudoinverse was first introduced by Eliakim Moore in 1920 and was independently discovered by Roger Penrose in 1955. For more on the pseudoinverse see, for example, Ben-Israel and Greville (2003) or Campbell and Meyer (2009). For analysis of the Newton–Schulz iteration see Pan and Schreiber (1991).
- Adi Ben-Israel and Thomas N. E. Greville, Generalized Inverses: Theory and Applications, second edition, Springer-Verlag, New York, 2003.
- Stephen Campbell and Carl Meyer, Generalized Inverses of Linear Transformations, Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 2009. (Originally published by Pitman in 1979.)
- Victor Pan and Robert Schreiber, An Improved Newton Iteration for the Generalized Inverse of a Matrix, with Applications, SIAM J. Sci. Statist. Comput. 12 (5), 1109–1130, 1991.
Related Blog Posts
This article is part of the “What Is” series, available from https://nhigham.com/category/what-is and in PDF form from the GitHub repository https://github.com/higham/what-is.

