The inverse of a matrix $A \in \mathbb{C}^{n\times n}$ is a matrix $X$ such that $AX = I$, where $I$ is the identity matrix (which has ones on the diagonal and zeros everywhere else). The inverse is written $A^{-1}$. If the inverse exists then $A$ is said to be nonsingular or invertible, and otherwise it is singular.
The inverse of $A$ also satisfies $A^{-1}A = I$, as we now show. The equation $AA^{-1} = I$ says that $Ax_j = e_j$ for $j = 1\colon n$, where $x_j$ is the $j$th column of $A^{-1}$ and $e_j$ is the $j$th unit vector. Hence the columns of $A$ span $\mathbb{C}^n$, which means that the columns are linearly independent. Now $A(A^{-1}A - I) = (AA^{-1})A - A = 0$, so every column of $A^{-1}A - I$ is in the null space of $A$. But this contradicts the linear independence of the columns of $A$ unless $A^{-1}A - I = 0$, that is, $A^{-1}A = I$.
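The two-sided property can be checked numerically; here is a short numpy sketch (an illustration added for this discussion, not part of the original argument):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))  # a random matrix is nonsingular with probability 1
X = np.linalg.inv(A)

# Both products equal the identity, to rounding error.
assert np.allclose(A @ X, np.eye(4))
assert np.allclose(X @ A, np.eye(4))
```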
The inverse of a nonsingular matrix is unique. If $AX = I$ and $AY = I$ then premultiplying by $A^{-1}$ gives $A^{-1}AX = A^{-1}AY$, or, since $A^{-1}A = I$, $X = Y$.
The inverse of the inverse is the original matrix: $(A^{-1})^{-1} = A$, which is just another way of interpreting the equations $AA^{-1} = A^{-1}A = I$.
Connections with the Determinant
Since the determinant of a product of matrices is the product of the determinants, the equation $AA^{-1} = I$ implies $\det(A)\det(A^{-1}) = 1$, so the inverse can only exist when $\det(A) \ne 0$. In fact, the inverse always exists when $\det(A) \ne 0$.
An explicit formula for the inverse is

$$A^{-1} = \frac{\mathrm{adj}(A)}{\det(A)}, \qquad (1)$$

where the adjugate is defined by

$$\bigl(\mathrm{adj}(A)\bigr)_{ij} = (-1)^{i+j}\det(A_{ji}),$$

and where $A_{pq}$ denotes the submatrix of $A$ obtained by deleting row $p$ and column $q$. A special case is the formula

$$\begin{bmatrix} a & b \\ c & d \end{bmatrix}^{-1} = \frac{1}{ad - bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}.$$
The equation $\det(A)\det(A^{-1}) = 1$ implies $\det(A^{-1}) = 1/\det(A)$.
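The $2\times 2$ special case above can be coded directly; this is a minimal numpy sketch (the helper `inv2` is introduced here for illustration only):

```python
import numpy as np

# 2 x 2 special case of the adjugate formula: swap the diagonal entries,
# negate the off-diagonal ones, and divide by the determinant ad - bc.
def inv2(A):
    a, b = A[0]
    c, d = A[1]
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular")
    return np.array([[d, -b], [-c, a]]) / det

A = np.array([[2.0, 1.0], [5.0, 3.0]])   # determinant ad - bc = 1
assert np.allclose(inv2(A), np.linalg.inv(A))
```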
Conditions for Nonsingularity
The following result collects some equivalent conditions for a matrix to be nonsingular. We denote by $\mathrm{null}(A) = \{\, x : Ax = 0 \,\}$ the null space of $A$ (also called the kernel).
Theorem 1. For $A \in \mathbb{C}^{n\times n}$ the following conditions are equivalent to $A$ being nonsingular:

- $\mathrm{null}(A) = \{0\}$,
- $Ax = b$ has a unique solution $x$, for any $b$,
- none of the eigenvalues of $A$ is zero.
A useful formula is

$$(AB)^{-1} = B^{-1}A^{-1},$$

which holds for any nonsingular matrices $A$ and $B$ of the same dimension.
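The product rule $(AB)^{-1} = B^{-1}A^{-1}$, with its reversed order of factors, is easy to check numerically (a numpy sketch added for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
A = rng.standard_normal((n, n)) + n * np.eye(n)  # shifted to be comfortably nonsingular
B = rng.standard_normal((n, n)) + n * np.eye(n)

# (AB)^{-1} = B^{-1} A^{-1}: note that the order of the factors reverses.
assert np.allclose(np.linalg.inv(A @ B),
                   np.linalg.inv(B) @ np.linalg.inv(A))
```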
Here are some facts about the inverses of matrices of special types.
- A diagonal matrix $D = \mathrm{diag}(d_i)$ is nonsingular if $d_i \ne 0$ for all $i$, and $D^{-1} = \mathrm{diag}(d_i^{-1})$.
- An upper (lower) triangular matrix $T$ is nonsingular if its diagonal elements are nonzero, and the inverse is upper (lower) triangular with $(i,i)$ element $t_{ii}^{-1}$.
- If $A$ is nonsingular and $1 + v^*A^{-1}u \ne 0$, then $A + uv^*$ is nonsingular and

$$(A + uv^*)^{-1} = A^{-1} - \frac{A^{-1}uv^*A^{-1}}{1 + v^*A^{-1}u}.$$

This is the Sherman–Morrison formula.
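As a sanity check on the last two items, here is a numpy sketch (an added illustration, using real vectors so that $v^*$ becomes $v^T$) verifying the triangular-inverse structure and the Sherman–Morrison formula:

```python
import numpy as np

# Upper triangular: the inverse is again upper triangular, with
# reciprocal diagonal entries.
T = np.array([[2.0, 1.0, 4.0],
              [0.0, 5.0, 3.0],
              [0.0, 0.0, 8.0]])
Tinv = np.linalg.inv(T)
assert np.allclose(Tinv, np.triu(Tinv))            # still upper triangular
assert np.allclose(np.diag(Tinv), 1 / np.diag(T))  # (i,i) element is 1/t_ii

# Sherman-Morrison: rank-1 update of the inverse.
rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)    # comfortably nonsingular
u = rng.standard_normal(n)
v = rng.standard_normal(n)
Ainv = np.linalg.inv(A)
denom = 1 + v @ Ainv @ u                           # assumed nonzero here
B_inv = Ainv - np.outer(Ainv @ u, v @ Ainv) / denom
assert np.allclose(B_inv, np.linalg.inv(A + np.outer(u, v)))
```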
The Inverse as a Matrix Polynomial
The Cayley–Hamilton theorem says that a matrix satisfies its own characteristic equation, that is, if $p(t) = \det(tI - A) = t^n + c_{n-1}t^{n-1} + \cdots + c_1 t + c_0$, then $p(A) = 0$. In other words, $A^n + c_{n-1}A^{n-1} + \cdots + c_1 A + c_0 I = 0$, and if $A$ is nonsingular then multiplying through by $A^{-1}$ gives

$$A^{-1} = -\frac{1}{c_0}\bigl(A^{n-1} + c_{n-1}A^{n-2} + \cdots + c_1 I\bigr) \qquad (2)$$

(since $c_0 = p(0) = \det(-A) = (-1)^n\det(A) \ne 0$).
This means that $A^{-1}$ is expressible as a polynomial of degree at most $n-1$ in $A$ (with coefficients that depend on $A$).
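The polynomial expression can be evaluated in numpy; this is a sketch for illustration only, assuming `np.poly(A)` returns the characteristic polynomial coefficients $[1, c_{n-1}, \ldots, c_1, c_0]$ (which it computes from the eigenvalues):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n)) + n * np.eye(n)  # comfortably nonsingular

# Coefficients of p(t) = t^n + c_{n-1} t^{n-1} + ... + c_1 t + c_0,
# highest degree first: c[0] = 1, ..., c[n] = c_0.
c = np.poly(A)

# Horner evaluation of S = A^{n-1} + c_{n-1} A^{n-2} + ... + c_1 I,
# so that A^{-1} = -S / c_0 by (the argument behind) equation (2).
S = np.eye(n)
for k in range(1, n):
    S = S @ A + c[k] * np.eye(n)
Ainv = -S / c[n]

assert np.allclose(Ainv, np.linalg.inv(A))
```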
To Compute or Not to Compute the Inverse
The inverse is an important theoretical tool, but it is rarely necessary to compute it explicitly. If we wish to solve a linear system of equations $Ax = b$ then computing $A^{-1}$ and then forming $x = A^{-1}b$ is both slower and less accurate in floating-point arithmetic than using LU factorization (Gaussian elimination) to solve the system directly. Indeed, for $n = 1$ one would not solve $7x = 21$ by computing $x = 7^{-1} \times 21$.
For sparse matrices, computing the inverse may not even be practical, as the inverse is usually dense.
If one needs to compute the inverse, how should one do it? We will consider the cost of different methods, measured by the number of elementary arithmetic operations (addition, subtraction, division, multiplication) required. Using (1), the cost is that of computing one determinant of order $n$ and $n^2$ determinants of order $n-1$. Since computing a determinant costs $O(n^3)$ operations by standard methods, this approach costs $O(n^5)$ operations, which is prohibitively expensive unless $n$ is very small. Instead one can compute an LU factorization with pivoting and then solve the systems $Ax_j = e_j$, $j = 1\colon n$, for the columns of $A^{-1}$, at a total cost of about $2n^3$ operations.
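The LU-based approach can be sketched in numpy (an added illustration: `np.linalg.solve` calls LAPACK, which factorizes $A$ once by LU with partial pivoting and then performs the triangular solves):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
A = rng.standard_normal((n, n)) + n * np.eye(n)  # comfortably nonsingular

# Solve A X = I column by column: one LU factorization with partial
# pivoting, then a pair of triangular solves per column of the identity.
X = np.linalg.solve(A, np.eye(n))

assert np.allclose(A @ X, np.eye(n))
assert np.allclose(X, np.linalg.inv(A))
```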
Equation (2) does not give a good method for computing $A^{-1}$, because computing the coefficients $c_i$ and evaluating a matrix polynomial are both expensive.
It is possible to exploit fast matrix multiplication methods, which compute the product of two $n\times n$ matrices in $O(n^{\alpha})$ operations for some $\alpha < 3$. By using a block LU factorization recursively, one can reduce matrix inversion to matrix multiplication. If we use Strassen’s fast matrix multiplication method, which has $\alpha = \log_2 7 \approx 2.807$, then we can compute $A^{-1}$ in $O(n^{\log_2 7})$ operations.
MATLAB uses the backslash and forward slash for “matrix division”, with the meanings $A\backslash B = A^{-1}B$ and $A/B = AB^{-1}$. Note that because matrix multiplication is not commutative, $A\backslash B \ne B/A$, in general. We have $A\backslash I = A^{-1}$ and $I/A = A^{-1}$. In MATLAB, the inverse can be computed with inv(A), which uses LU factorization with pivoting.
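For readers working in Python, the numpy analogues can be sketched as follows (an added illustration; numpy has no slash operators, so both divisions reduce to `np.linalg.solve`):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)  # comfortably nonsingular
B = rng.standard_normal((n, n)) + n * np.eye(n)
b = rng.standard_normal(n)

# MATLAB's A\b: solve A x = b directly, without forming inv(A).
x = np.linalg.solve(A, b)
assert np.allclose(A @ x, b)

# MATLAB's A/B: X = A B^{-1}, i.e. X B = A, i.e. B^T X^T = A^T.
X = np.linalg.solve(B.T, A.T).T
assert np.allclose(X @ B, A)
```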
If $A$ is $m\times n$ then the equation $AX = I_m$ requires $X$ to be $n\times m$, as does $XA = I_n$. Rank considerations show that at most one of these equations can hold if $m \ne n$. For example, if $a$ is a nonzero row vector, then $ax = 1 = I_1$ for $x = a^*/(aa^*)$, but $xa \ne I_n$. This $x$ is an example of a generalized inverse.
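The row-vector example can be verified directly (a numpy sketch with real entries, so $a^*$ becomes $a^T$):

```python
import numpy as np

# A nonzero 1 x 3 row vector a and the column x = a^T / (a a^T).
a = np.array([[1.0, 2.0, 2.0]])
x = a.T / (a @ a.T)

assert np.allclose(a @ x, np.eye(1))       # a x = I_1 holds...
assert not np.allclose(x @ a, np.eye(3))   # ...but x a is not I_3
assert np.linalg.matrix_rank(x @ a) == 1   # it is only a rank-1 matrix
```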
An Interesting Inverse
Here is a triangular matrix with an interesting inverse. This example is adapted from the LINPACK Users’ Guide, whose front cover shows a similar matrix, with “LINPACK” replacing “INVERSE”, and whose back cover shows its inverse.
Related Blog Posts
- What Is a Generalized Inverse? (2020)
- What Is a Sparse Matrix? (2020)
- What Is an LU Factorization? (2021)
- What Is the Adjugate of a Matrix? (2020)
- What Is the Determinant of a Matrix? (2021)
- What Is the Sherman–Morrison–Woodbury Formula? (2020)