What Is the Matrix Inverse?

The inverse of a matrix A\in\mathbb{C}^{n\times n} is a matrix X\in\mathbb{C}^{n\times n} such that AX = I, where I is the identity matrix (which has ones on the diagonal and zeros everywhere else). The inverse is written as A^{-1}. If the inverse exists then A is said to be nonsingular or invertible, and otherwise it is singular.

The inverse X of A also satisfies XA = I, as we now show. The equation AX = I says that Ax_j = e_j for j=1\colon n, where x_j is the jth column of X and e_j is the jth unit vector. Hence the n columns of A span \mathbb{C}^n (every e_j is a linear combination of them), which means that the columns of A are linearly independent and so the null space of A is \{0\}. Now A(I-XA) = A - AXA = A - A = 0, so every column of I - XA lies in the null space of A. Hence I - XA = 0, that is, XA = I.

The inverse of a nonsingular matrix is unique. If AX = AW = I then premultiplying by X gives XAX = XAW, or, since XA = I, X = W.

The inverse of the inverse is the original matrix: (A^{-1})^{-1} = A, which is just another way of reading the equations AX = XA = I.
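
These identities are easy to illustrate numerically. A minimal MATLAB sketch, with an arbitrarily chosen nonsingular matrix:

    % Check that X = inv(A) satisfies both AX = I and XA = I.
    A = [2 1; 1 3];           % arbitrary nonsingular matrix
    X = inv(A);
    norm(A*X - eye(2))        % of order eps
    norm(X*A - eye(2))        % likewise of order eps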

Connections with the Determinant

Since the determinant of a product of matrices is the product of the determinants, the equation AX = I implies \det(A) \det(X) = 1, so the inverse can exist only when \det(A) \ne 0. Conversely, the inverse always exists when \det(A) \ne 0, as the explicit formula (1) below shows.

An explicit formula for the inverse is

\notag   A^{-1} = \displaystyle\frac{\mathrm{adj}(A)}{\det(A)},   \qquad (1)

where the adjugate \mathrm{adj} is defined by

\bigl(\mathrm{adj}(A)\bigr)_{ij} = (-1)^{i+j} \det(A_{ji})

and where A_{pq} denotes the submatrix of A obtained by deleting row p and column q. A special case is the formula

\notag      \begin{bmatrix}        a & b \\ c& d      \end{bmatrix}^{-1}    = \displaystyle\frac{1}{ad-bc}      \begin{bmatrix}        d & -b \\ -c & a      \end{bmatrix}.
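
As a sketch, the 2-by-2 formula is easy to check in MATLAB against inv (the entries here are arbitrary, chosen so that ad - bc \ne 0):

    a = 4; b = 7; c = 2; d = 6;        % ad - bc = 10, nonzero
    A = [a b; c d];
    Ainv = [d -b; -c a]/(a*d - b*c);   % explicit 2-by-2 inverse
    norm(Ainv - inv(A))                % agrees to rounding error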

The equation AA^{-1} = I implies \det(A^{-1}) = 1/\det(A).

Conditions for Nonsingularity

The following result collects some equivalent conditions for a matrix to be nonsingular. We denote by \mathrm{null}(A) the null space of A (also called the kernel).

Theorem 1. For A \in \mathbb{C}^{n \times n} the following conditions are equivalent to A being nonsingular:

  • \mathrm{null}(A) = \{0\},
  • \mathrm{rank}(A) = n,
  • Ax=b has a unique solution x for every b,
  • none of the eigenvalues of A is zero,
  • \det(A) \ne 0.
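
These conditions are easy to explore numerically. A minimal MATLAB sketch with an obviously singular matrix (its second column is twice its first):

    A = [1 2; 2 4];   % singular: columns linearly dependent
    rank(A)           % 1, which is less than n = 2
    det(A)            % 0
    eig(A)            % eigenvalues 0 and 5: one is zero
    null(A)           % a nonzero vector spanning null(A)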

If A and B are nonsingular then so is the product AB, and a useful formula is

\notag     (AB)^{-1} = B^{-1}A^{-1}.
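
A one-line sanity check in MATLAB (with arbitrary matrices that are nonsingular by diagonal dominance):

    A = rand(3) + 3*eye(3);  B = rand(3) + 3*eye(3);
    norm(inv(A*B) - inv(B)*inv(A))   % of order eps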

Here are some facts about the inverses of n\times n matrices of special types.

  • A diagonal matrix D = \mathrm{diag}(d_i) is nonsingular if and only if d_i\ne0 for all i, and then D^{-1} = \mathrm{diag}(d_i^{-1}).
  • An upper (lower) triangular matrix T is nonsingular if and only if its diagonal elements are nonzero, and the inverse is upper (lower) triangular with (i,i) element t_{ii}^{-1}.
  • If A is nonsingular, x,y\in\mathbb{C}^n, and y^*A^{-1}x \ne -1, then A + xy^* is nonsingular and

    \notag     \bigl(A + xy^*\bigr)^{-1}       = A^{-1} - \displaystyle\frac{A^{-1}x y^* A^{-1}}{1 +  y^*A^{-1}x}.

    This is the Sherman–Morrison formula.
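
The Sherman–Morrison formula is cheap to verify numerically. A MATLAB sketch with arbitrary real data, so that y^* becomes a transpose (with probability 1 the condition 1 + y'*B*x \ne 0 holds):

    n = 5;
    A = rand(n) + n*eye(n);               % nonsingular by diagonal dominance
    x = rand(n,1);  y = rand(n,1);
    B = inv(A);
    SM = B - (B*x)*(y'*B)/(1 + y'*B*x);   % Sherman--Morrison formula
    norm(SM - inv(A + x*y'))              % of order eps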

The Inverse as a Matrix Polynomial

The Cayley–Hamilton theorem says that a matrix satisfies its own characteristic equation, that is, if p(t) = \det(tI - A) = t^n + c_{n-1} t^{n-1} + \cdots + c_0, then p(A) = 0. In other words, A^n + c_{n-1} A^{n-1} + \cdots + c_0I = 0, and if A is nonsingular then multiplying through by A^{-1} gives (since c_0 = p(0) = (-1)^n\det(A) \ne 0)

A^{-1} = -\displaystyle\frac{1}{c_0} (A^{n-1} + c_{n-1} A^{n-2} + \cdots + c_1 I).     \qquad (2)

This means that A^{-1} is expressible as a polynomial of degree at most n-1 in A (with coefficients that depend on A).
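
In MATLAB, formula (2) can be coded with poly, which returns the coefficients of the characteristic polynomial, and polyvalm, which evaluates a polynomial at a matrix argument. This is only a sketch; as noted later, it is not a good numerical method:

    A = [3 1; 1 2];
    p = poly(A);                              % p = [1, c_{n-1}, ..., c_1, c_0]
    Ainv = -polyvalm(p(1:end-1), A)/p(end);   % formula (2)
    norm(Ainv - inv(A))                       % small for this well conditioned A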

To Compute or Not to Compute the Inverse

The inverse is an important theoretical tool, but it is rarely necessary to compute it explicitly. If we wish to solve a linear system of equations Ax = b then computing A^{-1} and then forming x = A^{-1}b is both slower and less accurate in floating-point arithmetic than using LU factorization (Gaussian elimination) to solve the system directly. Indeed, for n = 1 one would not solve 3x = 1 by computing 3^{-1} \times 1.
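
A sketch of the comparison in MATLAB (random data; for ill-conditioned A the difference in residuals is more pronounced):

    n = 500;
    A = rand(n);  b = rand(n,1);
    x1 = A\b;                 % LU-based solve
    x2 = inv(A)*b;            % solve via the explicit inverse
    norm(b - A*x1)/norm(b)    % backslash typically gives the smaller residual
    norm(b - A*x2)/norm(b)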

For sparse matrices, computing the inverse may not even be practical, as the inverse is usually dense.

If one needs to compute the inverse, how should one do it? We will consider the cost of different methods, measured by the number of elementary arithmetic operations (addition, subtraction, division, multiplication) required. Using (1), the cost is that of computing one determinant of order n and n^2 determinants of order n-1. Since computing a k\times k determinant costs at least O(k^3) operations by standard methods, this approach costs at least O(n^5) operations, which is prohibitively expensive unless n is very small. Instead one can compute an LU factorization with pivoting and then solve the systems Ax_i = e_i for the columns x_i of A^{-1}, at a total cost of 2n^3 + O(n^2) operations.
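
Here is a sketch of the LU-based approach in MATLAB, solving for the columns of the inverse one at a time (in practice one would simply call inv, which does essentially this, but more efficiently):

    n = 4;
    A = rand(n) + n*eye(n);    % nonsingular by diagonal dominance
    [L,U,P] = lu(A);           % PA = LU, with partial pivoting
    X = zeros(n);
    E = eye(n);
    for j = 1:n
        X(:,j) = U \ (L \ (P*E(:,j)));   % solve A*x_j = e_j
    end
    norm(A*X - eye(n))         % of order eps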

Equation (2) does not give a good method for computing A^{-1}, because computing the coefficients c_i and evaluating a matrix polynomial are both expensive.

It is possible to exploit fast matrix multiplication methods, which compute the product of two n\times n matrices in O(n^\alpha) operations for some \alpha < 3. By using a block LU factorization recursively, one can reduce matrix inversion to matrix multiplication. If we use Strassen’s fast matrix multiplication method, which has \alpha = \log_2 7 \approx 2.807, then we can compute A^{-1} in O(n^{2.807}) operations.
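
To illustrate the reduction of inversion to multiplication, here is a sketch of recursive 2-by-2 block inversion in MATLAB, as a hypothetical helper called blockinv. It assumes n is a power of 2 and that the leading blocks and Schur complements encountered are nonsingular; a serious implementation would replace the * products by a fast multiplication routine and incorporate pivoting.

    function X = blockinv(A)
    %BLOCKINV  Matrix inverse by recursive block elimination (sketch only).
    n = size(A,1);
    if n == 1, X = 1/A; return, end
    m = n/2;                                 % assumes n is a power of 2
    A11 = A(1:m,1:m);    A12 = A(1:m,m+1:n);
    A21 = A(m+1:n,1:m);  A22 = A(m+1:n,m+1:n);
    X11 = blockinv(A11);                     % assumes A11 nonsingular
    S   = A22 - A21*X11*A12;                 % Schur complement of A11 in A
    XS  = blockinv(S);                       % assumes S nonsingular
    X   = [X11 + X11*A12*XS*A21*X11, -X11*A12*XS;
           -XS*A21*X11,              XS];
    end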

Slash Notation

MATLAB uses the backslash and forward slash for “matrix division”, with the meanings A \backslash B = A^{-1}B and A / B = AB^{-1}. Note that because matrix multiplication is not commutative, A \backslash B \ne A / B in general. We have A\backslash I = I/A = A^{-1} and I\backslash A = A/I = A. In MATLAB, the inverse can be computed with inv(A), which uses LU factorization with pivoting.
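
A short MATLAB illustration of these identities (with a matrix that is nonsingular by diagonal dominance):

    A = rand(3) + 3*eye(3);  B = rand(3);
    norm(A\B - inv(A)*B)      % backslash: A^{-1}B, computed without inv
    norm(B/A - B*inv(A))      % forward slash: BA^{-1}
    norm(A\eye(3) - inv(A))   % A\I = A^{-1}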

Rectangular Matrices

If A is m\times n then the equation AX = I_m requires X to be n\times m, as does XA = I_n. Rank considerations show that at most one of these equations can hold if m\ne n. For example, if A = a^* is a nonzero row vector (m = 1), then AX = 1 for X = a/(a^*a), but XA = aa^*/(a^*a) \ne I_n for n > 1. Such an X is an example of a generalized inverse.
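
In MATLAB this example can be reproduced with pinv, which computes the Moore–Penrose pseudoinverse (the vector a below is an arbitrary choice):

    a = [1; 2; 3];
    A = a';                % 1-by-3 row vector
    X = a/(a'*a);          % a generalized inverse of A
    A*X                    % 1, the 1-by-1 identity
    X*A                    % a rank-1 matrix, not eye(3)
    norm(X - pinv(A))      % X coincides with the pseudoinverse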

An Interesting Inverse

Here is a triangular matrix with an interesting inverse. This example is adapted from the LINPACK Users’ Guide, whose front cover shows the matrix (with “LINPACK” in place of “INVERSE”) and whose back cover shows its inverse.

\notag \left[\begin{array}{ccccccc} I & N & V & E & R & S & E\\ 0 & N & V & E & R & S & E\\ 0 & 0 & V & E & R & S & E\\ 0 & 0 & 0 & E & R & S & E\\ 0 & 0 & 0 & 0 & R & S & E\\ 0 & 0 & 0 & 0 & 0 & S & E\\ 0 & 0 & 0 & 0 & 0 & 0 & E \end{array}\right]^{-1} = \left[\begin{array}{*{7}{r@{\hspace{4pt}}}} 1/I & -1/I & 0 & 0 & 0 & 0 & 0\\ 0 & 1/N & -1/N & 0 & 0 & 0 & 0\\ 0 & 0 & 1/V & -1/V & 0 & 0 & 0\\ 0 & 0 & 0 & 1/E & -1/E & 0 & 0\\ 0 & 0 & 0 & 0 & 1/R & -1/R & 0\\ 0 & 0 & 0 & 0 & 0 & 1/S & -1/S\\ 0 & 0 & 0 & 0 & 0 & 0 & 1/E \end{array}\right].
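
This pattern is easy to reproduce in MATLAB by assigning numerical values to the letters; the values below (alphabet positions) are just one arbitrary choice of nonzero numbers:

    v = [9 14 22 5 18 19 5];       % I N V E R S E as alphabet positions
    n = numel(v);
    T = triu(repmat(v, n, 1));     % T(i,j) = v(j) for j >= i
    Tinv = inv(T);                 % bidiagonal, as displayed above
    E = diag(1./v) - diag(1./v(1:end-1), 1);
    norm(Tinv - E)                 % of order eps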
