## What Is the Adjugate of a Matrix?

The adjugate of an $n\times n$ matrix $A$ is defined by

$\mathrm{adj}(A) = \bigl( (-1)^{i+j} \det(A_{ji}) \bigr),$

where $A_{pq}$ denotes the submatrix of $A$ obtained by deleting row $p$ and column $q$. It is the transposed matrix of cofactors. The adjugate is sometimes called the (classical) adjoint and is sometimes written as $A^\mathrm{A}$.
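The definition can be turned directly into code. The following is a minimal NumPy sketch (the helper name `adj_cofactor` is mine, not from the text); it forms each cofactor explicitly, so it is only sensible for small matrices:

```python
import numpy as np

def adj_cofactor(A):
    # Adjugate from the definition: the transpose of the matrix of
    # cofactors. Costs n^2 determinant evaluations.
    n = A.shape[0]
    C = np.empty((n, n), dtype=A.dtype)
    for i in range(n):
        for j in range(n):
            # A_{ij}: delete row i and column j
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1)**(i + j) * np.linalg.det(minor)
    return C.T

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(adj_cofactor(A))  # [[ 4. -2.]
                        #  [-3.  1.]]
```

For a $2\times 2$ matrix the adjugate simply swaps the diagonal entries and negates the off-diagonal ones, as the output shows.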

For nonsingular $A$,

$\mathrm{adj}(A) = \det(A) A^{-1},$

and this equation is often used to give a formula for $A^{-1}$ in terms of determinants.

Since the adjugate is a scalar multiple of the inverse, we would expect it to share some properties of the inverse. Indeed we have

\begin{aligned} \mathrm{adj}(AB) &= \mathrm{adj}(B) \mathrm{adj}(A),\\ \mathrm{adj}(A^*) &= \mathrm{adj}(A)^*. \end{aligned}

These properties can be proved by first assuming the matrices are nonsingular—in which case they follow from properties of the determinant and the inverse—and then using continuity of the entries of $\mathrm{adj}(A)$ and the fact that every matrix is the limit of a sequence of nonsingular matrices. Another property that can be proved in a similar way is

$\mathrm{adj}\bigl(\mathrm{adj}(A)\bigr) = (\det A)^{n-2} A.$
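In the nonsingular case this identity follows quickly from $\mathrm{adj}(A) = \det(A)A^{-1}$, and it is easy to confirm numerically. A small NumPy sketch (assuming a random matrix is nonsingular, which holds with probability 1):

```python
import numpy as np

# Check adj(adj(A)) = det(A)^(n-2) A for a random nonsingular A,
# computing the adjugate via adj(A) = det(A) * inv(A).
rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))   # nonsingular with probability 1

adj = lambda M: np.linalg.det(M) * np.linalg.inv(M)

lhs = adj(adj(A))
rhs = np.linalg.det(A)**(n - 2) * A
print(np.allclose(lhs, rhs))  # True
```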

If $A$ has rank $n-2$ or less then $\mathrm{adj}(A)$ is just the zero matrix. Indeed, in this case every $(n-1)\times(n-1)$ submatrix of $A$ is singular, so $\det(A_{ji}) \equiv 0$ in the definition of $\mathrm{adj}(A)$. In particular, $\mathrm{adj}(0) = 0$ and, for $n\ge 3$, $\mathrm{adj}(xy^*) = 0$ for any rank-1 matrix $xy^*$.

Less obviously, if $\mathrm{rank}(A) = n-1$ then $\mathrm{adj}(A)$ has rank 1. This can be seen from the formula below that expresses the adjugate in terms of the singular value decomposition (SVD).

If $x,y\in\mathbb{C}^n$, then for nonsingular $A$,

\begin{aligned} \det(A + xy^*) &= \det\bigl( A(I + A^{-1}x y^*) \bigr)\\ &= \det(A) \det( I + A^{-1}x y^*)\\ &= \det(A) (1+ y^*A^{-1}x)\\ &= \det(A) + y^* (\det(A)A^{-1})x\\ &= \det(A) + y^* \mathrm{adj}(A)x. \end{aligned}

Again it follows by continuity that for any $A$,

$\det(A + xy^*) = \det(A) + y^* \mathrm{adj}(A)x.$
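The identity can be checked numerically even when $A$ is singular. A sketch using the cofactor definition of the adjugate (the helper `adj` below is illustrative, not from the text):

```python
import numpy as np

def adj(A):
    # Adjugate via the cofactor definition (fine for small n).
    n = A.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, 0), j, 1)
            C[i, j] = (-1)**(i + j) * np.linalg.det(minor)
    return C.T

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],   # = 2 * row 1, so det(A) = 0
              [0.0, 1.0, 1.0]])
x = np.array([1.0, 0.0, 2.0])
y = np.array([1.0, 0.0, 0.0])

lhs = np.linalg.det(A + np.outer(x, y))
rhs = np.linalg.det(A) + y @ adj(A) @ x
print(lhs, rhs)  # both equal -2 (up to rounding)
```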

This expression is useful when we need to deal with a rank-1 perturbation to a singular matrix. A related identity is

$\det\left( \begin{bmatrix} A & x \\ y^* & \alpha \\ \end{bmatrix}\right) = \alpha \det(A) - y^* \mathrm{adj}(A) x.$
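A quick numerical check of this bordered-determinant identity, using $\mathrm{adj}(A) = \det(A)A^{-1}$ for a small nonsingular $A$ (the specific numbers are illustrative):

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [1.0, 3.0]])
x = np.array([1.0, -1.0])
y = np.array([2.0, 1.0])
alpha = 5.0

# Bordered matrix [[A, x], [y^*, alpha]]
B = np.block([[A, x[:, None]],
              [y[None, :], np.array([[alpha]])]])
adjA = np.linalg.det(A) * np.linalg.inv(A)

lhs = np.linalg.det(B)
rhs = alpha * np.linalg.det(A) - y @ adjA @ x
print(lhs, rhs)  # both equal 27 (up to rounding)
```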

Let $A = U\Sigma V^*$ be an SVD, where $U$ and $V$ are unitary and $\Sigma = \mathrm{diag}(\sigma_i)$, with $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_n \ge 0$. Then

$\mathrm{adj}(A) = \mathrm{adj}(V^*) \mathrm{adj}(\Sigma) \mathrm{adj}(U) = \mathrm{adj}(V)^* \mathrm{adj}(\Sigma) \mathrm{adj}(U).$

It is easy to see that $D = \mathrm{adj}(\Sigma)$ is diagonal, with

$d_{kk} = \displaystyle\prod_{\substack{i=1\\ i\ne k}}^n \sigma_i.$

Since $U$ and $V$ are unitary and hence nonsingular,

\begin{aligned} \mathrm{adj}(A) &= \mathrm{adj}(V)^* \mathrm{adj}(\Sigma) \mathrm{adj}(U)\\ &= \overline{\det(V)} V^{-*} \mathrm{adj}(\Sigma) \det(U) U^{-1}\\ &= \overline{\det(V)} V \mathrm{adj}(\Sigma) \det(U) U^*\\ &= \overline{\det(V)} \det(U) \cdot V \mathrm{adj}(\Sigma) U^*. \end{aligned}

If $\mathrm{rank}(A) = n-1$ then $\sigma_n = 0$ and so

$\mathrm{adj}(A) = \rho\, \sigma_1 \sigma_2 \cdots \sigma_{n-1}v_n^{} u_n^*,$

where $\rho = \overline{\det(V)} \det(U)$ has modulus $1$ and $v_n$ and $u_n$ are the last columns of $V$ and $U$, respectively.

The definition of $\mathrm{adj}(A)$ does not provide a good means of computing it, because the determinant computations are prohibitively expensive. The following MATLAB function (which is very similar to the function `adjoint` in the Symbolic Math Toolbox) uses the SVD formula.

```matlab
function X = adj(A)
%   X = ADJ(A) computes the adjugate of the square matrix A via
%   the singular value decomposition.

n = length(A);
[U,S,V] = svd(A);
D = zeros(n);
for i = 1:n
    d = diag(S);
    d(i) = 1;
    D(i,i) = prod(d);
end
X = conj(det(V))*det(U)*V*D*U';
```


Note that, in common with most SVD codes, the `svd` function does not return $\det(U)$ and $\det(V)$, so we must compute them. This function is numerically stable, as shown by Stewart (1998).
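For readers working in Python, the same algorithm can be transcribed with NumPy (a sketch; `adj_svd` is my name for it, and recall that NumPy's `svd` returns $V^*$ as `Vh`):

```python
import numpy as np

def adj_svd(A):
    # Adjugate via adj(A) = conj(det(V)) det(U) V adj(Sigma) U^*.
    n = A.shape[0]
    U, s, Vh = np.linalg.svd(A)
    V = Vh.conj().T
    # d[k] = product of all singular values except sigma_k
    d = np.array([np.prod(np.delete(s, k)) for k in range(n)])
    rho = np.conj(np.linalg.det(V)) * np.linalg.det(U)
    return rho * (V * d) @ U.conj().T   # V * d == V @ diag(d)

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
print(np.allclose(adj_svd(A), np.linalg.det(A) * np.linalg.inv(A)))  # True
```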

Finally, we note that Richter (1954) and Mirsky (1956) obtained the Frobenius norm bound

$\|\mathrm{adj}(A) \|_F \le \displaystyle\frac{ \|A\|_F^{n-1} }{ n^{(n-2)/2} }.$
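The bound is easy to probe empirically; the following Monte Carlo sketch (random normal test matrices, my choice) checks it over 100 samples:

```python
import numpy as np

# Empirical check of the Richter--Mirsky bound
#   ||adj(A)||_F <= ||A||_F^(n-1) / n^((n-2)/2).
rng = np.random.default_rng(2)
n = 5
ok = True
for _ in range(100):
    A = rng.standard_normal((n, n))           # nonsingular w.p. 1
    adjA = np.linalg.det(A) * np.linalg.inv(A)
    bound = np.linalg.norm(A, 'fro')**(n - 1) / n**((n - 2) / 2)
    ok = ok and np.linalg.norm(adjA, 'fro') <= bound + 1e-8
print(ok)  # True
```

Equality holds when all the singular values of $A$ are equal, for example when $A$ is a scalar multiple of a unitary matrix.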

## References

This is a minimal set of references, which contain further useful references within.