# Matrix Rank Relations

Matrix rank is an important concept in linear algebra. While rank deficiency can be a sign of an incompletely or improperly specified problem (a singular system of linear equations, for example), in some problems low rank of a matrix is a desired property or outcome. Here we present some fundamental rank relations in a concise form useful for reference. These are all immediate consequences of the singular value decomposition (SVD), but we give elementary (albeit not entirely self-contained) proofs of them.

The rank of a matrix $A\in\mathbb{R}^{m\times n}$ is the maximum number of linearly independent columns, which is the dimension of the range space of $A$, $\mathrm{range}(A) = \{\, Ax: x \in\mathbb{R}^n \,\}$. An important but non-obvious fact is that this is the same as the maximum number of linearly independent rows (see (5) below).

A rank-$1$ matrix has the form $xy^*$, where $x$ and $y$ are nonzero vectors. Every column is a multiple of $x$ and every row is a multiple of $y^*$. A sum of $k$ rank-$1$ matrices has the form

$\notag A = \displaystyle\sum_{i=1}^{k} x_iy_i^* = \begin{bmatrix} x_1 & x_2 & \dots & x_k \end{bmatrix} \begin{bmatrix} y_1^* \\ y_2^* \\ \vdots\\ y_k^* \end{bmatrix} \equiv XY^*. \qquad (0)$

Each column of $A$ is a linear combination of the vectors $x_1$, $x_2$, …, $x_k$, so $A$ has at most $k$ linearly independent columns, that is, $A$ has rank at most $k$. In fact, $\mathrm{rank}(A) = k$ if $X$ and $Y$ have rank $k$, as follows from (4) below. Any rank-$k$ matrix can be written in the form $(0)$ with $X$ and $Y$ of rank $k$; indeed this is the full-rank factorization below.
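This is easy to check numerically. The Python sketch below (illustrative only: the `rank` routine is a bare-bones Gaussian elimination rank estimator for small examples, not a robust algorithm) builds $A = XY^*$ from $k = 2$ real outer products, where $X$ and $Y$ both have rank $2$, and confirms that $A$ has rank $2$.

```python
def rank(M, tol=1e-9):
    """Rank of a small real matrix via Gaussian elimination (illustrative only)."""
    M = [row[:] for row in M]
    m, n, r = len(M), len(M[0]), 0
    for col in range(n):
        if r == m:
            break
        piv = max(range(r, m), key=lambda i: abs(M[i][col]))
        if abs(M[piv][col]) < tol:
            continue  # no usable pivot in this column
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, m):
            f = M[i][col] / M[r][col]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# k = 2 outer products x_i y_i^T; X = [x_1 x_2] and Y = [y_1 y_2] both have rank 2.
xs = [[1, 0, 0], [0, 1, 0]]  # x_1 and x_2, stored as rows for convenience
ys = [[1, 1], [1, 2]]        # y_1 and y_2
A = [[sum(xs[i][p] * ys[i][q] for i in range(2)) for q in range(2)]
     for p in range(3)]      # A = x_1 y_1^T + x_2 y_2^T, a 3x2 matrix
print(rank(A))  # 2
```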

Here are some fundamental rank equalities and inequalities.

## Rank-Nullity Theorem

The rank-nullity theorem says that

$\notag \boxed{ \mathrm{rank}(A) + \mathrm{dim}( \mathrm{null}(A) ) = n, \quad A\in\mathbb{R}^{m\times n},}$

where $\mathrm{null}(A) = \{\, x \in\mathbb{R}^n: Ax = 0 \,\}$ is the null space of $A$.

## Rank Bound

The rank cannot exceed the number of columns, or, by (5) below, the number of rows:

$\notag \boxed{ \mathrm{rank}(A) \le \min(m,n), \quad A\in\mathbb{C}^{m\times n}. }$

## Rank of a Sum

For any $A$ and $B$ of the same dimension,

$\notag \boxed{|\mathrm{rank}(A) - \mathrm{rank}(B)| \le \mathrm{rank}(A+B) \le \mathrm{rank}(A) + \mathrm{rank}(B).} \qquad (1)$

The upper bound follows from the fact that the dimension of the sum of two subspaces cannot exceed the sum of the dimensions of the subspaces. Interestingly, the upper bound is also a corollary of the bound (3) for the rank of a matrix product, because

$\notag \begin{aligned} \mathrm{rank}(A+B) &= \mathrm{rank}\biggl( \begin{bmatrix} A & B \end{bmatrix} \begin{bmatrix} I \\ I \end{bmatrix} \biggr)\\ &\le \min\biggl(\mathrm{rank}\bigl(\begin{bmatrix} A & B \end{bmatrix}\bigr), \mathrm{rank}\biggl(\begin{bmatrix} I \\ I \end{bmatrix} \biggr)\biggr)\\ &\le \mathrm{rank}\bigl(\begin{bmatrix} A & B \end{bmatrix}\bigr)\\ &\le \mathrm{rank}(A) + \mathrm{rank}(B). \end{aligned}$

For the lower bound, writing $A = -B + A+B$ and applying the upper bound gives $\mathrm{rank}(A) \le \mathrm{rank}(-B) + \mathrm{rank}(A+B) = \mathrm{rank}(B) + \mathrm{rank}(A+B)$, and likewise with the roles of $A$ and $B$ interchanged.
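Both bounds in (1) can be exercised numerically. In the Python sketch below (the `rank` routine is a minimal Gaussian elimination estimator, adequate only for small exact examples), the lower bound is attained:

```python
def rank(M, tol=1e-9):
    """Rank of a small real matrix via Gaussian elimination (illustrative only)."""
    M = [row[:] for row in M]
    m, n, r = len(M), len(M[0]), 0
    for col in range(n):
        if r == m:
            break
        piv = max(range(r, m), key=lambda i: abs(M[i][col]))
        if abs(M[piv][col]) < tol:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, m):
            f = M[i][col] / M[r][col]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[1, 0], [0, 1]]   # rank 2
B = [[-1, 0], [0, 0]]  # rank 1
S = [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]  # A + B
rA, rB, rS = rank(A), rank(B), rank(S)
assert abs(rA - rB) <= rS <= rA + rB  # the bounds in (1)
print(rA, rB, rS)  # 2 1 1: rank(A+B) = |rank(A) - rank(B)| here
```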

## Rank of $A$ and $A^*A$

For any $A$,

$\notag \boxed{\mathrm{rank}(A^*A) = \mathrm{rank}(A).} \qquad (2)$

Indeed $Ax = 0$ implies $A^*Ax = 0$, and $A^*Ax = 0$ implies $0 = x^*A^*Ax = (Ax)^*(Ax)$, which implies $Ax = 0$. Hence the null spaces of $A$ and $A^*A$ are the same. The equality (2) follows from the rank-nullity theorem.
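For a real matrix $A^* = A^T$, so (2) can be checked directly. A minimal Python sketch (hand-rolled rank routine and matrix product, for illustration only):

```python
def rank(M, tol=1e-9):
    """Rank of a small real matrix via Gaussian elimination (illustrative only)."""
    M = [row[:] for row in M]
    m, n, r = len(M), len(M[0]), 0
    for col in range(n):
        if r == m:
            break
        piv = max(range(r, m), key=lambda i: abs(M[i][col]))
        if abs(M[piv][col]) < tol:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, m):
            f = M[i][col] / M[r][col]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def matmul(A, B):
    """Product of two small matrices stored as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A = [[1, 2, 3], [2, 4, 6], [1, 0, 1]]  # rank 2: second row is twice the first
At = [list(col) for col in zip(*A)]    # A^T; A is real, so A^* = A^T
print(rank(matmul(At, A)), rank(A))    # 2 2
```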

## Rank of a General Product

For any $A$ and $B$ for which the product $AB$ is defined,

$\notag \boxed{\mathrm{rank}(AB) \le \min\bigl( \mathrm{rank}(A), \mathrm{rank}(B) \bigr).} \qquad (3)$

If $B = [b_1,\dots,b_n]$ then $AB = [Ab_1,\dots,Ab_n]$, so the columns of $AB$ are linear combinations of those of $A$ and so $AB$ cannot have more linearly independent columns than $A$, that is, $\mathrm{rank}(AB) \le \mathrm{rank}(A)$. Using (5) below, we then have

$\notag \mathrm{rank}(AB) = \mathrm{rank}(B^*A^*) \le \mathrm{rank}(B^*) = \mathrm{rank}(B).$

The latter inequality can be proved without using (5) (our proof of which uses (3)), as follows. Suppose $\mathrm{rank}(B) < \mathrm{rank}(AB) = r$. Let the columns of $Y$ form a basis for $\mathrm{range}(AB)$, so that $Y$ has $r$ linearly independent columns and $Y = ABZ$ for some matrix $Z$ with $r$ columns. Now $\mathrm{rank}(BZ) \le \mathrm{rank}(B) < r$ by the first part, so $BZg = 0$ for some nonzero $g$. But then $Yg = ABZg = 0$, which contradicts the linear independence of the columns of $Y$, so we must have $\mathrm{rank}(B) \ge \mathrm{rank}(AB)$.
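A small numerical check of (3), again with a bare-bones Python rank routine (illustrative only); here the bound is attained:

```python
def rank(M, tol=1e-9):
    """Rank of a small real matrix via Gaussian elimination (illustrative only)."""
    M = [row[:] for row in M]
    m, n, r = len(M), len(M[0]), 0
    for col in range(n):
        if r == m:
            break
        piv = max(range(r, m), key=lambda i: abs(M[i][col]))
        if abs(M[piv][col]) < tol:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, m):
            f = M[i][col] / M[r][col]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def matmul(A, B):
    """Product of two small matrices stored as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A = [[1, 0], [0, 1], [1, 1]]  # 3x2, rank 2
B = [[1, 2, 3], [2, 4, 6]]    # 2x3, rank 1 (second row is twice the first)
AB = matmul(A, B)
print(rank(AB), min(rank(A), rank(B)))  # 1 1
```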

## Rank of a Product of Full-Rank Matrices

We have

$\notag \boxed{ \mathrm{rank}(AB) = r, \quad A\in\mathbb{C}^{m\times r}, \; B\in\mathbb{C}^{r\times n}, \; \mathrm{rank}(A) = \mathrm{rank}(B) = r .} \qquad (4)$

We note that $A^*A$ and $BB^*$ are both nonsingular $r\times r$ matrices by (2), so their product has rank $r$. Using (3),

$\notag r = \mathrm{rank}(A^*A BB^*) \le \mathrm{rank}(A B) \le r,$

and hence $\mathrm{rank}(A B) = r$.

Another important relation is

$\notag \boxed{ \mathrm{rank}(XAY ) = \mathrm{rank}(A), \quad X\in\mathbb{C}^{m\times m} \;\mathrm{and}\; Y\in\mathbb{C}^{n\times n}\; \mathrm{nonsingular}. }$

This follows from (3): $\mathrm{rank}(XAY) \le \mathrm{rank}(A)$, and conversely $\mathrm{rank}(A) = \mathrm{rank}\bigl(X^{-1}(XAY)Y^{-1}\bigr) \le \mathrm{rank}(XAY)$.

## Ranks of $A$ and $A^*$

By (2) and (3) we have $\mathrm{rank}(A) = \mathrm{rank}(A^*A) \le \mathrm{rank}(A^*)$. Interchanging the roles of $A$ and $A^*$ gives $\mathrm{rank}(A^*) \le \mathrm{rank}(A)$ and so

$\notag \boxed{ \mathrm{rank}(A^*) = \mathrm{rank}(A). } \qquad (5)$

In other words, the rank of $A$ is equal to the maximum number of linearly independent rows as well as the maximum number of linearly independent columns.

## Full-Rank Factorization

$A\in\mathbb{C}^{m \times n}$ has rank $r$ if and only if $A = GH$ for some $G\in\mathbb{C}^{m \times r}$ and $H\in\mathbb{C}^{r \times n}$, both of rank $r$, and this is called a full-rank factorization. The existence of such a factorization implies that $\mathrm{rank}(A) = r$ by (4). Conversely, suppose that $A$ has rank $r$. Let the columns of $X\in\mathbb{C}^{m \times r}$ form a basis for the range space of $A$. Then there are $r$-vectors $y_j$ such that $a_j = Xy_j$, $j = 1\colon n$, and with $Y = [y_1,y_2,\dots, y_n]$ we have $A = XY$. Finally, $r = \mathrm{rank}(A) = \mathrm{rank}(XY) \le \mathrm{rank}(Y)$ by (3), and since $\mathrm{rank}(Y) \le r$ we have $\mathrm{rank}(Y) = r$.
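Here is a small concrete check in Python. The factors $G$ and $H$ below were written down by hand for this particular $A$ (its first two columns form a basis for its range); the sketch only verifies a full-rank factorization, it does not construct one.

```python
def rank(M, tol=1e-9):
    """Rank of a small real matrix via Gaussian elimination (illustrative only)."""
    M = [row[:] for row in M]
    m, n, r = len(M), len(M[0]), 0
    for col in range(n):
        if r == m:
            break
        piv = max(range(r, m), key=lambda i: abs(M[i][col]))
        if abs(M[piv][col]) < tol:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, m):
            f = M[i][col] / M[r][col]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def matmul(A, B):
    """Product of two small matrices stored as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A = [[1, 2, 3], [2, 4, 6], [1, 1, 1]]  # rank 2: third column = 2*(second) - (first)
G = [[1, 2], [2, 4], [1, 1]]           # first two columns of A, a basis for range(A)
H = [[1, 0, -1], [0, 1, 2]]            # coordinates of the columns of A in that basis
assert matmul(G, H) == A               # A = GH
print(rank(A), rank(G), rank(H))       # 2 2 2
```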

## Rank and Minors

A characterization of rank that is sometimes used as the definition is that it is the size of the largest nonsingular square submatrix. Equivalently, the rank is the size of the largest nonzero minor, where a minor of size $k$ is the determinant of a $k\times k$ submatrix.

## rank(AB) and rank(BA)

Although $AB$ and $BA$ have some properties in common when both products are defined (notably they have the same nonzero eigenvalues), $\mathrm{rank}(AB)$ is not always equal to $\mathrm{rank}(BA)$. A simple example is $A = x$ and $B = y^*$ with $x$ and $y$ nonzero orthogonal vectors: $AB = xy^*$ has rank $1$, but $BA = y^*x = 0$. An example with square $A$ and $B$ is

$\notag \begin{gathered} A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}, \\ \mathrm{rank}(AB) = \mathrm{rank}\biggl( \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} \biggr) = 0, \quad \mathrm{rank}(BA) = \mathrm{rank}\biggl( \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \biggr) = 1. \end{gathered}$

Note that $A = e_1e_2^T$ and $B = e_1e_1^T$, where $e_i$ has $1$ in the $i$th position and zeros everywhere else. Such matrices are easy to manipulate in this form (e.g., $AB = e_1 (e_2^Te_1)e_1^T = 0$) and are useful for constructing examples.
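The $2\times 2$ example above takes only a few lines of Python to verify (the `rank` routine is a minimal Gaussian elimination estimator, for illustration only):

```python
def rank(M, tol=1e-9):
    """Rank of a small real matrix via Gaussian elimination (illustrative only)."""
    M = [row[:] for row in M]
    m, n, r = len(M), len(M[0]), 0
    for col in range(n):
        if r == m:
            break
        piv = max(range(r, m), key=lambda i: abs(M[i][col]))
        if abs(M[piv][col]) < tol:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, m):
            f = M[i][col] / M[r][col]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def matmul(A, B):
    """Product of two small matrices stored as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A = [[0, 1], [0, 0]]  # A = e_1 e_2^T
B = [[1, 0], [0, 0]]  # B = e_1 e_1^T
print(rank(matmul(A, B)), rank(matmul(B, A)))  # 0 1
```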

## How to Find Rank

If we have a full-rank factorization of $A$ then we can read off the rank from the dimensions of the factors. But finding a full-rank factorization is a nontrivial task. The ultimate full-rank factorization is the SVD

$\notag A = U\Sigma V^T,$

where $U\in\mathbb{R}^{m\times m}$ and $V\in\mathbb{R}^{n\times n}$ are orthogonal, $\Sigma = \mathrm{diag}(\sigma_1,\dots, \sigma_p)\in\mathbb{R}^{m\times n}$, where $p = \min(m,n)$, and $\sigma_1\ge \sigma_2\ge \cdots \ge \sigma_r > \sigma_{r+1} = \cdots = \sigma_p = 0$. The rank of $A$ is $r$, the number of nonzero singular values.

In floating-point arithmetic, the standard algorithms for computing the SVD are numerically stable, that is, the computed singular values are the exact singular values of a matrix $A + \Delta A$ with $\|\Delta A\|_2 \le c_{m,n}u\|A\|_2$, where $c_{m,n}$ is a constant and $u$ is the unit roundoff. Unfortunately, $A + \Delta A$ will typically be full rank when $A$ is rank deficient. For example, consider this computation.

```
>> n = 4; A = zeros(n); A(:) = 1:n^2, svd(A)
A =
     1     5     9    13
     2     6    10    14
     3     7    11    15
     4     8    12    16
ans =
   3.8623e+01
   2.0713e+00
   1.5326e-15
   1.3459e-16
```


The matrix has rank $2$ and the two zero singular values are approximated by computed singular values of order $10^{-15}$. In general, we have no way to know whether tiny computed singular values signify exactly zero singular values. In practice, one typically defines a numerical rank based on a threshold and regards computed singular values less than the threshold as zero. Indeed the MATLAB `rank` function computes the rank as the number of singular values exceeding $2u \max(m,n)\widehat{\sigma}_1$, where $\widehat{\sigma}_1$ is the largest computed singular value. If the data from which the matrix is constructed is uncertain then the definition of numerical rank should take into account the level of uncertainty in the data. Dealing with rank deficiency in the presence of data errors and in finite precision arithmetic is a tricky business.
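The thresholding idea can be illustrated without an SVD. The Python sketch below applies a tolerance inside Gaussian elimination to the same $4\times 4$ matrix; this is a crude stand-in for the singular-value test that MATLAB's `rank` actually uses, shown only to make the "treat tiny computed values as zero" idea concrete.

```python
def rank(M, tol=1e-9):
    """Rank of a small real matrix via Gaussian elimination with a tolerance:
    pivot candidates smaller than tol are treated as exact zeros."""
    M = [row[:] for row in M]
    m, n, r = len(M), len(M[0]), 0
    for col in range(n):
        if r == m:
            break
        piv = max(range(r, m), key=lambda i: abs(M[i][col]))
        if abs(M[piv][col]) < tol:
            continue  # column is numerically zero below the current pivot row
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, m):
            f = M[i][col] / M[r][col]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# The MATLAB assignment A(:) = 1:n^2 fills the matrix column by column:
A = [[1, 5, 9, 13], [2, 6, 10, 14], [3, 7, 11, 15], [4, 8, 12, 16]]
print(rank(A))  # 2: rounding-error residuals of order 1e-16 fall below tol
```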

## References

An excellent reference for further rank relations is Horn and Johnson's *Matrix Analysis*. Stewart describes some of the issues associated with rank-deficient matrices in practical computation.