# What Is a Matrix Norm?

A matrix norm is a function $\|\cdot\| : \mathbb{C}^{m\times n} \to \mathbb{R}$ satisfying

• $\|A\| \ge 0$ with equality if and only if $A=0$ (nonnegativity),
• $\|\alpha A\| =|\alpha| \|A\|$ for all $\alpha\in\mathbb{C}$, $A\in\mathbb{C}^{m\times n}$ (homogeneity),
• $\|A+B\| \le \|A\|+\|B\|$ for all $A,B\in\mathbb{C}^{m\times n}$ (the triangle inequality).

These are analogues of the defining properties of a vector norm.

An important class of matrix norms is the subordinate matrix norms. Given a vector norm on $\mathbb{C}^n$ the corresponding subordinate matrix norm on $\mathbb{C}^{m\times n}$ is defined by

$\notag \|A\| = \displaystyle\max_{x\ne0} \frac{ \|Ax\| }{ \|x\| },$

or, equivalently,

$\|A\| = \displaystyle\max_{\|x\| = 1} \|Ax\|.$

For the $1$-, $2$-, and $\infty$-vector norms it can be shown that

\notag \begin{alignedat}{2} \|A\|_1 &= \max_{1\le j\le n}\sum_{i=1}^m |a_{ij}|, &&\qquad \mbox{``max column sum'',} \\ \|A\|_{\infty} &= \max_{1\le i\le m}\sum_{j=1}^n |a_{ij}|, &&\qquad \mbox{``max row sum'',} \\ \|A\|_2 &= \sigma_{\max}(A), &&\qquad \mbox{spectral norm,} \end{alignedat}

where $\sigma_{\max}(A)$ denotes the largest singular value of $A$. To remember the formulas for the $1$– and $\infty$-norms, note that 1 is a vertical symbol (for columns) and $\infty$ is a horizontal symbol (for rows). For the general Hölder $p$-norm no explicit formula is known for $\|A\|_p$ for $p\ne 1,2,\infty$.
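These formulas are easy to confirm numerically. Here is a small check (a NumPy sketch for illustration; the posts below use MATLAB) comparing each explicit formula with the library's matrix norms:

```python
import numpy as np

# Check the explicit formulas for the 1-, infinity-, and 2-norms against
# the library's matrix norms on a small example.
A = np.array([[1.0, -2.0,  3.0],
              [4.0,  0.0, -5.0]])

max_col_sum = np.abs(A).sum(axis=0).max()            # ||A||_1
max_row_sum = np.abs(A).sum(axis=1).max()            # ||A||_inf
sigma_max = np.linalg.svd(A, compute_uv=False)[0]    # ||A||_2

assert np.isclose(max_col_sum, np.linalg.norm(A, 1))
assert np.isclose(max_row_sum, np.linalg.norm(A, np.inf))
assert np.isclose(sigma_max, np.linalg.norm(A, 2))
```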

An immediate implication of the definition of subordinate matrix norm is that

$\notag \|AB\| \le \|A\| \|B\|$

whenever the product is defined. A norm satisfying this condition is called consistent or submultiplicative.

Another commonly used norm is the Frobenius norm,

$\notag \|A\|_F = \biggl( \displaystyle\sum_{i=1}^m \sum_{j=1}^n |a_{ij}|^2 \biggr)^{1/2} = \bigl( \mathrm{trace}(A^*\!A) \bigr)^{1/2}.$

The Frobenius norm is not subordinate to any vector norm (since $\|I_n\|_F = n^{1/2}$, whereas $\|I_n\| = 1$ for any subordinate norm), but it is consistent.
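Both the entrywise and trace expressions for the Frobenius norm, and the value $\|I_n\|_F = n^{1/2}$ quoted above, can be verified directly (a NumPy sketch for illustration):

```python
import numpy as np

# The two expressions for the Frobenius norm agree, and ||I_n||_F = sqrt(n),
# which is why the Frobenius norm cannot be subordinate to a vector norm.
A = np.array([[1.0, 2.0], [3.0, 4.0]])

fro_entries = np.sqrt((np.abs(A)**2).sum())
fro_trace = np.sqrt(np.trace(A.conj().T @ A))

assert np.isclose(fro_entries, fro_trace)
assert np.isclose(fro_entries, np.linalg.norm(A, 'fro'))

n = 5
assert np.isclose(np.linalg.norm(np.eye(n), 'fro'), np.sqrt(n))
```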

The $2$-norm and the Frobenius norm are unitarily invariant: they satisfy $\|UAV\| = \|A\|$ for any unitary matrices $U$ and $V$. For the Frobenius norm the invariance follows easily from the trace formula.

As for vector norms, all matrix norms are equivalent in that any two norms differ from each other by at most a constant. This table shows the optimal constants $\alpha_{pq}$ such that $\|A\|_p \le \alpha_{pq} \|A\|_q$ for $A\in\mathbb{C}^{m\times n}$ and the norms mentioned above. Each inequality in the table is attained for some $A$.

For any $A\in\mathbb{C}^{n\times n}$ and any consistent matrix norm,

$\notag \rho(A) \le \|A\|, \qquad\qquad (1)$

where $\rho$ is the spectral radius (the largest absolute value of any eigenvalue). To prove this inequality, let $\lambda$ be an eigenvalue of $A$ and $x$ the corresponding eigenvector (necessarily nonzero), and form the matrix $X=[x,x,\dots,x] \in \mathbb{C}^{n\times n}$. Then $AX = \lambda X$, so $|\lambda| \|X\| = \|AX\| \le \|A\| \, \|X\|$, giving $|\lambda|\le\|A\|$ since $X \ne 0$. For a subordinate norm it suffices to take norms in the equation $Ax = \lambda x$.

A useful relation is

$\notag \|A\|_2 \le \bigl( \|A\|_1 \|A\|_{\infty}\bigr)^{1/2}, \qquad\qquad (2)$

which follows from $\|A\|_2^2 = \|A^*A\|_2 = \rho(A^*A) \le \| A^*A \|_{\infty} \le \|A^*\|_{\infty} \|A\|_{\infty} = \|A\|_1 \|A\|_{\infty}$, where the first two equalities are obtained from the singular value decomposition and we have used (1).
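Both (1) and (2) are easy to confirm on an example; the following NumPy sketch checks them for a fixed nonsymmetric matrix (the matrix itself is arbitrary):

```python
import numpy as np

# Check rho(A) <= ||A|| (inequality (1)) for the 1-, 2-, and infinity-norms,
# and the bound ||A||_2 <= sqrt(||A||_1 ||A||_inf) (inequality (2)).
A = np.array([[2.0, -1.0,  0.0],
              [1.0,  3.0, -2.0],
              [0.0,  1.0,  1.0]])

rho = np.abs(np.linalg.eigvals(A)).max()   # spectral radius
n1 = np.linalg.norm(A, 1)
n2 = np.linalg.norm(A, 2)
ninf = np.linalg.norm(A, np.inf)

for nrm in (n1, n2, ninf):
    assert rho <= nrm + 1e-12              # inequality (1)
assert n2 <= np.sqrt(n1 * ninf) + 1e-12    # inequality (2)
```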

## Mixed Subordinate Matrix Norms

A more general subordinate matrix norm can be defined by taking different vector norms in the numerator and denominator:

$\notag \|A\|_{\alpha,\beta} = \displaystyle\max_{x\ne 0} \frac{ \|Ax\|_{\beta} }{ \|x\|_{\alpha} }.$

Some authors denote this norm by $\|A\|_{\alpha\to\beta}$.

A useful characterization of $\|A\|_{\alpha,\beta}$ is given in the next result. Recall that $\|\cdot\|^D$ denotes the dual of the vector norm $\|\cdot\|$.

Theorem 1. For $A\in\mathbb{C}^{m\times n}$,

$\notag \|A\|_{\alpha,\beta} = \displaystyle\max_{x\in\mathbb{C}^n \atop y \in\mathbb{C}^m} \frac{\mathrm{Re}\, y^*Ax}{\|y\|_\beta^D \|x\|_\alpha}.$

Proof. We have

\notag \begin{aligned} \displaystyle \max_{x\in\mathbb{C}^n \atop y \in\mathbb{C}^m} \frac{\mathrm{Re}\, y^*Ax}{\|y\|_\beta^D \|x\|_\alpha} &= \max_{x\in\mathbb{C}^n} \frac{1}{\|x\|_\alpha} \max_{y\in\mathbb{C}^m} \frac{\mathrm{Re}\, y^*Ax}{\|y\|_\beta^D} \\ &= \max_{x\in\mathbb{C}^n} \frac{\| Ax \|_\beta }{\|x\|_\alpha} = \|A\|_{\alpha,\beta}, \end{aligned}

where the second equality follows from the definition of dual vector norm and the fact that the dual of the dual norm is the original norm. $\square$

We can now obtain a connection between the norms of $A$ and $A^*$. Here, $\|A^*\|_{\beta^D,\alpha^D}$ denotes $\max_{x\ne 0} \|A^*x\|_{\alpha}^D / \|x\|_{\beta}^D$.

Theorem 2. If $A\in\mathbb{C}^{m\times n}$ then $\|A\|_{\alpha,\beta} = \|A^*\|_{\beta^D,\alpha^D}$.

Proof. Using Theorem 1, we have

$\notag \|A\|_{\alpha,\beta} = \displaystyle\max_{x\in\mathbb{C}^n \atop y \in\mathbb{C}^m} \frac{\mathrm{Re}\, y^*Ax}{\|y\|_\beta^D \|x\|_\alpha} = \displaystyle\max_{x\in\mathbb{C}^n \atop y \in\mathbb{C}^m} \frac{\mathrm{Re}\, x^*(A^*y)}{\|x\|_\alpha \|y\|_{\beta^D}} = \|A^*\|_{\beta^D,\alpha^D}. \quad\square$

If we take the $\alpha$– and $\beta$-norms to be the same $p$-norm then we have $\|A\|_p = \|A^*\|_q$, where $p^{-1} + q^{-1} = 1$ (giving, in particular, $\|A\|_2 = \|A^*\|_2$ and $\|A\|_1 = \|A^*\|_\infty$, which are easily obtained directly).

Now we give explicit formulas for the $(\alpha,\beta)$ norm when $\alpha$ or $\beta$ is $1$ or $\infty$ and the other is a general norm.

Theorem 3. For $A\in\mathbb{C}^{m\times n}$,

\notag \begin{alignedat}{2} \|A\|_{1,\beta} &= \max_j \| A(:,j) \|_{\beta}, &\qquad\qquad& (3) \\ \|A\|_{\alpha,\infty} &= \max_i \|A(i,:)^*\|_{\alpha}^D, &\qquad\qquad& (4) \\ \end{alignedat}

For $A\in\mathbb{R}^{m\times n}$,

$\notag \|A\|_{\infty,\beta} = \displaystyle\max_{z\in Z} \|Az\|_{\beta}, \qquad\qquad (5)$

where

$Z = \{ z\in\mathbb{R}^n: z_i = \pm 1~\mathrm{for~all}~i \,\},$

and if $A$ is symmetric positive semidefinite then

$\notag \|A\|_{\infty,1} = \displaystyle\max_{z\in Z} z^T\!Az.$

Proof. For (3),

$\notag \|Ax\|_{\beta} = \biggl\| \sum_j x_j A(\colon,j) \biggr\|_{\beta} \le \displaystyle\max_j \| A(\colon,j) \|_{\beta} \sum_j |x_j|,$

with equality for $x=e_k$, where the maximum is attained for $j=k$. For (4), using the Hölder inequality,

$\|Ax\|_{\infty} = \displaystyle\max_i | A(i,\colon) x | \le \max_i \bigl( \|A(i,\colon)^*\|_{\alpha}^D \|x\|_{\alpha} \bigr) = \|x\|_{\alpha} \max_i \|A(i,\colon)^*\|_{\alpha}^D.$

Equality is attained for an $x$ that gives equality in the Hölder inequality involving the $k$th row of $A$, where the maximum is attained for $i=k$.

Turning to (5), we have $\|A\|_{\infty,\beta} = \max_{ \|x\|_\infty =1} \|Ax\|_\beta$. The unit cube $\{\, x\in\mathbb{R}^n: -e \le x \le e\,\}$, where $e = [1,1,\dots,1]^T$, is a convex polyhedron, so any point within it is a convex combination of the vertices, which are the elements of $Z$. Hence $\|x\|_\infty = 1$ implies

$\notag x = \displaystyle\sum_{z\in Z} \lambda_z z, \quad \mathrm{where} \quad \lambda_z \ge 0, \quad \sum_{z\in Z} \lambda_z = 1$

and then

$\notag \|Ax\|_\beta = \biggl\| \displaystyle\sum_{z\in Z} \lambda_z Az \biggr\|_\beta \le \max_{z\in Z} \|Az\|_\beta.$

Hence $\max_{\|x\|_\infty = 1} \|Ax\|_\beta \le \max_{z\in Z} \|Az\|_\beta$, but trivially $\max_{z\in Z} \|Az\|_\beta \le \max_{\|x\|_\infty = 1} \|Ax\|_\beta$ and (5) follows.

Finally, if $A$ is symmetric positive semidefinite let $\xi_z = \mathrm{sign}(Az) \in Z$. Then, using a Cholesky factorization $A = R^T\!R$ (which exists even if $A$ is singular) and the Cauchy–Schwarz inequality,

\notag \begin{aligned} \max_{z\in Z} \|Az\|_1 &= \max_{z\in Z} \xi_z^T Az = \max_{z\in Z} \xi_z^T R^T\!Rz\\ &= \max_{z\in Z} (R\xi_z)^T Rz \le \max_{z\in Z} \|R\xi_z\|_2 \|Rz\|_2\\ &\le \max_{z\in Z} \|Rz\|_2^2\\ &= \max_{z\in Z} z^T\!Az. \end{aligned}

Conversely, for $z\in Z$ we have

$\notag z^T\!Az = |z^T\!Az| \le |z|^T|Az| = e^T |Az| = \|Az\|_1,$

so $\max_{z\in Z} z^T\!Az \le \max_{z\in Z} \|Az\|_1$. Hence $\max_{z\in Z} z^T\!Az = \max_{z\in Z} \|Az\|_1 = \|A\|_{\infty,1}$, using (5). $\square$

As special cases of (3) and (4) we have

\notag \begin{aligned} \|A\|_{1,\infty} &= \max_{i,j} |a_{ij}|, \qquad\qquad\qquad (6) \\ \|A\|_{2,\infty} &= \max_i \|A(i,:)^*\|_2, \\ \|A\|_{1,2} &= \max_j \|A(:,j)\|_2. \end{aligned}

We also obtain by using Theorem 2 and (5), for $A\in\mathbb{R}^{m\times n}$,

$\notag \|A\|_{2,1} = \displaystyle\max_{z\in Z} \|A^Tz\|_2.$
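For small $n$ the set $Z$ can be enumerated, so these formulas can be checked by brute force. A NumPy sketch (the matrix is arbitrary):

```python
import itertools
import numpy as np

# Brute-force checks of formulas (3), (5), and (6) on a small real matrix:
# ||A||_{1,2} is the largest 2-norm of a column, ||A||_{1,inf} is the
# largest |a_ij|, and ||A||_{inf,1} is attained at a vector of +-1s.
A = np.array([[ 1.0, -3.0],
              [ 2.0,  4.0],
              [-5.0,  0.0]])
m, n = A.shape

norm_1_2 = max(np.linalg.norm(A[:, j]) for j in range(n))    # (3), beta = 2
norm_1_inf = np.abs(A).max()                                 # (6)

Z = [np.array(z) for z in itertools.product([-1.0, 1.0], repeat=n)]
norm_inf_1 = max(np.abs(A @ z).sum() for z in Z)             # (5), beta = 1

# Sanity check: no x with ||x||_inf = 1 beats the maximum over Z.
rng = np.random.default_rng(0)
for _ in range(100):
    x = rng.uniform(-1, 1, n)
    x /= np.abs(x).max()
    assert np.abs(A @ x).sum() <= norm_inf_1 + 1e-12
```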

The $(2,\infty)$-norm has recently found use in statistics (Cape, Tang, and Priebe, 2019), the motivation being that because it satisfies

$\notag \|A\|_{2,\infty} \le \|A\|_2 \le m^{1/2} \|A\|_{2,\infty},$

the $(2,\infty)$-norm can be much smaller than the $2$-norm when $m \gg n$ and so can be a better norm to use in bounds. The $(2,\infty)$– and $(\infty,2)$-norms are used by Rebrova and Vershynin (2018) in bounding the $2$-norm of a random matrix after zeroing a submatrix. They note that the $2$-norm of a random $n\times n$ matrix involves maximizing over infinitely many random variables, while the $(\infty,2)$-norm and $(2,\infty)$-norm involve only $2^n$ and $n$ random variables, respectively.

The $(\alpha,\beta)$ norm is not consistent, but for any vector norm $\gamma$, we have

\notag \begin{aligned} \|AB\|_{\alpha,\beta} &= \max_{x\ne0} \frac{ \|ABx\|_\beta }{ \|x\|_\alpha} = \max_{x\ne0} \frac{ \|A(Bx)\|_\beta }{ \|Bx\|_\gamma} \frac{ \|Bx\|_\gamma}{ \|x\|_\alpha} \le \max_{x\ne0} \frac{ \|A(Bx)\|_\beta }{ \|Bx\|_\gamma} \max_{x\ne0} \frac{ \|Bx\|_\gamma}{ \|x\|_\alpha}\\ &\le \|A\|_{\gamma,\beta} \|B\|_{\alpha,\gamma}. \end{aligned}

## Matrices With Constant $p$-Norms

Let $A$ be a nonnegative square matrix whose row and column sums are all equal to $\mu$. This class of matrices includes magic squares and doubly stochastic matrices. We have $\|A\|_1 = \|A\|_{\infty} = \mu$, so $\|A\|_2 \le \mu$ by (2). But $Ae = \mu e$ for the vector $e$ of $1$s, so $\mu$ is an eigenvalue of $A$ and hence $\|A\|_2 \ge \mu$ by (1). Hence $\|A\|_1 = \|A\|_2 = \|A\|_{\infty} = \mu$. In fact, $\|A\|_p = \mu$ for all $p\in[1,\infty]$ (see Higham (2002, Prob. 6.4) and Stoer and Witzgall (1962)).
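A quick check with the $3\times 3$ Lo Shu magic square, which has $\mu = 15$ (a NumPy sketch for illustration):

```python
import numpy as np

# The Lo Shu 3x3 magic square has all row and column sums equal to mu = 15,
# so by the argument above ||A||_1 = ||A||_2 = ||A||_inf = 15.
A = np.array([[2.0, 7.0, 6.0],
              [9.0, 5.0, 1.0],
              [4.0, 3.0, 8.0]])
mu = 15.0

assert np.allclose(A.sum(axis=0), mu) and np.allclose(A.sum(axis=1), mu)
for p in (1, 2, np.inf):
    assert np.isclose(np.linalg.norm(A, p), mu)
```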

## Computing Norms

The problem of computing or estimating $\|A\|_{\alpha,\beta}$ may be difficult for two reasons: we may not have an explicit formula for the norm, or $A$ may be given implicitly, as $A = f(B)$, $A = B^{-1}$, or $A = BC$, for example, and we may wish to compute the norm without forming $A$. Both difficulties are overcome by a generalization of the power method proposed by Tao in 1975 for arbitrary norms and independently Boyd in 1984 for $\alpha$ and $\beta$ both $p$-norms (see Higham, 2002, Sec.$\,$15.2). The power method accesses $A$ only through matrix–vector products with $A$ and $A^*$, so $A$ does not need to be explicitly available.

Let us focus on the case where $\alpha$ and $\beta$ are the same $p$-norm. The power method is implemented in the Matrix Computation Toolbox as pnorm and we use it here to estimate the $\pi$-norm and the $99$-norm, which we compare with the $1$-, $2$-, and $\infty$-norms.

>> A = gallery('frank',4)
A =
4     3     2     1
3     3     2     1
0     2     2     1
0     0     1     1
>> [norm(A,1) norm(A,2) pnorm(A,pi), pnorm(A,99), norm(A,inf)]
ans =
8.0000e+00   7.6237e+00   8.0714e+00   9.8716e+00   1.0000e+01
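For readers without the toolbox, here is a minimal NumPy sketch of the power method for the $p$-norm with $1 < p < \infty$. It is not the pnorm code itself; the names dualvec and pnorm_est are mine, and the iteration follows the Boyd/Tao scheme of alternating products with $A$ and $A^T$ via the duality map of the $p$-norm. The estimate is always a lower bound on $\|A\|_p$.

```python
import numpy as np

def dualvec(y, p):
    """A vector z with z^T y = ||y||_p and ||z||_q = 1, where 1/p + 1/q = 1."""
    nrm = np.linalg.norm(y, p)
    if nrm == 0:
        return y.copy()
    return np.sign(y) * np.abs(y)**(p - 1) / nrm**(p - 1)

def pnorm_est(A, p, maxiter=100, tol=1e-10):
    """Power method estimate (a lower bound) of the subordinate p-norm of A."""
    q = p / (p - 1)                    # dual exponent, so needs 1 < p < inf
    x = np.ones(A.shape[1])
    x /= np.linalg.norm(x, p)
    est = 0.0
    for _ in range(maxiter):
        y = A @ x
        est = np.linalg.norm(y, p)     # ||Ax||_p with ||x||_p = 1: a lower bound
        z = A.T @ dualvec(y, p)
        if np.linalg.norm(z, q) <= z @ x * (1 + tol):
            break                      # stationarity test: stop
        x = dualvec(z, q)
    return est

A = np.array([[4.0, 3, 2, 1],
              [3.0, 3, 2, 1],
              [0.0, 2, 2, 1],
              [0.0, 0, 1, 1]])         # gallery('frank',4) from above
est2 = pnorm_est(A, 2)                 # for p = 2 this is the usual power method
```

For $p = 2$ the iteration reduces to the classical power method on $A^TA$, so on this matrix the estimate converges to the spectral norm $7.6237$ quoted in the MATLAB output above.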


The plot in the top half of the following figure shows the estimated $p$-norm for the matrix gallery('binomial',8) for $p \in[1,25]$. This shape of curve is typical, because, as the plot in the lower half indicates, $\log(\|A\|_p)$ is a convex function of $1/p$ for $p\ge 1$ (see Higham, 2002, Sec.$\,$6.3).

The power method for the $(1,\infty)$ norm, which is the max norm in (6), is investigated by Higham and Relton (2016) and extended to estimate the $k$ largest values $a_{ij}$ or $|a_{ij}|$.

## A Norm Related to Grothendieck’s Inequality

A norm on $\mathbb{C}^{n\times n}$ can be defined by

\notag \begin{aligned} \|A\|^{(t)} = \max \biggl\{ \bigg|\sum_{i,j=1}^n a_{ij} y^*_jx_i \bigg| : x_i,y_j \in\mathbb{C}^t, \; \|x_i\|_2 \le 1, \; \|y_j\|_2 \le 1, \; i,j=1\colon n \, \biggr\}. \end{aligned}

Note that

$\notag \|A\|^{(1)} = \displaystyle\max_{0\ne x,y\in\mathbb{C}^n} \frac{|y^*Ax|}{\|y\|_\infty \|x\|_\infty} = \|A\|_{\infty,1}.$

A famous inequality of Grothendieck states that $\|A\|^{(n)} \le c \|A\|^{(1)}$ for all $A$, for some constant $c$ independent of $n$. Much work has been done to estimate the smallest possible constant $c$, which is known to be in the interval $[1.33,1.41]$, or $[1.67,1.79]$ for the analogous inequality for real data. See Friedland, Lim, and Zhang (2018) and the references therein.

## Notes

The formula (5) with $\beta = 1$ is due to Rohn (2000), who shows that evaluating it is NP-hard. For general $\beta$, the formula is given by Lewis (2010).

Computing mixed subordinate matrix norms based on $p$-norms is generally NP-hard (Hendrickx and Olshevsky, 2010).

## References

This is a minimal set of references, which contain further useful references within.

# What’s New in MATLAB R2021b?

In this post I discuss some of the new features in MATLAB R2021b that captured my interest. See the release notes for a detailed list of the many changes in MATLAB and its toolboxes. For my articles about new features in earlier releases, see here.

## High Order Runge–Kutta ODE Solvers

The MATLAB suite of solvers for ordinary differential equations previously contained eight solvers, including three Runge–Kutta solvers: ode23, ode23tb, and ode45. The suite has been augmented with two new high-order Runge–Kutta solvers: ode78 uses 7th and 8th order Runge–Kutta formulas and ode89 uses 8th and 9th order Runge–Kutta formulas.

The documentation says that

• ode78 and ode89 may be more efficient than ode45 on non-stiff problems that are smooth except possibly for a few isolated discontinuities.
• ode89 may be more efficient than ode78 on very smooth problems, when integrating over long time intervals, or when tolerances are tight.
• ode78 and ode89 may not be as fast or as accurate as ode45 in single precision.

## Matrix to Scalar Power

The MATLAB mpower function is called by an exponentiation A^z when A is a matrix and z is a real or complex scalar. For the meaning of such arbitrary matrix powers see What Is a Fractional Matrix Power?

Previously, mpower used a diagonalization of A. For a matrix that is not diagonalizable, or is close to being not diagonalizable, the results could be inaccurate. In R2021b a Schur–Padé algorithm, which employs a Schur decomposition in conjunction with Padé approximation, is used; it works for all matrices. The difference in accuracy between the old and new versions of mpower is clearly demonstrated by computing the square root of a 2-by-2 Jordan block (a difficult case because it is defective).

% R2021a
>> A = [1 1; 0 1], X = A^(0.5), residual = A - X^2
A =
1     1
0     1
X =
1     0
0     1
residual =
0     1
0     0

% R2021b
>> A = [1 1; 0 1], X = A^(0.5), residual = A - X^2
A =
1     1
0     1
X =
1.0000e+00   5.0000e-01
0   1.0000e+00
residual =
0     0
0     0


## Functions on nD Arrays

The new function pagesvd computes the singular value decomposition (SVD) of pages of nD arrays. A page is a 2D slice (i.e., a matrix) formed by taking all the elements in the first two dimensions and fixing the later subscripts. Related functions include pagemtimes for matrix multiplication and pagetranspose for transposition, both introduced in R2020b. Better performance can be expected from these page functions than from explicit looping over the pages, especially for small-sized pages.

## Improvements to Editor

Several improvements have been introduced to the MATLAB Editor. Ones that caught my eye are the ability to edit rectangular areas of text, automatic completion of block endings and delimiters, wrapping of comments, and changing the case of selected text. I do all my editing in Emacs, using the MATLAB major mode, and these sorts of things are either available or can be added by customization. However, these are great additions for MATLAB Editor users.

# What Is a Vector Norm?

A vector norm measures the size, or length, of a vector. For complex $n$-vectors, a vector norm is a function $\|\cdot\| : \mathbb{C}^n \to \mathbb{R}$ satisfying

• $\|x\| \ge 0$ with equality if and only if $x=0$ (nonnegativity),
• $\|\alpha x\| =|\alpha| \|x\|$ for all $\alpha\in\mathbb{C}$, $x\in\mathbb{C}^n$ (homogeneity),
• $\|x+y\| \le \|x\|+\|y\|$ for all $x,y\in\mathbb{C}^n$ (the triangle inequality).

An important class of norms is the Hölder $p$-norms

$\|x\|_p = \biggl( \displaystyle\sum_{i=1}^n |x_i|^p \biggr)^{1/p}, \quad p\ge 1. \qquad\qquad (1)$

The $\infty$-norm is interpreted as $\lim_{p\to\infty}\|x\|_p$ and is given by

$\notag \|x\|_{\infty} = \displaystyle\max_{1\le i\le n} |x_i|.$

Other important special cases are

\notag \begin{alignedat}{2} \|x\|_1 &= \sum_{i=1}^n |x_i|, &\quad& \mbox{``Manhattan'' or ``taxicab'' norm}, \\ \|x\|_2 &= \biggl( \sum_{i=1}^n |x_i|^2 \biggr)^{1/2} = (x^*x)^{1/2}, &\quad& \mbox{Euclidean length}. \end{alignedat}

A useful concept is that of the dual norm associated with a given vector norm, which is defined by

$\notag \|y\|^D = \displaystyle\max_{x\ne0} \displaystyle\frac{\mathop{\mathrm{Re}}x^* y}{\|x\|}.$

The maximum is attained because the definition is equivalent to $\|y\|^D = \max\{ \, \mathop{\mathrm{Re}} x^*y: \|x\| = 1\,\}$, in which we are maximizing a continuous function over a compact set. Importantly, the dual of the dual norm is the original norm (Horn and Johnson, 2013, Thm.$\,$ 5.5.9(c)).

We can rewrite the definition of dual norm, using the homogeneity of vector norms, as

$\notag \|y\|^D = \displaystyle\max_{|c| = 1} \| cy \|^D = \max_{|c| = 1} \max_{x\ne 0} \frac{\mathop{\mathrm{Re}} x^*(cy) }{\|x\|} = \max_{x\ne 0} \max_{|c| = 1} \frac{\mathop{\mathrm{Re}} c(x^*y) }{\|x\|} = \max_{x\ne 0} \frac{ |x^*y| }{\|x\|}.$

Hence we have the attainable inequality

$\notag |x^*y| \le \|x\| \, \|y\|^D, \qquad\qquad (2)$

which is the generalized Hölder inequality.

The dual of the $p$-norm is the $q$-norm, where $p^{-1} + q^{-1} = 1$, so for the $p$-norms the inequality (2) becomes the (standard) Hölder inequality,

$\notag |x^*y| \le \|x\|_p \, \|y\|_q, \quad \displaystyle\frac{1}{p} + \frac{1}{q} = 1.$

An important special case is the Cauchy–Schwarz inequality,

$\notag |x^*y| \le \|x\|_2 \, \|y\|_2.$
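A quick numerical check of the Hölder inequality, including the Cauchy–Schwarz case $p = q = 2$, on random complex vectors (a NumPy sketch for illustration):

```python
import numpy as np

# Check the Hoelder inequality |x^* y| <= ||x||_p ||y||_q, 1/p + 1/q = 1,
# on random complex vectors, including the Cauchy-Schwarz case p = q = 2.
rng = np.random.default_rng(42)
for p in (1.5, 2.0, 3.0):
    q = p / (p - 1)
    for _ in range(100):
        x = rng.standard_normal(6) + 1j * rng.standard_normal(6)
        y = rng.standard_normal(6) + 1j * rng.standard_normal(6)
        lhs = abs(np.vdot(x, y))                    # |x^* y|
        rhs = np.linalg.norm(x, p) * np.linalg.norm(y, q)
        assert lhs <= rhs * (1 + 1e-12)
```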

The notation $\|x\|_0$ is used to denote the number of nonzero entries in $x$, even though it is not a vector norm and is not obtained from (1) with $p = 0$. In portfolio optimization, if $x_i$ specifies how much to invest in stock $i$ then the inequality $\|x\|_0 \le k$ says “invest in at most $k$ stocks”.

In numerical linear algebra, vector norms play a crucial role in the definition of a subordinate matrix norm, as we will explain in the next post in this series.

All norms on $\mathbb{C}^n$ are equivalent in the sense that for any two norms $\|\cdot\|_\alpha$ and $\|\cdot\|_\beta$ there exist positive constants $\nu_1$ and $\nu_2$ such that

$\nu_1\|x\|_\alpha \le \|x\|_\beta \le \nu_2 \|x\|_\alpha \quad \mathrm{for~all}~x\in \mathbb{C}^n.$

For example, it is easy to see that

\notag \begin{aligned} \|x\|_2 &\le \|x\|_1 \le \sqrt{n} \|x\|_2,\\ \|x\|_\infty &\le \|x\|_2 \le \sqrt{n} \|x\|_\infty,\\ \|x\|_\infty &\le \|x\|_1 \le n \|x\|_\infty. \end{aligned}

The 2-norm is invariant under unitary transformations: if $Q^*Q = I$, then $\|Qx\|_2^2 = x^*Q^* Qx = x^*x = \|x\|_2^2$.

Care must be taken to avoid overflow and (damaging) underflow when evaluating a vector $p$-norm in floating-point arithmetic for $p\ne 1,\infty$. One can simply use the formula $\|x\|_p = \| (x/\|x\|_{\infty}) \|_p \|x\|_{\infty}$, but this requires two passes over the data (the first to evaluate $\|x\|_{\infty}$). For more efficient one-pass algorithms for the $2$-norm see Higham (2002, Sec. 21.8) and Harayama et al. (2021).
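A minimal sketch of the two-pass scaled evaluation (in NumPy for illustration), showing the overflow that the naive formula suffers for huge entries:

```python
import numpy as np

# Scaled two-pass evaluation of ||x||_p: dividing by ||x||_inf first
# avoids the overflow that the naive formula hits for huge entries.
def pnorm_scaled(x, p):
    t = np.abs(x).max()                       # first pass: ||x||_inf
    if t == 0:
        return 0.0
    return t * (np.sum((np.abs(x) / t)**p))**(1.0 / p)

x = np.array([3e200, 4e200])
with np.errstate(over='ignore'):
    naive = np.sum(np.abs(x)**2)**0.5         # squares overflow to inf
safe = pnorm_scaled(x, 2)                     # the true value, 5e200

assert not np.isfinite(naive)
assert np.isclose(safe, 5e200)
```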

## References

This is a minimal set of references, which contain further useful references within.

Note: This article was revised on October 12, 2021 to change the definition of dual norm to use $\mathrm{Re}$.

# Can We Solve Linear Algebra Problems at Extreme Scale and Low Precisions?

The largest dense linear systems being solved today are of order $n = 10^7$, and future exascale computer systems will be able to tackle even larger problems. Rounding error analysis shows that the computed solution satisfies a componentwise backward error bound that, under favorable assumptions, is of order $nu$, where $u$ is the unit roundoff of the floating-point arithmetic: $u \approx 10^{-16}$ for double precision and $u \approx 10^{-8}$ for single precision. This backward error bound cannot guarantee any stability for single precision solution of today’s largest problems and suggests a loss of half the digits in the backward error for double precision.

Half precision floating-point arithmetic is now readily available in hardware, in both the IEEE binary16 format and the bfloat16 format, and it is increasingly being used in machine learning and in scientific computing more generally. For the computation of the inner product of two $n$-vectors the backward error bound is again of order $nu$, and this bound exceeds $1$ for $n \ge 684$ for both half precision formats, suggesting a potentially complete loss of numerical stability. Yet inner products with $n \ge 684$ are successfully used in half precision computations in practice.

The error bounds I have referred to are upper bounds and so bound the worst case over all possible rounding errors. Their main purpose is to reveal potential instabilities rather than to provide realistic error estimates. Yet we do need to know the limits of what we can compute, and for mission critical applications we need to be able to guarantee a successful computation.

Can we understand the behavior of linear algebra algorithms at extreme scale and in low precision floating-point arithmetics?

To a large extent the answer is yes if we exploit three different features to obtain smaller error bounds.

## Blocked Algorithms

Many algorithms are implemented in blocked form. For example, an inner product $x^Ty$ of two $n$-vectors $x$ and $y$ can be computed as

\notag \begin{aligned} s_i &= x((i-1)b+1:ib)^T y((i-1)b+1:ib), \quad i = 1:k,\\ s &= s_1 + s_2 + \dots + s_k, \end{aligned}

where $n = kb$ and $b \ll n$ is the block size. The inner product has been broken into $k$ smaller inner products of size $b$, which are computed independently then summed. Many linear algebra algorithms are blocked in an analogous way, where the blocking is into submatrices with $b$ rows or $b$ columns (or both). Careful error analysis shows that a blocked algorithm has an error bound about a factor of $b$ smaller than that for the corresponding unblocked algorithm. Practical block sizes for matrix algorithms are typically $128$ or $256$, so blocking brings a substantial reduction in the error bounds.
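The blocked inner product above is a few lines of code (a NumPy sketch for illustration; the function name blocked_dot is mine):

```python
import numpy as np

# Blocked inner product: x^T y is split into k = n/b partial inner
# products s_i of size b, which are then summed.
def blocked_dot(x, y, b):
    s = 0.0
    for i in range(0, x.size, b):
        s += float(x[i:i+b] @ y[i:i+b])   # partial inner product s_i
    return s

rng = np.random.default_rng(1)
n, b = 1024, 128
x = rng.standard_normal(n)
y = rng.standard_normal(n)
s = blocked_dot(x, y, b)                  # agrees with x^T y up to roundoff
```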

In fact, one can do even better than an error bound of order $(n/b)u$. By computing the sum $s= s_1 + s_2 + \dots + s_k$ with a more accurate summation method the error constant is further reduced to $bu + O(u^2)$ (this is the FABsum method of Blanchard et al. (2020)).

## Architectural Features

Intel x86 processors support an 80-bit extended precision format with a 64-bit significand, which is compatible with that specified in the IEEE standard. When a compiler uses this format with 80-bit registers to accumulate sums and inner products it is effectively working with a unit roundoff of $2^{-64}$ rather than $2^{-53}$ for double precision, giving error bounds smaller by a factor up to $2^{11} = 2048$.

Some processors have a fused multiply–add (FMA) operation, which computes a combined multiplication and addition $x + yz$ with one rounding error instead of two. This results in a reduction in error bounds by a factor $2$.

Mixed precision block FMA operations $D = C + AB$, with matrices $A,B,C,D$ of fixed size, are available on Google tensor processing units, NVIDIA GPUs, and in the ARMv8-A architecture. For half precision inputs these devices can produce results of single precision quality, which can give a significant boost in accuracy when block FMAs are chained together to form a matrix product of arbitrary dimension.

## Probabilistic Bounds

Worst-case rounding error bounds suffer from the problem that they are not attainable for most specific sets of data and are unlikely to be nearly attained. Stewart (1990) noted that

To be realistic, we must prune away the unlikely. What is left is necessarily a probabilistic statement.

Theo Mary and I have recently developed probabilistic rounding error analysis, which makes probabilistic assumptions on the rounding errors and derives bounds that hold with a certain probability. The key feature of the bounds is that they are proportional to $\sqrt{n}u$ when a corresponding worst-case bound is proportional to $nu$. In the most general form of the analysis (Connolly, Higham, and Mary, 2021), the rounding errors are assumed to be mean independent and of mean zero, where mean independence is a weaker assumption than independence.

## Putting the Pieces Together

The different features we have described can be combined to obtain significantly smaller error bounds. If we use a blocked algorithm with block size $b \ll n$ then in an inner product the standard error bound of order $nu$ reduces to a probabilistic bound of order $(\sqrt{n/b})u$, which is a significant reduction. Block FMAs and extended precision registers provide further reductions.

For example, for a linear system of order $10^7$ solved in single precision with a block size of $256$, the probabilistic error bound is of order $10^{-5}$ versus $1$ for the standard worst-case bound. If FABsum is used then the bound is further reduced.

Our conclusion is that we can successfully solve linear algebra problems of greater size and at lower precisions than the standard rounding error analysis suggests. A priori bounds will always be pessimistic, though. One should compute a posteriori residuals or backward errors (depending on the problem) in order to assess the quality of a numerical solution.

For full details of the work summarized here, see Higham (2021).

# Videos from New Directions in Numerical Linear Algebra and High Performance Computing Workshop

In July 2021, Sven Hammarling, Françoise Tisseur and I organized an online workshop New Directions in Numerical Linear Algebra and High Performance Computing. The workshop brought together researchers working in numerical linear algebra and high performance computing to discuss current developments and challenges in the light of evolving computer hardware. It was held to honour Jack Dongarra on the occasion of his 70th birthday. The workshop had been postponed from July 2020 as a result of the pandemic.

Videos of the talks are now available on the Numerical Linear Algebra Group’s YouTube channel and are included below. Slides for the talks are available on the workshop website.

Sven Hammarling (The University of Manchester), “Jack Dongarra”.

Iain Duff (STFC-RAL and CERFACS), “Jack”

James Demmel (University of California, Berkeley), “New Communication-Avoiding Algorithms, and Fixing Old Bugs in the BLAS and LAPACK”

Piotr Luszczek (University of Tennessee), “Numerical Methods and Across Scales, Precisions and Hardware Platforms”

Cleve Moler (MathWorks), “Computers That I Have Known”

Yves Robert (Ecole Normale Supérieure de Lyon), “25+ Years of Scheduling at ICL”

Françoise Tisseur (The University of Manchester), “Mixed Precision Tall and Thin QR Factorization with Applications”

David Keyes (King Abdullah University of Science and Technology), “Adaptive Nonlinear Preconditioning for PDEs with Error Bounds on Output Functionals”

Zhaojun Bai (University of California, Davis), “Many Eigenpair Computation Via Hotelling’s Deflation”

Ilse Ipsen (North Carolina State University), “A Few Observations About Summation Algorithms”

Erich Strohmaier (TOP500), “TOP500 and Accidental Benchmarking”

Nick Higham (The University of Manchester), “Solving Dense Linear Systems: A Brief History and Future Directions”

Jack Dongarra (University of Tennessee, Oak Ridge Laboratory and The University of Manchester), “Still Having Fun After 50 Years”

# SIAM AN21 Minisymposium on Bohemian Matrices and Applications

The two-part minisymposium Bohemian Matrices and Applications, organized by Rob Corless and me, took place at the SIAM Annual Meeting, July 22 and 23, 2021. This page makes available slides from some of the talks.

The minisymposium followed a two-part minisymposium on Bohemian matrices at the 2019 ICIAM meeting in Valencia and a 3-day workshop on Bohemian matrices in Manchester in 2018.

For more on Bohemian matrices see the Bohemian matrices website.

Minisymposium description: Bohemian matrices are matrices with entries drawn from a fixed discrete set of small integers (or some other discrete set). The term is a contraction of BOunded HEight Matrix of Integers. Such matrices arise in many applications, and include $(0,1)$ graph incidence matrices and $(-1,1)$ Bernoulli matrices. The questions of interest range from identifying structures in the spectra of particular classes of Bohemian matrix to searching for most ill conditioned matrices within a class, and applications include stress-testing algorithms and software. This minisymposium will report recent theoretical and computational progress as well as open questions.

Putting Skew-Symmetric Tridiagonal Bohemians on the Calendar. Robert M. Corless, Western University, Canada. Abstract. Rob did not use slides but gave his talk using this paper and this Maple worksheet.

Determinants of Normalized Bohemian Upper Hessenberg Matrices. Massimiliano Fasi, Örebro University, Sweden; Jishe Feng, Longdong University, China; Gian Maria Negri Porzio, University of Manchester, United Kingdom. Abstract. Slides.

Experiments on Upper Hessenberg and Toeplitz Bohemians. Eunice Chan, Western University, Canada. Abstract. Slides.

Eigenvalues of Magic Squares and Related Bohemian Matrices. Hariprasad Manjunath Hegde, Indian Institute of Science, Bengaluru, India. Abstract. Slides.

Calculating the 3D Kings Multiplicity Constant. Nicholas Cohen and Neil Calkin, Clemson University, U.S. Abstract. Slides.

Bohemian Inners Inverses: A First Step Toward Bohemian Generalized Inverses. Laureano Gonzalez-Vega, Universidad de Cantabria, Spain; Juan Rafael Sendra, Universidad Alcalá de Henares, Spain; Juana Sendra Pons, Universidad Politécnica de Madrid, Spain. Abstract. Slides.

Recent Progress in the Rational Factorisation of Integer Matrices. Matthew Lettington, Cardiff University, United Kingdom. Abstract. Slides.

Which Columns are Independent? Why does Row Rank = Column Rank? Gilbert Strang, Massachusetts Institute of Technology, U.S. Abstract. Slides.

Bohemian Matrices: the Symbolic Computation Approach. Juana Sendra, Universidad Autónoma de Madrid, Spain; Laureano González-Vega, Universidad de Estudios Financieros en Madrid, Spain; Juan Rafael Sendra, Universidad Alcalá de Henares, Spain. Abstract. Slides.

# What Is a Totally Nonnegative Matrix?

The determinant of a square submatrix of a matrix is called a minor. A matrix $A\in\mathbb{R}^{n\times n}$ is totally positive if every minor is positive. It is totally nonnegative if every minor is nonnegative. These definitions require, in particular, that all the matrix elements must be nonnegative or positive, as must $\det(A)$.

An important property is that total nonnegativity is preserved under matrix multiplication and hence under taking positive integer powers.

Theorem 1. If $A,B\in\mathbb{R}^{n\times n}$ are totally nonnegative then so is $AB$.

Theorem 1 is a direct consequence of the Binet–Cauchy theorem on determinants (also known as the Cauchy–Binet theorem). To state it, we need a way of specifying submatrices. We say the vector $\alpha = [\alpha_1,\alpha_2,\dots,\alpha_k]$ is an index vector of order $k$ if its components are integers from the set $\{1,2,\dots,n\}$ satisfying $\alpha_1 < \alpha_2 < \cdots < \alpha_k$. If $\alpha$ and $\beta$ are index vectors of order $k$ and $\ell$, respectively, then $A(\alpha, \beta)$ denotes the $k\times \ell$ matrix with ($i,j$) element $a_{\alpha_i,\beta_j}$.

Theorem 2. (Binet–Cauchy) Let $A\in\mathbb{R}^{m\times n}$, $B\in\mathbb{R}^{n\times p}$, and $C = AB$. If $\alpha$ and $\beta$ are index vectors of order $k$ and $1 \le k \le \min(m,n,p)$ then

$\notag \det(C(\alpha,\beta)) = \sum_{\gamma} \det( A(\alpha,\gamma) ) \det( B(\gamma,\beta) ), \qquad (1)$

where the sum is over all index vectors $\gamma$ of order $k$.

Note that when $k = m = n = p$, (1) reduces to the well-known relation $\det(AB) = \det(A)\det(B)$, while when $k = 1$, (1) reduces to the definition of matrix multiplication.
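Identity (1) is easy to verify numerically. A quick Python/NumPy check (using 0-based index vectors, so `(0, 2)` corresponds to $\alpha = [1,3]$) for a random example:

```python
# Numerical check of the Binet-Cauchy identity (1).
import itertools
import numpy as np

rng = np.random.default_rng(0)
m, n, p, k = 4, 5, 3, 2
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, p))
C = A @ B

alpha = (0, 2)   # index vectors of order k = 2, 0-based
beta = (1, 2)
lhs = np.linalg.det(C[np.ix_(alpha, beta)])
# Sum over all index vectors gamma of order k.
rhs = sum(np.linalg.det(A[np.ix_(alpha, g)]) * np.linalg.det(B[np.ix_(g, beta)])
          for g in itertools.combinations(range(n), k))
print(abs(lhs - rhs))  # should be of the order of the unit roundoff
```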

Totally nonnegative matrices have many interesting determinantal properties. For example, they satisfy Fischer’s inequality, first proved for symmetric positive definite matrices.

Theorem 3. (Fischer) If $A\in\mathbb{R}^{n\times n}$ is totally nonnegative then for any index vector $\alpha$,

$\notag \det(A) \le \det(A(\alpha)) \det(A(\alpha^c)), \qquad (2)$

where $\alpha^c$ comprises the indices not in $\alpha$.

By repeatedly applying (2) with $\alpha$ containing just one element, we obtain Hadamard’s inequality for totally nonnegative $A$:

$\notag \det(A) \le a_{11} a_{22} \cdots a_{nn}.$
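Both inequalities are easy to check numerically. A Python/NumPy sketch using the $5\times 5$ Hilbert matrix, which is totally positive (see the examples below); the index vector is 0-based here:

```python
# Check Fischer's inequality (2) and Hadamard's inequality for the 5x5
# Hilbert matrix, which is totally positive.
import numpy as np

n = 5
A = np.array([[1 / (i + j - 1) for j in range(1, n + 1)]
              for i in range(1, n + 1)])

alpha = [0, 2]                                   # 0-based index vector
alpha_c = [i for i in range(n) if i not in alpha]
fischer_rhs = (np.linalg.det(A[np.ix_(alpha, alpha)])
               * np.linalg.det(A[np.ix_(alpha_c, alpha_c)]))
print(np.linalg.det(A) <= fischer_rhs)           # Fischer
print(np.linalg.det(A) <= np.prod(np.diag(A)))   # Hadamard
```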

## Examples

We give some examples of totally positive matrices, showing how they can be generated in MATLAB. We use the Anymatrix toolbox.

A matrix well known to be positive definite, but which is also totally positive, is the Hilbert matrix $H\in\mathbb{R}^{n\times n}$, with $h_{ij} = 1/(i+j-1)$. The Hilbert matrix is a particular case of a Cauchy matrix $C$, with $c_{ij} = 1/(x_i + y_j)$ for given vectors $x,y\in\mathbb{R}^{n}$. A Cauchy matrix is totally positive if $0 < x_1 < x_2 < \cdots < x_n$ and $0 < y_1 < y_2 < \cdots < y_n$, which follows from the formula

$\notag \det(C_n) = \displaystyle\frac{\displaystyle\prod_{1\le i < j \le n} (x_j-x_i) (y_j-y_i) } {\displaystyle\prod_{1\le i,j \le n} (x_i+y_j) }.$

In MATLAB, the Hilbert matrix is hilb(n) and the Cauchy matrix can be generated by gallery('cauchy',x,y) (or anymatrix('gallery/cauchy',x,y)).
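As a quick Python/NumPy check of the determinant formula (and of its positivity for increasing positive points):

```python
# Check the Cauchy determinant formula against np.linalg.det for
# increasing positive points x and y.
import numpy as np
from itertools import combinations

x = np.array([0.5, 1.0, 2.0, 3.5])
y = np.array([0.2, 0.7, 1.5, 4.0])
C = 1.0 / (x[:, None] + y[None, :])

num = np.prod([(x[j] - x[i]) * (y[j] - y[i])
               for i, j in combinations(range(len(x)), 2)])
den = np.prod([xi + yj for xi in x for yj in y])
formula = num / den
print(abs(np.linalg.det(C) - formula))  # tiny
print(formula > 0)                      # all factors positive
```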

The Vandermonde matrix

$\notag V = V(x_1,x_2,\dots,x_n) = \begin{bmatrix} 1 & 1 & \dots & 1 \\ x_1 & x_2 & \dots & x_n \\ \vdots &\vdots & & \vdots \\ x_1^{n-1} & x_2^{n-1} & \dots & x_n^{n-1} \end{bmatrix} \in \mathbb{C}^{n\times n}$

is totally positive if the points $x_i$ satisfy $0 < x_1 < x_2 < \cdots < x_n$. As a partial check, the general formula

$\notag \det(V) = \displaystyle\prod_{1\le i < j \le n} (x_j - x_i)$

shows that every leading principal minor is positive. In MATLAB, a Vandermonde matrix can be generated by anymatrix('core/vand',x).

The Pascal matrix $P_n\in\mathbb{R}^{n\times n}$ is defined by

$p_{ij} = \displaystyle\frac{ (i+j-2)! }{ (i-1)! (j-1)! } = {i+j-2 \choose j-1}.$

For example, in MATLAB:

>> P = pascal(5)
P =
1     1     1     1     1
1     2     3     4     5
1     3     6    10    15
1     4    10    20    35
1     5    15    35    70


The Pascal matrix is totally positive for all $n$ (see the section below on bidiagonal factorizations).

The one-parameter correlation matrix $C_n(\theta)$ with off-diagonal elements given by $\theta$ with $0 \le \theta < 1$, illustrated by

$\notag C_3(\theta) = \begin{bmatrix} 1 & \theta & \theta \\ \theta & 1 & \theta \\ \theta & \theta & 1 \\ \end{bmatrix},$

is not totally positive because while the principal minors are all positive, the submatrix $A([1,2],[2,3]) = \bigl[\begin{smallmatrix} \theta & \theta \\ 1 & \theta \end{smallmatrix}\bigr]$ has nonpositive determinant. However, the Kac–Murdock–Szegö matrix $K_n(\theta) = (\theta^{|i-j|})$, with $0 \le \theta < 1$, illustrated by

$\notag K_3(\theta) = \begin{bmatrix} 1 & \theta & \theta^2 \\ \theta & 1 & \theta \\ \theta^2 & \theta & 1 \\ \end{bmatrix}$

is totally positive thanks to the decay of the elements away from the diagonal. In MATLAB, the Kac–Murdock–Szegö matrix can be generated by gallery('kms',n,rho).

The lower Hessenberg Toeplitz matrix $H_n$ with all elements $1$ on and below the superdiagonal, illustrated for $n = 4$ by

$\notag H_4 = \begin{bmatrix} 1 & 1 & 0 & 0 \\ 1 & 1 & 1 & 0 \\ 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \\ \end{bmatrix},$

is totally nonnegative. It has $\lfloor n/2 \rfloor$ zero eigenvalues, which appear in a single Jordan block, and its largest eigenvalue is $2(1+\cos(2\pi/(n+2)))$. In MATLAB, this matrix can be generated by anymatrix('core/hessfull01',n). This and other binary totally nonnegative matrices are studied by Brualdi and Kirkland (2010).
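These spectral properties can be confirmed numerically. In the Python/NumPy sketch below, the computed "zero" eigenvalues are perturbed noticeably (roughly $u^{1/k}$ for a Jordan block of size $k$, with $u$ the unit roundoff), so a loose threshold is used to count them:

```python
# Check the stated spectral properties of H_n for n = 8: floor(n/2) zero
# eigenvalues and largest eigenvalue 2*(1 + cos(2*pi/(n+2))).
import numpy as np

n = 8
H = np.tril(np.ones((n, n)), 1)    # ones on and below the superdiagonal
evals = np.sort(np.abs(np.linalg.eigvals(H)))

# Zero eigenvalues in a Jordan block are very sensitive, so use a loose
# threshold when counting them.
num_zero = int(np.sum(evals < 1e-2))
lam_max = 2 * (1 + np.cos(2 * np.pi / (n + 2)))
print(num_zero, abs(evals[-1] - lam_max))
```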

Finally, consider a nonnegative $4\times 4$ bidiagonal matrix factorized into a product of elementary nonnegative bidiagonal matrices (nonnegative means that the elements of the matrix are nonnegative):

\notag \begin{aligned} L = \begin{bmatrix} 1 & 0 & 0 & 0 \\ \ell_{21} & 1 & 0 & 0 \\ 0 & \ell_{32} & 1 & 0 \\ 0 & 0 & \ell_{43} & 1 \\ \end{bmatrix} &= \begin{bmatrix} 1 & 0 & 0 & 0 \\ \ell_{21} & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & \ell_{32} & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & \ell_{43} & 1 \\ \end{bmatrix}\\ &\equiv L_1(\ell_{21}) L_2(\ell_{32}) L_3(\ell_{43}). \end{aligned}

It is easy to see by inspection that $L_1$, $L_2$, and $L_3$ are totally nonnegative, so $L$ is totally nonnegative by Theorem 1. With $D = \mathrm{diag}(1,-1,1,-1)$, we have

\notag \begin{aligned} DL^{-1}D &= (DLD)^{-1} = (DL_1D \cdot DL_2D \cdot DL_3D )^{-1}\\ &= (DL_3D)^{-1} (DL_2D)^{-1} (DL_1D)^{-1}\\ &= L_3(\ell_{43}) L_2(\ell_{32}) L_1(\ell_{21}), \qquad (3) \end{aligned}

which is a product of totally nonnegative matrices and hence is totally nonnegative by Theorem 1. This example clearly generalizes to show that an $n\times n$ nonnegative bidiagonal matrix is totally nonnegative.

## Inverse

Recall that the inverse of a nonsingular $A\in\mathbb{R}^{n\times n}$ is given by $A^{-1} = \mathrm{adj}(A)/\det(A)$, where

$\mathrm{adj}(A) = \bigl( (-1)^{i+j} \det(A_{ji}) \bigr)$

and $A_{pq}$ denotes the submatrix of $A$ obtained by deleting row $p$ and column $q$. If $A$ is nonsingular and totally nonnegative then it follows that $A^{-1}$ has a checkerboard (alternating) sign pattern. Indeed, we can write $A^{-1} = DBD$, where $D = \mathrm{diag}((-1)^{i+1})$ and $B$ has nonnegative elements, and in fact it can be shown that $B$ is totally nonnegative using Theorem 1, Theorem 6, and (3). For example, here is the inverse of the $4\times 4$ Pascal matrix:

>> inv(sym(pascal(5)))
ans =
[  5, -10,  10,  -5,  1]
[-10,  30, -35,  19, -4]
[ 10, -35,  46, -27,  6]
[ -5,  19, -27,  17, -4]
[  1,  -4,   6,  -4,  1]
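The checkerboard pattern is easy to confirm in Python/NumPy, building the Pascal matrix from its binomial formula:

```python
# Check the checkerboard sign pattern of the inverse of the 5x5 Pascal
# matrix: A^{-1} = D B D with B nonnegative.
import numpy as np
from math import comb

n = 5
P = np.array([[comb(i + j, j) for j in range(n)] for i in range(n)],
             dtype=float)
Pinv = np.linalg.inv(P)
D = np.diag([(-1.0)**i for i in range(n)])
B = D @ Pinv @ D                    # should have nonnegative entries
print(np.all(B > -1e-8))
```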


## Eigensystem

A totally nonnegative matrix has nonnegative trace and determinant, so the sum and product of its eigenvalues are both nonnegative. In fact, all the eigenvalues are real and nonnegative. Since a Jordan block corresponding to a nonnegative eigenvalue is totally nonnegative, any Jordan form with nonnegative eigenvalues is possible. More can be said if $A$ is irreducible. Recall that a matrix $A\in\mathbb{C}^{n\times n}$ is irreducible if there does not exist a permutation matrix $P$ such that

$\notag P^TAP = \begin{bmatrix} A_{11} & A_{12} \\ 0 & A_{22} \end{bmatrix}$

where $A_{11}$ and $A_{22}$ are square, nonempty submatrices.

Theorem 4. If $A\in\mathbb{R}^{n\times n}$ is totally nonnegative then its eigenvalues are all real and nonnegative. If $A$ is also irreducible then the positive eigenvalues are distinct.

If $A$ is nonsingular and totally nonnegative and irreducible then by the theorem we can write the eigenvalues as $\lambda_1 > \lambda_2 > \cdots > \lambda_n >0$. It is known that the eigenvector $x_k$ associated with $\lambda_k$ has $k-1$ sign changes, that is, $(x_k)_{i+1}$ and $(x_k)_i$ have opposite signs for $k-1$ values of $i$ (any zero elements are deleted before counting sign changes). Note that for $k=1$, we already know from Perron–Frobenius theory that there is a positive eigenvector $x_1$. This result is illustrated by the Pascal matrix above:

>> A = pascal(5); [V,d] = eig(A,'vector'); [~,k] = sort(d,'descend');
>> evals = d', evecs = V(:,k)
evals =
1.0835e-02   1.8124e-01   1.0000e+00   5.5175e+00   9.2290e+01
evecs =
1.7491e-02   2.4293e-01  -7.6605e-01  -5.7063e-01   1.6803e-01
7.4918e-02   4.8079e-01  -3.8302e-01   5.5872e-01  -5.5168e-01
2.0547e-01   6.1098e-01   1.6415e-01   2.5292e-01   7.0255e-01
4.5154e-01   4.1303e-01   4.3774e-01  -5.1785e-01  -4.0710e-01
8.6486e-01  -4.0736e-01  -2.1887e-01   1.7342e-01   9.0025e-02


Note that the number of sign changes (but not the number of negative elements) increases by $1$ as we go from one column to the next.
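The sign-change count can be automated. A Python/NumPy sketch (the `sign_changes` helper is ours, for illustration) reproducing the pattern for the $5\times 5$ Pascal matrix:

```python
# Count sign changes in the eigenvectors of the 5x5 Pascal matrix, ordered
# by decreasing eigenvalue: the k-th eigenvector has k-1 sign changes.
import numpy as np
from math import comb

n = 5
P = np.array([[comb(i + j, j) for j in range(n)] for i in range(n)],
             dtype=float)
d, V = np.linalg.eigh(P)            # ascending eigenvalues
V = V[:, ::-1]                      # reorder by decreasing eigenvalue

def sign_changes(v, tol=1e-12):
    s = np.sign(v[np.abs(v) > tol])  # drop (numerically) zero elements
    return int(np.sum(s[:-1] != s[1:]))

counts = [sign_changes(V[:, k]) for k in range(n)]
print(counts)
```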

The class of nonsingular totally nonnegative irreducible matrices is known as the oscillatory matrices, because such matrices arise in the analysis of small oscillations of elastic systems. An equivalent definition (in fact, the usual definition) is that an oscillatory matrix is a totally nonnegative matrix for which $A^q$ is totally positive for some positive integer $q$.

## LU Factorization

The next result shows that a totally nonnegative matrix has an LU factorization with special properties. We will need the following special case of Fischer’s inequality (Theorem 3):

$\notag \det(A) \le \det \bigl( A(1\colon p,1\colon p) \bigr) \det \bigl( A(p+1\colon n,p+1\colon n) \bigr), \quad p=1\colon n-1. \qquad (4)$

Theorem 5. If $A\in\mathbb{R}^{n\times n}$ is nonsingular and totally nonnegative then it has an LU factorization with $L$ and $U$ totally nonnegative and the growth factor $\rho_n = 1$.

Proof. Since $A$ is nonsingular and every minor is nonnegative, (4) shows that $\det(A(1\colon p,1\colon p))>0$ for $p=1\colon n-1$, which guarantees the existence of an LU factorization. That the elements of $L$ and $U$ are nonnegative follows from explicit determinantal formulas for the elements of $L$ and $U$. The total nonnegativity of $L$ and $U$ is proved by Cryer (1976). Gaussian elimination starts with $A^{(1)} = A$ and computes $a_{ij}^{(k+1)} = a_{ij}^{(k)} - m_{ik}a_{kj}^{(k)} = a_{ij}^{(k)} - \ell_{ik} u_{kj} \le a_{ij}^{(k)}$, since $\ell_{ik}, u_{kj} \ge 0$. Thus $a_{ij} = a_{ij}^{(1)} \ge a_{ij}^{(2)} \ge \cdots \ge a_{ij}^{(r)}$, $r = \min (i,j)$. For $i > j$, $a_{ij}^{(r)} \ge a_{ij}^{(r+1)} = \cdots = a_{ij}^{(n)} = 0$; for $j \ge i$, $a_{ij}^{(r)} = \cdots = a_{ij}^{(n)} = u_{ij} \ge 0$. Thus $0 \le a_{ij}^{(k)} \le a_{ij}$ for all $i,j,k$ and hence $\rho_n \le 1$. But $\rho_n\ge1$, so $\rho_n=1$.
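Theorem 5 is easy to illustrate numerically. The Python/NumPy sketch below runs a plain Doolittle elimination without pivoting (written out explicitly so that the intermediate elements $a_{ij}^{(k)}$ can be monitored) on the $6\times 6$ Pascal matrix:

```python
# LU factorization without pivoting of a nonsingular totally nonnegative
# matrix (6x6 Pascal): L and U come out nonnegative and the growth
# factor rho_n is 1.
import numpy as np
from math import comb

n = 6
A = np.array([[comb(i + j, j) for j in range(n)] for i in range(n)],
             dtype=float)

U = A.copy()
L = np.eye(n)
max_elem = np.abs(A).max()          # track max over all intermediate A^(k)
for k in range(n - 1):
    L[k+1:, k] = U[k+1:, k] / U[k, k]
    U[k+1:, k:] -= np.outer(L[k+1:, k], U[k, k:])
    max_elem = max(max_elem, np.abs(U).max())

growth = max_elem / np.abs(A).max()
print(np.all(L >= 0), np.all(U >= -1e-9), growth)
```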

Theorem 5 implies that it is safe to compute the LU factorization without pivoting of a nonsingular totally nonnegative matrix: the factorization does not break down and it is numerically stable. In fact, the computed LU factors have a strong componentwise form of stability. As shown by De Boor and Pinkus (1977), for small enough unit roundoff $u$ the computed factors $\widehat{L}$ and $\widehat{U}$ will have nonnegative elements and so from the standard backward error result for LU factorization,

$\notag \widehat{L}\widehat{U} = A + \Delta A, \quad |\Delta A| \le \gamma_n |\widehat{L}||\widehat{U}| \quad \Bigl(\gamma_n = \displaystyle\frac{nu}{1-nu} \Bigr),$

we have

$\notag |\widehat{L}||\widehat{U}| = |\widehat{L}\widehat{U}| = |A + \Delta A| \le |A| + \gamma_n |\widehat{L}||\widehat{U}|,$

which gives $|\widehat{L}||\widehat{U}| \le (1 - \gamma_n)^{-1}|A|$ and hence

$\notag \widehat{L}\widehat{U} = A + \Delta A, \quad |\Delta A| \le \displaystyle\frac{\gamma_n}{1-\gamma_n} |A|,$

which is about as strong a backward error result as we could hope for. The significance of this result is reduced, however, by the fact that for some important classes of totally nonnegative matrices, including Vandermonde matrices and Cauchy matrices, structure-exploiting linear system solvers exist that are substantially faster, and potentially more accurate, than LU factorization.

## Factorization into a Product of Bidiagonal Matrices

We showed above that any nonnegative bidiagonal matrix is totally nonnegative. The next result shows that any nonsingular totally nonnegative matrix has an LU factorization in which $L$ and $U$ can be factorized into a product of nonnegative bidiagonal matrices.

Theorem 6. (Gasca and Peña, 1996) A nonsingular matrix $A\in\mathbb{R}^{n\times n}$ is totally nonnegative if and only if it can be factorized as

$\notag A = L_{n-1} L_{n-2} \dots L_1 D U_1 U_2 \dots U_{n-1}, \qquad (5)$

where $D$ is a diagonal matrix with positive diagonal entries and $L_i$ and $U_i$ are unit lower and unit upper bidiagonal matrices, respectively, with the first $i-1$ entries along the subdiagonal of $L_i$ and $U_i^T$ zero and the rest nonnegative.

An analogue of Theorem 6 holds for totally positive matrices, the only difference being that the last $n-i+1$ subdiagonal entries of $L_i$ and $U_i^T$ are positive.

The factorization (5) can be computed by Neville elimination, which is a version of Gaussian elimination in which the eliminations are between adjacent rows, working from the bottom of each column upwards.

This factorization into bidiagonal factors can be used to obtain simple proofs of various properties of totally nonnegative matrices and totally positive matrices (Fallat, 2001). It also provides an efficient way to generate such matrices. If all the parameters in $D$ and the $L_i$ and $U_i$ are set to $1$ then the Pascal matrix is generated.
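The last claim can be checked directly. A Python/NumPy sketch building the product (5) with $D = I$ and every bidiagonal parameter equal to $1$, and comparing it with the Pascal matrix:

```python
# Build the product (5) with all parameters 1 and D = I; the result is
# the Pascal matrix.
import numpy as np
from math import comb

n = 5

def Lmat(i):
    # Unit lower bidiagonal L_i: first i-1 subdiagonal entries zero,
    # the rest set to 1 (0-based indexing below).
    M = np.eye(n)
    for j in range(i - 1, n - 1):
        M[j + 1, j] = 1.0
    return M

A = np.eye(n)
for i in range(n - 1, 0, -1):       # L_{n-1} L_{n-2} ... L_1
    A = A @ Lmat(i)
for i in range(1, n):               # D U_1 U_2 ... U_{n-1}, U_i = L_i^T
    A = A @ Lmat(i).T

P = np.array([[comb(i + j, j) for j in range(n)] for i in range(n)],
             dtype=float)
print(np.allclose(A, P))
```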

## Testing for Total Positivity

An $n\times n$ matrix has $\sum_{k=1}^n {n \choose k} = 2^n-1$ principal minors (ones based on submatrices centred on the diagonal) and $\sum_{k=1}^n {n \choose k}^2 = {2n \choose n} -1 \approx 4^n/(n\pi)^{1/2}$ minors in total. However, it is not necessary to check all these minors to test for total positivity.

Theorem 7. (Gasca and Peña, 1996) The matrix $A\in\mathbb{R}^{n\times n}$ is totally positive if and only if $\det(A(\alpha,\beta)) > 0$ for all index vectors $\alpha$ and $\beta$ such that one of $\alpha$ and $\beta$ is $[1,2,\dots,k]$ and the entries of the other are $k$ consecutive integers.

Theorem 7 shows that only $n(n+1)$ minors need to be tested. Gasca and Peña have also shown that total nonnegativity can be tested by checking about $2^{n+1} + n^2/2$ minors. A more efficient way to test for total nonnegativity is to compute the factorization in Theorem 6 and check the signs of the entries.

## Notes

The results we have described show that totally nonnegative and totally positive matrices are analogous in many ways to symmetric positive (semi)definite matrices. The analogies go further because totally nonnegative and totally positive matrices also satisfy eigenvalue interlacing inequalities (albeit weaker than for symmetric matrices) and the eigenvalues of an oscillatory matrix majorize the diagonal elements. See Fallat and Johnson (2011) or Fallat (2014) for details.

## References

This is a minimal set of references, which contain further useful references within.

# What Is the Perron–Frobenius Theorem?

A real matrix is nonnegative if all its elements are nonnegative and it is positive if all its elements are positive. Nonnegative matrices arise in a wide variety of applications, for example as matrices of probabilities in Markov processes and as adjacency matrices of graphs. Information about the eigensystem is often essential in these applications.

Perron (1907) proved results about the eigensystem of a positive matrix and Frobenius (1912) extended them to nonnegative matrices.

The following three results of increasing specificity summarize the key spectral properties of nonnegative matrices proved by Perron and Frobenius. Recall that a simple eigenvalue of an $n\times n$ matrix is one with algebraic multiplicity $1$, that is, it occurs only once in the set of $n$ eigenvalues. We denote by $\rho(A)$ the spectral radius of $A$, the largest absolute value of any eigenvalue of $A$.

Theorem 1. (Perron–Frobenius) If $A\in\mathbb{R}^{n\times n}$ is nonnegative then

1. $\rho(A)$ is an eigenvalue of $A$,
2. there is a nonnegative eigenvector $x$ such that $Ax = \rho(A)x$.

A matrix $A\in\mathbb{C}^{n\times n}$ is reducible if there is a permutation matrix $P$ such that

$\notag P^TAP = \begin{bmatrix} A_{11} & A_{12} \\ 0 & A_{22} \end{bmatrix}$

where $A_{11}$ and $A_{22}$ are square, nonempty submatrices; it is irreducible if it is not reducible. Examples of reducible matrices are triangular matrices and matrices with a zero row or column. A positive matrix is trivially irreducible.

Theorem 2. (Perron–Frobenius) If $A\in\mathbb{R}^{n\times n}$ is nonnegative and irreducible then

1. $\rho(A)$ is an eigenvalue of $A$,
2. $\rho(A)>0$,
3. there is a positive eigenvector $x$ such that $Ax = \rho(A) x$,
4. $\rho(A)$ is a simple eigenvalue.

Theorem 3. (Perron) If $A\in\mathbb{R}^{n\times n}$ is positive then Theorem 2 holds and, in addition, $|\lambda| < \rho(A)$ for any eigenvalue $\lambda$ with $\lambda \ne \rho(A)$.

For nonnegative, irreducible $A$, the eigenvalue $\rho(A)$ is called the Perron root of $A$ and the corresponding positive eigenvector $x$, normalized so that $\|x\|_1 = 1$, is called the Perron vector.

It is a good exercise to apply the theorems to all binary $2\times 2$ matrices. Here are some interesting cases.

• $A = \bigl[\begin{smallmatrix}0 & 1 \\ 0 & 0 \end{smallmatrix}\bigr]$: Theorem 1 says that $\rho(A) = 0$ is an eigenvalue and that it has a nonnegative eigenvector. Indeed $[1~0]^T$ is an eigenvector. Note that $A$ is reducible and $0$ is a repeated eigenvalue.
• $A = \bigl[\begin{smallmatrix}0 & 1 \\ 1 & 0 \end{smallmatrix}\bigr]$: $A$ is irreducible and Theorem 2 says that $\rho(A)$ is a simple eigenvalue with positive eigenvector. Indeed the eigenvalues are $\pm 1$ and $[1~1]^T/2$ is the Perron vector for the Perron root $1$. This matrix has two eigenvalues of maximal modulus.
• $A = \bigl[\begin{smallmatrix}1 & 1 \\ 1 & 1 \end{smallmatrix}\bigr]$: Theorem 3 says that $\rho(A) = 2$ is an eigenvalue with positive eigenvector and that the other eigenvalue has modulus less than $2$. Indeed the eigenvalues are the Perron root $2$, with Perron vector $[1~1]^T/2$, and $0$.

For another example, consider the irreducible matrix

$\notag B = \begin{bmatrix} 0 & 0 & 1\\ 1 & 0 & 0\\ 0 & 1 & 0 \end{bmatrix}, \quad \Lambda(B) = \bigl\{ 1, \textstyle\frac{1}{2}( -1 \pm \sqrt{3}\mskip1mu\mathrm{i} ) \bigr\}.$

Note that $B$ is a companion matrix and a permutation matrix. Theorem 2 correctly tells us that $\rho(B) = 1$ is an eigenvalue of $B$, and that it has a corresponding positive eigenvector, the Perron vector $[1~1~1]^T/3$. Two of the eigenvalues are complex, however, and all three eigenvalues have modulus 1, as they must because $B$ is orthogonal.

A stochastic matrix is a nonnegative matrix whose row sums are all equal to $1$. A stochastic matrix satisfies $Ae = e$, where $e = [1,1,\dots,1]^T$, which means that $A$ has an eigenvalue $1$, and so $\rho(A) \ge 1$. Since $\rho(A) \le \|A\|$ for any norm, by taking the $\infty$-norm we conclude that $\rho(A) = 1$. For a stochastic matrix, Theorem 1 does not give any further information. If $A$ is irreducible then Theorem 2 tells us that $\rho(A)$ is a simple eigenvalue, and if $A$ is positive Theorem 3 tells us that every other eigenvalue has modulus less than $\rho(A)$.
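These observations are easy to confirm numerically. A Python/NumPy sketch for a random positive stochastic matrix:

```python
# A random positive stochastic matrix: spectral radius 1, and every other
# eigenvalue has modulus strictly less than 1 (Theorem 3).
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.random((n, n)) + 0.1        # positive entries
A /= A.sum(axis=1, keepdims=True)   # scale rows to sum to 1: stochastic

mods = np.sort(np.abs(np.linalg.eigvals(A)))
print(abs(mods[-1] - 1.0))          # spectral radius is 1
print(mods[-2] < 1.0)               # subdominant modulus strictly less
```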

The next result is easily proved using Theorem 3 together with the Jordan canonical form. It shows that the powers of a positive matrix behave like multiples of a rank-1 matrix.

Theorem 4. If $A\in\mathbb{R}^{n\times n}$ is positive, $x$ is the Perron vector of $A$, and $y$ is the Perron vector of $A^T$ then

$\notag \displaystyle\lim_{k\to\infty} \left( \displaystyle\frac{A}{\rho(A)} \right)^k = \displaystyle\frac{xy^T}{y^Tx}.$

Note that $y$ in the theorem is a left eigenvector of $A$ corresponding to $\rho(A)$, that is, $y^TA = \rho(A)y^T$ (since $\rho(A^T) = \rho(A)$).
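Theorem 4 can be checked directly for a random positive matrix. A Python/NumPy sketch (the `perron` helper, which extracts the dominant eigenpair and scales the eigenvector to have unit 1-norm, is ours, for illustration):

```python
# Check Theorem 4: (A/rho(A))^k converges to x y^T / (y^T x), where x and
# y are the Perron vectors of A and A^T.
import numpy as np

rng = np.random.default_rng(2)
n = 4
A = rng.random((n, n)) + 0.5        # positive matrix

def perron(M):
    d, V = np.linalg.eig(M)
    i = np.argmax(np.abs(d))
    x = np.real(V[:, i])
    x = x / np.sum(x)               # positive eigenvector, 1-norm 1
    return np.abs(d[i]), x

rho, x = perron(A)
_, y = perron(A.T)
limit = np.outer(x, y) / (y @ x)
Ak = np.linalg.matrix_power(A / rho, 100)
print(np.max(np.abs(Ak - limit)))   # small
```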

If $A$ is stochastic and positive then Theorem 4 is applicable and $x = n^{-1}e$. If $A$ also has unit column sums, so that it is doubly stochastic, then $y = n^{-1}e$ and Theorem 4 says that $\lim_{k\to\infty}A^k = n^{-1}ee^T$. We illustrate this result in MATLAB using a scaled magic square matrix.

>> n = 4; M = magic(n), A = M/sum(M(1,:)) % Doubly stochastic matrix.
M =
16     2     3    13
5    11    10     8
9     7     6    12
4    14    15     1
A =
4.7059e-01   5.8824e-02   8.8235e-02   3.8235e-01
1.4706e-01   3.2353e-01   2.9412e-01   2.3529e-01
2.6471e-01   2.0588e-01   1.7647e-01   3.5294e-01
1.1765e-01   4.1176e-01   4.4118e-01   2.9412e-02

>> for k = 8:8:32, fprintf('%11.2e',norm(A^k-ones(n)/n,1)), end, disp(' ')
3.21e-05   7.37e-10   1.71e-14   8.05e-16


## References

This is a minimal set of references, which contain further useful references within.

• Roger A. Horn and Charles R. Johnson, Matrix Analysis, second edition, Cambridge University Press, 2013. Chapter 8. My review of the second edition.
• Carl D. Meyer, Matrix Analysis and Applied Linear Algebra, Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 2000. Chapter 8.
• Helene Shapiro, Linear Algebra and Matrices. Topics for a Second Course, American Mathematical Society, Providence, RI, USA, 2015. Chapter 17.

# What Is the Kac–Murdock–Szegö Matrix?

The Kac–Murdock–Szegö matrix is the symmetric Toeplitz matrix

$\notag A_n(\rho) = \begin{bmatrix} 1 & \rho & \rho^2 & \dots & \rho^{n-1} \\ \rho & 1 & \rho & \dots & \rho^{n-2} \\ \rho^2 & \rho & 1 & \ddots & \vdots \\ \vdots & \vdots & \ddots & \ddots & \rho \\ \rho^{n-1} & \rho^{n-2} & \dots & \rho & 1 \end{bmatrix} \in\mathbb{R}^{n\times n}. \qquad(1)$

It was considered by Kac, Murdock, and Szegö (1953), who investigated its spectral properties. It arises in the autoregressive AR(1) model in statistics and signal processing.

The matrix is singular for $\rho=1$, as $A_n(1)$ is the rank-$1$ matrix $ee^T$, and it is also rank-$1$ for $\rho = -1$, as in this case every column is a multiple of the vector with alternating elements $\pm 1$. The determinant $\det(A_n(\rho)) = (1-\rho^2)^{n-1}$. For $\rho \ne \pm 1$, $A_n$ is nonsingular and the inverse is the tridiagonal (but not Toeplitz) matrix

$\notag A_n(\rho)^{-1} = \displaystyle\frac{1}{1-\rho^2} \begin{bmatrix} 1 & -\rho & 0 & \dots & \dots & 0 \\ -\rho & 1+\rho^2 & -\rho & \dots & \dots & 0 \\ 0 & -\rho & 1+\rho^2 & \ddots & \dots & \vdots \\ \vdots & \vdots & \ddots & \ddots & \ddots & 0 \\ 0 & \dots & \dots & -\rho & 1+\rho^2 & -\rho \\ 0 & \dots & \dots & 0 & -\rho & 1 \end{bmatrix}. \qquad (2)$

For $-1 < \rho < 1$, $A_n(\rho)$ is positive definite, since every leading principal submatrix has positive determinant, as can also be seen by noting that the inverse is diagonally dominant with positive diagonal, so that $A_n^{-1}$ is positive definite and hence $A_n$ is positive definite.
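The determinant and inverse formulas, and positive definiteness, are easy to confirm numerically. A Python/NumPy sketch:

```python
# Check the determinant, the inverse formula (2), and positive
# definiteness of the Kac-Murdock-Szego matrix for rho = 0.6.
import numpy as np

n, rho = 6, 0.6
A = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))

# Tridiagonal inverse from (2).
T = (np.diag(np.full(n, 1 + rho**2))
     - rho * (np.eye(n, k=1) + np.eye(n, k=-1)))
T[0, 0] = T[-1, -1] = 1.0
Ainv = T / (1 - rho**2)

print(np.max(np.abs(A @ Ainv - np.eye(n))))              # tiny
print(abs(np.linalg.det(A) - (1 - rho**2)**(n - 1)))     # tiny
print(np.all(np.linalg.eigvalsh(A) > 0))                 # positive definite
```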

For $-1 \le \rho \le 1$, $A_n(\rho)$ is positive semidefinite, so it is a correlation matrix for $\rho$ in this range.

For $0 \le \rho \le 1$, $A_n(\rho)$ is totally nonnegative, that is, every submatrix has nonnegative determinant. For $0 < \rho < 1$, we know that $A_n(\rho)$ is nonsingular, and it is clearly irreducible, and together with the total nonnegativity these properties imply that the eigenvalues are distinct and positive (this can also be deduced from the fact that the inverse is tridiagonal with nonzero subdiagonal and superdiagonal entries).

It is straightforward to verify that $A_n$ has a factorization $A_n = LDL^*$ with $L$ the inverse of a unit lower bidiagonal matrix:

$\notag L = \begin{bmatrix} 1 & & & &\\ -\rho & 1 & & &\\ & -\rho & 1 & &\\ & & \ddots & \ddots &\\ & & &-\rho &1 \end{bmatrix}^{-1}, \quad D = \mathrm{diag}(1, 1-\rho^2, 1-\rho^2, \dots, 1-\rho^2). \qquad (3)$

This factorization can be used to prove all the properties stated above.
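A direct Python/NumPy check of the factorization (3):

```python
# Check the factorization (3): A_n = L D L^T with L the inverse of a unit
# lower bidiagonal matrix, for rho = 0.8.
import numpy as np

n, rho = 5, 0.8
A = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))

B = np.eye(n) - rho * np.eye(n, k=-1)     # unit lower bidiagonal
L = np.linalg.inv(B)
D = np.diag([1.0] + [1 - rho**2] * (n - 1))
print(np.max(np.abs(L @ D @ L.T - A)))    # tiny
```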

From (1) and (2) we can derive the formulas

\notag \begin{aligned} \|A_n\|_{1,\infty} &= 2 \left(\displaystyle\frac{1-\rho^{k+1}}{1-\rho}\right) -1 - (2k-n+1) \rho^k, \quad k = \lfloor n/2 \rfloor, \\ \|A_n^{-1}\|_{1,\infty} &= (1+2\rho+\rho^2)/(1-\rho^2) = (1+\rho)/(1-\rho). \end{aligned}

Hence we have an explicit formula for the condition number $\kappa_p(A_n) = \|A_n\|_{1,\infty} \|A_n^{-1}\|_{1,\infty}$ for $p = 1,\infty$.
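Both norm formulas are easy to verify numerically (by symmetry the $1$- and $\infty$-norms coincide here). A Python/NumPy check for $0 < \rho < 1$:

```python
# Check the norm formulas for the Kac-Murdock-Szego matrix, n = 7,
# rho = 0.5 (the 1- and infinity-norms agree by symmetry).
import numpy as np

n, rho = 7, 0.5
A = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))

k = n // 2
norm1 = 2 * (1 - rho**(k + 1)) / (1 - rho) - 1 - (2*k - n + 1) * rho**k
print(abs(np.linalg.norm(A, 1) - norm1))

inv_norm = (1 + rho) / (1 - rho)
print(abs(np.linalg.norm(np.linalg.inv(A), np.inf) - inv_norm))
```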

We can allow $\rho$ to be complex, in which case the definition (1) is modified to conjugate the elements below the diagonal. The factorization $A = LDL^*$ continues to hold with $D$ in $(3)$ replaced by $\mathrm{diag}(1, 1-|\rho|^2, 1-|\rho|^2, \dots, 1-|\rho|^2)$.

The Kac–Murdock–Szegö matrix (for real or complex $\rho$) can be generated in MATLAB as gallery('kms',n,rho).

## References

This is a minimal set of references, which contain further useful references within.

# Ian Gladwell (1944–2021)

By Len Freeman, Nick Higham and Jim Nagy.

Ian Gladwell passed away on May 23, 2021 at the age of 76. He was born in Bolton, Lancashire in 1944. He did his secondary education at Thornleigh College, Bolton and was an undergraduate at Hertford College, University of Oxford, from where he graduated with a B.A. Hons. in Mathematics in 1966. He did his postgraduate studies at the University of Manchester, gaining an MSc in Numerical Analysis and Computing in 1967 and a PhD in Numerical Analysis in 1970. He was the first PhD student of Christopher T. H. Baker (1939–2017).

Ian was appointed Lecturer in the Department of Mathematics at the University of Manchester in 1969 and progressed to Senior Lecturer in 1980. He was a member of the Numerical Analysis Group (along with Christopher Baker, Len Freeman, George Hall, Will McLewin, Jack Williams (1943–2015), and Joan Walsh (1932–2017)) who, together with colleagues at UMIST, made Manchester a major centre of numerical analysis activity from the 1970s onwards.

Ian’s research focused on ordinary differential equation (ODE) initial value problems and boundary value problems, mathematical software, and parallel computing, and he had a wide knowledge of numerical analysis and scientific computing. He was perhaps best known for his pioneering work on mathematical software for the numerical solution of ODEs, much of which was published in the NAG Library and in the journal ACM Transactions on Mathematical Software. A particular topic of interest for Ian was algorithms and software for the numerical solution of almost block diagonal linear systems, which arise in discretizations of boundary value problems for ODEs and partial differential equations.

More details on Ian’s publications can be found at his MathSciNet author profile (subscription required). It lists 55 publications with 19 co-authors, among which Richard Brankin, Larry Shampine, Ruth Thomas, and Marcin Paprzycki are his most frequent co-authors.

In his time at Manchester he collaborated with a variety of colleagues both inside and outside the department, and he was always ready to offer advice to students and colleagues across the campus on numerical computing (as evidenced by the common sight of people waiting outside his office door to be seen).

Ian was instrumental in setting up the Manchester Numerical Analysis Reports, a long-running technical report series to which he contributed many items.

Ian had a five-month visit to the Department of Computer Science at the University of Toronto in 1975. Links between the Manchester and Toronto departments were strong, and over the years numerical analysts made several visits in both directions.

In the mid 1980s, Ian was one of the first people in the UK to have an email address: igladwel@uk.ac.ucl.cs. His email account was on a computer at University College London (UCL), because UCL hosted a gateway between JANET, the UK computer network, and ARPANET in the USA. Ian kindly allowed Nick Higham and Len Freeman use of the account to communicate with colleagues in the US.

Ian had long-standing collaborations with the Numerical Algorithms Group (NAG) Ltd., Oxford. He contributed many codes and associated documentation to the NAG Library, principally in ordinary differential equations. In a 1979 paper in ACM Trans. Math. Software he wrote

“When the NAG library structure was designed in the late 1960s, it was decided to devote a chapter, named DO2, to the numerical solution of systems of ordinary differential equations and that this chapter would be contributed by members of the Department of Mathematics, University of Manchester, and in particular by J. E. Walsh, G. Hall, and the author.”

Ian was a long-term member of NAG and of the NAG Technical Policy Committee, and during 1986 he held a Royal Society/Science and Engineering Research Council Industrial Fellowship at NAG.

Nick Higham was taught by Ian in an upper level undergraduate course “Numerical Linear Algebra” that Ian was giving for the first time, in 1981. As an MSc student and PhD student he benefited greatly from Ian’s advice about how to think about and do research.

Ian moved to the Department of Mathematics at Southern Methodist University (SMU), Dallas, as a Visiting Associate Professor in 1987, which became a permanent position in 1988. He had collaborated during the 1980s with Larry Shampine, who was working at Sandia National Laboratories until he moved to the SMU Mathematics Department in 1986.

Ian served as chair of the department 1988–1994 and again in 1998. He was also Director of Graduate Studies from 2005 to 2008. Ian excelled as a mentor in these roles, which is recognized by a PhD fellowship in his honor. Jim Nagy was extremely fortunate to have Ian as his first department chair in 1992; Ian mentored him during the challenging tenure-track years, advising on research, teaching and more, including extensive editing of his first successful grant proposals.

Ian wrote the book Solving ODEs with MATLAB (2003) with Larry Shampine and Skip Thompson, which was described as “an excellent treatment of the fundamentals for solving ODEs using MATLAB” in Mathematical Reviews. It is Ian’s most highly cited work, with around 900 citations on Google Scholar at the time of writing.

Ian served as editor for ten journals, including as Associate Editor (2002–2005) and Editor-in-Chief (2005–2008) of ACM Transactions on Mathematical Software, as Associate Editor of the IMA Journal of Numerical Analysis (1988–2007), and as Associate Editor of Scalable Computing: Practice and Experience (2005–2010). A special issue of the latter journal in 2009 was dedicated to him on the occasion of his retirement from SMU.

Ian was a long-term member of the Institute of Mathematics and Its Applications, of which he was a Fellow, and the Society for Industrial and Applied Mathematics.

According to the Mathematics Genealogy Project, Ian had 23 PhD students, equally split between Manchester and SMU, with one jointly supervised at the University of Bari.