What Is a Block Matrix?

A matrix is a rectangular array of numbers treated as a single object. A block matrix is a matrix whose elements are themselves matrices, which are called submatrices. By allowing a matrix to be viewed at different levels of abstraction, the block matrix viewpoint enables elegant proofs of results and facilitates the development and understanding of numerical algorithms.

A block matrix is defined in terms of a partitioning, which breaks a matrix into contiguous pieces. The most common and important case is for an n\times n matrix to be partitioned as a block 2\times 2 matrix (two block rows and two block columns). For n = 4, partitioning into 2\times 2 blocks gives

\notag   A = \left[\begin{array}{cc|cc}         a_{11} & a_{12} & a_{13} & a_{14}\\         a_{21} & a_{22} & a_{23} & a_{24}\\\hline         a_{31} & a_{32} & a_{33} & a_{34}\\         a_{41} & a_{42} & a_{43} & a_{44}\\         \end{array}\right]    =  \begin{bmatrix}         A_{11} & A_{12} \\         A_{21} & A_{22}        \end{bmatrix},

where

\notag  A_{11} =   A(1\colon2,1\colon2) = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix},

and similarly for the other blocks. The diagonal blocks in a partitioning of a square matrix are usually square (but not necessarily so), and they do not have to be of the same dimensions. This same 4\times 4 matrix could be partitioned as

\notag   A = \left[\begin{array}{c|ccc}         a_{11} & a_{12} & a_{13} & a_{14}\\\hline         a_{21} & a_{22} & a_{23} & a_{24}\\         a_{31} & a_{32} & a_{33} & a_{34}\\         a_{41} & a_{42} & a_{43} & a_{44}\\         \end{array}\right]    =  \begin{bmatrix}         A_{11} & A_{12} \\         A_{21} & A_{22}        \end{bmatrix},

where A_{11} = (a_{11}) is a scalar, A_{21} is a column vector, and A_{12} is a row vector.

The sum C = A + B of two block matrices A = (A_{ij}) and B = (B_{ij}) of the same dimension is obtained by adding blockwise as long as A_{ij} and B_{ij} have the same dimensions for all i and j, and the result has the same block structure: C_{ij} = A_{ij}+B_{ij}.

The product C = AB of an m\times n matrix A = (A_{ij}) and an n\times p matrix B = (B_{ij}) can be computed as C_{ij} = \sum_k A_{ik}B_{kj} as long as the products A_{ik}B_{kj} are all defined. In this case the matrices A and B are said to be conformably partitioned for multiplication. Here, C has as many block rows as A and as many block columns as B. For example,

\notag   AB =        \begin{bmatrix}         A_{11} & A_{12} \\         A_{21} & A_{22}        \end{bmatrix}        \begin{bmatrix}         B_{11} & B_{12} \\         B_{21} & B_{22}        \end{bmatrix}   =  \begin{bmatrix}         A_{11} B_{11} + A_{12} B_{21} & A_{11} B_{12} + A_{12} B_{22} \\         A_{21} B_{11} + A_{22} B_{21} & A_{21} B_{12} + A_{22} B_{22}        \end{bmatrix}

as long as all eight products A_{ik}B_{kj} are defined.
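As a quick numerical illustration, here is a short MATLAB check (a sketch with randomly chosen, conformably partitioned blocks) that the block 2\times 2 product formula agrees with the ordinary product.

A11 = rand(2);   A12 = rand(2,3); A21 = rand(3,2); A22 = rand(3);
B11 = rand(2);   B12 = rand(2,4); B21 = rand(3,2); B22 = rand(3,4);
A = [A11 A12; A21 A22];  B = [B11 B12; B21 B22];
C11 = A11*B11 + A12*B21;  C12 = A11*B12 + A12*B22;
C21 = A21*B11 + A22*B21;  C22 = A21*B12 + A22*B22;
norm(A*B - [C11 C12; C21 C22])   % of order the unit roundoff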

Block matrix notation is an essential tool in numerical linear algebra. Here are some examples of its usage.

Matrix Factorization

For an n\times n matrix A with nonzero (1,1) element \alpha we can write

\notag   A =  \begin{bmatrix}         \alpha & b^T \\           c & D        \end{bmatrix}    =        \begin{bmatrix}         1 & 0 \\           c/\alpha & I_{n-1}        \end{bmatrix}       \begin{bmatrix}         \alpha  & b^T \\          0 & D - cb^T/\alpha        \end{bmatrix} =: L_1U_1.

The first row and column of L_1 have the correct form for a unit lower triangular matrix and likewise the first row and column of U_1 have the correct form for an upper triangular matrix. If we can find an LU factorization L_2U_2 of the (n-1)\times (n-1) Schur complement D - cb^T/\alpha then A = L_1\mathrm{diag}(1,L_2)\cdot \mathrm{diag}(1,U_2)U_1 is an LU factorization of A. This construction is the basis of an inductive proof of the existence of an LU factorization (provided all the pivots are nonzero) and it also yields an algorithm for computing it.
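This construction translates directly into a recursive algorithm. The following MATLAB function is a minimal sketch (no pivoting, so it assumes all the pivots are nonzero; the name lu_block is just for illustration).

function [L,U] = lu_block(A)
% LU_BLOCK   LU factorization via the block 2x2 construction (no pivoting).
n = size(A,1);
if n == 1, L = 1; U = A; return, end
alpha = A(1,1); b = A(1,2:n); c = A(2:n,1); D = A(2:n,2:n);
[L2,U2] = lu_block(D - c*b/alpha);      % recur on the Schur complement
L = [1, zeros(1,n-1); c/alpha, L2];
U = [alpha, b; zeros(n-1,1), U2];
end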

The same type of construction applies to other factorizations, such as Cholesky factorization, QR factorization, and the Schur decomposition.

Matrix Inverse

A useful formula for the inverse of a nonsingular block triangular matrix

\notag      T = \begin{bmatrix}         T_{11} & T_{12} \\           0 & T_{22}        \end{bmatrix}

is

\notag      T^{-1}        =        \begin{bmatrix}      T_{11}^{-1} & - T_{11}^{-1}T_{12}T_{22}^{-1}\\               0  & T_{22}^{-1}        \end{bmatrix},

which has the special case

\notag        \begin{bmatrix}         I & X \\           0 & I        \end{bmatrix}^{-1}    =        \begin{bmatrix}              I   & -X\\               0  & I        \end{bmatrix}.

If T is upper triangular then so are T_{11} and T_{22}. By taking the dimension of T_{11} to be the nearest integer to n/2, this formula can be used to construct a divide and conquer algorithm for computing T^{-1}.
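Here is a minimal MATLAB sketch of such a divide and conquer inverse for a nonsingular upper triangular T (the name invt_dc is just for illustration; a practical code would switch to a direct method below some block size).

function X = invt_dc(T)
% INVT_DC   Inverse of a nonsingular upper triangular matrix, divide and conquer.
n = size(T,1);
if n == 1, X = 1/T(1,1); return, end
k = round(n/2);
X11 = invt_dc(T(1:k,1:k));
X22 = invt_dc(T(k+1:n,k+1:n));
X12 = -X11*T(1:k,k+1:n)*X22;        % block formula for the (1,2) block
X = [X11, X12; zeros(n-k,k), X22];
end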

We note that \det(T) = \det(T_{11}) \det(T_{22}), a fact that will be used in the next section.

Determinantal Formulas

Block matrices provide elegant proofs of many results involving determinants. For example, consider the equations

\notag        \begin{bmatrix}         I & -A \\           0 & I        \end{bmatrix}        \begin{bmatrix}         I+AB & 0 \\           B & I        \end{bmatrix}      =        \begin{bmatrix}               I  & -A\\               B  & I        \end{bmatrix}      =        \begin{bmatrix}         I & 0 \\         B & I+BA        \end{bmatrix}        \begin{bmatrix}         I & -A \\         0 & I        \end{bmatrix},

which hold for any A and B such that AB and BA are defined. Taking determinants gives the formula \det(I + AB) = \det(I + BA). In particular we can take A = x, B = y^T, for n-vectors x and y, giving \det(I + xy^T) = 1 + y^Tx.
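These determinantal identities are easy to check numerically. Here is a quick MATLAB illustration with random rectangular A and B (sizes chosen arbitrarily).

A = rand(3,5); B = rand(5,3);
abs(det(eye(3) + A*B) - det(eye(5) + B*A))     % of order the unit roundoff
x = rand(4,1); y = rand(4,1);
abs(det(eye(4) + x*y') - (1 + y'*x))           % special case det(I + xy^T) = 1 + y^Tx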

Constructing Matrices with Required Properties

We can sometimes build a matrix with certain desired properties by a block construction. For example, if X is an n\times n involutory matrix (X^2 = I) then

\notag        \begin{bmatrix}         X & I \\         0 & -X        \end{bmatrix}

is a (block triangular) 2n\times 2n involutory matrix. And if A and B are any two n\times n matrices then

\notag        \begin{bmatrix}               I - BA & B \\               2A-ABA & AB-I        \end{bmatrix}

is involutory.
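A quick MATLAB check of the last construction, for random A and B:

n = 4; A = rand(n); B = rand(n);
M = [eye(n) - B*A,  B;  2*A - A*B*A,  A*B - eye(n)];
norm(M^2 - eye(2*n))      % of order the unit roundoff, so M is involutory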

The Anti Block Diagonal Trick

For n\times n matrices A and B consider the anti block diagonal matrix

\notag     X = \begin{bmatrix}               0 & A \\               B & 0        \end{bmatrix}.

Note that

\notag     X^2 = \begin{bmatrix}               AB & 0 \\               0 & BA        \end{bmatrix}, \quad     X^{-1} = \begin{bmatrix}               0 & B^{-1} \\               A^{-1} & 0        \end{bmatrix}.

Using these properties one can show a relation between the matrix sign function and the principal matrix square root:

\notag     \mathrm{sign}\left(        \begin{bmatrix} 0 & A \\ I & 0 \end{bmatrix}                 \right)         = \begin{bmatrix} 0 & A^{1/2} \\ A^{-1/2} & 0 \end{bmatrix}.

This allows one to derive iterations for computing the matrix square root and its inverse from iterations for computing the matrix sign function.
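For example, applying the Newton iteration for the matrix sign function, S \leftarrow (S + S^{-1})/2, to the block matrix above yields iterates whose (1,2) and (2,1) blocks converge to A^{1/2} and A^{-1/2}. The following MATLAB sketch illustrates this for a symmetric positive definite A (chosen so that the principal square root certainly exists); the fixed iteration count is purely for illustration.

n = 4; A = rand(n); A = A'*A + n*eye(n);    % symmetric positive definite test matrix
X = [zeros(n) A; eye(n) zeros(n)];
for k = 1:20
    X = (X + inv(X))/2;                     % Newton iteration for the matrix sign function
end
norm(X(1:n,n+1:2*n)^2 - A)/norm(A)          % (1,2) block approximates A^{1/2}
norm(X(n+1:2*n,1:n)*X(1:n,n+1:2*n) - eye(n))% (2,1) block approximates A^{-1/2}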

It is easy to derive explicit formulas for all the powers of X, and hence for any power series evaluated at X. In particular, we have the formula

\notag   \mathrm{e}^X = \left[\begin{array}{cc}                   \cosh\sqrt{AB} & A (\sqrt{BA})^{-1} \sinh \sqrt{BA}                        \\[\smallskipamount]                       B(\sqrt{AB})^{-1} \sinh \sqrt{AB} &                      \cosh\sqrt{BA}                  \end{array}\right],

where \sqrt{Y} denotes any square root of Y. With B = I, this formula arises in the solution of the ordinary differential equation initial value problem y'' + Ay = 0, y(0)=y_0, y'(0)=y'_0.
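The formula can be checked numerically. In the following MATLAB sketch the choice B = A^T makes AB and BA symmetric positive semidefinite, so sqrtm returns an unproblematic square root; coshm and sinhm are ad hoc helpers defined via expm.

n = 3; A = rand(n); B = A';
S1 = sqrtm(A*B); S2 = sqrtm(B*A);
coshm = @(Y) (expm(Y) + expm(-Y))/2;      % matrix hyperbolic cosine via expm
sinhm = @(Y) (expm(Y) - expm(-Y))/2;      % matrix hyperbolic sine via expm
E = [coshm(S1), A*(S2\sinhm(S2)); B*(S1\sinhm(S1)), coshm(S2)];
norm(expm([zeros(n) A; B zeros(n)]) - E)/norm(E)   % of order the unit roundoff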

The most well known instance of the trick is when B = A^T. The eigenvalues of

\notag     X = \begin{bmatrix}               0 & A \\               A^T & 0        \end{bmatrix}

are plus and minus the singular values of A, together with |m-n| additional zeros if A is m\times n with m \ne n, and the eigenvectors of X and the singular vectors of A are also related. Consequently, by applying results or algorithms for symmetric matrices to X one obtains results or algorithms for the singular value decomposition of A.
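A short MATLAB verification of the eigenvalue-singular value correspondence (with arbitrary dimensions m = 5, n = 3):

m = 5; n = 3; A = rand(m,n);
X = [zeros(m) A; A' zeros(n)];
e = sort(eig(X));                          % ascending
s = svd(A);                                % descending
norm(e - [-s; zeros(m-n,1); flipud(s)])    % of order the unit roundoff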

References

This is a minimal set of references, which contain further useful references within.

  • Gene Golub and Charles F. Van Loan, Matrix Computations, fourth edition, Johns Hopkins University Press, Baltimore, MD, USA, 2013.
  • Nicholas J. Higham, Functions of Matrices: Theory and Computation, Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 2008. (Sections 1.5 and 1.6 for the theory of matrix square roots.)
  • Roger A. Horn and Charles R. Johnson, Matrix Analysis, second edition, Cambridge University Press, 2013.

Related Blog Posts

This article is part of the “What Is” series, available from https://nhigham.com/category/what-is and in PDF form from the GitHub repository https://github.com/higham/what-is.


What Is a Householder Matrix?

A Householder matrix is an n\times n orthogonal matrix of the form

\notag         P = I - \displaystyle\frac{2}{v^Tv} vv^T, \qquad 0 \ne v \in\mathbb{R}^n.

It is easily verified that P is

  • orthogonal (P^TP = I),
  • symmetric (P^T = P),
  • involutory (P^2 = I, that is, P is a square root of the identity matrix),

where the last property follows from the first two.

A Householder matrix is a rank-1 perturbation of the identity matrix and so all but one of its eigenvalues are 1. The eigensystem can be fully described as follows.

  • P has an eigenvalue -1 with eigenvector v, since Pv = -v.
  • P has n-1 eigenvalues 1 with eigenvectors any set of n-1 linearly independent vectors orthogonal to v, which can be taken to be mutually orthogonal: Px = x for every such x.

P has trace n-2 and determinant -1, as can be derived directly or deduced from the facts that the trace is the sum of the eigenvalues and the determinant is the product of the eigenvalues.

For n = 2, a Householder matrix can be written as

\notag      P = \begin{bmatrix} \cos\theta & \sin\theta\\                          \sin\theta & -\cos\theta           \end{bmatrix}.

Simple examples of Householder matrices are obtained by choosing v = e = [1,1,\dots,1]^T, for which P = I  - (2/n)ee^T. For n=2,3,4,5,6 we obtain the matrices

\notag    \begin{gathered}        \left[\begin{array}{@{\mskip2mu}rr@{\mskip2mu}}                      0 & -1 \\ -1 & 0        \end{array}\right], \quad   \displaystyle\frac{1}{3}        \left[\begin{array}{@{\mskip2mu}rrr@{\mskip2mu}}                       1 &  -2 &  -2\\                      -2 &   1 &  -2\\                      -2 &  -2 &   1\\        \end{array}\right], \quad    \displaystyle\frac{1}{2}        \left[\begin{array}{@{\mskip2mu}rrrr@{\mskip2mu}}                        1 &   -1 &   -1 &   -1\\                       -1 &    1 &   -1 &   -1\\                       -1 &   -1 &    1 &   -1\\                       -1 &   -1 &   -1 &    1\\        \end{array}\right], \\   \displaystyle\frac{1}{5}        \left[\begin{array}{@{\mskip2mu}rrrrr@{\mskip2mu}}    3 & -2 & -2 & -2 & -2\\ -2 & 3 & -2 & -2 &    -2\\ -2 & -2 & 3 & -2 & -2\\ -2 & -2 & -2 & 3 & -2\\ -2 & -2 & -2 & -2 & 3   \end{array}\right], \quad  \displaystyle\frac{1}{3} \left[\begin{array}{@{\mskip2mu}rrrrrr@{\mskip2mu}}   2 & -1 & -1 & -1 & -1 & -1\\ -1 & 2 & -1 & -1 & -1 & -1\\   -1 & -1 & 2 & -1 & -1 & -1\\ -1 & -1 & -1 & 2 & -1 & -1\\ -1 & -1 & -1 & -1 & 2 & -1\\   -1 & -1 & -1 & -1 & -1 & 2 \end{array}\right].    \end{gathered}

Note that the 4\times 4 matrix is 1/2 times a Hadamard matrix.

Applying P to a vector x gives

\notag    Px = x - \displaystyle\left( \frac{2 v^Tx}{v^Tv} \right) v.

This equation shows that P reflects x about the hyperplane {\mathrm{span}}(v)^{\perp}, as illustrated in the following diagram, which explains why P is sometimes called a Householder reflector. Another way of expressing this property is to write x = \alpha v + z, where z is orthogonal to v. Then Px = -\alpha v + z, so the component of x in the direction v has been reversed. If we take v = e_i, the ith unit vector, then P = I - 2e_ie_i^T = \mathrm{diag}(1,1,\dots,-1,1,\dots,1), which has -1 in the (i,i) position. In this case premultiplying a vector by P flips the sign of the ith component.

[Figure: reflection of x about the hyperplane {\mathrm{span}}(v)^{\perp}.]

Transforming a Vector

Householder matrices are powerful tools for introducing zeros into vectors. Suppose we are given vectors x and y and wish to find a Householder matrix P such that Px=y. Since P is orthogonal, we require that \|x\|_2 = \|y\|_2, and since P can never equal the identity matrix we also require x \ne y. Now

Px = y \quad \Longleftrightarrow \quad             x - 2 \left( \displaystyle\frac{v^Tx}{v^Tv} \right)  v = y,

and this last equation has the form \alpha v = x-y for some \alpha. But P is independent of the scaling of v, so we can set \alpha=1. Now with v=x-y we have

\notag        v^Tv = x^Tx + y^Ty  -2x^Ty

and, since x^Tx = y^Ty,

\notag       v^Tx = x^Tx - y^Tx = \frac{1}{2} v^Tv.

Therefore

\notag       Px = x - v = y,

as required. Most often we choose y to be zero in all but its first component.
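For example, to map x to y = \|x\|_2 e_1 we can take v = x - y, as the following MATLAB sketch shows. (In practice the sign of y is chosen to avoid cancellation in forming v; this simple version assumes x is not already a positive multiple of e_1.)

n = 6; x = randn(n,1);
y = [norm(x); zeros(n-1,1)];
v = x - y;
P = eye(n) - 2*(v*v')/(v'*v);
norm(P*x - y)            % of order the unit roundoff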

Square Roots

What can we say about square roots of a Householder matrix, that is, matrices X such that X^2 = P?

We note first that the eigenvalues of X are the square roots of those of P and so n-1 of them will be \pm 1 and one will be \pm \mathrm{i}. This means that X cannot be real, as the nonreal eigenvalues of a real matrix must appear in complex conjugate pairs.

Write P = I - 2vv^T, where v is normalized so that v^Tv = 1. It is natural to look for a square root of the form X = I - \theta vv^T. Setting X^2 = P leads to the quadratic equation \theta^2-2\theta + 2 = 0, and hence \theta = 1 \pm \mathrm{i}. As expected, these two square roots are complex even though P is real. As an example, \theta = 1 - \mathrm{i} gives the following square root of the matrix above corresponding to v = e/n^{1/2} with n = 3:

\notag X = \displaystyle\frac{1}{3} \left[\begin{array}{@{\mskip2mu}rrr} 2+\mathrm{i} & -1+\mathrm{i} & -1+\mathrm{i}\\ -1+\mathrm{i} & 2+\mathrm{i} & -1+\mathrm{i}\\ -1+\mathrm{i} & -1+\mathrm{i} & 2+\mathrm{i} \end{array}\right].
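A quick MATLAB check that this X is indeed a square root of P:

n = 3; v = ones(n,1)/sqrt(n);              % normalized v = e/sqrt(n)
P = eye(n) - 2*(v*v');
X = eye(n) - (1 - 1i)*(v*v');              % theta = 1 - i
norm(X^2 - P)                              % of order the unit roundoff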

A good way to understand all the square roots is to diagonalize P, which can be done by a similarity transformation with a Householder matrix! Normalizing v^Tv = 1 again, let w = v - e_1 and H = I - 2ww^T/(w^Tw). Then from the construction above we know that Hv = e_1. Hence

\notag   H^T\!PH = HPH = I - 2 Hv v^T\!H = I - 2 e_1e_1^T         = \mathrm{diag}(-1,1,1,\dots,1)=: D.

Then P = HDH^T and so X = H \sqrt{D} H^T gives 2^n square roots on taking all possible combinations of signs on the diagonal for \sqrt{D}. Because P has repeated eigenvalues these are not the only square roots. The infinitely many others are obtained by taking non-diagonal square roots of D, which are of the form \mathrm{diag}(\pm i, Y), where Y is any non-diagonal square root of the (n-1)\times (n-1) identity matrix, which in particular could be a Householder matrix!

Block Householder Matrix

It is possible to define an n\times n block Householder matrix in terms of a given Z\in\mathbb{R}^{n\times p}, where n\ge p, as

\notag      P = I - 2 Z(Z^TZ)^+Z^T.

Here, “+” denotes the Moore–Penrose pseudoinverse. For p=1, P clearly reduces to a standard Householder matrix. It can be shown that (Z^TZ)^+Z^T = Z^+ (this is most easily proved using the SVD), and so

P = I - 2 ZZ^+ = I - 2 P_Z,

where P_Z = ZZ^+ is the orthogonal projector onto the range of Z (that is, \mathrm{range}(P_Z) = \mathrm{range}(Z), P_Z^2 = P_Z, and P_Z = P_Z^T). Hence, like a standard Householder matrix, P is symmetric, orthogonal, and involutory. Furthermore, premultiplication of a matrix by P has the effect of reversing the component in the range of Z.

As an example, here is the block Householder matrix corresponding to Z = \bigl[\begin{smallmatrix} 1 & 2 & 3 & 4\\ 5 & 6 & 7 & 8 \end{smallmatrix}\bigr]^T:

\notag \displaystyle\frac{1}{5} \left[\begin{array}{@{\mskip2mu}rrrr@{\mskip2mu}} -2 & -4 & -1 & 2\\ -4 & 2 & -2 & -1\\ -1 & -2 & 2 & -4\\ 2 & -1 & -4 & -2 \end{array}\right].

One can show (using the SVD again) that the eigenvalues of P are -1 repeated r times and 1 repeated n-r times, where r = \mathrm{rank}(Z). Hence \mathrm{trace}(P) = n - 2r and \det(P) = (-1)^r.
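These properties are easy to check in MATLAB (a quick sketch with a random full-rank Z):

n = 6; p = 2; Z = randn(n,p);
P = eye(n) - 2*(Z*pinv(Z));                             % block Householder matrix
norm(P - P'), norm(P'*P - eye(n)), norm(P^2 - eye(n))   % symmetric, orthogonal, involutory
trace(P)                                                % n - 2*rank(Z) = 2 here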

Schreiber and Parlett (1988) note the representation for n = 2k,

\notag    P =  \pm \mathrm{diag}(Q_1,Q_2)         \begin{bmatrix} \cos(2\Theta) & \sin(2\Theta) \\                         \sin(2\Theta) & -\cos(2\Theta)         \end{bmatrix}         \mathrm{diag}(Q_1,Q_2)^T,

where Q_1 and Q_2 are orthogonal and \Theta is symmetric positive definite. This formula neatly generalizes the formula for a standard Householder matrix for n = 2 given above, and a similar formula holds for odd n.

Schreiber and Parlett also show how given E\in\mathbb{R}^{n\times p} (n > p) one can construct a block Householder matrix H such that

\notag      HE = \begin{bmatrix} F \\ 0 \end{bmatrix},          \qquad F \in \mathbb{R}^{p\times p}.

The polar decomposition plays a key role in the theory and algorithms for such H.

Rectangular Householder Matrix

We can define a rectangular Householder matrix as follows. Let m > n, u \in \mathbb{R}^n, v \in \mathbb{R}^{m-n}, and

\notag   P =   \begin{bmatrix} I_n\\0 \end{bmatrix}   + \alpha   \begin{bmatrix}     u\\v   \end{bmatrix}u^T =   \begin{bmatrix}     I_n + \alpha u u^T\\ \alpha vu^T   \end{bmatrix} \in \mathbb{R}^{m\times n}.

Then P^TP = I, that is, P has orthonormal columns, if

\alpha = \displaystyle\frac{-2}{u^Tu + v^Tv}.

Of course, P is just the first n columns of the Householder matrix built from the vector [u^T~v^T]^T.
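A quick MATLAB check that P has orthonormal columns:

m = 7; n = 3; u = randn(n,1); v = randn(m-n,1);
alpha = -2/(u'*u + v'*v);
P = [eye(n); zeros(m-n,n)] + alpha*[u; v]*u';
norm(P'*P - eye(n))        % of order the unit roundoff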

Historical Note

The earliest appearance of Householder matrices is in the book by Turnbull and Aitken (1932). These authors show that if \|x\|_2 = \|y\|_2 (x\ne -y) then a unitary matrix of the form R = \alpha zz^* - I (in their notation) can be constructed so that Rx = y. They use this result to prove the existence of the Schur decomposition. The first systematic use of Householder matrices for computational purposes was by Householder (1958) who used them to construct the QR factorization.

References

This is a minimal set of references, which contain further useful references within.

Related Blog Posts

This article is part of the “What Is” series, available from https://nhigham.com/category/what-is and in PDF form from the GitHub repository https://github.com/higham/what-is.


What is a Sparse Matrix?

A sparse matrix is one with a large number of zero entries. A more practical definition is that a matrix is sparse if the number or distribution of the zero entries makes it worthwhile to avoid storing or operating on the zero entries.

Sparsity is not to be confused with data sparsity, which refers to the situation where, because of redundancy, the data can be efficiently compressed while controlling the loss of information. Data sparsity typically manifests itself in low rank structure, whereas sparsity is solely a property of the pattern of nonzeros.

Important sources of sparse matrices include discretization of partial differential equations, image processing, optimization problems, and networks and graphs. In designing algorithms for sparse matrices we have several aims.

  • Store the nonzeros only, in some suitable data structure.
  • Avoid operations involving only zeros.
  • Preserve sparsity, that is, minimize fill-in (a zero element becoming nonzero).

We wish to achieve these aims without sacrificing speed, stability, or reliability.

An important class of sparse matrices is banded matrices. A matrix A has bandwidth p if the elements outside the main diagonal and the first p superdiagonals and subdiagonals are zero, that is, if a_{ij} = 0 for j>i+p and i>j+p.

The most common type of banded matrix is a tridiagonal matrix (p = 1), of which an archetypal example is the second-difference matrix, illustrated for n = 5 by

\notag    A_5 = \left[    \begin{array}{@{}*{4}{r@{\mskip10mu}}r}                 2 &  -1  & 0  & 0 & 0\\                 -1 & 2  & -1  & 0 & 0\\                  0 & -1  & 2 & -1 & 0\\                  0 &  0  &-1 & 2  & -1\\                  0 &  0  & 0 & -1 & 2    \end{array}\right].

This matrix (or more precisely its negative) corresponds to a centered finite difference approximation to a second derivative: f''(x) \approx (f(x+h) -2 f(x) + f(x-h))/h^2.
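In MATLAB such matrices should be built in sparse form, so that only the nonzeros are stored. For example (a simple sketch; gallery('tridiag',n) is an equivalent shortcut):

n = 5;
e = ones(n,1);
A = spdiags([-e 2*e -e], -1:1, n, n);     % sparse second-difference matrix
full(A)                                   % the matrix A_5 displayed above
nnz(A)                                    % 3*n - 2 nonzeros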

The following plots show the sparsity patterns for two symmetric positive definite matrices. Here, the nonzero elements are indicated by dots.

[Figure: sparsity patterns of the two matrices.]

The matrices are both from power network problems and they are taken from the SuiteSparse Matrix Collection (https://sparse.tamu.edu/). The matrix names are shown in the titles and the nz values below the x-axes are the numbers of nonzeros. The plots were produced using MATLAB code of the form

W = ssget('HB/494_bus'); A = W.A; spy(A)

where the ssget function is provided with the collection. The matrix on the left shows no particular pattern for the nonzero entries, while that on the right has a structure comprising four diagonal blocks with a relatively small number of elements connecting the blocks.

It is important to realize that while the sparsity pattern often reflects the structure of the underlying problem, it is arbitrary in that it will change under row and column reorderings. If we are interested in solving Ax = b, for example, then for any permutation matrices P and Q we can form the transformed system PAQ (Q^*x) = Pb, which has a coefficient matrix PAQ having permuted rows and columns, a permuted right-hand side Pb, and a permuted solution. We usually wish to choose the permutations to minimize the fill-in or (almost equivalently) the number of nonzeros in the LU factors L and U. Various methods have been derived for this task; they are necessarily heuristic because finding the minimum is in general an NP-complete problem. When A is symmetric we take Q = P^T in order to preserve symmetry.

For the HB/494_bus matrix the symmetric reverse Cuthill-McKee permutation gives a reordered matrix with the following sparsity pattern, plotted with the MATLAB commands

r = symrcm(A); spy(A(r,r))

[Figure: sparsity pattern of the reordered HB/494_bus matrix.]

The reordered matrix has a variable band structure that is characteristic of the symmetric reverse Cuthill-McKee permutation. The number of nonzeros is, of course, unchanged by reordering, so what has been gained? The next plots show the Cholesky factors of the HB/494_bus matrix and the reordered matrix. The Cholesky factor for the reordered matrix has a much narrower bandwidth than that for the original matrix and has fewer nonzeros by a factor of 3. Reordering has greatly reduced the amount of fill-in that occurs; it leads to a Cholesky factor that is cheaper to compute and requires less storage.

[Figure: sparsity patterns of the Cholesky factors of the original and reordered HB/494_bus matrices.]
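The reduction in fill-in can be quantified with a few lines of MATLAB (assuming, as above, that the ssget interface to the collection is installed):

W = ssget('HB/494_bus'); A = W.A;
r = symrcm(A);
R1 = chol(A);                % Cholesky factor of the original matrix
R2 = chol(A(r,r));           % Cholesky factor of the reordered matrix
[nnz(A) nnz(R1) nnz(R2)]     % compare the numbers of nonzeros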

Because Cholesky factorization is numerically stable, the matrix can be permuted without affecting the numerical stability of the computation. For a nonsymmetric problem the choice of row and column interchanges also needs to take into account the need for numerical stability, which complicates matters.

The world of sparse matrix computations is very different from that for dense matrices. In the first place, sparse matrices are not stored as n\times n arrays, but rather just the nonzeros are stored, in some suitable data structure. Programming sparse matrix computations is, consequently, more difficult than for dense matrix computations. A second difference from the dense case is that certain operations are, for practical purposes, forbidden. Most notably, we never invert sparse matrices because of the possibly severe fill-in. Indeed the inverse of a sparse matrix is usually dense. For example, the inverse of the tridiagonal matrix given at the start of this article is

\notag   A_5^{-1} = \displaystyle\frac{1}{6}  \begin{bmatrix} 5 & 4 & 3 & 2 & 1\\ 4 & 8 & 6 & 4 & 2\\ 3 & 6 & 9 & 6 & 3\\ 2 & 4 & 6 & 8 & 4\\ 1 & 2 & 3 & 4 & 5  \end{bmatrix}.

While it is always true that one should not solve Ax = b by forming x = A^{-1} \times b, for reasons of cost and numerical stability (unless A is orthogonal!), it is even more true when A is sparse.

Finally, we mention an interesting property of A_5^{-1}. Its upper triangle agrees, up to the factor 1/6, with the upper triangle of the rank-1 matrix

\notag  \begin{bmatrix}   1 \\ 2 \\ 3 \\ 4 \\ 5  \end{bmatrix}  \begin{bmatrix}    5 & 4 & 3 & 2 & 1  \end{bmatrix} =  \begin{bmatrix}    5 & 4 & 3 & 2 & 1\\    10 & 8 & 6 & 4 & 2\\    15 & 12& 9 & 6 & 3\\    20 & 16& 12& 8 & 4\\    25 & 20& 15& 10& 5  \end{bmatrix}.

This property generalizes to other tridiagonal matrices. So while a tridiagonal matrix is sparse, its inverse is data sparse—as it has to be because in general A depends on 2n-1 parameters and hence so does A^{-1}. One implication of this property is that it is possible to compute the condition number \kappa_{\infty}(A) = \|A\|_{\infty} \|A^{-1}\|_{\infty} of a tridiagonal matrix in O(n) flops.
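Here is a quick MATLAB check of this structure for A_5 (using gallery('tridiag',n), which is the second-difference matrix above):

n = 5;
A = full(gallery('tridiag',n));
X = inv(A);                                   % dense, as noted above
norm(triu(X) - triu((1:n)'*(n:-1:1))/(n+1))   % upper triangles agree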

References

This is a minimal set of references, which contain further useful references within.

This article is part of the “What Is” series, available from https://nhigham.com/category/what-is and in PDF form from the GitHub repository https://github.com/higham/what-is.


What Is the Sylvester Equation?

The Sylvester equation is the linear matrix equation

AX - XB = C,

where A\in\mathbb{C}^{m\times m}, B\in\mathbb{C}^{n\times n}, and X,C\in\mathbb{C}^{m\times n}. It is named after James Joseph Sylvester (1814–1897), who considered the homogeneous version of the equation, AX - XB = 0 in 1884. Special cases of the equation are Ax = b (a standard linear system), AX  = XA (matrix commutativity), Ax = \lambda x (an eigenvalue–eigenvector equation), and AX = I (matrix inversion).

In the case where B = A, taking the trace of both sides of the equation gives

\mathrm{trace}(C) = \mathrm{trace}(AX - XA) = \mathrm{trace}(AX) -  \mathrm{trace} (XA) = 0,

so a solution can exist only when C has zero trace. Hence AX - XA = I, for example, has no solution.

To determine when the Sylvester equation has a solution we will transform it into a simpler form. Let A = URU^* and B = VSV^* be Schur decompositions, where U and V are unitary and R and S are upper triangular. Premultiplying the Sylvester equation by U^*, postmultiplying by V, and setting Y = U^*XV and D = U^*CV, we obtain

RY - YS = D,

which is a Sylvester equation with upper triangular coefficient matrices. Equating the jth columns on both sides leads to

(R - s_{jj}I) y_j = d_j - \displaystyle\sum_{k=1}^{j-1} s_{kj} y_k, \quad j = 1\colon n.

As long as the triangular matrices R - s_{jj}I are nonsingular for all j we can uniquely solve for y_1, y_2, …, y_n in turn. Hence the Sylvester equation has a unique solution if r_{ii} \ne s_{jj} for all i and j, that is, if A and B have no eigenvalue in common.

Since the Sylvester equation is linear it must be possible to express it in the standard form “Ax = b”. This can be done by applying the vec operator, which yields

\qquad\qquad\qquad\qquad\qquad    (I_n \otimes A - B^T \otimes I_m) \mathrm{vec}(X) = \mathrm{vec}(C),    \qquad\qquad\qquad\qquad\qquad(*)

where \otimes is the Kronecker product. Using the Schur transformations above it is easy to show that the eigenvalues of the coefficient matrix are given in terms of those of A and B by

\lambda_{ij} (I_n\otimes A - B^T\otimes I_m) = \lambda_i(A) - \lambda_j(B),   \quad i=1\colon m,  \quad j=1\colon n,

so the coefficient matrix is nonsingular when \lambda_i(A) \ne \lambda_j(B) for all i and j.
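For small m and n, (*) can be formed and solved directly in MATLAB, which provides a useful check (a sketch with random data, for which A and B have no eigenvalue in common with probability 1):

m = 4; n = 3;
A = randn(m); B = randn(n); C = randn(m,n);
x = (kron(eye(n),A) - kron(B.',eye(m))) \ C(:);   % solve the Kronecker form (*)
X = reshape(x,m,n);
norm(A*X - X*B - C)/norm(C)                       % small relative residual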

By considering the derivative of Z(t) = \mathrm{e}^{At}C\mathrm{e}^{-Bt}, it can be shown that if the eigenvalues of A and -B have negative real parts (that is, A and -B are stable matrices) then

X = -\displaystyle\int_0^{\infty} \mathrm{e}^{At} C \mathrm{e}^{-Bt} \, \mathrm{d}t

is the unique solution of AX - XB = C.

Applications

An important application of the Sylvester equation is in block diagonalization. Consider the block upper triangular matrix

T = \begin{bmatrix}       A & C\\       0 & B    \end{bmatrix}.

If we can find a nonsingular matrix Z such that Z^{-1}TZ = \mathrm{diag}(A,B) then certain computations with T become much easier. For example, for any function f,

f(T) = f(Z \mathrm{diag}(A,B) Z^{-1})          = Zf(\mathrm{diag}(A,B)) Z^{-1}          = Z\mathrm{diag}(f(A),f(B)) Z^{-1},

so computing f(T) reduces to computing f(A) and f(B). Setting

Z = \begin{bmatrix}       I & -X\\       0 & I    \end{bmatrix}

and noting that Z^{-1} is just Z with the sign of the (1,2) block reversed, we find that

Z^{-1} TZ = \begin{bmatrix}       A & -AX + XB + C\\       0 & B    \end{bmatrix}.

Hence Z block diagonalizes T if X satisfies the Sylvester equation AX - XB = C, which we know is possible if A and B have no eigenvalue in common. This restriction is unsurprising, as without it we could use this construction to diagonalize a 2\times 2 Jordan block, which of course is impossible.
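Here is a small MATLAB illustration of the block diagonalization, using the sylvester function (discussed below) to obtain X and computing f(T) = \mathrm{e}^T blockwise; the dimensions are arbitrary.

A = randn(4); B = randn(2); C = randn(4,2);
X = sylvester(A,-B,C);                   % solves A*X - X*B = C
T = [A C; zeros(2,4) B];
Z = [eye(4) -X; zeros(2,4) eye(2)];
Zinv = [eye(4) X; zeros(2,4) eye(2)];    % inverse of Z: sign of the (1,2) block reversed
F = Z*blkdiag(expm(A),expm(B))*Zinv;
norm(F - expm(T))/norm(expm(T))          % small relative error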

For another way in which Sylvester equations arise consider the expansion (X+E)^2 = X^2 + XE + EX + E^2 for square matrices X and E, from which it follows that XE + EX is the Fréchet derivative of the function x^2 at X in the direction E, written L_{x^2}(X,E). Consequently, Newton’s method for the square root requires the solution of Sylvester equations, though in practice certain simplifications can be made to avoid their appearance. We can find the Fréchet derivative of x^{1/2} by applying the chain rule to \bigl(x^{1/2}\bigr)^2 = x, which gives L_{x^2}\left(X^{1/2}, L_{x^{1/2}}(X,E)\right) = E. Therefore Z = L_{x^{1/2}}(X,E) is the solution to the Sylvester equation X^{1/2} Z + Z X^{1/2}  = E. Consequently, the Sylvester equation plays a role in the perturbation theory for matrix square roots.

Sylvester equations also arise in the Schur–Parlett algorithm for computing matrix functions, which reduces a matrix to triangular Schur form T and then solves TF-FT = 0 for F = f(T), blockwise, by a recurrence.

Solution Methods

How can we solve the Sylvester equation? One possibility is to solve (*) by LU factorization with partial pivoting. However, the coefficient matrix is mn\times mn and LU factorization cannot exploit the Kronecker product structure, so this approach is prohibitively expensive unless m and n are small. It is more efficient to compute Schur decompositions of A and B, transform the problem, and solve a sequence of triangular systems, as described above in our derivation of the conditions for the existence of a unique solution. This method was developed by Bartels and Stewart in 1972 and it is implemented in the MATLAB function sylvester.

In recent years research has focused particularly on solving Sylvester equations in which A and B are large and sparse and C has low rank, which arise in applications in control theory and model reduction, for example. In this case it is usually possible to find good low rank approximations to X and iterative methods based on Krylov subspaces have been very successful.

Sensitivity and the Separation

Define the separation of A and B by

\mathrm{sep}(A,B) =       \displaystyle\min_{Z\ne0} \displaystyle\frac{ \|AZ-ZB\|_F }{ \|Z\|_F }.

The separation is positive if A and B have no eigenvalue in common, which we now assume. If X is the solution to AX - XB = C then

\notag   \mathrm{sep}(A,B) \le \displaystyle\frac{ \|AX-XB\|_F }{ \|X\|_F }                      = \frac{\|C\|_F}{\|X\|_F},

so X is bounded by

\notag   \|X\|_F \le \displaystyle\frac{\|C\|_F}{\mathrm{sep}(A,B)}.

It is not hard to show that \mathrm{sep}(A,B)^{-1} = \|P^{-1}\|_2, where P is the matrix in (*). This bound on \|X\|_F is a generalization of \|x\|_2 \le \|A^{-1}\|_2 \|b\|_2 for Ax = b.
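In MATLAB, for small matrices, the separation can be computed from the smallest singular value of the Kronecker coefficient matrix in (*):

A = randn(4); B = randn(3);
P = kron(eye(3),A) - kron(B.',eye(4));
sep_AB = min(svd(P))           % = sep(A,B); equivalently 1/norm(inv(P),2)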

The separation features in a perturbation bound for the Sylvester equation. If

\notag     (A+\Delta A)(X+\Delta X) - (X+\Delta X)(B+\Delta B) = C+\Delta C,

then

\notag      \displaystyle\frac{ \|\Delta X\|_F }{ \|X\|_F }      \le 2\sqrt{3}\, \mathrm{sep}(A,B)^{-1} (\|A\|_F + \|B\|_F) \epsilon           + O(\epsilon^2),

where

\notag    \epsilon = \max \left\{ \displaystyle\frac{\|\Delta A\|_F}{\|A\|_F},                            \frac{\|\Delta B\|_F}{\|B\|_F},                            \frac{\|\Delta C\|_F}{\|C\|_F} \right\}.

While we have the upper bound \mathrm{sep}(A,B) \le \min_{i,j} |\lambda_i(A) - \lambda_j(B)|, this inequality can be extremely weak for nonnormal matrices, so two matrices can have a small separation even if their eigenvalues are well separated. To illustrate, let T(\alpha) denote the n\times n upper triangular matrix with \alpha on the diagonal and -1 in all entries above the diagonal. The following table shows the values of \mathrm{sep}(T(1),T(1.1)) for several values of n.

[Table: values of \mathrm{sep}(T(1),T(1.1)) for several values of n.]

Even though the eigenvalues of A and B are 0.1 apart, the separation is at the level of the unit roundoff for n as small as 8.

The sep function was originally introduced by Stewart in the 1970s as a tool for studying the sensitivity of invariant subspaces.

Variations and Generalizations

The Sylvester equation has many variations and special cases, including the Lyapunov equation AX + XA^* = C, the discrete Sylvester equation X + AXB = C, and versions of all these for operators. It has also been generalized to multiple terms and to have coefficient matrices on both sides of X, yielding

\displaystyle\sum_{i=1}^k A_i X B_i = C.

For k\le 2 and m=n this equation can be solved in O(n^3) flops. For k > 2, no O(n^3) flops algorithm is known and deriving efficient numerical methods remains an open problem. The equation arises in stochastic finite element discretizations of partial differential equations with random inputs, where the matrices A_i and B_i are large and sparse and, depending on the statistical properties of the random inputs, k can be arbitrarily large.

References

This is a minimal set of references, which contain further useful references within.

Related Blog Posts

This article is part of the “What Is” series, available from https://nhigham.com/category/what-is and in PDF form from the GitHub repository https://github.com/higham/what-is.


What is the Kronecker Product?

The Kronecker product of two matrices A\in\mathbb{C}^{m\times n} and B\in\mathbb{C}^{p\times q} (also called the tensor product) is the mp\times nq matrix

A \otimes B =  \begin{bmatrix}     a_{11}B & a_{12}B & \dots & a_{1n}B \\     a_{21}B & a_{22}B & \dots & a_{2n}B \\     \vdots & \vdots   & \ddots & \vdots \\     a_{m1}B & a_{m2}B & \dots & a_{mn}B     \end{bmatrix}.

In other words, A\otimes B is the block m\times n matrix with (i,j) block a_{ij}B. For example,

\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \otimes \begin{bmatrix} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \end{bmatrix} = \left[\begin{array}{ccc|ccc} a_{11}b_{11} & a_{11}b_{12} & a_{11}b_{13} & a_{12}b_{11} & a_{12}b_{12} & a_{12}b_{13} \\ a_{11}b_{21} & a_{11}b_{22} & a_{11}b_{23} & a_{12}b_{21} & a_{12}b_{22} & a_{12}b_{23} \\\hline a_{21}b_{11} & a_{21}b_{12} & a_{21}b_{13} & a_{22}b_{11} & a_{22}b_{12} & a_{22}b_{13} \\ a_{21}b_{21} & a_{21}b_{22} & a_{21}b_{23} & a_{22}b_{21} & a_{22}b_{22} & a_{22}b_{23} \end{array}\right].

Notice that the entries of A\otimes B comprise every possible product a_{ij}b_{rs}, which is not the case for the usual matrix product AB when it is defined. Indeed if A and B are n\times n then

  • AB is n\times n and contains sums of n^3 of the products a_{ij}b_{rs},
  • A\otimes B is n^2\times n^2 and contains all n^4 products a_{ij}b_{rs}.

Two key properties of the Kronecker product are

\begin{aligned}       (A\otimes B)^* &= A^* \otimes B^*, \\       (A\otimes B)(C\otimes D)&= AC \otimes BD. \end{aligned}

The second equality implies that when x_i is an eigenvector of A\in\mathbb{C}^{m\times m} with eigenvalue \lambda_i and y_j is an eigenvector of B\in\mathbb{C}^{n\times n} with eigenvalue \mu_j then

(A\otimes B)(x_i\otimes y_j) = (Ax_i \otimes By_j)                              = (\lambda_i x_i \otimes \mu_j y_j)                              = \lambda_i\mu_j ( x_i \otimes y_j),

so that \lambda_i\mu_j is an eigenvalue of A\otimes B with eigenvector x_i \otimes y_j. In fact, the mn eigenvalues of A\otimes B are precisely \lambda_i\mu_j for i = 1\colon m and j = 1\colon n.

Kronecker product structure arises in image deblurring models in which the blur is separable, that is, the blur in the horizontal direction can be separated from the blur in the vertical direction. Kronecker products also arise in the construction of Hadamard matrices. Recall that a Hadamard matrix is a matrix of \pm1s whose rows and columns are mutually orthogonal. If H_n is an n\times n Hadamard matrix then

\begin{bmatrix} \phantom{-} 1 & 1 \\ -1 & 1 \end{bmatrix} \otimes H_n = \begin{bmatrix} \phantom{-} H_n & H_n \\ -H_n & H_n \end{bmatrix}

is a 2n\times 2n Hadamard matrix.

The practical significance of Kronecker product structure is that it allows computations on a large matrix to be reduced to computations on smaller matrices. For example, suppose A and B are Hermitian positive definite matrices and C = A \otimes B, which can be shown to be Hermitian positive definite from the properties mentioned above. If A = R^*R and B = S^*S are Cholesky factorizations then

A \otimes B = (R^*R) \otimes (S^*S)               = (R^*\otimes S^*) (R \otimes S)               = (R \otimes S)^* (R \otimes S),

so R\otimes S, which is easily seen to be triangular with positive diagonal elements, is the Cholesky factor of A\otimes B. If A and B are n\times n then forming A\otimes B and computing its Cholesky factorization costs O(n^6) flops, whereas R and S can be computed in O(n^3) flops. If we want to solve a linear system (A \otimes B)x = b this can be done using R and S without explicitly forming R\otimes S.
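Here is a minimal MATLAB sketch for symmetric positive definite A and B: the n^2\times n^2 system is solved with four triangular solves involving only the n\times n Cholesky factors.

n = 4;
A = randn(n); A = A'*A + eye(n);        % symmetric positive definite
B = randn(n); B = B'*B + eye(n);
R = chol(A); S = chol(B);               % A = R'*R, B = S'*S
b = randn(n^2,1);
C = reshape(b,n,n);                     % b = vec(C), so (A kron B)*x = b  <=>  B*X*A = C
X = S \ (S' \ C);                       % X = B^{-1}*C
X = (X / R) / R';                       % X = B^{-1}*C*A^{-1}
x = X(:);
norm(kron(A,B)*x - b)/norm(b)           % small relative residual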

Vec Operator

The vec operator stacks the columns of a matrix into one long vector: if A = [a_1,a_2,\dots,a_m] then \mathrm{vec}(A) = [a_1^T a_2^T \dots a_m^T]^T. The vec operator and the Kronecker product interact nicely: for any A, X, and B for which the product AXB is defined,

\mathrm{vec}(AXB) = (B^T \otimes A) \mathrm{vec}(X).

This relation allows us to express a linear system AXB = C in the usual form “Ax=b”.
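A one-line MATLAB check of this identity (with arbitrary compatible dimensions):

A = randn(3,4); X = randn(4,5); B = randn(5,2);
norm(reshape(A*X*B,[],1) - kron(B.',A)*X(:))   % of order the unit roundoff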

Kronecker Sum

The Kronecker sum of A\in\mathbb{C}^{m\times m} and B\in\mathbb{C}^{n\times n} is defined by A\oplus B = A \otimes I_n + I_m \otimes B. The eigenvalues of A\oplus B are \lambda_{ij} = \lambda_i(A) + \lambda_j(B), i=1\colon m, j=1\colon n, where the \lambda_i(A) are the eigenvalues of A and the \lambda_j(B) are those of B.

The Kronecker sum arises when we apply the vec operator to the matrix AX + XB:

\mathrm{vec}(AX + XB) = (I_n \otimes A + B^T \otimes I_m) \mathrm{vec}(X)                         = (B^T \oplus A) \mathrm{vec}(X).

Kronecker sum structure also arises in finite difference discretizations of partial differential equations, such as when Poisson’s equation is discretized on a square by the usual five-point operator.

Vec-Permutation Matrix

Since for A\in\mathbb{C}^{m\times n} the vectors \mathrm{vec}(A) and \mathrm{vec}(A^T) contain the same mn elements in different orders, we must have

\mathrm{vec}(A^T) = \Pi_{m,n} \mathrm{vec}(A),

for some mn\times mn permutation matrix \Pi_{m,n}. This matrix is called the vec-permutation matrix, and is also known as the commutation matrix.

Kronecker multiplication is not commutative, that is, A \otimes B \ne B \otimes A in general, but A \otimes B and B \otimes A do contain the same elements in different orders. In fact, the two matrices are related by row and column permutations: if A\in\mathbb{C}^{m\times n} and B\in\mathbb{C}^{p\times q} then

(A\otimes B)\Pi_{n,q} = \Pi_{m,p} (B\otimes A).

This relation can be obtained as follows: for X\in\mathbb{C}^{n\times q},

\begin{aligned} (B \otimes A)\mathrm{vec}(X) &= \mathrm{vec}(AXB^T)\\                              &= \Pi_{p,m} \mathrm{vec}(BX^TA^T)\\                              &= \Pi_{p,m} (A\otimes B)\mathrm{vec}(X^T)\\                              &= \Pi_{p,m} (A\otimes B) \Pi_{n,q}\mathrm{vec}(X). \end{aligned}

Since these equalities hold for all X, we have B\otimes A = \Pi_{p,m} (A\otimes B) \Pi_{n,q}, from which the relation follows on using \Pi_{m,n}\Pi_{n,m} = I, which can be obtained by replacing A by A^T in the definition of vec.

An explicit expression for the vec-permutation matrix is

\Pi_{m,n} = \displaystyle\sum_{i=1}^m \sum_{j=1}^n                   (e_i^{} e_j^T) \otimes (e_j^{} e_i^T),

where e_i is the ith unit vector.
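This expression can be used to build \Pi_{m,n} directly, as in the following MATLAB sketch, which also verifies the defining relation \mathrm{vec}(A^T) = \Pi_{m,n}\mathrm{vec}(A).

m = 3; n = 4;
Pi_mn = zeros(m*n);
for i = 1:m
    for j = 1:n
        ei = zeros(m,1); ei(i) = 1;
        ej = zeros(n,1); ej(j) = 1;
        Pi_mn = Pi_mn + kron(ei*ej', ej*ei');
    end
end
A = randn(m,n);
norm(Pi_mn*A(:) - reshape(A',[],1))     % zero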

The following plot shows the sparsity patterns of all the vec permutation matrices \Pi_{m,n} with mn = 120, where the title of each subplot is (m,n).

[Figure: sparsity patterns of the vec-permutation matrices \Pi_{m,n} with mn = 120.]

MATLAB

In MATLAB the Kronecker product A\otimes B can be computed as kron(A,B) and \mathrm{vec}(A) is obtained by indexing with a colon: A(:). Be careful using kron as it can generate very large matrices!

Historical Note

The Kronecker product is named after Leopold Kronecker (1823–1891). Henderson et al. (1983) suggest that it should be called the Zehfuss product, after Johann Georg Zehfuss (1832–1891), who obtained the result \det(A\otimes B) = \det(A)^n \det(B)^m for A\in\mathbb{C}^{m\times m} and B\in\mathbb{C}^{n\times n} in 1858.

References

This is a minimal set of references, which contain further useful references within.

Related Blog Posts

This article is part of the “What Is” series, available from https://nhigham.com/category/what-is and in PDF form from the GitHub repository https://github.com/higham/what-is.

Footnote: The \otimes symbol is typed in \LaTeX as \otimes.


What Is the Gerstenhaber Problem?

When Cayley introduced matrix algebra in 1858, he did much more than merely arrange numbers in a rectangular array. His definitions of addition, multiplication, and inversion produced an algebraic structure that has proved to be immensely useful, and which still holds many mysteries today.

An n\times n matrix has n^2 parameters, so the vector space \mathbb{R}^{n\times n} of real matrices has dimension n^2, with a basis given by the matrices E_{ij} = e_i^{}e_j^T, where e_i is the vector that is zero except for a 1 in the ith entry. In his original paper, Cayley noticed the important property that the powers of a particular matrix A can never span \mathbb{R}^{n\times n}. The Cayley–Hamilton theorem says that A satisfies its own characteristic equation, that is, p(A) = 0 where p(t) = \det(t I - A) is the characteristic polynomial of A. This means that A^n can be expressed as a linear combination of I, A, …, A^{n-1}, so the powers of A span a vector space of dimension at most n.

Gerstenhaber proved a generalization of this property in 1961: if A and B are two commuting n\times n matrices then the matrices A^iB^j, for all nonnegative i and j, generate a vector space of dimension at most n. This result is much more difficult to prove than the Cayley–Hamilton theorem. Gerstenhaber’s proof was based on algebraic geometry, but purely matrix-theoretic proofs have been given.

A natural question, called the Gerstenhaber problem, is: does this result extend to three matrices, that is, does the vector space

S_n = \mathrm{span}\{\, A^iB^jC^k: 0\le i,j,k \le n-1 \,\}

have dimension at most n for any n\times n matrices A, B, and C that commute with each other? (We can limit the powers to n-1 by the Cayley–Hamilton theorem.) The problem is defined over any field, but here I focus on the reals.

Before considering the three matrix case let us note that the answer is known to be “no” for four commuting 4\times 4 matrices A, B, C, and D. Indeed let

A = e_1^{}e_3^T, \quad  B = e_1^{}e_4^T, \quad  C = e_2^{}e_3^T, \quad  D = e_2^{}e_4^T

and note that all possible products of two of these matrices are zero, so the matrices commute pairwise. Yet I = A^0, A, B, C, D are clearly five linearly independent matrices. Hence Gerstenhaber’s result does not extend to four matrices. It also does not extend to five or more matrices because it is known that the failure of the result for one value of n implies failure for all larger n. The question, then, is whether the three matrix case is like the two matrix case or the case for four or more matrices.

A great deal of effort has been put into proving or disproving the Gerstenhaber problem, so far without success. Here are two known facts.

  • The result holds for all n\le 11.
  • By a 1905 result of Schur, the dimension of S_n is at most 1 + \lfloor   n^2/4\rfloor, which is less than n^2 but still much bigger than n. (Here, \lfloor \cdot \rfloor is the floor function.)

One approach to investigating this problem is to look for a counterexample computationally. For some n\ge 12, choose three commuting n\times n matrices A, B, and C, select m\ge n monomials

X_p =  A^{i_p} B^{j_p} C^{k_p}, \quad 1\le p \le m,

form the matrix

Y = [\mathrm{vec}(X_1),             \mathrm{vec}(X_2), \dots,             \mathrm{vec}(X_m)],

where \mathrm{vec} converts a matrix into a vector by stacking the columns on top of each other, then compute \mathrm{rank}(Y), which is a lower bound on \mathrm{dim}(S_n), and check whether it is greater than n.

This simple-minded approach has some obvious difficulties. How do we choose A, B, and C? How do we choose the powers? How do we avoid overflow and underflow and compute a reliable rank, given that we might be dealing with large powers of large matrices?
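The following MATLAB sketch shows the mechanics of the rank test. It is purely illustrative: here A, B, and C are polynomials in a single nilpotent matrix, so they commute by construction and the dimension is guaranteed to be at most n; a genuine search would generate commuting triples in a much less trivial way (for example via the Weyr form, as described below) and work modulo a prime.

n = 12;
N = diag(ones(n-1,1),1);                  % nilpotent Jordan block
A = N; B = N^2 + 3*N^5; C = N^3 + N^7;    % commuting by construction
Y = [];
for i = 0:3
    for j = 0:3
        for k = 0:3
            X = A^i * B^j * C^k;
            Y = [Y, X(:)];                % append vec(X) as a column of Y
        end
    end
end
rank(Y)                                   % lower bound on dim(S_n); at most n here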

Holbrook and O’Meara (2020), who have written several papers on the Gerstenhaber problem, which they call the GP problem, state that they “firmly believe the GP will turn out to have a negative answer” and they have developed a sophisticated approach to searching for a case with \mathrm{dim}(S_n) > n, which they call a “Eureka”. They first note that A, B and C can be assumed to be nilpotent. This means that all three matrices must be defective, because a nondefective nilpotent matrix is zero. Next they note that since commuting matrices are simultaneously unitarily triangularizable, A, B, and C can be assumed to be strictly upper triangular. Then they note that A can be assumed to be in Weyr canonical form.

The Weyr canonical form is a dual of the Jordan canonical form in which the Jordan matrix is replaced by a Weyr matrix, which is a direct sum of basic Weyr matrices. A basic Weyr matrix has one distinct eigenvalue and is upper block bidiagonal with diagonal blocks that are multiples of the identity matrix and superdiagonal blocks that are rectangular identity matrices. The difference between Jordan and Weyr matrices is illustrated by the example

J(\lambda) =     \left[\begin{array}{ccc|cc}     \lambda &  1      & 0 &&\\     0       & \lambda & 1 &&\\     0       & 0       & \lambda &&\\\hline             &         &   &\lambda & 1\\             &         &   & 0 &\lambda      \end{array}\right], \quad     W(\lambda) =     \left[\begin{array}{cc|cc|c}     \lambda &  0      & 1 & 0&\\     0       & \lambda & 0 & 1&\\\hline             &         & \lambda & 0 &1\\             &         & 0  &\lambda & 0\\\hline             &         &   & 0 &\lambda      \end{array}\right],

where the Jordan matrix J(\lambda) and Weyr matrix W(\lambda) are related by J(\lambda) = P^TW(\lambda)P for some permutation matrix P. There is an elegant way of relating the block sizes in the Jordan and Weyr matrices via a Young diagram. Form an array whose kth column contains j_k dots, where j_k is the size of the kth diagonal block of J:

\begin{array}{c|ccc}         &  j_1 & j_2\\\hline   w_1 &  \bullet & \bullet\\   w_2 &  \bullet & \bullet\\   w_3 &  \bullet & \end{array}

Then the number of dots in the kth row is the size of the kth diagonal block in the Weyr form.

The reason for using the Weyr form is that whereas any matrix that commutes with a Jordan matrix has Toeplitz blocks, any matrix that commutes with a Weyr matrix is block upper triangular and is uniquely determined by the first block row. By choosing a Weyr form for A, commuting matrices B and C can be built up in a systematic way.

Thanks to a result of O’Meara (2020) it suffices to compute modulo a prime p, so the computations can be done in exact arithmetic, removing the need to worry about rounding errors or overflow.

Holbrook and O’Meara have mostly tried matrices up to dimension 50, but they feel that “the Loch Ness monster probably lives in deeper water, closer to 100\times 100”. Their MATLAB codes (40 nicely commented but not optimized M-files) are available on request from the address given in their preprint. If you have the time to spare you might want to experiment with the codes and try to find a Eureka.

References

This is a minimal set of references, which contain further useful references within.

This article is part of the “What Is” series, available from https://nhigham.com/category/what-is and in PDF form from the GitHub repository https://github.com/higham/what-is.


What Is a Cholesky Factorization?

The Cholesky factorization of a symmetric positive definite matrix A is the factorization A = R^T\!R, where R is upper triangular with positive diagonal elements. It is a generalization of the property that a positive real number has a unique positive square root. The Cholesky factorization always exists and the requirement that the diagonal of R be positive ensures that it is unique.

As an example, the Cholesky factorization of the 4\times 4 matrix with (i,j) element \gcd(i,j) (gallery('gcdmat',4) in MATLAB) is

G_4 = \left[\begin{array}{cccc} 1 & 1 & 1 & 1\\ 1 & 2 & 1 & 2\\ 1 & 1 & 3 & 1\\ 1 & 2 & 1 & 4 \end{array}\right] = \left[\begin{array}{cccc} 1 & 0 & 0 & 0\\ 1 & 1 & 0 & 0\\ 1 & 0 & \sqrt{2} & 0\\ 1 & 1 & 0 & \sqrt{2} \end{array}\right] \left[\begin{array}{cccc} 1 & 1 & 1 & 1\\ 0 & 1 & 0 & 1\\ 0 & 0 & \sqrt{2} & 0\\ 0 & 0 & 0 & \sqrt{2} \end{array}\right].

The Cholesky factorization of an n\times n matrix contains n-1 other Cholesky factorizations within it: A_k = R_k^TR_k, k=1:n-1, where A_k = A(1\colon k,1\colon k) is the leading principal submatrix of order k. For example, for G_4 with k = 2, \bigl[\begin{smallmatrix}1 & 1\\ 1& 2\end{smallmatrix}\bigr]=  \bigl[\begin{smallmatrix}1 & 0\\ 1& 1\end{smallmatrix}\bigr]  \bigl[\begin{smallmatrix}1 & 1\\ 0& 1\end{smallmatrix}\bigr].

Inverting the Cholesky equation gives A^{-1} = R^{-1} R^{-T}, which implies the interesting relation that the (n,n) element of A^{-1} is r_{nn}^{-2}. So G_4^{-1} has (4,4) element 1/2. We also have \det(A) = \det(R)^2 = (r_{11}\dots r_{nn})^2, so \det(G_4) = 4 for this matrix.

The Cholesky factorization is named after André-Louis Cholesky (1875–1918), a French military officer involved in geodesy and surveying in Crete and North Africa, who developed it for solving the normal equations arising in least squares problems.

The existence and uniqueness of the factorization can be proved by induction on n, and the proof also serves as a method for computing it. Partition A\in\mathbb{R}^{n\times n} as

A = \begin{bmatrix}              A_{n-1} & c \\              c^T & \alpha              \end{bmatrix}.

We know that A_{n-1} is positive definite (any principal submatrix of a positive definite matrix is easily shown to be positive definite). Assume that A_{n-1} has a unique Cholesky factorization A_{n-1} = R_{n-1}^TR_{n-1} and define the upper triangular matrix

R = \begin{bmatrix}             R_{n-1} & r \\             0 & \beta            \end{bmatrix}.

Then

R^T\!R = \begin{bmatrix}                R_{n-1}^T R_{n-1} & R_{n-1}^T r \\                    r^T R_{n-1}   & r^Tr + \beta^2              \end{bmatrix},

which equals A if and only if

\begin{aligned}                   R_{n-1}^T r    &= c,        \\                   r^Tr + \beta^2 &= \alpha.     \end{aligned}

The first equation has a unique solution since R_{n-1} is nonsingular. Then the second equation gives \beta^2 = \alpha - r^Tr. It remains to check that there is a unique real, positive \beta satisfying this equation. From the inequality

0 < \det(A) = \det(R^T) \det(R) = \det(R_{n-1})^2 \beta^2

we see that \beta^2>0, hence there is a unique \beta>0. This completes the inductive step.

The quantity \alpha - r^Tr = \alpha - c^T A_{n-1}^{-1}c is the Schur complement of A_{n-1} in A. The essential reason why Cholesky factorization works is that the Schur complements of a positive definite matrix are themselves positive definite. Indeed we have the congruence

\begin{bmatrix} I                  & 0 \\  -c^T A_{n-1}^{-1} & 1 \end{bmatrix} \begin{bmatrix} A_{n-1}          & c \\ c^T               & \alpha \end{bmatrix} \begin{bmatrix} I  & -A_{n-1}^{-1}c\\ 0  & 1 \end{bmatrix}  = \begin{bmatrix} A_{n-1} & 0 \\ 0      & \alpha - c^T A_{n-1}^{-1} c \end{bmatrix},

and since congruences preserve definiteness it follows that \alpha - c^T A_{n-1}^{-1} c > 0.

In textbooks it is common to see element-level equations for computing the Cholesky factorization, which come from directly solving the matrix equation A = R^T\!R. If we equate elements on both sides, taking them a column at a time, starting with a_{11} = r_{11}^2, then the following algorithm is obtained:

\begin{array}{l} 1~ \mbox{for}~j=1:n  \\ 2~      \quad \mbox{for}~  i=1:j-1 \\ 3~     \qquad r_{ij} = \bigl(a_{ij} - \sum_{k=1}^{i-1} r_{ki}r_{kj} \bigr) /r_{ii} \\ 4~     \quad \mbox{end} \\ 5~     \quad r_{jj} = \bigl(a_{jj} - \sum_{k=1}^{j-1} r_{kj}^2\bigr)^{1/2} \\ 6~ \mbox{end} \end{array}
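A direct MATLAB translation of this algorithm (no safeguards; the name chol_alg is just for illustration):

function R = chol_alg(A)
% CHOL_ALG   Cholesky factorization, column by column.
n = size(A,1);
R = zeros(n);
for j = 1:n
    for i = 1:j-1
        R(i,j) = (A(i,j) - R(1:i-1,i)'*R(1:i-1,j)) / R(i,i);
    end
    R(j,j) = sqrt(A(j,j) - R(1:j-1,j)'*R(1:j-1,j));
end
end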

What happens if this algorithm is executed on a general (indefinite) symmetric matrix, that is, one that has both positive and negative eigenvalues? Line 5 will either attempt to take the square root of a negative number for some j or it will produce r_{jj} = 0 and on the next iteration of the loop we will have a division by zero. In floating-point arithmetic it is possible for the algorithm to fail for a positive definite matrix, but only if it is numerically singular: the algorithm is guaranteed to run to completion if the condition number \kappa_2(A) = \|A\|_2 \|A^{-1}\|_2 is safely less than u^{-1}, where u is the unit roundoff.

The MATLAB function chol normally returns an error message if the factorization fails. But a second output argument can be requested, which is set to the number of the stage on which the factorization failed, or to zero if the factorization succeeded. In the case of failure, the partially computed R factor, returned in the first argument, can be used to compute a direction of negative curvature for A, which is a vector z such that z^T\!Az < 0. Here is an example. The code

n = 8;
A = gallery('lehmer',n) - 0.3*eye(n); % Indefinite matrix.
[R,p] = chol(A)
z = [-R\(R'\A(1:p-1,p)); 1; zeros(n-p,1)];
neg_curve = z'*A*z

produces the output

R =
   8.3666e-01   5.9761e-01   3.9841e-01
            0   5.8554e-01   7.3193e-01
            0            0   7.4536e-02
p =
     4
neg_curve =
  -9.1437e+00

Cholesky factorization has excellent numerical stability. The computed factor \widehat{R} satisfies

A  + \Delta A = \widehat{R}^T\widehat{R}, \quad       \|\Delta A\|_2 \le c_nu\|A\|_2,

where c_n is a constant. Unlike for LU factorization there is no possibility of element growth; indeed for i\le j,

r_{ij}^2 \le \displaystyle \sum_{k=1}^j r_{kj}^2 = a_{jj},

so the elements of R are nicely bounded relative to those of A.

Once we have a Cholesky factorization we can use it to solve a linear system Ax = b, by solving the lower triangular system R^Ty = b and then the upper triangular system Rx = y. The computed solution \widehat{x} can be shown to satisfy

(A+\Delta A)\widehat{x} = b, \quad \|\Delta A\|_2 \le d_nu\|A\|_2

where d_n is another constant. Thus \widehat{x} has a small backward error.
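For example, in MATLAB:

n = 5;
A = gallery('gcdmat',n);          % symmetric positive definite (as above)
b = randn(n,1);
R = chol(A);                      % A = R'*R
x = R \ (R' \ b);                 % two triangular solves
norm(A*x - b)/(norm(A)*norm(x))   % small relative residual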

Finally, what if A is only symmetric positive semidefinite, that is, x^T\!Ax \ge 0 for all x, so that A is possibly singular? There always exists a Cholesky factorization (we can take the R factor in the QR factorization of the positive semidefinite square root of A) but it may not be unique. For example, we have

A = \left[\begin{array}{cccc}  1 & 1 & 1 & 1\\  1 & 1 & 1 & 1\\  1 & 1 & 2 & 2\\  1 & 1 & 2 & 4\\ \end{array}\right] = \left[\begin{array}{cccc} 1 & 0 & 0 & 0\\ 1 & 0 & 0 & 0\\ 1 & 1 & 0 & 0\\ 1 & 1 & x & y\\ \end{array}\right] \left[\begin{array}{cccc} 1 & 1 & 1 & 1\\ 0 & 0 & 1 & 1\\ 0 & 0 & 0 & x\\ 0 & 0 & 0 & y\\ \end{array}\right]  = R^T\!R

for any x and y such that x^2 + y^2 = 2. Note that A has rank 3 but R has two zero diagonal elements. When A is positive semidefinite of rank r what is usually wanted is a Cholesky factorization in which R is zero in its last n-r rows. Such a factorization can be obtained by using complete pivoting, which at each stage permutes the largest remaining diagonal element into the pivot position, which gives a factorization

\Pi^T\mskip-5mu A\Pi = R^T\!R,      \quad      R = \begin{bmatrix}          R_{11} & R_{12} \\               0 & 0          \end{bmatrix}, \quad R_{11}\in\mathbb{R}^{r \times r},

where \Pi is a permutation matrix. For example, with \Pi the identity matrix with its columns in reverse order,

\Pi^T\mskip-5mu A\Pi  = \left[\begin{array}{cccc} 4 & 2 & 1 & 1\\ 2 & 2 & 1 & 1\\ 1 & 1 & 1 & 1\\ 1 & 1 & 1 & 1 \end{array}\right]    = \left[\begin{array}{cccc} 2 & 0 & 0 & 0\\ 1 & 1 & 0 & 0\\ \frac{1}{2} & \frac{1}{2} & \frac{\sqrt{2}}{2} & 0\\[2pt] \frac{1}{2} & \frac{1}{2} & \frac{\sqrt{2}}{2} & 0 \end{array}\right] \left[\begin{array}{cccc} 2 & 1 & \frac{1}{2} & \frac{1}{2}\\[2pt] 0 & 1 & \frac{1}{2} & \frac{1}{2}\\[2pt] 0 & 0 & \frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2}\\ 0 & 0 & 0 & 0 \end{array}\right],

which clearly displays the rank of A.
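
The displayed factorization is easily verified in MATLAB (a small sketch; the matrices are entered directly from the display above):

A = [1 1 1 1; 1 1 1 1; 1 1 2 2; 1 1 2 4];
P = fliplr(eye(4));                  % Identity with its columns reversed.
s = sqrt(2)/2;
R = [2 1 1/2 1/2; 0 1 1/2 1/2; 0 0 s s; 0 0 0 0];
norm(P'*A*P - R'*R)                  % Zero, up to roundoff.
rank(A)                              % Returns 3, the number of nonzero rows of R.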

References

This is a minimal set of references, which contain further useful references within.

Related Blog Posts

This article is part of the “What Is” series, available from https://nhigham.com/category/what-is and in PDF form from the GitHub repository https://github.com/higham/what-is.


What Is Numerical Stability?

Numerical stability concerns how errors introduced during the execution of an algorithm affect the result. It is a property of an algorithm rather than the problem being solved. I will assume that the errors under consideration are rounding errors, but in principle the errors can be from any source.

Consider a scalar function y = f(x) of a scalar x. We regard x as the input data and y as the output. The forward error of a computed approximation \widehat{y} to y is the relative error |y - \widehat{y}|/|y|. The backward error of \widehat{y} is

\min\left\{\,   \displaystyle\frac{|\Delta x|}{|x|}: \widehat{y} = f(x+\Delta x) \,\right\}.

If \widehat{y} has a small backward error then it is the exact answer for a slightly perturbed input. Here, “small” is interpreted relative to the floating-point arithmetic, so a small backward error means one of the form cu for a modest constant c, where u is the unit roundoff.
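
As a concrete illustration, take f(x) = \mathrm{e}^x, do the computation in single precision, and use double precision as an accurate reference (a sketch; the choice of x and of exp as the “algorithm” is arbitrary):

x = 3.5;                             % Exactly representable in single precision.
yhat = double(exp(single(x)));       % Computed result: f(x) evaluated in single precision.
y = exp(x);                          % Accurate reference value.
fwd_err = abs(yhat - y)/abs(y)       % Forward error.
dx = log(yhat) - x;                  % yhat = exp(x + dx).
bwd_err = abs(dx)/abs(x)             % Backward error.

Both errors are of the order of the single precision unit roundoff, as expected, since exp is evaluated with a small relative error.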

An algorithm that always produces a small backward error is called backward stable. In a backward stable algorithm the errors introduced during the algorithm have the same effect as a small perturbation in the data. If the backward error is the same size as any uncertainty in the data then the algorithm produces as good a result as we can expect.

If x undergoes a relative perturbation of size u then y = f(x) can change by as much as \mathrm{cond}_f(x) u, where

\mathrm{cond}_f(x) = \lim_{\epsilon\to0}                       \displaystyle\sup_{|\Delta x| \le \epsilon |x|}                       \displaystyle\frac{|f(x+\Delta x) - f(x)|}{\epsilon|f(x)|}

is the condition number of f at x. An algorithm that always produces \widehat{y} with a forward error of order \mathrm{cond}_f(x) u is called forward stable.

The definition of \mathrm{cond}_f(x) implies that a backward stable algorithm is automatically forward stable. The converse is not true. An example of an algorithm that is forward stable but not backward stable is Gauss–Jordan elimination for solving a linear system.
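
To see why backward stability implies forward stability, suppose \widehat{y} = f(x+\Delta x) with |\Delta x| \le cu|x| for a modest constant c. Then, by the definition of the condition number, to first order in u,

\displaystyle\frac{|\widehat{y} - y|}{|y|}    = \displaystyle\frac{|f(x+\Delta x) - f(x)|}{|f(x)|}    \lesssim \mathrm{cond}_f(x) \displaystyle\frac{|\Delta x|}{|x|}    \le c\,\mathrm{cond}_f(x)\, u,

which is the familiar rule of thumb that the forward error is approximately bounded by the condition number times the backward error.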

If \widehat{y} satisfies

\widehat{y} + \Delta y = f(x+\Delta x),     \quad |\Delta y| \le \epsilon |\widehat{y}|,     \quad |\Delta x| \le \epsilon |x|,

with \epsilon small in the sense described above, then the algorithm for computing y is mixed backward–forward stable. Such an algorithm produces almost the right answer for a slightly perturbed input. The following diagram illustrates the previous equation.

[Figure: diagram illustrating mixed backward–forward stability.]

With these definitions in hand, we can turn to the meaning of the term numerically stable. Depending on the context, numerical stability can mean that an algorithm is (a) backward stable, (b) forward stable, or (c) mixed backward–forward stable.

For some problems, backward stability is difficult or impossible to achieve, so numerical stability has meaning (b) or (c). For example, let Z = xy^T, where x and y are n-vectors. Backward stability would require the computed \widehat{Z} to satisfy \widehat{Z} = (x+\Delta x)(y+\Delta y)^T for some small \Delta x and \Delta y, meaning that \widehat{Z} is a rank-1 matrix. But the computed \widehat{Z} contains n^2 independent rounding errors and is very unlikely to have rank 1.
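
A small MATLAB experiment makes the point (a sketch; the data is random and the precise numbers depend on the rounding errors incurred). If \widehat{Z} had rank 1 then every row would be an exact multiple of the first, so the elementwise ratios computed below would be exactly constant; in practice they differ at the level of the unit roundoff.

rng(1)                               % For reproducibility.
n = 8;
x = randn(n,1); y = randn(n,1);
Zhat = x*y';                         % Computed outer product: n^2 rounding errors.
ratios = Zhat(2,:)./Zhat(1,:);       % Constant (= x(2)/x(1)) in exact arithmetic.
max(ratios) - min(ratios)            % Nonzero, of order u, so Zhat is not rank 1.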

We briefly mention some other relevant points.

  • What is the data? For a linear system Ax = b the data is A or b, or both. For a nonlinear function we need to consider whether problem parameters are data; for example, for f(x) = \cos(3x^2+2) are the 3 and the 2 constants or are they data, like x, that can be perturbed?
  • We have measured perturbations in a relative sense. Absolute errors can also be used.
  • For problems whose inputs are matrices or vectors we need to use norms or componentwise measures.
  • Some algorithms are only conditionally numerically stable, that is, they are numerically stable for some subset of problems. For example, the normal equations method for solving a linear least squares problem is forward stable for problems with a large residual.

References

This is a minimal set of references, which contain further useful references within.

Related Blog Posts

This article is part of the “What Is” series, available from https://nhigham.com/category/what-is and in PDF form from the GitHub repository https://github.com/higham/what-is.


What Is the Polar Decomposition?

A polar decomposition of A\in\mathbb{C}^{m \times n} with m \ge n is a factorization A = UH, where U\in\mathbb{C}^{m \times n} has orthonormal columns and H\in\mathbb{C}^{n \times n} is Hermitian positive semidefinite. This decomposition is a generalization of the polar representation z = r \mathrm{e}^{\mathrm{i}\theta} of a complex number, where H corresponds to r\ge 0 and U to \mathrm{e}^{\mathrm{i}\theta}. When A is real, H is symmetric positive semidefinite. When m = n, U is a square unitary matrix (orthogonal for real A).

We have A^*A = H^*U^*UH = H^2, so H = (A^*\!A)^{1/2}, which is the unique positive semidefinite square root of A^*A. When A has full rank, H is nonsingular and U = AH^{-1} is unique, and in this case U can be expressed as

U = \displaystyle\frac{2}{\pi} A \displaystyle\int_{0}^{\infty} (t^2I+A^*\!A)^{-1}\, \mathrm{d}t.

An example of a polar decomposition is

A = \left[\begin{array}{@{}rr} 4 & 0\\ -5 & -3\\ 2 & 6 \end{array}\right]   =   \sqrt{2}\left[\begin{array}{@{}rr} \frac{1}{2} & -\frac{1}{6}\\[\smallskipamount] -\frac{1}{2} & -\frac{1}{6}\\[\smallskipamount] 0 & \frac{2}{3} \end{array}\right]   \cdot   \sqrt{2}\left[\begin{array}{@{\,}rr@{}} \frac{9}{2} & \frac{3}{2}\\[\smallskipamount] \frac{3}{2} & \frac{9}{2} \end{array}\right]   \equiv UH.

For an example with a rank-deficient matrix consider

A = \begin{bmatrix}                  0 & 1 & 0 \\                  0 & 0 & 1 \\                  0 & 0 & 0          \end{bmatrix},

for which A^*A = \mathrm{diag}(0,1,1) and so H = \mathrm{diag}(0,1,1). The equation A = UH then implies that

U = \begin{bmatrix}                  0 & 1 & 0 \\                  0 & 0 & 1 \\                  \theta & 0 & 0          \end{bmatrix}, \quad |\theta| = 1,

so U is not unique.

The polar factor U has the important property that it is a closest matrix with orthonormal columns to A in any unitarily invariant norm. Hence the polar decomposition provides an optimal way to orthogonalize a matrix. This method of orthogonalization is used in various applications, including in quantum chemistry, where it is called Löwdin orthogonalization. Most often, though, orthogonalization is done through QR factorization, trading optimality for a faster computation.

An important application of the polar decomposition is to the orthogonal Procrustes problem1

\min \bigl\{\, \|A-BW\|_F: W \in \mathbb{C}^{n\times n},\; W^*W = I \,\bigr\},

where A,B\in\mathbb{C}^{m\times n} and the norm is the Frobenius norm \|A\|_F^2 = \sum_{i,j} |a_{ij}|^2. This problem, which arises in factor analysis and in multidimensional scaling, asks how closely a unitary transformation of B can reproduce A. Any solution is a unitary polar factor of B^*\!A, and there is a unique solution if B^*\!A is nonsingular. Another application of the polar decomposition is in 3D graphics transformations. Here, the matrices are 3\times 3 and the polar decomposition can be computed by exploiting a relationship with quaternions.
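
A sketch of the SVD-based solution in MATLAB (random data; W is orthogonal here since the data is real):

rng(0)
m = 10; n = 4;
A = randn(m,n); B = randn(m,n);
[P,S,Q] = svd(B'*A);                 % B'*A = P*S*Q'.
W = P*Q';                            % Unitary polar factor of B'*A.
norm(A - B*W,'fro')                  % Minimal value of the objective.
norm(A - B,'fro')                    % W = I is feasible but gives a larger value.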

For a square nonsingular matrix A, the unitary polar factor U can be computed by a Newton iteration:

X_{k+1} = \displaystyle\frac{1}{2} (X_k + X_k^{-*}), \quad X_0 = A.

The iterates X_k converge quadratically to U. This is just one of many iterations for computing U and much work has been done on the efficient implementation of these iterations.
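
A bare-bones implementation of the unscaled iteration is as follows (a sketch; practical codes incorporate scaling and a more careful convergence test):

rng(0)
n = 6;
A = randn(n) + eye(n);               % A (generically) nonsingular test matrix.
X = A;
for k = 1:50
    Xnew = 0.5*(X + inv(X'));        % X_{k+1} = (X_k + X_k^{-*})/2.
    if norm(Xnew - X,'fro') <= 1e-14*norm(Xnew,'fro'), X = Xnew; break, end
    X = Xnew;
end
U = X;                               % Orthogonal polar factor.
H = U'*A; H = (H + H')/2;            % Hermitian factor, symmetrized.
norm(U'*U - eye(n),'fro')            % U is orthogonal to within roundoff.
norm(A - U*H,'fro')/norm(A,'fro')    % Small relative residual.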

If A = P \Sigma Q^* is a singular value decomposition (SVD), where P\in\mathbb{C}^{m\times n} has orthonormal columns, Q\in\mathbb{C}^{n\times n} is unitary, and \Sigma is square and diagonal with nonnegative diagonal elements, then

A = PQ^* \cdot Q \Sigma Q^* \equiv UH,

where U has orthonormal columns and H is Hermitian positive semidefinite. So a polar decomposition can be constructed from an SVD. The converse is true: if A = UH is a polar decomposition and H = Q\Sigma Q^* is a spectral decomposition (Q unitary, \Sigma diagonal) then A = (UQ)\Sigma Q^* \equiv P \Sigma Q^* is an SVD. This latter relation is the basis of a method for computing the SVD that first computes the polar decomposition by a matrix iteration and then computes the eigensystem of H, and which is extremely fast on distributed-memory manycore computers.
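
For instance, in MATLAB the polar decomposition of the 3\times 2 example above can be obtained from the economy-size SVD (a sketch):

A = [4 0; -5 -3; 2 6];
[P,S,Q] = svd(A,'econ');             % A = P*S*Q' with P 3-by-2.
U = P*Q';                            % Polar factor with orthonormal columns.
H = Q*S*Q'; H = (H + H')/2;          % Symmetric positive semidefinite factor.
norm(A - U*H,'fro')                  % Zero, up to roundoff.
norm(U'*U - eye(2),'fro')

Since A has full rank, U and H agree (up to roundoff) with the factors displayed earlier.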

The nonuniqueness of the polar decomposition for rank deficient A, and the lack of a satisfactory definition of a polar decomposition for m < n, are overcome in the canonical polar decomposition, defined for any m and n. Here, A = UH with U a partial isometry, H Hermitian positive semidefinite, and U^*U = HH^+. The superscript “+” denotes the Moore–Penrose pseudoinverse and a partial isometry can be characterized as a matrix U for which U^+ = U^*.

Generalizations of the (canonical) polar decomposition have been investigated in which the properties of U and H are defined with respect to a general, possibly indefinite, scalar product.

References

This is a minimal set of references, which contain further useful references within.

Related Blog Posts

This article is part of the “What Is” series, available from https://nhigham.com/category/what-is and in PDF form from the GitHub repository https://github.com/higham/what-is.

Footnotes:

1. Procrustes: an ancient Greek robber who tied his victims to an iron bed, stretching their legs if too short for it, and lopping them if too long.


What Is a Symmetric Positive Definite Matrix?

A real n\times n matrix A is symmetric positive definite if it is symmetric (A is equal to its transpose, A^T) and

x^T\!Ax > 0 \quad \mbox{for all nonzero vectors}~x.

By making particular choices of x in this definition we can derive the inequalities

\begin{alignedat}{2}     a_{ii} &>0                       \quad&&\mbox{for all}~i,\\     a_{ij} &< \sqrt{ a_{ii} a_{jj} } \quad&&\mbox{for all}~i\ne j.   \end{alignedat}

Satisfying these inequalities is not sufficient for positive definiteness. For example, the matrix

A = \begin{bmatrix}         1 & 3/4 & 0 \\         3/4 & 1 & 3/4 \\         0    & 3/4 & 1 \\        \end{bmatrix}

satisfies all the inequalities but x^T\!Ax < 0 for x = [1,{}-\!\sqrt{2},~1]^T.
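
A quick MATLAB check confirms the indefiniteness:

A = [1 3/4 0; 3/4 1 3/4; 0 3/4 1];
x = [1; -sqrt(2); 1];
x'*A*x                               % 4 - 3*sqrt(2), approximately -0.24.
min(eig(A))                          % Negative, so A is not positive definite.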

A sufficient condition for a symmetric matrix to be positive definite is that it has positive diagonal elements and is diagonally dominant, that is, a_{ii} > \sum_{j \ne i} |a_{ij}| for all i.

The definition requires the positivity of the quadratic form x^T\!Ax. Sometimes this condition can be confirmed from the definition of A. For example, if A = B^T\!B and B has linearly independent columns then x^T\!Ax = (Bx)^T Bx > 0 for x\ne 0. Generally, though, this condition is not easy to check.

Two equivalent conditions to A being symmetric positive definite are

  • every leading principal minor \det(A_k), where the submatrix A_k = A(1\colon k, 1 \colon k) comprises the intersection of rows and columns 1 to k, is positive,
  • the eigenvalues of A are all positive.

The first condition implies, in particular, that \det(A) > 0, which also follows from the second condition since the determinant is the product of the eigenvalues.
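
Both conditions are easy to check numerically; applied to the 3\times 3 matrix above they correctly flag it as not positive definite (a sketch):

A = [1 3/4 0; 3/4 1 3/4; 0 3/4 1];
minors = arrayfun(@(k) det(A(1:k,1:k)), 1:3)   % 1, 7/16, -1/8: not all positive.
eig(A)                                         % The smallest eigenvalue is negative.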

Here are some other important properties of symmetric positive definite matrices.

  • A^{-1} is positive definite.
  • A has a unique symmetric positive definite square root X, where a square root is a matrix X such that X^2 = A.
  • A has a unique Cholesky factorization A = R^T\!R, where R is upper triangular with positive diagonal elements.

Sources of positive definite matrices include statistics, since nonsingular correlation matrices and covariance matrices are symmetric positive definite, and finite element and finite difference discretizations of differential equations.

Examples of symmetric positive definite matrices, of which we display only the 4\times 4 instances, are the Hilbert matrix

H_4 = \left[\begin{array}{@{\mskip 5mu}c*{3}{@{\mskip 15mu} c}@{\mskip 5mu}}             1 & \frac{1}{2} & \frac{1}{3}  & \frac{1}{4}  \\[6pt]            \frac{1}{2} & \frac{1}{3}   & \frac{1}{4}   & \frac{1}{5}\\[6pt]            \frac{1}{3} & \frac{1}{4}   &      \frac{1}{5}   & \frac{1}{6}\\[6pt]            \frac{1}{4} & \frac{1}{5}   &      \frac{1}{6}   & \frac{1}{7}\\[6pt]            \end{array}\right],

the Pascal matrix

P_4 = \left[\begin{array}{@{\mskip 5mu}c*{3}{@{\mskip 15mu} r}@{\mskip 5mu}}      1 &    1  &   1  &   1\\      1 &    2  &   3  &   4\\      1 &    3  &   6  &  10\\      1 &    4  &  10  &  20            \end{array}\right],

and minus the second difference matrix, which is the tridiagonal matrix

S_4 = \left[\begin{array}{@{\mskip 5mu}c*{3}{@{\mskip 15mu} r}@{\mskip 5mu}}      2 &   -1  &      &    \\     -1 &    2  &  -1  &    \\        &    -1 &   2  &  -1 \\        &       &  -1  &  2            \end{array}\right].

In the Pascal matrix and minus the second difference matrix the elements are non-decreasing along each diagonal, while in the Hilbert matrix they are non-increasing. However, no such monotonicity is required for positive definiteness: if A is positive definite then so is P^TAP for any permutation matrix P, so any symmetric reordering of the rows or columns is possible without changing the definiteness.

A 4\times 4 symmetric positive definite matrix that was often used as a test matrix in the early days of digital computing is the Wilson matrix

W =  \begin{bmatrix}        5 &7& 6& 5 \\        7 &10& 8& 7 \\        6& 8& 10& 9 \\        5& 7& 9 &10       \end{bmatrix}.

What is the best way to test numerically whether a symmetric matrix is positive definite? Computing the eigenvalues and checking their positivity is reliable, but slow. The fastest method is to attempt to compute a Cholesky factorization and declare the matrix positive definite if the factorization succeeds. This is a reliable test even in floating-point arithmetic. If the matrix is not positive definite the factorization typically breaks down in the early stages and so gives a quick negative answer.
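
In MATLAB the test is a two-liner, using the convention (noted earlier) that the second output of chol is zero when the factorization succeeds:

A = gallery('lehmer',8);             % Symmetric positive definite.
[~,p] = chol(A); is_spd = (p == 0)   % True.
[~,p] = chol(A - 0.3*eye(8));        % The indefinite matrix used earlier.
is_spd = (p == 0)                    % False.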

Symmetric block matrices

C = \begin{bmatrix}         A & X\\         X^T & B      \end{bmatrix}

often appear in applications. If A is nonsingular then we can write

\begin{bmatrix} A & X\\ X^T & B \end{bmatrix}  = \begin{bmatrix} I         & 0\\ X^TA^{-1} & I \end{bmatrix} \begin{bmatrix} A & 0\\ 0 & B-X^TA^{-1}X \end{bmatrix} \begin{bmatrix} I & A^{-1}X\\ 0 & I \end{bmatrix},

which shows that C is congruent to a block diagonal matrix, which is positive definite when its diagonal blocks are. It follows that C is positive definite if and only if both A and B - X^TA^{-1}X are positive definite. The matrix B - X^TA^{-1}X is called the Schur complement of A in C.
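
Here is a small numerical illustration (a sketch; the blocks are arbitrary choices):

A = [2 1; 1 2]; B = [3 1; 1 3]; X = [1 0; 1 1];
C = [A X; X' B];
S = B - X'*(A\X);                    % Schur complement of A in C.
min(eig(A)) > 0 & min(eig(S)) > 0    % True: A and S are positive definite, ...
min(eig(C)) > 0                      % ... and so is C, as the theorem predicts.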

We mention two determinantal inequalities. If the block matrix C above is positive definite then \det(C) \le \det(A) \det(B) (Fischer’s inequality). Applying this inequality recursively gives Hadamard’s inequality for a symmetric positive definite A:

\det(A) \le a_{11}a_{22} \dots a_{nn},

with equality if and only if A is diagonal.

Finally, we note that if x^TAx \ge 0 for all x, so that the quadratic form is allowed to be zero, then the symmetric matrix A is called symmetric positive semidefinite. Some, but not all, of the properties above generalize in a natural way. An important difference is that semidefiniteness is equivalent to all the principal minors, of which there are 2^n - 1, being nonnegative; it is not enough to check the n leading principal minors. Consider, as an example, the matrix

\begin{bmatrix}         1 & 1 & 1\\         1 & 1 & 1\\         1 & 1 & 0      \end{bmatrix},

which has leading principal minors 1, 0, and 0 and a negative eigenvalue.
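
Checking numerically in MATLAB:

A = [1 1 1; 1 1 1; 1 1 0];
arrayfun(@(k) det(A(1:k,1:k)), 1:3)  % Leading principal minors 1, 0, 0.
min(eig(A))                          % Negative: A is not positive semidefinite.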

A complex n\times n matrix A is Hermitian positive definite if it is Hermitian (A is equal to its conjugate transpose, A^*) and x^*Ax > 0 for all nonzero vectors x. Everything we have said above generalizes to the complex case.

References

This is a minimal set of references, which contain further useful references within.

Related Blog Posts

This article is part of the “What Is” series, available from https://nhigham.com/category/what-is and in PDF form from the GitHub repository https://github.com/higham/what-is.
