A matrix is a rectangular array of numbers treated as a single object. A block matrix is a matrix whose elements are themselves matrices, which are called submatrices. By allowing a matrix to be viewed at different levels of abstraction, the block matrix viewpoint enables elegant proofs of results and facilitates the development and understanding of numerical algorithms.
A block matrix is defined in terms of a partitioning, which breaks a matrix into contiguous pieces. The most common and important case is for an $n\times n$ matrix to be partitioned as a $2\times 2$ block matrix (two block rows and two block columns). For $n = 4$, partitioning into $2\times 2$ blocks gives

$$A = \begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14}\\ a_{21} & a_{22} & a_{23} & a_{24}\\ a_{31} & a_{32} & a_{33} & a_{34}\\ a_{41} & a_{42} & a_{43} & a_{44} \end{bmatrix} = \begin{bmatrix} A_{11} & A_{12}\\ A_{21} & A_{22} \end{bmatrix},$$

where

$$A_{11} = \begin{bmatrix} a_{11} & a_{12}\\ a_{21} & a_{22} \end{bmatrix}$$

and similarly for the other blocks. The diagonal blocks in a partitioning of a square matrix are usually square (but not necessarily so), and they do not have to be of the same dimensions. This same matrix could be partitioned as

$$A = \begin{bmatrix} a_{11} & b^T\\ c & D \end{bmatrix},$$

where $a_{11}$ is a scalar, $c$ is a column vector, and $b^T$ is a row vector.
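For concreteness, here is a minimal NumPy sketch of the $2\times 2$ partitioning above: the blocks are extracted by slicing and reassembled with `np.block`.

```python
import numpy as np

A = np.arange(1, 17).reshape(4, 4)  # 4x4 matrix with entries 1, 2, ..., 16

# Extract the four 2x2 blocks by slicing.
A11, A12 = A[:2, :2], A[:2, 2:]
A21, A22 = A[2:, :2], A[2:, 2:]

# np.block reassembles the block matrix, recovering A exactly.
assert np.array_equal(np.block([[A11, A12], [A21, A22]]), A)
```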
The sum of two block matrices $A = (A_{ij})$ and $B = (B_{ij})$ of the same dimension is obtained by adding blockwise as long as $A_{ij}$ and $B_{ij}$ have the same dimensions for all $i$ and $j$, and the result has the same block structure:

$$\begin{bmatrix} A_{11} & A_{12}\\ A_{21} & A_{22} \end{bmatrix} + \begin{bmatrix} B_{11} & B_{12}\\ B_{21} & B_{22} \end{bmatrix} = \begin{bmatrix} A_{11}+B_{11} & A_{12}+B_{12}\\ A_{21}+B_{21} & A_{22}+B_{22} \end{bmatrix}.$$
The product of an $m\times n$ matrix $A = (A_{ij})$ and an $n\times p$ matrix $B = (B_{ij})$ can be computed as $C = (C_{ij})$, where $C_{ij} = \sum_k A_{ik}B_{kj}$, as long as the products $A_{ik}B_{kj}$ are all defined. In this case the matrices $A$ and $B$ are said to be conformably partitioned for multiplication. Here, $C$ has as many block rows as $A$ and as many block columns as $B$. For example,

$$\begin{bmatrix} A_{11} & A_{12}\\ A_{21} & A_{22} \end{bmatrix} \begin{bmatrix} B_{11} & B_{12}\\ B_{21} & B_{22} \end{bmatrix} = \begin{bmatrix} A_{11}B_{11} + A_{12}B_{21} & A_{11}B_{12} + A_{12}B_{22}\\ A_{21}B_{11} + A_{22}B_{21} & A_{21}B_{12} + A_{22}B_{22} \end{bmatrix}$$

as long as all eight products $A_{ik}B_{kj}$ are defined.
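The following NumPy sketch checks the blockwise product against the ordinary product for a rectangular example; the splitting points are arbitrary, subject to the conformability requirement that the column partition of $A$ matches the row partition of $B$.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 5))
B = rng.standard_normal((5, 3))

# Conformable partitioning: the columns of A are split 2+3,
# and the rows of B are split the same way.
A11, A12 = A[:2, :2], A[:2, 2:]
A21, A22 = A[2:, :2], A[2:, 2:]
B11, B12 = B[:2, :2], B[:2, 2:]
B21, B22 = B[2:, :2], B[2:, 2:]

C = np.block([
    [A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22],
    [A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22],
])
assert np.allclose(C, A @ B)  # blockwise product equals the ordinary product
```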
Block matrix notation is an essential tool in numerical linear algebra. Here are some examples of its usage.
Matrix Factorization
For an $n\times n$ matrix $A$ with nonzero $(1,1)$ element $a_{11}$ we can write

$$A = \begin{bmatrix} a_{11} & b^T\\ c & D \end{bmatrix} = \begin{bmatrix} 1 & 0\\ c/a_{11} & I \end{bmatrix} \begin{bmatrix} a_{11} & b^T\\ 0 & D - cb^T/a_{11} \end{bmatrix} \equiv L_1U_1.$$

The first row and column of $L_1$ have the correct form for a unit lower triangular matrix and likewise the first row and column of $U_1$ have the correct form for an upper triangular matrix. If we can find an LU factorization $S = L_2U_2$ of the $(n-1)\times(n-1)$ Schur complement $S = D - cb^T/a_{11}$, then

$$A = \begin{bmatrix} 1 & 0\\ c/a_{11} & L_2 \end{bmatrix} \begin{bmatrix} a_{11} & b^T\\ 0 & U_2 \end{bmatrix}$$

is an LU factorization of $A$. This construction is the basis of an inductive proof of the existence of an LU factorization (provided all the pivots are nonzero) and it also yields an algorithm for computing it.
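The recursion translates directly into code. Here is a minimal NumPy sketch (the function name `lu_rec` is illustrative; no pivoting is performed, so nonzero pivots are assumed):

```python
import numpy as np

def lu_rec(A):
    """LU factorization A = L U by the block construction above:
    factor out the first row and column, then recurse on the
    Schur complement D - c b^T / a11 (assumes nonzero pivots)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if n == 1:
        return np.eye(1), A.copy()
    a11, bT, c, D = A[0, 0], A[0:1, 1:], A[1:, 0:1], A[1:, 1:]
    L2, U2 = lu_rec(D - c @ bT / a11)  # LU of the Schur complement
    L = np.block([[np.eye(1), np.zeros((1, n - 1))], [c / a11, L2]])
    U = np.block([[np.array([[a11]]), bT], [np.zeros((n - 1, 1)), U2]])
    return L, U

A = np.array([[4., 2., 1.], [2., 5., 3.], [1., 3., 6.]])
L, U = lu_rec(A)
assert np.allclose(L @ U, A)
```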
The same type of construction applies to other factorizations, such as Cholesky factorization, QR factorization, and the Schur decomposition.
Matrix Inverse
A useful formula for the inverse of a nonsingular block triangular matrix

$$T = \begin{bmatrix} T_{11} & T_{12}\\ 0 & T_{22} \end{bmatrix}$$

is

$$T^{-1} = \begin{bmatrix} T_{11}^{-1} & -T_{11}^{-1}T_{12}T_{22}^{-1}\\ 0 & T_{22}^{-1} \end{bmatrix},$$

which has the special case

$$\begin{bmatrix} I & X\\ 0 & I \end{bmatrix}^{-1} = \begin{bmatrix} I & -X\\ 0 & I \end{bmatrix}.$$

If $T$ is upper triangular then so are $T_{11}$ and $T_{22}$. By taking $T_{11}$ of dimension the nearest integer to $n/2$, this formula can be used to construct a divide and conquer algorithm for computing $T^{-1}$.

We note that $\det(T) = \det(T_{11})\det(T_{22})$, a fact that will be used in the next section.
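Here is a minimal NumPy sketch of such a divide and conquer inversion of an upper triangular matrix (the helper name `inv_triu` is illustrative; a well-conditioned test matrix is assumed):

```python
import numpy as np

def inv_triu(T):
    """Invert a nonsingular upper triangular T by divide and conquer,
    using the block inverse formula with T11 of dimension about n/2."""
    n = T.shape[0]
    if n == 1:
        return np.array([[1.0 / T[0, 0]]])
    k = (n + 1) // 2  # nearest integer to n/2
    T11, T12, T22 = T[:k, :k], T[:k, k:], T[k:, k:]
    X11, X22 = inv_triu(T11), inv_triu(T22)
    return np.block([
        [X11, -X11 @ T12 @ X22],
        [np.zeros((n - k, k)), X22],
    ])

T = np.triu(np.random.default_rng(2).standard_normal((6, 6))) + 5 * np.eye(6)
assert np.allclose(inv_triu(T) @ T, np.eye(6))
```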
Determinantal Formulas
Block matrices provide elegant proofs of many results involving determinants. For example, consider the equations

$$\begin{bmatrix} I & 0\\ B & I \end{bmatrix} \begin{bmatrix} I & -A\\ 0 & I + BA \end{bmatrix} = \begin{bmatrix} I & -A\\ B & I \end{bmatrix} = \begin{bmatrix} I & -A\\ 0 & I \end{bmatrix} \begin{bmatrix} I + AB & 0\\ B & I \end{bmatrix},$$

which hold for any $A$ and $B$ such that $AB$ and $BA$ are defined. Taking determinants gives the formula $\det(I + BA) = \det(I + AB)$. In particular we can take $A = x$, $B = y^T$, for $n$-vectors $x$ and $y$, giving $\det(I + xy^T) = 1 + y^Tx$.
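Both identities are easy to check numerically; here is a minimal NumPy sketch with rectangular $A$ and $B$:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 2))
B = rng.standard_normal((2, 4))

# det(I + AB) = det(I + BA), even though AB is 4x4 and BA is 2x2.
d1 = np.linalg.det(np.eye(4) + A @ B)
d2 = np.linalg.det(np.eye(2) + B @ A)
assert np.isclose(d1, d2)

# Rank-1 special case: det(I + x y^T) = 1 + y^T x.
x, y = rng.standard_normal(4), rng.standard_normal(4)
assert np.isclose(np.linalg.det(np.eye(4) + np.outer(x, y)), 1 + y @ x)
```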
Constructing Matrices with Required Properties
We can sometimes build a matrix with certain desired properties by a block construction. For example, if $A$ is an $n\times n$ involutory matrix ($A^2 = I$) then

$$\begin{bmatrix} A & 0\\ I - A & -A \end{bmatrix}$$

is a (block triangular) involutory matrix. And if $B$ and $C$ are any two $n\times n$ matrices then

$$\begin{bmatrix} I - BC & B\\ 2C - CBC & CB - I \end{bmatrix}$$

is involutory.
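A quick NumPy check of both constructions, using a Householder reflector as a readily available involutory $A$:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3
I = np.eye(n)

# A Householder reflector is involutory: A @ A = I.
v = rng.standard_normal((n, 1))
A = I - 2 * (v @ v.T) / (v.T @ v)
M1 = np.block([[A, np.zeros((n, n))], [I - A, -A]])
assert np.allclose(M1 @ M1, np.eye(2 * n))

# The second construction works for arbitrary B and C.
B, C = rng.standard_normal((n, n)), rng.standard_normal((n, n))
M2 = np.block([[I - B @ C, B], [2 * C - C @ B @ C, C @ B - I]])
assert np.allclose(M2 @ M2, np.eye(2 * n))
```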
The Anti Block Diagonal Trick
For $n\times n$ matrices $A$ and $B$ consider the anti block diagonal matrix

$$X = \begin{bmatrix} 0 & A\\ B & 0 \end{bmatrix}.$$

Note that

$$X^2 = \begin{bmatrix} AB & 0\\ 0 & BA \end{bmatrix}, \qquad X^{-1} = \begin{bmatrix} 0 & B^{-1}\\ A^{-1} & 0 \end{bmatrix}.$$

Using these properties one can show a relation between the matrix sign function and the principal matrix square root: for $A$ with no eigenvalues on the nonpositive real axis,

$$\mathrm{sign}\left( \begin{bmatrix} 0 & A\\ I & 0 \end{bmatrix} \right) = \begin{bmatrix} 0 & A^{1/2}\\ A^{-1/2} & 0 \end{bmatrix}.$$

This allows one to derive iterations for computing the matrix square root and its inverse from iterations for computing the matrix sign function.
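One way to see this in action is to run the standard Newton iteration for the matrix sign function, $X_{k+1} = (X_k + X_k^{-1})/2$, on the block matrix: the off-diagonal blocks of the limit are then $A^{1/2}$ and $A^{-1/2}$. A minimal NumPy/SciPy sketch, assuming $A$ symmetric positive definite so that the principal square root certainly exists:

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(5)
n = 4
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)  # SPD: no eigenvalues on the negative real axis

# Newton iteration for the matrix sign function: X <- (X + X^{-1})/2.
X = np.block([[np.zeros((n, n)), A], [np.eye(n), np.zeros((n, n))]])
for _ in range(50):
    X = (X + np.linalg.inv(X)) / 2

# The limit is [[0, A^{1/2}], [A^{-1/2}, 0]].
assert np.allclose(X[:n, n:], sqrtm(A))
assert np.allclose(X[n:, :n] @ A, sqrtm(A))  # A^{-1/2} A = A^{1/2}
```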
It is easy to derive explicit formulas for all the powers of $X$, and hence for any power series evaluated at $X$. In particular, we have the formula

$$\mathrm{e}^X = \begin{bmatrix} \cosh\sqrt{AB} & (\sqrt{AB})^{-1}\sinh\bigl(\sqrt{AB}\bigr)A\\ (\sqrt{BA})^{-1}\sinh\bigl(\sqrt{BA}\bigr)B & \cosh\sqrt{BA} \end{bmatrix},$$

where $\sqrt{AB}$ denotes any square root of $AB$: the functions of $AB$ and $BA$ appearing here are even, so the formula does not depend on the choice of square root. With $A$ and $B$ replaced by $tI$ and $-tA$, so that $\mathrm{e}^X = \exp\bigl(t\bigl[\begin{smallmatrix} 0 & I\\ -A & 0 \end{smallmatrix}\bigr]\bigr)$, this formula arises in the solution of the ordinary differential equation initial value problem

$$\frac{\mathrm{d}^2y}{\mathrm{d}t^2} + Ay = 0, \qquad y(0) = y_0, \quad y'(0) = y_0',$$

whose solution can be written $y(t) = \cos\bigl(\sqrt{A}\,t\bigr)y_0 + \bigl(\sqrt{A}\bigr)^{-1}\sin\bigl(\sqrt{A}\,t\bigr)y_0'$.
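Here is a minimal SciPy check of the block exponential formula above, with random $A$ and $B$; `sqrtm` supplies one particular square root, and by the evenness remark any choice would do (in exact arithmetic), provided it is invertible.

```python
import numpy as np
from scipy.linalg import expm, coshm, sinhm, sqrtm

rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

X = np.block([[np.zeros((n, n)), A], [B, np.zeros((n, n))]])

SAB = sqrtm(A @ B)  # one particular square root of AB (may be complex)
SBA = sqrtm(B @ A)

E = np.block([
    [coshm(SAB), np.linalg.inv(SAB) @ sinhm(SAB) @ A],
    [np.linalg.inv(SBA) @ sinhm(SBA) @ B, coshm(SBA)],
])
assert np.allclose(expm(X), E)
```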
The most well known instance of the trick is when $B = A^T$ (or $A^*$ in the complex case). The eigenvalues of

$$\begin{bmatrix} 0 & A\\ A^T & 0 \end{bmatrix}$$

are plus and minus the singular values of $A$, together with $|m - n|$ additional zeros if $A$ is $m\times n$ with $m \neq n$, and the eigenvectors of the block matrix and the singular vectors of $A$ are also related. Consequently, by applying results or algorithms for symmetric matrices to the block matrix one obtains results or algorithms for the singular value decomposition of $A$.
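A minimal NumPy check of the eigenvalue relation for a rectangular $A$:

```python
import numpy as np

rng = np.random.default_rng(6)
m, n = 5, 3
A = rng.standard_normal((m, n))

# The symmetric matrix [[0, A], [A^T, 0]] has eigenvalues +/- sigma_i(A),
# plus |m - n| extra zeros.
J = np.block([[np.zeros((m, m)), A], [A.T, np.zeros((n, n))]])
eigs = np.sort(np.linalg.eigvalsh(J))

sigma = np.linalg.svd(A, compute_uv=False)
expected = np.sort(np.concatenate([sigma, -sigma, np.zeros(m - n)]))
assert np.allclose(eigs, expected)
```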
References
This is a minimal set of references, which contain further useful references within.
- Gene Golub and Charles F. Van Loan, Matrix Computations, fourth edition, Johns Hopkins University Press, Baltimore, MD, USA, 2013.
- Nicholas J. Higham, Functions of Matrices: Theory and Computation, Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 2008. (Sections 1.5 and 1.6 for the theory of matrix square roots.)
- Roger A. Horn and Charles R. Johnson, Matrix Analysis, second edition, Cambridge University Press, 2013. My review of the second edition.
Related Blog Posts
This article is part of the “What Is” series, available from https://nhigham.com/category/what-is and in PDF form from the GitHub repository https://github.com/higham/what-is.