A real matrix is nonnegative if all its elements are nonnegative and it is positive if all its elements are positive. Nonnegative matrices arise in a wide variety of applications, for example as matrices of probabilities in Markov processes and as adjacency matrices of graphs. Information about the eigensystem is often essential in these applications.
Perron (1907) proved results about the eigensystem of a positive matrix and Frobenius (1912) extended them to nonnegative matrices.
The following three results of increasing specificity summarize the key spectral properties of nonnegative matrices proved by Perron and Frobenius. Recall that a simple eigenvalue of an $n\times n$ matrix is one with algebraic multiplicity $1$, that is, it occurs only once in the set of $n$ eigenvalues. We denote by $\rho(A)$ the spectral radius of $A$, the largest absolute value of any eigenvalue of $A$.
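As a quick MATLAB sketch of this definition (the example matrix here is an arbitrary choice, not one from the text), the spectral radius can be computed from the eigenvalues:

>> A = [2 1 0; 1 3 1; 0 1 2];   % arbitrary nonnegative matrix, eigenvalues 1, 2, 4
>> rho = max(abs(eig(A)))       % spectral radius rho(A) = 4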
Theorem 1. (Perron–Frobenius) If $A\in\mathbb{R}^{n\times n}$ is nonnegative then
- $\rho(A)$ is an eigenvalue of $A$,
- there is a nonnegative eigenvector $x$ such that $Ax = \rho(A)x$.
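A minimal MATLAB check of Theorem 1 on a random nonnegative matrix (an illustrative sketch, not part of the original exposition):

>> rng(1); A = rand(4);                          % random nonnegative matrix
>> [V,D] = eig(A); [rho,k] = max(abs(diag(D)));
>> D(k,k)                                        % equals rho(A), and is real
>> x = V(:,k)/sum(V(:,k))                        % eigenvector scaled so its entries are nonnegative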
A matrix is reducible if there is a permutation matrix $P$ such that
$$P^TAP = \begin{bmatrix} A_{11} & A_{12} \\ 0 & A_{22} \end{bmatrix},$$
where $A_{11}$ and $A_{22}$ are square, nonempty submatrices; it is irreducible if it is not reducible. Examples of reducible matrices are triangular matrices and matrices with a zero row or column. A positive matrix is trivially irreducible.
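As a small added illustration (the matrix is an arbitrary example): the matrix below has zeros in its second column except on the diagonal, and the symmetric permutation that moves index 2 to the front reveals the block upper triangular form, so the matrix is reducible.

>> A = [3 0 1; 2 4 5; 1 0 2];
>> p = [2 1 3]; A(p,p)          % block upper triangular => A is reducible
ans =
     4     2     5
     0     3     1
     0     1     2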
Theorem 2. (Perron–Frobenius) If $A\in\mathbb{R}^{n\times n}$ is nonnegative and irreducible then
- $\rho(A)$ is an eigenvalue of $A$,
- $\rho(A) > 0$,
- there is a positive eigenvector $x$ such that $Ax = \rho(A)x$,
- $\rho(A)$ is a simple eigenvalue.
Theorem 3. (Perron) If $A\in\mathbb{R}^{n\times n}$ is positive then Theorem 2 holds and, in addition, $|\lambda| < \rho(A)$ for any eigenvalue $\lambda$ with $\lambda \ne \rho(A)$.
For nonnegative, irreducible $A$, the eigenvalue $\rho(A)$ is called the Perron root of $A$ and the corresponding positive eigenvector $x$, normalized so that $\|x\|_1 = 1$, is called the Perron vector.
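A minimal MATLAB sketch of computing the Perron root and Perron vector with eig, using an arbitrary irreducible example (not one of the matrices discussed below):

>> A = [0 2; 1 1];                               % nonnegative and irreducible
>> [V,D] = eig(A); [rho,k] = max(real(diag(D)));
>> rho                                           % Perron root, here 2
>> x = V(:,k)/sum(V(:,k))                        % Perron vector [0.5 0.5]', with norm(x,1) = 1

Here the Perron root is located via the largest real part, which for a nonnegative irreducible matrix identifies $\rho(A)$ even when other eigenvalues have the same modulus.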
It is a good exercise to apply the theorems to all $2\times 2$ binary matrices (matrices whose entries are $0$ or $1$). Here are some interesting cases.
$A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$: Theorem 1 says that $\rho(A) = 0$ is an eigenvalue and that it has a nonnegative eigenvector. Indeed $x = \begin{bmatrix} 1 & 0 \end{bmatrix}^T$ is an eigenvector. Note that $A$ is reducible and $0$ is a repeated eigenvalue.
$A = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$: $A$ is irreducible and Theorem 2 says that $\rho(A) = 1$ is a simple eigenvalue with positive eigenvector. Indeed the eigenvalues are $1$ and $-1$, and $x = \begin{bmatrix} \frac{1}{2} & \frac{1}{2} \end{bmatrix}^T$ is the Perron vector for the Perron root $1$. This matrix has two eigenvalues of maximal modulus.
$A = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}$: Theorem 3 says that $\rho(A) = 2$ is an eigenvalue with positive eigenvector and that the other eigenvalue has modulus less than $2$. Indeed the eigenvalues are the Perron root $2$, with Perron vector $x = \begin{bmatrix} \frac{1}{2} & \frac{1}{2} \end{bmatrix}^T$, and $0$.
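These three cases are easy to check numerically; here is a brief MATLAB sketch (an added check, not part of the original text):

>> eig([0 1; 0 0])                 % 0 is a repeated eigenvalue, and rho(A) = 0
>> eig([0 1; 1 0])                 % eigenvalues -1 and 1, both of modulus rho(A) = 1
>> [V,D] = eig([1 1; 1 1]); [rho,k] = max(diag(D));
>> rho, V(:,k)/sum(V(:,k))         % Perron root 2 and Perron vector [0.5 0.5]'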
For another example, consider the irreducible matrix
$$A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{bmatrix}.$$
Note that $A$ is a companion matrix and a permutation matrix. Theorem 2 correctly tells us that $\rho(A) = 1$ is an eigenvalue of $A$, and that it has a corresponding positive eigenvector, the Perron vector $x = \begin{bmatrix} \frac{1}{3} & \frac{1}{3} & \frac{1}{3} \end{bmatrix}^T$. Two of the eigenvalues are complex, however, and all three eigenvalues have modulus 1, as they must because $A$ is orthogonal.
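A quick MATLAB check (an added illustration) that the eigenvalues are the cube roots of unity, all of modulus 1:

>> A = [0 1 0; 0 0 1; 1 0 0];
>> e = eig(A), abs(e)              % one eigenvalue is 1, two are complex; all have modulus 1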
A stochastic matrix is a nonnegative matrix whose row sums are all equal to $1$. A stochastic matrix satisfies $Ae = e$, where $e = [1, 1, \dots, 1]^T$, which means that $A$ has an eigenvalue $1$, and so $\rho(A) \ge 1$. Since $\rho(A) \le \|A\|$ for any consistent matrix norm, by taking the $\infty$-norm, which is the maximal row sum and hence equal to $1$ for a stochastic matrix, we conclude that $\rho(A) = 1$. For a stochastic matrix, Theorem 1 does not give any further information. If $A$ is irreducible then Theorem 2 tells us that $1$ is a simple eigenvalue, and if $A$ is positive Theorem 3 tells us that every other eigenvalue has modulus less than $1$.
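A minimal MATLAB sketch with a randomly generated positive (hence irreducible) stochastic matrix, illustrating these facts (an added example):

>> rng(0); n = 4; A = rand(n); A = A./sum(A,2);   % random positive stochastic matrix
>> norm(A*ones(n,1) - ones(n,1))                  % A*e = e, so 1 is an eigenvalue
>> sort(abs(eig(A)),'descend')                    % rho(A) = 1 and every other modulus is < 1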
The next result is easily proved using Theorem 3 together with the Jordan canonical form. It shows that the powers of a positive matrix behave like multiples of a rank-1 matrix.
Theorem 4. If $A\in\mathbb{R}^{n\times n}$ is positive, $x$ is the Perron vector of $A$, and $y$ is the Perron vector of $A^T$, then
$$\lim_{k\to\infty} \left( \frac{A}{\rho(A)} \right)^k = \frac{xy^T}{y^Tx}.$$
Note that $y$ in the theorem is a left eigenvector of $A$ corresponding to $\rho(A)$, that is, $y^TA = \rho(A)y^T$ (since $\rho(A^T) = \rho(A)$).
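A brief MATLAB check of Theorem 4 on a random positive matrix (an added sketch; the Perron vectors are computed with eig and normalized to unit 1-norm):

>> rng(1); n = 5; A = rand(n);                    % random positive matrix
>> [V,D] = eig(A);  [rho,i] = max(real(diag(D))); x = V(:,i)/sum(V(:,i));
>> [W,E] = eig(A'); [~,j]   = max(real(diag(E))); y = W(:,j)/sum(W(:,j));
>> norm((A/rho)^50 - x*y'/(y'*x), 1)              % small, and decreasing as the power grows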
If $A$ is stochastic and positive then Theorem 4 is applicable and $x = n^{-1}e$. If $A$ also has unit column sums, so that it is doubly stochastic, then $y = n^{-1}e$ and Theorem 4 says that $\lim_{k\to\infty} A^k = n^{-1}ee^T$, the matrix with every entry equal to $1/n$. We illustrate this result in MATLAB using a scaled magic square matrix.
>> n = 4; M = magic(n), A = M/sum(M(1,:)) % Doubly stochastic matrix.
M =
    16     2     3    13
     5    11    10     8
     9     7     6    12
     4    14    15     1
A =
   4.7059e-01   5.8824e-02   8.8235e-02   3.8235e-01
   1.4706e-01   3.2353e-01   2.9412e-01   2.3529e-01
   2.6471e-01   2.0588e-01   1.7647e-01   3.5294e-01
   1.1765e-01   4.1176e-01   4.4118e-01   2.9412e-02
>> for k = 8:8:32, fprintf('%11.2e',norm(A^k-ones(n)/n,1)), end, disp(' ')
   3.21e-05   7.37e-10   1.71e-14   8.05e-16
References
This is a minimal set of references, which contain further useful references within.
- Roger A. Horn and Charles R. Johnson, Matrix Analysis, second edition, Cambridge University Press, 2013. Chapter 8. My review of the second edition.
- Carl D. Meyer, Matrix Analysis and Applied Linear Algebra, Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 2000. Chapter 8.
- Helene Shapiro, Linear Algebra and Matrices. Topics for a Second Course, American Mathematical Society, Providence, RI, USA, 2015. Chapter 17.
This article is part of the “What Is” series, available from https://nhigham.com/category/what-is and in PDF form from the GitHub repository https://github.com/higham/what-is.
Comments

Hi, and thanks for the nice post. In the $A = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}$ example: “…eigenvector and that the other eigenvalue has modulus less than 1” I think you meant to write “less than 2”.

Thanks – corrected.
I am hoping the Perron–Frobenius theorem will help prove a seemingly simple thing: for a square nonnegative $Q$ with spectral radius less than $1$, I hope that $(I + Q)^{-1}Q$ is also nonnegative. Any hints or thoughts?
It’s false. Here is a counterexample in MATLAB:
>> rng(6); n = 2; A = rand(n); A = A/(1.1*max(abs(eig(A)))); (eye(n)+A)\A
ans =
2.3058e-01 2.3561e-01
9.5246e-02 -1.3619e-02
With a change of sign, $(I-Q)^{-1}Q$ is nonnegative, as can be seen from a Neumann expansion of the inverse.
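As a quick numerical check of this remark (an added sketch, reusing a random $Q$ scaled so that $\rho(Q) < 1$): the Neumann expansion gives $(I-Q)^{-1}Q = Q + Q^2 + Q^3 + \cdots$, a sum of nonnegative matrices.

>> rng(6); n = 2; Q = rand(n); Q = Q/(1.1*max(abs(eig(Q))));
>> min(min((eye(n)-Q)\Q))          % >= 0: every entry of (I-Q)\Q is nonnegative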
Thanks. Now I see why I couldn’t prove it. I will have to think about whether I really need it.