# Randsvd Matrices with Large Growth Factors

Sixty years ago James Wilkinson published his backward error analysis of Gaussian elimination for solving a linear system $Ax = b$, where $A$ is a nonsingular $n\times n$ matrix. He showed that in floating-point arithmetic the computed solution $\widehat{x}$ satisfies

$(A+\Delta A) \widehat{x} = b, \qquad \|\Delta A\|_{\infty} \le p(n) \rho_n u \|A\|_{\infty},$

where $u$ is the unit roundoff and $p$ is a low degree polynomial. The term $\rho_n$ is the growth factor, defined by

$\rho_n = \displaystyle\frac{\max_{i,j,k} |a_{ij}^{(k)}|} {\max_{i,j}|a_{ij}|} \ge 1,$

where the $a_{ij}^{(k)}$ are the elements at the $k$th stage of Gaussian elimination. The growth factor measures how much elements grow during the elimination. We would like the product $p(n)\rho_n$ to be of order 1, so that $\Delta A$ is a small relative perturbation of $A$. We therefore need $\rho_n$ not to be too large.
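The definition can be made concrete in code. Here is a sketch in Python (the post's own examples below use MATLAB; the function name `growth_factor` is mine) that performs Gaussian elimination with partial pivoting and tracks the largest element at every stage $k$, giving the exact growth factor for small matrices:

```python
import numpy as np

def growth_factor(A):
    """Exact growth factor for Gaussian elimination with partial
    pivoting: tracks max |a_ij^(k)| over every stage k."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    amax0 = np.max(np.abs(A))   # max |a_ij|: the denominator
    amax = amax0                # running max over all stages
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))   # partial pivoting: largest
        A[[k, p]] = A[[p, k]]                 # element in column k
        A[k+1:, k] /= A[k, k]                 # multipliers
        A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k, k+1:])
        amax = max(amax, np.max(np.abs(A[k+1:, k+1:])))
    return amax / amax0

rng = np.random.default_rng(0)
print(growth_factor(rng.standard_normal((50, 50))))  # at least 1 by definition
```

Since `amax` can never fall below `amax0`, the returned value is always at least 1, consistent with the inequality in the definition.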

With partial pivoting, in which row interchanges are used to ensure that at each stage the pivot element is the largest in its column, Wilkinson showed that $\rho_n \le 2^{n-1}$ and that equality is possible. Such exponential growth implies a large $\Delta A$ (unless we are lucky), meaning a severe loss of numerical stability. However, seventy years of digital computing experience have shown that $\rho_n$ is usually of modest size in practice. Explaining why this is the case is one of the outstanding problems in numerical analysis.
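Wilkinson's bound is attained by a well-known example: the matrix with ones on the diagonal and in the last column and $-1$s below the diagonal, for which the last column doubles at every stage. A quick check in Python (the post's experiments below use MATLAB; this NumPy/SciPy sketch is mine):

```python
import numpy as np
from scipy.linalg import lu

def gf(A):
    """Lower bound on the growth factor for LU with partial pivoting."""
    _, _, U = lu(A)
    return np.max(np.abs(U)) / np.max(np.abs(A))

# Wilkinson's worst-case matrix for partial pivoting.
n = 8
A = np.tril(-np.ones((n, n)), -1) + np.eye(n)
A[:, -1] = 1

print(gf(A))   # 2^(n-1) = 128 in exact arithmetic
```

No row interchanges occur for this matrix, and the final pivot is $2^{n-1}$.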

It is easy to experiment with growth factors in MATLAB. I will use the function

function g = gf(A)
%GF     Approximate growth factor.
%   g = GF(A) is an approximation to the
%   growth factor for LU factorization
%   with partial pivoting.
[~,U] = lu(A);
g = max(abs(U),[],'all')/max(abs(A),[],'all');


It computes a lower bound on the growth factor (since it considers only the final stage, $k = n$, in the numerator of the definition), but it is entirely adequate for our purposes here. Let’s compute the growth factor for a random matrix of order 10,000 with elements from the standard normal distribution (mean 0, variance 1):

>> rng(1); n = 10000; gf(randn(n))
ans =
6.1335e+01


Growth of 61 is unremarkable for a matrix of this size. Now we try a matrix of the same size generated by the gallery('randsvd') function:

>> A = gallery('randsvd',n,1e6,2,[],[],1);
>> gf(A)
ans =
9.7544e+02


This function generates an $n\times n$ matrix with known singular value distribution and with singular vector matrices that are random orthogonal matrices from the Haar distribution. The parameter 1e6 specifies the 2-norm condition number, while the 2 (the mode parameter) specifies that there is only one small singular value, so the singular values are 1 repeated $n-1$ times and 1e-6. Growth of 975 is exceptional! These matrices have been in MATLAB since the 1990s, but this large growth property has apparently not been noticed before.
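The construction just described is easy to imitate outside MATLAB. Here is a Python sketch (NumPy/SciPy; the helper names `haar` and `gf` are mine, and `gallery('randsvd')` itself is MATLAB-only) that builds a mode 2 randsvd-like matrix from Haar distributed orthogonal factors and compares its growth with that of a Gaussian random matrix:

```python
import numpy as np
from scipy.linalg import lu, qr

def gf(A):
    """Lower bound on the growth factor, as in the MATLAB gf above."""
    _, _, U = lu(A)
    return np.max(np.abs(U)) / np.max(np.abs(A))

def haar(n, rng):
    """Haar distributed random orthogonal matrix: QR factor of a
    Gaussian matrix, with signs fixed so that diag(R) > 0."""
    Q, R = qr(rng.standard_normal((n, n)))
    return Q * np.sign(np.diag(R))

rng = np.random.default_rng(1)
n, kappa = 500, 1e6
sigma = np.ones(n)
sigma[-1] = 1/kappa                   # mode 2: singular values 1,...,1, 1e-6
A = (haar(n, rng) * sigma) @ haar(n, rng).T   # A = U * diag(sigma) * V'

print(gf(A))                          # typically well above n/(4 log n) ~ 20
print(gf(rng.standard_normal((n, n))))  # modest growth, for comparison
```

The dimension is kept at $n = 500$ so the script runs in a few seconds; the contrast with the Gaussian matrix is already clear at this size.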

It turns out that, with high probability, mode 2 randsvd matrices produce growth factors of at least $n/(4 \log n)$ for any condition number and for any pivoting strategy, not just partial pivoting. One way to check this is to randomly permute the columns of $A$ before doing the LU factorization with partial pivoting:

>> gf(A(:,randperm(n)))
ans =
7.8395e+02


Here is a plot showing, for each $n$, the maximum growth factor over 12 randsvd matrices for three different pivoting strategies, along with the maximum growth factors under partial pivoting for rand and randn matrices. The black curve is $n/(4 \log n)$. The plot emphasizes the unusually large growth for mode 2 randsvd matrices.

What is the explanation for this large growth? It stems from three facts.

• Haar distributed orthogonal matrices have the property that their elements are fairly small with high probability, as shown by Jiang in 2005.
• If the largest entries in magnitude of $A$ and $A^{-1}$ are both small, in the sense that their product is $\theta \ll 1$, then $A$ will produce a growth factor of at least $1/\theta$ for any pivoting strategy. This was proved by Des Higham and me in the paper Large Growth Factors in Gaussian Elimination with Pivoting.
• If $W$ is an orthogonal matrix generating large growth then a rank-1 perturbation of 2-norm at most 1 tends to preserve the large growth.
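The second fact is easy to check numerically for a Haar distributed orthogonal matrix $W$, where $A^{-1} = W^T$ and so $\theta$ is just the square of the largest element. A Python sketch (NumPy/SciPy; not code from the post):

```python
import numpy as np
from scipy.linalg import lu, qr

rng = np.random.default_rng(0)
n = 500
Q, R = qr(rng.standard_normal((n, n)))
W = Q * np.sign(np.diag(R))      # Haar distributed orthogonal matrix

alpha = np.max(np.abs(W))        # largest element of W and of W^{-1} = W^T
theta = alpha**2                 # theta << 1 with high probability (Jiang)

_, _, U = lu(W)
g = np.max(np.abs(U)) / alpha    # growth factor lower bound
print(theta, 1/theta, g)         # g is at least 1/theta
```

The inequality $g \ge 1/\theta$ holds because the last pivot $u_{nn}$ satisfies $1/|u_{nn}| \le \max_{i,j} |(A^{-1})_{ij}|$, since $1/u_{nn}$ appears as an element of $A^{-1}$.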

For full details see the new EPrint Random Matrices Generating Large Growth in LU Factorization with Pivoting by Des Higham, Srikara Pranesh and me.

Is growth of order $n$ a problem in practice? It can be for two reasons.

• The largest dense linear systems $Ax = b$ solved today are of dimension $n = 10^7$. If we work in single precision then $nu \approx 1$ and so LU factorization can potentially be completely unstable if there is growth of order $n$.
• For IEEE half precision arithmetic growth of order $n$ will cause overflow once $n$ exceeds $10^5 / \max_{i,j} |a_{ij}|$. It was overflow in half precision LU factorization on randsvd matrices that alerted us to the large growth.
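The arithmetic behind these two points is easily checked in Python (the unit roundoff $u = 2^{-24}$ and the half precision overflow level are IEEE standard constants):

```python
import numpy as np

# Single precision: unit roundoff u = 2^-24.
u_single = 2.0**-24
n = 1.0e7
print(n * u_single)   # ~0.6, so the bound p(n)*rho_n*u can reach order 1

# IEEE half precision: the largest finite number is 65504, so growth
# of order n overflows once elements reach roughly 10^5.
xmax_half = float(np.finfo(np.float16).max)
print(xmax_half)      # 65504
```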

# 1984 Symposium in Honour of James Wilkinson

In September 1984 a two-day Symposium on Computational Mathematics–State of the Art was held at Argonne National Laboratory in honour of James Hardy Wilkinson on his 65th birthday.

Wilkinson was one of the leading figures in 20th century numerical analysis. He developed the theory and practice of backward error analysis for floating-point computation, and developed, analyzed, and implemented in software many algorithms in numerical linear algebra. Among his many honours, Wilkinson was a Fellow of the Royal Society (elected 1969) and a winner of both the Turing Award (1970) and the SIAM John von Neumann Lecture award (1970). His legacy endures and in 2019 we celebrated the centenary of his birth with the conference Advances in Numerical Linear Algebra.

The 1984 symposium provided “an overview of the state of the art in several of the major areas of computational mathematics” and “was particularly appropriate for this occasion in view of the many fundamental contributions in computational mathematics made by Professor Wilkinson throughout his distinguished career”.

The symposium attracted more than 260 attendees and comprised ten invited talks by distinguished researchers:

• Some Problems in the Scaling of Matrices, G. W. Stewart
• Matrix Calculations in Optimization Algorithms, M. J. D. Powell
• Numerical Solution of Differential-Algebraic Equations, C. W. Gear
• The Requirements of Interactive Data Analysis Systems, Peter J. Huber
• Linear Algebra Problems in Multivariate Approximation Theory, C. de Boor
• Computational Linear Algebra in Continuation and Two Point Boundary Value Problems, H. B. Keller
• Second Thoughts on the Mathematical Software Effort: A Perspective, W. J. Cody
• Enhanced Resolution in Shock Wave Calculations, James Glimm
• Givens’ Eigenvector Recurrence Revisited, Beresford Parlett
• The State of the Art in Error Analysis, James H. Wilkinson

Slides from the talks are collected in an informal proceedings, available here in PDF form and on Google Books.

Jack Dongarra has provided me with contact prints of photos taken at the workshop. I have scanned some of these and they are shown below.

One of the photos shows Jim Wilkinson with an Apple Mac that was presented to him. He used it to type up several of his later papers. Sadly, Jim died just over two years after the symposium.

# The Argonne Tapes

A few weeks ago, I was in contact with Chris Paige, an emeritus Professor of Computer Science at McGill University, Montreal. I mentioned that Sven Hammarling and I are collecting information and memorabilia about the numerical analyst James Hardy Wilkinson FRS (1919-1986) for our Wilkinson webpage, and asked Chris if he knew of anything we didn’t already have. He replied “I have 5 1973 Video cassettes, each about 1 hour, by Jim labelled `Eigensystem Workshop June 1973′. … His wonderful lecturing style, and his idiosyncrasies, might be of interest, as well as the marvellous content.”

The tapes contain four talks by Wilkinson and one by Cleve Moler from an Eigensystem Workshop held at Argonne National Laboratory, Illinois, USA, in June 1973.

The tapes are Scotch UC60 High Energy color Videocassettes, in U-matic format, one of which is shown to the right. Chris was able to have the tapes digitized and the results are pretty good given the age of the tapes. We have put the videos on the Numerical Linear Algebra Group YouTube channel and link to them below.

The videos were announced at the recent conference Advances in Numerical Linear Algebra: Celebrating the Centenary of the Birth of James H. Wilkinson, held in Manchester May 29-30, 2019.

• Inverse Iteration – James H. Wilkinson, Eigensystem Workshop, Argonne National Laboratory, Argonne, Illinois, USA, June 1973.

# Advances in Numerical Linear Algebra Conference and James Hardy Wilkinson Centenary

This year marks the 100th anniversary of the birth of James Hardy Wilkinson, FRS. Wilkinson developed the theory and practice of backward error analysis for floating-point computation, and developed, analyzed, and implemented in software many algorithms in numerical linear algebra. While much has changed since his untimely passing in 1986, we still rely on the analytic and algorithmic foundations laid by Wilkinson.

This micro website about Wilkinson, set up by Sven Hammarling and me, contains links to all kinds of information about Wilkinson, including audio and video recordings of him.

With Sven Hammarling and Françoise Tisseur, I am organizing a conference Advances in Numerical Linear Algebra: Celebrating the Centenary of the Birth of James H. Wilkinson at the University of Manchester, May 29-30, 2019. Among the 13 invited speakers are several who knew or worked with Wilkinson. As well as focusing on recent developments and future challenges for numerical linear algebra, the talks will include reminiscences about Wilkinson and discuss his legacy.

Contributed talks and posters are welcome (deadline April 1, 2019) and some funding is available to support the attendance of early career researchers and PhD students.