Advances in Numerical Linear Algebra Conference and James Hardy Wilkinson Centenary


This year marks the 100th anniversary of the birth of James Hardy Wilkinson, FRS. Wilkinson developed the theory and practice of backward error analysis for floating-point computation, and he designed, analyzed, and implemented in software many algorithms in numerical linear algebra. While much has changed since his untimely passing in 1986, we still rely on the analytic and algorithmic foundations laid by Wilkinson.

This micro website about Wilkinson, set up by Sven Hammarling and me, contains links to all kinds of information about Wilkinson, including audio and video recordings of him.

With Sven Hammarling and Françoise Tisseur, I am organizing a conference, Advances in Numerical Linear Algebra: Celebrating the Centenary of the Birth of James H. Wilkinson, at the University of Manchester, May 29-30, 2019. Among the 13 invited speakers are several who knew or worked with Wilkinson. As well as focusing on recent developments and future challenges for numerical linear algebra, the talks will include reminiscences about Wilkinson and discuss his legacy.

Contributed talks and posters are welcome (deadline April 1, 2019) and some funding is available to support the attendance of early career researchers and PhD students.

Who Invented the Matrix Condition Number?


The condition number of a matrix is a well-known measure of ill conditioning that has been in use for many years. For an n\times n matrix A it is \kappa(A) = \|A\| \|A^{-1}\|, where \|\cdot\| is any matrix norm. If A is singular we usually regard the condition number as infinite.
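As a minimal illustration of this definition (the Hilbert matrix example and all names below are my own, not from the sources discussed here), one can compute \kappa(A) directly from its defining formula and check it against NumPy's built-in routine:

```python
import numpy as np

# A deliberately ill-conditioned example: the 4x4 Hilbert matrix,
# with entries a_{ij} = 1/(i+j+1) for zero-based i, j.
n = 4
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])

# kappa(A) = ||A|| * ||A^{-1}||, here in the 2-norm.
kappa = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2)

# NumPy computes the same quantity directly via np.linalg.cond.
print(kappa, np.linalg.cond(A, 2))
```

Explicitly inverting A is used here only to mirror the formula; in practice \kappa_2(A) is obtained from the singular values without forming A^{-1}.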

The first occurrences of the term “condition number” and of the formula \kappa(A) = \|A\| \|A^{-1}\| that I am aware of are in Turing’s 1948 paper Rounding-Off Errors in Matrix Processes. He defines the M-condition number n\|A\|_M \|A^{-1}\|_M and the N-condition number n^{-1}\|A\|_N \|A^{-1}\|_N, where \|A\|_M = \max_{i,j}|a_{ij}| and \|A\|_N = (\sum_{i,j}|a_{ij}|^2)^{1/2}, the latter N-norm being what we now call the Frobenius norm. He suggests using these condition numbers to measure the ill conditioning of a matrix with respect to linear systems, using a statistical argument to make the connection. He also notes that “the best conditioned matrices are the orthogonal ones”.
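Turing's two condition numbers are easy to evaluate from their definitions. The sketch below (my own illustration; the helper names are not Turing's) also checks his remark about orthogonal matrices: for orthogonal Q we have \|Q\|_N = \|Q^{-1}\|_N = \sqrt{n}, so the N-condition number is exactly 1.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
Ainv = np.linalg.inv(A)

def M_norm(X):
    # ||X||_M = max_{i,j} |x_{ij}|
    return np.abs(X).max()

def N_norm(X):
    # ||X||_N = (sum_{i,j} |x_{ij}|^2)^{1/2}, the Frobenius norm
    return np.sqrt((np.abs(X) ** 2).sum())

M_cond = n * M_norm(A) * M_norm(Ainv)           # Turing's M-condition number
N_cond = (1.0 / n) * N_norm(A) * N_norm(Ainv)   # Turing's N-condition number

# An orthogonal matrix attains the minimum N-condition number, 1.
Q, _ = np.linalg.qr(A)
print((1.0 / n) * N_norm(Q) * N_norm(np.linalg.inv(Q)))
```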

In his 1963 book Rounding Errors in Algebraic Processes, Wilkinson credits the first use of “condition number” to Turing and notes that “the term ‘ill-condition’ had been in common use among numerical analysts for some considerable time before this”. An early mention of linear equations being ill conditioned is in the 1933 paper An Electrical Calculating Machine by Mallock. According to Croarken, Mallock’s machine “could not adequately deal with ill conditioned equations, letting out a very sharp whistle when equilibrium could not be reached”.

As noted by Todd (The Condition of a Certain Matrix, 1950), von Neumann and Goldstine (in their monumental 1947 paper Numerical Inverting of Matrices of High Order) and Wittmeyer (1936) used the ratio of largest to smallest eigenvalue of a positive definite matrix in their analyses, which amounts to the 2-norm condition number \kappa_2(A) = \|A\|_2 \|A^{-1}\|_2, though this formula is not used by these authors. Todd called this the P condition number. None of the M, N, or P names has stuck.
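The equivalence of the eigenvalue ratio and \kappa_2 for a symmetric positive definite matrix follows from \|A\|_2 = \lambda_{\max} and \|A^{-1}\|_2 = 1/\lambda_{\min}. A quick numerical check (the matrix construction below is my own, chosen only to be positive definite):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
A = B @ B.T + 5 * np.eye(5)   # symmetric positive definite by construction

# Todd's P condition number: ratio of extreme eigenvalues.
eigs = np.linalg.eigvalsh(A)
ratio = eigs.max() / eigs.min()

# The 2-norm condition number from the defining formula.
kappa2 = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2)

print(ratio, kappa2)  # the two agree to rounding error
```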

Nowadays we know that \kappa(A) can be thought of both as a measure of the sensitivity of the solution of a linear system to perturbations in the data and as a measure of the sensitivity of the matrix inverse to perturbations in the matrix (see, for example, Condition Numbers and Their Condition Numbers by D. J. Higham). How to formulate the definition of condition number for a wide class of problems was worked out by John Rice in his 1966 paper A Theory of Condition.
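The sensitivity interpretation can be seen concretely in the standard bound for a linear system with a perturbed right-hand side: if Ax = b and A\hat{x} = b + \Delta b, then \|\hat{x}-x\|/\|x\| \le \kappa(A)\,\|\Delta b\|/\|b\|. A small sketch verifying this bound numerically (the data below are randomly generated for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n))
x = rng.standard_normal(n)
b = A @ x

# Perturb the right-hand side slightly and re-solve.
db = 1e-8 * rng.standard_normal(n)
x_pert = np.linalg.solve(A, b + db)

rel_err = np.linalg.norm(x_pert - x) / np.linalg.norm(x)
bound = np.linalg.cond(A, 2) * np.linalg.norm(db) / np.linalg.norm(b)

print(rel_err, bound)  # the relative error is within the kappa(A) bound
```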

Reflections on a SIAM Presidency

The Drexel Dragon, on the Drexel University campus a couple of blocks from SIAM.
Face Fragment (1975) sculpture, a block from SIAM.

My two-year term as SIAM President ended on December 31, 2018. It’s been an exciting and enjoyable two years, not least because of the excellent SIAM staff, leadership and other volunteers I’ve worked with.

My blog post Taking Up the SIAM Presidency and my SIAM News article Evolving and Innovating set out some ideas that I wanted to pursue during my two years as president. I will not attempt to review these here, but just list five highlights from the last two years.

  • We held the SIAM ADVANCE in April 2018: a two-day strategic planning workshop attended by 25 officers, staff, and other members of the SIAM community. The many ideas that emerged from the event are summarized in an 80-page report provided to the SIAM Council and Board of Trustees. Many of these have already been acted upon, others are in progress, and yet more will be considered in the future. My SIAM News article Advancing SIAM gives more details of the workshop.
  • A new journal SIAM Journal on Mathematics of Data Science was created. The first issue will be published in the first few months of 2019.
  • A new SIAM book series Data Science was created.
  • A new SIAG, the SIAM Activity Group on Applied and Computational Discrete Algorithms was approved and will begin operation in 2019.
  • The new SIAM website was launched (in June 2018).
The location of the SIAM office: 3600 Market Street, Philadelphia.

Here is a summary of my presidency in numbers:

  • 12 trips to the USA (with 0 upgrades from economy class to business class).
  • 8 visits to SIAM headquarters and 1 SIAM staff meeting attended.
  • 20 “From the SIAM President” columns written for SIAM News: they are listed here.
  • 2 SIAM Council Meetings chaired and 4 SIAM Board meetings attended.
  • 1 ICIAM board meeting attended and 1 ICIAM board meeting and workshop hosted by SIAM in Philadelphia.
  • 2 meetings of the Joint Policy Board for Mathematics in Washington chaired and 2 attended.
  • Over 230 appointments made to committees and of candidates for elections (with the advice of various SIAM committees).