1984 Symposium in Honour of James Wilkinson

In September 1984 a two-day Symposium on Computational Mathematics–State of the Art was held at Argonne National Laboratory in honour of James Hardy Wilkinson on his 65th birthday.

Wilkinson was one of the leading figures in 20th century numerical analysis. He developed the theory and practice of backward error analysis for floating-point computation, and developed, analyzed, and implemented in software many algorithms in numerical linear algebra. Among his many honours, Wilkinson was a Fellow of the Royal Society (elected 1969) and a winner of both the Turing Award (1970) and the SIAM John von Neumann Lecture award (1970). His legacy endures, and in 2019 we celebrated the centenary of his birth with the conference Advances in Numerical Linear Algebra.

The 1984 symposium provided “an overview of the state of the art in several of the major areas of computational mathematics” and “was particularly appropriate for this occasion in view of the many fundamental contributions in computational mathematics made by Professor Wilkinson throughout his distinguished career”.

The symposium attracted more than 260 attendees and comprised ten invited talks by distinguished researchers:

  • Some Problems in the Scaling of Matrices, G. W. Stewart
  • Matrix Calculations in Optimization Algorithms, M. J. D. Powell
  • Numerical Solution of Differential-Algebraic Equations, C. W. Gear
  • The Requirements of Interactive Data Analysis Systems, Peter J. Huber
  • Linear Algebra Problems in Multivariate Approximation Theory, C. de Boor
  • Computational Linear Algebra in Continuation and Two Point Boundary Value Problems, H. B. Keller
  • Second Thoughts on the Mathematical Software Effort: A Perspective, W. J. Cody
  • Enhanced Resolution in Shock Wave Calculations, James Glimm
  • Givens’ Eigenvector Recurrence Revisited, Beresford Parlett
  • The State of the Art in Error Analysis, James H. Wilkinson

Slides from the talks are collected in an informal proceedings, available here in PDF form and on Google Books.

Jack Dongarra has provided me with contact prints of photos taken at the symposium. I have scanned some of these and they are shown below.

One of the photos shows Jim Wilkinson with an Apple Mac that was presented to him. He used it to type up several of his later papers. Sadly, Jim died just over two years after the symposium.

Numerical Algorithms for High-Performance Computational Science: Highlights of the Meeting

wordle_nahpcs.png
Word cloud from abstracts.

The Royal Society discussion meeting Numerical Algorithms for High-Performance Computational Science, which I organized with Jack Dongarra and Laura Grigori, was held in London, April 8-9, 2019. The meeting had 16 invited speakers and 23 contributed posters, with a poster blitz preceding the lively poster session.

The meeting brought together around 150 people who develop and implement numerical algorithms and software libraries—numerical analysts, computer scientists, and high-performance computing researchers—with those who use them in some of today’s most challenging applications. The aim was to discuss the challenges brought about by evolving computer architectures and the characteristics and requirements of applications, both to produce greater awareness of algorithms that are currently available or under development and to steer research on future algorithms.

Several key themes emerged across multiple talks, all in the context of today’s high-performance computing landscape, in which processor clock speeds have stagnated (with the end of Moore’s law) and exascale machines are just two or three years away.

  • An important way of accelerating computations is through the use of low precision floating-point arithmetic—in particular by exploiting a hierarchy of precisions.
  • We must exploit low rank matrix structure where it exists, for example in hierarchical (H-matrix) form, combining it with randomized approximations (see the sketch after this list).
  • Minimizing data movement (communication) is crucial, because of its increasing costs relative to the costs of floating-point arithmetic.
  • Co-design (the collaborative and concurrent development of hardware, software, and numerical algorithms, with knowledge of applications) is increasingly important for numerical computing.
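
As a flavour of the second theme above, here is a minimal sketch (my illustration, not code from any of the talks) of the basic randomized range finder that underlies many randomized low rank approximation methods: sample the range of A with a Gaussian test matrix, orthonormalize the sample, and project. The sizes, target rank, and oversampling parameter are illustrative.

```python
# A sketch of a randomized low rank approximation (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
m, n, r, p = 500, 400, 10, 5  # sizes, target rank, oversampling (illustrative)

# Build a test matrix with rapidly decaying singular values plus small noise.
U, _ = np.linalg.qr(rng.standard_normal((m, r)))
V, _ = np.linalg.qr(rng.standard_normal((n, r)))
A = U @ np.diag(np.logspace(0, -6, r)) @ V.T + 1e-8 * rng.standard_normal((m, n))

# Randomized range finder: Y = A @ Omega captures the dominant range of A.
Omega = rng.standard_normal((n, r + p))
Q, _ = np.linalg.qr(A @ Omega)

# Rank-(r + p) approximation A ≈ Q (Q^T A); the error should be near the
# noise level because A is numerically low rank.
A_approx = Q @ (Q.T @ A)
print(np.linalg.norm(A - A_approx) / np.linalg.norm(A))
```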

Applications treated included deep learning, climate modeling, computational biology, and the Square Kilometre Array (SKA) radio telescope project. We learned that the SKA project needs faster FFTs in order to deal with the petabytes of data it will generate each year.

Papers from the meeting will be published in a future issue of Philosophical Transactions of the Royal Society A.

Here are the talks (in order of presentation), with links to the slides.

For a brief summary of the talks see George Constantinides’s insightful thoughts on the meeting.

Here are some photos from the meeting.

Advances in Numerical Linear Algebra Conference and James Hardy Wilkinson Centenary

Wilkinson_021.jpg

This year marks the 100th anniversary of the birth of James Hardy Wilkinson, FRS. Wilkinson developed the theory and practice of backward error analysis for floating-point computation, and developed, analyzed, and implemented in software many algorithms in numerical linear algebra. While much has changed since his untimely passing in 1986, we still rely on the analytic and algorithmic foundations laid by Wilkinson.

This micro website about Wilkinson, set up by Sven Hammarling and me, contains links to all kinds of information about Wilkinson, including audio and video recordings of him.

With Sven Hammarling and Françoise Tisseur, I am organizing a conference Advances in Numerical Linear Algebra: Celebrating the Centenary of the Birth of James H. Wilkinson at the University of Manchester, May 29-30, 2019. Among the 13 invited speakers are several who knew or worked with Wilkinson. As well as focusing on recent developments and future challenges for numerical linear algebra, the talks will include reminiscences about Wilkinson and discuss his legacy.

Contributed talks and posters are welcome (deadline April 1, 2019) and some funding is available to support the attendance of early career researchers and PhD students.

Bohemian Matrices in Manchester

Bohemian matrices are families of matrices in which the entries are drawn from a fixed discrete set of small integers (or some other discrete set). The term is a contraction of BOunded HEight Matrix of Integers and was coined by Rob Corless and Steven Thornton of the University of Western Ontario. Such matrices arise in many situations:

  • adjacency matrices of graphs have entries from \{0, 1\};
  • Bernoulli matrices, which occur in compressed sensing, have entries from \{-1,1\};
  • Hadamard matrices have entries from \{-1,1\} and orthogonal columns; and
  • matrices with elements from \{-1, 0, 1\} provide worst case growth factors for Gaussian elimination with partial pivoting and yield the most ill conditioned triangular matrices with elements bounded by 1.

Rob’s group have done extensive computations of eigenvalues and characteristic polynomials of Bohemian matrices, which have led to interesting observations and conjectures. Many beautiful visualizations are collected on the website http://www.bohemianmatrices.com as well as on the Bohemian Matrices Twitter feed.
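
To give a flavour of such computations, here is a minimal sketch (my illustration, not code from Rob’s group) that enumerates the whole Bohemian family of 3\times 3 matrices with entries from \{0, 1\} and counts the distinct characteristic polynomials, in the spirit of the database mentioned below.

```python
# Enumerate all 3x3 matrices with entries from {0, 1} (2^9 = 512 of them)
# and count the distinct characteristic polynomials.
import itertools
import numpy as np

n = 3
polys = set()
for vals in itertools.product([0, 1], repeat=n * n):
    A = np.array(vals, dtype=float).reshape(n, n)
    # np.poly(A) returns the characteristic polynomial coefficients of A.
    # They are exact integers here, so round to remove rounding error.
    polys.add(tuple(int(round(c)) for c in np.poly(A)))

print(f"{2**(n*n)} matrices, {len(polys)} distinct characteristic polynomials")
```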

DoublyCompanion_19x19_1_gallery@2x_1200px.jpg
A density plot in the complex plane of the eigenvalues of a sample of 10 million 19×19 doubly companion matrices. Image credit: Steven Thornton.

In June 2018, Rob and I organized a 3-day workshop Bohemian Matrices and Applications, bringing together 16 people with an interest in the subject from a variety of backgrounds. The introductory talks by Rob, Steven, and me were videoed (and are embedded below), and the slides from most of the talks are available on the conference website.

We scheduled plenty of time for discussion and working together. New collaborations were started, several open problems were solved and numerous new questions were posed.

The workshop has led to various strands of ongoing work. Steven has created the Characteristic Polynomial Database, which contains more than 10^9 polynomials from more than 10^{12} Bohemian matrices and has led to a number of conjectures concerning matches of properties to sequences at the On-Line Encyclopedia of Integer Sequences. Three recent outputs are

Sponsorship of the workshop by the Heilbronn Institute for Mathematical Research, the Royal Society and the School of Mathematics at the University of Manchester, as well as support from Canada for some of the Canadian participants, is gratefully acknowledged.

180621-1341-45_9805_1200px.jpg
Lalo Gonzalez-Vega (Universidad de Cantabria)

180620-1242-15_0003_1200px.jpg

180622-0930-54_9861_1200px.jpg
Eunice Chan (University of Western Ontario)



Conversations with Gil Strang

At JuliaCon 2018 in London, one of the keynote presentations was a conversation with Gil Strang led by Alan Edelman and Pontus Stenetorp. Gil is well known for his many books on linear algebra, applied mathematics and numerical analysis, as well as his research contributions.

Gil talked about his famous 18.06 linear algebra course at MIT, which in its YouTube form has had almost 3 million views. A number of people in the audience commented that they had learned linear algebra from Gil.

180810-0858-43_0655.jpg

Gil also talked about his book Linear Algebra and Learning from Data, due to be published at the end of 2018, which includes chapters on deep neural networks and back propagation. Many people will want to read Gil’s insightful slant on these topics (see also my SIAM News article The World’s Most Fundamental Matrix Equation).

As well as the video of Gil’s interview embedded below, two written interviews will be of interest to Gil’s fans:

Tricks and Tips in Numerical Computing

In a keynote talk at JuliaCon 2018 I described a variety of tricks, tips and techniques that I’ve found useful in my work in numerical computing.

The first part of the talk was about two aspects of complex arithmetic: the complex step method for approximating derivatives, and understanding multivalued functions with the help of the unwinding number. Then I talked about the role of the associativity of matrix multiplication, which turns out to be the key property that makes the Sherman-Morrison formula work (this formula gives the inverse of a matrix after a rank 1 update). I pointed out the role of associativity in backpropagation in neural networks and deep learning.
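
As an example of the first of these, here is a minimal sketch (my example, not taken from the talk) of the complex step method: for an analytic real function f, f'(x) \approx \mathrm{Im}(f(x + \mathrm{i}h))/h, and because there is no subtraction of nearly equal quantities the step h can be taken far smaller than in a finite difference.

```python
# The complex step approximation f'(x) ≈ Im(f(x + ih)) / h.
import numpy as np

def complex_step_derivative(f, x, h=1e-20):
    # No subtractive cancellation, so a tiny h is safe.
    return np.imag(f(x + 1j * h)) / h

f = lambda x: np.exp(x) / (np.cos(x)**3 + np.sin(x)**3)  # a standard test function
x = np.pi / 4

cs = complex_step_derivative(f, x)
fd = (f(x + 1e-8) - f(x)) / 1e-8  # forward difference for comparison
print(cs, fd)  # the complex step result should be accurate to about full double precision
```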

After giving an example of using randomization to avoid pathological cases, I discussed why low precision (specifically half precision) arithmetic is of growing interest and identified some issues that need to be overcome in order to make the best use of it.
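
To give one example of those issues, the following minimal sketch (my example) shows how the narrow range of IEEE half precision, whose largest finite number is 65504, causes a naive vector 2-norm computation to overflow, and how scaling avoids the problem.

```python
# Overflow in half precision: a naive 2-norm versus a scaled one.
import numpy as np

x = np.full(10, 300.0, dtype=np.float16)

# 300^2 = 90000 exceeds the float16 maximum (65504), so the squares
# overflow to inf and the naive norm is inf.
naive = np.sqrt(np.sum(x * x))

# Scale by the largest entry first to keep intermediates in range.
m = np.max(np.abs(x))
scaled = m * np.sqrt(np.sum((x / m) ** 2))

print(naive, scaled)  # inf versus about 948.5 (true norm is 300*sqrt(10) ≈ 948.7)
```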

Almost every talk at JuliaCon was livecast on YouTube, and these talks are available to watch on the Julia Language channel. The slides for my talk are available here.

Also at the conference, my PhD student Weijian Zhang spoke about the work he has been doing on evolving graphs in his PhD.

Lectures on Multiprecision Algorithms in Kácov

At the end of May, I was one of four lecturers at the ESSAM school on Mathematical Modelling, Numerical Analysis and Scientific Computing, held in Kácov, about an hour’s drive south-east of Prague in the Czech Republic.

The event was superbly organized by Josef Malek, Miroslav Rozlozník, Zdenek Strakos and Miroslav Tuma. This was a relaxed and friendly event, and the excellent weather enabled most meals to be taken on the terrace of the family-run Sporthotel Kácov in which we were staying.

180529-1443-05_8098.jpg
Françoise Tisseur lecturing on the nonlinear eigenvalue problem.

I gave three lectures of about one hour each on Multiprecision Algorithms. The slides are available from this link. Here is an abstract for the lectures:

Today’s computing environments offer multiple precisions of floating-point arithmetic, ranging from quarter precision (8 bits) and half precision (16 bits) to double precision (64 bits) and even quadruple precision (128 bits, available only in software), as well as arbitrary precision arithmetic (again in software). Exploiting the available precisions is essential in order to reduce the time to solution, minimize energy consumption, and (when necessary) solve ill-conditioned problems accurately.

In this course we will describe the precision landscape, explain how we can exploit different precisions in numerical linear algebra, and discuss how to analyze the accuracy and stability of multiprecision algorithms.

  • Lecture 1. IEEE standard arithmetic and availability in hardware and software. Motivation for low precision from applications, including machine learning. Exploiting reduced communication cost of low precision. Issues relating to rounding error analyses in low precision. Simulating low precision for testing purposes. Challenges of implementing algorithms in low precision.
  • Lecture 2. Basics of rounding error analysis, illustrated with summation. Why increasing precision is not a panacea. Software for high precision and its cost. Case study: the matrix logarithm in high precision.
  • Lecture 3. Solving linear systems (possibly very ill conditioned and/or sparse) using mixed precision: iterative refinement in three precisions (a minimal sketch follows this list). A hybrid direct-iterative method: GMRES-IR.
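
Here is a minimal sketch of the three-precision idea (my illustration, not the lecture code), with float32 standing in for the low factorization precision and float64 for the working precision. A real implementation would reuse the low precision LU factors rather than re-solving, and would ideally compute residuals in a precision higher than the working one.

```python
# Iterative refinement with the factorization done at low precision.
import numpy as np

rng = np.random.default_rng(0)
n = 100
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

A_low = A.astype(np.float32)  # "factorization precision"

# Initial solve at low precision, promoted to the working precision.
x = np.linalg.solve(A_low, b.astype(np.float32)).astype(np.float64)

for _ in range(5):
    r = b - A @ x  # residual at working precision (ideally higher)
    d = np.linalg.solve(A_low, r.astype(np.float32))  # low precision correction
    x = x + d.astype(np.float64)

print(np.linalg.norm(b - A @ x) / np.linalg.norm(b))  # small relative residual
```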

I gave an earlier version of these lectures in March 2018 at the EU Regional School held at the Aachen Institute for Advanced Study in Computational Engineering Science (AICES), Germany. This single two and a half hour lecture was recorded and can be viewed on YouTube. The slides are available here.

180601-1145-07_8145.jpg
Sporthotel Kácov, Czech Republic.

Conference in Honour of Walter Gautschi

Last week I had the pleasure of attending and speaking at the Conference on Scientific Computing and Approximation (March 30-31, 2018) at Purdue University, held in honour of Walter Gautschi (Professor Emeritus of Computer Science and Mathematics at Purdue University) on the occasion of his 90th birthday.

180330-2051-24_5956.jpg

The conference was expertly organized by Alex Pothen and Jie Shen. The attendees, numbering around 70, included many of Walter’s friends and colleagues.

The speakers made many references to Walter’s research contributions, particularly in the area of orthogonal polynomials. In my talk, Matrix Functions and their Sensitivity, I emphasized Walter’s work on conditioning of Vandermonde matrices.

A Vandermonde matrix V_n is an n\times n matrix depending on parameters x_1,x_2,\ldots,x_n that has jth column [1, x_j, \ldots, x_j^{n-1}]^T. It is nonsingular when the x_i are distinct. This is a notoriously ill conditioned class of matrices. Walter said that he first experienced the ill conditioning when he computed Gaussian quadrature formulas from moments of a weight function.
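
For example, for n = 3,

\displaystyle V_3 = \begin{bmatrix} 1 & 1 & 1 \\ x_1 & x_2 & x_3 \\ x_1^2 & x_2^2 & x_3^2 \end{bmatrix}.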

Walter has written numerous papers on Vandermonde matrices that give much insight into their conditioning. Here is a very brief selection of Walter’s results. For more, see my chapter Numerical Conditioning in Walter’s collected works.

In a 1962 paper he showed that

\displaystyle\|V_n^{-1}\|_{\infty} \le \max_i \prod_{j\ne i}\frac{ 1+|x_j| }{ |x_i-x_j| }.

In 1978 he obtained

\displaystyle\|V_n^{-1}\|_{\infty} \ge \max_i \prod_{j\ne i} \frac{ \max(1,|x_j|) }{ |x_i-x_j| },

which differs from the upper bound by at most a factor 2^{n-1}. A 1975 result is that for x_i equispaced on [0,1],

\displaystyle\kappa(V_n)_{\infty} \sim \frac{1}{\pi} e^{-\frac{\pi}{4}} (3.1)^n.

A 1988 paper returns to lower bounds, showing that for x_i \ge 0 and n\ge 2,

\displaystyle\kappa(V_n)_{\infty} > 2^{n-1}.

When some of the x_i coincide a confluent Vandermonde matrix can be defined, in which columns are “repeatedly differentiated”. Walter has obtained bounds for the confluent case, too.

These results quantify the extreme ill conditioning. I should note, though, that appropriate algorithms that exploit structure can nevertheless obtain accurate solutions to Vandermonde problems, as described in Chapter 22 of Accuracy and Stability of Numerical Algorithms.
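
As a quick numerical illustration (my code, not Walter’s), the following sketch computes \kappa(V_n)_{\infty} for points equispaced on [0,1] and compares it with the 1975 asymptotic estimate above.

```python
# Growth of the condition number of Vandermonde matrices with
# points equispaced on [0, 1], compared with the 1975 asymptotic.
import numpy as np

for n in [5, 10, 15, 20]:
    x = np.linspace(0, 1, n)
    # Column j of V_n is [1, x_j, ..., x_j^{n-1}]^T.
    V = np.vander(x, increasing=True).T
    kappa = np.linalg.cond(V, np.inf)
    estimate = (1 / np.pi) * np.exp(-np.pi / 4) * 3.1**n
    print(n, kappa, estimate)
```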

180330-0955-28_8866.jpg
Ron DeVore speaking on Optimal Data Assimilation

180329-1542-29_5725.jpg
Photo of Nick Higham’s talk by David Gleich.

Photo Highlights of 2017

Here are some of my favourite photos taken at events that I attended in 2017.

Atlanta (January)

This was the first time I have attended the Joint Mathematics Meetings, which were held in Atlanta, January 4-7, 2017. It was a huge conference with over 6000 attendees. A highlight for me was the launch of the third edition of MATLAB Guide on the SIAM booth, with the help of The MathWorks:

170106-1123-14-5528.jpg

Elizabeth Greenspan and Bruce Bailey looked after the SIAM stand:

170105-2056-12_5438.jpg

If you are interested in writing a book for SIAM, Elizabeth would love to hear from you!

The conference was held in the Marriott Marquis Hotel and the Hyatt Regency Hotel, both of which have impressive atriums. This photo was taken with a fish-eye lens, looking up into the Marriott Marquis Hotel’s atrium:

170104-2015-50-5388.jpg

(For more photos, see Fuji Fisheye Photography: XT-2 and Samyang 8mm).

Atlanta (March)

I was back in Atlanta for the SIAM Conference on Computational Science and Engineering, February 27-March 3, 2017. A highlight was a 70th birthday dinner celebration for Iain Duff, pictured here speaking at the Parallel Numerical Linear Algebra for Extreme Scale Systems minisymposium:

170228-1020-54-5783.jpg

Here is Sarah Knepper of Intel speaking in the Batched Linear Algebra on Multi/Many-Core Architectures minisymposium (a report on which is given in the blog post by Sam Relton):

170227-1712-57_5700.jpg

Torrential rain one night forced me to take shelter on the way back from dinner, allowing a moment to capture this image of Peachtree Street.

170301-1945-22_6376.jpg

Washington (April)

The National Math Festival was held at the Walter E. Washington Convention Center in Washington DC on April 22, 2017:

170422-1537-57_6202.jpg

I caught the March for Science on the same day:

170422-1917-01_6325.jpg

170422-1944-58_6455.jpg

Pittsburgh (July)

The SIAM Annual Meeting, held July 10-14, 2017 at the David Lawrence Convention Center in Pittsburgh, was very busy for me as SIAM president. Here is conference co-chair Des Higham speaking in the minisymposium “Advances in Mathematics of Large-Scale and Higher-Order Networks”:

170713-1033-16_7413.jpg

Emily Shuckburgh gave the I.E. Block Community Lecture “From Flatland to Our Land: A Mathematician’s Journey through Our Changing Planet”:

170712-1821-50_7337.jpg

The Princeton Companion to Applied Mathematics was on display on the Princeton University Press stand:

170710-1728-15_7289.jpg

Here are Des and I on the Roberto Clemente bridge over the Allegheny River, the evening before the conference started:

170708-2121-33_7255.jpg

SIAM Annual Meeting 2017 Highlights

170712-1821-50-7337.jpg
Emily Shuckburgh delivering the I. E. Block Community Lecture “From Flatland to Our Land: A Mathematician’s Journey through Our Changing Planet”. A recording of her lecture is available at https://www.pathlms.com/siam/courses/4988/sections/7425.

It’s a couple of weeks since the 2017 SIAM Annual Meeting, which I previewed in an earlier post. The meeting was held at the David Lawrence Convention Center in Pittsburgh and was co-chaired by Des Higham (University of Strathclyde) and Jennifer Mueller (Colorado State University).

The approximately 1150 attendees enjoyed five days packed from morning to evening with lectures, panel sessions, careers events, professional development sessions, and other activities.

You can catch up with what went on at the meeting in several ways.

Other Links

Here is a Storify that captures many of the Tweets from the meeting.

Here is a blog post about the meeting by Scott Morgan, the president of the SIAM Student Chapter.

SIAM Presents contains recordings of selected talks delivered in the main ballroom. These include all the invited lectures and prize lectures.

Join us in Portland, Oregon, July 9-13 next year, for the 2018 SIAM Annual Meeting!