Fourth Edition (2013) of Golub and Van Loan’s Matrix Computations

Back in 1980 there were not many up-to-date books on numerical linear algebra. Stewart’s Introduction to Matrix Computations (1973) was a popular textbook, and was the text for the final-year undergraduate course that I took on the subject. Parlett’s The Symmetric Eigenvalue Problem (1980) was a graduate-level treatment of the symmetric eigenvalue problem. And Wilkinson’s The Algebraic Eigenvalue Problem (1965) was still the bible of numerical linear algebra, albeit already somewhat out of date due to the fast-moving research developments since it was published.

While an MSc student, I heard about the impending publication of a new book on matrix computations by Golub and Van Loan. I pre-ordered a copy and in spring 1983 received one of the first copies in the UK. The book was a revelation. It presented a completely fresh and up-to-date perspective on the subject. Some of the most exciting features were

  • extensive use of pseudocode, with MATLAB-style indexing notation, to describe algorithms,
  • the use of flops to measure computational cost,
  • emphasis on the use of the SVD,
  • modern presentation of rounding error analysis, with rounding error bounds given for each algorithm,
  • systematic treatment of the conjugate gradient and Lanczos methods,
  • coverage of topics not found in earlier books, such as condition estimation, generalized SVD, and total least squares,
  • very lively writing style.

I studied the book in great detail and learned a huge amount from it.

Covers of first to fourth editions.

A second edition was published in 1989. It was written while Charlie Van Loan was in the UK on sabbatical and I was spending a year at Cornell (Charlie’s home university). I had the opportunity to read and comment on draft chapters. The second edition maintained all the material from the first and added new chapters on matrix multiplication (and the relevant machine architecture considerations) and parallel algorithms, and it was typeset in LaTeX for the first time. The term flop was redefined so that a+b*c represents two flops (as it does today) instead of one as in the first edition. A number of other changes were introduced to address a criticism in some reviews of the first edition that the book was rather terse and fast-paced for use as a course textbook.
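To illustrate the change of convention (my own sketch, not an example from the book): in a naive matrix–vector product each inner-loop step performs the multiply-add a + b*c, so the operation costs 2n² flops under the second-edition definition but only n² under the first-edition one.

```python
import numpy as np

def matvec_with_count(A, x):
    """Naive matrix-vector product, counting flops under the
    second-edition convention: a multiply and an add each cost one flop."""
    n = A.shape[0]
    y = np.zeros(n)
    flops = 0
    for i in range(n):
        for j in range(n):
            y[i] += A[i, j] * x[j]  # one multiply + one add: 2 flops
            flops += 2
    return y, flops

A = np.ones((4, 4))
x = np.ones(4)
y, flops = matvec_with_count(A, x)
print(flops)  # 32, i.e. 2n^2; the first-edition convention would count n^2 = 16
```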

A third edition followed in 1996, and after a 17-year gap the fourth edition has just been published. Work on this edition began following the untimely death of Gene Golub in 2007. Some statistics illustrate the development of the book:

Edition   Year   Number of pages   Pages of master bibliography
First     1983   472               25
Second    1989   642               34
Third     1996   694               50
Fourth    2013   756               65†

† The master bibliography of the fourth edition is not printed in the book but is downloadable from the book’s web page.

What is Different About the Fourth Edition?

The new edition is physically larger than its predecessors, with a text width of 13 cm versus 11.5 cm in the previous edition, so the content has increased by more than the page count suggests. Moreover, the paper is of extremely high quality, which makes the book bigger and heavier than you might expect. I bought the hardback, because I know from experience that my softback copies of all three previous editions did not stand up well to heavy use. The image shows the third and fourth editions along with Horn and Johnson’s Matrix Analysis (second edition, 2013) and my Accuracy and Stability of Numerical Algorithms (second edition, 2002).


A number of new topics are included, of which I would pick out

  • fast transforms
  • Hamiltonian and product eigenvalue problems
  • large-scale SVD
  • multigrid
  • tensor computations

I like the statement in the preface that “References that are historically important have been retained because old ideas have a way of resurrecting themselves.” This is of course particularly true as regards methods suitable for high-performance computing.

Lists of relevant LAPACK codes at the start of each chapter have been removed, as have many of the small, illustrative numerical examples, which are replaced by MATLAB codes to be made available on the book’s web page.

The fourth edition remains the best general reference on matrix computations and a must-have for any serious researcher in the field. A big difference from 1983, when the first edition appeared, is that now a separate research monograph is available covering almost every topic in the book (and due reference is made to 28 such “Global References”). But Matrix Computations brings together and unifies a wide variety of topics in one place.

2013 has been a good year for books on matrices and approximation, with the publication of a second edition of Horn and Johnson’s Matrix Analysis, Trefethen’s Approximation Theory and Approximation Practice, and now this very welcome fourth edition of Golub and Van Loan. It is available from the usual sources as well as from SIAM. Consider the Kindle edition to save your back. You can still have it signed!


Workshop on Matrix Functions and Matrix Equations

Last month we (Stefan Guettel, Nick Higham and Lijing Lin) organized a 2.5-day workshop, Advances in Matrix Functions and Matrix Equations. We had 57 attendees from around the world (see group photo): UK (19), Italy (7), USA (7), Germany (6), Canada (2), France (2), Portugal (2), South Africa (2), Saudi Arabia (2), Austria (1), Belgium (1), India (1), Ireland (1), Poland (1), Russia (1), Sweden (1), Switzerland (1).

We last organized a workshop on matrix functions in Manchester in 2008 (MIMS New Directions Workshop Functions of Matrices). The field has advanced significantly since then. Some emerging themes of this year’s workshop were as follows.

Krylov methods: Several speakers presented new results on this class of methods for the approximation of large-scale matrix functions, including a convergence analysis by Grimm of the extended Krylov subspace method taking into account smoothness properties of the starting vector, black-box parameter selection for the rational Krylov approximation of Markov matrix functions by Guettel and Knizhnerman, and an adaptive tangential interpolation strategy for MIMO model order reduction by Simoncini and Druskin.
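The polynomial Krylov approximation underlying this line of work can be sketched in a few lines (a textbook Arnoldi sketch of my own, not code from any of the talks): f(A)b is approximated by β V_m f(H_m) e_1, where A V_m ≈ V_m H_m is an Arnoldi decomposition of the Krylov subspace span{b, Ab, …, A^{m-1}b} and β = ||b||_2.

```python
import numpy as np
from scipy.linalg import expm

def arnoldi_fun(A, b, m, f=expm):
    """Approximate f(A) b from an m-dimensional Krylov subspace via the
    Arnoldi process: f(A) b ~= ||b|| * V_m @ f(H_m) @ e_1."""
    n = b.size
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    V[:, 0] = b / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):          # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:         # breakdown: invariant subspace found
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m)
    e1[0] = 1.0
    return beta * V[:, :m] @ (f(H[:m, :m]) @ e1)
```

For small m this needs only matrix–vector products with A, which is what makes the approach attractive for large-scale f(A)b problems; the rational and extended Krylov methods discussed at the workshop replace the polynomial subspace with richer ones.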

Matrix exponential: Research continues to focus on this, the most important of all matrix functions (the inverse is excluded as being too special). We were delighted that Charlie Van Loan opened the workshop with a talk “What Isn’t There To Learn from the Matrix Exponential?”. Charlie wrote some of the key early papers on exp(A). Indeed his work on exp(A) began when he was a postdoc at Manchester in the early 1970s, and his 1975 Manchester technical report A Study of the Matrix Exponential contains ideas that later appeared in his papers and his book (with Golub) Matrix Computations. In particular, it makes the case that “anything that the Jordan decomposition can do, the Schur decomposition can do better”, and is still worth reading.

Exotic matrix functions: Two talks focused on newer, more “exotic” matrix functions and had links to Rob Corless, who was in the audience. Bruno Iannazzo discussed how to compute the Lambert W function of a matrix, which is any solution of the matrix equation X e^X = A. The scalar Lambert W function was named and popularized in a 1996 paper by Corless, Gonnet, Hare, Jeffrey and Knuth, On the Lambert W Function; it has many applications, including in delay differential equations. Bruno finished with a striking photo of the equation written in sand. Mary Aprahamian presented a new matrix function called the matrix unwinding function, defined as U(A) = (A - log e^A)/(2πi), which arises from the scalar unwinding number introduced by Corless, Hare and Jeffrey in 1996. She showed that it is useful for obtaining correct identities involving multivalued functions at matrix arguments, as well as for argument reduction in evaluating the matrix exponential.
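The unwinding function can be evaluated naively straight from its definition using SciPy’s expm and logm (an illustrative sketch of my own, not the algorithm from the talk):

```python
import numpy as np
from scipy.linalg import expm, logm

def unwinding(A):
    """Matrix unwinding function U(A) = (A - log(e^A)) / (2*pi*i),
    evaluated naively from the definition via expm and the principal logm."""
    A = np.asarray(A, dtype=complex)
    return (A - logm(expm(A))) / (2j * np.pi)

# For a scalar z with Im z in (-pi, pi], U(z) is zero; adding 2*pi to the
# imaginary part increments it by one.
print(unwinding([[1.0 + 0.5j]]))                 # ~ [[0]]
print(unwinding([[1.0 + (0.5 + 2 * np.pi) * 1j]]))  # ~ [[1]]
```

Since logm returns the principal logarithm, whose eigenvalues have imaginary parts in (-π, π], U(A) measures how far the eigenvalues of A stray from that strip, which is what makes it useful for argument reduction.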

A special afternoon session celebrated the 70th birthday of Krystyna Zietak, who has made many contributions to numerical linear algebra and approximation theory. Krystyna gave the opening talk in which she described some highlights of her international travels and of hosting visitors in Wroclaw, well illustrated by photos.

Happy birthday Krystyna!
Following the session we had a reception in the Living Worlds gallery of the Manchester Museum, followed by a dinner in the Fossil gallery, with Stan the Tyrannosaurus Rex looking over us.

Dinner in the Fossil gallery

Financial support for the workshop came from the European Research Council and book displays were kindly provided by Cambridge University Press, Oxford University Press, Princeton University Press and SIAM.

Most of the talks are available in PDF format from the workshop programme page.

A gallery of photos from the workshop has been produced, combining the efforts of several photographers.