At JuliaCon 2018 in London, one of the keynote presentations was a conversation with Gil Strang led by Alan Edelman and Pontus Stenetorp. Gil is well known for his many books on linear algebra, applied mathematics and numerical analysis, as well as his research contributions.

Gil talked about his famous 18.06 linear algebra course at MIT, which in its YouTube form has had almost 3 million views. A number of people in the audience commented that they had learned linear algebra from Gil.

Gil also talked about his book Linear Algebra and Learning from Data, due to be published at the end of 2018, which includes chapters on deep neural networks and back propagation. Many people will want to read Gil’s insightful slant on these topics (see also my SIAM News article The World’s Most Fundamental Matrix Equation).

As well as the video of Gil’s interview embedded below, two written interviews will be of interest to Gil’s fans:

In an earlier post I provided links to videos of PCAM authors giving talks related to the topics of their PCAM articles. To add to those, here are the four PCAM author talks from the SAMSI workshop.

I have also included the talk by Margaret Wright, because it provides insight into a number of important topics covered in PCAM in a very lucid way.

John Burns, Parameter Identification for Dynamical Systems with Structured Uncertainty (author of PCAM article Optimal Sensor Location in the Control of Energy-Efficient Buildings)

Jack Dongarra, The Road to Exascale and Legacy Software for Dense Linear Algebra (author of PCAM article High-Performance Computing)

Yonina Eldar, Phase Retrieval and Analog to Digital Compression (author of PCAM article Compressed Sensing)

Stephen Wright, Randomness in Coordinate Descent (author of PCAM article Continuous Optimization (Nonlinear and Linear Programming))

Margaret Wright, Old, New, Borrowed, and Blue in the Marriage of Statistics and Optimization

Charlie Van Loan, Joseph C. Ford Professor of Engineering in the Department of Computer Science at Cornell University, retires in summer 2016.

Charlie has been a huge inspiration to me and many others, not least through his book Matrix Computations, with Gene Golub, now in its fourth edition. I wrote about the book on the occasion of the publication of the fourth edition (2013) in this previous post.

Following his PhD at the University of Michigan, Charlie visited the Department of Mathematics at the University of Manchester in 1974–1975 as a Science Research Council Research Fellow. He wrote the department’s first Numerical Analysis Report as well as three more of the first ten reports, as explained in this post.

A 55-minute video interview with Charlie by his colleague Kavita Bala, recorded in 2015, is available at the Cornell University eCommons. In it, Charlie talks about his PhD, with Cleve Moler as advisor, life as a young Cornell faculty member, the “GVL” book, computer science education, and many other things.

A two-part minisymposium is being held in Charlie’s honor at the SIAM Annual Meeting in Boston, July 11-14, 2016, organized by David Bindel (Cornell University) and Ilse Ipsen (North Carolina State University). I will be speaking in the second part about Charlie’s work on the matrix exponential. The details are below. If you will be at the meeting come and join us. I hope to provide links to the slides after the event.

SIAM Annual Meeting 2016. Numerical Linear and Multilinear Algebra: Celebrating Charlie Van Loan. Wednesday, July 13

Part I: MS73: 10:30 AM – 12:30 PM. BCEC Room 254B. Abstracts

10:30-10:55 Parallel Tucker-Based Compression for Regular Grid Data, Tamara G. Kolda, Sandia National Laboratories, USA

11:00-11:25 Cancer Diagnostics and Prognostics from Comparative Spectral Decompositions of Patient-Matched Genomic Profiles, Orly Alter, University of Utah, USA

11:30-11:55 Exploiting Structure in the Simulation of Super Carbon Nanotubes, Christian H. Bischof, Technische Universität Darmstadt, Germany

12:00-12:25 A Revisit to the GEMM-Based Level 3 BLAS and Its Impact on High Performance Matrix Computations, Bo T. Kågström, Umeå University, Sweden

Part II: MS89: 4:30 PM – 6:30 PM

4:30-4:55 Nineteen Dubious Ways to Compute the Zeros of a Polynomial, Cleve Moler, The MathWorks, Inc., USA

5:00-5:25 The Efficient Computation of Dense Derivative Matrices in MATLAB Using ADMAT and Why Sparse Linear Solvers Can Help, Thomas F. Coleman, University of Waterloo, Canada

5:30-5:55 On Rank-One Perturbations of a Rotation, Robert Schreiber, Hewlett-Packard Laboratories, USA

Jack Williams passed away on November 13th, 2015, at the age of 72.

Jack obtained his PhD from the University of Oxford Computing Laboratory in 1968 and spent two years as a Lecturer in Mathematics at the University of Western Australia in Perth. He was appointed Lecturer in Numerical Analysis at the University of Manchester in 1971.

He was a member of the Numerical Analysis Group (along with Christopher Baker, Ian Gladwell, Len Freeman, George Hall, Will McLewin, and Joan Walsh) that, together with numerical analysis colleagues at UMIST, took the subject forward at Manchester from the 1970s onwards.

Jack’s main research area was approximation theory, focusing particularly on Chebyshev approximation of real and complex functions. He also worked on stiff ordinary differential equations (ODEs). His early work on Chebyshev approximation in the complex plane by polynomials and rationals was particularly influential and is among his most cited work. Example contributions are

His later work on discrete Chebyshev approximation was of particular interest to me as it involved linear systems with Chebyshev-Vandermonde coefficient matrices, which I, and a number of other people, worked on a few years later:
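To illustrate what such a matrix looks like (an example of my own construction, not taken from Jack's papers): a Chebyshev–Vandermonde matrix has (i, j) entry T_j(x_i), where T_j is the degree-j Chebyshev polynomial, and solving a linear system with it yields the coefficients of a Chebyshev interpolant to given function values.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Chebyshev-Vandermonde matrix: V[i, j] = T_j(x[i]).
x = np.cos(np.pi * np.arange(5) / 4)   # Chebyshev extreme points in [-1, 1]
V = C.chebvander(x, 4)                 # 5 x 5 matrix, degrees 0..4

# Solve V a = f for the Chebyshev coefficients interpolating f(x) = e^x.
f = np.exp(x)
a = np.linalg.solve(V, f)
```

The interest in such systems lies in exploiting their structure to solve them faster, and more accurately, than by general-purpose Gaussian elimination.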

On the differential equations side, Jack wrote the opening chapter “Introduction to discrete variable methods” of the proceedings of a summer school organized jointly by the University of Liverpool and the University of Manchester in 1975 and published in G. Hall and J. M. Watt, eds, Modern Numerical Methods for Ordinary Differential Equations, Oxford University Press, 1976. This book’s timely account of the state of the art, covering stiff and nonstiff problems, boundary value problems, delay-differential equations, and integral equations, was very influential, as indicated by its 549 citations on Google Scholar. Jack contributed articles on ODEs and PDEs to three later Liverpool–Manchester volumes (1979, 1981, 1986).

Jack’s interests in approximation theory and differential equations were combined in his later work on parameter estimation in ODEs, where a theory of Chebyshev approximation applied to solutions of parameter-dependent ODEs was established, as exemplified by

Jack spent a sabbatical year in the Department of Computer Science at the University of Toronto, 1976–1977, at the invitation of Professor Tom Hull. Over a number of years several visits between Manchester and Toronto were made in both directions by numerical analysts in the two departments.

It’s a fact of academic life that seminars can be boring and even impenetrable. Jack could always be relied on to ask insightful questions, whatever the topic, thereby improving the experience of everyone in the room.

Jack was an excellent lecturer, who taught at all levels from first year undergraduate through to Masters courses. He was confident, polished, and entertaining, and always took care to emphasize practicalities along with the theory. He had the charisma—and the loud voice!—to keep the attention of any audience, no matter how large it might be.

He studied Spanish at the Instituto Cervantes in Manchester, gaining an A-level in 1989 and a Diploma Básico de Español como Lengua Extranjera from the Spanish Ministerio de Educación y Ciencia in 1992. He subsequently set up a four-year degree in Mathematics with Spanish, linking Manchester with Universidad Complutense de Madrid.

Jack was promoted to Senior Lecturer in 1996 and took early retirement in 2000. He continued teaching in the department right up until the end of the 2014/2015 academic year.

I benefited greatly from Jack’s advice and support both as a postgraduate student and when I began as a lecturer. My office was next to his, and from time to time I would hear strains of classical guitar, which he studied seriously and sometimes practiced during the day. For many years I shared pots of tea with him in the Senior Common Room at the refectory, where a group of mathematics colleagues met for lunchtime discussions.

Jack was gregarious, ever cheerful, and a good friend to many of his colleagues. He will be sadly missed.

I have collected a set of links to videos (or, in some cases, audio captures with slides) of authors speaking on or around the topics of their Companion articles. These should give readers added insight into the topics and their authors.

At the time of posting all links were valid, but links have a habit of changing or disappearing. Please let me know of any new links that can be added to this list or existing ones that need changing.

The Princeton Companion to Applied Mathematics has a 23-page Part I article “History of Applied Mathematics”, but apart from that it does not contain any articles with a historical or biographical emphasis. In designing the book we felt that the articles in Part II, “Equations, Laws and Functions of Applied Mathematics”, would provide a link into the history of applied mathematics through the various equations, laws, and functions included, most of which are eponymous.

The index was produced by a professional indexer, who made a judgement on which of the many names in the book had significant enough mentions to index. The phrase “Newton’s method” would not generate an index entry for “Newton”, but a phrase describing something that Newton did might.

The index revealed some interesting features. First, there are many entries for famous mathematicians and scientists: 76 in total, ranging from Niels Henrik Abel to Thomas Young. This means that even though there are no biographical articles, authors have included plenty of historical and biographical snippets. Second, many of the mathematicians might equally well have been mentioned in a book on pure mathematics (Halmos, Poincaré, Smale, Weil), which indicates the blurred boundary between pure and applied mathematics.

A third feature of the index is that the number of locators for the mathematicians and scientists that it contains varies greatly, from 1 to 20. We can use this to produce a highly non-scientific ranking. Here is a Wordle, in which the font size is proportional to the number of times that each name occurs.

John von Neumann (1903–1957) emerges as The Companion’s “most mentioned” applied mathematician. Indeed von Neumann was a hugely influential mathematician who contributed to many fields, as his index entry shows:

von Neumann, John: applied mathematics and, 56–59, 73; computational science and, 336–37, 350; economics and, 71, 644, 650, 869; error analysis and, 77; foams and, 740; Monte Carlo method and, 57; random number generation and, 762; shock waves and, 720; spectral theory and, 239–40, 426

von Neumann’s work has strong connections with my own research interests. With Herman Goldstine he published an important rounding error analysis of Gaussian elimination for inverting a symmetric positive definite matrix. He also introduced the alternating projections method that I have used to solve the nearest correlation matrix problem. And he derived important results on unitarily invariant matrix norms and singular value inequalities.
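A minimal sketch of how the alternating projections idea applies to the nearest correlation matrix problem (this is the Dykstra-corrected variant; an illustrative sketch, not a robust implementation): alternately project onto the positive semidefinite cone and the set of symmetric matrices with unit diagonal.

```python
import numpy as np

def nearest_correlation(A, tol=1e-8, max_iter=1000):
    """Nearest correlation matrix to symmetric A by alternating
    projections with Dykstra's correction (illustrative sketch)."""
    Y = np.asarray(A, dtype=float).copy()
    dS = np.zeros_like(Y)                # Dykstra correction term
    for _ in range(max_iter):
        R = Y - dS
        # Project onto the positive semidefinite cone.
        w, V = np.linalg.eigh((R + R.T) / 2)
        X = (V * np.maximum(w, 0)) @ V.T
        dS = X - R
        # Project onto the symmetric matrices with unit diagonal.
        Y_new = X.copy()
        np.fill_diagonal(Y_new, 1.0)
        if np.linalg.norm(Y_new - Y) <= tol * np.linalg.norm(Y_new):
            return Y_new
        Y = Y_new
    return Y
```

The correction term is needed because the positive semidefinite cone is not an affine set; plain alternating projections would converge to a point in the intersection, but not necessarily the nearest one.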

More about von Neumann can be found in the biographies

A book that inspired me early in my career is Numerical Methods That Work by Forman S. Acton, published in 1970 by Harper and Row. Acton, a professor in the electrical engineering department at Princeton University, had a deep understanding of numerical computation and the book captures his many years of experience of practical problem solving using a combination of hand computations and early computers.

Although written in the 1960s, Acton’s book is more about the 1950s world of computation; it makes only brief mention of the QR algorithm for eigenvalues and does not cover the singular value decomposition or variable step size ODE solvers. Moreover, the author has an aversion to library routines and to rigorous error bounds. Acton states that the students who have attended his numerical methods course have mostly “been Engineers and Scientists. (Mathematicians at Princeton are proudly Pure while most Computer Scientists find an obligatory decimal point to be slightly demeaning.)”. What, then, is special about this book from an applied mathematics point of view?

The book promotes timeless principles that are taught less and less nowadays. A general theme is to analyze a problem and exploit its structure before applying the simplest suitable numerical method. One example that has stuck with me is the idea of trying to treat a given equation as a perturbation of an easier equation. For example, a quadratic equation whose quadratic term has a small coefficient can be thought of as a small perturbation of a linear equation. Then simple fixed point iteration can be used to solve the quadratic, with the root of the linear equation as a (good) starting value.
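As a sketch of the idea (the particular equation here is my own illustrative choice, not necessarily Acton's): take eps*x**2 + x - 1 = 0 with small eps, rewrite it as x = 1 - eps*x**2, and iterate starting from x = 1, the root of the unperturbed linear equation x - 1 = 0.

```python
# Fixed point iteration for a quadratic with a small leading coefficient,
# treated as a perturbation of a linear equation (illustrative example).
eps = 0.01

x = 1.0                      # root of the unperturbed equation x - 1 = 0
for _ in range(20):
    x = 1.0 - eps * x * x    # rearranged form of eps*x**2 + x - 1 = 0

print(x)                     # converges to the root near 1
```

Because the iteration function has a small derivative near the root, convergence is rapid, and no quadratic formula or general root-finder is needed.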

The book is particularly strong on estimation or evaluation of integrals, dealing with singularities in functions, solving scalar nonlinear equations, exploiting asymptotic series, and avoiding instabilities. Several of these issues arise in the “railroad rail problem” presented at the start of the book, which every serious user of numerical methods should have a go at solving.

The pièce de résistance of the book is undoubtedly the 13-page “Interlude: What Not to Compute”. Described as a “cathartic essay” by James Daniel in SIAM Review in 1971, this essay is as relevant as ever, though Acton’s professed dislike of recursive calculations seems dated now that most programming languages fully support recursion.

Contemporary reviewers all note the practical slant of the book. I particularly like H. F. Trotter’s comment that “this reviewer, for one, would find it easier to supply theoretical discussion to supplement this text than to supply the lively practicality that is not always present in other books on this subject” (American Scientist, 59 (4), 1971). As this comment indicates, not only is the book full of excellent advice, but it is written in a distinctive and highly entertaining style. Here are a few examples:

“Newton’s predilection for wandering off to East Limbo on encountering a minimum” (On Newton’s method for solving nonlinear equations.)

“Only a socially irresponsible man would ignore such computational savings.” (On methods whose operation counts grow at different rates with the problem size.)

“Many theorems are available for your pleasure.” (About positive definite matrices.)

The typesetting is excellent. One could hardly do better in LaTeX. Moreover the diagrams are a paragon of good, minimal design and would not be easy to equal with today’s drawing packages.

In the original book the title on the cover is embossed in silver and the word “Usually” has been inserted, unembossed, just before “Work”. In the 1990 reprint by the Mathematical Association of America the “Usually” is in faint grey text. The reprint includes an extra “Preface-90”, an “Afterthoughts” (the quote in the first paragraph is taken from the latter), and some extra problems. The reprint is available on Google Books.

In 1996 Acton, by then an emeritus professor of computer science, published a second book Real Computing Made Real: Preventing Errors in Scientific and Engineering Calculations with Princeton University Press. It contains similar material on a smaller range of topics, and didn’t have the same impact on me as Numerical Methods that Work. Indeed, being published 26 years later it feels much more out of date. Unlike the first book, this one does mention Gaussian quadrature, but only to advise against its use. This book is now out of print at PUP but is available from Dover and at Google Books.

I would like to share a couple of photos of Mike Powell, FRS, who passed away last month. The photos are from early and late in his career.

The first is one of a set of contact prints from a roll of Kodak black and white film that I came across in a collection of photos belonging to Gene Golub, which I was able to look through after Gene’s death in 2007. It is clear from the complete set of images that they were taken in or around the Courant Institute. The photos are undated, but Olof Widlund (who appears in some of them) tells me that the photos are most likely from 1965-1966.

The second image is from June 2013 and was taken at the banquet at the Biennial Conference in Numerical Analysis at the University of Strathclyde. Mike is flanked on his right by Iain Duff and on his left by Juan Meza. Mike was a regular attendee at this conference and starting at next month’s conference there will be a regular Fletcher-Powell Invited Lecture, honouring Roger Fletcher and Mike Powell’s contributions to numerical analysis and, particularly, nonlinear optimization.

I first met Hans in 1984 at the Gatlinburg meeting IX in Waterloo, Canada, at which time I was a PhD student. When I discussed my work on matrix square roots with him he recalled a 1966 paper by Culver “On the Existence and Uniqueness of the Real Logarithm of a Matrix”, of which I was unaware. By the time I returned to Manchester, after visiting Stanford for a few weeks, a copy of the paper was waiting for me, with an explanation of how the results of that paper could be adapted to analyze real square roots of a real matrix.

As chair of the 2002 Householder Symposium XV in Peebles, Scotland, I was delighted to invite Hans to deliver the after-dinner speech. (The Gatlinburg meeting was renamed the Householder Symposium in 1990, in honour of Alston Householder, who organized the early meetings.) Having Hans speak was particularly appropriate as he had studied at the nearby University of Edinburgh. I believe this was the last Householder Symposium that Hans attended.

I kept a copy of my introduction of Hans at the banquet. It seems appropriate to reproduce it here.

Ladies and gentlemen, our after-dinner speaker this evening is Hans Schneider, who is James Joseph Sylvester Emeritus Professor of Mathematics at the University of Wisconsin.

There’s an old definition that an intellectual is somebody who can hear the William Tell overture and not think of the Lone Ranger. I don’t think there are many people who can hear the term “linear algebra and its applications” and not think of Hans Schneider. After all, Hans has been Editor-in-Chief of the journal of that name since 1972, and developed it into a major mathematics journal. Hans was also instrumental in the foundation of the International Linear Algebra Society, of which he served as President from 1987 to 1996.

Some of you may be surprised to know that Hans has a strong connection with Scotland. He studied here and received his Ph.D. at Edinburgh University in 1952 under the famous Alexander Craig Aitken. I understand that Aitken gave him two words of advice: “Read Frobenius!”.

Well, it’s a real pleasure to introduce Hans and to ask him to speak on “The Debt Linear Algebra Owes Helmut Wielandt”.

The reference to Frobenius is apposite, given my original conversation with Hans since, as I have only recently discovered, Frobenius gave one of the earliest proofs of the existence of matrix square roots in 1896. That result, and much more about Frobenius’s wide range of contributions to mathematics is discussed in a 2013 book by Thomas Hawkins, The Mathematics of Frobenius in Context. A Journey Through 18th to 20th Century Mathematics (of which my copy has the rare error of having the odd pages on the left, rather than the right, of each two-page spread).

The photo below was taken during Hans’s after-dinner speech (more photos from the meeting are available in this gallery).

I’ve drawn on many sources for this post, but the most important is the 2006 biography by Karen Parshall, James Joseph Sylvester. Jewish Mathematician in a Victorian World. That title brings out two key points: that Sylvester was Jewish, which hindered his career, as we will see, and that he lived much of his life in Victorian England, when almost everything that today we take for granted when doing our research did not exist.

Thumbnail Sketch of The Man

Sylvester was born in London in 1814. He was short, mercurial, absent-minded, temperamental, fluent in French, German, Italian, Latin and Greek, and loved poetry but was not very good at it. He was a man of remarkable tenacity, as his career on both sides of the Atlantic shows.

Career Outline

I’ll give a brief outline of Sylvester’s unusual career, with its many ups and downs, then go on to discuss some specific events in his life.

First Spell in UK

Sylvester entered University College London (UCL) at the age of 14 as a student under De Morgan. He was withdrawn by his family after attempting to stab a fellow pupil.

He was a student at Cambridge, but was not able to take the degree because he was Jewish.

He held the chair of natural philosophy at UCL for three years.

First Sojourn in USA

Sylvester became Professor of Mathematics at the University of Virginia in 1841. He left after four months, following an altercation with an unruly student, because he felt that the faculty did not back him up in the subsequent inquiry.

After leaving Virginia he sought a position at Columbia University, with a recommendation from one of America’s leading scientists, Joseph Henry. In a wonderful irony … the selection committee informed him that his rejection was in no way connected with the fact that he was British, only the fact that he was Jewish.

Rest of Career (Age 29 Onwards)

Sylvester worked for the next decade as an actuary for the Equity and Law Life Assurance Society in London and trained for the Bar. He founded the Institute of Actuaries. This is when he met Cayley, who became his best friend. For this ten-year period he was doing mathematics in his spare time.

He was appointed Chair at the Royal Military Academy, Woolwich and spent 15 years there.

He was appointed Chair at the newly founded Johns Hopkins University, Baltimore, at the age of 61. He negotiated a salary of $5000 payable in gold, plus an annual housing allowance of $1000 also payable in gold.

His final position was as the Savilian Professor of Geometry at New College, Oxford in 1883, which he took up at the age of 69.

The Neologist

Sylvester introduced many terms that are still in use today, including matrix (1850), canonical form (1851), Hessian (1851), and Jacobian (1852). Another notable example is the term latent root, which Sylvester introduced in 1883, with two charming similes:

“It will be convenient to introduce here a notion (which plays a conspicuous part in my new theory of multiple algebra), namely that of the latent roots of a matrix—latent in a somewhat similar sense as vapour may be said to be latent in water or smoke in a tobacco-leaf.”

Sylvester did a great deal of editorial work. He was an editor of the Quarterly Journal of Mathematics for 23 years. He founded the American Journal of Mathematics in 1878 when he was at Johns Hopkins University. This was the first mathematics research journal in the USA, and indeed Sylvester set up the first mathematics research department in the country. As Editor-in-Chief he experienced some of the problems that subsequent journal editors have suffered from.

He had to work very hard to secure high quality contributions, e.g., from his friend Cayley and from students and colleagues at Johns Hopkins, in addition to his own papers.

He solicited Alfred Kempe’s proof of the four color theorem. After Sylvester had accepted the paper his managing editor, William Story, realized there was a gap in the reasoning, due to overlooked cases, and wrote a note to accompany the paper in which he unsuccessfully tried to patch the proof. This all happened while Sylvester was in England and he was very unhappy with the incident.

Author

Even though Sylvester was an editor himself, he was also the author from hell! He was notorious for what his biographer Parshall calls “an impatience with bibliographic research”—something that led him into disputes with other mathematicians.

MacFarlane states that

Sylvester never wrote a paper without foot-notes, appendices, supplements; and the alterations and corrections in his proofs were such that the printers found their task well-nigh impossible. … Sylvester read only what had an immediate bearing on his own researches, and did little, if any, work as a referee.

The title of one particular paper illustrates this point:

J. J. Sylvester, Explanation of the Coincidence of a Theorem Given by Mr
Sylvester in the December Number of This Journal, With One Stated by
Professor Donkin in the June Number of the Same, Philosophical Magazine
(Fourth Series) 1, 44-46, 1851

Secular Equation Paper

Out of Sylvester’s hundreds of papers, one in particular stands out as notable to me: “On the Equation to the Secular Inequalities in the Planetary Theory”, Philosophical Magazine 16, 267-269, 1883, for the following reasons.

The title has virtually nothing to do with the paper.

This is the paper in which Sylvester defines the term latent roots, but as if it were a totally new concept, even though the concept of matrix eigenvalue was already known.

He states a theorem about sums of products of the latent roots of a product of two matrices being expressible in terms of sums of products of minors of the two factors.

He gives the first general definition of function of a matrix (later refined by Buchheim).

He discusses the special case of pth roots.

The paper is short (3 pages), no proper introduction is given to these concepts, and no proofs are given. In short, a brilliant but infuriating paper!
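Sylvester's definition of a function of a matrix can be sketched, for the simple case of distinct eigenvalues, as f(A) = sum over i of f(lambda_i) times the product over j != i of (A - lambda_j I)/(lambda_i - lambda_j). The following code (my own illustration, not the paper's) implements that formula directly:

```python
import numpy as np

def sylvester_fun(f, A):
    """f(A) via Sylvester's interpolation formula; assumes the
    eigenvalues of A are distinct (an illustrative sketch only)."""
    lam = np.linalg.eigvals(A)
    n = A.shape[0]
    I = np.eye(n)
    F = np.zeros((n, n), dtype=complex)
    for i in range(n):
        P = np.eye(n, dtype=complex)     # projector onto lam[i]'s eigenspace
        for j in range(n):
            if j != i:
                P = P @ (A - lam[j] * I) / (lam[i] - lam[j])
        F += f(lam[i]) * P
    return F
```

Buchheim's later refinement extended the definition to repeated eigenvalues, where this simple interpolation formula breaks down.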

Baltimore Summer

In these days of ubiquitous air conditioning it is interesting to note one of the things that made it difficult for Sylvester to do research. Parshall writes, of Sylvester in Baltimore,

“He could not concentrate on his research on matrices in the debilitating summer heat and humidity”.

Teaching

Sylvester’s enthusiasm for matrices is illustrated by his attempt to teach the theory of substitutions out of a new book by Netto. Sylvester

“lectured about three times, following the text closely and stopping sharp at the end of the hour. Then he began to think about matrices again. `I must give one lecture a week on those,’ he said. He could not confine himself to the hour, nor to the one lecture a week. Two weeks were passed, and Netto was forgotten entirely and never mentioned again.” (Parshall, p. 271, quoting Ellery W. Davis).

Compare this with the following quote about E. T. Bell (famous for his book Men of Mathematics, 1937), from Constance Reid’s book about Bell:

Bell’s method of teaching was to read a sentence aloud and announce that he didn’t believe it. `By the time we students convinced him that it was true,’ concedes Highberg, `we pretty well understood it ourselves.’

Inaugural Lecture at Oxford, 12 December 1885

There are many ways in which we are more fortunate today than mathematicians of Sylvester’s time. But there were some advantages to those times. From his inaugural lecture, published as On the Method of Reciprocants as Containing an Exhaustive Theory of the Singularities of Curves (Nature, 1886):

It is now two years and seven days since a message by the Atlantic cable containing the single word “elected” reached me in Baltimore informing me that I had been appointed Savilian Professor of Geometry in Oxford, so that for three weeks I was in the unique position of filling the post and drawing the pay of Professor of Mathematics in each of two Universities:

Obstinacy

Emile Picard recounted how Sylvester, on a visit to Paris, asked him if in six weeks he could learn the theory of elliptic functions. Picard said yes, so Sylvester asked if a young geometer could be assigned to give him lessons several times per week. This began, but from the second lesson reciprocants and matrices started to compete with elliptic functions and in the ensuing several lessons Sylvester taught the young geometer about his latest research and they remained on that topic.

What Can We Learn from Sylvester’s Life?

If I had to draw two pieces of advice from Sylvester’s life story I would choose the following.

You are never too old to take on a major challenge (he took up the chair at Johns Hopkins University at the age of 61).

If you want to be remembered, define some new terms and have some theorems named after you!