In the January/February 2000 issue of Computing in Science and Engineering, Jack Dongarra and Francis Sullivan chose the “10 algorithms with the greatest influence on the development and practice of science and engineering in the 20th century” and presented a group of articles on them that they had commissioned and edited. (A SIAM News article by Barry Cipra gives a summary for anyone who does not have access to the original articles). This top ten list has attracted a lot of interest.
Sixteen years later, I thought it would be interesting to produce such a list in a different way and see how it compares with the original top ten. My unscientific, but well-defined, way of doing so is to determine which algorithms have the most page locators in the index of The Princeton Companion to Applied Mathematics (PCAM). This is a flawed measure for several reasons. First, the book focuses on applied mathematics, so some algorithms included in the original list may be outside its scope, though the book takes a broad view of the subject and includes many articles about applications and about topics on the interface with other areas. Second, the content is selective and the book does not attempt to cover all of applied mathematics. Third, the number of page locators is not necessarily a good measure of importance. However, the index was prepared by a professional indexer, so it should reflect the content of the book fairly objectively.
A problem facing anyone who compiles such a list is to define what is meant by “algorithm”. Where does one draw the line between an algorithm and a technique? For a simple example, is putting a rational function in partial fraction form an algorithm? In compiling the following list I have erred on the side of inclusion. This top ten list is in decreasing order of the number of page locators.
Newton and quasi-Newton methods
Matrix factorizations (LU, Cholesky, QR)
Singular value decomposition, QR and QZ algorithms
Note that JPEG (1992) and PageRank (1998) were youngsters in 2000, but all the other algorithms date back at least to the 1960s.
By comparison, the 2000 list is, in chronological order (no other ordering was given):
Metropolis algorithm for Monte Carlo
Simplex method for linear programming
Krylov subspace iteration methods
The decompositional approach to matrix computations
The Fortran optimizing compiler
QR algorithm for computing eigenvalues
Quicksort algorithm for sorting
Fast Fourier transform
Integer relation detection
Fast multipole method
The two lists agree in 7 of their entries. The differences are as follows.

In the PCAM list but not the 2000 list:

Newton and quasi-Newton methods
JPEG
PageRank

In the 2000 list but not the PCAM list:

The Fortran optimizing compiler
Quicksort algorithm for sorting
Integer relation detection
Fast multipole method

Of the entries in the second group, Fortran is in the index of PCAM and would have made the list, but so would C, MATLAB, etc., and I draw the line at including languages and compilers; the fast multipole method nearly made the PCAM table; and quicksort and integer relation detection both have one page locator in the PCAM index.
There is a remarkable agreement between the two lists! Dongarra and Sullivan say they knew that “whatever we came up with in the end, it would be controversial”. Their top ten has certainly stimulated some debate, but I don’t think it has been too controversial. This comparison suggests that Dongarra and Sullivan did a pretty good job, and one that has stood the test of time well.
The article describes the story behind the Princeton Companion to Applied Mathematics, published in October 2015, which I edited (along with associate editors Mark Dennis, Paul Glendinning, Paul Martin, Fadil Santosa and Jared Tanner).
Among the topics covered are
the motivation for the book and for publishing it in hard copy (as well as in e-book form),
the question “What is applied mathematics?”,
the challenge of producing 1000 pages of high quality typeset mathematical text,
the design of the book.
Complementary to the article is the YouTube video below, recorded and produced by George Miller, in which I talk about the project. It was filmed in and around the Alan Turing building at the University of Manchester. An interesting tidbit about George is that when he worked at Oxford University Press he set up and commissioned the Very Short Introductions series, of which Tim Gowers's Mathematics: A Very Short Introduction was one of the first volumes to be published.
I wondered if we had included these equations in The Princeton Companion to Applied Mathematics (PCAM), specifically in Part III: Equations, Laws, and Functions of Applied Mathematics. We had indeed included the ones most relevant to applied mathematics. Here are those equations, with links to the BBC articles.
The wave equation (which quotes PCAM author Ian Stewart). PCAM has a short article by Paul Martin of the same title (III.31), and the wave equation appears throughout the book.
Einstein’s field equation. PCAM has a 2-page article Einstein’s Field Equations (note the plural), by Malcolm MacCallum (article III.10).
The Euler-Lagrange equation. PCAM article III.12 by Paul Glendinning is about these equations, and more appears in other articles, especially The Calculus of Variations (IV.6), by Irene Fonseca and Giovanni Leoni.
The Dirac equation. A 3-page PCAM article by Mark Dennis (III.9) describes this equation and its quantum mechanics roots.
The logistic map. PCAM article The logistic equation (III.19), by Paul Glendinning, treats this equation, in both differential and difference forms. It occurs in several places in the book.
Bayes’ theorem. This theorem appears in the PCAM article Bayesian Inference in Applied Mathematics (V.11), by Des Higham, and in other articles employing Bayesian methods.
A natural question is: Are there other worthy equations that are the subject of articles in Part III of PCAM that have not been included in the BBC list? Yes! Here are some examples (assuming that only single equations are allowed, which rules out the Cauchy-Riemann equations, for example).
I have collected a set of links to videos (or, in some cases, audio captures with slides) of authors speaking on or around the topics of their Companion articles. These should give readers added insight into the topics and their authors.
At the time of posting all links were valid, but links have a habit of changing or disappearing. Please let me know of any new links that can be added to this list or existing ones that need changing.
The Princeton Companion to Applied Mathematics has a 23-page Part I article "History of Applied Mathematics", but apart from that it does not contain any articles with a historical or biographical emphasis. In designing the book we felt that the articles in Part III, "Equations, Laws, and Functions of Applied Mathematics", would provide a link into the history of applied mathematics through the various equations, laws, and functions included, most of which are eponymous.
The index was produced by a professional indexer, who made a judgement on which of the many names in the book had significant enough mentions to index. A phrase “Newton’s method” would not generate an index entry for “Newton”, but a phrase describing something that Newton did might.
The index revealed some interesting features. First, there are many entries for famous mathematicians and scientists: 76 in total, ranging from Niels Henrik Abel to Thomas Young. This means that even though there are no biographical articles, authors have included plenty of historical and biographical snippets. Second, many of the mathematicians might equally well have been mentioned in a book on pure mathematics (Halmos, Poincaré, Smale, Weil), which indicates the blurred boundary between pure and applied mathematics.
A third feature of the index is that the number of locators for the mathematicians and scientists that it contains varies greatly, from 1 to 20. We can use this to produce a highly non-scientific ranking. Here is a Wordle, in which the font size is proportional to the number of times that each name occurs.
John von Neumann (1903–1957) emerges as The Companion's "most mentioned" applied mathematician. Indeed von Neumann was a hugely influential mathematician who contributed to many fields, as his index entry shows:
von Neumann, John: applied mathematics and, 56–59, 73; computational science and, 336–37, 350; economics and, 71, 644, 650, 869; error analysis and, 77; foams and, 740; Monte Carlo method and, 57; random number generation and, 762; shock waves and, 720; spectral theory and, 239–40, 426
von Neumann's work has strong connections with my own research interests. With Herman Goldstine he published an important rounding error analysis of Gaussian elimination for inverting a symmetric positive definite matrix. He also introduced the alternating projections method that I have used to solve the nearest correlation matrix problem. And he derived important results on unitarily invariant matrix norms and singular value inequalities.
More about von Neumann can be found in the biographies
Here are some examples of how different people can use the book.
Undergraduate students can use it to get an overview of topics they are studying and to find out what areas of applied mathematics they might like to pursue at graduate level. Many of the articles have minimal pre-requisites (indeed some contain few, if any, equations). My article Color Spaces and Digital Imaging, for example, requires just knowledge of integration and basic linear algebra.
A teacher might find useful the articles The History of Applied Mathematics and the four-part article Teaching Applied Mathematics, as well as the various short articles on interesting problems and applications (e.g., Cloaking, Bubbles, The Flight of a Golf Ball, Robotics, Medical Imaging, Text Mining, and Voting Systems).
Researchers can use the book to find out about topics outside their area that they encounter in seminars but never have the time to study in the research literature.
Engineers can use the book to find out about some of the latest mathematical developments relevant to their interests. The articles Aircraft Noise, Inerters, and Signal Processing, and the index entries “aerodynamics”, “energy-efficient buildings”, and “finite-element methods”, are good starting points.
Students at all levels can learn about how to read and write mathematics, including the use of relevant computer tools, from several articles in Part VII, “Final Perspectives”.
Anyone can use the book for reference. Although it is not a dictionary, encyclopedia, or handbook, The Companion's extensive index makes it easy to locate material, including definitions, equations, functions, laws, theorems, and so on.
The book, produced with LaTeX, is a great example of how to typeset mathematics, with examples of all kinds of equations, figures, and tables. For those new to mathematical typesetting it should be a source of ideas and inspiration. The source code is not provided, but feel free to contact me with questions about how things were done and I will write a post that answers the most common questions.
The final collection of articles, by mathematicians from China, France, the UK, and the USA, gives advice on how to make the case for mathematics to politicians, and will be of interest to anyone who wishes to promote the importance of mathematics.
A lot of applied mathematics relies on computation, whether symbolic or numeric, so many applied mathematicians need to program as part of their work. It was therefore natural to include an article on programming languages in The Princeton Companion to Applied Mathematics.
The article, which I wrote, has two main purposes. The first is to give a history of those programming languages relevant to applied mathematics. The first such language, and indeed the first high-level programming language, was Fortran (1957). The language was standardized in 1966 and it is still going strong, with the most recent standard published in 2008. Developments in programming languages show no sign of abating, with the introduction in recent years of new languages such as Scala (2003), Clojure (2007, a dialect of Lisp), and Julia (2012), as well as new standards for older languages such as C (2011) and C++ (2011).
The second purpose of the article is to discuss mathematical aspects of programming, including
notation (infix, prefix, reverse-Polish)
implementation of complex arithmetic
complexity analysis of codes
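To make the notation item concrete, here is a minimal sketch (my own Python illustration; nothing like it appears in the article) of evaluating an expression given in reverse-Polish form, where each operator follows its operands and no parentheses are needed:

```python
# Minimal reverse-Polish (postfix) evaluator. The space-separated
# token format is an assumption made for this illustration.
def eval_rpn(tokens):
    stack = []
    ops = {"+": lambda a, b: a + b,
           "-": lambda a, b: a - b,
           "*": lambda a, b: a * b,
           "/": lambda a, b: a / b}
    for tok in tokens:
        if tok in ops:
            b = stack.pop()  # right operand is on top of the stack
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

# The infix expression 2 * (3 + 4) in reverse-Polish form:
print(eval_rpn("2 3 4 + *".split()))  # 14.0
```

The stack makes the evaluation order explicit, which is why reverse-Polish notation needs no precedence rules or parentheses.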
One issue that I discuss is the mutually beneficial influences that mathematics and programming languages have had on each other. For example, the notation ⌈x⌉ and ⌊x⌋ for the ceiling and floor functions, which map a real number x to the smallest integer greater than or equal to it and the largest integer less than or equal to it, respectively, was first introduced in the programming language APL. The colon notation for array subscripting, A(i:j,r:s), used in Algol 68 and MATLAB, is now routinely used in linear algebra, in both equations and pseudocode.
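For readers more familiar with Python than MATLAB, the colon notation has a close analogue in NumPy slicing. The following sketch is my own illustration; note that Python indexing is 0-based and upper limits are exclusive, so MATLAB's A(2:3,1:2) becomes A[1:3,0:2]:

```python
import numpy as np

A = np.arange(1, 17).reshape(4, 4)  # 4x4 matrix with entries 1..16

# MATLAB's A(2:3, 1:2) -- rows 2 to 3, columns 1 to 2 -- corresponds
# to the 0-based, half-open slice A[1:3, 0:2] in NumPy.
B = A[1:3, 0:2]
print(B)  # [[ 5  6]
          #  [ 9 10]]
```

The same colon notation carries straight over to pseudocode: A(i:j, r:s) denotes the submatrix in rows i to j and columns r to s.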
On the other hand, mathematics has influenced, or anticipated, syntax in programming languages. In the 1800s Cayley proposed two different notations to distinguish between the products A^{-1}B and BA^{-1} in the context of groups, but both were ungainly and difficult to typeset. In 1928, Hensel suggested the notation A\B and B/A. Although his suggestion appears to have attracted little or no attention, it was reinvented by Cleve Moler for MATLAB and is now a notation familiar to anyone who works in numerical linear algebra.
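In MATLAB the resulting backslash operator solves a linear system without explicitly forming an inverse: x = A\b computes the solution of Ax = b. A rough Python equivalent (my own sketch, using NumPy) is:

```python
import numpy as np

# MATLAB's x = A\b (solve A x = b without forming inv(A))
# corresponds roughly to numpy.linalg.solve for square,
# nonsingular A; the matrix and right-hand side are made up.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = np.linalg.solve(A, b)
print(x)          # solution of A x = b
print(A @ x - b)  # residual, should be ~0
```

As in MATLAB, the point of the notation is that one solves the system directly rather than computing A^{-1} and multiplying, which is both cheaper and more accurate.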
At the start of the article I include a figure containing the first program written for a stored-program computer, namely the highest factor routine that ran on the Manchester "Baby" on June 21, 1948. The program was by Tom Kilburn and is taken from Geoff Tootill's notebook. Tootill is still alive (aged 93), and he kindly gave permission for me to reproduce the image.
In The Princeton Companion to Applied Mathematics (page 50) I mention that a four-legged table provides an example of an ill-posed problem. If we take a table having four legs of equal length lying on a flat surface and shorten one leg by an arbitrarily small amount then the weight supported by that leg will jump from one quarter of the total weight to zero.
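Here is a toy statics calculation (my own sketch, not from the book) that makes the jump explicit, for a unit-weight square table whose weight acts at its centre:

```python
import numpy as np

# Legs of a unit square table at (0,0), (1,0), (1,1), (0,1); total
# weight 1 acting at the centre (0.5, 0.5). With four equal legs,
# symmetry gives each leg a load of 1/4. Now shorten the leg at
# (0,0): the table rests on the remaining three legs only.
legs = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])

# The three leg loads f satisfy: sum(f) = 1 (vertical force balance)
# and sum(f * x) = 0.5, sum(f * y) = 0.5 (moment balance about axes).
M = np.vstack([np.ones(3), legs[:, 0], legs[:, 1]])
rhs = np.array([1.0, 0.5, 0.5])
f = np.linalg.solve(M, rhs)
print(f)  # loads on the legs at (1,0), (1,1), (0,1)
```

The shortened leg's load drops from 1/4 to 0, however small the shortening, and in this toy model the diagonally opposite leg is unloaded too: the solution does not depend continuously on the data, which is the hallmark of an ill-posed problem.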
A table with one leg shorter than the others wobbles, as may one sitting on an uneven floor, and how to cure wobbly tables has been the subject of a number of papers over the years. The tongue-in-cheek article
Hanspeter Kraft, The Wobbly Garden Table, Journal of Biological Physics and Chemistry 1, 95-96, 2001
describes how an engineer, a physicist, and a mathematician would go about solving the problem. The engineer would invent an adjustable leg. The physicist would submit a research proposal to tackle the more general problem of “the stability of multiply-legged objects on rough surfaces”. The mathematician would construct an argument based on the intermediate value theorem to show that stability can be restored with a suitable rotation of no more than 90 degrees. This argument has been discussed by several authors, but turning it into a mathematically precise statement with appropriate assumptions on the table and the ground on which it rests is not easy.
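The intermediate value theorem argument can be sketched numerically in a toy model (my own, not from any of the papers above): smooth ground, and legs at the corners of a square that turns about its centre. For feet on a circle at angles θ + k·90°, the four feet are coplanar exactly when the heights satisfy z0 + z2 = z1 + z3, and the mismatch f(θ) = z0 + z2 − z1 − z3 satisfies f(θ + 90°) = −f(θ), so it changes sign, and hence vanishes, within 90 degrees:

```python
import math

def ground(x, y):
    # An arbitrary smooth "uneven floor" height function (my choice).
    return 0.3 * math.sin(2 * x) + 0.2 * math.cos(3 * y) + 0.1 * x * y

def wobble(theta, r=1.0):
    # Heights of the four feet, at angles theta + k*90 degrees
    # on a circle of radius r about the table's centre.
    z = [ground(r * math.cos(theta + k * math.pi / 2),
                r * math.sin(theta + k * math.pi / 2)) for k in range(4)]
    # The four feet lie in a common plane iff z0 + z2 == z1 + z3.
    return z[0] + z[2] - z[1] - z[3]

# Rotating by 90 degrees permutes the feet, so f(theta + pi/2)
# = -f(theta): f changes sign on [0, pi/2], and bisection finds a
# rotation at which all four feet touch the ground.
a, b = 0.0, math.pi / 2
assert wobble(a) * wobble(b) <= 0
for _ in range(60):
    m = 0.5 * (a + b)
    if wobble(a) * wobble(m) <= 0:
        b = m
    else:
        a = m
print(f"stable rotation at {math.degrees(0.5 * (a + b)):.4f} degrees")
```

This is only the continuity argument, of course; making it rigorous requires the careful assumptions on the table and the ground discussed in the papers.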
The two most recent contributions to this topic that I am aware of are
In the latter paper it is shown that if the ground on which a rectangular table rests slopes by less than 35.36 degrees and the legs of the table are at least half as long as its diagonals then the rotation trick works.
For more insight into this problem you may like to watch the video below in which Matthias Kreck explains the problem with the aid of some excellent animations.