Corless, Knuth and Lambert W

Attendees at the SIAM Annual Meeting in Boston last month had the opportunity to meet Donald Knuth. He was there to give the John von Neumann lecture, on which I reported in SIAM News.

At the Sunday evening Welcome Reception I captured this photo of Don and Rob Corless (whose graduate textbook on numerical analysis I discussed here).

[Photo: Donald Knuth and Rob Corless at the Welcome Reception]

Don and Rob are co-authors on the classic paper

Robert M. Corless, Gaston H. Gonnet, D. E. G. Hare, David J. Jeffrey and Donald E. Knuth, On the Lambert W Function, Adv. in Comput. Math. 5, 329-359, 1996.

The Lambert W function is a multivalued function W_k(x), with a countably infinite number of branches, that solves the equation W_k(x) e^{W_k(x)} = x. According to Google Scholar this is Don’s most-cited paper. Here is a diagram of the ranges of the branches of W_k(x), together with values of W_k(1) (+), W_k(10 + 10i) (×), and W_k(-0.1) (o).

[Figure: ranges of the branches of W_k(x), with the values W_k(1) (+), W_k(10 + 10i) (×), and W_k(-0.1) (o) marked]

This is to be compared with the corresponding plot for the logarithm, which consists of horizontal strips of height 2\pi with boundaries at odd multiples of \pi.
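For readers who want to reproduce the branch values marked in the figure, here is a minimal sketch using SciPy's scipy.special.lambertw; the particular branches checked are chosen just for illustration.

```python
# Evaluate a few branches W_k(x) at the points marked in the figure and
# verify that each value satisfies W e^W = x.
import numpy as np
from scipy.special import lambertw

for x in [1, 10 + 10j, -0.1]:
    for k in [-1, 0, 1]:
        w = lambertw(x, k)                 # k-th branch W_k(x)
        residual = abs(w * np.exp(w) - x)  # should be near machine precision
        print(f"x = {x}, k = {k:2d}: W_k(x) = {complex(w):.6f}, residual = {residual:.1e}")
```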

Following the Annual Meeting, Rob ran a conference Celebrating 20 years of the Lambert W function at the University of Western Ontario.

Rob co-authored with David Jeffrey an article on the Lambert W function for the Princeton Companion to Applied Mathematics. The article summarizes the basic theory of the function and some of its many applications, which include delay differential equations. Rob and David note that

The Lambert W function crept into the mathematics literature unobtrusively, and it now seems natural there.

The article is one of the sample articles that can be freely downloaded from this page.

I have worked on generalizing the Lambert W function to matrices, as discussed in

Robert M. Corless, Hui Ding, Nicholas J. Higham and David J. Jeffrey, The Solution of S exp(S) = A Is Not Always the Lambert W Function of A, in ISSAC ’07: Proceedings of the 2007 International Symposium on Symbolic and Algebraic Computation, ACM Publications, pp. 116-121, 2007.

Massimiliano Fasi, Nicholas J. Higham and Bruno Iannazzo, An Algorithm for the Matrix Lambert W Function, SIAM J. Matrix Anal. Appl., 36, 669-685, 2015.

The diagram above is from the latter paper.
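As a rough illustration of what a primary matrix Lambert W function does (this is not the Schur-based algorithm of the Fasi, Higham and Iannazzo paper), for a diagonalizable matrix one can apply the scalar branch to the eigenvalues and transform back; the test matrix below is arbitrary.

```python
# A crude sketch of a primary matrix Lambert W for a diagonalizable A:
# apply the scalar branch W_k to the eigenvalues and transform back.
import numpy as np
from scipy.special import lambertw
from scipy.linalg import expm

def lambertw_matrix_diag(A, k=0):
    """W_k(A) for diagonalizable A (illustration only; not a robust algorithm)."""
    evals, V = np.linalg.eig(A)
    return V @ np.diag(lambertw(evals, k)) @ np.linalg.inv(V)

A = np.array([[3.0, 1.0], [0.5, 2.0]])   # arbitrary diagonalizable test matrix
S = lambertw_matrix_diag(A)              # principal branch, k = 0
print(np.linalg.norm(S @ expm(S) - A))   # small residual: S exp(S) = A
```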

400 Years of Logarithms

The logarithm was first presented in John Napier’s 1614 book Mirifici Logarithmorum Canonis Descriptio (Description of the Wonderful Canon of Logarithms). Last week I was celebrating 400 years of logarithms at the Napier 400 workshop held at the ICMS in Edinburgh and organized by NAIS. The previous such celebration had been in 1914 and, as one speaker remarked, it is nice to participate in an event held only once every 100 years.

This one-day workshop included talks by Mike Giles on computing logarithms and other special functions on GPUs, and Jacek Gondzio on the history of the logarithmic barrier function in linear and nonlinear optimization.

My interest is in the matrix logarithm. The earliest explicit occurrence that I am aware of is in an 1892 paper by Metzler On the Roots of Matrices, so we are only just into the second century of matrix logarithms.

[Photo and Tweet by @DesHigham: “@nhigham introduced by Dugald Duncan at @ICMS_Edinburgh”]

In my talk The Matrix Logarithm: from Theory to Computation I explained how the inverse scaling and squaring (ISS) algorithm that we use today to compute the matrix logarithm is a direct analogue of the method Henry Briggs used to produce his 1624 tables Arithmetica Logarithmica, which give logarithms to the base 10 of the numbers 1–20,000 and 90,000–100,000 to 14 decimal places. Briggs’s impressive hand computations were done by using the formulas \log a = 2^k \log a^{1/2^k} and \log(1+x) \approx x to write \log_{10} a \approx 2^k \cdot \log_{10}e \cdot (a^{1/2^k} - 1). The ISS algorithm for the matrix case uses the same idea, with the square roots being matrix square roots, but approximates \log(1+x) at a matrix argument using Padé approximants, evaluated using a partial fraction expansion. The Fréchet derivative of the logarithm can be obtained by Fréchet differentiating the formulas used in the ISS algorithm. For details see Improved Inverse Scaling and Squaring Algorithms for the Matrix Logarithm (2012) and Computing the Fréchet Derivative of the Matrix Logarithm and Estimating the Condition Number (2013).
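Here is a short sketch of Briggs's recipe, and, very crudely, of its matrix analogue. It uses the first-order approximation \log(1+x) \approx x rather than the Padé approximants of the actual ISS algorithm, and the choice k = 27 is just for illustration.

```python
# Briggs's idea for log10(a): take k repeated square roots so that a^(1/2^k)
# is close to 1, then use log(1+x) ~ x.  Cancellation in r - 1 limits the
# accuracy to roughly half the working precision, one reason the ISS
# algorithm uses Pade approximants and a more careful evaluation.
import numpy as np
from scipy.linalg import logm, sqrtm

def briggs_log10(a, k=27):
    r = a
    for _ in range(k):                            # r = a^(1/2^k)
        r = np.sqrt(r)
    return 2.0**k * np.log10(np.e) * (r - 1.0)    # ~ log10(a)

print(briggs_log10(7.0), np.log10(7.0))           # agree to roughly eight digits

def iss_logm_sketch(A, k=27):
    """Crude matrix analogue: log(A) ~ 2^k (A^(1/2^k) - I), via matrix square roots."""
    X = A
    for _ in range(k):
        X = sqrtm(X)
    return 2.0**k * (X - np.eye(A.shape[0]))

A = np.array([[4.0, 1.0], [0.0, 9.0]])            # arbitrary test matrix
print(np.linalg.norm(iss_logm_sketch(A) - logm(A)))   # small, but only ~ sqrt(eps)
```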

As well as the logarithm itself, various log-like functions are of interest nowadays. One is the unwinding function, discussed in my previous post. Another is the Lambert W function, defined as the solution W(z) of W(z) e^{W(z)} = z. Its many applications include the solution of delay differential equations. Rob Corless and his colleagues produced a wonderful poster about the Lambert W function, which I have on my office wall. Cleve Moler has a recent blog post on the function.
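To illustrate the delay differential equation connection: the scalar equation x'(t) = a x(t - \tau) has solutions e^{\lambda t} exactly when \lambda = a e^{-\lambda\tau}, that is, \lambda = W_k(a\tau)/\tau for some branch k. Here is a quick numerical check with SciPy's lambertw; the values of a and \tau are arbitrary illustrations.

```python
# x'(t) = a x(t - tau) has solutions exp(lambda*t) with lambda = W_k(a*tau)/tau,
# one root for each branch k.
import numpy as np
from scipy.special import lambertw

a, tau = -0.3, 1.0                                  # illustrative values only
for k in [0, -1, 1]:
    lam = lambertw(a * tau, k) / tau
    residual = abs(lam - a * np.exp(-lam * tau))    # characteristic equation
    print(f"k = {k:2d}: lambda = {complex(lam):.4f}, residual = {residual:.1e}")
```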

A few years ago I wrote a paper with Rob, Hui Ding and David Jeffrey about the matrix Lambert W function: The solution of S exp(S) = A is not always the Lambert W function of A. We show that as a primary matrix function the Lambert W function does not yield all solutions to S \exp(S) = A, just as the primary logarithm does not yield all solutions to e^X = A. I am involved in some further work on the matrix Lambert W function and hope to have more to report in due course.
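A small numerical illustration of that non-uniqueness (not an example taken from the paper): take A = I and mix two different scalar branches of W across the columns of an arbitrary invertible matrix V. The result solves S \exp(S) = I but is not a primary matrix function of I, since any primary function of I is a scalar multiple of I.

```python
# Mixing branches W_0(1) and W_1(1) over an arbitrary basis V gives a
# non-primary solution S of S exp(S) = I.
import numpy as np
from scipy.special import lambertw
from scipy.linalg import expm

V = np.array([[1.0, 1.0], [0.0, 1.0]])             # arbitrary invertible matrix
D = np.diag([lambertw(1.0, 0), lambertw(1.0, 1)])  # W_0(1) and W_1(1)
S = V @ D @ np.linalg.inv(V)                       # not a primary function of I
print(np.linalg.norm(S @ expm(S) - np.eye(2)))     # ~ 1e-15: S exp(S) = I
```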