Corless and Fillion’s A Graduate Introduction to Numerical Methods from the Viewpoint of Backward Error Analysis

cofi13_cover.jpg

I acquired this book when it first came out in 2013 and have been dipping into it from time to time ever since. At 868 pages, the book contains a lot of material, and I have only sampled a small part of it. In this post I will not attempt to give a detailed review but rather will explain the distinctive features of the book and why I like it.

As the title suggests, the book is pitched at graduate level, but it will also be useful for advanced undergraduate courses. The book covers all the main topics of an introductory numerical analysis course: floating point arithmetic, interpolation, nonlinear equations, numerical linear algebra, quadrature, numerical solution of differential equations, and more.

In order to stand out in the crowded market of numerical analysis textbooks, a book needs to offer something different. This one certainly does.

  • The concepts of backward error and conditioning are used throughout—not just in the numerical linear algebra chapters.
  • Complex analysis, and particularly the residue theorem, is exploited throughout the book, with contour integration used as a fundamental tool in deriving interpolation formulas. I was pleased to see section 11.3.2 on the complex step approximation to the derivative of a real-valued function, which provides an interesting alternative to finite differences (see the formula sketched after this list). Appendix B, titled “Complex Numbers”, provides in just 8 pages excellent advice on the practical usage of complex numbers and functions of a complex variable that would be hard to find in complex analysis texts. For example, it has a clear discussion of branch cuts, making use of Kahan’s counterclockwise continuity principle (eschewing Riemann surfaces, which have “almost no traction in the computer world”), and makes use of the unwinding number introduced by Corless, Hare, and Jeffrey in 1996.
  • The barycentric formulation of Lagrange interpolation is used extensively, possibly for the first time in a numerical analysis text (the second barycentric formula is written out after this list). This approach was popularized by Berrut and Trefethen in their 2004 SIAM Review paper, and my proof of the numerical stability of the formulas has helped it to gain popularity. Polynomial interpolation and rational interpolation are both covered.
  • Both numerical and symbolic computation are employed—whichever is the most appropriate tool for the topic or problem at hand. Corless is well known for his contributions to symbolic computation and to Maple, but he is equally at home in the world of numerics. Chebfun is also used in a number of places. In addition, section 11.7 gives a 2-page treatment of automatic differentiation.
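To make the complex step approximation concrete, here is the standard formula in my own notation (a sketch, not a quotation from the book). For a function f that is analytic and real on the real axis, and a small step h > 0, Taylor expansion gives

f(x + \mathrm{i}h) = f(x) + \mathrm{i} h f'(x) - \tfrac{1}{2} h^2 f''(x) + O(h^3)
\quad\Longrightarrow\quad
f'(x) \approx \frac{\operatorname{Im} f(x + \mathrm{i}h)}{h}.

The truncation error is O(h^2) and, because no subtraction of nearly equal quantities is involved, h can be taken extremely small without loss of accuracy.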
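And for reference, the second (true) barycentric form of the polynomial interpolating the data (x_j, f_j), j = 0, 1, \dots, n, as popularized by Berrut and Trefethen, is (again in my notation, not the book's)

p(x) = \left. \sum_{j=0}^{n} \frac{w_j}{x - x_j} f_j \right/ \sum_{j=0}^{n} \frac{w_j}{x - x_j},
\qquad
w_j = \prod_{k \ne j} \frac{1}{x_j - x_k},

which costs O(n) operations per evaluation point once the weights w_j have been computed.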

This is a book that one can dip into at any page and quickly find something that is interesting and beyond standard textbook content. Not many numerical analysis textbooks include the Lambert W function, a topic on which Corless is uniquely qualified to write. (I note that Corless and Jeffrey wrote an excellent article on the Lambert W function for The Princeton Companion to Applied Mathematics.) And not so many use pseudospectra.

I like Notes and References sections and this book has lots of them, with plenty of detail, including references that I was unaware of.

As regards the differential equation content, it includes initial and boundary value problems for ODEs, as well as delay differential equations (DDEs) and PDEs. The DDE chapter uses the MATLAB dde23 and ddesd functions for illustration and, like the other differential equation chapters, discusses conditioning.

The book would probably have benefited from editing to reduce its length. The index is thorough, but many long entries need breaking up into subentries. Navigation of the book would be easier if algorithms, theorems, definitions, remarks, etc., had been numbered in one sequence instead of as separate sequences.

Part of the book’s charm is its sometimes unexpected content. How many numerical analysis textbooks recommend reading a book on the programming language Forth (a small, reverse Polish notation-based language popular on microcomputers when I was a student)? And how many would point out the 1994 “rediscovery” of the trapezoidal rule in an article in the journal Diabetes Care (Google “Tai’s model” for some interesting responses to that article)?

I bought the book from SpringerLink via the MyCopy feature, whereby any book available electronically via my institution’s subscription can be bought in (monochrome) hard copy for 24.99 euros, dollars, or pounds (the same price in each currency!).

I give the last word to John Butcher, who concludes the Foreword with “I love this book.”

Numerical Methods That (Usually) Work

A book that inspired me early in my career is Numerical Methods That Work by Forman S. Acton, published in 1970 by Harper and Row. Acton, a professor in the electrical engineering department at Princeton University, had a deep understanding of numerical computation and the book captures his many years of experience of practical problem solving using a combination of hand computations and early computers.

acto70-cover.jpg

Although written in the 1960s, Acton’s book is more about the 1950s world of computation; it makes only brief mention of the QR algorithm for eigenvalues and does not cover the singular value decomposition or variable step size ODE solvers. Moreover, the author has an aversion to library routines and to rigorous error bounds. Acton states that the students who have attended his numerical methods course have mostly “been Engineers and Scientists. (Mathematicians at Princeton are proudly Pure while most Computer Scientists find an obligatory decimal point to be slightly demeaning.)”. What, then, is special about this book from an applied mathematics point of view?

acton-forman.jpg
(c) Princeton University Press Office of Communications

The book promotes timeless principles that are taught less and less nowadays. A general theme is to analyze a problem and exploit its structure, before applying the simplest suitable numerical method. One example that has stuck with me is the idea of trying to treat a given equation as a perturbation of an easier equation. For example, a quadratic equation \epsilon a x^2 + bx + c = 0 with small |\epsilon| can be thought of as a small perturbation of the linear equation bx + c = 0. Then simple fixed point iteration can be used to solve the quadratic with -c/b as a (good) starting value.
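To sketch how that works (my notation, consistent with the equation above): rewrite \epsilon a x^2 + bx + c = 0 as x = -(c + \epsilon a x^2)/b and iterate

x_{k+1} = -\frac{c + \epsilon a x_k^2}{b}, \qquad x_0 = -\frac{c}{b}.

The iteration converges quickly to the root near -c/b provided |\epsilon a c/b^2| is small enough; the other, large root must be recovered separately, for example from the product of the roots, x_1 x_2 = c/(\epsilon a).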

acto70-railroad.jpg

The book is particularly strong on estimation or evaluation of integrals, dealing with singularities in functions, solving scalar nonlinear equations, exploiting asymptotic series, and avoiding instabilities. Several of these issues arise in the “railroad rail problem” presented at the start of the book, which every serious user of numerical methods should have a go at solving.

The pièce de résistance of the book is undoubtedly the 13-page “Interlude: What Not to Compute”. Described as a “cathartic essay” by James Daniel in SIAM Review in 1971, this essay is as relevant as ever, though Acton’s professed dislike of recursive calculations seems dated now that most programming languages fully support recursion.

Contemporary reviewers all note the practical slant of the book. I particularly like H. F. Trotter’s comment that “this reviewer, for one, would find it easier to supply theoretical discussion to supplement this text than to supply the lively practicality that is not always present in other books on this subject” (American Scientist, 59 (4), 1971). As this comment indicates, not only is the book full of excellent advice, but it is written in a distinctive and highly entertaining style. Here are a few examples:

  • “Newton’s predilection for wandering off to East Limbo on encountering a minimum” (On Newton’s method for solving nonlinear equations.)
  • “Only a socially irresponsible man would ignore such computational savings.” (On methods with operation counts proportional to n^2 versus n^3, respectively.)
  • “Many theorems are available for your pleasure.” (About positive definite matrices.)

The typesetting is excellent. One could hardly do better in \LaTeX. Moreover the diagrams are a paragon of good, minimal design and would not be easy to equal with today’s drawing packages.

In the original book the title on the cover is embossed in silver and the word “Usually” has been inserted, unembossed, just before “Work”. In the 1990 reprint by the Mathematical Association of America the “Usually” is in faint grey text. The reprint includes an extra “Preface-90”, an “Afterthoughts” (the quote in the first paragraph is taken from the latter), and some extra problems. The reprint is available on Google Books.

acto96-cover.jpg

In 1996 Acton, by then an emeritus professor of computer science, published a second book, Real Computing Made Real: Preventing Errors in Scientific and Engineering Calculations, with Princeton University Press. It contains similar material on a smaller range of topics, and didn’t have the same impact on me as Numerical Methods That Work. Indeed, being published 26 years later, it feels much more out of date. Unlike the first book, this one does mention Gaussian quadrature, but only to advise against its use. This book is now out of print at PUP but is available from Dover and at Google Books.

Acton died in 2014. Some brief biographical information can be found at a Wikipedia page, a Princeton University obituary, and a tribute from a former student.

More Tips on Book and Thesis Writing

Following my earlier post Top Five Tips on Book Writing, here are seven more tips. These apply equally well to writing a thesis.

090826-0010-22-7268.jpg
Book sculpture at Fudan University, Shanghai.

1. Signpost Citations

In academic writing we inevitably include a fair number of citations to entries in the bibliography. In a book, even more so than in a paper, we do not want the reader to have to turn to the bibliography every time a citation is reached in order to understand what is being cited. So a sentence such as

The matrix logarithm appears in a wide variety of applications
[2], [8], [14].

is better phrased as the more informative

The matrix logarithm appears in a wide variety of applications,
such as reduced-order models [2], image registration [8],
and computer animations [14].

Likewise, instead of

Versions of the algorithm have been developed by several authors
[1], [3], [7].

I would write

Versions of the algorithm have been developed by Chester [1], 
Hughes [3, Sec. 2], and Smith and Jones [7].

Even that example lacks information about the date of publication. In my books I have used my own version of the \LaTeX \cite macro that allows me to include the year:

Versions of the algorithm have been developed by Baker and
Chester [1, 2006], Hughes [3, 2001, Sec. 2], 
and Smith and Jones [7, 2004].

The macro is

\def\ycite[#1#2#3#4#5]#6{\cite[$\mit{#1#2#3#4}$#5]{#6}}
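% How the arguments are parsed: #1-#4 each pick up one digit of the year,
% #5 absorbs any further text up to the closing bracket (e.g. ", Sec.~2",
% or nothing), and #6 is the citation key handed on to \cite.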

(which puts the year in the distinctive math italic font) and the first two citations in the previous sentence would be typed as \ycite[2006]{bach06} and \ycite[2001, Sec.~2]{hugh01}.

2. Produce a Good Index

A good index is essential, since it is the main way that readers can find content. The vast majority of books that I read have an inadequate index, as I have noted in my post A Call for Better Indexes at SIAM Blogs. Usually the index is too small. Occasionally the index is of about the right length but is flawed. The main problems are

  • Items that should be indexed are absent from the index.
  • An index entry does not point to all (significant) occurrences of the term.
  • Related entries are not grouped properly.

Advice on producing an index can be found in Section 13.4 of my Handbook of Writing for the Mathematical Sciences and various other sources (try a Google search), and I intend to write a post on indexing soon.

\LaTeX, through its \index command, used in conjunction with the MakeIndex program, provides an excellent way to produce an index.
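As a reminder of the mechanics, here is a minimal sketch (the entries are invented for the example): subentries are created with !, page ranges are opened and closed with |( and |), and cross-references use |see.

\documentclass{book}
\usepackage{makeidx}   % provides \makeindex and \printindex
\makeindex
\begin{document}
\chapter{Matrix Functions}
The matrix exponential\index{matrix function!exponential|(}%
\index{exponential, matrix|see{matrix function}}
is treated at length over the next few pages.
% ... several pages of discussion ...
\index{matrix function!exponential|)}% close the page range
\printindex   % run latex, then makeindex, then latex again
\end{document}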

3. Use the Backref \LaTeX Package

Backref.sty is a \LaTeX package that adds to each bibliography entry the text “cited on pages” and then lists the pages on which that item was cited. It costs nothing to use it, but it adds great value to the bibliography, which then functions as a separate index into the book. I started using backref with my book MATLAB Guide (2005). To a large extent it removes the need for an author index, and if I do a third edition of Accuracy and Stability of Numerical Algorithms I will probably use backref and drop the author index.
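For anyone who has not tried it, here is a minimal sketch of the usual way to turn this on, via hyperref's pagebackref option (the exact wording of the generated text depends on the package version and options, and the reference here is just for illustration):

\documentclass{article}
\usepackage[pagebackref=true]{hyperref}  % loads backref: each bibliography
                                         % entry is followed by its citing pages
\begin{document}
Backward error analysis goes back to Wilkinson \cite{wilkinson63}.
\begin{thebibliography}{1}
\bibitem{wilkinson63} J. H. Wilkinson, Rounding Errors in Algebraic
Processes, Prentice-Hall, 1963.
\end{thebibliography}
\end{document}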

The backref package is not widely used, though a number of SIAM books have made use of it.

4. Use Hyperlinks

For a book provided in PDF form, hyperlinks from an equation reference to the equation, a citation to the bibliography entry, a URL to the web page, and so on, are a great aid to the reader. In \LaTeX obtaining the hyperlinks is usually just a matter of adding \usepackage{hyperref} in the preamble.
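For anyone wanting a little more control than the bare \usepackage{hyperref}, a few commonly used options are sketched below (the colour choices are purely illustrative):

\documentclass{article}
\usepackage[colorlinks=true,  % coloured link text instead of boxes
            linkcolor=blue,   % internal links: sections, equations, figures
            citecolor=blue,   % citation links
            urlcolor=blue]{hyperref}
\begin{document}
\section{Introduction}\label{sec:intro}
See Section~\ref{sec:intro} or \url{https://www.siam.org}.
\end{document}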

5. Make Figures Readable and Consistent

It’s very easy nowadays to produce figures containing plots of functions or computational results. But it’s much harder to produce a set of figures that

  • are clearly legible,
  • have labels, legends, and annotations that are of similar size to the main text,
  • are consistent in format (axes, line thicknesses, etc.).

All too often I see figures in which the text is so small that I cannot read it at a normal reading distance. My experience (which is mainly with MATLAB, and with the \LaTeX packages TikZ and PGFplots) is that it is a time-consuming process to produce high quality plots. But it is worth the effort.
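To give an idea of what I mean in, say, PGFplots terms, one global style that every figure inherits goes a long way (a sketch only; the particular sizes are illustrative, not a recommendation):

\documentclass{article}
\usepackage{pgfplots}
\pgfplotsset{compat=newest,
  every axis/.append style={     % one global style so that all figures match
    width=0.7\linewidth,
    line width=0.8pt,
    label style={font=\small},   % axis labels comparable to the text size
    tick label style={font=\small},
    legend style={font=\small}}}
\begin{document}
\begin{tikzpicture}
  \begin{axis}[xlabel={$x$}, ylabel={$\sin x$}]
    \addplot[domain=0:6.28, samples=100] {sin(deg(x))};
  \end{axis}
\end{tikzpicture}
\end{document}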

6. Use Short Captions in the List of Figures/Tables

The general form of the \LaTeX caption command is \caption[short caption]{long caption}. The short caption is what is printed in the List of Figures or List of Tables at the front of the book, if you are printing those lists. The short caption will be read in isolation from the figure or table so it should omit all unnecessary detail, such as explaining line or marker types. All too often, the short and long captions are the same, resulting in unnecessarily long and detailed lists of figures or tables.

Here is an example (simplified, with other macros removed) of the caption from a figure in my book Functions of Matrices:

\caption[Illustration of condition (b) of Theorem~11.4.]%
        {Illustration of condition (b) of Theorem~11.4,
         which requires every eigenvalue of $B$ to lie in the
         open half-plane (shaded) of the corresponding eigenvalue
         of $A^{1/2}$.}

7. Make the Header Contain the Section and Chapter Number and Title

I like to know where I am when I am reading a book, so I expect the page headers to tell me the section number and chapter number, and preferably their titles as well. I cannot understand why some books omit this information. Without it, phrases such as “as discussed in the previous chapter” become harder to follow up, and searching for a particular section is more difficult.
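In \LaTeX this is easy to arrange with the fancyhdr package; a minimal sketch for the book class is given below (\leftmark carries the chapter information and \rightmark the section information).

\documentclass{book}
\usepackage{fancyhdr}
\pagestyle{fancy}
\fancyhf{}                               % clear the default header and footer
\fancyhead[LE]{\thepage\quad\leftmark}   % even pages: page number, chapter
\fancyhead[RO]{\rightmark\quad\thepage}  % odd pages: section, page number
\begin{document}
\chapter{Identify Your Audience}
\section{Prospective Readers}
Some text.
\end{document}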

Top Five Tips on Book Writing

Snoopy writing

I’ve written four books, and am currently writing and editing a fifth (The Princeton Companion to Applied Mathematics). I am also an editor of two SIAM book series and chair the SIAM Book Committee. Based on this experience here are my top five tips about writing an (academic) book. These cover high level issues. In a subsequent post I will give some more specific tips relating to writing and typesetting a book or thesis.

1. Identify Your Audience

Book publishers ask prospective authors to complete a proposal form, one part of which asks who is the audience for the book. This is a crucial question that should be answered before a book is written, as the answer will influence the book in many ways.

As an example, you might be contemplating writing a book about the numerical solution of a certain class of equations and intend to include computer code. Your audience might be

  • readers in mathematics or a related subject who wish to learn about numerical methods for solving the equations and are most concerned with the theory or algorithms,
  • readers whose primary interest is in solving the equations and who wish to have lots of sample code that they can run,
  • readers in the previous class who also need to learn the language in which the examples are written.

The choice of content, and how the book is presented, will depend very much on which audience you are writing for.

2. Revise, Revise, Revise

Just like a paper, a book draft needs to go through multiple revisions, and you must not be afraid to make major changes at any stage. You may receive constructive criticisms from reviewers of your book proposal, but reviewers may not have time to read the complete manuscript carefully and you should not assume that they have found all errors, typos, and areas for improvement.

3. Take Time to Choose Your Publisher

Given the huge effort that goes into writing a book you should take the time to find the right publisher. Discuss your book with several publishers and compare what they can offer in the way of

  • format (hardback, paperback, electronic) and, if more than one format, the timescale in which each is made available,
  • if the publisher has branches in more than one country, how price and publication schedule will differ between the countries,
  • whether you are allowed to make a PDF version of the book freely available on your website, if this interests you,
  • willingness to allow you to choose the book design (page size, font, cover, etc.),
  • use of colour (which increases the cost),
  • royalties (including a possible advance),
  • pricing,
  • the publisher’s policy on translations,
  • copy editing (see the next section),
  • time from delivering a completed manuscript to publication,
  • marketing (will the book be advertised at all, and if so how?), and
  • how long your book is guaranteed to stay in print.

It is perfectly acceptable to submit a proposal to several publishers and see what they are willing to offer. However, it is only fair and proper to make clear to a publisher that you are talking to other publishers and, once you have set the wheels of a publisher’s review process in motion, to wait for an offer before making a decision to go with another publisher.

I am always surprised when I hear of authors who approach only one publisher, or who go with the first publisher to express an interest in the book. As in many contexts, it is best to make an informed choice from among the available options.

4. Ensure Your Book is Copy Edited

If you are an inexperienced writer, or your first language is not English, the benefits of copy editing are obvious. But even an experienced author finds it virtually impossible to think about all the little details that a copy editor will check for, such as correctness and consistency of spelling, notation, punctuation (notably the serial comma), citations, and references. For example, I sometimes mix US and UK spellings and don’t want to have to worry about finding and correcting my occasional lapses. A good copy editor will also suggest minor improvements of the text that might escape even the best writers.

Unfortunately, not all publishers copy edit all books nowadays. Notable exceptions that always do copy edit (and, as I know from experience, work to the highest standards in every respect) are Princeton University Press and SIAM.

If your publisher has a Style Manual it obviously makes sense to follow its guidelines in order to minimize changes at the copy editing stage. Here is a link to the SIAM Style Manual.

5. Think Twice Before Co-Authoring a Book

It might seem an attractive proposition to share authorship of a book: surely having n co-authors reduces each author’s work by a factor of n? Unfortunately it often does not work out like that, despite best intentions. In fact, n co-authors can easily take n times as long to write a book as any one of them would. One of the biggest difficulties is timescale: one author may be willing and able to finish a book in a year but another may need twice that period to make their contribution. Indeed it is rare for the co-authors to be matched in the amount of effort they can put into the book; this is clearly problematic if initial expectations are not realized. Other potential problems are differing opinions on content, notation, level, length, and almost anything else associated with a book.

Successful authorship teams often have a track record of co-authoring papers together. Although it is no guarantee that a much larger book project will run smoothly, experience with writing papers together will at least have given a good indication of where disagreements are likely to lie.

Second Edition (2014) of Handbook of Linear Algebra edited by Hogben

One of the two or three largest books I have ever owned was recently delivered to me. The second edition of the Handbook of Linear Algebra, edited by Leslie Hogben (with the help of associate editors Richard Brualdi and G. W. (Pete) Stewart), comes in at over 1900 pages, 7 cm thick and about half a kilogram. It is the same height and width as, but much thicker than, the fourth edition of Golub and Van Loan’s Matrix Computations.

140302-1126-31-6120.jpg
140302-1132-20-6124.jpg

The second edition is substantially expanded from the 1400 page first edition of 2007, with 95 articles as opposed to the original 77. The table of contents and list of contributors is available at the book’s website.

The handbook aims to cover the major topics of linear algebra at both undergraduate and graduate level, as well as numerical linear algebra, combinatorial linear algebra, applications to different areas, and software.

The distinguished list of about 120 authors has produced articles in the CRC handbook style, which requires everything to be presented as a definition, a fact (without proof), an algorithm, or an example. As the author of the chapter on Functions of Matrices, I didn’t find this a natural style to write in, but one benefit is that it encourages the presentation of examples, and the large number of illustrative examples is a characteristic feature of the book.

The 18 new chapters include

  • Tensors and Hypermatrices by Lek-Heng Lim
  • Matrix Polynomials by Joerg Liesen and Christian Mehl
  • Matrix Equations by Beatrice Meini
  • Invariant Subspaces by G. W. Stewart
  • Tournaments by T. S. Michael
  • Nonlinear Eigenvalue Problems by Heinrich Voss
  • Linear Algebra in Mathematical Population Biology and Epidemiology by Fred Brauer and Carlos Castillo-Chavez
  • Sage by Robert A. Beezer, Robert Bradshaw, Jason Grout, and William Stein

A notable absence from the applications chapters is network analysis, which in recent years has increasingly made use of linear algebra to define concepts such as centrality and communicability. However, it is impossible to cover every topic and in such a major project I would expect that some articles are invited but do not come to fruition by publication time.

The book is typeset in \LaTeX, like the first edition, but now using the Computer Modern fonts, which I feel give better readability than the font used previously.

A huge amount of thought has gone into the book. It has a 9 page initial section called Preliminaries that lists key definitions, a 51 page glossary, a 12 page notation index, and a 54 page main index.

For quite a while I was puzzled by index entries such as “50-12–17”. I eventually noticed that the second dash is an en-dash and realized that the notation means “pages 12 to 17 of article 50”. This should have been noted at the start of the index.

In fact, my only serious criticism of the book is the index. It is simply too hard to find what you are looking for. For example, there is no entry for Gershgorin’s theorem, which appears on page 16-6. Nor is there one for Courant-Fischer, whose variational eigenvalue characterization theorem is on page 16-4. There is no index entry under “exponential”, but the matrix exponential appears under two other entries and they point to only one of the various pages where the exponential appears. The index entry for Loewner partial ordering points to Chapter 22, but the topic also has a substantial appearance in Section 9.5. Surprisingly, most of these problems were not present in the index to the first edition, which is also two pages longer!

Fortunately the glossary is effectively a high-level index with definitions of terms (and an interesting read in itself). So to get the best from the book use the glossary and index together!

An alternative book for reference is Bernstein’s Matrix Mathematics (second edition, 2009), which has an excellent 100+ page index, but no glossary. I am glad to have both books on my shelves (the first edition at home and the second edition at work, or vice versa—these books are too heavy to carry around!).

Overall, Leslie Hogben has done an outstanding job to produce a book of this size in a uniform style with such a high standard of editing and typesetting. Ideally one would have both the hard copy and the ebook version, so that one can search the latter. Unfortunately, the ebook appears to have the same relatively high list price as the hard copy (“unlimited access for $169.95”) and I could not see a special deal for buying both. Nevertheless, this is certainly a book to ask your library to order and maybe even to purchase yourself.

Fourth Edition (2013) of Golub and Van Loan’s Matrix Computations

Back in 1980 there were not many up-to-date books on numerical linear algebra. Stewart’s Introduction to Matrix Computations (1973) was a popular textbook, and was the text for the final year undergraduate course that I took on the subject. Parlett’s The Symmetric Eigenvalue Problem (1980) was a graduate level treatment of the symmetric eigenvalue problem. And Wilkinson’s The Algebraic Eigenvalue Problem (1965) was still the bible of numerical linear algebra, albeit already somewhat out of date due to the fast-moving research developments since it was published.

While an MSc student, I heard about the impending publication of a new book on matrix computations by Golub and Van Loan. I pre-ordered a copy and in spring 1983 received one of the first copies in the UK. The book was a revelation. It presented a completely fresh and up to date perspective on the subject. Some of the most exciting features were

  • extensive use of pseudocode, with MATLAB-style indexing notation, to describe algorithms,
  • the use of flops to measure computational cost,
  • emphasis on the use of the SVD,
  • modern presentation of rounding error analysis, with rounding error bounds given for each algorithm,
  • systematic treatment of the conjugate gradient and Lanczos methods,
  • coverage of topics not found in earlier books, such as condition estimation, generalized SVD, and total least squares,
  • a very lively writing style.

I studied the book in great detail and learned a huge amount from it.

gova13.jpg
Covers of first to fourth editions.

A second edition was published in 1989. It was written while Charlie Van Loan was in the UK on sabbatical and I was spending a year at Cornell (Charlie’s home university). I had the opportunity to read and comment on draft chapters. The second edition maintained all the material from the first and added new chapters on matrix multiplication (and the relevant machine architecture considerations) and parallel algorithms, and it was typeset in LaTeX for the first time. The term flop was redefined so that a+b*c represents two flops (as it does today) instead of one as in the first edition. A number of other changes were introduced to address a criticism in some reviews of the first edition that the book was rather terse and fast-paced for use as a course textbook.

A third edition followed in 1996. After a 17 year gap the fourth edition has just been published. Work on this edition began following the untimely death of Gene Golub in 2007. Some statistics indicate the development of the book:

Edition   Year   Number of pages   Pages of master bibliography
First     1983   472               25
Second    1989   642               34
Third     1996   694               50
Fourth    2013   756               65^\dagger

\dagger The master bibliography of the fourth edition is not printed in the book but is downloadable from the book’s web page.

What is Different About the Fourth Edition?

The new edition is physically larger than its predecessors, with a text width of 13 cm versus 11.5 cm in the last edition, so the content is increased by more than the page count would suggest. Moreover, the paper is extremely high quality, and this makes the book bigger and heavier than you would expect. I bought the hardback, because I know from experience that the softback of all three previous editions did not stand up well to heavy use. The image shows the third and fourth editions along with Horn and Johnson’s Matrix Analysis (second edition, 2013) and my Accuracy and Stability of Numerical Algorithms (second edition, 2002).

mc4-bookpile.jpg

A number of new topics are included, of which I would pick out

  • fast transforms
  • Hamiltonian and product eigenvalue problems
  • large-scale SVD
  • multigrid
  • tensor computations

I like the statement in the preface that “References that are historically important have been retained because old ideas have a way of resurrecting themselves.” This is of course particularly true as regards methods suitable for high-performance computing.

Lists of relevant LAPACK codes at the start of each chapter have been removed, as have many of the small, illustrative numerical examples, which are replaced by MATLAB codes to be made available on the book’s web page.

The fourth edition remains the best general reference on matrix computations and a must-have for any serious researcher in the field. A big difference from 1983, when the first edition appeared, is that now a separate research monograph is available covering almost every topic in the book (and due reference is made to 28 such “Global References”). But Matrix Computations brings together and unifies a wide variety of topics in one place.

2013 has been a good year for books on matrices and approximation, with the publication of a second edition of Horn and Johnson’s Matrix Analysis, Trefethen’s Approximation Theory and Approximation Practice, and now this very welcome fourth edition of Golub and Van Loan. It is available from the usual sources as well as from SIAM. Consider the Kindle edition to save your back. You can still have it signed!

mc4-sign.jpg

Second Edition (2013) of Matrix Analysis by Horn and Johnson

hojo13-cover.jpg

Horn and Johnson’s 1985 book Matrix Analysis is the standard reference for the subject, along with the companion volume Topics in Matrix Analysis (1991). This second edition, published 28 years after the first, is long-awaited. It’s a major revision: 643 pages up from 561 and with much more on each page thanks to pages that are wider and taller. The number of problems and the number of index entries have both increased, by 60% and a factor 3, respectively, according to the preface. Hints for solutions of the problems are now given in an appendix.

The number of chapters is unchanged and their titles are essentially the same. New material has been added, such as the CS decomposition; existing material has been reorganized, with the singular value decomposition appearing much earlier now; and the roles of block matrices and left eigenvectors have been expanded.

Unlike the first edition, the book has been typeset in LaTeX (in Times Roman) and it’s been beautifully done, except for the too-large solid five-pointed star used in some displays. Moreover, the print quality is superb. Oddly, equations are not punctuated! (The same is true of the first edition, though I must admit I had not noticed.)

The new edition is clearly a must-have for anyone seriously interested in matrix analysis.

Note, however, that this book is not, and cannot be without greatly increasing its size, a comprehensive research monograph. Thus exhaustive references to the literature are not given (as stated in the preface to the original edition). Also, in some cases a story is partly told in the main text and completed in the Problems, or in the Notes and Further Reading. For example, Theorem 3.2.11.1 on page 184 compares the Jordan structure of the nonzero eigenvalues of AB and BA (previously a Problem in the first edition), but the comparison for zero eigenvalues is only mentioned in the Notes and Further Reading seven pages later and is not signposted in the main text.

The 37-page index is extremely comprehensive and covers the Problems as well as the main text. It’s not perfect: Sylvester equation is missing (or rather, is hidden as the subentry Sylvester’s theorem, linear matrix equations).

A final point: the References (bibliography) contains several books that are out of print from the indicated publisher but are available in reprints from other publishers, notably in the SIAM Classics in Applied Mathematics series. They are:

  • Rajendra Bhatia, Perturbation Bounds for Matrix Eigenvalues, SIAM, 2007: hard copy, ebook.
  • Françoise Chatelin, Eigenvalues of Matrices, SIAM, 2012: ebook, hard copy.
  • Charles Cullen, Matrices and Linear Transformations, Second edition, Dover, 1990: Google Books.
  • Israel Gohberg, Peter Lancaster & Leiba Rodman, Matrix Polynomials, SIAM, 2009: hard copy, ebook.
  • Israel Gohberg, Peter Lancaster & Leiba Rodman, Indefinite Linear Algebra and Applications, Birkhauser, 2005: ebook.
  • Marvin Marcus & Henryk Minc, A Survey of Matrix Theory and Matrix Inequalities, Dover, 1992: Google Books.
  • Stephen Campbell & Carl Meyer, Generalized Inverses of Linear Transformations, SIAM, 2009: hard copy, ebook.

SIAM Books on Google Play

In 2011 SIAM launched an institutional e-book program, which makes SIAM books available by chapter in PDF form for readers at subscribing institutions. As of late 2012, SIAM books are now available for individual e-book purchase from Google Play, for use on tablets, smartphones, e-readers, or computers (but not Kindles). Unlike in the institutional program, these e-books are subject to full digital rights management (DRM), which means users cannot copy them or print from them and only the Google account holder has access to the book.

I’ve used the Preview facility to look at a few books on Google Play. My own SIAM books, such as Functions of Matrices (2008), are shown as “scanned pages” and appear to have been scanned from the hard copy; zooming in is supported.

google_play_FM.jpg

By comparison, the Princeton Companion to Mathematics can be viewed as “scanned pages” or “flowing text” (ePub format). In the latter, which reformats as you zoom in and seems to be the default, the mathematics renders poorly; this is a shame given the impeccable LaTeX typesetting of the original book.

Is there a good solution yet for how to render mathematics in e-books?

Trefethen’s Approximation Theory and Approximation Practice

This new 305-page SIAM book by Nick Trefethen presents a modern approach to approximation by polynomials and rational functions. Much of the theory here underlies the Chebfun software package and almost every page of the book contains examples computed using Chebfun.

tref12_cover.jpg

The book is certainly a must-read for anyone interested in numerical computation. But the most unusual feature of the book is not immediately obvious: it was entirely produced from 29 MATLAB M-files, one for each chapter. Each M-file contains the book’s text in comment lines intertwined with the MATLAB code that generates the examples and the figures. The book was created by using the MATLAB command publish to generate LaTeX output, which was then run through LaTeX (with a few tweaks for the actual printed book). Nick has made the M-files available at the book’s web page and you can generate the book by running them all through publish.

When I ran publish on one of the M-files it gave a strange error beginning

No method 'createTextNode' with matching signature found for class
'org.apache.xerces.dom.DocumentImpl'.

and I got the same error whatever M-file I tried to publish. This seems to be caused by a clash with some nonstandard M-file on my path, because if I reset the MATLAB path with the matlabrc command (and then add back chebfun to the path) everything works fine.