Manchester Numerical Analysis Reports

This post is an edited and updated version of an article that I published in 2006 in the IMANA Newsletter (“Newsletter of the Numerical Analysis Group of the Institute of Mathematics and its Applications”). Very few issues of the Newsletter appear to be electronically available, so I thought it worthwhile to reproduce the article here.

Three different cover designs, from 1988, 1996, and 2005.

The University of Manchester Numerical Analysis (NA) Report series began in 1974. The two key movers in setting up the series were Ian Gladwell, a member of the Department of Mathematics at the University of Manchester (now retired from the Department of Mathematics at Southern Methodist University, Dallas), and Charlie Van Loan, an SERC-funded postdoctoral visitor to the department in 1974–1975 (and subsequently a professor in the Department of Computer Science at Cornell University). The first report was

Charles F. Van Loan, Least Squares Problems with Emphasis Upon Singular Value Techniques, Numerical Analysis Report No. 1, September 1974.

and Charlie wrote four of the first 10 reports. Particularly notable is

Charles F. Van Loan. A study of the matrix exponential. Numerical Analysis Report No. 10, August 1975.

This was an early version of the classic, highly-cited article “Nineteen Dubious Ways to Compute the Exponential of a Matrix” written with Cleve Moler and subsequently published in SIAM Review in 1978, with an updated reprint in SIAM Review in 2003. The report has been reissued as MIMS EPrint 2006.397. Ian’s early contributions include the often-cited

J. L. Siemieniuch and I. Gladwell, On time discretization for linear time-dependent partial differential equations. Numerical Analysis Report No. 5, September 1974.

One of the main aims of the series was to provide a vehicle for pre-publication of a preliminary version of a piece of work prior to its submission to a journal. Right from the start this aim was achieved, with at least 15 of the first 20 reports known to have appeared in refereed journals. Nevertheless, a number of important early reports, such as Number 5 mentioned above, were not submitted but surely would have been in today’s academic climate.

The contents of the series naturally reflect the interests of the numerical analysts at the University of Manchester and UMIST over the years. The first 125 reports (taking us up to October 1986) include contributions on stiff differential equations (George Hall, Jack Williams), complex approximation (Jack Williams), Volterra integral equations (Christopher Baker), polynomial zero-finding (Len Freeman), methods for second order ordinary differential equations (Ian Gladwell, Ruth Thomas), multigrid (Joan Walsh), numerical linear algebra (Nick Higham), and numerical analysis of partial differential equations (Ian Gladwell, David Silvester, Ron Thatcher, Joan Walsh).

As well as containing preprints of research papers, the series includes all thirteen Annual Reports of the Manchester Centre for Computational Mathematics and the proceedings of two 1982 meetings:

Ian Gladwell (ed.), Proceedings of a One-Day Colloquium On Numerical Linear Algebra and Its Applications. Numerical Analysis Report No. 78, July 1982.

George Hall and Jack Williams (eds), Proceedings of a One-Day Colloquium on the Numerical Solution of Ordinary Differential Equations. Numerical Analysis Report No. 84, December 1982.

The reports illustrate the changes in typesetting mathematics since the 1970s. Early reports were typewritten, sometimes with equations written in by hand. In the 1980s many of the reports were wordprocessed using Vuwriter—a technical wordprocessor produced by Vuman Ltd., a spin-off company of the University of Manchester, targeted at the Apricot microcomputer. I wordprocessed several reports on a Commodore 64 microcomputer using an Epson printer, with Greek letters and mathematical characters produced in the printer’s graphics mode (see this earlier post for more details).

The first TeXed reports were produced around 1986/1987, and by the early 1990s most reports were produced in LaTeX, as they are today.

The printed reports retained their distinctive green card cover to the end, but a major change came in May 1993 when they were first made available over the internet—originally by anonymous ftp, and then from the Manchester Centre for Computational Mathematics (MCCM) web site set up in 1994. The web page from which the reports are available, now located here, was automatically created from a BibTeX bib file, the latter being maintained by hand, as was the repository of PDF and PS files.
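The generation step can be sketched in a few lines of Python. This is only an illustration (the actual MCCM script is not recorded here, and the sample entry and field-parsing rules are invented for the sketch): a hand-maintained bib file is parsed naively, one brace-delimited value per field, and turned into an HTML list.

```python
import re

# Invented sample entry, in the style of the report series.
BIB = """\
@techreport{vanl74,
  author = {Charles F. Van Loan},
  title = {Least Squares Problems with Emphasis Upon Singular Value Techniques},
  number = {1},
  year = {1974},
}
"""

def parse_entries(bib):
    """Naive BibTeX parser: split on @-entries, grab field = {value} pairs."""
    entries = []
    for block in re.findall(r"@\w+\{[^@]*\}", bib):
        fields = dict(re.findall(r"(\w+)\s*=\s*\{([^}]*)\}", block))
        entries.append(fields)
    return entries

def to_html(entries):
    """Render the entries as an HTML bulleted list."""
    items = [
        "<li>{author}, \u201c{title}\u201d, Numerical Analysis Report "
        "No. {number}, {year}.</li>".format(**e)
        for e in entries
    ]
    return "<ul>\n" + "\n".join(items) + "\n</ul>"

print(to_html(parse_entries(BIB)))
```

Regenerating the page was then just a matter of rerunning the script whenever the bib file changed.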

In 2005, the NA Report series was folded into the new MIMS EPrints archive, which hosts research outputs of members of the School of Mathematics and associated researchers. EPrints entries are assigned an AMS subject classification and can be searched by those numbers. Reports that would have appeared in the old NA Report series can now generally be found under the classification 65 Numerical Analysis.

On a recent visit to the University of Manchester library I was pleased to find that many of the NA reports up to 2001 are still available in hard copy on the shelves. (A search of the catalogue for “numerical analysis report” reveals the details.)

Most Popular Posts of 2015

WordPress provides detailed statistics on views of posts. These are the five most-viewed posts published on this blog in 2015.

  1. The Rise of Mixed Precision Arithmetic (October).
  2. Programming Languages: An Applied Mathematics View (September).
  3. The Princeton Companion to Applied Mathematics (July).
  4. Top Tips for New LaTeX Users (September).
  5. Numerical Methods That (Usually) Work (May).

WordPress has also prepared a 2015 annual report for this blog, which can be found here.

Mathematics at the Victoria University of Manchester


The Victoria University of Manchester (VUM) merged with the University of Manchester Institute of Science and Technology (UMIST) in 2004 to form The University of Manchester. The two former Departments of Mathematics joined together to form the School of Mathematics. In 2007 the School moved into a new building at the heart of the campus: the Alan Turing Building. The School is one of the largest integrated schools of mathematics in the UK, with around 75 permanent lecturing staff and over 1000 undergraduates.

As the School moves ahead it is important to keep an eye on the past, and to maintain valuable historical information about the predecessor departments. I know from emails I receive and contact with alumni (most recently at a reception in London last summer) that former students and staff like to look at photos and documents relating to their time here.

I have previously made available various documents and photos concerning the VUM Mathematics Tower on Oxford Road.

Now I have scanned five documents that provided information for prospective and current VUM mathematics undergraduates.


Applied Mathematics Workflow

Image courtesy of Stuart Miles at

This blog, which is almost three years old, is titled “Applied mathematics, software and workflow”. Workflow refers to everything involved in a research activity except the actual research. It’s about how to do many different things: edit and typeset a document, store and access your bibliographic references, carry out reproducible numerical experiments, produce figures, back up your files, collaborate with others, and so on. These tasks all need to be done multiple times, so small gains in efficiency can have a big payoff in the long run.

My article Workflow in the The Princeton Companion to Applied Mathematics gives a brief overview of the subject and can be downloaded in pre-publication form as an EPrint.

Workflow is not just about efficiency, though, or about producing the best possible end result. It’s also about enjoying carrying out the various tasks. Don Knuth put it perfectly when he said, in The Art of Computer Programming (Volume 2, Seminumerical Algorithms),

The enjoyment of one’s tools is an essential ingredient of successful work.

A search of this blog shows that I have barely used the term “workflow” so far, but a number of posts relate to this topic.

In the future I will write further posts about workflow as I continue to refine my own.

What is Applied Mathematics For?

Those of us working in applied mathematics are well aware that our field has many important uses in the real world. But if we are put on the spot during a conversation and asked to give some examples it can be difficult to conjure up a convincing list.

One response is to point people to The Princeton Companion to Applied Mathematics. Its 186 articles contain many examples of how applied mathematics is put to work in fields such as sport, engineering, economics, physics, biology, computer science, and finance.

Another way to convince people of the value of applied mathematics is to get them to watch the 1-minute SIAM video below. It was constructed from interviews conducted at a variety of SIAM conferences and comprises snippets of 25 mathematicians saying what they use mathematics for.

Well done to Karthika Swamy Cohen and Michelle Montgomery at SIAM, Adam Bauser and his team at Bauser Media Group, and Sonja Stark at PilotGirl Productions, for producing this great advertisement for applied mathematics!

Knuth on Knowing Your Audience

Donald Knuth has a great ability to summarize things in pithy, quotable nuggets. A good example is the following sentence from his 2001 book Things a Computer Scientist Rarely Talks About:

The amount of terror that lives in a speaker’s stomach when giving a lecture is proportional to the square of the amount he doesn’t know about his audience.

Knuth’s point is about preparation, and it brings to mind the words of Benjamin Franklin, “By failing to prepare, you are preparing to fail”.

It’s essential to find out as much as possible about your audience, not just so that you feel more confident, but also so that what you deliver is appropriate for that audience.

As academics we are used to giving seminars and conference talks for which we know that the audience will be made up of peers, and we usually just need to ascertain where to aim the talk on the axes general researcher–specialist and graduate student–experienced researcher.

For any other talk it is important to go to some effort to find out who will be in the audience, perhaps asking for a list of attendees if the event requires registration. For an after-dinner talk you may want to know whether certain key people who you are thinking of mentioning will be in the audience. For a talk to a general audience you will want to assess the base level of technical knowledge that can be assumed.

Keep these thoughts in mind when that sought-after invitation to give a “TED talk” arrives in your mailbox.

©Guy Venables. Used with permission.

A New Source of Data Errors: Scanning and Photocopying

In numerical analysis courses we discuss condition numbers as a means for measuring the sensitivity of the solution of a problem to perturbations in the data. Traditionally, we say there are three main sources of data errors:

  1. Rounding errors in storing the data on the computer. For example, the Hilbert matrix with (i,j) entry 1/(i+j-1) cannot be stored exactly in floating point arithmetic.
  2. Measurement errors. If the data comes from physical measurements or experiments then it will have inherent uncertainties, which could be quite large (perhaps of relative size 10^{-3}).
  3. Errors from an earlier computation. If the data for the given problem is the solution to another problem it will inherit errors from the previous problem.
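The first source can be made concrete with a short Python check (not part of the original post): using exact rational arithmetic, we can test which Hilbert matrix entries survive conversion to IEEE double precision unchanged.

```python
from fractions import Fraction

def hilbert_entry_exact(i, j):
    """Exact (i, j) entry of the Hilbert matrix as a rational number."""
    return Fraction(1, i + j - 1)

def hilbert_entry_stored(i, j):
    """Exact value of the same entry after storage as an IEEE double."""
    return Fraction(1.0 / (i + j - 1))

# 1/2 has a finite binary expansion, so the (1, 2) entry is stored exactly;
# 1/3 does not, so the (1, 3) entry picks up a rounding error on input.
assert hilbert_entry_stored(1, 2) == hilbert_entry_exact(1, 2)
assert hilbert_entry_stored(1, 3) != hilbert_entry_exact(1, 3)

e13 = hilbert_entry_exact(1, 3)
rel_err = abs(hilbert_entry_stored(1, 3) - e13) / e13
print(float(rel_err))  # prints a tiny but nonzero relative error
```

The relative error is at the level of the unit roundoff, far smaller than typical measurement errors, but it is there before any computation has begun.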

Recently I learned of a fourth source of error: scanning and photocopying.

Traditionally, photocopiers were based on xerography, whereby electrostatic charges on a light sensitive photoreceptor are used to attract toner particles and then transfer them onto paper to form an image. Nowadays, photocopiers are more likely to comprise a combined scanner and printer, as for example in consumer all-in-one devices.

Last year, German computer scientist David Kriesel discovered that the Xerox WorkCentre 7535 and 7556 machines can jumble up different areas in a scan. In particular, he found an example where many occurrences of the digit “6” are replaced by “8” during the scanning process. See his blog post.

It seems that the Xerox scanners in question use the JBIG2 compression algorithm (a standard for bi-level images developed by the Joint Bi-level Image Experts Group), which segments the image into patches and uses pattern matching, and that the default parameters used were a poor choice because they can lead to these serious errors. Xerox subsequently released software patches.
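To see how patch-based pattern matching can swap one glyph for another, here is a toy Python sketch. It bears no resemblance to the real JBIG2 implementation; the 4-by-5 bitmaps and the thresholds are invented purely for illustration. Each scanned patch is replaced by the closest previously seen patch if one is within a pixel-difference threshold, and a threshold that is too loose merges visually similar glyphs.

```python
# Invented 4x5 bitmaps loosely suggesting a "6" and an "8".
SIX = [
    "0110",
    "1000",
    "1110",
    "1010",
    "1110",
]
EIGHT = [
    "0110",
    "1010",
    "0110",
    "1010",
    "0110",
]

def distance(a, b):
    """Number of differing pixels between two equally sized bit patterns."""
    return sum(ca != cb for ra, rb in zip(a, b) for ca, cb in zip(ra, rb))

def encode(patch, dictionary, threshold):
    """Replace the patch by a dictionary entry if one is close enough;
    otherwise keep the patch and add it to the dictionary."""
    for entry in dictionary:
        if distance(patch, entry) <= threshold:
            return entry          # lossy substitution
    dictionary.append(patch)
    return patch

# With a loose threshold the "6" is silently replaced by the stored "8";
# with a tight one it is kept and learned as a new pattern.
assert encode(SIX, [EIGHT], threshold=4) == EIGHT
assert encode(SIX, [EIGHT], threshold=2) == SIX
```

The danger is that the substitution is silent: the output is a crisp, plausible glyph, just the wrong one, which is exactly why such errors are so hard to spot.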

One would not imagine that scanning on today’s high resolution machines could change whole blocks of pixels. Given the wide range of uses of scanners, including transmission of exam marks, financial information, and engineering specifications, as well as the ubiquitous digitizing of historic documents including journal articles, this is very disturbing.

The problem of mangled scans may not be limited to Xerox machines, as other reports show (see this post and this post).

The moral of the story is: run sanity checks on your scanned data and do not assume that scans (or the results of optical character recognition on them) are accurate!