A Collection of Invalid Correlation Matrices

invalid-correlation-collection.jpg

I’ve written before (here) about the increasingly common problem of matrices that are supposed to be correlation matrices (symmetric and positive semidefinite with ones on the diagonal) turning out to have some negative eigenvalues. This is usually bad news because it means that subsequent computations are unjustified and even dangerous. The problem occurs in a wide variety of situations. For example in portfolio optimization a consequence could be to take arbitrarily large positions in a stock, as discussed by Schmelzer and Hauser in Seven Sins in Portfolio Optimization.

Much research has been done over the last fifteen years or so on how to compute the nearest correlation matrix to a given matrix, and these techniques provide a natural way to correct an “invalid” correlation matrix. Of course, other approaches can be used, such as going back to the underlying data and massaging it appropriately, but it is clear from the literature that this is not always possible and practitioners may not have the mathematical or statistical knowledge to do it.
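
A quick way to detect the problem, and a naive way to patch it, can be sketched in a few lines of NumPy (the function names are my own). Note that the eigenvalue-clipping repair below yields a valid correlation matrix but not the nearest one in any norm; computing the nearest one requires the alternating projections or Newton methods from the literature.

```python
import numpy as np

def is_valid_correlation(A, tol=0.0):
    """Symmetric, unit diagonal, and positive semidefinite?"""
    S = (A + A.T) / 2
    return (np.allclose(A, A.T)
            and np.allclose(np.diag(A), 1.0)
            and np.linalg.eigvalsh(S).min() >= -tol)

def clip_to_correlation(A):
    """Naive repair: replace negative eigenvalues by zero, then
    rescale to restore the unit diagonal.  This produces a valid
    correlation matrix, but not the nearest one in any norm."""
    w, V = np.linalg.eigh((A + A.T) / 2)
    B = V @ np.diag(np.maximum(w, 0)) @ V.T
    d = np.sqrt(np.diag(B))
    B = B / np.outer(d, d)
    np.fill_diagonal(B, 1.0)
    return B

# A symmetric matrix with unit diagonal but a negative eigenvalue
# (its eigenvalues are 1 - sqrt(2), 1, and 1 + sqrt(2)).
A = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0]])
```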

Nataša Strabić and I have built up a collection of invalid correlation matrices, which we used most recently in work on Bounds for the Distance to the Nearest Correlation Matrix. These are mostly real-life matrices, which makes them valuable for test purposes.

We have made our collection of invalid correlation matrices available in MATLAB form on GitHub as the repository matrices-correlation-invalid. I am delighted to be able to include, with the permission of investment company Orbis, two relatively large matrices, of dimensions 1399 and 3120, arising in finance. These were the matrices I used in my original 2002 paper.

Behind the Scenes of the Princeton Companion to Applied Mathematics

I’ve published an article Making the Princeton Companion to Applied Mathematics in Mathematics Today, the membership magazine of The Institute of Mathematics and its Applications.

The article describes the story behind the Princeton Companion to Applied Mathematics, published in October 2015, which I edited (along with associate editors Mark Dennis, Paul Glendinning, Paul Martin, Fadil Santosa and Jared Tanner).

Among the topics covered are

  • the motivation for the book and for publishing it in hard copy (as well as in e-book form),
  • the question “What is applied mathematics?”,
  • the challenge of producing 1000 pages of high quality typeset mathematical text,
  • the design of the book.
150917-1211-00-3093.jpg

Complementary to the article is the YouTube video below, recorded and produced by George Miller, in which I talk about the project. It was filmed in and around the Alan Turing building at the University of Manchester. An interesting tidbit about George is that when he worked at Oxford University Press he set up and commissioned the Very Short Introductions series, of which Tim Gowers’s Mathematics: Very Short Introductions was one of the first volumes to be published.

Empty Matrices in MATLAB

What matrix has zero norm, unit determinant, and is its own inverse? The conventional answer would be that there is no such matrix. But the empty matrix [ ] in MATLAB satisfies these conditions:

>> A = []; norm(A), det(A), inv(A)
ans =
     0
ans =
     1
ans =
     []

While many MATLAB users will be familiar with the use of [ ] as a way of removing a row or column of a matrix (e.g., A(:,1) = []), or omitting an argument in a function call (e.g., max(A,[],2)), fewer will be aware that [ ] is just one in a whole family of empty matrices. Indeed [ ] is the 0-by-0 empty matrix:

>> size([])
ans =
     0     0

Empty matrices can have dimension n-by-0 or 0-by-n for any nonnegative integer n. One way to construct them is with double.empty (or the empty method of any other MATLAB class):

>> double.empty
ans =
     []
>> double.empty(4,0)
ans =
   Empty matrix: 4-by-0

What makes empty matrices particularly useful is that they satisfy natural generalizations of the rules of matrix algebra. In particular, matrix multiplication is defined whenever the inner dimensions match up.

>> A = double.empty(0,5)*double.empty(5,0)
A =
     []
>> A = double.empty(2,0)*double.empty(0,4)
A =
     0     0     0     0
     0     0     0     0

As the second example shows, the product of empty matrices with positive outer dimensions has zero entries. This ensures that expressions like the following work as we would hope:

>> p = 0; A = ones(3,2); B = ones(3,p); C = ones(2,3); D = ones(p,3);
>> [A B]*[C; D]
ans =
     2     2     2
     2     2     2
     2     2     2

In examples such as this empty matrices are very convenient, as they avoid us having to program around edge cases.
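
The same conventions carry over to other systems. In NumPy, for example, empty arrays obey analogous rules; here is a quick sketch mirroring the MATLAB examples above:

```python
import numpy as np

A = np.empty((0, 5)) @ np.empty((5, 0))
print(A.shape)      # (0, 0): a 0-by-0 empty result

B = np.empty((2, 0)) @ np.empty((0, 4))
print(B)            # a 2-by-4 matrix of zeros

# The block-matrix identity from the MATLAB example above:
p = 0
M = np.hstack([np.ones((3, 2)), np.ones((3, p))]) \
    @ np.vstack([np.ones((2, 3)), np.ones((p, 3))])
print(M)            # every entry equals 2
```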

Empty matrices have been in MATLAB since 1986, their inclusion having been suggested by Rod Smart and Rob Schreiber [1]. A 1989 MATLAB manual says

We’re not sure we’ve done it correctly, or even consistently, but we have found the idea useful.

In those days there was only one empty matrix, the 0-by-0 matrix, and this led Nett and Haddad (1993) to describe the MATLAB implementation of the empty matrix concept as “neither correct, consistent, or useful, at least not for system-theoretic applications”. Nowadays MATLAB gets it right and indeed it adheres to the rules suggested by those authors and by de Boor (1990). If you are wondering how the values for norm([]), det([]) and inv([]) given above are obtained, see de Boor’s article for an explanation in terms of linear algebra transformations.

The concept of empty matrices dates back to before MATLAB. The earliest reference I am aware of is a 1970 book by Stoer and Witzgall. As the extract below shows, these authors recognized the need to support empty matrices of varying dimension and they understood how multiplication of empty matrices should work.

stwi70-p3.jpg
From Stoer and Witzgall (1970, page 3).

Reference

  1. John N. Little, The MathWorks Newsletter, 1 (1), March 1986.

Updated Catalogue of Software for Matrix Functions

help-matfun.jpg
From “help matfun” in MATLAB.

Edvin Deadman and I have updated the catalogue of software for matrix functions that we produced in 2014 (and which was discussed in this post). The new version, which has undergone some minor reorganization, is available here. It covers what is available in languages (C++, Fortran, Java, Julia, Python), problem solving environments (GNU Octave, Maple, Mathematica, MATLAB, R, Scilab), and libraries (GNU Scientific Library, NAG Library, SLICOT).

nag-mark25.jpg
From NAG Library Mark 25 News.

Here are some highlights of changes in the last two years that are reflected in the new version.

Other changes to the catalogue include these.

  • SLICOT has been added.
  • Two more R packages are included.

Suggestions for inclusion in a future revision are welcome.

The Improved MATLAB Functions Expm and Logm

pcam-p97-exp-short.jpg
Equation from the Princeton Companion to Applied Mathematics, article “Functions of Matrices” (p. 97)

The matrix exponential is a ubiquitous matrix function, important both for theory and for practical computation. The matrix logarithm, an inverse to the exponential, is also increasingly used (see my earlier post, 400 Years of Logarithms).

MATLAB R2015b introduced new versions of the expm and logm functions. The Release Notes say

The algorithms for expm, logm, and sqrtm show increased accuracy, with logm and sqrtm additionally showing improved performance.

The help text for expm and logm is essentially unchanged from the previous versions, so what’s different about the new functions? (I will discuss sqrtm in a future post.)

The answer is that both functions make use of new backward error bounds that can be much smaller than the old ones for very nonnormal matrices, and so help to avoid a phenomenon known as overscaling. The key change is that when bounding a matrix power series p(X) = a_0 I + a_1 X + a_2 X^2 + \cdots, instead of bounding the kth term a_k X^k by |a_k| \|X\|^k, a potentially smaller bound is used.

This is best illustrated by example. Suppose we want to bound \|X^{12}\| and are not willing to compute X^{12} but are willing to compute lower powers of X. We have 12 = 6 \times 2 = 4 \times 3, so \|X^{12}\| is bounded by each of the terms (\|X^2\|)^6, (\|X^3\|)^4, (\|X^4\|)^3, and (\|X^6\|)^2. But it is easy to see that (\|X^6\|)^2 \le (\|X^2\|)^6 and (\|X^6\|)^2 \le (\|X^3\|)^4, so we can discard two of the bounds, ending up with

\|X^{12}\| \le \min( \|X^4\|^3, \|X^6\|^2 ).

This argument can be generalized so that every power of X is bounded in terms of the norms of X^p for values of p up to some small, fixed value. The gains can be significant. Consider the matrix

X = \begin{bmatrix}1 & 100 \\ 0 & 1 \end{bmatrix}.

We have \|X\|^{12} \approx 10^{24}, but

X^k = \begin{bmatrix}1 & 100k \\ 0 & 1 \end{bmatrix},

so the bound above is roughly \|X^{12}\| \le 3.6 \times 10^{5} (attained by the \|X^6\|^2 term), which is a significant improvement.
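
These numbers are easy to verify; here is a small NumPy check using the 2-norm:

```python
import numpy as np

X = np.array([[1.0, 100.0],
              [0.0,   1.0]])
nrm = lambda M: np.linalg.norm(M, 2)
pw = np.linalg.matrix_power

naive  = nrm(X)**12                  # about 1e24
better = min(nrm(pw(X, 4))**3,       # about 6.4e7
             nrm(pw(X, 6))**2)       # about 3.6e5
actual = nrm(pw(X, 12))              # about 1.2e3
print(naive, better, actual)
```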

One way to understand what is happening is to note the inequality

\rho(X) \le \| X^k\| ^{1/k} \le \|X\|,

where \rho is the spectral radius (the largest modulus of any eigenvalue). The upper bound corresponds to the usual analysis. The lower bound is something that we cannot use to bound the norm of the power series. The middle term is what we are using, and as k\to\infty the middle term converges to the lower bound, which can be arbitrarily smaller than the upper bound.

What is the effect of these bounds on the algorithms in expm and logm? Both algorithms make use of Padé approximants, which are good only for small-normed matrices, so the algorithms begin by reducing the norm of the input matrix. Backward error bounds derived by bounding a power series as above guide the norm reduction and if the bounds are weak then the norm is reduced too much, which can result in loss of accuracy in floating point arithmetic—the phenomenon of overscaling. The new bounds greatly reduce the chance of overscaling.
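
The scaling and squaring idea itself is simple to illustrate. The sketch below is not MATLAB's algorithm (expm uses Padé approximants and the backward error bounds discussed above to choose the scaling), but it shows the basic structure: scale the matrix until its norm is small, approximate the exponential of the scaled matrix, then square repeatedly.

```python
import numpy as np

def expm_sketch(X, terms=16):
    """Illustrative scaling and squaring for the matrix exponential.
    Not MATLAB's algorithm: expm uses Pade approximants and backward
    error bounds to choose the scaling parameter s."""
    # Scale X by 2^(-s) so that the scaled norm is at most 1.
    normX = np.linalg.norm(X, 1)
    s = int(np.ceil(np.log2(normX))) if normX > 1 else 0
    Y = X / 2**s
    # Truncated Taylor series for exp(Y); adequate here since ||Y|| <= 1.
    E = np.eye(len(X))
    term = np.eye(len(X))
    for k in range(1, terms + 1):
        term = term @ Y / k
        E = E + term
    # Undo the scaling: exp(X) = exp(X/2^s)^(2^s).
    for _ in range(s):
        E = E @ E
    return E
```

It is exactly the choice of s that overscaling is about: if the backward error bound used to pick s is too pessimistic, the matrix is scaled further than necessary and accuracy can be lost in the squaring phase.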

In his blog post A Balancing Act for the Matrix Exponential, Cleve Moler describes a badly scaled 3-by-3 matrix for which the original expm suffers from overscaling and a loss of accuracy, but notes that the new algorithm does an excellent job.

The new logm has another fundamental change: it applies inverse scaling and squaring and Padé approximation to the whole triangular Schur factor, whereas the previous logm applied this technique to the individual diagonal blocks in conjunction with the Parlett recurrence.

For more on the algorithms underlying the new codes see these papers. The details of how the norm of a matrix power series is bounded are given in Section 4 of the first paper.

ORCID: Open Researcher and Contributor ID

orcid-1p5-million-served.png

An Open Researcher and Contributor ID, or ORCID, is a unique identifier for a researcher that allows research outputs to be associated with that researcher. If you have a common name, or have moved institutions during your career, then it can be very difficult for people to determine which papers returned in a Google Scholar search (say) are by you rather than by someone else with the same name. If at some point in your career you change your name, or how you list it on papers, the difficulty of attribution may be even greater. Having your ORCID associated with your publications solves this problem.

Such an identifier scheme has existed for some time in the form of ResearcherID from Thomson Reuters, but this is commercial and linked to the Web of Science. The ORCID organization is open and not-for-profit, and its software is open source. The ORCID web site says that as well as the registry of identifiers, ORCID provides “APIs that support system-to-system communication and authentication”. This is what makes ORCID particularly interesting, as it makes it possible to have one’s list of publications generated in an automatic way, either by entering each on the ORCID site and then the list propagating elsewhere, or by ORCID automatically pulling in publication metadata and associating it with you.

The starting point for exploiting ORCIDs is for publishers to collect them at the time of submission. The Royal Society has been collecting ORCIDs with submissions since 2014, and since the turn of the year it has required authors submitting to its journals to provide an ORCID. As the Royal Society points out, “Once you have created an ORCID identifier and connected it with your publications, grants, and affiliations, your details will automatically be entered when using any compatible system”. Other publishers are following suit, as this open letter indicates.

I recently received an email saying “You have 1 new notification in your ORCID inbox”. When I went to the inbox I found a message saying that “Crossref [another not-for-profit organization] would like your permission to interact with your ORCID Record as a trusted party”. I gave permission and now when publishers send information about my new publications to Crossref they will be added to my ORCID record. For more on this important Crossref-ORCID link, see this article.

For academics used to having to repeatedly enter the same information into different systems this automatic updating of an ORCID record is great news.

The University of Manchester recently made it compulsory for an academic to have an ORCID in order to be part of its internal research assessment exercise (the university has a good page about ORCID that answers some FAQs). Research Councils UK recently joined ORCID and will soon start capturing ORCIDs on its grant systems. With these organizations actively supporting the scheme, and with many other organizations among its members (including the American Mathematical Society, but not yet SIAM), it seems that ORCID has a bright future. However, my impression is that many academics are unaware of ORCID. There is plenty of information about it on the web, but probably not in the places academics tend to look.

I expect interest will increase rapidly as ORCID becomes better known. I notice that Springer has started to provide links from an author name on a paper to the ORCID page for that author. This is just the sort of added value feature that will make academics want to register for an ORCID.

The Serial, or Oxford, Comma

peanuts5.jpg
© PEANUTS Worldwide

In the sentence

The great historical heroes of applied mathematics include Archimedes, Newton, Euler, and Gauss.

the comma before the “and” is known as a serial comma. Whether or not to include it is a matter of style.

The serial comma is also known as the Oxford comma, because Oxford University Press style rules require it to be present. The Chicago Manual of Style (CMS) requires the serial comma, as does SIAM, which follows the CMS recommendations and explicitly states, in the SIAM Style Manual, “Use the serial comma before the and or or in lists of three or more items.”

Other organizations, such as the New York Times, The Economist, and the University of Oxford, require that the serial comma be used only when necessary to avoid ambiguity. Consider the sentence

Three important techniques in the design of algorithms are bisection, divide and conquer, and recursion.

If the serial comma is omitted the final phrase becomes “are bisection, divide and conquer and recursion”, which will be confusing to anyone who does not know that “divide and conquer” is a technique.

Conversely, the serial comma is sometimes incorrect when it might appear to be optional. In the sentence

The results show that, unlike Algorithm 1, Algorithm 2 and the SVD-based algorithm exhibit forward stable behaviour in all the experiments.

a serial comma must not be put after “Algorithm 2” because the three algorithms do not form a list, so the sentence does not make sense with that extra comma.

Examples such as the last two, where the serial comma either must be used or must not be used, irrespective of style, are relatively infrequent, but they do arise from time to time.

For the last year or two I have been using the serial comma in my papers and books, partly because it is the style of the relevant publishers. In particular, I became accustomed to its use in The Princeton Companion to Applied Mathematics. But I also like the simplicity of the serial comma: I do not have to stop to think whether to use it every time I write a list. For informal writing, such as on this blog, I have not made up my mind which style to use. I think the serial comma would look fussy in the tagline at the top right corner of this page.

In the chapter “Commas the Serial Killer” in his book Making a Point: The Pernickety Story of English Punctuation, David Crystal notes that originally the use of the serial comma was standard, and it was only in the early twentieth century that it started to be avoided, “as part of the trend towards punctuation minimalism”. Interestingly, Crystal uses the serial comma in his book even though the style of his publisher (Profile Books) is to avoid it.

There is a large amount of material on the internet about the serial comma, of which the short post The Oxford, Comma has some good examples of where it is needed, and Wikipedia has a good entry. There is a song “Oxford Comma” by the American rock band Vampire Weekend (thanks to Sam Clark for pointing this out); a video is here, but beware the expletive in the first line of the song. The “comma queen” Mary Norris has produced an excellent video about the serial comma. The serial comma even has its own Twitter account, @IAmOxfordComma.

What better way to support the Oxford comma than by giving up some of your 140 characters for it in a Tweet!

Manchester Numerical Analysis Reports

This post is an edited and updated version of an article that I published in 2006 in the IMANA Newsletter (“Newsletter of the Numerical Analysis Group of the Institute of Mathematics and its Applications”). Very few issues of the Newsletter appear to be electronically available, so I thought it worthwhile to reproduce the article here.

narep-covers.jpg
Three different cover designs, from 1988, 1996, and 2005.

The University of Manchester Numerical Analysis (NA) Report series began in 1974. The two key movers in setting up the series were Ian Gladwell, a member of the Department of Mathematics at the University of Manchester (now retired from the Department of Mathematics at Southern Methodist University, Dallas), and Charlie Van Loan, an SERC-funded postdoctoral visitor to the department in 1974–1975 (and subsequently a professor in the Department of Computer Science at Cornell University). The first report was

Charles F. Van Loan, Least Squares Problems with Emphasis Upon Singular Value Techniques, Numerical Analysis Report No. 1, September 1974.

and Charlie wrote four of the first 10 reports. Particularly notable is

Charles F. Van Loan. A study of the matrix exponential. Numerical Analysis Report No. 10, August 1975.

This was an early version of the classic, highly cited article “Nineteen Dubious Ways to Compute the Exponential of a Matrix” written with Cleve Moler and subsequently published in SIAM Review in 1978, with an updated reprint in SIAM Review in 2003. The report has been reissued as MIMS EPrint 2206.397. Ian’s early contributions include the often-cited

J. L. Siemieniuch and I. Gladwell, On time discretization for linear time-dependent partial differential equations. Numerical Analysis Report No. 5, September 1974.

One of the main aims of the series was to provide a vehicle for pre-publication of a preliminary version of a piece of work, prior to its submission to a journal. Right from the start this aim was achieved, with at least 15 of the first 20 reports known to have appeared in refereed journals. Nevertheless, a number of important early reports, such as Number 5 mentioned above, were not submitted but surely would have been in today’s academic climate.

The contents of the series naturally reflect the interests of the numerical analysts at the University of Manchester and UMIST over the years. The first 125 reports (taking us up to October 1986) include contributions on stiff differential equations (George Hall, Jack Williams), complex approximation (Jack Williams), Volterra integral equations (Christopher Baker), polynomial zero-finding (Len Freeman), methods for second order ordinary differential equations (Ian Gladwell, Ruth Thomas), multigrid (Joan Walsh), numerical linear algebra (Nick Higham), and numerical analysis of partial differential equations (Ian Gladwell, David Silvester, Ron Thatcher, Joan Walsh).

As well as containing preprints of research papers, the series includes all thirteen Annual Reports of the Manchester Centre for Computational Mathematics and the proceedings of two 1982 meetings:

Ian Gladwell (ed.), Proceedings of a One-Day Colloquium On Numerical Linear Algebra and Its Applications. Numerical Analysis Report No. 78, July 1982.

George Hall and Jack Williams (eds), Proceedings of a One-Day Colloquium on the Numerical Solution of Ordinary Differential Equations. Numerical Analysis Report No. 84, December 1982.

The reports illustrate the changes in typesetting mathematics since the 1970s. Early reports were typewritten, sometimes with equations written in by hand. In the 1980s many of the reports were wordprocessed using Vuwriter—a technical wordprocessor produced by Vuman Ltd., a spin-off company of the University of Manchester, targeted at the Apricot microcomputer. I wordprocessed several reports on a Commodore 64 microcomputer using an Epson printer, with Greek letters and mathematical characters produced in the printer’s graphics mode (see this earlier post for more details).

The first TeXed reports were around 1986/1987 and by the early 1990s most reports were produced in \LaTeX, as they are today.

The printed reports retained their distinctive green card cover to the end, but a major change came in May 1993 when they were first made available over the internet—originally by anonymous ftp from vtx.ma.man.ac.uk and then from the Manchester Centre for Computational Mathematics (MCCM) web site set up in 1994. The web page from which the reports are available, now located here, was automatically created from a BibTeX bib file, the latter being maintained by hand, as was the repository of PDF and PS files.

In 2005, the NA Report series was folded into the new MIMS EPrints archive (http://eprints.ma.man.ac.uk/) which hosts research outputs of members of the School of Mathematics and associated researchers. EPrints entries are assigned an AMS subject classification and can be searched by those numbers. Reports that would have appeared in the old NA Report series can now generally be found under the classification 65 Numerical Analysis.

On a recent visit to the University of Manchester library I was pleased to find that many of the NA reports up to 2001 are still available in hard copy on the shelves. (A search of the catalogue for “numerical analysis report” reveals the details.)

Principal Values of Inverse Cosine and Related Functions

casio-fx-300es-cropped.jpg

I’ve recently been working, with Mary Aprahamian, on theory and algorithms for the matrix inverse sine and cosine and their hyperbolic counterparts. Of course, in order to treat the matrix functions we first need a good understanding of the scalar case. We found that, as regards practical computation, the literature is rather confusing. The reason can be illustrated with the logarithm of a complex number.

Consider the question of whether the equation

\log(z_1 z_2) = \log z_1 + \log z_2

is valid. In many textbooks this equation is stated as is, but with the (often easily overlooked) proviso that each occurrence of \log denotes a particular branch of the logarithm—possibly different in each case. In other words, the equation is true for the multivalued function that includes all branches.

In practice, however, we are usually interested in the principal logarithm, defined as the one for which the complex argument of \log z lies in the interval (-\pi,\pi] (or possibly some other specific branch). Now the equation is not always true. A correction term that makes the equation valid can be expressed in terms of the unwinding number introduced by Corless, Hare, and Jeffrey in 1996, which is discussed in my earlier post Making Sense of Multivalued Matrix Functions with the Matrix Unwinding Function.
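
This can be seen numerically with Python’s cmath module, which implements the principal branch. Taking z_1 = z_2 = e^{3\pi i/4}, the sum of the principal logarithms has imaginary part 3\pi/2, which lies outside (-\pi,\pi], and the unwinding number supplies the correction term:

```python
import cmath
from math import ceil, pi

z1 = z2 = cmath.exp(0.75j * pi)        # each has argument 3*pi/4

lhs = cmath.log(z1 * z2)               # principal log: imaginary part -pi/2
rhs = cmath.log(z1) + cmath.log(z2)    # imaginary part 3*pi/2: not principal

# Unwinding number U(z) = ceil((Im z - pi)/(2*pi)); here U = 1, and
# log(z1*z2) = log(z1) + log(z2) - 2*pi*i*U(log(z1) + log(z2)).
U = ceil((rhs.imag - pi) / (2 * pi))
print(lhs, rhs - 2j * pi * U)          # now the two sides agree
```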

The definition of principal logarithm given in the previous paragraph is standard. But for the inverse (hyperbolic) cosine and sine it is difficult to find clear definitions of principal values, especially over the complex plane. Some authors define these inverse functions in terms of the principal logarithm. Care is required here, since seemingly equivalent formulas can yield different results (one reason is that (z^2-1)^{1/2} is not equivalent to (z-1)^{1/2}(z+1)^{1/2} for complex z). This is a good way to proceed, but working out the ranges of the principal functions from these definitions is not trivial.
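
The non-equivalence of those two square root formulas is easy to check with Python’s cmath, which implements the principal square root (branch cut along the negative real axis). At z = -2 the two expressions differ in sign:

```python
import cmath

z = -2 + 0j
a = cmath.sqrt(z*z - 1)                     # sqrt(3), about 1.732
b = cmath.sqrt(z - 1) * cmath.sqrt(z + 1)   # (i*sqrt(3)) * i = -sqrt(3)
print(a, b)                                 # a == -b: different branches
```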

In our paper we give diagrams that summarize four kinds of information about the principal inverse functions acos, asin, acosh, and asinh.

  • The branch points.
  • The branch cuts, marked by solid lines.
  • The domain and range, shaded gray (and extending to infinity in the obvious directions).
  • The values attained on the branch cuts: the value on the cut is the limit of the values of the function as z approaches the cut from the side without the hashes.

The figures are below. Once we know the principal values we can address questions analogous to the log question, but now for identities relevant to the four inverse functions.

For more, including an explanation of the figures in words and all the details of the matrix case—including answers to questions such as “when is \mathrm{acos}(\cos A) equal to A?”—see our recent EPrint Matrix Inverse Trigonometric and Inverse Hyperbolic Functions: Theory and Algorithms.

acosm-fig0.jpg
acosm-fig1.jpg
acosm-fig2.jpg
acosm-fig3.jpg

Typesetting Mathematics According to the ISO Standard

In The Princeton Companion to Applied Mathematics we used the conventions that the constants e (the base of the natural logarithm) and i (the imaginary unit), and the d in derivatives and integrals, are typeset in an upright font. These conventions are part of an ISO standard, ISO 80000-2:2009. The standard is little-known, though there is an excellent article about it in TUGboat by Claudio Beccari, and Kopka and Daly’s A Guide to \LaTeX has a page on the standard (in section 7.4.10 of the fourth edition and section 5.4.10 of the third edition).

pcam_p178.jpg
An extract from The Princeton Companion to Applied Mathematics (page 178) showing the upright e, d, and i in one equation.

The standard goes into great detail about how all kinds of mathematical notation should be typeset. It is unclear how the typesetting choices were made or who was on the technical committees that made them. Nevertheless, the recommendations are well thought-out.

The most interesting aspects of the standard concern the use of an upright versus a sloping font, which in practice usually amounts to roman versus italic.

  1. Variables and generic functions are written in italic. This, of course, is standard practice.
  2. Mathematical constants whose values do not change are written in roman. Thus e, i, and \pi should be in roman font. However, standard \LaTeX fonts do not have upright lower case Greek letters, so an italic \pi is unavoidable.
  3. Mathematical functions with a fixed meaning, such as exp and sin, are written in roman. Of course, \LaTeX has such definitions built in for many standard functions, but it is a common error for inexperienced users to write, for example, $sin(x)$ (giving sin(x)) instead of $\sin(x)$ (giving \sin(x)). The best way to define macros for additional functions is via \DeclareMathOperator, assuming you are using the amsmath package:

    \DeclareMathOperator{\diag}{diag}
    
  4. Mathematical operators are written in roman. This includes the d in derivatives and integrals.

Although the second and fourth of these rules are not widely followed, they are appealing in that they distinguish variable quantities from fixed ones.

There are some subtleties and some dubious cases.

  • A capital delta may appear in both forms: as an operator, hence roman, as in the forward difference operator (\Delta f)(x) = f(x+h) - f(x); and combined with a letter to denote a variable, hence italic, as in A + \mathnormal{\Delta}A (where in \LaTeX the latter delta is typed as \mathnormal{\Delta}).
  • The ISO standard explicitly says that named polynomials, such as the Chebyshev polynomials, should be written in roman: \mathrm{T}_n(x) instead of T_n(x). This certainly follows the rules above, since such polynomials have a fixed meaning, but I have never seen the upright font being used for such polynomials in practice.

I’ve started to use rules 1–4 in my recent papers, most thoroughly in this recent EPrint on matrix functions, and intend to use them in my future writing. In doing so, I am using the following \LaTeX macros, based on those suggested in Beccari’s article.

% The number `e'.
\def\eu{\ensuremath{\mathrm{e}}}
% The imaginary unit.
\def\iu{\ensuremath{\mathrm{i}}}
% The differential operator.
\def\du{\ensuremath{\mathrm{d}}}

The \ensuremath is not essential, but it means that you can type \eu, etc., outside math mode—for example, in the phrase “the limit of this sequence is \eu”. You may want to rewrite the \def commands using \newcommand, so that if the \eu command has already been defined an error will be issued:

\newcommand{\eu}{\ensuremath{\mathrm{e}}}

With these definitions the example at the start of this article is typed as

\int_C\frac{\eu^z}{z}\,\du z = 2\pi\iu.

Note that if you are using Beamer with the recommended sans serif fonts then \mathrm should be replaced by \mathsf in these definitions.

Obtaining the Standard

If you wish to download the ISO standard document from the link given at the start of this post you will be charged the princely sum of around $150 for it! If the aim of the ISO is that the standard becomes adopted then this appears counterproductive. However, it is easy to find a freely downloadable version via a Google search.