Tiny Relative Errors

Let x\ne0 and y be distinct floating point numbers. How small can the relative difference between x and y be? For IEEE double precision arithmetic the answer is u = 2^{-53} \approx 1.1 \times 10^{-16}, which is called the unit roundoff.
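As a quick illustration (a Python sketch, not from the original discussion): the double just below 2 differs from 2 by a relative amount of about u, and no pair of distinct doubles can do better.

```python
import math

u = 2.0 ** -53                  # unit roundoff for IEEE double precision

x = math.nextafter(2.0, 0.0)    # the largest double smaller than 2
y = 2.0
d = abs(x - y) / abs(x)         # relative difference between adjacent doubles

print(d)                        # approximately u ~ 1.1e-16
```

Here `math.nextafter` (Python 3.9+) gives the adjacent floating point number, so `d` is as small as the relative difference between distinct doubles can get near 2.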

What if we now let x and y be vectors and define the relative difference as d(x,y) = \|x-y\|_{\infty}/\|x\|_{\infty}, where the infinity norm is \|x\|_{\infty} = \max_i |x_i|? It does not seem to be well known that d(x,y) can be much less than u. I have observed this phenomenon numerous times over the years but had not previously stopped to wonder how it could happen. An example is

x = \begin{bmatrix}1 \\ 10^{-22} \end{bmatrix}, \quad  y = \begin{bmatrix}1 \\ 2\times 10^{-22} \end{bmatrix}, \quad d(x,y) = 10^{-22}.
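Checking this example in a short Python sketch (my illustration, not part of the original post):

```python
u = 2.0 ** -53                  # unit roundoff

x = [1.0, 1e-22]
y = [1.0, 2e-22]

# d(x,y) = ||x - y||_inf / ||x||_inf
d = max(abs(a - b) for a, b in zip(x, y)) / max(abs(a) for a in x)

print(d < u)   # True: the relative difference is far below the unit roundoff
```

The largest element of x agrees exactly with the corresponding element of y, which is what allows d(x,y) to drop below u.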

Notice that the largest element(s) in magnitude of x agree with the corresponding element(s) of y. This is, in fact, the only way that d(x,y) < u is possible.

Theorem (Dingle & Higham, 2013).
Let x\ne0 and y be n-vectors of normalized floating point numbers.
If d(x,y) < u then x_k = y_k for all k such that |x_k| = \|x\|_{\infty}.

This result, and the scalar case mentioned above, are proved in

Nicholas J. Higham and Nicholas J. Dingle, Reducing the Influence of Tiny Normwise Relative Errors on Performance Profiles, MIMS EPrint 2011.90; to appear in ACM Trans. Math. Software.

Performance profiles, introduced by Dolan and Moré in 2002, are a popular way to compare competing algorithms according to particular measures of performance, such as relative error or execution time. We show in the paper above that tiny relative errors can result in misleading performance profiles and show how a simple transformation of the data can lead to much more useful profiles. For more, see the paper or the talk Numerical Issues in Testing Linear Algebra Algorithms that I gave at the SIAM Conference on Computational Science and Engineering in Boston last week.

SIAM Conference on Computational Science and Engineering 2013 Preview

I’ll be attending the SIAM CS&E meeting in Boston next week. This will be the first time I’ve attended this now biennial meeting, which has been running since 2000. The meeting is run by the SIAM Activity Group on Computational Science and Engineering, which is far and away SIAM’s largest activity group, with over 2000 members. The Boston conference is shaping up to be SIAM’s largest ever meeting (apart from ICIAM) with well over 1000 attendees.

The program, spread over five full days, shows the vitality of the field. I’ll be speaking on the first morning in the minisymposium Numerical Accuracy and Reliability Issues in High Performance Computing organized by Marc Baboulin and Ilse Ipsen. Unfortunately there are at least two other minisymposia that I’d like to attend at the same time!

If you are not able to attend you can get a feel for what’s going on by following the hashtag #SIAMCSE13 on Twitter. Also watch out for some live blogging of minisymposia from Paul Constantine and David Gleich; I may even have a go at this myself.

The Nearest Correlation Matrix

A correlation matrix is a symmetric matrix with unit diagonal and nonnegative eigenvalues. In 2000 I was approached by a London fund management company who wanted to find the nearest correlation matrix (NCM) in the Frobenius norm to an almost correlation matrix: a symmetric matrix having a significant number of (small) negative eigenvalues. This problem arises when the data from which the correlations are constructed is asynchronous or incomplete, or when models are stress-tested by artificially adjusting individual correlations. Solving the NCM problem (or obtaining a true correlation matrix some other way) is important in order to avoid subsequent calculations breaking down due to negative variances or volatilities, for example.

Algorithms

The convexity properties of the problem mean that there is a unique nearest correlation matrix, which is hence a global minimizer. In the 1990s several algorithms had been proposed for computing it, but none was guaranteed to work. Prompted by the approach from the company, I investigated the problem. I proved some results characterizing the solution and derived an alternating projections algorithm for computing it [1]. The algorithm repeatedly projects onto the set of matrices with unit diagonal and the cone of symmetric positive semidefinite matrices. It is guaranteed to converge to the minimum, but does so at a linear rate. An important feature of the algorithm is that other projections can be added on. Thus, for example, if we want to leave the trailing principal submatrix of order three unchanged, we simply restore it at the end of each iteration [2], [3].
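For readers who want to experiment, here is a minimal Python/NumPy sketch of the alternating projections idea, with Dykstra's correction applied to the semidefinite projection; the function name `near_corr` is my own, it omits weights and additional constraints, and it uses a simplified stopping test (the full MATLAB implementation appears later in this post):

```python
import numpy as np

def near_corr(A, tol=1e-8, max_iter=500):
    """Nearest correlation matrix to symmetric A by alternating
    projections with Dykstra's correction (unweighted sketch)."""
    Y = A.copy()
    dS = np.zeros_like(A)            # Dykstra correction term
    for _ in range(max_iter):
        R = Y - dS                   # subtract the correction
        lam, V = np.linalg.eigh(R)   # project R onto the PSD cone
        X = (V * np.maximum(lam, 0)) @ V.T
        X = (X + X.T) / 2            # enforce exact symmetry
        dS = X - R
        Y = X.copy()
        np.fill_diagonal(Y, 1.0)     # project onto unit-diagonal matrices
        if np.linalg.norm(Y - X, 'fro') <= tol * np.linalg.norm(Y, 'fro'):
            break
    return Y
```

For example, the indefinite matrix with rows [1,1,0], [1,1,1], [0,1,1] is mapped to a true correlation matrix: symmetric, unit diagonal, and positive semidefinite to within the tolerance.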

The alternating projections algorithm is widely used, but can be slow to converge, especially for large matrices [4]. In 2006, Qi and Sun [5] derived a Newton method for the NCM problem. They work with the dual of the original problem, which is unconstrained. The objective function of the dual is not twice continuously differentiable, but by using the theory of strongly semismooth matrix functions Qi and Sun show that Newton’s method nevertheless has global quadratic convergence.

Ruediger Borsdorf and I, building on work in his M.Sc. thesis [3], developed an algorithm that solves the Newton equations using MINRES with a Jacobi preconditioner (a nontrivial task, since the coefficient matrix is not explicitly available) and has some other refinements described in [6]. This algorithm has been implemented in the NAG Library [7].

In subsequent work, Borsdorf, Marcos Raydan and I [8], [9] used the spectral projected gradient method (SPGM) to solve the k-factor NCM problem, in which the correlation matrix is constrained to have the form of a diagonal matrix plus a rank-k matrix. This problem variant arises in multifactor normal copula models, collateralized debt obligations (CDOs), and multivariate time series. An existing algorithm can fail to converge or to solve the problem, but the SPGM has guaranteed convergence to a stationary point. This algorithm has also been implemented in the NAG Library.

The NCM problem has proved to be of very wide interest beyond the world of finance, as indicated by the fact that [1] is now my third most highly cited paper on the Web of Science. Recent applications in which the problem arises include reconstructing 20th century sea levels, genetic evaluations for thoroughbred horse breeding, modelling public health data sets, modelling storm damage of buildings, and a Kriging model for reservoirs.

Software

I regularly receive emails asking for software implementing algorithms for the NCM problem. I thought it would be useful to summarize what is available. In general, the Newton method is preferred, but the alternating projections method is more flexible as regards incorporating additional constraints.

Original NCM Problem

Alternating Projections Method

Newton Method

k-Factor NCM Problem

A MATLAB Alternating Projections Function

I thought it would be useful to provide my own MATLAB function nearcorr.m implementing the alternating projections algorithm. The listing is below. To see how it compares with the NAG code g02aa.m I ran the test code

%NEARCORR_TEST  Compare g02aa and nearcorr.

rng(10)                                  % Seed random number generators.
n = 100;
A = gallery('randcorr',n);               % Random correlation matrix. 
E  = randn(n)*1e-1;  A = A + (E + E')/2; % Perturb it.
tol = 1e-10;

% A = cor1399; tol = 1e-4;

fprintf('g02aa:\n')
maxits = int64(-1);  % For linear equation solver.
maxit = int64(-1);   % For Newton iteration.
tic
[~,X1,iter1,feval,nrmgrd,ifail] = g02aa(A,'errtol',tol,'maxits',maxits, ...
                                          'maxit',maxit);
toc

fprintf('  Newton steps taken: %d\n', iter1);
fprintf('  Norm of gradient of last Newton step: %6.4f\n', nrmgrd);
if ifail > 0, fprintf('  g02aa failed with ifail = %g\n', ifail), end

fprintf('nearcorr:\n')
tic
[X2,iter2] = nearcorr(A,tol,[],[],[],[],1);
toc
fprintf('  Number of iterations: %d\n', iter2);

fprintf('  Normwise relative difference between computed solutions:')
fprintf('%9.2e\n', norm(X1-X2,1)/norm(X1,1))

Running under Windows 7 on an Ivy Bridge Core i7 processor at 4.4 GHz, I obtained the following results, where the “real-life” matrix is based on stock data:

Matrix                             Code      Time (secs)  Iterations
1. Random (100),   tol = 1e-10     g02aa       0.023           4
                                   nearcorr    0.052          15
2. Random (500),   tol = 1e-10     g02aa       0.48            4
                                   nearcorr    3.01           26
3. Real-life (1399), tol = 1e-4    g02aa       6.8             5
                                   nearcorr  100.6            68

The results show that while nearcorr can be fast for small dimensions, the number of iterations, and hence its run time, tends to increase with the dimension and it can be many times slower than the Newton method. This is a stark illustration of the difference between quadratic convergence and linear (with problem-dependent constant) convergence. Here is my MATLAB function nearcorr.m.

function [X,iter] = nearcorr(A,tol,flag,maxits,n_pos_eig,w,prnt)
%NEARCORR    Nearest correlation matrix.
%   X = NEARCORR(A,TOL,FLAG,MAXITS,N_POS_EIG,W,PRNT)
%   finds the nearest correlation matrix to the symmetric matrix A.
%   TOL is a convergence tolerance, which defaults to LENGTH(A)*EPS.
%   If using FLAG == 1, TOL must be a 2-vector, with first component
%   the convergence tolerance and second component a tolerance
%   for defining "sufficiently positive" eigenvalues.
%   FLAG = 0: solve using full eigendecomposition (EIG).
%   FLAG = 1: treat as "highly non-positive definite A" and solve
%             using partial eigendecomposition (EIGS).
%   MAXITS is the maximum number of iterations (default 100, but may
%   need to be increased).
%   N_POS_EIG (optional) is the known number of positive eigenvalues of A.
%   W is a vector defining a diagonal weight matrix diag(W).
%   PRNT = 1 for display of intermediate output.

%   By N. J. Higham, 13/6/01, updated 30/1/13, 15/11/14, 07/06/15.
%   Reference:  N. J. Higham, Computing the nearest correlation
%   matrix---A problem from finance. IMA J. Numer. Anal.,
%   22(3):329-343, 2002.

if ~isequal(A,A'), error('A must be symmetric.'), end
if nargin < 2 || isempty(tol), tol = length(A)*eps*[1 1]; end
if nargin < 3 || isempty(flag), flag = 0; end
if nargin < 4 || isempty(maxits), maxits = 100; end
if nargin < 6 || isempty(w), w = ones(length(A),1); end
if nargin < 7, prnt = 1; end

n = length(A);
if flag >= 1
   if nargin < 5 || isempty(n_pos_eig)
      [V,D] = eig(A); d = diag(D);
      n_pos_eig = sum(d >= tol(2)*d(n));
   end
   if prnt, fprintf('n = %g, n_pos_eig = %g\n', n, n_pos_eig), end
end

X = A; Y = A;
iter = 0;
rel_diffX = inf; rel_diffY = inf; rel_diffXY = inf;
dS = zeros(size(A));

w = w(:); Whalf = sqrt(w*w');

while max([rel_diffX rel_diffY rel_diffXY]) > tol(1)

   Xold = X;
   R = Y - dS;
   R_wtd = Whalf.*R;
   if flag == 0
      X = proj_spd(R_wtd);
   elseif flag == 1
      [X,np] = proj_spd_eigs(R_wtd,n_pos_eig,tol(2));
   end
   X = X ./ Whalf;
   dS = X - R;
   Yold = Y;
   Y = proj_unitdiag(X);
   rel_diffX = norm(X-Xold,'fro')/norm(X,'fro');
   rel_diffY = norm(Y-Yold,'fro')/norm(Y,'fro');
   rel_diffXY = norm(Y-X,'fro')/norm(Y,'fro');
   iter = iter + 1;
   if prnt
      fprintf('%2.0f:  %9.2e  %9.2e  %9.2e', ...
               iter, rel_diffX, rel_diffY, rel_diffXY)
      if flag >= 1, fprintf('  np = %g\n',np), else fprintf('\n'), end
   end
   if iter > maxits
       error(['Stopped after ' num2str(maxits) ' its. Try increasing MAXITS.'])
   end

end

%%%%%%%%%%%%%%%%%%%%%%%%
function A = proj_spd(A)
%PROJ_SPD

if ~isequal(A,A'), error('Not symmetric!'), end
[V,D] = eig(A);
A = V*diag(max(diag(D),0))*V';
A = (A+A')/2; % Ensure symmetry.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [A,n_pos_eig_found] = proj_spd_eigs(A,n_pos_eig,tol)
%PROJ_SPD_EIGS

if ~isequal(A,A'), error('Not symmetric!'), end
k = n_pos_eig + 10; % 10 is safety factor.
if k > length(A), k = n_pos_eig; end
opts.disp = 0;
[V,D] = eigs(A,k,'LA',opts); d = diag(D);
j = (d > tol*max(d));
n_pos_eig_found = sum(j);
A = V(:,j)*D(j,j)*V(:,j)';  % Build using only the selected eigenpairs.
A = (A+A')/2; % Ensure symmetry.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function A = proj_unitdiag(A)
%PROJ_UNITDIAG

n = length(A);
A(1:n+1:n^2) = 1;

Updates

  • Links updated August 4, 2014.
  • nearcorr.m corrected November 15, 2014: iter was incorrectly initialized (thanks to Mike Croucher for pointing this out).
  • Added link to Mike Croucher’s Python alternating directions code, November 17, 2014.
  • Corrected an error in the convergence test, June 7, 2015. Effect on performance will be minimal (thanks to Nataša Strabić for pointing this out).

Footnotes:

[1] Nicholas J. Higham, Computing the Nearest Correlation Matrix—A Problem from Finance, IMA J. Numer. Anal. 22, 329-343, 2002.

[2] Craig Lucas, Computing Nearest Covariance and Correlation Matrices, M.Sc. Thesis, University of Manchester, 2001.

[3] Ruediger Borsdorf, A Newton Algorithm for the Nearest Correlation Matrix, M.Sc. Thesis, University of Manchester, 2007.

[4] Rene Escalante and Marcos Raydan, Alternating Projection Methods, SIAM, 2011.

[5] Hou-Duo Qi and Defeng Sun, A Quadratically Convergent Newton Method for Computing the Nearest Correlation Matrix, SIAM J. Matrix Anal. Appl. 28, 360-385, 2006.

[6] Ruediger Borsdorf and Nicholas J. Higham, A Preconditioned Newton Algorithm for the Nearest Correlation Matrix, IMA J. Numer. Anal. 30, 94-107, 2010.

[8] Ruediger Borsdorf, Nicholas J. Higham and Marcos Raydan, Computing a Nearest Correlation Matrix with Factor Structure, SIAM J. Matrix Anal. Appl. 31, 2603-2622, 2010.

[9] Ruediger Borsdorf, Structured Matrix Nearness Problems: Theory and Algorithms, Ph.D. Thesis, University of Manchester, 2012.

The Lanczos Tapes

[Image: charcoal sketch of Cornelius Lanczos by John Chirnside of UMIST]

Cornelius Lanczos (1893-1974) gave lectures at UMIST (a predecessor institution of The University of Manchester) in the late 1960s and early 1970s, while he was a Professor at the Dublin Institute for Advanced Studies. In 1972, UMIST Audio Visual Services made three video recordings, lasting almost three hours in total, of Lanczos talking about mathematics, his life, and Einstein. In two of the tapes he speaks in a group discussion, while in the third he speaks eloquently about his life for 50 minutes, directly to camera and apparently without notes. The topics he covers include his experiences as

  • student of Eötvös and Fejér in Hungary,
  • theoretical physicist,
  • assistant of Albert Einstein in Germany,
  • numerical analyst and inventor of the tau method,
  • (re-)discoverer of the fast Fourier transform and singular value decomposition,
  • inventor of the Lanczos algorithm while working at the US National Bureau of Standards, and
  • head of the Theoretical Physics Department at the Dublin Institute for Advanced Studies.

The charcoal sketch above hung for many years in the office of the administrator of the mathematics department at UMIST and now has pride of place on the wall in my office in the Alan Turing Building.

My colleague Stefan Güttel has produced a version of the videos with bookmarks linking to the main topics of discussion. We are pleased to make the videos available online on the occasion of the 120th anniversary of Cornelius Lanczos’s birth (February 2, 1893).

[Image: Cornelius Lanczos]

Arthur Buchheim (1859-1888)

The new second edition of Horn and Johnson’s Matrix Analysis, about which I wrote in a previous post, includes in Problem 2.4.P2 a proof of the Cayley-Hamilton theorem that is valid for matrices with elements from a commutative ring and does not rely on the existence of eigenvalues. The proof is attributed to an 1883 paper by Arthur Buchheim.

A few years ago Arthur Buchheim’s work came up in my own investigations into the history of matrix functions and I discovered that he was a mathematics teacher at Manchester Grammar School, which is located a couple of miles south of the University of Manchester, where I work.

In 1884 Buchheim gave a derivation of Sylvester’s polynomial interpolation formula for matrix functions. The original formula was valid only for matrices with distinct eigenvalues, but in 1886 Buchheim generalized it to handle multiple eigenvalues using Hermite interpolation.
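For a matrix A with distinct eigenvalues λ_i, Sylvester's formula reads f(A) = Σ_i f(λ_i) Π_{j≠i} (A − λ_j I)/(λ_i − λ_j). A Python sketch (my own illustration of the formula, not Buchheim's or Sylvester's notation):

```python
import numpy as np

def sylvester_formula(A, f):
    """Compute f(A) by Sylvester's interpolation formula.
    Assumes A has distinct eigenvalues."""
    lam = np.linalg.eigvals(A)
    n = len(lam)
    I = np.eye(n)
    F = np.zeros((n, n), dtype=complex)
    for i in range(n):
        P = np.eye(n, dtype=complex)   # Lagrange basis polynomial evaluated at A
        for j in range(n):
            if j != i:
                P = P @ (A - lam[j] * I) / (lam[i] - lam[j])
        F += f(lam[i]) * P
    return F

# Check against a case with a known answer: exp of [[0,1],[1,0]],
# which equals [[cosh 1, sinh 1], [sinh 1, cosh 1]].
A = np.array([[0.0, 1.0], [1.0, 0.0]])
E = sylvester_formula(A, np.exp).real
```

The formula breaks down when eigenvalues coincide (the denominators vanish), which is exactly the gap Buchheim's 1886 Hermite-interpolation generalization closed.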


Appropriately, Rinehart, in his 1955 paper The Equivalence of Definitions of a Matric Function, cited Buchheim when he wrote

“there have been proposed in the literature since 1880 eight distinct definitions of a matric function, by Weyr, Sylvester and Buchheim, Giorgi, Cartan, Fantappiè, Cipolla, Schwerdtfeger and Richter … All of the definitions except those of Weyr and Cipolla are essentially equivalent.”

Buchheim studied at New College, Oxford, under the Savilian Professor of Geometry, Henry Smith, and then at Leipzig under Felix Klein. Then he spent five years at Manchester Grammar School, from which he resigned due to ill-health the year before his death.

In addition to his work on matrix functions and the Cayley-Hamilton theorem, Buchheim published a series of papers promoting Grassmann’s methods. In his A History of Mathematics (1909), Cajori notes that

“Arthur Buchheim of Manchester (1859-1888), showed that Grassmann’s Ausdehnungslehre supplies all the necessary materials for a simple calculus of screws in elliptic space.”

He goes on to say that

“Horace Lamb applied the theory of screws to the question of the steady motion of any solid in a fluid.”

thus bringing in another, much more famous, Manchester mathematician about whom I recently wrote.

Sylvester wrote an obituary in Nature in which he stated “I … know and value highly his contributions to the great subject which engaged the principal part of my own attention during the transition period between my residence in Baltimore and at Oxford”.

The best source of information on Buchheim is an article

Jim Tattersall, Arthur Buchheim: Mathematician of Great Promise, in Proceedings of the Canadian Society for History and Philosophy of Mathematics Thirty-first Annual Meeting, Antonella Cupillari, ed, 18 (2005), 200-208.

which lists 24 papers that Buchheim published in his short life of 29 years.

Second Edition (2013) of Matrix Analysis by Horn and Johnson


Horn and Johnson’s 1985 book Matrix Analysis is the standard reference for the subject, along with the companion volume Topics in Matrix Analysis (1991). This second edition, published 28 years after the first, is long-awaited. It’s a major revision: 643 pages, up from 561, and with much more on each page thanks to pages that are wider and taller. According to the preface, the number of problems has increased by 60% and the number of index entries by a factor of 3. Hints for solutions of the problems are now given in an appendix.

The number of chapters is unchanged and their titles are essentially the same. New material has been added, such as the CS decomposition; existing material has been reorganized, with the singular value decomposition appearing much earlier now; and the roles of block matrices and left eigenvectors have been expanded.

Unlike the first edition, the book has been typeset in LaTeX (in Times Roman) and it’s been beautifully done, except for the too-large solid five-pointed star used in some displays. Moreover, the print quality is superb. Oddly, equations are not punctuated! (The same is true of the first edition, though I must admit I had not noticed.)

The new edition is clearly a must-have for anyone seriously interested in matrix analysis.

Note, however, that this book is not, and cannot be without greatly increasing its size, a comprehensive research monograph. Thus exhaustive references to the literature are not given (as stated in the preface to the original edition). Also, in some cases a story is partly told in the main text and completed in the Problems, or in the Notes and Further Reading. For example, Theorem 3.2.11.1 on page 184 compares the Jordan structure of the nonzero eigenvalues of AB and BA (previously a Problem in the first edition), but the comparison for zero eigenvalues is only mentioned in the Notes and Further Reading seven pages later and is not signposted in the main text.

The 37-page index is extremely comprehensive and covers the Problems as well as the main text. It’s not perfect: Sylvester equation is missing (or rather, is hidden as the subentry Sylvester’s theorem, linear matrix equations).

A final point: the References (bibliography) contains several books that are out of print from the indicated publisher but are available in reprints from other publishers, notably in the SIAM Classics in Applied Mathematics series. They are:

  • Rajendra Bhatia, Perturbation Bounds for Matrix Eigenvalues, SIAM, 2007: hard copy, ebook.
  • Françoise Chatelin, Eigenvalues of Matrices, SIAM, 2012: ebook, hard copy
  • Charles Cullen, Matrices and Linear Transformations, Second edition, Dover, 1990: Google Books.
  • Israel Gohberg, Peter Lancaster & Leiba Rodman, Matrix Polynomials, SIAM, 2009: hard copy, ebook.
  • Israel Gohberg, Peter Lancaster & Leiba Rodman, Indefinite Linear Algebra and Applications, Birkhauser, 2005: ebook.
  • Marvin Marcus & Henryk Minc, A Survey of Matrix Theory and Matrix Inequalities, Dover, 1992: Google Books.
  • Stephen Campbell & Carl Meyer, Generalized Inverses of Linear Transformations, SIAM, 2009: hard copy, ebook.

Talk on Accuracy and Stability at Cardiff

My previous post was about the launch meeting of SIAM Student Chapter at Cardiff, at which I gave the opening talk. My talk was titled Accuracy and Stability of Numerical Algorithms and covered rounding of (floating point) numbers, the interplay between precision and accuracy, higher precision computations, and the effect of tiny relative errors on performance profiles.

Here I describe four examples that I gave where rounding, or the choice of rounding mode, can have interesting or surprising (to some) effects.

  • In 2006 Justin Gatlin was credited with a new world record of 9.76 seconds for the 100m. Almost a week after the race, the time was changed to 9.77 seconds, meaning that he had merely equalled the existing record held by Asafa Powell. The reason for the change was that his recorded time of 9.766 had incorrectly been rounded down to the nearest hundredth of a second instead of up, as the IAAF rules require.
  • In 2008 the Mail on Sunday got agitated by the possibility that whether or not the UK inflation target of 3% was exceeded (inflation stood at exactly 3% at the time) could depend on a change of one thousandth of a percent. Since the inflation rate is published to one decimal place, a rate of 3.049 would round down to 3.0% but a rate of 3.050 would round up to 3.1% (ties are rounded up in UK government calculations), meaning the target had been missed.
  • In 1983 the Vancouver stock exchange found that its index had halved over the year since it had been founded. It turned out that the index had been rounded down after every calculation. When the index was recomputed (presumably with round to nearest, though my reference doesn’t say) it doubled.
  • My telephone and cable provider, Virgin Media, wrote to me in 2007 with news about pricing. They had decreased the cost of my cable and line rental package. They had also changed the way call charges are calculated by “rounding up to the next minute” instead of “charging to the nearest second” as before. They gave the example that “a call that lasts 4 minutes 50 seconds will be rounded up to 5 minutes”. What they didn’t mention is that a call that lasts 4 minutes 1 second will also be rounded up to 5 minutes!
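The inflation example comes down to the choice of rounding rule at a tie. A Python sketch (the function name is my own) using the decimal module, which lets the rounding mode be stated explicitly:

```python
from decimal import Decimal, ROUND_HALF_UP

def published_rate(rate):
    # Round to one decimal place, with ties rounded up,
    # as in the UK government calculations described above.
    return Decimal(rate).quantize(Decimal("0.1"), rounding=ROUND_HALF_UP)

print(published_rate("3.049"))   # 3.0 -- target met
print(published_rate("3.050"))   # 3.1 -- a thousandth of a percent more, and the target is missed
```

Note that Python's built-in round() uses round-half-to-even on binary floats, so Decimal with an explicit rounding mode is the reliable way to reproduce rules like this one.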

The talk can be downloaded from my website: Accuracy and Stability of Numerical Algorithms.

First Meeting of Cardiff SIAM Student Chapter

One of SIAM’s newest chapters, its 104th, based at Cardiff University, held its inaugural meeting, the SIAM Chapter Day, on January 21st, 2013.

[Image: Angela Mihai]

Student Chapters of SIAM (The Society for Industrial and Applied Mathematics) are groups based at universities and colleges with the aim of promoting applied mathematics and computational science to young mathematicians. Chapters organize a wide range of activities, including conferences, guest lectures, visits to industry, and social events. They have been an area of growing activity for SIAM in recent years and there are now 108 student chapters worldwide, including 23 outside the United States.

I attended the Cardiff meeting and was one of the speakers, along with Simon Cox (Aberystwyth), Alain Goriely (Oxford) and Matthew Gilbert (Sheffield). The lecture theatre was close to full, with an audience of 70 or so. A significant portion of the audience, and of the poster presenters, came from the School of Engineering, reflecting the strong links that exist between mathematics and engineering at Cardiff.

[Image: poster session]

Angela Mihai is the Chapter’s faculty representative, and also the driving force behind the Chapter being formed. In her opening remarks she mentioned that although this is the Chapter’s launch event, Chapter members have already participated in several events organized by other UK Chapters and the UK & Republic of Ireland SIAM Section. All the signs are that this, the first SIAM Chapter in Wales, will be a great success.

Why and How to Set up a SIAM Student Chapter

SIAM provides funding to support activities, awarding over $36,000 to 74 Student Chapters in the 2012-13 academic year, and it offers support for a Chapter representative to attend the SIAM Annual Meeting and meet with the SIAM leadership. Chapters often collaborate in organizing events, such as the SIAM National Student Chapter Conference held in Manchester in 2012, and these provide an excellent opportunity for networking with like-minded students off campus.

If there is not a SIAM chapter at your institution, it is worth considering setting one up. The process is simple, requiring a letter of intent and petition signed by at least 12 regular or student members of SIAM. If you have trouble getting the signatures I will be happy to help (as will other SIAM officers).

For information on how to set up a Student Chapter see the Student Chapter page on the SIAM website. If you have any questions, contact SIAM’s Membership Manager, Susan Whitehouse, at the address on the web page just mentioned.

See the SIAM Student Blog for ideas on organizing chapter events.

[Image: Simon Cox]

Top 5 Beamer Tips

Here are my top 5 tips for using Beamer, the popular LaTeX package for producing slides.

1. Navigation Symbols

I’ve seen hundreds of talks with slides prepared in Beamer. I’ve yet to see anyone use the navigation symbols that Beamer puts at the bottom right-hand corner of the screen. These should be turned off with

\setbeamertemplate{navigation symbols}{}

2. Extra Slides

If you include extra slides at the end of your talk, you can stop them affecting the page count (usually shown in the footer) as follows:

\usepackage{appendixnumberbeamer} 
...
\appendix 
\begin{frame}{First Extra slide}
...

The appendixnumberbeamer package might not be included in your LaTeX distribution, in which case you can download it from CTAN.

3. References

You can include references in your slides using the following code, best placed after \appendix:

\begin{frame}[allowframebreaks]{References}
\def\newblock{}
\bibliographystyle{plain}
\bibliography{mybib}
\end{frame}

4. Sections

In a longer talk it can be useful to break the talk into sections and produce an outline slide at the start of each section. This can be done by putting in the preamble the code

\setbeamerfont{myTOC}{series=\bfseries,size=\Large}
\AtBeginSection[]{\frame{\frametitle{Outline}%
                  \usebeamerfont{myTOC}\tableofcontents[current]}}

and then typing \section[short_title]{long_title} between frames, where short_title is used for the headline (the line at the top of the screen that can contain various types of information, intended to help the audience know where you are in the talk). In this code I have increased the size of the font used for the outline and made it bold. If you don’t want the headline it can be turned off with

\setbeamertemplate{headline}{}

5. Columns

A good way to line up graphics side by side, especially if they are of different heights, is with Beamer’s columns environment:

\begin{columns}[c] % Columns centered vertically.
\column{5.5cm}     % Adjust column width to taste.
\includegraphics ...
\column{5cm}
\includegraphics ...
\end{columns}

A Black Background for More Restful PDF viewing

I find that large expanses of white on my screen can be hard on the eyes, so I’ve customized the colours of my main applications to yellow or white on black. For example, this is what my Emacs window looks like:

[Image: Emacs screen capture]

I also have MATLAB (via Preferences – Colors) and Thunderbird (via various customizations) set to a black background.

I recently realized that I can get a black background in my two PDF viewers.

Acrobat Pro or Acrobat Reader (Windows or Mac)

Set preferences via

Edit - Preferences - Accessibility - Replace Document Colors 
     - Use High Contrast Colors

which gives the option for white, yellow or green on black, or you can choose a custom background and foreground color.

SumatraPDF (Windows)

Call SumatraPDF with the -invert-colors option, for example as

SumatraPDF.exe -reuse-instance -invert-colors paper.pdf

This is an excellent PDF viewer for use with LaTeX, as it refreshes the view when the PDF file changes, unlike Acrobat.

Skim (Mac)

Unfortunately, Skim appears to allow only the colour of the border to be changed, not the background or the text.