Creativity Workshop

The Numerical Analysis Group at the University of Manchester held a two-day Creativity Workshop at Shrigley Hall in the Cheshire countryside at the end of May 2013. All of the numerical analysis staff, postdocs and PhD students attended, along with two external collaborators from NAG and the Rutherford Appleton Laboratory.

After successfully piloting creativity workshops in 2010 under the Creativity@Home banner, the Engineering and Physical Sciences Research Council (EPSRC) now encourages holders of large grants to exploit creativity training.

A creativity workshop is an event in which a group of people tackle questions using a structured approach that encourages innovative ideas to be generated and carefully assessed and developed. It avoids the trap that we readily fall into of evaluating ideas too soon. Such an event needs an experienced facilitator who understands the nature of creativity and can skillfully guide the participants through the steps of tackling problems.

We were fortunate to have as our guide Dennis Sherwood, a leading expert on creativity who has worked with a wide variety of organizations including Manchester United, the National Grid and the European Commission, and who is recommended by EPSRC (indeed Dennis previously led an EPSRC-funded creativity workshop in 2010 that I attended as part of the Manchester CICADA team).

130530-1200-59-1898.jpg

Dennis provides participants with a day of creativity training before a workshop. He quotes Koestler’s law (from The Act of Creation, 1964):

The creative act is not an act of creation in the sense of the Old Testament.
It does not create something out of nothing.
It uncovers, selects, reshuffles, combines, synthesizes, already
existing facts, ideas, faculties, skills.
The more familiar the parts the more striking the new whole.

He points out a problem with Koestler’s definition: it assumes that the sub-assemblies that are selected, reshuffled, and so on, are already explicitly there. In practice they are usually there within existing patterns, and may not be so obvious. He formulates Sherwood’s Law:

Creativity is the process of forming new patterns from pre-existing
component parts. The more the resulting pattern shows emergent
properties, such as those of beauty, utility, or value,
the more powerful the corresponding idea.

So creativity does not necessarily need new ideas (in any case, we usually don’t know if an idea is novel), but is about taking existing ideas and combining them in new and unanticipated ways. Dennis’s training days and his books 1, 2 explain the principles of creativity, and the workshops themselves help put them into practice.

130530-1217-35-0324.jpg

At our workshop a number of questions were addressed, including “Being a magnet for talent”, “The undergraduate curriculum”, “Software and programming languages”, “The PhD experience”, as well as strategic plans for the group and plans for future research projects and grant proposals.

By the end of an exhausting workshop many ideas had been generated and assessed, and the group is now planning the next steps with the help of the detailed 94-page written report produced by Dennis.

Despite some understandable initial skepticism among some attendees new to the creativity workshop concept, everyone participated fully and enjoyed the experience. I thoroughly recommend such a workshop to other research groups.

130530-1539-08-0335.jpg 130508-1619-34-2043.jpg

Photo credits: Nick Higham (1,4), Dennis Sherwood (2,3).

Footnotes:

1. D. Sherwood, 1998, Unlock Your Mind. A Practical Guide to Deliberate and Systematic Innovation. Gower Publishing Ltd., Aldershot, Hampshire, UK.

2. D. Sherwood, 2001, Smart Things to Know about Innovation and Creativity. Capstone, Oxford, UK.

Gene Golub SIAM Summer School 2013

A two-week Gene Golub SIAM Summer School on matrix functions and matrix equations was held at Fudan University, Shanghai, from July 22 to August 2, 2013, in conjunction with the 3rd International Summer School on Numerical Linear Algebra and the 9th Shanghai Summer School on Analysis and Numerics in Modern Sciences. This was the fourth Golub Summer School and the second devoted to numerical linear algebra.

Forty-five PhD students attended, coming from institutions in 15 countries. The lecturers were Marlis Hochbruck (Karlsruhe Institute of Technology, Germany) and me in week 1, and Peter Benner (Max Planck Institute, Magdeburg), Ren-Cang Li (University of Texas, Arlington) and Xiaoye (Sherry) Li (Lawrence Berkeley National Laboratory, USA) in week 2.

My 10 hours of lectures were on matrix functions. The slides and exercises can be downloaded from my website.

The lectures were held in the mornings in the GuangHua Twin Tower – an impressive, marbled 30-storey building on the Fudan campus. Attendees were grateful that the lecture room was air conditioned, as the Shanghai summer was at its peak of temperature and humidity, and on the Friday of the first week a record temperature of 40.6 degrees Celsius (105 degrees Fahrenheit) was reached in the city.

130721-2231-02_2450.jpg
The GuangHua Twin Tower from my room in the Fuxuan Hotel.

Afternoons contained exercise sessions, 10-minute presentations by the students on their thesis work, and a guest seminar by Hongguo Xu (University of Kansas) in week 1 and Heike Fassbender (TU Braunschweig) in week 2. These were fully attended and it was great to see the students working so enthusiastically together and interacting with the lecturers.

For the lunches and dinners the students and lecturers sat together in randomized positions – an excellent idea on the part of the local organizers which helped ensure that people got to know each other.

Group photos at conferences can be rather shambolic. This one was the most professional I’ve ever seen. When we arrived at the designated spot the photographer had already set up three rows of metal staging and the photo was quickly taken (just as well given the scorching heat even at 8.15 am). Laminated photos were delivered to participants the same afternoon.

130722-0823-41-66.jpg

The local organizers are to be congratulated on an excellent job. In particular, Weiguo Gao and Yangfeng Su (Fudan University) and Zhaojun Bai (UC Davis) were busy every day making sure that the event ran smoothly. Daniel Szyld (Temple University) must also be mentioned for his excellent work over the last 5 years in chairing the SIAM committee that manages the Gene Golub SIAM Summer School program.

The school was generously supported by the SIAM Gene Golub Summer School fund, the Shanghai Center for the Mathematical Sciences, ISFMA (Sino French Institute of Applied Mathematics), the NSF (USA) and NAG.

Three things will stand out in my memory from the School. First, the enthusiasm of the students, among whom will no doubt be some of the future leaders of our field. (See the blog post by my PhD student Sam Relton.) Second, sitting in the plush 15th floor cafe of the GuangHua Tower chatting to other participants over a cafe latte with smooth jazz coming over the speakers. Third, the Chinese (motor) cyclists, who carry a wondrous variety of goods on their bikes and ride without any attention to the traffic signals but miraculously seem to avoid accidents. See the photos below taken on the short walk from the hotel to the department!

In summing up I can do no better than to endorse Charlie Van Loan’s words in describing the first Gene Golub Summer School in 2010: “The idea of a summer school for graduate students from around the world is the perfect way to honor Gene’s memory. It is exactly the kind of activity that Gene loved to promote.”

130722-0106-26-2457.jpg
Professor Tatsien Li making welcoming remarks.
130726-0937-22-2706.jpg
Marlis Hochbruck.
130726-1523-54-2468.jpg
PhD student Antti Koskela (Innsbruck).
130726-1653-31-2475.jpg
Zhaojun Bai addressing the school.
130725-2334-52-2649.jpg
Morning callisthenics in front of the GuangHua Tower.

130722-2356-44-2532.jpg 130725-1125-24-2589.jpg 130725-1129-27-2593.jpg 130724-1133-29-2564.jpg 130725-1128-33-2591.jpg

Notes on SIAM Annual Meeting Minisymposium on Professional Use of Social Media

In my recent post I publicized the upcoming minisymposium Establishing a Professional Presence in the Online World: Unraveling the Mysteries of Social Media and More organized by Tammy Kolda and me at the 2013 SIAM Annual Meeting in San Diego.

We had an enjoyable session. Despite being in the most far-flung and hard-to-find room on the site, we had a good-sized audience who contributed useful questions and thoughts.

The slides for the four talks are downloadable from the previous post. Here, I summarize a few key points from each talk.

Tammy Kolda (Sandia National Labs) described how to export BibTeX entries for journal articles to html via the JabRef reference manager. The resulting html includes an abstract, keywords, and hyperlinks to the DOI, a PDF file, an expurgated BibTeX entry and a preprint version (assuming all this information is present in the entry). The idea is that the html can be used for lists of publications on a web page. I haven’t used JabRef for a while, but intend to try this export filter out. Tammy also gave a flowchart answering the question of how and where to post a publication list.

130711-1900-37-2237.jpg
David Gleich

David Gleich (Purdue University) gave his presentation using Prezi, a cloud-based presentation tool that produces “multiscale” slides that zoom in and out. He surveyed the main social media tools and classified them into categories 1-1, 1-many and many-many. He then explained how he keeps on top of information using Flipboard, Feedly and Instapaper daily.

I described reasons for mathematicians to blog or tweet and the features that characterize a good blog. I also gave tips for using WordPress and Twitter and described SIAM’s plans for a SIAM blog.

130711-1936-40-2248.jpg
Nick Higham

Finally, Karthika Muthukumaraswamy (SIAM Public Awareness Officer) gave a compelling explanation of why mathematicians and scientists should blog and how the web is changing science communication. She also explained the benefits of blog networks, in which several people contribute to a blog, and the motivation for the planned SIAM blog.

130711-2001-09-2254.jpg
Karthika Muthukumaraswamy

130711-1900-13-2230.jpg

More photos are available in my photo gallery.

Finally, I note that David Bindel has written some notes on the SIAM Annual meeting.

Emacs Org Mode Version 8: Upgrading and Some Tips

As I mentioned in my post Emacs: The Ultimate Editor?, one of the things I love about Emacs is Org mode, which provides excellent facilities for working with plain text and exporting it to a variety of other formats. Recently I’ve used Org mode to prepare a number of tables within documents that I then export to \LaTeX and compile to PDF. Key here is Org’s ability to easily add or remove rows and columns, sort rows, and even transpose a table (see below). This blog is written in Org mode and exported to WordPress using org2blog.

A couple of months ago, version 8 of Org was released. It has many improvements over earlier versions but also some changes in syntax. In particular, the export engine has been rewritten. These changes are quite likely to break older Org files. Indeed the release notes say Org 8.0 is the most disruptive major version of Org.

Here is a list of problems I’ve experienced and the fixes. I’m currently using Org 8.0.3.

  • Export to Beamer didn’t work until I added
    (require 'ox-beamer)
    

    to my .emacs.

  • org2blog was broken in Org 8. A new branch for Org 8 was released at https://github.com/ptrv/org2blog/tree/org-8-support. In my tests org2blog/wp-post-subtree did not work properly: the title was being copied as a section heading. This was quickly fixed by author Peter Vasil earlier this week and org2blog is now working fine for me with Org 8.
  • The syntax for \LaTeX table alignments has changed. In Org <8:
    #+ATTR_LaTeX: align = |l|...
    

    In Org 8:

    #+ATTR_LaTeX: :align |l|...
    

Finally, here are a couple of useful, but easy to miss, features of Org.

Table Transpose

A new command in Org 8, org-table-transpose-table-at-point, provides a table transpose function. With the cursor in the table

a11 a12 a13 a14
a21 a22 a23 a24
a31 a32 a33 a34

M-x org-table-transpose-table-at-point produces

a11 a21 a31
a12 a22 a32
a13 a23 a33
a14 a24 a34

This could be particularly useful in a \LaTeX file, provided orgtbl-mode is being used, as there is no easy way to transpose a \LaTeX table.

Shortcuts

I’m not sure if this is new to Org 8, but in any case it’s new to me. Type <s followed by TAB and an empty source block magically appears:

#+BEGIN_SRC 
#+END_SRC

Very useful! The following table shows all the available expansions:

|----------+------------------|
| Sequence | Expands to       |
|----------+------------------|
| <s       | #+BEGIN_SRC      |
| <e       | #+BEGIN_EXAMPLE  |
| <q       | #+BEGIN_QUOTE    |
| <v       | #+BEGIN_VERSE    |
| <V       | #+BEGIN_VERBATIM |
| <c       | #+BEGIN_CENTER   |
| <l       | #+BEGIN_LaTeX    |
| <L       | #+LaTeX          |
| <h       | #+BEGIN_HTML     |
| <H       | #+HTML           |
| <a       | #+BEGIN_ASCII    |
| <A       | #+ASCII:         |
| <i       | #+INDEX:         |
| <I       | #+INCLUDE:       |
|----------+------------------|

SIAM Annual Meeting Minisymposium on Professional Use of Social Media

Tammy Kolda and I are organizing a minisymposium Establishing a Professional Presence in the Online World: Unraveling the Mysteries of Social Media and More at the 2013 SIAM Annual Meeting in San Diego.

This page will act as a repository for the slides of the talks, related information, and a place for discussion. It will be updated as necessary from the date of first post. (Edit: the titles now link to the final versions of the talks in PDF form.)

MS89: Thursday, July 11, 10:30 AM-12:30 PM in Garden Salon I

Abstract: I will discuss the importance of making your publications easily
available online and various ways to maintain such a list. We’ll discuss
important information to include (like DOIs), various websites that
maintain the lists for you, and tools for tracking and exporting your own
lists.

Abstract: I’ll describe my experiences using social media over the past
few years and some lessons learned. This will include a brief survey of
the tools out there and my reasons for using Twitter and WordPress.

Abstract: I will discuss how and why social media can be useful for a
researcher, both as a consumer and a contributor, drawing on my own
experiences of using Twitter and blogging with WordPress. I will also
discuss how SIAM is using social media.

Abstract: I will discuss the importance of blogging for scientific
communication in general, and more specifically, why SIAM may be ready
for a community blog. Based on surveys SIAM has conducted, I will
discuss how a shared blog space can help address the many needs of the
SIAM community in terms of networking, collaboration, scientific
discussion, funding and outreach. More broadly, I will discuss the
importance of direct communication between scientists and the general
public, and how blogs can help achieve this.

Fourth Edition (2013) of Golub and Van Loan’s Matrix Computations

Back in 1980 there were not many up-to-date books on numerical linear algebra. Stewart’s Introduction to Matrix Computations (1973) was a popular textbook, and was the text for the final-year undergraduate course that I took on the subject. Parlett’s The Symmetric Eigenvalue Problem (1980) was a graduate-level treatment of the symmetric eigenvalue problem. And Wilkinson’s The Algebraic Eigenvalue Problem (1965) was still the bible of numerical linear algebra, albeit already somewhat out of date due to the fast-moving research developments since it was published.

While an MSc student, I heard about the impending publication of a new book on matrix computations by Golub and Van Loan. I pre-ordered a copy and in spring 1983 received one of the first copies in the UK. The book was a revelation. It presented a completely fresh and up-to-date perspective on the subject. Some of the most exciting features were

  • extensive use of pseudocode, with MATLAB-style indexing notation, to describe algorithms,
  • the use of flops to measure computational cost,
  • emphasis on the use of the SVD,
  • modern presentation of rounding error analysis, with rounding error bounds given for each algorithm,
  • systematic treatment of the conjugate gradient and Lanczos methods,
  • coverage of topics not found in earlier books, such as condition estimation, generalized SVD, and total least squares,
  • very lively writing style.

I studied the book in great detail and learned a huge amount from it.

images/gova13.jpg
Covers of first to fourth editions.

A second edition was published in 1989. It was written while Charlie Van Loan was in the UK on sabbatical and I was spending a year at Cornell (Charlie’s home university). I had the opportunity to read and comment on draft chapters. The second edition maintained all the material from the first and added new chapters on matrix multiplication (and the relevant machine architecture considerations) and parallel algorithms, and it was typeset in LaTeX for the first time. The term flop was redefined so that a+b*c represents two flops (as it does today) instead of one as in the first edition. A number of other changes were introduced to address a criticism in some reviews of the first edition that the book was rather terse and fast-paced for use as a course textbook.

A third edition followed in 1996. After a 17-year gap the fourth edition has just been published. Work on this edition began following the untimely death of Gene Golub in 2007. Some statistics illustrate the development of the book:

| Edition | Year | Number of pages | Pages of master bibliography |
|---------+------+-----------------+------------------------------|
| First   | 1983 | 472             | 25                           |
| Second  | 1989 | 642             | 34                           |
| Third   | 1996 | 694             | 50                           |
| Fourth  | 2013 | 756             | 65^\dagger                   |

\dagger The master bibliography of the fourth edition is not printed in the book but is downloadable from the book’s web page.

What is Different About the Fourth Edition?

The new edition is physically larger than its predecessors, with a text width of 13 cm versus 11.5 cm in the last edition, so the content is increased by more than the page count would suggest. Moreover, the paper is extremely high quality, and this makes the book bigger and heavier than you would expect. I bought the hardback, because I know from experience that the softback of all three previous editions did not stand up well to heavy use. The image shows the third and fourth editions along with Horn and Johnson’s Matrix Analysis (second edition, 2013) and my Accuracy and Stability of Numerical Algorithms (second edition, 2002).

images/mc4-bookpile.jpg

A number of new topics are included, of which I would pick out

  • fast transforms
  • Hamiltonian and product eigenvalue problems
  • large-scale SVD
  • multigrid
  • tensor computations

I like the statement in the preface that “References that are historically important have been retained because old ideas have a way of resurrecting themselves.” This is of course particularly true as regards methods suitable for high-performance computing.

Lists of relevant LAPACK codes at the start of each chapter have been removed, as have many of the small, illustrative numerical examples, which are replaced by MATLAB codes to be made available on the book’s web page.

The fourth edition remains the best general reference on matrix computations and a must-have for any serious researcher in the field. A big difference from 1983, when the first edition appeared, is that now a separate research monograph is available covering almost every topic in the book (and due reference is made to 28 such “Global References”). But Matrix Computations brings together and unifies a wide variety of topics in one place.

2013 has been a good year for books on matrices and approximation, with the publication of a second edition of Horn and Johnson’s Matrix Analysis, Trefethen’s Approximation Theory and Approximation Practice, and now this very welcome fourth edition of Golub and Van Loan. It is available from the usual sources as well as from SIAM. Consider the Kindle edition to save your back. You can still have it signed!

images/mc4-sign.jpg

Workshop on Matrix Functions and Matrix Equations

Last month we (Stefan Guettel, Nick Higham and Lijing Lin) organized a 2.5-day workshop Advances in Matrix Functions and Matrix Equations. We had 57 attendees from around the world (see group photo): UK (19), Italy (7), USA (7), Germany (6), Canada (2), France (2), Portugal (2), South Africa (2), Saudi Arabia (2), Austria (1), Belgium (1), India (1), Ireland (1), Poland (1), Russia (1), Sweden (1), Switzerland (1).

We last organized a workshop on matrix functions in Manchester in 2008 (MIMS New Directions Workshop Functions of Matrices). The field has advanced significantly since then. Some emerging themes of this year’s workshop were as follows.

Krylov methods: Several speakers presented new results on this class of methods for the approximation of large-scale matrix functions, including a convergence analysis by Grimm of the extended Krylov subspace method taking into account smoothness properties of the starting vector, black-box parameter selection for the rational Krylov approximation of Markov matrix functions by Guettel and Knizhnerman, and an adaptive tangential interpolation strategy for MIMO model order reduction by Simoncini and Druskin.

Matrix exponential: Research continues to focus on this, the most important of all matrix functions (the inverse is excluded as being too special). We were delighted that Charlie Van Loan opened the workshop with a talk “What Isn’t There To Learn from the Matrix Exponential?”. Charlie wrote some of the key early papers on exp(A). Indeed his work on exp(A) began when he was a postdoc at Manchester in the early 1970s, and his 1975 Manchester technical report A Study of the Matrix Exponential contains ideas that later appeared in his papers and his book (with Golub) Matrix Computations. In particular, it makes the case that “anything that the Jordan decomposition can do, the Schur decomposition can do better”, and is still worth reading.

Exotic matrix functions: Two talks focused on newer, more “exotic” matrix functions and had links to Rob Corless, who was in the audience. Bruno Iannazzo discussed how to compute the Lambert W function of a matrix, which is any solution of the matrix equation X e^X = A. The scalar Lambert W function was named and popularized in a 1996 paper by Corless, Gonnet, Hare, Jeffrey and Knuth, On the Lambert W Function; it has many applications, including in delay differential equations. Bruno finished with a striking photo of the equation written in sand.

images/130410-1058-06-0754.jpg

Mary Aprahamian presented a new matrix function called the matrix unwinding function, defined as U(A) = (A - \log e^A)/(2\pi i), which arises from the scalar unwinding number introduced by Corless, Hare and Jeffrey in 1996. She showed that it is useful as a means for obtaining correct identities involving multivalued functions at matrix arguments, as well as for argument reduction in evaluating the matrix exponential.
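The scalar version of the definition is easy to check numerically. Here is a minimal Python sketch (the helper name unwinding is my own, not from the talk); cmath.log is the principal logarithm, so the unwinding number counts how many multiples of 2\pi i must be subtracted from z to bring its imaginary part into (-\pi, \pi]:

```python
import cmath

def unwinding(z):
    """Scalar unwinding number u(z) = (z - log(exp(z))) / (2*pi*i)."""
    u = (z - cmath.log(cmath.exp(z))) / (2j * cmath.pi)
    return round(u.real)  # u is an integer up to rounding error

# Im z = 4 lies outside (-pi, pi], so one multiple of 2*pi*i is unwound:
print(unwinding(1 + 4j))   # 1
print(unwinding(1 + 1j))   # 0, since Im z is already in (-pi, pi]
print(unwinding(1 - 4j))   # -1
```

When the unwinding number is zero, log e^z = z exactly, which is the sense in which U(A) measures the failure of such identities at matrix arguments.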

A special afternoon session celebrated the 70th birthday of Krystyna Zietak, who has made many contributions to numerical linear algebra and approximation theory. Krystyna gave the opening talk in which she described some highlights of her international travels and of hosting visitors in Wroclaw, well illustrated by photos.

images/130412-1255-30-2478.jpg
Happy birthday Krystyna!

Following the session we had a reception in the Living Worlds gallery of the Manchester Museum, followed by a dinner in the Fossil gallery, with Stan the Tyrannosaurus Rex looking over us.

images/130411-1726-22-09341.jpg
Dinner in the Fossil gallery

Financial support for the workshop came from the European Research Council and book displays were kindly provided by Cambridge University Press, Oxford University Press, Princeton University Press and SIAM.

Most of the talks are available in PDF format from the workshop programme page.

A gallery of photos from the workshop has been produced, combining the efforts of several photographers.

Emacs: The Ultimate Editor?

I started using Emacs about 1990 but have been using it exclusively for just two years. Prior to that my main editor was TSE Pro – a fast, customizable Windows-only editor that evolved from a 1980s DOS editor called Qedit. The motivation for switching to Emacs was that I wanted to be able to work in the same way on both Windows and Mac machines. After looking at the possibilities I settled on Emacs as the ideal solution.

Although Emacs dates from the 1980s, it seems to have enjoyed renewed popularity in the last few years, with regular new releases (currently version 24.3), many new packages appearing, several excellent Emacs blogs posting regularly, and even an Emacs conference held in London in March 2013.

For me the main advantages of Emacs are

  • Excellent LaTeX integration through AucTeX and RefTeX.
  • Org mode, an incredibly powerful mode for working with plain text. Among its uses are
    • making notes and outlines,
    • TODO lists,
    • writing documents in a simple markup language that can be exported to LaTeX, html, and various other formats,
    • writing and managing WordPress blogs, via org2blog (this blog is produced entirely from within Emacs, apart from some tweaking in WordPress).

    In all these cases, the ability to narrow the view to certain parts of the buffer, and to reorder logical units via simple keypresses, provides tremendous usability.

  • Complete built-in documentation, with the ability to see the Emacs Lisp source code for all functions except the small number of low-level functions written in compiled C.
  • The ability to customize every aspect of Emacs, and in particular to reassign almost any keypress.
  • The use of Emacs is essentially system-independent; in particular Emacs has its own file management functions, which bypass the Windows, Mac or Linux file open/save dialog boxes.

I’ll write about some of these aspects in future posts.

For now, here are some videos that provide more information:

And for an excellent perspective on the eternal “Emacs/Vi versus the latest hot editor” debate I recommend the post Good tools by James Bennett, which appeared just as I was about to publish this post.

How Accurate Are Spreadsheets in the Cloud?

For a vector x with n elements the sample variance is s_n^2 = \frac{1}{n-1} \sum_{i=1}^n (x_i - \overline{x})^2, where the sample mean is \overline{x} = \frac{1}{n} \sum_{i=1}^n x_i. An alternative formula often given in textbooks is s_n^2 = \frac{1}{n-1} \left( \sum_{i=1}^n x_i^2 - \frac{1}{n} \left(\sum_{i=1}^n x_i \right)^2 \, \right). This second formula has the advantage that it can be computed with just one pass through the data, whereas the first formula requires two passes. However, the one-pass formula can suffer damaging subtractive cancellation, making it numerically unstable. When I wrote my book Accuracy and Stability of Numerical Algorithms I found that several pocket calculators appeared to use the one-pass formula.

How do spreadsheet apps available in web browsers and hosted in the cloud fare on computations such as this? I used Google Sheets to compute the standard deviation of vectors of the form x = [m, m+1, m+2] (Google Sheets does not seem to have a built-in function for the sample variance; the standard deviation is the square root of the sample variance). Here is what I found. (The spreadsheet that produced these results is available as this xlsx file. Note that if you click on that link it will probably load into Excel and display the correct result.)

| m    | Exact standard deviation | Google’s result |
|------+--------------------------+-----------------|
| 10^7 | 1                        | 1               |
| 10^8 | 1                        | 0               |

The incorrect result 0 for m=10^8 is what I would expect from the one-pass formula in IEEE double precision arithmetic, which has the equivalent of about 16 significant decimal digits of precision, since \sum_{i=1}^n x_i^2 and \frac{1}{n} \left(\sum_{i=1}^n x_i\right)^2 are both about 10^{16} and so there is not enough precision to retain the difference (which is equal to 2). A computation in MATLAB verifies that the one-pass formula returns 0 in IEEE double precision arithmetic.
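The cancellation is easy to reproduce. Here is a minimal Python sketch (Python floats are IEEE doubles, so it behaves like the MATLAB computation just mentioned); the function names are my own:

```python
def var_two_pass(x):
    """Textbook two-pass sample variance: numerically stable."""
    n = len(x)
    mean = sum(x) / n
    return sum((xi - mean) * (xi - mean) for xi in x) / (n - 1)

def var_one_pass(x):
    """One-pass formula: prone to catastrophic cancellation."""
    n = len(x)
    sum_sq = sum(xi * xi for xi in x)
    total = sum(x)
    return (sum_sq - total * total / n) / (n - 1)

m = 1e8
x = [m, m + 1, m + 2]      # exact sample variance is 1
print(var_two_pass(x))      # 1.0
print(var_one_pass(x))      # 0.0: both terms are near 1e16 and their
                            # difference (exactly 2) is lost entirely
```

With m = 1e7 both functions return 1.0, matching the table above: the squares are then around 1e14, small enough that the difference of 2 survives in double precision.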

It seems that Google Sheets is using IEEE double precision arithmetic internally, because the expression 3\times (4/3-1)-1 evaluates to 2.2E-16. So it appears that Google may be using the one-pass formula.
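The same probe can be run in any environment whose floats are IEEE doubles. In Python the computed value has magnitude exactly 2^{-52} (the unit roundoff scale), which matches the 2.2E-16 that Google Sheets displays up to sign:

```python
# Classic probe for IEEE double precision: 4/3 is not exactly
# representable, and the rounding error surfaces at the scale of
# the machine epsilon 2^-52 ~ 2.2e-16.
r = 3 * (4 / 3 - 1) - 1
print(abs(r) == 2 ** -52)   # True
print(abs(r))               # 2.220446049250313e-16
```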

This use of the unstable formula is deeply unsatisfactory, but it is just the tip of the iceberg. In a recent paper Spreadsheets in the Cloud—Not Ready Yet, Bruce McCullough and Talha Yalta show that Google Sheets, Excel Web App and Zoho Sheet all fail on various members of a set of “sanity tests”. This might not be too surprising if you are aware of McCullough’s earlier work in which he found errors in several versions of Microsoft Excel.

However, spreadsheets in the cloud bring further complications, as noted by McCullough and Yalta:

  • These spreadsheet apps do not carry version information, and the software can be changed by the provider at any time without announcement. It is therefore impossible to reproduce results computed previously.
  • The hardware and software environment on which the software is running is not specified, which adds another level of irreproducibility.
  • McCullough and Yalta found that the Excel Web App could produce different output from Excel 2010. Anyone moving a spreadsheet between the two applications could be in for a surprise.

The conclusion: use spreadsheets in the cloud at your peril! In fact, I avoid spreadsheets altogether. Anything I want to do can be done better in MATLAB, LaTeX or Emacs Org mode.

SIAM Conference on Computational Science and Engineering 2013

As predicted in my preview post, this conference, held on the Boston waterfront, proved to be SIAM’s largest ever, with 1378 attendees. Over 1000 presentations were given in up to 20 parallel minisymposia at a time, but this did mean that there was at least one talk (and usually several) of interest to me in almost every time slot.

One thing I learned from the conference is how widely Python is being used in computational science, especially for solving real world problems involving large amounts of data. This is partly due to its ability to act as the glue between codes written in other languages and web applications. The IPython environment, with its notebook interface, was featured in a number of talks, in some of which the slides were displayed using the notebook.

The following highly selective photos will give a flavour of the conference.

images/130225-0726-35-1605.jpg The conference venue. Note the residual snow, which fortunately did not fall in any serious amounts during the conference.

images/130226-2124-20-1630.jpg The poster session of about 65 posters was preceded by a poster blitz (1 minute presentations) and was accompanied by an excellent dessert. This photo shows Edvin Deadman (University of Manchester and NAG Ltd.) discussing his poster on Matrix Functions and the NAG Library with Cleve Moler and Charlie Van Loan (authors of the classic Nineteen Dubious Ways to Compute the Exponential of a Matrix paper). For some thoughts on poster sessions by one of the conference attendees see Please, no posters! by David Gleich.

images/130227-1327-55-1690.jpg Josh Bloom’s (UC Berkeley) invited presentation Automated Astrophysics in the Big Data Era contained a fascinating mix of observational astronomy, machine learning, robotic telescopes, numerical linear algebra, and Python, with a focus on classifying stars.

images/130228-1629-26-1706.jpg It was interesting to see MapReduce being used to implement numerical algorithms, notably in the minisymposium Is MapReduce Good for Science and Simulation Data? organized by Paul Constantine (Stanford; standing) and David Gleich (Purdue; sitting, with pointer).

images/130227-1157-03-1678.jpg Here is the lunchtime panel Big Data Meets Big Models being videoed. Highlights from this panel and some of the invited plenary talks will be available in due course on the SIAM Presents YouTube channel.

If you weren’t at the conference perhaps you can make it to the next one in two years’ time (date and location to be announced). In the meantime a good way to keep up with events is to join the SIAM Activity Group on Computational Science and Engineering, which organizes the conference.