As an undergraduate and postgraduate student in the early 1980s I owned a Commodore PET microcomputer and then a Commodore 64. Both came with Basic built into ROM. On booting the machines you were presented with a flashing cursor and could type in programs to be executed by the Basic interpreter or load programs from cassette or disk.
I used the machines in my research on matrix computations, writing many programs in Basic and Comal (a more structured, Pascal-like version of Basic, originating in Denmark).
Recently, I was looking for information about the microprocessors used in the early microcomputers. I could not find what I wanted, but remembered that some relevant information is contained in the appendices of a technical report Matrix Computations in Basic on a Microcomputer that I wrote in 1985 (the published version [1] omits the appendices). Specifically, the appendices contain
- specifications of the Commodore 64, the BBC Microcomputer Model B and Model B with Torch Z-80 second processor, and the Amstrad CPC 464,
- examples of 6502 assembly language programs for the Commodore 64 and the BBC Model B,
- Basic and Comal programs for the above machines.
As these are of some historical interest I have scanned the technical report and made it available as a MIMS EPrint. I still have the original hard copy, which was produced on the Commodore 64 itself using a wordprocessor called Vizawrite 64 and printed on an Epson FX-80 dot matrix printer, taking advantage of the printer’s ability to produce subscripts. These were the days before TeX was widely available, and looking back I am surprised that I was able to produce such a neatly formatted document, with tables and program listings. Just printing it must have taken a few hours. Vizawrite was the last wordprocessor I have used seriously, and adapting Tony Hoare’s quote about Algol I would say that it was “so far ahead of its time that it was not only an improvement on its predecessors but also on nearly all its successors”.
The purpose of the report was to convert the LINPACK Fortran linear equation solvers SGEFA/SGESL to Basic and run them on the four machines mentioned above. I found that the cost of the computations was dominated by subscripting calculations and not the floating point arithmetic. I therefore translated the Basic Linear Algebra Subprograms (BLAS) that are called by the codes into assembly language for the Commodore and BBC machines and obtained significant speedups, due to removal of the subscripting overheads.
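To give a flavour of where the time went, here is a minimal sketch of the SAXPY-style inner loop at the heart of SGEFA, written in the style of 1980s microcomputer Basic (the line numbers and variable names are illustrative, not taken from the report):

```basic
100 REM SAXPY INNER LOOP: Y(I) = Y(I) + T*X(I) FOR I = 1 TO N
110 REM ON AN 8-BIT BASIC INTERPRETER, EACH PASS RECOMPUTES THE
120 REM ADDRESSES OF X(I) AND Y(I) FROM SCRATCH; THIS SUBSCRIPTING
130 REM ARITHMETIC SWAMPS THE FLOATING POINT MULTIPLY AND ADD.
140 FOR I = 1 TO N
150 Y(I) = Y(I) + T*X(I)
160 NEXT I
```

Replacing loops like this with assembly language BLAS allows the array elements to be stepped through with simple pointer increments instead of repeated address calculations.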
Writing in assembly language is very different from writing in a high-level language, because the available operations are so simple: load the contents of a memory location into the accumulator, increment or decrement by one the value stored in a memory location, and so on. Fortunately, when I started this project I already had experience of writing 6502 assembly language as I had used it in my Music Master program for the Commodore 64 published by Supersoft. And I had the excellent Mikro Assembler cartridge for the Commodore 64 that made developing assembly code as easy and enjoyable as it could be.
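As an illustration of how primitive those operations are (the label COUNT is hypothetical, not from the report), adding one to a counter held in memory looks like this in 6502 assembly:

```
        LDA COUNT   ; load the contents of COUNT into the accumulator
        CLC         ; clear the carry flag before adding
        ADC #$01    ; add 1 to the accumulator
        STA COUNT   ; store the result back in COUNT
```

The 6502 does offer INC COUNT as a single instruction for this particular task, but anything more elaborate, such as a floating point operation or a two-byte address calculation, has to be built up from steps of this size.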
The LINPACK routines that I translated are the ones originally used for the LINPACK benchmark that has been used since the 1980s to measure the speed of the world’s fastest computers. Based on the timings in my article extrapolated to 100 by 100 matrices, here is a table comparing the speed in megaflops of the Commodore and BBC machines with those of two recent low-end computers:
| Machine | Year | Megaflops |
|---|---|---|
| Commodore 64 (Basic + machine code) | 1985 | 0.0005 |
| BBC Model B (Basic + machine code) | 1985 | 0.0008 |
| iPad 2 (data from Jack Dongarra) | 2011 | 620 |
| Raspberry Pi | 2013 | 42 |
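For anyone wanting to check figures like those in the table, the megaflop rate follows from the standard LINPACK benchmark operation count of 2n^3/3 + 2n^2 flops for factorizing and solving an n-by-n system. A sketch, again in period-style Basic (the program is illustrative, not from the report):

```basic
100 REM MEGAFLOPS FOR AN N-BY-N LINPACK SOLVE TAKING T SECONDS
110 N = 100
120 F = 2*N^3/3 + 2*N^2
130 INPUT "TIME IN SECONDS"; T
140 PRINT "MEGAFLOPS ="; F/(T*1E6)
```

At 0.0005 megaflops, a 100-by-100 solve on the Commodore 64 works out at roughly 23 minutes.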
Footnotes:
1. N. J. Higham. Matrix computations in Basic on a microcomputer. IMA Bulletin, 22:13-20, 1986.
Just after I published this post I noticed a post by a colleague in the School of Computer Science who gives more background on the BBC machine and the Raspberry Pi. See http://roguepointer.net/pi-to-the-power-of-ug/
This post has made me *very* happy! Good quality retro geekage 🙂
Reblogged this on Pink Iguana.
Brings back memories of implementing (in assembly language) a multi-lattice 2d Ising model Monte Carlo simulation on the BBC Model B (using the display pixels as memory for visualisation). It ran 8 lattices in parallel, one in each bit of the byte, and the RNG did look like static on the monitor!
Thank you for posting this, information like this isn’t easy to find these days!
This is a lovely paper, touching on difficult points of cross-system interpreter benchmarking. You really squeezed as much as you could out of Vizawrite and the FX-80, too.