What’s New in MATLAB R2021a?

In this post I discuss some of the new features in MATLAB R2021a. As usual in this series, I focus on a few of the features most relevant to my interests. See the release notes for a detailed list of the many changes in MATLAB and its toolboxes.

Name=Value Syntax

In function calls that accept “name, value” pairs, separated by a comma, the values can now be specified with an equals sign. Example:

x = linspace(0,2*pi,100); y = tan(x);

% Existing syntax
plot(x,y,'Color','red','LineWidth',2)
plot(x,y,"Color","red","LineWidth",2)

% New syntax
plot(x,y,Color = "red",LineWidth = 2)
lw = 2; plot(x,y,Color = "red",LineWidth = lw) 

Note that names and values can be given as character vectors in single quotes or as strings in double quotes (strings were introduced in R2016b).

There are some limitations, including that all name=value arguments must appear after any comma-separated name,value pairs and after any positional arguments (arguments that must be passed to a function in a specific order).
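
For example, the following calls illustrate the restriction (a sketch; the second call errors because a name=value argument precedes a comma-separated pair):

plot(x,y,'LineWidth',2,Color = "red")    % OK: name=value after name,value pair.
% plot(x,y,Color = "red",'LineWidth',2)  % Error: name=value before name,value pair.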

Eigensystem of Skew-Symmetric Matrix

For skew-symmetric and skew-Hermitian matrices, the eig function now guarantees that the matrix of eigenvectors is unitary (to machine precision) and that the computed eigenvalues are pure imaginary. The code

rng(2); n = 5; A = gallery('randsvd',n,-1e3,2); A = 1i*A; 
[V,D] = eig(A); 
unitary_test = norm(V'*V-eye(n),1)
norm_real_part = norm(real(D),1)

produces

% R2020b
unitary_test =
   9.6705e-01
norm_real_part =
   8.3267e-17

% R2021a
unitary_test =
   1.9498e-15
norm_real_part =
     0

For this matrix MATLAB R2020b produces an eigenvector matrix that is far from being unitary and eigenvalues with a nonzero (but tiny) real part, whereas MATLAB R2021a produces pure imaginary eigenvalues and eigenvectors that are unitary to machine precision.

Performance Improvements

Among the reported performance improvements are faster multiplication of large sparse matrices and faster solution of linear systems with multiple right-hand sides and a sparse coefficient matrix, both resulting from added support for multithreading.
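
As an illustration of the second operation (a sketch of the usage only; timings are machine dependent):

A = gallery('poisson',100);   % Sparse 10000-by-10000 coefficient matrix.
B = rand(size(A,1),50);       % 50 right-hand sides.
tic, X = A\B; toc             % One sparse factorization, multithreaded solves.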

Symbolic Math Toolbox

An interesting addition to the Symbolic Math Toolbox is the symmatrix class, which represents a symbolic matrix. An example of usage is

>> A = symmatrix('A',[2 2]); B = symmatrix('B',[2 2]); whos A B
  Name      Size            Bytes  Class        Attributes

  A         2x2                 8  symmatrix              
  B         2x2                 8  symmatrix              

>> X = A*B, Y = symmatrix2sym(X), whos X Y
X =
A*B
Y =
[A1_1*B1_1 + A1_2*B2_1, A1_1*B1_2 + A1_2*B2_2]
[A2_1*B1_1 + A2_2*B2_1, A2_1*B1_2 + A2_2*B2_2]
  Name      Size            Bytes  Class        Attributes

  X         2x2                 8  symmatrix              
  Y         2x2                 8  sym    

The range of functions that can be applied to a symmatrix is as follows:

>> methods symmatrix

Methods for class symmatrix:

adjoint         horzcat         mldivide        symmatrix       
cat             isempty         mpower          symmatrix2sym   
conj            isequal         mrdivide        tan             
cos             isequaln        mtimes          times           
ctranspose      kron            norm            trace           
det             latex           plus            transpose       
diff            ldivide         power           uminus          
disp            length          pretty          uplus           
display         log             rdivide         vertcat         
eq              matlabFunction  sin             
exp             minus           size            

Static methods:

empty         

In order to invert A*B in this example, or find its eigenvalues, use inv(Y) or eig(Y).
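
For example (a sketch; the lengthy symbolic output is omitted):

Y = symmatrix2sym(A*B);   % Convert from symmatrix to sym.
e = eig(Y);               % Eigenvalues of A*B in terms of the entries.
Z = inv(Y);               % Inverse of A*B.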

What’s New in MATLAB R2020a and R2020b?

In this post I discuss new features in MATLAB R2020a and R2020b. As usual in this series, I focus on a few of the features most relevant to my work. See the release notes for a detailed list of the many changes in MATLAB and its toolboxes.

Exportgraphics (R2020a)

The exportgraphics function is very useful for saving a tightly cropped version of a figure to a file, with the border white instead of gray. Simple usages are

exportgraphics(gca,'image.pdf')
exportgraphics(gca,'image.jpg','Resolution',200)

I have previously used the export_fig function, which is not built into MATLAB but is available from File Exchange; I think I will be using exportgraphics instead from now on.

Svdsketch (R2020b)

The new svdsketch function computes the singular value decomposition (SVD) USV^T of a low rank approximation to a matrix (U and V orthogonal, S diagonal with nonnegative diagonal entries). It is mainly intended for use with matrices that are close to having low rank, as is the case in various applications.

This function uses a randomized algorithm that computes a sketch of the given m-by-n matrix A, which is essentially a product Q^TA, where the columns of Q form an orthonormal basis for the range of A\Omega and \Omega is a random n-by-k matrix. The value of k is chosen automatically to achieve \|USV^T-A\|_F \le \mathrm{tol}\|A\|_F, where \mathrm{tol} is a tolerance that defaults to \epsilon^{1/4} and must not be less than \epsilon^{1/2}, where \epsilon is the machine epsilon (2\times 10^{-16} for double precision). The algorithm includes a power method iteration that refines the sketch before computing the SVD.

The output of the function is an SVD in which U and V are numerically orthogonal and the singular values in S of size \mathrm{tol} or larger are good approximations to singular values of A, but smaller singular values in S may not be good approximations to singular values of A.

Here is an example. The code

n = 8; rng(1); A = gallery('randsvd',n,1e8,3);
[U,S,V] = svdsketch(A,1e-3);
rel_res = norm(A-U*S*V')/norm(A)
singular_values = [svd(A) [diag(S); zeros(n-length(S),1)]]

produces the following output, with the exact singular values in the first column and the approximate ones in the second column:

rel_res =
   1.9308e-06
singular_values =
   1.0000e+00   1.0000e+00
   7.1969e-02   7.1969e-02
   5.1795e-03   5.1795e-03
   3.7276e-04   3.7276e-04
   2.6827e-05   2.6827e-05
   1.9307e-06            0
   1.3895e-07            0
   1.0000e-08            0

The approximate singular values are correct down to around 10^{-5}, which is more than the 10^{-3} requested. This is a difficult matrix for svdsketch because there is no clear gap in the singular values of A.

Axis Padding (R2020b)

The new padded option of the axis function puts some padding between the axis limits and the surrounding box. The code

x = linspace(0,2*pi,50); plot(x,tan(x),'linewidth',1.4)
title('Original axis')
axis padded, title('Padded axis')

produces the output

padded_1a.jpg

padded_2a.jpg

Turbo Colormap (R2020b)

The default colormap changed from jet (the rainbow color map) to parula in R2014b (with a tweak in R2017a), because parula is more perceptually uniform and maintains information when printed in monochrome. The new turbo colormap is a more perceptually uniform version of jet, as these examples show. Notice that turbo has a longer transition through the greens and yellows. If you can’t give up on jet, use turbo instead.
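
Images like those below can be regenerated with code along the following lines (a sketch; replace turbo by jet or parula to compare):

surf(peaks)
colormap(turbo), colorbar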

Turbo:

turbo.jpg

Jet:

colormap_jet.jpg

Parula:

colormap_parula.jpg

ND Arrays (R2020b)

The new pagemtimes function performs matrix multiplication on pages of n-dimensional arrays, while pagetranspose and pagectranspose carry out the transpose and conjugate transpose, respectively, on pages of n-dimensional arrays.
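
Here is a small sketch of the three functions:

A = rand(2,3,4); B = rand(3,5,4);
C = pagemtimes(A,B);      % C(:,:,k) = A(:,:,k)*B(:,:,k), so C is 2-by-5-by-4.
At = pagetranspose(A);    % At(:,:,k) = A(:,:,k).', so At is 3-by-2-by-4.
Ac = pagectranspose(A);   % Ac(:,:,k) = A(:,:,k)', the conjugate transpose.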

Performance

Both releases report significantly improved speed of certain functions, including some of the ODE solvers.

Randsvd Matrices with Large Growth Factors

Sixty years ago James Wilkinson published his backward error analysis of Gaussian elimination for solving a linear system Ax = b, where A is a nonsingular n\times n matrix. He showed that in floating-point arithmetic the computed solution \widehat{x} satisfies

(A+\Delta A) \widehat{x} = b, \qquad \|\Delta A\|_{\infty} \le p(n) \rho_n u \|A\|_{\infty},

where u is the unit roundoff and p is a low degree polynomial. The term \rho_n is the growth factor, defined by

\rho_n = \displaystyle\frac{\max_{i,j,k} |a_{ij}^{(k)}|}{\max_{i,j}|a_{ij}|} \ge 1,

where the a_{ij}^{(k)} are the elements at the kth stage of Gaussian elimination. The growth factor measures how much elements grow during the elimination. We would like the product p(n)\rho_n to be of order 1, so that \Delta A is a small relative perturbation of A. We therefore need \rho_n not to be too large.

With partial pivoting, in which row interchanges are used to ensure that at each stage the pivot element is the largest in its column, Wilkinson showed that \rho_n \le 2^{n-1} and that equality is possible. Such exponential growth implies a large \Delta A (unless we are lucky), meaning a severe loss of numerical stability. However, seventy years of digital computing experience have shown that \rho_n is usually of modest size in practice. Explaining why this is the case is one of the outstanding problems in numerical analysis.

It is easy to experiment with growth factors in MATLAB. I will use the function

function g = gf(A)
%GF     Approximate growth factor.
%   g = GF(A) is an approximation to the
%   growth factor for LU factorization
%   with partial pivoting.
[~,U] = lu(A);
g = max(abs(U),[],'all')/max(abs(A),[],'all');

It computes a lower bound on the growth factor (since it only considers k=n in the numerator in the definition), but it is entirely adequate for our purposes here. Let’s compute the growth factor for a random matrix of order 10,000 with elements from the standard normal distribution (mean 0, variance 1):

>> rng(1); n = 10000; gf(randn(n))
ans =
6.1335e+01

Growth of 61 is unremarkable for a matrix of this size. Now we try a matrix of the same size generated by the gallery('randsvd') function:

>> A = gallery('randsvd',n,1e6,2,[],[],1);
>> gf(A)
ans =
9.7544e+02

This function generates an n\times n matrix with known singular value distribution and with singular vector matrices that are random orthogonal matrices from the Haar distribution. The parameter 1e6 specifies the 2-norm condition number, while the 2 (the mode parameter) specifies that there is only one small singular value, so the singular values are 1 repeated n-1 times and 1e-6. Growth of 975 is exceptional! These matrices have been in MATLAB since the 1990s, but this large growth property has apparently not been noticed before.

It turns out that, with high probability, mode 2 randsvd matrices produce growth factors of size at least n/(4 \log n) for any condition number and for any pivoting strategy, not just partial pivoting. One way to check this is to randomly permute the columns of A before doing the LU factorization with partial pivoting:

>> gf(A(:,randperm(n)))
ans =
7.8395e+02

Here is a plot showing the maximum over 12 randsvd matrices for each n of the growth factors for three different pivoting strategies, along with the maximum growth factors for partial pivoting for rand and randn matrices. The black curve is n/(4 \log n). This plot emphasizes the unusually large growth for mode 2 randsvd matrices.

growth_randsvd_max_1e6.jpg

What is the explanation for this large growth? It stems from three facts.

  • Haar distributed orthogonal matrices have the property that their elements are fairly small with high probability, as shown by Jiang in 2005.
  • If the largest entries in magnitude of A and A^{-1} are both small, in the sense that their product is \theta \ll 1, then A will produce a growth factor of at least 1/\theta for any pivoting strategy. This was proved by Des Higham and me in the paper Large Growth Factors in Gaussian Elimination with Pivoting.
  • If W is an orthogonal matrix generating large growth then a rank-1 perturbation of 2-norm at most 1 tends to preserve the large growth.

For full details see the new EPrint Random Matrices Generating Large Growth in LU Factorization with Pivoting by Des Higham, Srikara Pranesh and me.

Is growth of order n a problem in practice? It can be for two reasons.

  • The largest dense linear systems Ax = b solved today are of dimension n = 10^7. If we work in single precision then nu \approx 1 and so LU factorization can potentially be completely unstable if there is growth of order n.
  • For IEEE half precision arithmetic growth of order n will cause overflow once n exceeds 10^5 / \max_{i,j} |a_{ij}|. It was overflow in half precision LU factorization on randsvd matrices that alerted us to the large growth.

Simulating Low Precision Floating-Point Arithmetics in MATLAB

At least five floating-point arithmetics are available in mainstream hardware:

  • the IEEE double precision (fp64), single precision (fp32), and half precision (fp16) formats,
  • bfloat16, and
  • tf32, introduced in the recently announced NVIDIA A100, which uses the NVIDIA Ampere GPU architecture.

Only fp32 and fp64 are available on current Intel processors and most programming environments support only these two precisions. For many of us it is therefore necessary to simulate other precisions if we wish to explore their properties and the behavior of algorithms when run at these precisions.

chop-ex.jpg

Srikara Pranesh and I have written a MATLAB function chop that allows arithmetic of arbitrary precision and exponent range (both lower than those of fp64) to be simulated. The idea is very simple: variables are stored in fp32 or fp64 and then rounded to the required precision and exponent range after each operation by calling chop. The precision and range, as well as various other parameters, can be changed at any point in the computation.

Our approach can be compared with defining a new data type and storage format and converting to and from single precision or double precision in order to carry out the computations. Our approach avoids the overheads of repeatedly converting to and from the special storage format. It gains speed and generality at the cost of requiring chop calls to be inserted in the code.

Chop is available from this GitHub repository and from MathWorks File Exchange. The motivation behind chop is described in this open access paper in SIAM J. Sci. Comput. We have made use of chop to simulate fp16 and bfloat16 in several recent papers, including on mixed precision algorithms, probabilistic rounding error analysis, and fast and accurate summation.

How to Use Chop

A call to chop has the form chop(X,options), where the structure options specifies how to round the elements of the real array X to a lower precision arithmetic. X should be a single precision or double precision array and the output will have the same type. The structure options controls various aspects of the rounding, as we now explain.

  1. The arithmetic format is specified by options.format, which is one of
    • 'b', 'bfloat16': bfloat16,
    • 'h', 'half', 'fp16': IEEE half precision (the default),
    • 's', 'single', 'fp32': IEEE single precision,
    • 'd', 'double', 'fp64': IEEE double precision,
    • 'c', 'custom': custom format.

    In the last case the (base 2) format is defined by options.params, which is a 2-vector [t,emax], where t is the number of bits in the significand (including the hidden bit) and emax is the maximum value of the exponent.

  2. options.subnormal specifies whether subnormal numbers are supported (if they are not, subnormals are flushed to zero):
    • 0 = do not support subnormals (the default for bfloat16),
    • 1 = support subnormals (the default for fp16, fp32, and fp64).
  3. The form of rounding is specified by options.round:
    • 1: round to nearest using round to even last bit to break ties (the default);
    • 2: round towards plus infinity (round up);
    • 3: round towards minus infinity (round down);
    • 4: round towards zero;
    • 5: stochastic rounding—round to the next larger or next smaller floating-point number with probability proportional to 1 minus the distance to those floating-point numbers;
    • 6: stochastic rounding—round to the next larger or next smaller floating-point number with equal probability.

    For stochastic rounding, exact floating-point numbers are not changed.

  4. If options.flip = 1 (default 0) then each element of the rounded result has, with probability options.p (default 0.5), a randomly chosen bit in its significand flipped. This option is useful for simulating soft errors.
  5. If options.explim = 0 (default 1) then emax (the maximal exponent) for the specified arithmetic is ignored, so overflow, underflow, or subnormal numbers will be produced only if necessary for the data type of X. This option is useful for exploring low precisions independent of range limitations.

To avoid repeatedly passing the options argument, one can issue a call chop([],options) and the subsequent calls will use the previously given options parameters. The value of options can be inspected with [~,options] = chop.
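
For example (a sketch):

clear chop                         % Clear any previously stored options.
opt.format = 'b'; opt.round = 5;   % bfloat16 with stochastic rounding.
chop([],opt);                      % Store the options.
y = chop(rand(3));                 % This call uses the stored options.
[~,opt_in_use] = chop;             % Inspect the options currently in use.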

In the rest of this post we give examples of the use of chop.

Fp16 versus Bfloat

The code

x = 1/3;
opt_h.format = 'h'; xh = chop(x,opt_h); err_h = (x - xh)/x
opt_b.format = 'b'; xb = chop(x,opt_b); err_b = (x - xb)/x
x = 70000;
xh = chop(x,opt_h)
xb = chop(x,opt_b)

produces the output

err_h =
2.4414e-04
err_b =
-1.9531e-03
xh =
Inf
xb =
70144

which illustrates the higher precision but smaller range of fp16 compared with bfloat16.

Rounding Modes

The code

opt.format = 'h'; x = 0.1; rng(2)
for k = 1:6
    opt.round = k;
    y(k) = chop(x,opt);
end
errors = y - x
diff_y = diff(y)

rounds 0.1 to fp16 using six different rounding modes. The output is

errors =
  -2.4414e-05   3.6621e-05  -2.4414e-05  -2.4414e-05  -2.4414e-05   3.6621e-05
diff_y =
   6.1035e-05  -6.1035e-05            0            0   6.1035e-05

Rounding maps a number to the next larger or next smaller floating-point number, so there are only two possibilities for the error, and they have opposite signs. The diff_y values are consistent with the spacing of the fp16 floating-point numbers around 0.1, which is 2^{-11}/8.

Matrix Multiplication

The overheads of chop can be minimized by choosing a suitable implementation of a computation. Matrix multiplication provides a good example. The multiplication C = A*B of n-by-n matrices involves 2n^3 floating-point operations, each of which needs to be chopped. The following code uses only 2n calls to chop, because it processes an outer product in a single call, taking advantage of the fact that chop can take a matrix argument.

m = 4; n = 3; p = 5;          % Example dimensions.
A = rand(m,n); B = rand(n,p);
C = zeros(m,p);
for i = 1:n
    C = chop(C + chop(A(:,i)*B(i,:)));  % Chop the outer product, then the sum.
end

Tf32 Arithmetic

The recently announced NVIDIA A100 GPU supports a new floating-point format called tf32 in its tensor cores. Tf32 is a 19-bit format that has 11 bits in the significand (including the hidden bit) and 8 bits in the exponent. So it has the precision of fp16 and the range of bfloat16, effectively combining the best features of these two formats. We can simulate tf32 using the custom format of chop. The code

opt.format = 'custom';
% tf32. Significand: 10 bits plus 1 hidden, exponent: 8 bits.
t = 11; emax = 127;
opt.params = [t,emax];
x = 1/3;
xt = chop(x,opt); err_t = (x - xt)/x
x = 70000;
xt = chop(x,opt)

produces the output

err_t =
2.4414e-04
xt =
70016

The error is the same as for fp16 but the number 70000 now rounds to a finite floating-point number.

Singular Values of Rank-1 Perturbations of an Orthogonal Matrix

What effect does a rank-1 perturbation of norm 1 to an n\times n orthogonal matrix have on the extremal singular values of the matrix? Here, and throughout this post, the norm is the 2-norm. The largest singular value of the perturbed matrix is bounded by 2, as can be seen by taking norms, so let us concentrate on the smallest singular value.

Consider first a perturbation of the identity matrix: B = I + xy^T, for unit norm x and y. The matrix B has eigenvalues 1 (repeated n-1 times) and 1 + y^Tx. The matrix is singular—and hence has a zero singular value—precisely when y^Tx = -1, which is the smallest value that the inner product y^Tx can take.
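
For instance, taking y = -x achieves y^Tx = -1 and gives a singular B:

n = 5; x = randn(n,1); x = x/norm(x); y = -x;  % y'*x = -1.
B = eye(n) + x*y';                             % B = I - x*x'.
min(svd(B))                                    % Zero, up to roundoff.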

Another example is B = A + yy^T, where A = I - 2yy^T and y has unit norm, so that A is a Householder matrix. Here, B = I - yy^T is singular with null vector y, so it has a zero singular value.

Let’s take a random orthogonal matrix and perturb it with a random rank-1 matrix of unit norm. We use the following MATLAB code.

n = 100; rng(1)
A = gallery('qmult',n); % Random Haar distrib. orthogonal matrix.
x = randn(n,1); y = randn(n,1);
x = x/norm(x); y = y/norm(y);
B = A + x*y';
svd_B = svd(B);
max_svd_B = max(svd_B), min_svd_B = min(svd_B)

The output is

max_svd_B =
1.6065e+00
min_svd_B =
6.0649e-01

We started with a matrix having singular values all equal to 1 and now have a matrix with largest singular value a little larger than 1 and smallest singular value a little smaller than 1. If we keep running this code the extremal singular values of B do not change much; for example, the next run gives

max_svd_B =
1.5921e+00
min_svd_B =
5.9213e-01

A rank-1 perturbation of unit norm could make A singular, as we saw above, but our random perturbations are producing a well conditioned matrix.

What is the explanation? First, note that a rank-1 perturbation to an orthogonal matrix A can change only two of the singular values, because the singular values of B are the square roots of the eigenvalues of B^TB, which is the identity plus a rank-2 matrix. So n-2 singular values remain at 1.

A result of Benaych-Georges and Nadakuditi (2012) says that for large n the largest and smallest singular values of B tend to (1+\sqrt{5})/2 = 1.618\dots and (-1+\sqrt{5})/2 = 0.618\dots respectively! As our example shows, n does not have to be large for these limits to be approximations correct to roughly the first digit.

The result in question requires the original orthogonal matrix to be from the Haar distribution, and such matrices can be generated by A = gallery('qmult',n) or by the construction

[Q,R] = qr(randn(n));
Q = Q*diag(sign(diag(R)));

(See What Is a Random Orthogonal Matrix?.) The result also requires x and y to be unit-norm random vectors with independent entries from the same distribution.

However, as the next example shows, the perturbed singular values can be close to the values that the Benaych-Georges and Nadakuditi result predicts even when the conditions of the result are violated:

n = 100; rng(1)
A = gallery('orthog',n);   % Orthogonal matrix (not random, not Haar).
x = rand(n,1); y = (1:n)'; % Non-random y.
x = x/norm(x); y = y/norm(y);
B = A + x*y';
svd_B = svd(B);
max_svd_B = max(svd_B), min_svd_B = min(svd_B)

The output is

max_svd_B =
1.6069e+00
min_svd_B =
6.0687e-01

The question of the conditioning of a rank-1 perturbation of an orthogonal matrix arises in the recent EPrint Random Matrices Generating Large Growth in LU Factorization with Pivoting.

Update of Catalogue of Software for Matrix Functions

funm_ex.jpg

Edvin Hopkins and I have updated to version 3.0 the catalogue of software for matrix functions that we originally produced in 2014 and updated in 2016. It covers what is available in various languages (C++, Fortran, Java, Julia, Python, Rust), problem solving environments (GNU Octave, Maple, Mathematica, MATLAB and associated toolboxes, R, Scilab), and libraries (Armadillo, GNU Scientific Library, NAG Library, SLEPc, SLICOT).

Here are some highlights of changes in the last four years that are reflected in the new version.

  • Several new MATLAB third-party functions have been written, by various authors, notably for f(A)b and for arbitrary precision evaluation of the exponential and logarithm.
  • Matrix function support in Julia has been expanded.
  • Armadillo, Rust, SLEPc, and Tensorflow are included in new entries.

In addition, all URLs and references have been updated.

Suggestions for inclusion in a future revision are welcome.

What’s New in MATLAB R2019a and R2019b?

I didn’t have time earlier this year to write about the first MATLAB release of 2019, so in this post I will discuss R2019a and R2019b together. As usual in this series, I concentrate on the features most relevant to my work.

Modified Akima Cubic Hermite Interpolation (R2019b)

makima_example.jpg

A new function makima computes a piecewise cubic interpolant with continuous first-order derivatives. It produces fewer undulations than the existing spline function, with less flattening than pchip. This form of interpolant has been an option in interp1, interp2, interp3, interpn, and griddedInterpolant since R2017b and is now also a standalone function. For more details see Cleve Moler’s blog post about makima, from which the code used to generate the image above is taken.
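
Here is a minimal sketch of standalone usage (not the code behind the image above):

x = 1:7; y = [-1 -1 -1 0 1 1 1];
xq = linspace(1,7,200);
plot(xq,makima(x,y,xq),'-',x,y,'o')
legend('makima','data')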

Linear Assignment Problem and Equilibration (R2019a)

The matchpairs function solves the linear assignment problem, which requires each row of a matrix to be assigned to a column in such a way that the total cost of the assignments (given by the “sum of assigned elements” of the matrix) is minimized (or maximized). This problem can also be described as finding a minimum-weight matching in a weighted bipartite graph. The problem arises in sparse matrix computations.
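
A small sketch of usage (the cost matrix is made up):

Cost = [4 2 1; 3 5 7; 6 8 9];   % Cost(i,j) = cost of assigning row i to column j.
M = matchpairs(Cost,100)        % Each row of M is a matched [row column] pair;
                                % 100 is the cost of leaving a row unmatched.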

The equilibrate function takes as input a matrix A (dense or sparse) and returns a permutation matrix P and nonsingular diagonal matrices R and C such that R*P*A*C has diagonal entries of magnitude 1 and off-diagonal entries of magnitude at most 1. The outputs P, R, and C are sparse. This matrix equilibration can improve conditioning and can be a useful preprocessing step both in computing incomplete LU preconditioners and in iterative solvers. See, for example, the recent paper Max-Balanced Hungarian Scalings by Hook, Pestana, Tisseur, and Hogg.

The scaling produced by equilibrate is known as Hungarian scaling, and its computation involves solving a linear assignment problem.
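
A sketch of usage:

A = gallery('randsvd',100,1e8);   % Ill-conditioned dense test matrix.
[P,R,C] = equilibrate(A);         % P, R, C are sparse.
B = R*P*A*C;                      % Diagonal of magnitude 1, off-diagonal at most 1.
[cond(A) cond(full(B))]           % Compare conditioning before and after.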

Function Argument Validation (R2019b)

A powerful new feature allows a function to check that its input arguments have the desired class, size, and other aspects. Previously, this could be done by calling the function validateattributes with each argument in turn or by using the inputParser object (see Loren Shure’s comparison of the different approaches).

The new feature provides a more structured way of achieving the same effect. It also allows default values for arguments to be specified. Function argument validation has many aspects and a long help page; here we just illustrate it with a simple example that should be self-explanatory.

function fav_example(a,b,f,t)
  arguments
    a (1,1) double               % Double scalar.
    b {mustBeNumeric,mustBeReal} % Numeric type and real.
    f double {mustBeMember(f,[-1,0,1])} = 0 % -1, 0 (default), or 1.
    t string = "-"               % String, with default value.
  end
  % Body of function.
end

The functions mustBeNumeric and mustBeMember are just two of a long list of validation functions. Here is an example call:

>> a = 0; b = 2; f = 11; fav_example(a,b,f)
Error using fav_example
Invalid input argument at position 3. Value must be a member of ...
this set:
    -1
    0
    1 

One caveat is that a specification within the arguments block

v (1,:) double    

allows not just 1-by-n vectors (for any n) but also n-by-1 vectors, because of the convention that in many situations MATLAB does not distinguish between row and column vectors.

For more extensive examples, see New with R2019b – Function Argument Validation and Spider Plots and More Argument Validation by Jiro Doke at MathWorks Blogs.

Dot Indexing (R2019b)

One can now dot index the result of a function call. For example, consider

>> optimset('Display','iter').Display
ans =
    'iter'

The output of the dot call can even be indexed:

>> optimset('OutputFcn',{@outfun1,@outfun2}).OutputFcn{2}
ans =
  function_handle with value:
    @outfun2

Of course, these are contrived examples, but in real codes such indexing can be useful. This syntax was not allowed in R2019a and earlier.

Hexadecimal and Binary Values (R2019b)

One can now specify hexadecimal and binary values using the prefixes 0x or 0X for hexadecimal (with upper or lower case for A to F) and 0b or 0B for binary. By default the results have integer type.

>> 0xA
ans =
  uint8
   10
>> 0b101
ans =
  uint8
   5

Changes to Function Precedence Order (R2019b)

The Release Notes say “Starting in R2019b, MATLAB changes the rules for name resolution, impacting the precedence order of variables, nested functions, local functions, and external functions. The new rules simplify and standardize name resolution”. A number of examples are given where the meaning of code is different in R2019b, and they could cause an error to be generated for codes that ran without error in earlier versions. As far as I can see, though, these changes are not likely to affect well-written MATLAB code.

Program File Size (R2019b)

Program files larger than “approximately 128 MB” will no longer open or run. This is an interesting change because I cannot imagine writing an M-file of that size. Good programming practice dictates splitting a code into separate functions and not including all of them in one program file. Presumably, some MATLAB users have produced such long files—perhaps automatically generated.

For a modern introduction and reference to MATLAB, see MATLAB Guide (third edition, 2017).

mg3-front-cover.jpg

What’s New in MATLAB R2018b?

spy_new.jpg
spy; text(2,80,'What''s New?','FontSize',28)

The MATLAB R2018b release notes report a number of performance improvements, including faster startup and faster calls to built-in functions. I pick out here a few other highlights from the release (excluding the toolboxes) that are of interest from the numerical analysis point of view.

The new xline and yline functions add vertical or horizontal lines to a plot—something that I have needed from time to time and have previously had to produce with a line or a plot command, together with hold on. For example, xline(pi) plots a vertical line at x = pi.
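
For example (a sketch):

x = linspace(0,2*pi,100);
plot(x,sin(x))
xline(pi,'--','x = \pi');   % Dashed vertical line with a label.
yline(0);                   % Horizontal line at y = 0.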

The stackedplot function is mainly of interest for plotting multiple variables from a table in a single plot, but it can also plot the columns of a matrix. In this example, A is the symmetric eigenvector matrix for the second difference matrix:

A = gallery('orthog',10,1); stackedplot(A);

The resulting plot clearly shows that the number of sign changes increases with the column index.

stackedplot_ex.jpg

String arrays, introduced in R2016b, are now supported throughout MATLAB. In the previous example I could have typed A = gallery("orthog",10,1).

A new option 'all' to the functions all, any, max, min, prod, sum (and a few others) makes them operate on all the dimensions of the array. For example:

>> A = pascal(3)
A =
     1     1     1
     1     2     3
     1     3     6
>> max(A,[],'all')
ans =
     6
>> [prod(A,'all'), sum(A,'all')]
ans =
   108    19

The empty second argument in the max call is needed because max(x,y) is also a supported usage. This is a useful addition. I have often written norm(A(:),inf) to avoid max(max(abs(A))) (which does not work for arrays with more than two dimensions), but now I can write max(abs(A),[],'all') without incurring the overhead of forming A(:).

New functions sinpi and cospi evaluate the sine and cosine functions at the specified multiple of \pi. Thus sinpi(x) is the same as sin(pi*x) except that it does not explicitly compute pi*x and so returns exact answers for integer arguments:

>> [sin(pi)         sinpi(1)]
ans =
   1.2246e-16            0

What’s New in MATLAB R2018a?

MATLAB R2018a was released in March 2018. With each biannual release I try to give a brief overview of the changes in MATLAB (not the toolboxes) that are of most interest to me. These are not comprehensive summaries of what’s new and you should check the release notes for full details.

matlabr2018highlights.jpg
From the MATLAB “R2018a at a Glance” page.

Complex empty arrays, such as that generated with complex([]), now have an (empty) imaginary part instead of being real.

% R2017b
>> complex([])
ans =
     []

% R2018a
>> complex([])
ans =
  0×0 empty complex double matrix

This is a consequence of a change in the way MATLAB stores complex arrays. Prior to R2018a it stored the real parts together and the imaginary parts together. In R2018a it uses an interleaved format in which the real and imaginary parts of a number are stored together; this is the storage format used in the C and C++ languages. For more details see MATLAB Support for Interleaved Complex API in C MEX Functions. Unless you write MEX files or create complex empty arrays, this change should have no effect on you.

The Live Editor continues to gain improved functionality, including embedded sliders and drop-down menus.

Legends can now have multiple columns, specified with the NumColumns property for Legend objects. I must admit that I had not realized that it was already possible to arrange legends in a horizontal (1-by-n) rather than vertical (n-by-1) orientation using the Orientation property.

Tall arrays have several new capabilities, including the ability to compute a least squares solution to an overdetermined linear system Ax = b with a tall A, which is done by QR factorization.
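
Here is a minimal sketch using an in-memory array converted to tall (in practice the data would come from a datastore):

A = tall(randn(1e5,4));   % Tall 100000-by-4 matrix.
b = tall(randn(1e5,1));
x = A\b;                  % Least squares solution, computed via QR factorization.
x = gather(x);            % Force evaluation of the deferred computation.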

MATLAB starts up faster and has further optimizations in the execution engine.

The GraphPlot Object has some additional options for the 'force', 'force3', and 'circle' layouts.

I noticed that the recent Windows 10 Spring Creators Update broke the Symbolic Toolbox. An update to fix this (and various other issues) is available at this page (MathWorks login required).

For a concise but wide-ranging introduction and reference to MATLAB, see MATLAB Guide (third edition, 2017).

mg3-front-cover.jpg

The Strange Case of the Determinant of a Matrix of 1s and -1s

By Nick Higham and Alan Edelman (MIT)

In a 2005 talk the second author noted that the MATLAB det function returns an odd integer for a certain 27-by-27 matrix composed of 1s and -1s:

>> A = edelman; % Set up the matrix.
>> format long g, format compact, det(A)
ans =
           839466457497601

However, the determinant is, from its definition, a sum of an even number (27 factorial) of odd numbers, so is even. Indeed the correct determinant is 839466457497600.

At first sight, this example is rather troubling, since while MATLAB returns an integer, as expected, it is out by 1. The determinant is computed as the product of the diagonal entries of the U factor in the LU factorization with partial pivoting of A, and these entries are not all integers. Standard rounding error analysis shows that the relative error from forming that product is bounded by nu/(1-nu), with n=27, where u \approx 1.1 \times 10^{-16} is the unit roundoff, and this is comfortably larger than the actual relative error (which also includes the errors in computing U) of 6 \times 10^{-16}. Therefore the computed determinant is well within the bounds of roundoff, and if the exact result had not been an integer the incorrect last decimal digit would hardly merit discussion.

However, this matrix has more up its sleeve. Let us compute the determinant using a different implementation of Gaussian elimination with partial pivoting, namely the function gep from the Matrix Computation Toolbox:

>> [Lp,Up,Pp] = gep(A,'p'); det(Pp)*det(Up)
ans =
           839466457497600

Now we get the correct answer! To see what is happening, we can directly form the products of the diagonal elements of the U factors:

>> [L,U,P] = lu(A); 
>> d = diag(U); dp = diag(Up); 
>> rel_diff_U_diags = norm((dp - d)./d,inf)
rel_diff_U_diags =
      7.37206353875273e-16
>> [prod(d), prod(dp)]
ans =
          -839466457497601          -839466457497600
>> [prod(d(end:-1:1)), prod(dp(end:-1:1))]
ans =
          -839466457497600          -839466457497600

We see that even though the diagonals of the two U factors differ by a small multiple of the unit roundoff, the computed products differ in the last decimal digit. If the product of the diagonal elements of U is accumulated in the reverse order then the exact answer is obtained in both cases. Once again, while this behaviour might seem surprising, it is within the error bounds of a rounding error analysis.

The moral of this example is that we should not be misled by the integer nature of a result; in floating-point arithmetic it is relative error that should be judged.

Finally, we note that numerical evaluation of the determinant offers other types of interesting behaviour. Consider the Frank matrix: a matrix of integers that has determinant 1. What goes wrong here in the step from dimension 24 to 25?

>> A = gallery('frank',24); det(A)
ans =
         0.999999999999996
>> A = gallery('frank',25); det(A)
ans =
          143507521.082525

The Edelman matrix is available in the MATLAB function edelman in this gist, which is embedded below. A Julia notebook exploring the Edelman matrix is available here.

function A = edelman
%EDELMAN Alan Edelman's matrix for which det is computed as the wrong integer.
% A = EDELMAN is a 27-by-27 matrix of 1s and -1s for which the
% MATLAB det function returns an odd integer, though the exact
% determinant is an even integer.
A = [%
1 1 1 1 -1 -1 -1 1 1 -1 1 -1 -1 1 -1 1 -1 1 -1 -1 -1 -1 -1 1 -1 -1 -1
1 -1 -1 1 -1 1 -1 -1 -1 -1 -1 1 -1 -1 -1 -1 -1 1 -1 -1 -1 -1 1 -1 1 1 1
-1 1 -1 -1 1 1 1 1 1 -1 1 -1 1 1 1 1 -1 -1 1 -1 -1 1 1 -1 1 1 -1
-1 -1 1 -1 -1 -1 -1 1 -1 -1 -1 -1 1 -1 1 -1 -1 1 1 -1 -1 1 -1 -1 -1 -1 -1
-1 1 1 -1 -1 -1 -1 -1 1 -1 -1 1 1 1 -1 -1 -1 -1 -1 1 1 1 -1 1 1 1 -1
1 1 1 1 1 1 -1 1 -1 -1 1 1 -1 1 -1 -1 1 1 -1 -1 1 1 1 1 -1 1 -1
-1 -1 -1 -1 -1 -1 1 -1 1 -1 1 -1 1 -1 1 1 1 1 1 -1 1 -1 1 -1 -1 1 -1
1 1 1 -1 1 1 -1 -1 1 1 -1 -1 1 1 -1 1 -1 1 1 1 1 1 -1 -1 1 1 -1
-1 1 -1 1 1 1 -1 -1 1 -1 1 1 1 1 -1 1 1 1 -1 1 1 -1 1 1 1 -1 -1
-1 1 -1 1 1 -1 -1 -1 -1 1 1 1 1 1 -1 1 -1 1 1 -1 -1 1 1 1 1 -1 -1
-1 -1 -1 -1 1 1 -1 -1 -1 1 1 -1 -1 1 -1 -1 1 1 1 1 -1 1 1 1 -1 -1 1
1 1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 -1 -1 -1 1 -1 1 -1 1 1 -1 1 1 1 -1 -1
-1 1 -1 -1 -1 -1 1 -1 -1 -1 -1 1 -1 -1 1 1 -1 1 -1 -1 1 1 -1 -1 1 -1 -1
1 1 -1 1 1 -1 1 1 -1 -1 -1 1 1 1 1 -1 -1 1 -1 1 1 -1 1 -1 -1 1 -1
-1 1 -1 1 -1 1 1 1 -1 -1 1 1 -1 -1 -1 -1 -1 1 1 1 1 1 -1 1 1 -1 -1
-1 -1 1 -1 1 -1 1 -1 -1 1 -1 -1 -1 -1 -1 -1 1 -1 1 -1 1 1 -1 1 -1 -1 1
1 1 -1 1 1 -1 1 -1 -1 1 1 -1 -1 1 -1 -1 1 1 -1 1 -1 1 1 -1 1 1 -1
1 1 -1 1 1 -1 1 1 1 -1 1 -1 1 -1 1 1 1 1 -1 -1 -1 1 1 1 -1 1 1
1 -1 -1 -1 1 -1 1 -1 -1 -1 -1 1 1 -1 -1 1 1 -1 -1 -1 1 -1 1 1 1 1 -1
-1 -1 -1 1 1 -1 1 -1 1 -1 1 1 1 1 1 -1 -1 -1 -1 -1 -1 1 1 1 -1 -1 -1
1 1 -1 -1 -1 -1 -1 -1 -1 -1 1 1 1 -1 -1 1 -1 -1 1 -1 -1 1 1 -1 -1 -1 1
-1 -1 1 1 -1 -1 1 -1 -1 -1 1 -1 1 -1 -1 1 1 -1 1 -1 1 1 -1 1 -1 1 -1
-1 1 -1 1 1 1 1 1 -1 1 -1 1 -1 1 -1 1 1 1 1 1 1 1 1 -1 -1 -1 1
-1 -1 1 1 1 -1 -1 -1 1 -1 1 -1 -1 1 -1 -1 -1 -1 1 -1 -1 1 1 1 1 1 -1
1 1 1 1 1 -1 1 -1 1 -1 -1 1 1 -1 -1 1 1 1 -1 1 -1 -1 1 1 -1 1 1
1 -1 1 -1 1 -1 1 1 -1 1 -1 1 1 -1 1 -1 -1 1 -1 -1 1 1 1 -1 1 -1 -1
1 -1 1 1 -1 -1 1 1 1 -1 1 1 -1 1 1 1 1 1 -1 1 -1 1 -1 1 1 -1 1];