What effect does a rank-1 perturbation of norm 1 to an orthogonal matrix have on the extremal singular values of the matrix? Here, and throughout this post, the norm is the 2-norm. Write the perturbed matrix as $B = A + xy^T$, where $A$ is orthogonal and $x$ and $y$ have unit norm. The largest singular value of $B$ is bounded by $\|A\| + \|xy^T\| = 2$, as can be seen by taking norms, so let us concentrate on the smallest singular value.

Consider first a perturbation of the identity matrix: $B = I + xy^T$, for unit norm $x$ and $y$. The matrix $B$ has eigenvalues 1 (repeated $n-1$ times) and $1 + y^Tx$. The matrix $B$ is singular, and hence has a zero singular value, precisely when $y^Tx = -1$, which is the smallest value that the inner product $y^Tx$ can take.
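As a quick numerical check (a NumPy sketch, since the post's own code is MATLAB), we can confirm that $\det(I + xy^T) = 1 + y^Tx$, so that choosing $y = -x$ gives a singular matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
x = rng.standard_normal(n); x /= np.linalg.norm(x)
y = rng.standard_normal(n); y /= np.linalg.norm(y)

B = np.eye(n) + np.outer(x, y)
# det(I + x*y') = 1 + y'*x, the product of the eigenvalues
# 1 (repeated n-1 times) and 1 + y'*x.
print(np.linalg.det(B), 1 + y @ x)

# Choosing y = -x gives y'*x = -1, so B = I - x*x' is singular.
B_sing = np.eye(n) - np.outer(x, x)
print(np.linalg.matrix_rank(B_sing))  # n - 1
```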

Another example is $B = A + xx^T$, where $A = I - 2xx^T$ and $x$ has unit norm, so that $A$ is a Householder matrix. Here, $B = I - xx^T$ is singular with null vector $x$, so it has a zero singular value.
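This example is easy to verify numerically (again a NumPy sketch, not part of the original post): the Householder matrix is orthogonal, and the perturbed matrix annihilates $x$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
x = rng.standard_normal(n); x /= np.linalg.norm(x)

H = np.eye(n) - 2 * np.outer(x, x)   # Householder matrix: orthogonal
B = H + np.outer(x, x)               # equals I - x*x'

print(np.allclose(H @ H.T, np.eye(n)))   # H is orthogonal
print(np.linalg.norm(B @ x))             # B*x = 0: zero singular value
```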

Let’s take a random orthogonal matrix and perturb it with a random rank-1 matrix of unit norm. We use the following MATLAB code.

n = 100; rng(1)
A = gallery('qmult',n);  % Random Haar distrib. orthogonal matrix.
x = randn(n,1); y = randn(n,1);
x = x/norm(x); y = y/norm(y);
B = A + x*y';
svd_B = svd(B);
max_svd_B = max(svd_B), min_svd_B = min(svd_B)

The output is

max_svd_B = 1.6065e+00
min_svd_B = 6.0649e-01

We started with a matrix having singular values all equal to 1 and now have a matrix with largest singular value a little larger than 1 and smallest singular value a little smaller than 1. If we keep running this code the extremal singular values of $B$ do not change much; for example, the next run gives

max_svd_B = 1.5921e+00
min_svd_B = 5.9213e-01

A rank-1 perturbation of unit norm could make $B$ singular, as we saw above, but our random perturbations are producing a well conditioned matrix.

What is the explanation? First, note that a rank-1 perturbation to an orthogonal matrix can change only two of the singular values, because the singular values are the square roots of the eigenvalues of $B^TB = (A + xy^T)^T(A + xy^T) = I + A^Txy^T + yx^TA + yy^T$, which is the identity plus a rank-2 matrix. So $n - 2$ singular values remain at 1.
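A NumPy sketch of this argument (a translation of the MATLAB experiment, using the QR-based Haar construction described below): $B^TB - I$ has rank 2, and all but two singular values of $B$ stay at 1.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
# Haar-distributed orthogonal matrix via QR with a sign fix.
Q, R = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.sign(np.diag(R)))

x = rng.standard_normal(n); x /= np.linalg.norm(n * [1] if False else x)
x /= np.linalg.norm(x)
y = rng.standard_normal(n); y /= np.linalg.norm(y)
B = A + np.outer(x, y)

# B'*B - I is a rank-2 perturbation of zero.
print(np.linalg.matrix_rank(B.T @ B - np.eye(n)))  # 2

s = np.linalg.svd(B, compute_uv=False)
print(np.sum(np.isclose(s, 1.0)))  # n - 2 singular values remain at 1
```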

A result of Benaych-Georges and Nadakuditi (2012) says that for large $n$ the largest and smallest singular values of $B$ tend to $(1+\sqrt{5})/2 \approx 1.618$ and $(\sqrt{5}-1)/2 \approx 0.618$, respectively! As our example shows, $n$ does not have to be large for these limits to be approximations correct to roughly the first decimal digit.

The result in question requires the original orthogonal matrix $A$ to be from the Haar distribution, and such matrices can be generated by `A = gallery('qmult',n)`

or by the construction

[Q,R] = qr(randn(n));
Q = Q*diag(sign(diag(R)));

(See What Is a Random Orthogonal Matrix?.) The result also requires $x$ and $y$ to be unit-norm random vectors with independent entries from the same distribution.
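The same construction can be sketched in NumPy (hedged: like LAPACK, `numpy.linalg.qr` does not normalize the signs of the diagonal of $R$, which is why the explicit sign correction is needed to obtain a Haar-distributed $Q$):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
Z = rng.standard_normal((n, n))
Q, R = np.linalg.qr(Z)
# Multiplying by D = diag(sign(diag(R))) makes diag(R) nonnegative,
# which makes the QR factorization unique and Q Haar distributed.
D = np.diag(np.sign(np.diag(R)))
Q = Q @ D
R = D @ R

print(np.allclose(Q @ R, Z))        # still a QR factorization of Z
print(np.all(np.diag(R) >= 0))      # normalized signs
print(np.allclose(Q @ Q.T, np.eye(n)))
```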

However, as the next example shows, the perturbed singular values can be close to the values that the Benaych-Georges and Nadakuditi result predicts even when the conditions of the result are violated:

n = 100; rng(1)
A = gallery('orthog',n);  % Random orthogonal matrix (not Haar).
x = rand(n,1); y = (1:n)';  % Non-random y.
x = x/norm(x); y = y/norm(y);
B = A + x*y';
svd_B = svd(B);
max_svd_B = max(svd_B), min_svd_B = min(svd_B)

max_svd_B = 1.6069e+00
min_svd_B = 6.0687e-01

The question of the conditioning of a rank-1 perturbation of an orthogonal matrix arises in the recent EPrint Random Matrices Generating Large Growth in LU Factorization with Pivoting.