# Making Sense of Multivalued Matrix Functions with the Matrix Unwinding Function

Try the following quiz. Let $A$ be an $n\times n$ real or complex matrix. Consider the principal logarithm—the one for which $\log z$ has imaginary part in $(-\pi,\pi]$—and define $z^{t} = e^{t \log z}$ for $t\in\mathbb{C}$ (an important special case being $t = 1/p$ for an integer $p$).

True or false:

1. $\log e^A = A$ for all $A$; in other words, passing $A$ through the exponential and then the logarithm takes us on a round trip.
2. $(I-A^2)^{1/2} = (I-A)^{1/2}(I+A)^{1/2}$ for all $A$.
3. $(AB)^{t} = A^{t}B^{t}$ whenever $A$ and $B$ commute.

Answers:

1. False. Yet $e^{\log A} = A$ is always true.
2. True. Yet the similar identity $(A^2-I)^{1/2}=(A-I)^{1/2}(A+I)^{1/2}$ is false.
3. False.
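The first answer is easy to check numerically. The following sketch (my own illustration using NumPy and SciPy, not taken from the paper) uses a real matrix whose eigenvalues have imaginary parts outside $(-\pi,\pi]$:

```python
import numpy as np
from scipy.linalg import expm, logm

# Eigenvalues of A are +/- 4i; their imaginary parts lie outside (-pi, pi],
# so the principal logarithm cannot recover A from e^A.
A = np.array([[0.0, 4.0],
              [-4.0, 0.0]])

round_trip = logm(expm(A))            # log(e^A) differs from A here
print(np.allclose(round_trip, A))     # False

# The other direction always works for nonsingular A: e^(log A) = A.
print(np.allclose(expm(logm(A)), A))  # True
```

Here $\log e^A$ has eigenvalues $\pm(4-2\pi)i$ rather than $\pm 4i$: the principal branch has "unwound" the imaginary parts back into $(-\pi,\pi]$.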

At first sight these results may seem rather strange. How can we understand them? If you take the viewpoint that each occurrence of $\log$ and each power $t$ in the above expressions stands for the family of all possible logarithms and powers, then the identities are all true. But from a computational viewpoint we are usually concerned with a particular branch of each function, the principal branch, so equality cannot be taken for granted.

An excellent tool for understanding these identities is a new matrix function called the matrix unwinding function. This function is defined for any square matrix $A$ by $U(A) = (A - \log e^A)/(2\pi i)$, and it arises from the scalar unwinding number introduced by Corless, Hare and Jeffrey in 1996 [1, 2]. There is nothing special about $A$ and $B$ being matrices in this quiz; the answers are the same if they are scalars. But the matrix unwinding function neatly handles the extra subtleties of the matrix case. Mary Aprahamian spoke about this work at the 2014 SIAM Annual Meeting in San Diego.

From the definition we have $\log e^A = A + 2\pi i\, U(A)$, so the relation in the first quiz question is clearly valid when $U(A) = 0$, which is the case when the eigenvalues of $A$ have imaginary parts lying in the interval $(-\pi,\pi]$. Each of the above identities can be understood by deriving an exact relation in which the unwinding function provides the discrepancy between the left- and right-hand sides. For example, when $A$ and $B$ commute, $(AB)^t = A^t B^t e^{-2\pi t i\, U(\log A + \log B)}.$
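This exact relation is easy to verify numerically. The sketch below (my own illustration in Python; it computes $U$ directly from the definition $U(M) = (M - \log e^M)/(2\pi i)$ rather than by the Schur algorithm of the paper) checks the power identity for a pair of commuting rotation matrices:

```python
import numpy as np
from scipy.linalg import expm, logm, fractional_matrix_power

def unwindm(M):
    # Matrix unwinding function straight from the definition:
    # U(M) = (M - log(e^M)) / (2*pi*i).  Fine for small demos.
    return (M - logm(expm(M))) / (2j * np.pi)

th = 2.5
A = np.array([[np.cos(th), -np.sin(th)],
              [np.sin(th),  np.cos(th)]])  # rotation through 2.5 radians
B = A.copy()                               # A and B commute
t = 0.5

lhs = fractional_matrix_power(A @ B, t)
# log A + log B has eigenvalues +/- 5i, outside (-pi, pi], so U is nonzero.
correction = expm(-2j * np.pi * t * unwindm(logm(A) + logm(B)))
rhs = fractional_matrix_power(A, t) @ fractional_matrix_power(B, t) @ correction

print(np.allclose(lhs, rhs))  # True: the exact relation holds
```

Without the correction factor the two sides differ: here $A^{1/2}B^{1/2}$ is a rotation through $2.5$ radians, while $(AB)^{1/2}$ is a rotation through $(5-2\pi)/2$ radians.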

Mary Aprahamian and I have recently published the paper The Matrix Unwinding Function, with an Application to Computing the Matrix Exponential (SIAM J. Matrix Anal. Appl., 35(1), 88-109, 2014), in which we introduce the matrix unwinding function and develop its many interesting properties. We analyze the identities discussed above, along with various others. Thanks to the University of Manchester’s Open Access funds, the paper is available for anyone to download from the SIAM website.

The matrix unwinding function has another use. Note that $e^A=e^{\log e^A}=e^{A-2\pi i U(A)}$ and the matrix $A-2\pi i U(A)$ has eigenvalues with imaginary parts in $(-\pi,\pi]$. The scaling and squaring method for computing the matrix exponential is at its most efficient when $A$ has norm of order 1, and this argument reduction operation tends to reduce the norm of $A$ when $A$ has eigenvalues with large imaginary part. In the paper we develop this argument reduction and show that it can lead to substantial computational savings.
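As an illustration (again my own sketch, computing $U$ from its definition rather than by the Schur algorithm of the paper), the argument reduction leaves $e^A$ unchanged while shrinking the norm:

```python
import numpy as np
from scipy.linalg import expm, logm

def unwindm(M):
    # U(M) = (M - log(e^M)) / (2*pi*i), straight from the definition.
    return (M - logm(expm(M))) / (2j * np.pi)

A = np.array([[1.0, 4.0],
              [-1.0, 1.0]]) * 4           # eigenvalues 4 +/- 8i
A_reduced = A - 2j * np.pi * unwindm(A)   # eigenvalue imaginary parts now in (-pi, pi]

print(np.allclose(expm(A), expm(A_reduced)))           # True: e^A is unchanged
print(np.linalg.norm(A_reduced) < np.linalg.norm(A))   # True: the norm shrank
```

For this $A$ the reduction replaces the eigenvalue imaginary parts $\pm 8$ by $\pm(8-2\pi) \approx \pm 1.72$, and the Frobenius norm drops from about $17.4$ to about $6.7$.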

How can we compute $U(A)$? The following incomplete MATLAB code implements the Schur algorithm developed in the paper; the full code is available for download.

```matlab
function U = unwindm(A,flag)
%UNWINDM  Matrix unwinding function.
%   UNWINDM(A) is the matrix unwinding function of the square matrix A.

%   Reference: M. Aprahamian and N. J. Higham.
%   The matrix unwinding function, with an application to computing the
%   matrix exponential.  SIAM J. Matrix Anal. Appl., 35(1):88-109, 2014.

%   Mary Aprahamian and Nicholas J. Higham, 2013.

if nargin < 2, flag = 1; end
[Q,T] = schur(A,'complex');

ord = blocking(T);
[ord, ind] = swapping(ord);  % Gives the blocking.
ord = max(ord)-ord+1;        % Since ORDSCHUR puts highest index top left.
[Q,T] = ordschur(Q,T,ord);
U = Q * unwindm_tri(T) * Q';

%%%%%%%%%%%%%%%%%%%%%%%%%%%
function F = unwindm_tri(T)
%UNWINDM_TRI   Unwinding matrix of upper triangular matrix.

n = length(T);
F = diag( unwind( diag(T) ) );

% Compute off-diagonal of F by scalar Parlett recurrence.
for j = 2:n
    for i = j-1:-1:1
        if F(i,i) == F(j,j)
            F(i,j) = 0;        % We're within a diagonal block.
        else
            s = T(i,j)*(F(i,i)-F(j,j));
            if j-i >= 2
                k = i+1:j-1;
                s = s + F(i,k)*T(k,j) - T(i,k)*F(k,j);
            end
            F(i,j) = s/(T(i,i)-T(j,j));
        end
    end
end

%%%%%%%%%%%%%%%%%%%%%%
function u = unwind(z)
%UNWIND  Unwinding number.
%   UNWIND(Z) is the (scalar) unwinding number of Z.

u = ceil( (imag(z) - pi)/(2*pi) );

% ... Other subfunctions omitted.
```
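The scalar subfunction at the heart of the code is just $u(z) = \lceil (\mathrm{Im}\, z - \pi)/(2\pi) \rceil$, which satisfies $\log e^z = z - 2\pi i\, u(z)$. A Python translation (my own, for readers without MATLAB):

```python
import cmath
import math

def unwind(z):
    # Scalar unwinding number: u(z) = ceil((Im z - pi) / (2*pi)).
    return math.ceil((complex(z).imag - math.pi) / (2 * math.pi))

z = 4j
u = unwind(z)                    # ceil((4 - pi)/(2*pi)) = 1
print(u)                         # 1
print(cmath.log(cmath.exp(z)))   # approximately -2.2832j, i.e. z - 2*pi*i*u
```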


Here is an example. As it illustrates, the unwinding matrix of a real matrix is usually pure imaginary.

```matlab
>> A = [1 4; -1 1]*4, U = unwindm(A)
A =
     4    16
    -4     4
U =
   0.0000 + 0.0000i   0.0000 - 2.0000i
   0.0000 + 0.5000i  -0.0000 + 0.0000i
>> residual = A - logm(expm(A))
residual =
   -0.0000   12.5664
   -3.1416   -0.0000
>> residual - 2*pi*i*U
ans =
   1.0e-15 *
  -0.8882 + 0.0000i   0.0000 + 0.0000i
  -0.8882 + 0.0000i  -0.8882 + 0.3488i
```


## Footnotes:

1. Robert Corless and David Jeffrey, The Unwinding Number, SIGSAM Bull., 30, 28-35, 1996.

2. David Jeffrey, D. E. G. Hare and Robert Corless, Unwinding the Branches of the Lambert W Function, Math. Scientist, 21, 1-7, 1996.