# What Is the Matrix Unwinding Function?

Care is needed when dealing with multivalued functions because identities that hold for positive scalars can fail in the complex plane. For example, it is not always true that $(z-1)^{1/2}(z+1)^{1/2} = (z^2-1)^{1/2}$ or $\log(z_1 z_2) = \log z_1 + \log z_2$ for all $z,z_1,z_2\in\mathbb{C}$. Here, the square root is the principal square root (the one lying in the right half-plane) and the logarithm is the principal logarithm (the one with imaginary part in $(-\pi,\pi]$).

A powerful tool for dealing with multivalued complex functions is the unwinding number, defined for $z\in\mathbb{C}$ by

$\notag \mathcal{U}(z) = \displaystyle\frac{z - \log \mathrm{e}^z}{2\pi \mathrm{i}}.$

The unwinding number provides a correction term for the putative identity $\log \mathrm{e}^z = z$, in that

$\notag \qquad\qquad\qquad\qquad z = \log \mathrm{e}^z + 2\pi \mathrm{i}\, \mathcal{U}(z) \qquad\qquad\qquad\qquad (*)$

for all $z$.

A useful formula for the unwinding number is

$\notag \mathcal{U}(z) = \left\lceil\displaystyle\frac{\mathrm{Im} z - \pi}{2\pi}\right\rceil,$

where $\lceil\cdot\rceil$ is the ceiling function, which returns the smallest integer greater than or equal to its argument. It follows that $\mathcal{U}(z) = 0$ if and only if $\mathrm{Im} z \in (-\pi, \pi]$. Hence $\log \mathrm{e}^z = z$ if and only if $\mathrm{Im} z \in (-\pi, \pi]$.
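As a quick numerical check (a sketch in Python's `cmath`; the function names are illustrative), the ceiling formula agrees with the definition:

```python
import cmath
import math

def unwinding(z: complex) -> int:
    """Unwinding number via the ceiling formula U(z) = ceil((Im z - pi)/(2 pi))."""
    return math.ceil((z.imag - math.pi) / (2 * math.pi))

def unwinding_def(z: complex) -> complex:
    """Unwinding number from the definition U(z) = (z - log e^z)/(2 pi i)."""
    return (z - cmath.log(cmath.exp(z))) / (2 * math.pi * 1j)

z = 3 + 5j
print(unwinding(z))       # 1, since Im z = 5 lies outside (-pi, pi]
print(unwinding_def(z))   # approximately 1
```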

The unwinding number provides correction terms for various identities. For example, for $z_1,z_2\in\mathbb{C}$, replacing $z$ by $\log z_1 \pm \log z_2$ in $(*)$, we have

$\notag \begin{aligned} \log z_1 \pm \log z_2 &= \log \bigl( \mathrm{e}^{\log z_1 \pm \log z_2} \bigr) + 2\pi\mathrm{i} \, \mathcal{U}(\log z_1 \pm \log z_2)\\ &= \log \bigl(\mathrm{e}^{\log z_1} \mathrm{e}^{\pm \log z_2} \bigr) + 2\pi\mathrm{i} \, \mathcal{U}(\log z_1 \pm \log z_2)\\ &= \log \bigl( z_1 z_2^{\pm 1} \bigr) + 2\pi\mathrm{i} \, \mathcal{U}(\log z_1 \pm \log z_2). \end{aligned}$

This gives the identities

$\notag \begin{aligned} \log (z_1 z_2) &= \log z_1 + \log z_2 - 2\pi \mathrm{i}\, \mathcal{U}(\log z_1 +\log z_2), \\ \qquad\qquad\qquad \log (z_1/z_2) &= \log z_1 - \log z_2 - 2\pi \mathrm{i}\,\mathcal{U}(\log z_1 - \log z_2). \qquad\qquad\qquad (\#) \end{aligned}$

Note that in textbooks one can find identities such as $\log (z_1 z_2) = \log z_1 + \log z_2$, in which each occurrence of $\log$ is interpreted as a possibly different branch. For computational purposes we want formulas that contain a specific branch, usually the principal branch.
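The first identity in $(\#)$ is easy to check numerically (a Python sketch; the points are chosen so that the correction term is nonzero):

```python
import cmath
import math

def unwinding(z: complex) -> int:
    # U(z) = ceil((Im z - pi)/(2 pi))
    return math.ceil((z.imag - math.pi) / (2 * math.pi))

# Points whose principal logs sum to an imaginary part outside (-pi, pi],
# so the naive identity log(z1 z2) = log z1 + log z2 fails by 2*pi*i.
z1 = cmath.exp(2.5j)   # log z1 = 2.5i
z2 = cmath.exp(2.5j)   # log z2 = 2.5i
s = cmath.log(z1) + cmath.log(z2)          # imaginary part 5, outside the strip
lhs = cmath.log(z1 * z2)
rhs = s - 2 * math.pi * 1j * unwinding(s)  # corrected identity
print(abs(lhs - rhs))                      # tiny (rounding error only)
```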

An application of the unwinding number to matrix functions is in computing the logarithm of a $2\times 2$ upper triangular matrix. For any function $f$, we have

$\notag f\left( \begin{bmatrix} \lambda_1 & t_{12} \\ 0 & \lambda_2 \end{bmatrix} \right) = \begin{bmatrix} f(\lambda_1) & t_{12} f[\lambda_1,\lambda_2] \\ 0 & f(\lambda_2) \end{bmatrix},$

where the divided difference

$\notag f[\lambda_1,\lambda_2] = \begin{cases} \displaystyle\frac{f(\lambda_2)-f(\lambda_1)}{\lambda_2-\lambda_1}, & \lambda_1 \ne \lambda_2, \\ f'(\lambda_1), & \lambda_1 = \lambda_2. \end{cases}$

When $\lambda_1 \approx \lambda_2$ this formula suffers from numerical cancellation. For the logarithm, we can rewrite it, using $(\#)$, as

$\notag \begin{aligned} \log\lambda_2 - \log\lambda_1 &= \log \left(\frac{\lambda_2}{\lambda_1}\right) + 2\pi \mathrm{i} \, \mathcal{U}(\log \lambda_2 - \log \lambda_1) \\ &= \log \left(\frac{1+z}{1-z}\right) + 2\pi \mathrm{i}\, \mathcal{U}(\log \lambda_2 - \log \lambda_1), \end{aligned}$

where $z = (\lambda_2-\lambda_1)/(\lambda_2+\lambda_1)$. Using the hyperbolic arc tangent, defined by

$\notag \mathrm{atanh}(z) = \frac{1}{2}\log\left( \displaystyle\frac{1+z}{1-z} \right),$

we obtain

$\notag f[\lambda_1,\lambda_2] = \displaystyle\frac{2\mathrm{atanh}(z) + 2\pi \mathrm{i}\, \mathcal{U}(\log \lambda_2 - \log \lambda_1)}{\lambda_2-\lambda_1}, \quad \lambda_1 \ne \lambda_2.$

Assuming that we have an accurate $\mathrm{atanh}$ function, this formula provides an accurate value of $f[\lambda_1,\lambda_2]$ provided that $z$ is not too close to $\pm 1$ (the singularities of $\mathrm{atanh}$) and not too large. This formula is used in the MATLAB function `logm`.
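The two ways of evaluating the divided difference can be compared in Python (a sketch with illustrative names; for nearby real arguments `math.log1p` gives an accurate reference value):

```python
import cmath
import math

def unwinding(z: complex) -> int:
    # U(z) = ceil((Im z - pi)/(2 pi))
    return math.ceil((z.imag - math.pi) / (2 * math.pi))

def logdd_naive(l1: complex, l2: complex) -> complex:
    """Divided difference of log; suffers cancellation when l1 is close to l2."""
    return (cmath.log(l2) - cmath.log(l1)) / (l2 - l1)

def logdd_atanh(l1: complex, l2: complex) -> complex:
    """Divided difference of log via atanh, with the unwinding correction."""
    z = (l2 - l1) / (l2 + l1)
    u = unwinding(cmath.log(l2) - cmath.log(l1))
    return (2 * cmath.atanh(z) + 2 * math.pi * 1j * u) / (l2 - l1)

l1, l2 = 3.0, 3.0 + 1e-9
h = l2 - l1
ref = math.log1p(h / l1) / h                 # accurate reference for this real case
print(abs(logdd_atanh(l1, l2) - ref))        # ~1e-16: cancellation avoided
print(abs(logdd_naive(l1, l2) - ref))        # typically far larger
```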

## Matrix Unwinding Function

The unwinding number leads to the matrix unwinding function, defined for $A\in\mathbb{C}^{n\times n}$ by

$\notag \mathcal{U}(A) = \displaystyle\frac{A - \log \mathrm{e}^A}{2\pi \mathrm{i}}.$

Here, $\log$ is the principal matrix logarithm, defined by the property that its eigenvalues have imaginary parts in the interval $(-\pi,\pi]$. It can be shown that $\mathcal{U}(A) = 0$ if and only if the imaginary parts of all the eigenvalues of $A$ lie in the interval $(-\pi, \pi]$. Furthermore, $\mathcal{U}(A)$ is a diagonalizable matrix with integer eigenvalues.

As an example, the matrix

$\notag A = \left[\begin{array}{rrrr} 3 & 1 & -1 & -9\\ -1 & 3 & 9 & -1\\ -1 & -9 & 3 & 1\\ 9 & -1 & -1 & 3 \end{array}\right], \quad \Lambda(A) = \{ 2\pm 8\mathrm{i}, 4 \pm 10\mathrm{i} \}$

has matrix unwinding function

$\notag X = \mathcal{U}(A) = \mathrm{i} \left[\begin{array}{rrrr} 0 & -\frac{1}{2} & 0 & \frac{3}{2}\\ \frac{1}{2} & 0 & -\frac{3}{2} & 0\\ 0 & \frac{3}{2} & 0 & -\frac{1}{2}\\ -\frac{3}{2} & 0 & \frac{1}{2} & 0 \end{array}\right], \quad \Lambda(X) = \{ \pm 1, \pm 2 \}.$

In general, if $A$ is real then $\mathcal{U}(A)$ is pure imaginary as long as $A$ has no eigenvalue with imaginary part that is an odd multiple of $\pi$.
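This example can be reproduced numerically from the definition (a Python/SciPy sketch; as noted below, evaluating the definition directly is not the most accurate approach, but it suffices here):

```python
import numpy as np
from scipy.linalg import expm, logm

def unwindm_def(A):
    """Matrix unwinding function from the definition U(A) = (A - log e^A)/(2 pi i)."""
    A = np.asarray(A, dtype=complex)
    return (A - logm(expm(A))) / (2 * np.pi * 1j)

A = np.array([[3, 1, -1, -9],
              [-1, 3, 9, -1],
              [-1, -9, 3, 1],
              [9, -1, -1, 3]], dtype=float)

U = unwindm_def(A)
print(np.round(np.sort(np.linalg.eigvals(U).real), 6))  # eigenvalues -2, -1, 1, 2
```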

The matrix unwinding function is useful for providing correction terms for matrix identities involving multivalued functions. Here are four useful matrix identities, along with cases in which the correction term vanishes. See Aprahamian and Higham (2014) for proofs.

• For nonsingular $A$ and $\alpha,\beta\in\mathbb{C}$,

$\notag (A^\alpha)^{\beta} = A^{\alpha\beta} \mathrm{e}^{-2\beta\pi \mathrm{i}\, \mathcal{U}(\alpha\log A)}.$

If $\beta$ is an integer then the correction term is $I$. If $\alpha\in(-1,1]$ and $\beta = 1/\alpha$ then $\mathcal{U}(\alpha\log A) = 0$ and so

$\notag (A^\alpha)^{1/\alpha} = A, \quad \alpha \in [-1,1],$

and this equation is obviously true for $\alpha = -1$, too.

• If $A$ and $B$ are nonsingular and $AB = BA$ then

$\notag \log(AB^{\pm1}) = \log A \pm \log B - 2\pi \mathrm{i}\, \mathcal{U}(\log A \pm \log B).$

If $\arg\lambda_i + \arg\mu_i \in(-\pi,\pi]$ for every eigenvalue $\lambda_i$ of $A$ and the corresponding eigenvalue $\mu_i$ of $B$ (there is a correspondence because of the commutativity of $A$ and $B$, which implies that they are simultaneously unitarily triangularizable), then $\log(AB^{\pm 1}) = \log A \pm \log B$.

• If $A$ and $B$ are nonsingular and $AB = BA$ then for any $\alpha\in\mathbb{C}$,

$\notag (AB)^\alpha = A^\alpha B^\alpha \mathrm{e}^{-2\pi\, \alpha \mathrm{i}\, \mathcal{U}(\log A + \log B)}.$

If $\alpha$ is an integer or the eigenvalues of $A$ and $B$ have arguments in $(-\pi/2,\pi/2]$ then $(AB)^\alpha = A^\alpha B^\alpha$.

• For nonsingular $A$ and $\alpha\in\mathbb{C}$,

$\notag \log A^{\alpha} = \alpha \log A - 2\pi \mathrm{i}\, \mathcal{U}(\alpha \log A).$

If $\alpha\in(-1,1]$ then $\log A^{\alpha} = \alpha \log A$, and this equation also holds for $\alpha = -1$ if $A$ has no negative eigenvalues.
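The last identity can be checked numerically (a Python/SciPy sketch; the rotation matrix and the helper `unwindm_def` are illustrative, chosen so that the correction term is nonzero):

```python
import numpy as np
from scipy.linalg import expm, logm, fractional_matrix_power

def unwindm_def(A):
    # U(A) = (A - log e^A)/(2 pi i), from the definition
    A = np.asarray(A, dtype=complex)
    return (A - logm(expm(A))) / (2 * np.pi * 1j)

t = 2.5
A = np.array([[np.cos(t), np.sin(t)],
              [-np.sin(t), np.cos(t)]])  # eigenvalues exp(+-2.5i)
alpha = 2.5                              # alpha*log A has eigenvalues +-6.25i

lhs = logm(fractional_matrix_power(A, alpha))
rhs = alpha * logm(A) - 2 * np.pi * 1j * unwindm_def(alpha * logm(A))
print(np.allclose(lhs, rhs))              # True: corrected identity holds
print(np.allclose(lhs, alpha * logm(A)))  # False: the naive identity fails
```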

The matrix unwinding function can be computed by an adaptation of the Schur–Parlett algorithm. The algorithm computes a Schur decomposition and reorders it into a block form in which eigenvalues with the same unwinding number appear in the same diagonal block. The unwinding function of each diagonal block is then a multiple of the identity, and the off-diagonal blocks are obtained from the block Parlett recurrence. This approach gives more accurate results than directly evaluating $\mathcal{U}(A)$ from its definition in terms of the matrix logarithm and matrix exponential. A MATLAB code `unwindm` is available at https://github.com/higham/unwinding.
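For a diagonalizable matrix a much simpler sketch applies the scalar ceiling formula to the eigenvalues, since $\mathcal{U}(A) = V \mathcal{U}(D) V^{-1}$ when $A = VDV^{-1}$. This is not the Schur–Parlett algorithm, just an illustration (in Python/SciPy), and it behaves badly when the eigenvector matrix is ill conditioned:

```python
import math
import numpy as np
from scipy.linalg import expm, logm

def unwind_scalar(z: complex) -> int:
    # U(z) = ceil((Im z - pi)/(2 pi))
    return math.ceil((z.imag - math.pi) / (2 * math.pi))

def unwindm_eig(A):
    """U(A) for diagonalizable A via an eigendecomposition (illustrative only)."""
    w, V = np.linalg.eig(np.asarray(A, dtype=complex))
    u = np.array([unwind_scalar(z) for z in w], dtype=complex)
    return V @ np.diag(u) @ np.linalg.inv(V)

A = np.array([[1.0, 10.0], [0.0, 1.0 + 7.0j]])  # eigenvalues 1 and 1 + 7i
U = unwindm_eig(A)
U_def = (A - logm(expm(A))) / (2 * np.pi * 1j)  # from the definition
print(np.allclose(U, U_def))                    # True
```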

## Matrix Argument Reduction

The matrix unwinding function can be of use computationally for reducing the size of the imaginary parts of the eigenvalues of a matrix. The function

$\notag \mathrm{mod}(A) = A - 2\pi\mathrm{i}\, \mathcal{U}(A)$

has eigenvalues $\lambda$ with $\mathrm{Im} \lambda \in(-\pi,\pi]$. Using $\mathrm{mod}(A)$ in place of $A$ can be useful in computing the matrix exponential by the scaling and squaring algorithm because $\mathrm{e}^A = \mathrm{e}^{\mathrm{mod}(A)}$ and $\mathrm{mod}(A)$ can have a much smaller norm than $A$, giving potential reductions in cost. Combined with the Schur decomposition-based algorithm for $\mathcal{U}(A)$ mentioned above, this idea leads to a numerical method for $\mathrm{e}^A$. See Aprahamian and Higham (2014) for details.
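A small Python/SciPy sketch of the argument reduction (the matrix is chosen for illustration; its eigenvalues $\pm 100\mathrm{i}$ are reduced to $\pm(100-32\pi)\mathrm{i}$):

```python
import numpy as np
from scipy.linalg import expm, logm

def unwindm_def(A):
    # U(A) = (A - log e^A)/(2 pi i)
    A = np.asarray(A, dtype=complex)
    return (A - logm(expm(A))) / (2 * np.pi * 1j)

A = np.array([[0.0, 100.0], [-100.0, 0.0]])  # eigenvalues +-100i
modA = A - 2 * np.pi * 1j * unwindm_def(A)   # eigenvalues +-(100 - 32*pi)i

print(np.linalg.norm(A, 2), np.linalg.norm(modA, 2))  # 100 versus about 0.53
print(np.allclose(expm(A), expm(modA)))               # True: e^A = e^mod(A)
```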

## Round Trip Relations

If you apply a matrix function and then its inverse, do you get back to where you started; that is, is $f^{-1}(f(z)) = z$? In principle yes, but if the inverse is multivalued the answer is not immediate. The matrix unwinding function is useful for analyzing such round trip relations. As an example, if $A$ has no eigenvalues with real part of the form $k \pi$ for an integer $k$, then

$\notag \mathrm{acos}(\cos A) = \bigl(A-2\pi\,\mathcal{U} (\mathrm{i} A)\bigr)\mathrm{sign} \bigl(A-2\pi\mathcal{U}( \mathrm{i} A)\bigr).$

Here, $\mathrm{acos}$ is the principal arc cosine defined in Aprahamian and Higham (2016), where this result and analogous results for the arc sine, arc hyperbolic cosine, and arc hyperbolic sine are derived; and $\mathrm{sign}$ is the matrix sign function.
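In the scalar case the round trip relation is easy to check (a Python sketch; for a scalar with nonzero real part the matrix sign function reduces to $\pm 1$ according to the half-plane containing it):

```python
import cmath
import math

def unwinding(z: complex) -> int:
    # U(z) = ceil((Im z - pi)/(2 pi))
    return math.ceil((z.imag - math.pi) / (2 * math.pi))

def scalar_sign(z: complex) -> float:
    # scalar matrix sign function: +1 in the right half-plane, -1 in the left
    return 1.0 if z.real > 0 else -1.0

z = 4.0 + 0.5j            # Re z is not a multiple of pi
w = z - 2 * math.pi * unwinding(1j * z)
lhs = cmath.acos(cmath.cos(z))
rhs = w * scalar_sign(w)
print(abs(lhs - rhs))      # tiny (rounding error only)
```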

## History

The unwinding number was introduced by Corless, Hare and Jeffrey in 1996, to help implement computer algebra over the complex numbers. It was generalized to the matrix case by Aprahamian and Higham (2014).

## References

This is a minimal set of references, which contain further useful references within.