# What Is a Correlation Matrix?

In linear algebra terms, a correlation matrix is a symmetric positive semidefinite matrix with unit diagonal. In other words, it is a symmetric matrix with ones on the diagonal whose eigenvalues are all nonnegative.

The term comes from statistics. If $x_1, x_2, \dots, x_n$ are column vectors with $m$ elements, each vector containing samples of a random variable, then the corresponding $n\times n$ covariance matrix $V$ has $(i,j)$ element

$v_{ij} = \mathrm{cov}(x_i,x_j) = \displaystyle\frac{1}{m-1} (x_i - \overline{x}_i)^T (x_j - \overline{x}_j),$

where $\overline{x}_i$ is the mean of the elements in $x_i$. If $V$ has nonzero diagonal elements then we can scale the diagonal to 1 to obtain the corresponding correlation matrix

$C = D^{-1/2} V D^{-1/2},$

where $D = \mathrm{diag}(v_{ii})$. The $(i,j)$ element $c_{ij} = v_{ii}^{-1/2} v_{ij} v_{jj}^{-1/2}$ is the correlation between the variables $x_i$ and $x_j$.
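The scaling $C = D^{-1/2}VD^{-1/2}$ takes only a few lines of NumPy; this sketch builds a covariance matrix from random data and checks the result against `numpy.corrcoef`, which computes the correlation matrix directly:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 4))   # m = 100 samples of n = 4 variables

V = np.cov(X, rowvar=False)         # n x n covariance matrix
d = np.sqrt(np.diag(V))             # square roots of the v_ii
C = V / np.outer(d, d)              # C = D^{-1/2} V D^{-1/2}

# C agrees with the correlation matrix computed directly:
assert np.allclose(C, np.corrcoef(X, rowvar=False))
```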

Here are a few facts.

• The elements of a correlation matrix lie on the interval $[-1, 1]$.
• The eigenvalues of a correlation matrix lie on the interval $[0,n]$.
• The eigenvalues of a correlation matrix sum to $n$ (since the eigenvalues of a matrix sum to its trace).
• The maximal possible determinant of a correlation matrix is $1$.
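These facts are easy to verify numerically; here is a quick sanity check on a correlation matrix built from random data (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
C = np.corrcoef(rng.standard_normal((200, n)), rowvar=False)

evals = np.linalg.eigvalsh(C)
assert np.all(np.abs(C) <= 1 + 1e-12)               # elements in [-1, 1]
assert evals.min() > -1e-12 and evals.max() <= n    # eigenvalues in [0, n]
assert np.isclose(evals.sum(), n)                   # eigenvalues sum to trace = n
assert np.linalg.det(C) <= 1 + 1e-12                # determinant at most 1
```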

It is usually not easy to tell whether a given matrix is a correlation matrix. For example, the matrix

$A = \begin{bmatrix} 1 & 1 & 0\\ 1 & 1 & 1\\ 0 & 1 & 1 \end{bmatrix}$

is not a correlation matrix: it has eigenvalues $-0.4142$, $1.0000$, $2.4142$. The only value of $a_{13}$ and $a_{31}$ that makes $A$ a correlation matrix is $1$.
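The eigenvalues quoted above are easily reproduced (a short NumPy check):

```python
import numpy as np

A = np.array([[1., 1., 0.],
              [1., 1., 1.],
              [0., 1., 1.]])
evals = np.linalg.eigvalsh(A)
print(evals)   # approximately [-0.4142, 1.0000, 2.4142]

# The negative eigenvalue 1 - sqrt(2) shows A is not positive semidefinite.
assert np.isclose(evals[0], 1 - np.sqrt(2))
```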

A particularly simple class of correlation matrices is the one-parameter class $A_n$ with every off-diagonal element equal to $w$, illustrated for $n = 3$ by

$A_3 = \begin{bmatrix} 1 & w & w\\ w & 1 & w\\ w & w & 1 \end{bmatrix}.$

The matrix $A_n$ is a correlation matrix for $-1/(n-1) \le w \le 1$.
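This bound is easy to confirm numerically: the eigenvalues of $A_n$ are $1+(n-1)w$, once, and $1-w$, with multiplicity $n-1$. A sketch:

```python
import numpy as np

def A_n(n, w):
    """The n x n matrix with unit diagonal and every off-diagonal element w."""
    return (1 - w) * np.eye(n) + w * np.ones((n, n))

n = 4
# A_n is positive semidefinite precisely for -1/(n-1) <= w <= 1.
for w, expect_psd in [(-1 / (n - 1), True), (0.5, True), (1.0, True),
                      (-0.5, False), (1.1, False)]:
    min_eval = np.linalg.eigvalsh(A_n(n, w)).min()
    assert (min_eval >= -1e-12) == expect_psd
```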

In some applications it is required to generate random correlation matrices, for example in Monte Carlo simulations in finance. A method for generating random correlation matrices with a specified eigenvalue distribution was proposed by Bendel and Mickey (1978); Davies and Higham (2000) give improvements to the method. This method is implemented in the MATLAB function gallery('randcorr').
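In Python, a much cruder way to generate a random correlation matrix is to scale a random positive definite matrix. The sketch below does this for illustration only; unlike the Bendel and Mickey method behind gallery('randcorr'), it gives no control over the eigenvalue distribution:

```python
import numpy as np

def random_corr(n, rng):
    """Generate a random n x n correlation matrix by scaling B B^T.

    Crude sketch only: the eigenvalue distribution cannot be specified,
    in contrast to the Bendel-Mickey / Davies-Higham method.
    """
    B = rng.standard_normal((n, n + 1))
    V = B @ B.T                   # random positive definite matrix
    d = np.sqrt(np.diag(V))
    return V / np.outer(d, d)     # scale the diagonal to 1

C = random_corr(5, np.random.default_rng(2))
assert np.allclose(np.diag(C), 1)
assert np.linalg.eigvalsh(C).min() >= -1e-10
```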

Obtaining or estimating correlations can be difficult in practice. In finance, market data is often missing or stale; different assets may be sampled at different time points (e.g., some daily and others weekly); and the matrices may be generated from different parametrized models that are not consistent. Similar problems arise in many other applications. As a result, correlation matrices obtained in practice may not be positive semidefinite, which can lead to undesirable consequences such as an investment portfolio with negative risk.

In risk management and insurance, matrix entries may be estimated, prescribed by regulations or assigned by expert judgement, but some entries may be unknown.

Two problems therefore commonly arise in connection with correlation matrices.

## Nearest Correlation Matrix

Here, we have an approximate correlation matrix $A$ that has some negative eigenvalues and we wish to replace it by the nearest correlation matrix. The natural choice of norm is the Frobenius norm, $\|A\|_F = \bigl(\sum_{i,j} a_{ij}^2\bigr)^{1/2}$, so we solve the problem

$\min \{ \, \|A-C\|_F: C~\textrm{is a correlation matrix} \,\}.$

We may also have a requirement that certain elements of $C$ remain fixed. And we may want to weight some elements more than others, by using a weighted Frobenius norm. These are convex optimization problems and have a unique solution that can be computed using the alternating projections method (Higham, 2002) or a Newton algorithm (Qi and Sun, 2006; Borsdorf and Higham, 2010).
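The alternating projections method can be sketched in a few lines of NumPy. This is a minimal, unweighted version with no fixed elements, alternating between the positive semidefinite cone and the set of symmetric matrices with unit diagonal, with Dykstra's correction applied to the semidefinite projection as in Higham (2002); the Newton method is much faster in practice:

```python
import numpy as np

def nearest_corr(A, tol=1e-8, max_iter=1000):
    """Nearest correlation matrix by alternating projections (minimal sketch)."""
    Y = A.copy()
    dS = np.zeros_like(A)               # Dykstra's correction
    for _ in range(max_iter):
        R = Y - dS
        # Project onto the positive semidefinite cone.
        evals, evecs = np.linalg.eigh(R)
        X = (evecs * np.maximum(evals, 0)) @ evecs.T
        dS = X - R
        # Project onto the symmetric matrices with unit diagonal.
        Y_new = X.copy()
        np.fill_diagonal(Y_new, 1)
        if np.linalg.norm(Y_new - Y, "fro") < tol * np.linalg.norm(Y_new, "fro"):
            return Y_new
        Y = Y_new
    return Y

A = np.array([[1., 1., 0.], [1., 1., 1.], [0., 1., 1.]])
C = nearest_corr(A)   # positive semidefinite, with unit diagonal
```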

Another variation requires $C$ to have factor structure, which means that the off-diagonal agrees with that of a rank-$k$ matrix for some given $k$ (Borsdorf, Higham, and Raydan, 2010). Yet another variation imposes a constraint that $C$ has a certain rank or a rank no larger than a certain value. These problems are non-convex, because of the objective function and the rank constraint, respectively.

Another approach that can be used to restore definiteness, although it does not in general produce the nearest correlation matrix, is shrinking, which constructs a convex linear combination $S(\alpha) = \alpha M + (1-\alpha)A$ of the given matrix $A$ and a target correlation matrix $M$, taking the smallest $\alpha \in [0,1]$ for which $S(\alpha)$ is positive semidefinite (Higham, Strabić, and Šego, 2016). Shrinking can readily incorporate fixed blocks and weighting.
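A minimal sketch of shrinking, assuming the identity matrix as the target and finding the optimal $\alpha$ by bisection (the cited paper gives more efficient methods):

```python
import numpy as np

def shrink(A, M=None, tol=1e-10):
    """Smallest alpha in [0, 1] with alpha*M + (1 - alpha)*A positive
    semidefinite, found by bisection; the target M defaults to the identity."""
    if M is None:
        M = np.eye(A.shape[0])
    is_psd = lambda X: np.linalg.eigvalsh(X).min() >= -1e-12
    if is_psd(A):
        return 0.0
    lo, hi = 0.0, 1.0        # alpha = 1 gives the target M, which is PSD
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if is_psd(mid * M + (1 - mid) * A):
            hi = mid
        else:
            lo = mid
    return hi

A = np.array([[1., 1., 0.], [1., 1., 1.], [0., 1., 1.]])
alpha = shrink(A)    # approximately 1 - 1/sqrt(2), about 0.2929
```

With the identity as target, the eigenvalues of $S(\alpha)$ are $\alpha + (1-\alpha)\lambda_i$, so for this $A$ the smallest feasible $\alpha$ is $1 - 1/\sqrt{2}$.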

## Correlation Matrix Completion

Here, we have a partially specified matrix and we wish to complete it, that is, fill in the missing elements in order to obtain a correlation matrix. It is known that a completion exists for every choice of the specified entries if and only if the associated graph is chordal (Grone et al., 1984). In general, if there is one completion there are many, but there is a unique one of maximal determinant, which is elegantly characterized by the property that the inverse contains zeros in the positions of the unspecified entries.
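For a single unspecified pair of entries the maximal determinant completion can be found directly. This sketch, with hypothetical numbers, locates the maximizer by grid search and verifies the zero-in-the-inverse characterization:

```python
import numpy as np

# Partial correlation matrix with the (1,3) and (3,1) entries unspecified.
a12, a23 = 0.5, 0.3

def completed(x):
    return np.array([[1., a12, x], [a12, 1., a23], [x, a23, 1.]])

# Maximize the determinant over the free entry x by a fine grid search.
xs = np.linspace(-0.99, 0.99, 19801)
x_best = xs[np.argmax([np.linalg.det(completed(x)) for x in xs])]

# For this pattern the maximizer is x = a12 * a23, and the inverse of the
# completed matrix has a zero in the unspecified (1,3) position.
assert abs(x_best - a12 * a23) < 1e-3
assert abs(np.linalg.inv(completed(x_best))[0, 2]) < 1e-2
```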

## References

This is a minimal set of references, and they cite further useful references.