The second difference matrix is the tridiagonal matrix $A_n$ with diagonal elements $2$ and sub- and superdiagonal elements $-1$:

$$A_n = \begin{bmatrix}
         2 & -1 &        &    \\
        -1 &  2 & \ddots &    \\
           & \ddots & \ddots & -1 \\
           &        & -1     &  2
       \end{bmatrix} \in \mathbb{R}^{n\times n}.$$

It arises when a second derivative is approximated by the central second difference $y''(x) \approx \bigl(y(x+h) - 2y(x) + y(x-h)\bigr)/h^2$. (Accordingly, the second difference matrix is sometimes defined as $-A_n$.) In MATLAB, $A_n$ can be generated by `gallery('tridiag',n)`, which returns it as a sparse matrix.
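The post uses MATLAB, but for illustration the matrix is easy to build in plain Python (a minimal sketch, standard library only, as a dense list of lists):

```python
# Build the n-by-n second difference matrix tridiag(-1, 2, -1)
# as a dense list of lists (illustrative sketch, not the post's code).
def second_difference(n):
    return [[2 if i == j else -1 if abs(i - j) == 1 else 0
             for j in range(n)] for i in range(n)]

A4 = second_difference(4)
# A4 == [[2, -1, 0, 0], [-1, 2, -1, 0], [0, -1, 2, -1], [0, 0, -1, 2]]
```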

This is Gil Strang’s favorite matrix. The photo, from his home page, shows a birthday cake representation of the matrix.

The second difference matrix is symmetric positive definite. The easiest way to see this is to define the full rank rectangular matrix

$$B_n = \begin{bmatrix}
          1 &        &        \\
         -1 &  1     &        \\
            & \ddots & \ddots \\
            &        & -1 & 1 \\
            &        &    & -1
       \end{bmatrix} \in \mathbb{R}^{(n+1)\times n}$$

and note that $A_n = B_n^T B_n$. The factorization corresponds to representing a central difference as the product of a forward difference and a backward difference. Other properties of the second difference matrix are that it is diagonally dominant, a Toeplitz matrix, and an $M$-matrix.
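The factorization can be checked numerically. A minimal Python sketch (assuming the bidiagonal factor has $1$ on the diagonal and $-1$ on the subdiagonal, as described above):

```python
# Verify that B^T B reproduces the second difference matrix, where B is the
# (n+1)-by-n bidiagonal matrix with b_ii = 1 and b_{i+1,i} = -1.
def forward_difference(n):
    B = [[0] * n for _ in range(n + 1)]
    for i in range(n):
        B[i][i] = 1
        B[i + 1][i] = -1
    return B

def btb(n):
    B = forward_difference(n)
    # (B^T B)_{ij} = sum_k B[k][i] * B[k][j]
    return [[sum(row[i] * row[j] for row in B) for j in range(n)]
            for i in range(n)]

# btb(n) has diagonal 2 and sub/superdiagonal -1.
```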

## Cholesky Factorization

In an LU factorization $A_n = L_nU_n$ the pivots are $2$, $3/2$, $4/3$, …, $(n+1)/n$. Hence the pivots form a decreasing sequence tending to $1$ as $n \to \infty$. The diagonal of the Cholesky factor contains the square roots of the pivots. This means that in the Cholesky factorization $A_n = R_n^TR_n$ with $R_n$ upper bidiagonal, $r_{ii} = \bigl((i+1)/i\bigr)^{1/2}$ and $r_{ii} \to 1$ as $i \to \infty$.
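The pivot sequence can be confirmed by running Gaussian elimination without pivoting; a sketch in exact rational arithmetic (using Python's `fractions`, an illustrative choice):

```python
# Gaussian elimination without pivoting on the second difference matrix;
# the pivots should come out as 2, 3/2, 4/3, ..., (n+1)/n.
from fractions import Fraction

def lu_pivots(n):
    A = [[Fraction(2 if i == j else -1 if abs(i - j) == 1 else 0)
          for j in range(n)] for i in range(n)]
    pivots = []
    for k in range(n):
        pivots.append(A[k][k])
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
    return pivots

# lu_pivots(4) gives [2, 3/2, 4/3, 5/4].
```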

## Determinant, Inverse, Condition Number

Since the determinant is the product of the pivots, $\det(A_n) = n+1$.

The inverse of $A_n$ is full, with $(i,j)$ element $i(n+1-j)/(n+1)$ for $i \le j$. For example,

$$A_3^{-1} = \frac{1}{4}\begin{bmatrix}
                 3 & 2 & 1 \\
                 2 & 4 & 2 \\
                 1 & 2 & 3
               \end{bmatrix}.$$
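The stated formula for the inverse can be verified by forming the product with $A_n$; a sketch in exact arithmetic (illustrative helper names):

```python
# Check that the matrix with (i, j) element i*(n+1-j)/(n+1) for i <= j
# (extended symmetrically) really is the inverse of the second difference
# matrix, by multiplying and comparing with the identity.
from fractions import Fraction

def claimed_inverse(n):
    def x(i, j):  # 1-based indices, valid for i <= j
        return Fraction(i * (n + 1 - j), n + 1)
    return [[x(min(i, j), max(i, j)) for j in range(1, n + 1)]
            for i in range(1, n + 1)]

def times_A(X, n):
    A = [[2 if i == j else -1 if abs(i - j) == 1 else 0
          for j in range(n)] for i in range(n)]
    return [[sum(A[i][k] * X[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# times_A(claimed_inverse(n), n) should be the identity matrix.
```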

The $2$-norm condition number satisfies $\kappa_2(A_n) \sim 4(n+1)^2/\pi^2$ (as follows from the formula (1) below for the eigenvalues).
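A quick sketch checks this asymptotic against the eigenvalue formula $\lambda_k = 4\sin^2\bigl(k\pi/(2(n+1))\bigr)$ given below:

```python
# Ratio of extreme eigenvalues of the second difference matrix, computed
# from the explicit eigenvalue formula, compared with 4(n+1)^2/pi^2.
import math

def cond2(n):
    lam = [4 * math.sin(k * math.pi / (2 * (n + 1))) ** 2
           for k in range(1, n + 1)]
    return max(lam) / min(lam)

n = 1000
ratio = cond2(n) / (4 * (n + 1) ** 2 / math.pi ** 2)
# ratio approaches 1 as n grows
```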

## Eigenvalues and Eigenvectors

The eigenvalues of $A_n$ are

$$\lambda_k = 2 - 2\cos(k\theta) = 4\sin^2\left(\frac{k\theta}{2}\right), \quad k = 1\colon n, \qquad (1)$$

where $\theta = \pi/(n+1)$, with corresponding eigenvector

$$v_k = \begin{bmatrix} \sin(k\theta) & \sin(2k\theta) & \dots & \sin(nk\theta) \end{bmatrix}^T.$$

The matrix $Q$ with

$$q_{ij} = \left(\frac{2}{n+1}\right)^{1/2} \sin\left(\frac{ij\pi}{n+1}\right)$$

is therefore an orthogonal eigenvector matrix for $A_n$: $Q^TA_nQ = \mathrm{diag}(\lambda_k)$.
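The eigenpairs above can be verified directly, as in this minimal sketch that checks $A_nv_k = \lambda_k v_k$ for each $k$:

```python
# Verify the stated eigenpairs of the second difference matrix:
# lambda_k = 4 sin^2(k*theta/2), v_k = (sin(i*k*theta)), theta = pi/(n+1).
import math

def check_eigenpairs(n, tol=1e-12):
    theta = math.pi / (n + 1)
    A = [[2 if i == j else -1 if abs(i - j) == 1 else 0
          for j in range(n)] for i in range(n)]
    for k in range(1, n + 1):
        lam = 4 * math.sin(k * theta / 2) ** 2
        v = [math.sin(i * k * theta) for i in range(1, n + 1)]
        Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        if any(abs(Av[i] - lam * v[i]) > tol for i in range(n)):
            return False
    return True
```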

## Variations

Various modifications of the second difference matrix arise and similar results can be derived. For example, consider the matrix $T_n$ obtained by changing the $(n,n)$ element of $A_n$ to $1$:

$$T_n = \begin{bmatrix}
          2 & -1 &        &    \\
         -1 &  2 & \ddots &    \\
            & \ddots & \ddots & -1 \\
            &        & -1     &  1
       \end{bmatrix}.$$

It can be shown that $T_n^{-1}$ has $(i,j)$ element $\min(i,j)$ and eigenvalues $4\sin^2\left(\frac{(2k-1)\pi}{2(2n+1)}\right)$, $k = 1\colon n$.
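A sketch checking the inverse claim, by multiplying $T_n$ by the matrix with $(i,j)$ element $\min(i,j)$:

```python
# Multiply T_n (second difference matrix with (n, n) element set to 1)
# by the matrix with (i, j) element min(i, j); the result should be I.
def check_Tn_inverse(n):
    T = [[2 if i == j else -1 if abs(i - j) == 1 else 0
          for j in range(n)] for i in range(n)]
    T[n - 1][n - 1] = 1
    # min(i, j) in 1-based indexing is min(i, j) + 1 for 0-based i, j
    M = [[min(i, j) + 1 for j in range(n)] for i in range(n)]
    P = [[sum(T[i][k] * M[k][j] for k in range(n)) for j in range(n)]
         for i in range(n)]
    return P == [[1 if i == j else 0 for j in range(n)] for i in range(n)]
```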

If we perturb the $(1,1)$ element of $T_n$ to $1$, we obtain a singular matrix, but suppose we perturb the $(1,1)$ element to $3$:

$$V_n = \begin{bmatrix}
          3 & -1 &        &    \\
         -1 &  2 & \ddots &    \\
            & \ddots & \ddots & -1 \\
            &        & -1     &  1
       \end{bmatrix}.$$

The inverse is $V_n^{-1} = \frac{1}{2}G_n$, where $G_n$ with $(i,j)$ element $2\min(i,j)-1$ is a matrix of Givens, and the eigenvalues are $4\sin^2\left(\frac{(2k-1)\pi}{4n}\right)$, $k = 1\colon n$.
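The same multiply-and-compare check works for the Givens matrix; a sketch in exact arithmetic:

```python
# Check that V_n (diagonal 3, 2, ..., 2, 1 with sub/superdiagonal -1)
# times (1/2) * G, where g_ij = 2*min(i, j) - 1, gives the identity.
from fractions import Fraction

def check_givens_inverse(n):
    V = [[Fraction(2 if i == j else -1 if abs(i - j) == 1 else 0)
          for j in range(n)] for i in range(n)]
    V[0][0], V[n - 1][n - 1] = 3, 1
    G = [[2 * (min(i, j) + 1) - 1 for j in range(n)] for i in range(n)]
    P = [[sum(V[i][k] * G[k][j] for k in range(n)) / 2 for j in range(n)]
         for i in range(n)]
    return P == [[1 if i == j else 0 for j in range(n)] for i in range(n)]
```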

## Notes

The factorization $A_n = B_n^TB_n$ is noted by Strang (2012).

For a derivation of the eigenvalues and eigenvectors see Todd (1977, p. 155 ff.). For the eigenvalues of $T_n$ see Fortiana and Cuadras (1997). Givens’s matrix is described by Newman and Todd (1958) and Todd (1977).

## References

This is a minimal set of references, which contain further useful references within.

- J. Fortiana and C. N. Cuadras, A Family of Matrices, the Discretized Brownian Bridge, and Distance-Based Regression, Linear Algebra Appl. 264, 173–188, 1997.
- Morris Newman and John Todd, The Evaluation of Matrix Inversion Programs, J. Soc. Indust. Appl. Math. 6(4), 466–476, 1958.
- Gilbert Strang, Essays in Linear Algebra, Wellesley-Cambridge Press, Wellesley, MA, USA, 2012. Chapter A.3: “My Favorite Matrix”.
- John Todd, Basic Numerical Mathematics, Vol. 2: Numerical Algebra, Birkhäuser, Basel, and Academic Press, New York, 1977.

## Related Blog Posts

- What Is a Diagonally Dominant Matrix? (2021)
- What Is a Sparse Matrix? (2020)
- What Is a Symmetric Positive Definite Matrix? (2020)
- What Is a Tridiagonal Matrix? (2022)
- What Is an M-Matrix? (2021)

This article is part of the “What Is” series, available from https://nhigham.com/index-of-what-is-articles/ and in PDF form from the GitHub repository https://github.com/higham/what-is.