Let $U$ and $V$ be Banach spaces (complete normed vector spaces). The Fréchet derivative of a function $f\colon U \to V$ at $x \in U$ is a linear mapping $L = L(x,\cdot)\colon U \to V$ such that

$$f(x+h) - f(x) - L(x,h) = o(\|h\|)$$

for all $h \in U$. The notation $L(x,h)$ should be read as “the Fréchet derivative of $f$ at $x$ in the direction $h$”. The Fréchet derivative may not exist, but if it does exist then it is unique. When $U = V = \mathbb{R}$, the Fréchet derivative is just the usual derivative of a scalar function: $L(x,h) = f'(x)h$.

As a simple example, consider $U = V = \mathbb{R}^{n\times n}$ and $f(A) = A^2$. From the expansion

$$(A+E)^2 - A^2 = AE + EA + E^2,$$

we deduce that $L(A,E) = AE + EA$, the first order part of the expansion. If $A$ commutes with $E$ then $L(A,E) = 2AE$.

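This expansion is easy to check numerically: for a small perturbation $E$, the difference $(A+E)^2 - A^2 - (AE+EA)$ is $E^2$, of order $\|E\|^2$. The sketch below (plain NumPy, with arbitrary test matrices chosen here for illustration) is a sanity check, not part of the original discussion.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
E = 1e-6 * rng.standard_normal((n, n))

# Fréchet derivative of f(A) = A^2 in the direction E.
L = A @ E + E @ A

# (A+E)^2 - A^2 = L(A,E) + E^2, so the residual is O(||E||^2),
# i.e. of order ||E|| after dividing by ||E||.
diff = (A + E) @ (A + E) - A @ A
err = np.linalg.norm(diff - L) / np.linalg.norm(E)
print(err)
```
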
More generally, it can be shown that if $f$ has the power series expansion $f(x) = \sum_{i=0}^{\infty} a_i x^i$ with radius of convergence $r$ then for $A, E \in \mathbb{C}^{n\times n}$ with $\|A\| < r$, the Fréchet derivative is

$$L(A,E) = \sum_{i=1}^{\infty} a_i \sum_{j=1}^{i} A^{j-1} E A^{i-j}.$$

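For instance, for $f(A) = A^3$ the inner sum gives $L(A,E) = EA^2 + AEA + A^2E$, which can be compared against a finite difference; the sketch below (arbitrary test data, illustrative only) does this.

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 4))
E = 1e-6 * rng.standard_normal((4, 4))

# Power series formula for f(A) = A^3: sum_{j=1}^{3} A^{j-1} E A^{3-j}.
L = E @ A @ A + A @ E @ A + A @ A @ E

# (A+E)^3 - A^3 agrees with L(A,E) to first order in E.
diff = np.linalg.matrix_power(A + E, 3) - np.linalg.matrix_power(A, 3)
err = np.linalg.norm(diff - L) / np.linalg.norm(E)
print(err)  # of order ||E||
```
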
An explicit formula for the Fréchet derivative of the matrix exponential, $e^A$, is

$$L(A,E) = \int_0^1 e^{A(1-s)} E e^{As} \, ds.$$

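SciPy computes this Fréchet derivative directly via `scipy.linalg.expm_frechet`, which can be checked against a finite difference of `expm`; the following is a minimal sketch assuming SciPy is available, with arbitrary test matrices.

```python
import numpy as np
from scipy.linalg import expm, expm_frechet

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
E = rng.standard_normal((4, 4))

# expm_frechet returns both e^A and the Fréchet derivative L(A, E).
expm_A, L = expm_frechet(A, E)

# First-order check: (e^{A+tE} - e^A)/t -> L(A, E) as t -> 0.
t = 1e-6
fd = (expm(A + t * E) - expm(A)) / t
rel = np.linalg.norm(fd - L) / np.linalg.norm(L)
print(rel)  # small, of order t
```
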
Like the scalar derivative, the Fréchet derivative satisfies sum and product rules: if $g$ and $h$ are Fréchet differentiable at $A$ then

$$\begin{aligned} f = \alpha g + \beta h &\;\Rightarrow\; L_f(A,E) = \alpha L_g(A,E) + \beta L_h(A,E),\\ f = gh &\;\Rightarrow\; L_f(A,E) = L_g(A,E)\,h(A) + g(A)\,L_h(A,E). \end{aligned}$$

A key requirement of the definition of the Fréchet derivative is that $L(x,h)$ must satisfy the defining equation for all $h$. This is what makes the Fréchet derivative different from the Gâteaux derivative (or directional derivative), which is the mapping $G(x,\cdot)$ given by

$$G(x,h) = \lim_{t\to 0} \frac{f(x+th) - f(x)}{t}.$$

Here, the limit only needs to exist in the particular direction $h$. If the Fréchet derivative exists at $x$ then it is equal to the Gâteaux derivative, but the converse is not true.

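When the Fréchet derivative does exist, the Gâteaux quotient converges to it as $t \to 0$; a minimal numerical sketch for $f(A) = A^2$ (arbitrary test data, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
H = rng.standard_normal((3, 3))

def f(X):
    return X @ X

# Gâteaux quotient (f(A + tH) - f(A)) / t for decreasing t;
# it converges to the Fréchet derivative A H + H A as t -> 0.
for t in [1e-2, 1e-4, 1e-6]:
    G = (f(A + t * H) - f(A)) / t
    print(t, np.linalg.norm(G - (A @ H + H @ A)))
```
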
A natural definition of the condition number of $f$ is

$$\operatorname{cond}(f,x) = \lim_{\epsilon\to 0} \sup_{\|\Delta x\| \le \epsilon\|x\|} \frac{\|f(x+\Delta x) - f(x)\|}{\epsilon\|f(x)\|},$$

and it can be shown that $\operatorname{cond}(f,x)$ is given in terms of the Fréchet derivative by

$$\operatorname{cond}(f,x) = \frac{\|L(x)\|\,\|x\|}{\|f(x)\|},$$

where

$$\|L(x)\| = \sup_{h\ne 0} \frac{\|L(x,h)\|}{\|h\|}.$$

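As a concrete illustration (not from the original discussion), for $f(A) = A^2$ we have $\operatorname{vec}(L(A,E)) = (I \otimes A + A^T \otimes I)\operatorname{vec}(E)$, so in the Frobenius norm $\|L(A)\|$ is the 2-norm of this Kronecker matrix and the condition number can be computed explicitly:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
A = rng.standard_normal((n, n))
I = np.eye(n)

# vec(A E + E A) = (I ⊗ A + A^T ⊗ I) vec(E), so the Frobenius-norm
# operator norm of L(A) is the spectral norm of this Kronecker matrix.
K = np.kron(I, A) + np.kron(A.T, I)
normL = np.linalg.norm(K, 2)

# cond(f, A) = ||L(A)|| ||A||_F / ||A^2||_F.
cond = normL * np.linalg.norm(A, 'fro') / np.linalg.norm(A @ A, 'fro')
print(cond)
```

Taking $E = A$ gives $L(A,A) = 2A^2$, which shows $\operatorname{cond}(f,A) \ge 2$ for this $f$.
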
For matrix functions, the Fréchet derivative has a number of interesting properties, one of which is that the eigenvalues of $L(A)$ are the divided differences

$$f[\lambda_i,\lambda_j] = \begin{cases} \dfrac{f(\lambda_i)-f(\lambda_j)}{\lambda_i-\lambda_j}, & \lambda_i \ne \lambda_j,\\[4pt] f'(\lambda_i), & \lambda_i = \lambda_j, \end{cases}$$

for $i,j = 1,\dots,n$, where the $\lambda_i$ are the eigenvalues of $A$. We can check this formula in the case $f(A) = A^2$. Let $(\lambda,u)$ be an eigenpair of $A$ and let $v$ be a left eigenvector of $A$ with eigenvalue $\mu$, so that $Au = \lambda u$ and $v^*A = \mu v^*$, and let $E = uv^*$. Then

$$L(A,uv^*) = Auv^* + uv^*A = \lambda uv^* + \mu uv^* = (\lambda+\mu)uv^*.$$

So $uv^*$ is an eigenvector of $L(A)$ with eigenvalue $\lambda+\mu$. But $f[\lambda,\mu] = \dfrac{\lambda^2-\mu^2}{\lambda-\mu} = \lambda+\mu$ (whether or not $\lambda$ and $\mu$ are distinct).

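This eigenvalue property can also be verified numerically for $f(A) = A^2$: the Kronecker matrix $I \otimes A + A^T \otimes I$ representing $L(A)$ should have eigenvalues $\lambda_i + \lambda_j$. A minimal check (arbitrary test data, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3
A = rng.standard_normal((n, n))

# Kronecker representation of L(A, E) = A E + E A for f(A) = A^2.
K = np.kron(np.eye(n), A) + np.kron(A.T, np.eye(n))

lam = np.linalg.eigvals(A)
eigK = np.linalg.eigvals(K)

# For f(x) = x^2 the divided differences are
# (lambda_i^2 - lambda_j^2)/(lambda_i - lambda_j) = lambda_i + lambda_j.
dd = [li + lj for li in lam for lj in lam]

# Each divided difference should match an eigenvalue of K.
err = max(np.min(np.abs(eigK - d)) for d in dd)
print(err)  # close to zero
```
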
## References

This is a minimal set of references, which contain further useful references within.

- Kendall Atkinson and Weimin Han, Theoretical Numerical Analysis: A Functional Analysis Framework, Springer-Verlag, New York, 2009. (Section 5.3).
- Nicholas J. Higham, Functions of Matrices: Theory and Computation, Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 2008. (Chapter 3).
- James Ortega and Werner Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Society for Industrial and Applied Mathematics, Philadelphia, PA, 2000. (Section 3.1).

## Related Blog Posts

- What Is a Condition Number? (2020).
- What Is a Matrix Function? (2020).

This article is part of the “What Is” series, available from https://nhigham.com/category/what-is and in PDF form from the GitHub repository https://github.com/higham/what-is.