The softmax function takes as input an $n$-vector $x = (x_1, \dots, x_n)^T$ and returns a vector $g(x)$ with elements

$$g_j(x) = \frac{e^{x_j}}{\sum_{i=1}^n e^{x_i}}, \quad j = 1,\dots,n.$$

The elements of $g$ are all between $0$ and $1$ and they sum to 1, so $g$ can be regarded as a vector of probabilities. Softmax is a key function in machine learning algorithms.
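As a concrete illustration, here is a minimal NumPy sketch of this definition (the function name and example vector are my own, and this naive form is refined below):

```python
import numpy as np

def softmax_naive(x):
    """Softmax straight from the definition: exp(x_j) / sum_i exp(x_i)."""
    e = np.exp(x)          # exponentials of the components
    return e / e.sum()     # normalize so the elements sum to 1

x = np.array([0.5, -1.2, 3.0])
g = softmax_naive(x)
print(g, g.sum())          # elements in (0, 1); sum is 1 up to rounding error
```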
Softmax is the gradient vector of the log-sum-exp function

$$f(x) = \log \sum_{i=1}^n e^{x_i}.$$

This function is an approximation to the largest element, $\max_i x_i$, of the vector $x$, as it lies between $\max_i x_i$ and $\max_i x_i + \log n$.
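For concreteness, a direct (unshifted) evaluation of $f$ and a numerical check of these bounds might look as follows in NumPy (the function name and test vector are mine):

```python
import numpy as np

def logsumexp_naive(x):
    """log-sum-exp straight from the definition: log(sum_i exp(x_i))."""
    return np.log(np.exp(x).sum())

x = np.array([0.5, -1.2, 3.0])
f = logsumexp_naive(x)
# f lies between max(x) and max(x) + log(n)
print(x.max() <= f <= x.max() + np.log(x.size))   # True
```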
A problem with numerical evaluation of log-sum-exp and softmax is that overflow is likely even for quite modest values of the $x_i$ because of the exponentials, even though $g(x)$ cannot overflow and $f(x)$ is very unlikely to do so.
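A quick way to see the problem in NumPy (the specific numbers are my own; in IEEE double precision exp overflows for arguments above about 709, and in half precision already above about 11):

```python
import numpy as np

x = np.array([800.0, 1.0])
print(np.exp(x))             # [inf, 2.718...]: overflow (NumPy issues a warning)
print(np.exp(x).sum())       # inf, so the naive softmax returns [nan, 0]
print(np.exp(x - x.max()))   # [1, 0]: shifting by max(x) first avoids the overflow
```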
A standard solution is to incorporate a shift, $a$, and use the formulas

$$f(x) = a + \log \sum_{i=1}^n e^{x_i - a} \qquad (1)$$

and

$$g_j(x) = \frac{e^{x_j - a}}{\sum_{i=1}^n e^{x_i - a}}, \qquad (2)$$

where $a$ is usually set to $\max_i x_i$.
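A NumPy sketch of the shifted formulas (the function names are mine; library routines such as scipy.special.logsumexp apply the same kind of shift):

```python
import numpy as np

def logsumexp_shifted(x):
    """Formula (1): f(x) = a + log(sum(exp(x - a))) with a = max(x)."""
    a = x.max()
    return a + np.log(np.exp(x - a).sum())

def softmax_shifted(x):
    """Formula (2): g_j(x) = exp(x_j - a) / sum(exp(x_i - a)) with a = max(x)."""
    a = x.max()
    e = np.exp(x - a)        # largest exponent is 0, so no overflow
    return e / e.sum()

x = np.array([800.0, 1.0])
print(logsumexp_shifted(x))  # 800.0, finite
print(softmax_shifted(x))    # [1., 0.], finite
```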
Another formula for softmax is obtained by moving the denominator into the numerator:

$$g_j(x) = \exp\left( x_j - a - \log \sum_{i=1}^n e^{x_i - a} \right). \qquad (3)$$

This formula is used in various codes, including in the SciPy 1.4.1 function softmax.
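In the same notation, formula (3) can be sketched as follows (again the function name is mine; this is the variant whose accuracy is questioned below, not a recommendation):

```python
import numpy as np

def softmax_div_free(x):
    """Formula (3): g_j(x) = exp(x_j - a - log(sum(exp(x_i - a)))), a = max(x).
    The division in (2) is replaced by a subtraction inside the exponential."""
    a = x.max()
    s = np.log(np.exp(x - a).sum())   # shifted log-sum-exp term
    return np.exp(x - a - s)
```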
How accurate are these formulas when evaluated in floating-point arithmetic? To my knowledge, this question has not been addressed in the literature, but it is particularly important given the growing use of low precision arithmetic in machine learning. Two questions arise. First, is there any difference between the accuracy of the formulas (2) and (3) for $g$? Second, in (1) and (3), $a$ is added to a nonnegative log term, so when $a$ is negative can there be damaging subtractive cancellation?
In a recent EPrint with Pierre Blanchard and Des Higham I have investigated these questions using rounding error analysis and analysis of the conditioning of the log-sum-exp and softmax problems. In a nutshell, our findings are that while cancellation can happen, it is not a problem: the shifted formulas (1) and (2) can be safely used.
However, the alternative softmax formula (3) is not recommended, as its rounding error bounds are larger than for (2) and we have found it to produce larger errors in practice.
Here is an example from training an artificial neural network using the MATLAB Deep Learning Toolbox. The network is trained to classify handwritten digits from the widely used MNIST data set. The following figure shows the sum of the computed elements of the softmax vector for 2000 vectors extracted from the training data, where $g$
was computed in IEEE half precision arithmetic. The sum should be 1. The red circles are for formula (2) and the blue crosses are for the division-free formula (3). Clearly, (2) gives a better approximation to a vector of probabilities (in the sense of respecting the constraint that probabilities sum to unity); the actual errors in each vector component are also smaller for (2).
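The MATLAB experiment itself is not reproduced here, but the following rough NumPy sketch (entirely my own, using random test vectors rather than MNIST data, and with float16 ufuncs that only approximate true IEEE half precision arithmetic) indicates how one might compare the deviation of the computed softmax sums from 1 for formulas (2) and (3):

```python
import numpy as np

rng = np.random.default_rng(0)
h = np.float16

def softmax2(x):                       # formula (2) in half precision
    e = np.exp(x - x.max()).astype(h)
    return (e / e.sum(dtype=h)).astype(h)

def softmax3(x):                       # formula (3) in half precision
    s = np.log(np.exp(x - x.max()).astype(h).sum(dtype=h)).astype(h)
    return np.exp((x - x.max() - s).astype(h)).astype(h)

err2, err3 = [], []
for _ in range(2000):
    x = rng.standard_normal(10).astype(h)
    err2.append(abs(float(softmax2(x).sum()) - 1))
    err3.append(abs(float(softmax3(x).sum()) - 1))
# The analysis above predicts larger deviations from 1 for formula (3).
print(max(err2), max(err3))
```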