Hi Rich. What you are describing sounds like the negative of a transition intensity matrix, which has zero row sums and arises as the generator of a continuous-time Markov chain.

Nice post. There is an important class of diagonally dominant (DD) matrices that just misses being M-matrices. I’ll refer to them as Q matrices, the name bestowed upon them by probabilists in their study of continuous-time Markov chains. As with M-matrices, the diagonal elements are positive and the off-diagonal elements are non-positive, but these matrices are singular. Are you aware of a specific name for this class of DD matrices other than Q matrices?
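A minimal numerical illustration of such a matrix (my own example, not from the comment): positive diagonal, non-positive off-diagonals, and zero row sums, which forces singularity because $Ae = 0$ for $e$ the vector of ones.

```python
import numpy as np

# Example of the class described above: diagonally dominant,
# positive diagonal, non-positive off-diagonals, zero row sums.
# It is the negative of a CTMC generator ("Q matrix").
A = np.array([[ 2.0, -1.0, -1.0],
              [-0.5,  1.5, -1.0],
              [-3.0,  0.0,  3.0]])

row_sums = A.sum(axis=1)      # all zero by construction
det = np.linalg.det(A)        # zero (A maps the ones vector to 0)
```

Since every row sums to zero, the ones vector is in the null space, so the matrix just misses being a (nonsingular) M-matrix.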

Two variants that I would find interesting:

(1) If we want to approximate $\|x\|_\infty$, then we can use $f(x) = \log\bigl( \sum_i (e^{x_i} + e^{-x_i}) \bigr)$. In this case we’d pull out $\max_i |x_i|$ instead of $\max_i x_i$, but I’m not sure we’d want to use log1p.
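The shift described in (1) can be sketched as follows (a minimal illustration, my own code; the function name is made up): pull out $m = \max_i |x_i|$ so that every exponent is at most zero and no overflow can occur.

```python
import numpy as np

def logsumexp_abs(x):
    # Smooth over-approximation of the infinity norm ||x||_inf:
    #   f(x) = log( sum_i (exp(x_i) + exp(-x_i)) )
    # computed stably by shifting with m = max_i |x_i|, so that
    # every exponent is <= 0 and at least one term equals exp(0) = 1.
    m = np.max(np.abs(x))
    return m + np.log(np.sum(np.exp(x - m) + np.exp(-x - m)))
```

For example, `logsumexp_abs(np.array([1000.0, -1000.0]))` returns a value close to `1000 + log 2`, whereas the unshifted formula would overflow.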

(2) As for logsumexp and its relationship to softmax (its derivative): often each $x_i$ is parameterized by a vector $\theta$ and we want the gradient with respect to $\theta$, so we have to modify the softmax formula to include the gradients of the $x_i$ terms. I suspect the naive implementation is not stable, but there ought to be similar tricks.
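One stable way to do the chain rule in (2) is to compute the softmax with the usual max shift and then contract it with the Jacobian of $x(\theta)$. A sketch under that assumption (function names are mine):

```python
import numpy as np

def softmax_stable(x):
    # Softmax with the standard max shift; exponents are <= 0,
    # so no overflow, and the shift cancels in the ratio.
    z = np.exp(x - np.max(x))
    return z / z.sum()

def grad_logsumexp_wrt_theta(x, J):
    # Chain rule for f(theta) = log(sum_i exp(x_i(theta))):
    #   df/dtheta_j = sum_i softmax(x)_i * dx_i/dtheta_j,
    # where J[i, j] = dx_i / dtheta_j.
    return softmax_stable(x) @ J
```

The softmax weights are computed stably once, and the instability never touches the Jacobian, which enters only through a well-conditioned weighted sum.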

A well-known relationship between an invertible H-matrix and its comparison matrix, due to Ostrowski [10], states that for an invertible H-matrix $A \in \mathbb{C}^{n\times n}$ the inequality $|A^{-1}| \le (M(A))^{-1}$ holds.

I am confused about how to prove this statement. Any advice?

For an M-matrix, $M(A) = A$, so the bound is trivial. The bound always holds for triangular $A$, as noted above, but it does not always hold in general.
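The triangular case can be checked numerically. A minimal sketch (my own code; `comparison_matrix` is a hypothetical helper building $M(A)$, which has $|a_{ii}|$ on the diagonal and $-|a_{ij}|$ off it):

```python
import numpy as np

def comparison_matrix(A):
    # M(A): |a_ii| on the diagonal, -|a_ij| off the diagonal.
    M = -np.abs(A)
    np.fill_diagonal(M, np.abs(np.diag(A)))
    return M

# Upper triangular test matrix with nonzero diagonal, so both A
# and M(A) are invertible.
A = np.array([[ 2.0, -1.0,  3.0],
              [ 0.0,  4.0,  1.0],
              [ 0.0,  0.0,  5.0]])

lhs = np.abs(np.linalg.inv(A))           # |A^{-1}|
rhs = np.linalg.inv(comparison_matrix(A))  # (M(A))^{-1}
```

For this triangular example the elementwise inequality `lhs <= rhs` holds, consistent with the claim above.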

I would like to read a post from you about “totally positive matrices” (a subject related to some of your classical contributions; I recall my comment on your post “Conference in Honour of Walter Gautschi”).

Thank you for your work.

Ok, sorry. I think I went too quickly.
