>>> import mpmath
>>> mpmath.harmonic(65)
mpf('4.7592755190903846')
>>> mpmath.harmonic(513)
mpf('6.8184658522885142')
>>> mpmath.harmonic(2097152)
mpf('15.133306695078945')
>>> mpmath.harmonic(2.81e14)
mpf('33.846591450163828')

The Malone paper you reference discusses the errors for float64 in great detail.
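Those float64 errors can be observed directly. Here is a minimal sketch (the helper names `harmonic_float64` and `harmonic_exact` are mine; the exact reference uses the standard library's `fractions` module, though `mpmath.harmonic` at raised precision would serve equally well):

```python
from fractions import Fraction

def harmonic_float64(n):
    """Naive forward summation 1 + 1/2 + ... + 1/n in float64."""
    s = 0.0
    for k in range(1, n + 1):
        s += 1.0 / k
    return s

def harmonic_exact(n):
    """Exact harmonic number as a rational (slow for large n)."""
    return sum(Fraction(1, k) for k in range(1, n + 1))

for n in (65, 513):
    approx = harmonic_float64(n)
    exact = harmonic_exact(n)
    # Measure the relative error of the float64 sum exactly.
    rel_err = abs(Fraction(approx) - exact) / exact
    print(n, approx, float(rel_err))
```

Forward summation accumulates rounding error that in the worst case grows with the number of terms, which is the effect the Malone paper quantifies for float64.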

I am glad to see a new post dealing with totally nonnegative matrices (related to your previous post “What Is a Totally Nonnegative Matrix”). Your paragraph explaining the bidiagonal factorization of the Pascal matrix (whose nontrivial entries are all ones) suggests my new comment: the matrix containing the nontrivial entries of that factorization (called BD(A) by Plamen Koev) is constructed in MATLAB by the instruction

>> B = ones(n,n)

Starting from it (the exact bidiagonal factorization), and using the algorithms included in P. Koev's package TNTool, we can compute with high relative accuracy the Pascal matrix, its eigenvalues, and its inverse (this last task using an algorithm written by Ana Marco and myself and included in that package):

>> B = ones(5,5)

B =

     1     1     1     1     1
     1     1     1     1     1
     1     1     1     1     1
     1     1     1     1     1
     1     1     1     1     1

>> A = TNExpand(B)

A =

     1     1     1     1     1
     1     2     3     4     5
     1     3     6    10    15
     1     4    10    20    35
     1     5    15    35    70

>>
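The bidiagonal structure that TNExpand multiplies out can be illustrated in a few lines of Python (a sketch, not Koev's algorithm: the function name `pascal_from_bidiagonal` is mine, and it uses the standard factorization L = F_1 ... F_{n-1}, S = L*L^T in which every multiplier equals one):

```python
from math import comb

def matmul(A, B):
    """Plain Python product of two square matrices."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def pascal_from_bidiagonal(n):
    """Build the symmetric Pascal matrix from all-ones bidiagonal factors."""
    I = [[int(i == j) for j in range(n)] for i in range(n)]
    # F_k is unit lower bidiagonal with ones in subdiagonal positions
    # (i, i-1) for i = n-k, ..., n-1 (0-based); all multipliers are 1.
    factors = []
    for k in range(1, n):
        F = [row[:] for row in I]
        for i in range(n - k, n):
            F[i][i - 1] = 1
        factors.append(F)
    # L = F_1 * F_2 * ... * F_{n-1} is the lower triangular Pascal
    # matrix; the symmetric Pascal matrix is then S = L * L^T.
    L = I
    for F in factors:
        L = matmul(L, F)
    LT = [[L[j][i] for j in range(n)] for i in range(n)]
    return matmul(L, LT)

S = pascal_from_bidiagonal(5)
for row in S:
    print(row)
```

Every entry agrees with the binomial formula S[i][j] = C(i+j, i), matching the output of TNExpand above.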

>> vp = TNEigenValues(B)

vp =

  92.290434830153131
   5.517487909311952
   1.000000000000000
   0.181241901466115
   0.010835359068796

>>

>> AI = TNInverseExpand(B)

AI =

     5   -10    10    -5     1
   -10    30   -35    19    -4
    10   -35    46   -27     6
    -5    19   -27    17    -4
     1    -4     6    -4     1

>>
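The exact integer inverse returned by TNInverseExpand can be checked independently against the closed form (L^{-1})_{ij} = (-1)^{i+j} C(i,j) for the inverse of the lower triangular Pascal matrix; a sketch in Python (variable names are mine):

```python
from math import comb

n = 5
# The lower triangular Pascal matrix L has the exactly integer inverse
# (L^{-1})[i][j] = (-1)^(i+j) * C(i, j), so the symmetric Pascal matrix
# S = L * L^T has inverse S^{-1} = L^{-T} * L^{-1}, also integer valued.
Linv = [[(-1) ** (i + j) * comb(i, j) for j in range(n)] for i in range(n)]
Sinv = [[sum(Linv[k][i] * Linv[k][j] for k in range(n)) for j in range(n)]
        for i in range(n)]
for row in Sinv:
    print(row)
```

The printed rows reproduce the AI matrix above. Note also that the computed eigenvalues pair off as reciprocals (92.2904... x 0.0108354... = 1 and 5.51749... x 0.181242... = 1), consistent with det(S) = 1.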

Thank you, as always, for your great work.

Yes, that’s also an important factorization. It’s mentioned in the article https://nhigham.com/2020/12/22/what-is-a-modified-cholesky-factorization/

This isn’t normally a problem. If the backward error is so large that a perturbation expansion is not valid then that usually means we have a very poor approximate solution. In any case, there is often a computable expression for $\Delta x$ and so we can check its size.
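For readers following this exchange, the underlying first-order perturbation expansion is standard; sketched here for a Fréchet differentiable $f$ with Jacobian $J_f(x)$:

```latex
f(x+\Delta x) = f(x) + J_f(x)\,\Delta x + o(\|\Delta x\|)
\quad\Longrightarrow\quad
\frac{\|f(x+\Delta x)-f(x)\|}{\|f(x)\|}
\le \frac{\|J_f(x)\|\,\|x\|}{\|f(x)\|}\cdot\frac{\|\Delta x\|}{\|x\|}
  + o(\|\Delta x\|),
```

where the first factor on the right is $\mathrm{cond}(f,x)$. The bound is meaningful precisely when $\|\Delta x\|$ is small enough for the $o(\|\Delta x\|)$ remainder to be negligible; that smallness requirement is the informal meaning of $\Delta x$ lying in the neighborhood $N(x)$.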

Thank you so much for your blog. Using the condition number $\mathrm{cond}(f,x)$ of the function $f$ at $x$, we can bound the relative forward error $\frac{\|f(x+\Delta x)-f(x)\|}{\|f(x)\|}$ if the backward error $\Delta x$ belongs to some neighborhood $N(x)$ of $x$. In general we don’t know the size of $N(x)$. Let $\Delta x$ be the backward error of an approximate solution $f(x+\Delta x)$. How can one check whether $\Delta x$ belongs to $N(x)$ or not?

Thank you,

Kannan R.

Very cool topic! I learned something new. Thanks for sharing.

Thanks – I’ve reworded the last section to clarify this.

In the last paragraph, maybe the correct name for the title (if you write about the 2-norm) is “spectral norm” rather than “Frobenius norm”.

Thank you for your work.