Reputation: 147
Let H(n) be the Hilbert matrix of order n. Let e = (0, ..., 0, 1) be a unit vector, and let e_im := H(n) * e. I solve H(n) * x_1 = e_im using a computer algebra system.

Now let r = (r_1, ..., r_n) be a small random vector, let (e + r)_im := H(n) * (e + r), and solve H(n) * x_2 = (e + r)_im the same way.

Why is the residual || H(n) * x_1 - e_im || so small, but || H(n) * x_2 - (e + r)_im || so large?
I use numpy and scipy.linalg; my code:
import numpy
import scipy.linalg

H = scipy.linalg.hilbert(500)
e = numpy.zeros((500, 1))
e[499] = 1
e_im = H.dot(e)
x_1 = scipy.linalg.solve(H, e_im)
r = 0.0001 * numpy.random.rand(500, 1)
e_plusr_im = H.dot(e + r)  # (e + r)_im := H(n) * (e + r)
x_2 = scipy.linalg.solve(H, e_plusr_im)
residuals = [scipy.linalg.norm(H.dot(x_1) - e_im, 2),
             scipy.linalg.norm(H.dot(x_2) - e_plusr_im, 2)]
Upvotes: 1
Views: 579
Reputation: 35155
This is a well-known phenomenon in numerical linear algebra (cf. most numerical linear algebra course materials).
The condition number kappa(A) = ||A|| * ||A^-1|| of a matrix tells how much matrix inversion amplifies errors in the generic case. Here:
>>> import scipy.linalg
>>> import numpy as np
>>> H = scipy.linalg.hilbert(500)
>>> np.linalg.cond(H)
4.6335026663215786e+20
As a rule of thumb, deterministic floating point roundoff has relative magnitude around 1e-16 for 64-bit floats, and solving a linear system can amplify that relative error by a factor of up to kappa(A). Here kappa(H) * 1e-16 ≈ 4.6e20 * 1e-16 ≈ 4.6e4 >> 1, so you can lose all precision from roundoff alone, unless you get lucky and the solution and intermediate calculations happen to have exact floating point representations. If you add noise manually, it is amplified in the same way.
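To see the rule of thumb in action, here is a minimal sketch (my own illustration, not part of the answer above; it assumes only numpy and scipy are available): it solves H(n) * x = H(n) * e, whose exact solution is e, and compares the observed relative error against kappa(H) * eps as n grows.

import numpy as np
import scipy.linalg

eps = np.finfo(np.float64).eps  # machine epsilon, ~2.2e-16 for 64-bit floats

for n in (5, 10, 15, 20):
    H = scipy.linalg.hilbert(n)
    e = np.zeros(n)
    e[-1] = 1.0                          # exact solution of H x = H e is e
    x = scipy.linalg.solve(H, H.dot(e))  # solve with the computed RHS
    rel_err = np.linalg.norm(x - e) / np.linalg.norm(e)
    bound = np.linalg.cond(H) * eps      # rule-of-thumb error magnitude
    print(f"n={n:2d}  relative error={rel_err:.1e}  kappa*eps={bound:.1e}")

Well before n = 20 the bound exceeds 1, i.e. the rule of thumb says every digit of the computed solution may be wrong.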
Upvotes: 3