kqwyf

Reputation: 94

Why is np.linalg.norm(x, 2) slower than computing it directly?

Example code:

import numpy as np
import math
import time

x=np.ones((2000,2000))

start = time.time()
print(np.linalg.norm(x, 2))
end = time.time()
print("time 1: " + str(end - start))

start = time.time()
print(math.sqrt(np.sum(x*x)))
end = time.time()
print("time 2: " + str(end - start))

The output (on my machine) is:

1999.999999999991
time 1: 3.216777801513672
2000.0
time 2: 0.015042781829833984

This shows that np.linalg.norm() takes more than 3 seconds, while the direct computation takes just 0.015 seconds. Why is np.linalg.norm() so slow?

Upvotes: 0

Views: 1377

Answers (2)

B. M.

Reputation: 18628

What is comparable is:

In [10]: %timeit np.sum(x*x, axis=1)**.5
36.4 ms ± 6.11 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

In [11]: %timeit np.linalg.norm(x, axis=1)
32.3 ms ± 3.94 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

np.linalg.norm(x, 2) and np.sum(x*x)**.5 are not the same thing, so their timings cannot be compared directly.
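
For the full matrix, the like-for-like comparison is the Frobenius norm (the default when no ord argument is given) against sqrt(sum(x*x)). A minimal timing sketch, reusing the x from the question (exact numbers will vary by machine):

import math
import time
import numpy as np

x = np.ones((2000, 2000))

start = time.time()
np.linalg.norm(x)              # default for a matrix: Frobenius norm
print("Frobenius norm:", time.time() - start)

start = time.time()
math.sqrt(np.sum(x * x))       # the "direct" computation from the question
print("sqrt(sum(x*x)):", time.time() - start)

start = time.time()
np.linalg.norm(x, 2)           # spectral norm: requires an SVD, much more work
print("2-norm via SVD:", time.time() - start)

The first two lines do essentially the same work and should take similar time; only the ord=2 call is expensive.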

Upvotes: 0

Eric

Reputation: 97571

np.linalg.norm(x, 2) computes the matrix 2-norm, which is the largest singular value of x.

math.sqrt(np.sum(x*x)) computes the Frobenius norm.

These operations are different, so it should be no surprise that they take different amounts of time. "What is the difference between the Frobenius norm and the 2-norm of a matrix?" on math.SE may be of interest.
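
A quick check (a minimal sketch, not part of the original answer) that the two calls compute these two different quantities:

import math
import numpy as np

x = np.ones((2000, 2000))

# The matrix 2-norm equals the largest singular value of x.
sigma_max = np.linalg.svd(x, compute_uv=False)[0]
print(np.isclose(np.linalg.norm(x, 2), sigma_max))                     # True

# The Frobenius norm equals the square root of the sum of squared entries.
print(np.isclose(np.linalg.norm(x, 'fro'), math.sqrt(np.sum(x * x))))  # True

The SVD in the first check is exactly the expensive step that makes np.linalg.norm(x, 2) slow.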

Upvotes: 2
