Reputation: 6779
I am using torch.norm
to calculate Lp norms with relatively large values for p
(in the range of 10-50). The vectors I do this for have relatively small values, and I notice that the result incorrectly becomes 0
. In the example below, this already happens at p=9
!
import torch
lb = 1e-6 # Lower bound
ub = 1e-5 # Upper bound
# Construct vector
v = torch.rand(100) * (ub - lb) + lb
# Calculate Lp norms
for p in range(1, 20):
    print(p, torch.norm(v, p=p))
# It should approach the maximum value for p -> inf
print(torch.max(v))
Is there a way to circumvent this issue? Or is it inherent to machine precision and its associated rounding errors? Ideally, I would like a solution that maintains the graph, but I am also interested in numerical approximations.
Upvotes: 0
Views: 177
Reputation: 6779
The zeros come from underflow: each intermediate term |v_i|^p falls below the smallest representable float32 value (about 1.4e-45) and rounds to 0. One way to mitigate this is to normalize the vector v
by its mean, so that its entries are close to 1 before the powers are taken, and rescale the result afterwards:
import torch
lb = 1e-6 # Lower bound
ub = 1e-5 # Upper bound
# Construct vector
v = torch.rand(100) * (ub - lb) + lb
v_mean = torch.mean(v)
# Calculate Lp norms
for p in range(1, 50):
    print(p, torch.norm(v, p=p), torch.norm(v / v_mean, p=p) * v_mean)
# It should approach the maximum value for p -> inf
print(torch.max(v))
But I am not sure how well this works when there is a large difference in magnitude between the vector elements; in that case, normalizing by the maximum absolute value may be more robust, as sketched below.
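A minimal sketch of that variant (same setup as above, but with one artificially enlarged element to mimic mixed magnitudes): dividing by m = max(|v_i|) makes the largest scaled entry exactly 1, and the identity ||v||_p = m * ||v/m||_p recovers the norm. Since torch.max and torch.abs are differentiable, this should also keep the graph intact.
import torch
lb = 1e-6 # Lower bound
ub = 1e-5 # Upper bound
# Construct vector, then enlarge one element to get mixed magnitudes
v = torch.rand(100) * (ub - lb) + lb
v[0] = 1e-2
# Normalize by the maximum absolute value: the largest scaled
# element becomes exactly 1 and cannot underflow when raised to p.
# Very small entries still underflow after scaling, but their
# contribution to the sum is negligible anyway.
v_max = torch.max(torch.abs(v))
# Calculate rescaled Lp norms
for p in range(1, 50):
    print(p, torch.norm(v / v_max, p=p) * v_max)
# It should approach the maximum value for p -> inf
print(torch.max(v))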
Upvotes: 0