Reputation: 1758
How to calculate the geometric mean along a dimension using PyTorch? Some numbers can be negative. The function must be differentiable.
Upvotes: 4
Views: 2375
Reputation: 21
I know PEP 8 discourages assigning lambda expressions to names, but when I locally need it, I sometimes use
torch_geom = lambda t, *a, **kw: t.log().mean(*a, **kw).exp() # Geometric mean
torch_harm = lambda t, *a, **kw: 1 / (1 / t).mean(*a, **kw) # Harmonic mean
...
I find it a neat little way of getting what I want, with a syntax that matches pre-existing functions like torch.mean, etc.
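For example (my own quick sanity check, assuming strictly positive inputs, since the log-based version produces NaN for zeros and negatives):
import torch

torch_geom = lambda t, *a, **kw: t.log().mean(*a, **kw).exp()  # Geometric mean
torch_harm = lambda t, *a, **kw: 1 / (1 / t).mean(*a, **kw)    # Harmonic mean

x = torch.rand(4, 3) + 0.1   # strictly positive 2-D tensor
print(torch_geom(x, dim=1))  # geometric mean along dim 1, same call style as torch.mean
print(torch_harm(x, dim=1))  # harmonic mean along dim 1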
Upvotes: 0
Reputation: 13601
A known (reasonably) numerically-stable version of the geometric mean is:
import torch
def gmean(input_x, dim):
    log_x = torch.log(input_x)
    return torch.exp(torch.mean(log_x, dim=dim))
x = torch.Tensor([2.0] * 1000).requires_grad_(True)
print(gmean(x, dim=0))
# tensor(2.0000, grad_fn=<ExpBackward>)
This kind of implementation can be found, for example, in SciPy (see here), which is a fairly stable library.
The implementation above does not handle zeros or negative numbers. Some will argue that the geometric mean is not well-defined for negative numbers, especially when only some of the inputs are negative.
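To illustrate both points, here is a minimal sketch (my addition, reusing the gmean above): it works along a dimension and stays differentiable, but zeros and negatives break the log:
import torch

def gmean(input_x, dim):
    log_x = torch.log(input_x)
    return torch.exp(torch.mean(log_x, dim=dim))

x = torch.tensor([[1.0, 2.0, 4.0],
                  [3.0, 3.0, 3.0]], requires_grad=True)
g = gmean(x, dim=1)          # per-row geometric mean
g.sum().backward()
print(g)                     # tensor([2., 3.]), carries a grad_fn, so it is differentiable
print(x.grad)                # gradient of the geometric mean w.r.t. each element

print(gmean(torch.tensor([0.0, 2.0]), dim=0))   # tensor(0.)  because log(0) = -inf
print(gmean(torch.tensor([-1.0, 2.0]), dim=0))  # tensor(nan) because log(-1) = nan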
Upvotes: 11
Reputation: 1680
torch.prod() helps:
import torch
x = torch.FloatTensor(3).uniform_().requires_grad_(True)
print(x)
y = x.prod() ** (1.0/x.shape[0])
print(y)
y.backward()
print(x.grad)
# tensor([0.5692, 0.7495, 0.1702], requires_grad=True)
# tensor(0.4172, grad_fn=<PowBackward0>)
# tensor([0.2443, 0.1856, 0.8169])
EDIT: what about
y = (x.abs() ** (1.0 / x.shape[0]) * x.sign()).prod()
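For what it's worth, here is a sketch of how that EDIT could be turned into a dim-aware function (my own wrapper, not part of the original answer). Note that x.sign() has zero gradient in PyTorch, so gradients only flow through x.abs(), and the result is a "signed" geometric mean rather than a mathematically standard one:
import torch

def signed_gmean(x, dim):
    # n-th root of each |x_i|, multiplied together, with the signs carried along
    n = x.shape[dim]
    return (x.abs() ** (1.0 / n) * x.sign()).prod(dim=dim)

x = torch.tensor([[-1.0, 2.0, 4.0],
                  [ 3.0, 3.0, 3.0]], requires_grad=True)
y = signed_gmean(x, dim=1)
y.sum().backward()
print(y)       # tensor([-2., 3.]): row 0 is -(1*2*4)**(1/3), row 1 is 3
print(x.grad)  # non-zero gradients from x.abs(); x.sign() contributes none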
Upvotes: 0