Reputation: 353
I am doing some computations and would like to optimise their parameters using PyTorch. I am NOT defining a neural network, so no layers and such. Just a simple sequence of computations. I use torch.nn.Module to be able to use PyTorch's optimisers.
My class looks something like this:
class XTransformer(torch.nn.Module):
    def __init__(self, x):
        super(XTransformer, self).__init__()
        self.x = x

    def funky_function(self, m, c):
        # do some computations
        m = self.x * 2 - m + c
        return m, c

    def forward(self, m, c):
        m, c = self.funky_function(m, c)
        return m, c
Later on I define and try to optimise this parameter x like so:
x = torch.autograd.Variable(x, requires_grad=True)
model = XTransformer(x)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
m, c = smt
loss = loss_func()

for t in range(100):
    m, c = model(m, c)
    l = loss(m, true)
    optimizer.zero_grad()
    l.backward()
    optimizer.step()
I don't know what to do. I get a "ValueError: optimizer got an empty parameter list" error. When I instead pass [x] directly to the optimizer, it doesn't update and change x for me. What should I do?
Upvotes: 1
Views: 471
Reputation: 22214
You need to register x as a parameter to let PyTorch know this should be a trainable parameter. This can be done by defining it as an nn.Parameter during __init__:
def __init__(self, x):
    super(XTransformer, self).__init__()
    self.x = torch.nn.Parameter(x)
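Putting it together, here is a minimal runnable sketch of the whole setup; the starting values for m, c, and the target, and the use of MSELoss in place of your loss_func, are made up for illustration. Note that once x is an nn.Parameter, the torch.autograd.Variable wrapper is no longer needed (Variable has been deprecated since PyTorch 0.4). The loop below also recomputes from the original inputs each step, since repeatedly feeding the outputs back in and calling backward() through one ever-growing graph would fail:

import torch

class XTransformer(torch.nn.Module):
    def __init__(self, x):
        super(XTransformer, self).__init__()
        # nn.Parameter registers x so it shows up in model.parameters()
        self.x = torch.nn.Parameter(x)

    def funky_function(self, m, c):
        m = self.x * 2 - m + c
        return m, c

    def forward(self, m, c):
        return self.funky_function(m, c)

# made-up starting values and target, purely for illustration
m0 = torch.tensor([1.0, 2.0])
c0 = torch.tensor([0.5, 0.5])
true = torch.tensor([3.0, 3.0])

model = XTransformer(torch.zeros(2))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss = torch.nn.MSELoss()

for t in range(100):
    m, c = model(m0, c0)  # rebuild the graph from fresh inputs each step
    l = loss(m, true)
    optimizer.zero_grad()
    l.backward()
    optimizer.step()

print(model.x)  # x has been updated by the optimizer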
Upvotes: 1