Reputation: 95
I tried to find a similar topic on Stack Overflow, but did not find any. My problem is the following: I am trying to approximate the inverse of a set of filters using a neural network. I'm using PyTorch and I have 1D data.
The approach is as follows: the network is defined and I do the usual forward/backward steps. The forward pass is done as:
# Zero the gradients
optimizer.zero_grad()
# Perform forward pass
outputs = mlp(inputs)
Then I insert my filters:
outputs = myFilters(outputs)
And compute the loss, trying to fit the output with the input:
# Compute loss
loss = loss_function(outputs, inputs)
loss.backward()
optimizer.step()
But I get an error in loss.backward():
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
As I understand it, this means that PyTorch cannot backpropagate the error through the filters applied after the network. How can I do that?
Thanks for your help.
Upvotes: 0
Views: 94
Reputation: 154
PyTorch relies on automatic differentiation to optimize neural networks. That is implemented by attaching a grad_fn to every tensor produced by a differentiable operation. When you perform your filtering, you strip that property and in addition apply a new set of functions that may or may not have a derivative. In any case, since the resulting tensor has no grad_fn, PyTorch doesn't know how to backpropagate through it and optimize your neural network.
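To see the difference concretely, here is a minimal sketch (the filter here is a hypothetical stand-in that round-trips through NumPy, one common way the grad_fn gets lost):

```python
import torch

x = torch.randn(8, requires_grad=True)

# An operation done with torch functions keeps a grad_fn:
tracked = (x * 2).sum()
print(tracked.grad_fn is not None)  # True: backward() would work here

# A filter that leaves PyTorch (e.g. via NumPy) breaks the chain:
filtered = torch.from_numpy(x.detach().numpy() * 2)
print(filtered.grad_fn)  # None: backward() on a loss built from this
                         # raises the RuntimeError from the question
```

Any tensor whose history passes through non-torch code ends up like `filtered` above, which is exactly the situation the error message describes.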
In the general case you can rewrite your filters using PyTorch functions to ensure that the gradient is tracked. In your particular case, I advise against it.
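For completeness, here is what the "rewrite with PyTorch functions" route could look like, assuming (hypothetically) that your filters are linear FIR filters, which can be expressed with torch.nn.functional.conv1d so gradients flow through them:

```python
import torch
import torch.nn.functional as F

# Hypothetical 3-tap filter kernel, shape (out_channels, in_channels, width)
kernel = torch.tensor([[[0.25, 0.5, 0.25]]])

signal = torch.randn(1, 1, 32, requires_grad=True)  # (batch, channels, length)
filtered = F.conv1d(signal, kernel, padding=1)      # differentiable filtering

loss = filtered.pow(2).mean()
loss.backward()  # works: conv1d has a gradient, so the chain is unbroken
print(signal.grad.shape)  # torch.Size([1, 1, 32])
```

This only applies if your filters can actually be expressed with differentiable torch operations; as argued below, for your setup it is simpler not to differentiate through the filters at all.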
Your real problem is the following: you have a dataset y, which is a function of your filters f and the original data, which we will call x, so y = f(x). What you want is an inverse function g that maps the filtered data back to the original: g(y) = x. That g is your neural network: you give it the filtered data y as input, and expect it to recover the ground-truth, original data x.
You don't need any processing inside the training loop. Just arrange your data in that order: filtered data as the input, original data as the target.
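A minimal sketch of that arrangement, using hypothetical stand-ins for your `myFilters`, `mlp`, `loss_function` and `optimizer` (the key point is that the filters run once, outside autograd, to build the dataset):

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

def myFilters(x):                # placeholder for your actual filter bank
    return 0.5 * x

x = torch.randn(100, 16)         # original (ground-truth) data
with torch.no_grad():
    y = myFilters(x)             # filtered data, computed once, no gradient needed

loader = DataLoader(TensorDataset(y, x), batch_size=10)

mlp = torch.nn.Linear(16, 16)    # stand-in for your network g
optimizer = torch.optim.Adam(mlp.parameters())
loss_function = torch.nn.MSELoss()

for inputs, targets in loader:   # inputs = filtered y, targets = original x
    optimizer.zero_grad()
    outputs = mlp(inputs)        # g(y)
    loss = loss_function(outputs, targets)  # compare g(y) with x
    loss.backward()              # only the network is in the autograd graph
    optimizer.step()
```

Because the filters are applied before training rather than inside the loss path, backward() never has to differentiate through them, and the RuntimeError disappears.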
Upvotes: 0