JVE999

Reputation: 3517

How do I mutate the input using gradient descent in PyTorch?

I'm new to PyTorch. I learned that it uses autograd to automatically compute the gradients needed for gradient descent.

Instead of adjusting the weights, I would like to mutate the input to achieve a desired output, using gradient descent. So, instead of the weights of neurons changing, I want to keep all of the weights the same and just change the input to minimize the loss.

For example: the network is a trained image classifier for the digits 0-9. I input random noise, and I want to use gradient descent to adjust the values of the input (originally noise) until the network classifies it as a 3 with 60% confidence.

Is there a way to do this?

Upvotes: 2

Views: 984

Answers (1)

hkchengrex

Reputation: 4826

I assume you know how to do regular training with gradient descent. You only need to change the parameters to be optimized by the optimizer. Something like

# ... Set up your network and load the input
# ...

# Set requires_grad properly -> we train the input, not the parameters
input.requires_grad = True
for p in net.parameters():
    p.requires_grad = False

# Set up the optimizer
# Previously this would have been SomeOptimizer(net.parameters())
optim = SomeOptimizer([input])

output_that_you_want = ...
actual_output = net(input)
some_loss = SomeLossFunction(output_that_you_want, actual_output)
# ...
# Back-prop and optim.step() as usual
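For concreteness, here is a minimal end-to-end sketch of the same idea. The network, optimizer choice, learning rate, and step count are illustrative assumptions (a small randomly initialized classifier stands in for your trained one):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Stand-in for a trained digit classifier (assumption: MNIST-sized 28x28 input)
net = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)

# Freeze the weights -- only the input will be optimized
for p in net.parameters():
    p.requires_grad = False

# Start from random noise; the input is a leaf tensor that requires grad
x = torch.randn(1, 1, 28, 28, requires_grad=True)

# The optimizer receives the input tensor, not net.parameters()
optim = torch.optim.Adam([x], lr=0.1)
target_class = 3

initial_conf = F.softmax(net(x), dim=1)[0, target_class].item()
for _ in range(200):
    optim.zero_grad()
    logits = net(x)
    loss = F.cross_entropy(logits, torch.tensor([target_class]))
    loss.backward()
    optim.step()
    # Stop once the network assigns >= 60% confidence to the target class
    if F.softmax(logits, dim=1)[0, target_class].item() >= 0.6:
        break

final_conf = F.softmax(net(x), dim=1)[0, target_class].item()
print(f"confidence in class {target_class}: {initial_conf:.3f} -> {final_conf:.3f}")
```

Since the weights are frozen, only `x` changes; with a real trained classifier you would load its weights instead of the random ones here.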

Upvotes: 3
