CSH

Reputation: 537

Why does renewing an optimizer give a bad result?

I tried to change my optimizer, but first of all, I want to check whether the following two codes give the same results:

optimizer = optim.Adam(params, lr)
for epoch in range(500):
    ....
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

for epoch in range(500):
    ....
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

If I instead re-create the same optimizer between the two for loops, like this:

optimizer = optim.Adam(params, lr)
for epoch in range(500):
    ....
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

optimizer = optim.Adam(params, lr)
for epoch in range(500):
    ....
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

then the result becomes worse. Why does this happen? Doesn't the optimizer just receive gradients from the loss and perform gradient-descent-like update steps?

Upvotes: 2

Views: 98

Answers (1)

Shai

Reputation: 114796

Different optimizers may have some "memory".
For instance, Adam's update rule tracks the first and second moments of the gradient of each parameter and uses them to calculate the per-parameter step size.
Therefore, if you re-initialize your optimizer you erase this information, leaving the optimizer "less informed" and resulting in sub-optimal choices of step sizes.
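You can see this state directly. Below is a minimal sketch with a made-up single-parameter "model", a quadratic loss, and an arbitrary learning rate; it inspects the moment estimates Adam stores per parameter, shows that a freshly created optimizer starts with no state, and uses load_state_dict as one way to carry the moments over if you really need a new optimizer object:

import torch
import torch.optim as optim

# Toy setup (hypothetical): a single parameter and a simple quadratic loss.
param = torch.nn.Parameter(torch.tensor([1.0]))
optimizer = optim.Adam([param], lr=0.1)

for epoch in range(5):
    optimizer.zero_grad()
    loss = (param ** 2).sum()
    loss.backward()
    optimizer.step()

# Adam keeps running estimates of the first/second gradient moments per parameter.
state = optimizer.state[param]
print(state.keys())          # typically: 'step', 'exp_avg', 'exp_avg_sq'
print(state['exp_avg'])      # first-moment estimate
print(state['exp_avg_sq'])   # second-moment estimate

# Re-creating the optimizer starts with an empty state, i.e. the moments are lost.
fresh_optimizer = optim.Adam([param], lr=0.1)
print(len(fresh_optimizer.state))  # 0 -- no moment estimates yet

# Carrying the state over to the new optimizer object restores the moments.
fresh_optimizer.load_state_dict(optimizer.state_dict())
print(len(fresh_optimizer.state))  # 1 -- state restored for `param`

So if your goal is just to keep training, reuse the existing optimizer (or restore its state as above) rather than constructing a new one.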

Upvotes: 3
