Asjad Murtaza

Reputation: 101

Why do some people chain the parameters of two different networks and train them with the same optimizer?

I was looking at CycleGAN's official PyTorch implementation, and there the author chained the parameters of both networks and used a single optimizer for them. How does this work? Is it better than using two separate optimizers for the two networks?

from itertools import chain
import torch

all_params = chain(module_a.parameters(), module_b.parameters())
optimizer = torch.optim.Adam(all_params)

Upvotes: 8

Views: 3476

Answers (3)

skywalker

Reputation: 318

It makes sense to optimize both generators together (adding both losses) because of the "cycle". The cycle loss uses both generators: G_B(G_A(A)) and G_A(G_B(B)). I think that if you used separate optimizers, you would need to call backward() on both losses before calling step() to achieve the same effect (this does not have to be true for all optimization algorithms).

In the official code, the parameters of the discriminators are also chained, but there you could easily use separate optimizers (again, this does not have to hold for other optimization algorithms), because the loss of D_A does not depend on D_B.
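
A minimal sketch of the two setups for the generators (toy modules and a toy cycle loss, not the actual CycleGAN code); with Adam they are equivalent as long as backward() on the joint loss runs before either step():

import torch
import torch.nn as nn
from itertools import chain

G_A = nn.Linear(4, 4)        # stand-in for generator A -> B
G_B = nn.Linear(4, 4)        # stand-in for generator B -> A
real_A = torch.randn(8, 4)

# Setup 1: one optimizer over the chained parameters.
opt = torch.optim.Adam(chain(G_A.parameters(), G_B.parameters()), lr=1e-3)
loss = (G_B(G_A(real_A)) - real_A).abs().mean()   # toy cycle loss ||G_B(G_A(A)) - A||
opt.zero_grad()
loss.backward()
opt.step()

# Setup 2: separate optimizers; backward() must run before either step(),
# so that both generators have their gradients populated.
opt_A = torch.optim.Adam(G_A.parameters(), lr=1e-3)
opt_B = torch.optim.Adam(G_B.parameters(), lr=1e-3)
loss = (G_B(G_A(real_A)) - real_A).abs().mean()
opt_A.zero_grad()
opt_B.zero_grad()
loss.backward()
opt_A.step()
opt_B.step()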

Upvotes: 0

Zabir Al Nazi Nabil

Reputation: 11198

From chain documentation: https://docs.python.org/3/library/itertools.html#itertools.chain

itertools.chain(*iterables)

    Make an iterator that returns elements from the first iterable until it is exhausted, then proceeds to the next iterable, until all of the iterables are exhausted.
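
For example, with plain lists:

from itertools import chain

print(list(chain([1, 2], [3, 4])))   # [1, 2, 3, 4] -- one flat iterator over both iterables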

Since parameters() gives you an iterable, you can use a single optimizer to optimize the parameters of both networks simultaneously. The same optimizer state will then be used for both models (Modules); if you use two different optimizers, the parameters will be optimized separately.

If you have a composite network, it becomes necessary to optimize the parameters of all its parts at the same time, so using a single optimizer for all of them is the way to go.
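
A short sketch along those lines (the module names are just placeholders): chaining the parameters of two modules gives the optimizer one flat iterable, so a single step() updates both.

import torch
import torch.nn as nn
from itertools import chain

net_a = nn.Linear(10, 5)
net_b = nn.Linear(5, 1)

# One iterable covering the parameters of both modules.
params = list(chain(net_a.parameters(), net_b.parameters()))
print(len(params))   # 4: weight and bias of each module

optimizer = torch.optim.SGD(params, lr=0.1)

out = net_b(net_a(torch.randn(3, 10)))   # composite forward pass
loss = out.pow(2).mean()
optimizer.zero_grad()
loss.backward()
optimizer.step()   # a single step updates the parameters of both modules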

Upvotes: 5
