deKeijzer

Reputation: 510

Training (DC)GAN, D(G(z)) goes to 0.5 while D(x) stays 0.9 and G(z) becomes corrupt

I'm currently training a DCGAN on 1x32x32 (channel, height, width) images. Quite early in training G(z) becomes reasonably realistic, apart from visible 'chessboard' artifacts, which I assume should go away after lots of training? However, after a long training session D(G(z)) goes to 0.5000 (and no longer changes) while D(x) stays between 0.8 and 0.9. Whenever D(G(z)) reaches 0.5, the generator also starts outputting fully black or white images, so it no longer produces anything that looks close to the training dataset: G(z) just becomes a black or white square.

The network is the one from the original DCGAN paper, adapted for 1x32x32 images, with ReLU already replaced by LeakyReLU.
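Roughly, the adapted generator looks like the sketch below (the latent size nz=100 and feature-map count ngf=64 are illustrative, not necessarily my exact values):

    import torch.nn as nn

    # Illustrative DCGAN-style generator for 1x32x32 output
    class Generator(nn.Module):
        def __init__(self, nz=100, ngf=64):
            super().__init__()
            self.main = nn.Sequential(
                # nz x 1 x 1 -> (ngf*4) x 4 x 4
                nn.ConvTranspose2d(nz, ngf * 4, 4, 1, 0, bias=False),
                nn.BatchNorm2d(ngf * 4),
                nn.LeakyReLU(0.2, inplace=True),
                # (ngf*4) x 4 x 4 -> (ngf*2) x 8 x 8
                nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False),
                nn.BatchNorm2d(ngf * 2),
                nn.LeakyReLU(0.2, inplace=True),
                # (ngf*2) x 8 x 8 -> ngf x 16 x 16
                nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False),
                nn.BatchNorm2d(ngf),
                nn.LeakyReLU(0.2, inplace=True),
                # ngf x 16 x 16 -> 1 x 32 x 32
                nn.ConvTranspose2d(ngf, 1, 4, 2, 1, bias=False),
                nn.Tanh(),
            )

        def forward(self, z):
            return self.main(z)

The discriminator mirrors this with strided Conv2d + LeakyReLU layers and a sigmoid output.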

Upvotes: 0

Views: 133

Answers (1)

deKeijzer

Reputation: 510

Solved the problem by switching to WGAN-GP (https://arxiv.org/abs/1704.00028); it turns out to be more stable during training.
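The key change over the standard DCGAN loss is training the discriminator as a critic (no sigmoid) with a gradient penalty on interpolates between real and fake batches. A minimal PyTorch sketch of that penalty term (the critic and the surrounding training loop are assumed; λ = 10 as in the paper):

    import torch

    def gradient_penalty(critic, real, fake, device):
        # Random interpolation between real and fake samples
        batch_size = real.size(0)
        eps = torch.rand(batch_size, 1, 1, 1, device=device)
        interp = (eps * real + (1 - eps) * fake).detach().requires_grad_(True)

        # Gradient of the critic scores w.r.t. the interpolated images
        scores = critic(interp)
        grads = torch.autograd.grad(
            outputs=scores, inputs=interp,
            grad_outputs=torch.ones_like(scores),
            create_graph=True, retain_graph=True,
        )[0]

        # Penalise deviation of the per-sample gradient norm from 1
        grads = grads.view(batch_size, -1)
        return ((grads.norm(2, dim=1) - 1) ** 2).mean()

    # Critic loss per batch (with fake = G(z).detach() when updating the critic):
    # loss_D = fake_scores.mean() - real_scores.mean() + 10 * gradient_penalty(D, real, fake, device)

The paper also trains the critic several times (5) per generator update and uses Adam with lr=1e-4, betas=(0.0, 0.9).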

Upvotes: 1
