Hi,
I've got the code running in Python 3.6 / Keras 2.0 (TensorFlow backend). So far (epoch ~5000 out of the first 6000) there has been no convergence whatsoever, at least not of the generator loss. The discriminator loss is stuck at 0, the generator loss is stuck at 16 (the maximum, no doubt), and the generated images are pure noise. Is this due to the fact that I've used TensorFlow instead of Theano? Or is this just the luck of the random draw?
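One backend difference that can produce exactly this symptom is Keras's image dimension ordering: code written for the Theano backend expects channels-first tensors of shape (1, 28, 28), while the TensorFlow backend defaults to channels-last (28, 28, 1). This is only a guess at the cause, but it's quick to rule out; the snippet below is a minimal sketch of that check, assuming the repo's models were written for Theano.

```python
# Minimal check of the Keras image data format (an assumption: the
# repo's conv layers were written for Theano's channels-first layout).
from keras import backend as K

print(K.image_data_format())  # 'channels_last' under TensorFlow by default

# If the model assumes Theano-style ordering, force it globally
# before building the models:
K.set_image_data_format('channels_first')
```

If the format was mismatched, the discriminator sees scrambled "real" images, trivially separates them from generated ones, and its loss collapses to 0 while the generator's saturates, which matches the behavior described above.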
There's an inconsistency in the batch sizes and learning rates between the notebook MNIST_CNN_GAN.ipynb, MNIST_CNN_GAN_v2.ipynb (which is the latest), and the py file mnist_gan.py (which matches the plotted results). Which one is right? Does the generator have the higher learning rate, or vice versa? For reference, a sketch of the usual two-optimizer wiring follows below.
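Here is a hedged sketch of the standard Keras 2.0 GAN setup with separate optimizers for the two models. The learning rates shown (1e-4 and 1e-3) are placeholders, not the values from any of the three files, since which rates are correct is exactly the question; the layer sizes are likewise illustrative only.

```python
# Sketch of the usual two-optimizer GAN wiring in Keras 2.0.
# All hyperparameter values below are placeholders, pending the
# maintainer's answer on which file is authoritative.
from keras.models import Sequential
from keras.layers import Dense, LeakyReLU
from keras.optimizers import Adam

latent_dim = 100

# Generator: maps a noise vector to a flattened 28x28 image.
generator = Sequential([
    Dense(256, input_dim=latent_dim),
    LeakyReLU(0.2),
    Dense(784, activation='tanh'),
])

# Discriminator: classifies real vs. generated images,
# compiled with its own optimizer and learning rate.
discriminator = Sequential([
    Dense(256, input_dim=784),
    LeakyReLU(0.2),
    Dense(1, activation='sigmoid'),
])
discriminator.compile(loss='binary_crossentropy',
                      optimizer=Adam(lr=1e-4))  # placeholder rate

# Stacked model: freeze the discriminator while training the
# generator through it, with a separate (here larger) learning rate.
discriminator.trainable = False
gan = Sequential([generator, discriminator])
gan.compile(loss='binary_crossentropy',
            optimizer=Adam(lr=1e-3))  # placeholder rate
```

Whichever file is correct, the key point is that the two compile calls carry independent learning rates, so swapping them (or the batch sizes) between files would change the training dynamics noticeably.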