I have some problem. #1
Comments
Hi @mostoo45, how do I reproduce the issue?
Hi @abdulfatir,
ValueError: Variable encoder/conv_1/conv2d/kernel already exists, disallowed. Did you mean to set reuse=True in VarScope? Originally defined at: File "", line 3, in conv_block
so I added the following line.
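The error above is TF1's variable-scope bookkeeping complaining that the graph-building cell was executed twice: the first run created `encoder/conv_1/conv2d/kernel`, and the second run tried to create it again without `reuse=True`. A minimal pure-Python sketch of that mechanism (the `VariableStore` class and its names are hypothetical illustrations, not the real TensorFlow implementation):

```python
# Hypothetical sketch of TF1 variable-scope bookkeeping: one variable
# per fully qualified name, and re-creation without reuse is an error.

class VariableStore:
    def __init__(self):
        self._vars = {}  # name -> variable object

    def get_variable(self, name, reuse=False):
        if name in self._vars:
            if not reuse:
                # Mirrors the ValueError quoted in the comment above.
                raise ValueError(
                    f"Variable {name} already exists, disallowed. "
                    "Did you mean to set reuse=True in VarScope?")
            return self._vars[name]
        self._vars[name] = object()  # stand-in for a real tensor
        return self._vars[name]

store = VariableStore()
# First graph build: the variable is created.
w1 = store.get_variable("encoder/conv_1/conv2d/kernel")
# Rebuilding with reuse=True hands back the same variable.
w2 = store.get_variable("encoder/conv_1/conv2d/kernel", reuse=True)
assert w1 is w2
# Rebuilding without reuse (e.g. re-running the notebook cell) fails.
try:
    store.get_variable("encoder/conv_1/conv2d/kernel")
except ValueError as e:
    print(e)
```

In a notebook, the simplest way out is to restart the kernel so the default graph is discarded before the cells are re-run.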
@mostoo45, I just ran this on my server and it worked flawlessly. FYI, I use tf 1.6.
I have problems reproducing results too. I tried to reproduce these results, but the loss value doesn't change. I thought it could be an initializer issue and tried a few different initializers, but no dice. I also tried tf 1.3 and tf 1.6; neither of them converges.
Instead of the ipynb, I use a py file.
Hi, are you on CPU or GPU? I just tried by converting the code to a py file; below are the losses. As you see, they are not changing at all. I don't see anything wrong with the code, so I will keep looking.
I use both of them: tf 1.3 and Python 3.
@mostoo45 Restarting the kernel and clearing the outputs worked for me.
Same problem for me: the same accuracy for every episode.
@themis0888, I think the problem is that you put your data in the wrong place, so the data is not actually fed into the model.
Many people are facing this issue. Can someone look into it? |
@guohan950106 Hello~ this is the student who faced this issue #1. |
@abdulfatir [epoch 1/20, episode 50/100] => loss: 4.09434, acc: 0.01667. I found that the problem is that the gradient is not flowing backward; it is zero at each step. Did you find any solution? Any suggestion?
awesome job! |
I found that what @ylfzr mentioned is the issue. I was getting the same numbers. It turns out that managing the folders in Colab can be a little messy, and if you don't pay attention you can miss the right data location (as in my case).
I also faced the problem of the acc and loss staying unchanged.
Yes, after fixing the location of the data, the acc and loss change.
Hi @wdayang,
If your acc and loss do not change at all after multiple episodes, it is most likely because your dataset is misplaced. The correct location is: prototypical-networks-tensorflow-master\data\omniglot\data\Alphabet_of_the_Magi, and so on for the other alphabet folders. With the data in the right place, the loss moves immediately: [epoch 1/20, episode 5/100] => loss: 3.60291, acc: 0.43667
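Since the misplaced-dataset failure mode comes up repeatedly in this thread, it can be caught before training with a quick filesystem check. This is a sketch, not part of the repo: the `check_omniglot_layout` helper is hypothetical, and the directory layout is an assumption taken from the path quoted above.

```python
# Hypothetical pre-flight check that the Omniglot alphabet folders sit
# where the thread says the training code expects them:
#   <repo root>/data/omniglot/data/<Alphabet_...>/
import os

def check_omniglot_layout(repo_root):
    """Return the sorted alphabet folders found under repo_root, or raise."""
    expected = os.path.join(repo_root, "data", "omniglot", "data")
    if not os.path.isdir(expected):
        raise FileNotFoundError(
            f"Expected alphabet folders under {expected}. "
            "With the data misplaced, loss/acc will not change during training.")
    alphabets = sorted(
        d for d in os.listdir(expected)
        if os.path.isdir(os.path.join(expected, d)))
    if not alphabets:
        raise FileNotFoundError(f"No alphabet folders found in {expected}")
    return alphabets
```

Running this once before the training loop turns a silent "loss never moves" failure into an immediate, explicit error.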
I am currently using your code from ProtoNet-Omniglot.ipynb.
I have not changed the code, but the accuracy and loss values do not change.
I use tensorflow 1.3.