Why should FNet and SRNet be trained separately (I mean, using two optimizers)? Is it because the function `tfa.image.dense_image_warp` cannot be used for gradient back-propagation?
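One way to check this directly is to take the gradient of a loss on the warped output with respect to the flow field. A minimal sketch (shapes and values here are illustrative, not taken from the repo):

```python
import tensorflow as tf
import tensorflow_addons as tfa

image = tf.random.normal([1, 64, 64, 3])                 # one RGB frame
flow = tf.Variable(0.5 * tf.random.normal([1, 64, 64, 2]))  # dense flow field

with tf.GradientTape() as tape:
    warped = tfa.image.dense_image_warp(image, flow)
    loss = tf.reduce_mean(tf.square(warped - image))

grad = tape.gradient(loss, flow)
print(grad is not None, tf.norm(grad).numpy())  # a real tensor, not None
```

If the gradient comes back as a tensor rather than `None`, the warp is differentiable with respect to the flow, so gradients can propagate back into FNet through it.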
The advantage is that you could fine-tune the two networks separately with different hyper-parameters; however, in the source code the same hyper-parameters are used for both optimizers. Therefore, I don't think training them separately is a must. If I am wrong, please correct me.
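To make the comparison concrete, here is a minimal sketch with toy stand-ins for FNet and SRNet (the model definitions, shapes, and loss are illustrative, not the repo's code). With identical hyper-parameters, two Adam optimizers and one Adam optimizer over the combined variable list compute the same update, since Adam's state is kept per variable:

```python
import tensorflow as tf

# Toy stand-ins for FNet and SRNet (names and shapes are illustrative only).
fnet = tf.keras.Sequential([tf.keras.layers.Dense(8, input_shape=(4,))])
srnet = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(8,))])
x = tf.random.normal([16, 4])
y = tf.random.normal([16, 1])

# Variant A: two optimizers, one per sub-network -- lets you choose
# different hyper-parameters (e.g. learning rates) for each.
fnet_opt = tf.keras.optimizers.Adam(1e-4)
srnet_opt = tf.keras.optimizers.Adam(1e-4)
with tf.GradientTape(persistent=True) as tape:
    loss = tf.reduce_mean(tf.square(srnet(fnet(x)) - y))
fnet_opt.apply_gradients(
    zip(tape.gradient(loss, fnet.trainable_variables), fnet.trainable_variables))
srnet_opt.apply_gradients(
    zip(tape.gradient(loss, srnet.trainable_variables), srnet.trainable_variables))
del tape  # release the persistent tape

# Variant B: one optimizer over the union of variables -- equivalent
# when both sub-networks share the same hyper-parameters.
opt = tf.keras.optimizers.Adam(1e-4)
all_vars = fnet.trainable_variables + srnet.trainable_variables
with tf.GradientTape() as tape:
    loss = tf.reduce_mean(tf.square(srnet(fnet(x)) - y))
opt.apply_gradients(zip(tape.gradient(loss, all_vars), all_vars))
```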