http://preview.d2l.ai.s3-website-us-west-2.amazonaws.com/d2l-en/master/chapter_recurrent-modern/seq2seq.html
http://preview.d2l.ai.s3-website-us-west-2.amazonaws.com/d2l-en/master/chapter_attention-mechanisms/bahdanau-attention.html
http://preview.d2l.ai.s3-website-us-west-2.amazonaws.com/d2l-en/master/chapter_attention-mechanisms/transformer.html
We need to tune the MXNet and TensorFlow hyperparameters for each of these sections, such as the learning rate and max_epochs, so that they match the PyTorch performance.
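For anyone picking this up, here is a minimal sketch of what such a sweep could look like in TensorFlow. The `train_and_eval` helper and the toy regression model inside it are illustrative stand-ins (not d2l code); in practice the body would be the section's actual seq2seq/transformer training loop, returning whatever metric the section reports (e.g., final training loss or BLEU):

```python
import itertools
import tensorflow as tf

def train_and_eval(lr, num_epochs):
    # Illustrative stand-in: fit y = 2x on synthetic data.
    # Replace this body with the section's real training loop.
    x = tf.random.normal((256, 1))
    y = 2.0 * x
    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=lr),
                  loss="mse")
    history = model.fit(x, y, epochs=num_epochs, verbose=0)
    return history.history["loss"][-1]  # final training loss

# Candidate values are placeholders; the useful ranges would need
# to be found per section by experimentation.
lr_candidates = [0.001, 0.005, 0.01]
epoch_candidates = [5, 10]

best = None
for lr, num_epochs in itertools.product(lr_candidates, epoch_candidates):
    loss = train_and_eval(lr, num_epochs)
    if best is None or loss < best[0]:
        best = (loss, lr, num_epochs)

print(f"best loss {best[0]:.4f} at lr={best[1]}, num_epochs={best[2]}")
```

A full grid search is likely overkill for the book; hand-tuning a couple of values per section against the PyTorch numbers should be enough.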
http://preview.d2l.ai.s3-website-us-west-2.amazonaws.com/d2l-en/master/chapter_attention-mechanisms/bahdanau-attention.html was fixed in #2104. The seq2seq and transformer sections for TensorFlow still need fixing.