Tune Learning Rate bahdanau-attention
AnirudhDagar committed May 16, 2022
1 parent 9e4fbb1 commit 8b02c61
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions chapter_attention-and-transformers/bahdanau-attention.md
````diff
@@ -348,7 +348,7 @@ if tab.selected('mxnet', 'pytorch'):
     decoder = Seq2SeqAttentionDecoder(
         len(data.tgt_vocab), embed_size, num_hiddens, num_layers, dropout)
     model = d2l.Seq2Seq(encoder, decoder, tgt_pad=data.tgt_vocab['<pad>'],
-                        lr=0.001)
+                        lr=0.005)
     trainer = d2l.Trainer(max_epochs=50, gradient_clip_val=1, num_gpus=1)
 if tab.selected('tensorflow'):
     with d2l.try_gpu():
@@ -357,7 +357,7 @@ if tab.selected('tensorflow'):
         decoder = Seq2SeqAttentionDecoder(
             len(data.tgt_vocab), embed_size, num_hiddens, num_layers, dropout)
         model = d2l.Seq2Seq(encoder, decoder, tgt_pad=data.tgt_vocab['<pad>'],
-                            lr=0.001)
+                            lr=0.005)
     trainer = d2l.Trainer(max_epochs=50, gradient_clip_val=1)
 trainer.fit(model, data)
 ```
````
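For context, the commit raises the learning rate passed to `d2l.Seq2Seq` from 0.001 to 0.005 in both framework branches of the training cell. Below is a minimal sketch of the PyTorch training call with the tuned value. The dataset and hyperparameter values (`batch_size=128`; `embed_size, num_hiddens, num_layers, dropout = 256, 256, 2, 0.2`) are assumptions taken from the surrounding section of `bahdanau-attention.md` and are not visible in this diff; `Seq2SeqAttentionDecoder` is the attention decoder defined earlier in that file, and it is assumed that `lr` reaches the optimizer through the model's `configure_optimizers`, per the d2l training API of this era.

```python
# Sketch only: hyperparameter values below are assumptions from the
# surrounding section of bahdanau-attention.md, not part of this diff.
from d2l import torch as d2l

data = d2l.MTFraEng(batch_size=128)  # assumed English-French dataset setup
embed_size, num_hiddens, num_layers, dropout = 256, 256, 2, 0.2  # assumed
encoder = d2l.Seq2SeqEncoder(
    len(data.src_vocab), embed_size, num_hiddens, num_layers, dropout)
decoder = Seq2SeqAttentionDecoder(  # defined earlier in bahdanau-attention.md
    len(data.tgt_vocab), embed_size, num_hiddens, num_layers, dropout)
model = d2l.Seq2Seq(encoder, decoder, tgt_pad=data.tgt_vocab['<pad>'],
                    lr=0.005)  # the tuned learning rate from this commit
trainer = d2l.Trainer(max_epochs=50, gradient_clip_val=1, num_gpus=1)
trainer.fit(model, data)
```

Presumably the 5x larger step speeds up convergence within the fixed 50-epoch budget; with `gradient_clip_val=1` already in place, the larger step is less likely to destabilize the recurrent decoder.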
