From 9f55f69f6cd343cf411baa74328d9aec6a2f6146 Mon Sep 17 00:00:00 2001
From: Jiri Podivin
Date: Sun, 5 May 2024 17:29:23 +0200
Subject: [PATCH] Changing tokenized_dataset to tokenized_datasets

Signed-off-by: Jiri Podivin
---
 chapters/en/chapter7/6.mdx | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/chapters/en/chapter7/6.mdx b/chapters/en/chapter7/6.mdx
index 6c6418c75..44551f15d 100644
--- a/chapters/en/chapter7/6.mdx
+++ b/chapters/en/chapter7/6.mdx
@@ -383,13 +383,13 @@ Now we can use the `prepare_tf_dataset()` method to convert our datasets to Tens
 ```python
 tf_train_dataset = model.prepare_tf_dataset(
-    tokenized_dataset["train"],
+    tokenized_datasets["train"],
     collate_fn=data_collator,
     shuffle=True,
     batch_size=32,
 )
 tf_eval_dataset = model.prepare_tf_dataset(
-    tokenized_dataset["valid"],
+    tokenized_datasets["valid"],
     collate_fn=data_collator,
     shuffle=False,
     batch_size=32,
 )
@@ -726,9 +726,9 @@ Let's start with the dataloaders. We only need to set the dataset's format to `"
 ```py
 from torch.utils.data.dataloader import DataLoader
 
-tokenized_dataset.set_format("torch")
-train_dataloader = DataLoader(tokenized_dataset["train"], batch_size=32, shuffle=True)
-eval_dataloader = DataLoader(tokenized_dataset["valid"], batch_size=32)
+tokenized_datasets.set_format("torch")
+train_dataloader = DataLoader(tokenized_datasets["train"], batch_size=32, shuffle=True)
+eval_dataloader = DataLoader(tokenized_datasets["valid"], batch_size=32)
 ```
 
 Next, we group the parameters so that the optimizer knows which ones will get an additional weight decay. Usually, all bias and LayerNorm weights terms are exempt from this; here's how we can do this: