diff --git a/00_pytorch_fundamentals.ipynb b/00_pytorch_fundamentals.ipynb
index 04b06806..892cd772 100644
--- a/00_pytorch_fundamentals.ipynb
+++ b/00_pytorch_fundamentals.ipynb
@@ -984,7 +984,7 @@
  "\n",
  "Some are specific for CPU and some are better for GPU.\n",
  "\n",
- "Getting to know which is which can take some time.\n",
+ "Getting to know which one is which can take some time.\n",
  "\n",
  "Generally if you see `torch.cuda` anywhere, the tensor is being used for GPU (since Nvidia GPUs use a computing toolkit called CUDA).\n",
  "\n",
@@ -1901,7 +1901,7 @@
  "id": "bXKozI4T0hFi"
  },
  "source": [
- "Without the transpose, the rules of matrix mulitplication aren't fulfilled and we get an error like above.\n",
+ "Without the transpose, the rules of matrix multiplication aren't fulfilled and we get an error like above.\n",
  "\n",
  "How about a visual? \n",
  "\n",
@@ -1988,7 +1988,7 @@
  "id": "zIGrP5j1pN7j"
  },
  "source": [
- "> **Question:** What happens if you change `in_features` from 2 to 3 above? Does it error? How could you change the shape of the input (`x`) to accomodate to the error? Hint: what did we have to do to `tensor_B` above?"
+ "> **Question:** What happens if you change `in_features` from 2 to 3 above? Does it error? How could you change the shape of the input (`x`) to accommodate the error? Hint: what did we have to do to `tensor_B` above?"
  ]
  },
  {
@@ -2188,7 +2188,7 @@
  "\n",
  "You can change the datatypes of tensors using [`torch.Tensor.type(dtype=None)`](https://pytorch.org/docs/stable/generated/torch.Tensor.type.html) where the `dtype` parameter is the datatype you'd like to use.\n",
  "\n",
- "First we'll create a tensor and check it's datatype (the default is `torch.float32`)."
+ "First we'll create a tensor and check its datatype (the default is `torch.float32`)."
  ]
  },
  {
@@ -2289,7 +2289,7 @@
  }
  ],
  "source": [
- "# Create a int8 tensor\n",
+ "# Create an int8 tensor\n",
  "tensor_int8 = tensor.type(torch.int8)\n",
  "tensor_int8"
  ]
@@ -3139,7 +3139,7 @@
  "source": [
  "Just as you might've expected, the tensors come out with different values.\n",
  "\n",
- "But what if you wanted to created two random tensors with the *same* values.\n",
+ "But what if you wanted to create two random tensors with the *same* values.\n",
  "\n",
  "As in, the tensors would still contain random values but they would be of the same flavour.\n",
  "\n",
@@ -3220,7 +3220,7 @@
  "It looks like setting the seed worked. \n",
  "\n",
  "> **Resource:** What we've just covered only scratches the surface of reproducibility in PyTorch. For more, on reproducibility in general and random seeds, I'd checkout:\n",
- "> * [The PyTorch reproducibility documentation](https://pytorch.org/docs/stable/notes/randomness.html) (a good exericse would be to read through this for 10-minutes and even if you don't understand it now, being aware of it is important).\n",
+ "> * [The PyTorch reproducibility documentation](https://pytorch.org/docs/stable/notes/randomness.html) (a good exercise would be to read through this for 10 minutes and even if you don't understand it now, being aware of it is important).\n",
  "> * [The Wikipedia random seed page](https://en.wikipedia.org/wiki/Random_seed) (this'll give a good overview of random seeds and pseudorandomness in general)."
  ]
  },