Commit

all typos done
pritesh2000 committed Sep 4, 2024
1 parent dd689e9 commit 20a51fd
Showing 1 changed file with 7 additions and 7 deletions.
14 changes: 7 additions & 7 deletions 00_pytorch_fundamentals.ipynb
@@ -984,7 +984,7 @@
"\n",
"Some are specific for CPU and some are better for GPU.\n",
"\n",
"Getting to know which is which can take some time.\n",
"Getting to know which one to use can take some time.\n",
"\n",
"Generally if you see `torch.cuda` anywhere, the tensor is being used for GPU (since Nvidia GPUs use a computing toolkit called CUDA).\n",
"\n",
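The `torch.cuda` note above is usually paired with a device-agnostic setup pattern. A minimal sketch (assuming a standard PyTorch install; the variable names are illustrative):

```python
import torch

# Pick the GPU if CUDA is available, otherwise fall back to the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Tensors can then be created on (or moved to) that device.
tensor = torch.tensor([1, 2, 3], device=device)
print(tensor.device)
```

The same code then runs unchanged on machines with or without an Nvidia GPU.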
@@ -1901,7 +1901,7 @@
"id": "bXKozI4T0hFi"
},
"source": [
"Without the transpose, the rules of matrix mulitplication aren't fulfilled and we get an error like above.\n",
"Without the transpose, the rules of matrix multiplication aren't fulfilled and we get an error like above.\n",
"\n",
"How about a visual? \n",
"\n",
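The transpose fix described above can be sketched as follows (a minimal example with two `(3, 2)` tensors, analogous to the `tensor_A`/`tensor_B` used in this notebook):

```python
import torch

tensor_A = torch.tensor([[1., 2.], [3., 4.], [5., 6.]])    # shape: (3, 2)
tensor_B = torch.tensor([[7., 10.], [8., 11.], [9., 12.]]) # shape: (3, 2)

# torch.matmul(tensor_A, tensor_B) would error: the inner dimensions
# (2 and 3) don't match. Transposing tensor_B gives (3, 2) @ (2, 3).
output = torch.matmul(tensor_A, tensor_B.T)
print(output.shape)  # torch.Size([3, 3])
```

The outer dimensions of the two operands become the shape of the result, which is why the output is `(3, 3)`.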
@@ -1988,7 +1988,7 @@
"id": "zIGrP5j1pN7j"
},
"source": [
"> **Question:** What happens if you change `in_features` from 2 to 3 above? Does it error? How could you change the shape of the input (`x`) to accomodate to the error? Hint: what did we have to do to `tensor_B` above?"
"> **Question:** What happens if you change `in_features` from 2 to 3 above? Does it error? How could you change the shape of the input (`x`) to accommodate the error? Hint: what did we have to do to `tensor_B` above?"
]
},
{
@@ -2188,7 +2188,7 @@
"\n",
"You can change the datatypes of tensors using [`torch.Tensor.type(dtype=None)`](https://pytorch.org/docs/stable/generated/torch.Tensor.type.html) where the `dtype` parameter is the datatype you'd like to use.\n",
"\n",
"First we'll create a tensor and check it's datatype (the default is `torch.float32`)."
"First we'll create a tensor and check its datatype (the default is `torch.float32`)."
]
},
{
@@ -2289,7 +2289,7 @@
}
],
"source": [
"# Create a int8 tensor\n",
"# Create an int8 tensor\n",
"tensor_int8 = tensor.type(torch.int8)\n",
"tensor_int8"
]
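Putting the `torch.Tensor.type` calls from this section together, a small standalone sketch (assuming the default `float32` tensor created earlier in the notebook):

```python
import torch

# Default floating-point dtype is torch.float32
tensor = torch.arange(10., 100., 10.)
print(tensor.dtype)

# Cast to other datatypes with torch.Tensor.type(dtype)
tensor_float16 = tensor.type(torch.float16)
tensor_int8 = tensor.type(torch.int8)
print(tensor_float16.dtype, tensor_int8.dtype)
```

Note that casting to a lower-precision type like `int8` truncates values that don't fit, so it trades accuracy for memory.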
@@ -3139,7 +3139,7 @@
"source": [
"Just as you might've expected, the tensors come out with different values.\n",
"\n",
"But what if you wanted to created two random tensors with the *same* values.\n",
"But what if you wanted to create two random tensors with the *same* values?\n",
"\n",
"As in, the tensors would still contain random values but they would be of the same flavour.\n",
"\n",
@@ -3220,7 +3220,7 @@
"It looks like setting the seed worked. \n",
"\n",
"> **Resource:** What we've just covered only scratches the surface of reproducibility in PyTorch. For more on reproducibility in general and random seeds, I'd check out:\n",
"> * [The PyTorch reproducibility documentation](https://pytorch.org/docs/stable/notes/randomness.html) (a good exericse would be to read through this for 10-minutes and even if you don't understand it now, being aware of it is important).\n",
"> * [The PyTorch reproducibility documentation](https://pytorch.org/docs/stable/notes/randomness.html) (a good exercise would be to read through this for 10 minutes and even if you don't understand it now, being aware of it is important).\n",
"> * [The Wikipedia random seed page](https://en.wikipedia.org/wiki/Random_seed) (this'll give a good overview of random seeds and pseudorandomness in general)."
]
},
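The seeding pattern this cell refers to can be sketched like this (note the seed is reset before each `torch.rand` call; the seed value is arbitrary):

```python
import torch

RANDOM_SEED = 42

torch.manual_seed(RANDOM_SEED)
random_tensor_C = torch.rand(3, 4)

# Reset the seed before creating the second tensor,
# otherwise the generator has moved on and the values differ.
torch.manual_seed(RANDOM_SEED)
random_tensor_D = torch.rand(3, 4)

print(random_tensor_C == random_tensor_D)  # all True
```

Without the second `torch.manual_seed` call, `random_tensor_D` would come from a later point in the pseudorandom sequence and the comparison would be mostly `False`.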
