Merge pull request #1073 from pritesh2000/gram-1/01
01_pytorch_workflow.ipynb
mrdbourke authored Sep 12, 2024
2 parents 5fbccf6 + 2ad8d00 commit a127b23
Showing 1 changed file with 15 additions and 15 deletions.
30 changes: 15 additions & 15 deletions 01_pytorch_workflow.ipynb
@@ -20,7 +20,7 @@
"source": [
"# 01. PyTorch Workflow Fundamentals\n",
"\n",
"The essence of machine learning and deep learning is to take some data from the past, build an algorithm (like a neural network) to discover patterns in it and use the discoverd patterns to predict the future.\n",
"The essence of machine learning and deep learning is to take some data from the past, build an algorithm (like a neural network) to discover patterns in it and use the discovered patterns to predict the future.\n",
"\n",
"There are many ways to do this and many new ways are being discovered all the time.\n",
"\n",
@@ -260,7 +260,7 @@
"\n",
"We can create them by splitting our `X` and `y` tensors.\n",
"\n",
"> **Note:** When dealing with real-world data, this step is typically done right at the start of a project (the test set should always be kept separate from all other data). We want our model to learn on training data and then evaluate it on test data to get an indication of how well it **generalizes** to unseen examples.\n"
"> **Note:** When dealing with real-world data, this step is typically done right at the start of a project (the test set should always be kept separate from all other data). We want our model to learn from training data and then evaluate it on test data to get an indication of how well it **generalizes** to unseen examples.\n"
]
},
{
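For reference, the split described in this note takes only a few lines. A minimal sketch, assuming the `X` and `y` tensors created earlier in the notebook (the 80/20 ratio is the common default used here):

```python
# Split the data: first 80% for training, last 20% for testing
train_split = int(0.8 * len(X))  # index where the training data ends
X_train, y_train = X[:train_split], y[:train_split]
X_test, y_test = X[train_split:], y[train_split:]
```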
@@ -470,7 +470,7 @@
"![a pytorch linear model with annotations](https://raw.githubusercontent.com/mrdbourke/pytorch-deep-learning/main/images/01-pytorch-linear-model-annotated.png)\n",
"*Basic building blocks of creating a PyTorch model by subclassing `nn.Module`. For objects that subclass `nn.Module`, the `forward()` method must be defined.*\n",
"\n",
"> **Resource:** See more of these essential modules and their uses cases in the [PyTorch Cheat Sheet](https://pytorch.org/tutorials/beginner/ptcheat.html). \n"
"> **Resource:** See more of these essential modules and their use cases in the [PyTorch Cheat Sheet](https://pytorch.org/tutorials/beginner/ptcheat.html). \n"
]
},
{
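The pattern in the annotated image boils down to a short class definition. A minimal sketch, assuming the parameter-based linear regression model this notebook builds:

```python
import torch
from torch import nn

class LinearRegressionModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Parameters start as random values; training adjusts them
        self.weights = nn.Parameter(torch.randn(1, dtype=torch.float))
        self.bias = nn.Parameter(torch.randn(1, dtype=torch.float))

    # Subclasses of nn.Module must define forward(): the model's computation
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.weights * x + self.bias  # linear regression: y = weights*x + bias
```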
@@ -750,7 +750,7 @@
"source": [
"Woah! Those predictions look pretty bad...\n",
"\n",
"This make sense though when you remember our model is just using random parameter values to make predictions.\n",
"This makes sense though, when you remember our model is just using random parameter values to make predictions.\n",
"\n",
"It hasn't even looked at the blue dots to try to predict the green dots.\n",
"\n",
@@ -793,7 +793,7 @@
"\n",
"| Function | What does it do? | Where does it live in PyTorch? | Common values |\n",
"| ----- | ----- | ----- | ----- |\n",
"| **Loss function** | Measures how wrong your models predictions (e.g. `y_preds`) are compared to the truth labels (e.g. `y_test`). Lower the better. | PyTorch has plenty of built-in loss functions in [`torch.nn`](https://pytorch.org/docs/stable/nn.html#loss-functions). | Mean absolute error (MAE) for regression problems ([`torch.nn.L1Loss()`](https://pytorch.org/docs/stable/generated/torch.nn.L1Loss.html)). Binary cross entropy for binary classification problems ([`torch.nn.BCELoss()`](https://pytorch.org/docs/stable/generated/torch.nn.BCELoss.html)). |\n",
"| **Loss function** | Measures how wrong your model's predictions (e.g. `y_preds`) are compared to the truth labels (e.g. `y_test`). Lower the better. | PyTorch has plenty of built-in loss functions in [`torch.nn`](https://pytorch.org/docs/stable/nn.html#loss-functions). | Mean absolute error (MAE) for regression problems ([`torch.nn.L1Loss()`](https://pytorch.org/docs/stable/generated/torch.nn.L1Loss.html)). Binary cross entropy for binary classification problems ([`torch.nn.BCELoss()`](https://pytorch.org/docs/stable/generated/torch.nn.BCELoss.html)). |\n",
"| **Optimizer** | Tells your model how to update its internal parameters to best lower the loss. | You can find various optimization function implementations in [`torch.optim`](https://pytorch.org/docs/stable/optim.html). | Stochastic gradient descent ([`torch.optim.SGD()`](https://pytorch.org/docs/stable/generated/torch.optim.SGD.html#torch.optim.SGD)). Adam optimizer ([`torch.optim.Adam()`](https://pytorch.org/docs/stable/generated/torch.optim.Adam.html#torch.optim.Adam)). | \n",
"\n",
"Let's create a loss function and an optimizer we can use to help improve our model.\n",
@@ -843,14 +843,14 @@
"\n",
"The training loop involves the model going through the training data and learning the relationships between the `features` and `labels`.\n",
"\n",
"The testing loop involves going through the testing data and evaluating how good the patterns are that the model learned on the training data (the model never see's the testing data during training).\n",
"The testing loop involves going through the testing data and evaluating how good the patterns are that the model learned on the training data (the model never sees the testing data during training).\n",
"\n",
"Each of these is called a \"loop\" because we want our model to look (loop through) at each sample in each dataset.\n",
"\n",
"To create these we're going to write a Python `for` loop in the theme of the [unofficial PyTorch optimization loop song](https://twitter.com/mrdbourke/status/1450977868406673410?s=20) (there's a [video version too](https://youtu.be/Nutpusq_AFw)).\n",
"\n",
"![the unofficial pytorch optimization loop song](https://raw.githubusercontent.com/mrdbourke/pytorch-deep-learning/main/images/01-pytorch-optimization-loop-song.png)\n",
"*The unoffical PyTorch optimization loops song, a fun way to remember the steps in a PyTorch training (and testing) loop.*\n",
"*The unofficial PyTorch optimization loops song, a fun way to remember the steps in a PyTorch training (and testing) loop.*\n",
"\n",
"There will be a fair bit of code but nothing we can't handle.\n"
]
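As a preview, the five steps the song walks through map onto a loop like this (a minimal sketch assuming the `model_0`, `loss_fn` and `optimizer` created above; the epoch count is illustrative):

```python
torch.manual_seed(42)
epochs = 100  # one epoch = one full pass through the training data

for epoch in range(epochs):
    model_0.train()                  # put the model in training mode
    y_pred = model_0(X_train)        # 1. forward pass
    loss = loss_fn(y_pred, y_train)  # 2. calculate the loss
    optimizer.zero_grad()            # 3. zero the gradients from the last step
    loss.backward()                  # 4. backpropagation
    optimizer.step()                 # 5. gradient descent (update the parameters)
```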
@@ -901,7 +901,7 @@
"| ----- | ----- | ----- | ----- |\n",
"| 1 | Forward pass | The model goes through all of the testing data once, performing its `forward()` function calculations. | `model(x_test)` |\n",
"| 2 | Calculate the loss | The model's outputs (predictions) are compared to the ground truth and evaluated to see how wrong they are. | `loss = loss_fn(y_pred, y_test)` | \n",
"| 3 | Calulate evaluation metrics (optional) | Alongisde the loss value you may want to calculate other evaluation metrics such as accuracy on the test set. | Custom functions |\n",
"| 3 | Calulate evaluation metrics (optional) | Alongside the loss value you may want to calculate other evaluation metrics such as accuracy on the test set. | Custom functions |\n",
"\n",
"Notice the testing loop doesn't contain performing backpropagation (`loss.backward()`) or stepping the optimizer (`optimizer.step()`), this is because no parameters in the model are being changed during testing, they've already been calculated. For testing, we're only interested in the output of the forward pass through the model.\n",
"\n",
@@ -1047,7 +1047,7 @@
"\n",
"Well, thanks to our loss function and optimizer, the model's internal parameters (`weights` and `bias`) were updated to better reflect the underlying patterns in the data.\n",
"\n",
"Let's inspect our model's [`.state_dict()`](https://pytorch.org/tutorials/recipes/recipes/what_is_state_dict.html) to see see how close our model gets to the original values we set for weights and bias.\n",
"Let's inspect our model's [`.state_dict()`](https://pytorch.org/tutorials/recipes/recipes/what_is_state_dict.html) to see how close our model gets to the original values we set for weights and bias.\n",
"\n"
]
},
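A sketch of that inspection (assuming `weight` and `bias` are the variable names used to generate the data in an earlier cell):

```python
# Compare the learned parameters to the values used to generate the data
print(model_0.state_dict())  # OrderedDict holding the model's weights and bias
print(f"Original weight: {weight}, original bias: {bias}")
```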
@@ -1090,7 +1090,7 @@
"source": [
"Wow! How cool is that?\n",
"\n",
"Our model got very close to calculate the exact original values for `weight` and `bias` (and it would probably get even closer if we trained it for longer).\n",
"Our model got very close to calculating the exact original values for `weight` and `bias` (and it would probably get even closer if we trained it for longer).\n",
"\n",
"> **Exercise:** Try changing the `epochs` value above to 200, what happens to the loss curves and the weights and bias parameter values of the model?\n",
"\n",
@@ -1212,7 +1212,7 @@
"source": [
"Woohoo! Those red dots are looking far closer than they were before!\n",
"\n",
"Let's get onto saving an reloading a model in PyTorch."
"Let's get onto saving and reloading a model in PyTorch."
]
},
{
@@ -1343,7 +1343,7 @@
"\n",
"So instead, we're using the flexible method of saving and loading just the `state_dict()`, which again is basically a dictionary of model parameters.\n",
"\n",
"Let's test it out by created another instance of `LinearRegressionModel()`, which is a subclass of `torch.nn.Module` and will hence have the in-built method `load_state_dict()`."
"Let's test it out by creating another instance of `LinearRegressionModel()`, which is a subclass of `torch.nn.Module` and will hence have the in-built method `load_state_dict()`."
]
},
{
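Saving and reloading a `state_dict()` takes only a few lines. A sketch, with an illustrative file path:

```python
from pathlib import Path

# Save only the state_dict (the learned parameters), not the whole model object
MODEL_PATH = Path("models/01_pytorch_workflow_model_0.pth")  # illustrative path
MODEL_PATH.parent.mkdir(parents=True, exist_ok=True)
torch.save(obj=model_0.state_dict(), f=MODEL_PATH)

# Recreate the model class, then load the saved parameters into the new instance
loaded_model_0 = LinearRegressionModel()
loaded_model_0.load_state_dict(torch.load(f=MODEL_PATH))
```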
@@ -1797,7 +1797,7 @@
" def forward(self, x: torch.Tensor) -> torch.Tensor:\n",
" return self.linear_layer(x)\n",
"\n",
"# Set the manual seed when creating the model (this isn't always need but is used for demonstrative purposes, try commenting it out and seeing what happens)\n",
"# Set the manual seed when creating the model (this isn't always needed but is used for demonstrative purposes, try commenting it out and seeing what happens)\n",
"torch.manual_seed(42)\n",
"model_1 = LinearRegressionModelV2()\n",
"model_1, model_1.state_dict()"
@@ -1879,7 +1879,7 @@
}
],
"source": [
"# Set model to GPU if it's availalble, otherwise it'll default to CPU\n",
"# Set model to GPU if it's available, otherwise it'll default to CPU\n",
"model_1.to(device) # the device variable was set above to be \"cuda\" if available or \"cpu\" if not\n",
"next(model_1.parameters()).device"
]
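The `device` variable the comment refers to is typically created with device-agnostic setup code like this sketch:

```python
# Use the GPU if one is available, otherwise fall back to the CPU
device = "cuda" if torch.cuda.is_available() else "cpu"
```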
@@ -2434,7 +2434,7 @@
"* Read [What is `torch.nn`, really?](https://pytorch.org/tutorials/beginner/nn_tutorial.html) by Jeremy Howard for a deeper understanding of how one of the most important modules in PyTorch works. \n",
"* Spend 10-minutes scrolling through and checking out the [PyTorch documentation cheatsheet](https://pytorch.org/tutorials/beginner/ptcheat.html) for all of the different PyTorch modules you might come across.\n",
"* Spend 10-minutes reading the [loading and saving documentation on the PyTorch website](https://pytorch.org/tutorials/beginner/saving_loading_models.html) to become more familiar with the different saving and loading options in PyTorch. \n",
"* Spend 1-2 hours read/watching the following for an overview of the internals of gradient descent and backpropagation, the two main algorithms that have been working in the background to help our model learn. \n",
"* Spend 1-2 hours reading/watching the following for an overview of the internals of gradient descent and backpropagation, the two main algorithms that have been working in the background to help our model learn. \n",
" * [Wikipedia page for gradient descent](https://en.wikipedia.org/wiki/Gradient_descent)\n",
" * [Gradient Descent Algorithm — a deep dive](https://towardsdatascience.com/gradient-descent-algorithm-a-deep-dive-cf04e8115f21) by Robert Kwiatkowski\n",
" * [Gradient descent, how neural networks learn video](https://youtu.be/IHZwWFHWa-w) by 3Blue1Brown\n",