Merge pull request #1065 from pritesh2000/gram-1/07
07_pytorch_experiment_tracking.ipynb
mrdbourke authored Sep 5, 2024
2 parents 344c834 + 4f7e678 commit a2273e4
Showing 1 changed file with 12 additions and 12 deletions.
24 changes: 12 additions & 12 deletions 07_pytorch_experiment_tracking.ipynb
@@ -21,7 +21,7 @@
"\n",
"We've trained a fair few models now on the journey to making FoodVision Mini (an image classification model to classify images of pizza, steak or sushi).\n",
"\n",
"And so far we've keep track of them via Python dictionaries.\n",
"And so far we've kept track of them via Python dictionaries.\n",
"\n",
"Or just comparing them by the metric print outs during training.\n",
"\n",
@@ -83,7 +83,7 @@
"source": [
"## Different ways to track machine learning experiments \n",
"\n",
"There are as many different ways to track machine learning experiments as there is experiments to run.\n",
"There are as many different ways to track machine learning experiments as there are experiments to run.\n",
"\n",
"This table covers a few.\n",
"\n",
@@ -92,7 +92,7 @@
"| Python dictionaries, CSV files, print outs | None | Easy to setup, runs in pure Python | Hard to keep track of large numbers of experiments | Free |\n",
"| [TensorBoard](https://www.tensorflow.org/tensorboard/get_started) | Minimal, install [`tensorboard`](https://pypi.org/project/tensorboard/) | Extensions built into PyTorch, widely recognized and used, easily scales. | User-experience not as nice as other options. | Free |\n",
"| [Weights & Biases Experiment Tracking](https://wandb.ai/site/experiment-tracking) | Minimal, install [`wandb`](https://docs.wandb.ai/quickstart), make an account | Incredible user experience, make experiments public, tracks almost anything. | Requires external resource outside of PyTorch. | Free for personal use | \n",
"| [MLFlow](https://mlflow.org/) | Minimal, install `mlflow` and starting tracking | Fully open-source MLOps lifecycle management, many integrations. | Little bit harder to setup a remote tracking server than other services. | Free | \n",
"| [MLFlow](https://mlflow.org/) | Minimal, install `mlflow` and start tracking | Fully open-source MLOps lifecycle management, many integrations. | Little bit harder to setup a remote tracking server than other services. | Free | \n",
"\n",
"<img src=\"https://raw.githubusercontent.com/mrdbourke/pytorch-deep-learning/main/images/07-different-places-to-track-experiments.png\" alt=\"various places to track machine learning experiments\" width=900/>\n",
"\n",
@@ -276,7 +276,7 @@
"\n",
"Let's create a function to \"set the seeds\" called `set_seeds()`.\n",
"\n",
"> **Note:** Recall a [random seed](https://en.wikipedia.org/wiki/Random_seed) is a way of flavouring the randomness generated by a computer. They aren't necessary to always set when running machine learning code, however, they help ensure there's an element of reproducibility (the numbers I get with my code are similar to the numbers you get with your code). Outside of an education or experimental setting, random seeds generally aren't required."
"> **Note:** Recalling a [random seed](https://en.wikipedia.org/wiki/Random_seed) is a way of flavouring the randomness generated by a computer. They aren't necessary to always set when running machine learning code, however, they help ensure there's an element of reproducibility (the numbers I get with my code are similar to the numbers you get with your code). Outside of an educational or experimental setting, random seeds generally aren't required."
]
},
{
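The body of `set_seeds()` sits outside this hunk. As a minimal sketch, assuming it follows the usual PyTorch seeding pattern used elsewhere in this course, it might look like:

```python
import torch

def set_seeds(seed: int = 42):
    """Set random seeds for torch operations on CPU and GPU."""
    torch.manual_seed(seed)       # seed the CPU-side RNG
    torch.cuda.manual_seed(seed)  # seed the GPU-side RNG (safely deferred if CUDA is absent)
```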
@@ -313,7 +313,7 @@
"\n",
"So how about we run some experiments and try to further improve our results?\n",
"\n",
"To do so, we'll use similar code to the previous section to download the [`pizza_steak_sushi.zip`](https://github.com/mrdbourke/pytorch-deep-learning/blob/main/data/pizza_steak_sushi.zip) (if the data doesn't already exist) except this time its been functionised.\n",
"To do so, we'll use similar code to the previous section to download the [`pizza_steak_sushi.zip`](https://github.com/mrdbourke/pytorch-deep-learning/blob/main/data/pizza_steak_sushi.zip) (if the data doesn't already exist) except this time it's been functionalised.\n",
"\n",
"This will allow us to use it again later. "
]
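The functionalised download itself isn't shown in this hunk. A sketch of what such a helper might look like (the function name, signature and paths are assumptions, not the notebook's verbatim code):

```python
import os
import zipfile
from pathlib import Path

import requests

def download_data(source: str, destination: str) -> Path:
    """Download a zipped dataset from source and unzip it to data/destination."""
    data_path = Path("data/")
    image_path = data_path / destination

    if image_path.is_dir():  # skip the download if the data already exists
        print(f"[INFO] {image_path} already exists, skipping download.")
    else:
        image_path.mkdir(parents=True, exist_ok=True)
        target_file = Path(source).name
        with open(data_path / target_file, "wb") as f:
            f.write(requests.get(source).content)
        with zipfile.ZipFile(data_path / target_file, "r") as zip_ref:
            zip_ref.extractall(image_path)
        os.remove(data_path / target_file)  # clean up the zip archive

    return image_path

image_path = download_data(
    source="https://github.com/mrdbourke/pytorch-deep-learning/raw/main/data/pizza_steak_sushi.zip",
    destination="pizza_steak_sushi",
)
```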
@@ -421,7 +421,7 @@
"\n",
"And since we'll be using transfer learning and specifically pretrained models from [`torchvision.models`](https://pytorch.org/vision/stable/models.html), we'll create a transform to prepare our images correctly.\n",
"\n",
"To transform our images in tensors, we can use:\n",
"To transform our images into tensors, we can use:\n",
"1. Manually created transforms using `torchvision.transforms`.\n",
"2. Automatically created transforms using `torchvision.models.MODEL_NAME.MODEL_WEIGHTS.DEFAULT.transforms()`.\n",
" * Where `MODEL_NAME` is a specific `torchvision.models` architecture, `MODEL_WEIGHTS` is a specific set of pretrained weights and `DEFAULT` means the \"best available weights\".\n",
@@ -959,7 +959,7 @@
"source": [
"> **Note:** You might notice the results here are slightly different to what our model got in 06. PyTorch Transfer Learning. The difference comes from using the `engine.train()` and our modified `train()` function. Can you guess why? The [PyTorch documentation on randomness](https://pytorch.org/docs/stable/notes/randomness.html) may help more.\n",
"\n",
"Running the cell above we get similar outputs we got in [06. PyTorch Transfer Learning section 4: Train model](https://www.learnpytorch.io/06_pytorch_transfer_learning/#4-train-model) but the difference is behind the scenes our `writer` instance has created a `runs/` directory storing our model's results.\n",
"Running the cell above we get similar outputs we got in [06. PyTorch Transfer Learning section 4: Train model](https://www.learnpytorch.io/06_pytorch_transfer_learning/#4-train-model) but the difference is that behind the scenes our `writer` instance has created a `runs/` directory storing our model's results.\n",
"\n",
"For example, the save location might look like:\n",
"\n",
@@ -1361,7 +1361,7 @@
"\n",
"With practice and running many different experiments, you'll start to build an intuition of what *might* help your model.\n",
"\n",
"I say *might* on purpose because there's no guarantees.\n",
"I say *might* on purpose because there's no guarantee.\n",
"\n",
"But generally, in light of [*The Bitter Lesson*](http://www.incompleteideas.net/IncIdeas/BitterLesson.html) (I've mentioned this twice now because it's an important essay in the world of AI), generally the bigger your model (more learnable parameters) and the more data you have (more opportunities to learn), the better the performance.\n",
"\n",
@@ -1692,7 +1692,7 @@
"\n",
"# Create an EffNetB0 feature extractor\n",
"def create_effnetb0():\n",
" # 1. Get the base mdoel with pretrained weights and send to target device\n",
" # 1. Get the base model with pretrained weights and send to target device\n",
" weights = torchvision.models.EfficientNet_B0_Weights.DEFAULT\n",
" model = torchvision.models.efficientnet_b0(weights=weights).to(device)\n",
"\n",
@@ -2417,7 +2417,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Looks like our best model so far is 29 MB in size. We'll keep this in mind if we wanted to deploy it later on.\n",
"Looks like our best model so far is 29 MB in size. We'll keep this in mind if we want to deploy it later on.\n",
"\n",
"Time to make and visualize some predictions.\n",
"\n",
@@ -2595,7 +2595,7 @@
"\n",
"The main ideas you should take away from this Milestone Project 1 are:\n",
"\n",
"* The machine learning practioner's motto: *experiment, experiment, experiment!* (though we've been doing plenty of this already).\n",
"* The machine learning practitioner's motto: *experiment, experiment, experiment!* (though we've been doing plenty of this already).\n",
"* In the beginning, keep your experiments small so you can work fast, your first few experiments shouldn't take more than a few seconds to a few minutes to run.\n",
"* The more experiments you do, the quicker you can figure out what *doesn't* work.\n",
"* Scale up when you find something that works. For example, since we've found a pretty good performing model with EffNetB2 as a feature extractor, perhaps you'd now like to see what happens when you scale it up to the whole [Food101 dataset](https://pytorch.org/vision/main/generated/torchvision.datasets.Food101.html) from `torchvision.datasets`.\n",
@@ -2666,7 +2666,7 @@
"NUM_WORKERS = os.cpu_count() # use maximum number of CPUs for workers to load data \n",
"\n",
"# Note: this is an update version of data_setup.create_dataloaders to handle\n",
"# differnt train and test transforms.\n",
"# different train and test transforms.\n",
"def create_dataloaders(\n",
" train_dir, \n",
" test_dir, \n",
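The rest of the signature is truncated by the diff view. Assuming the updated `create_dataloaders()` goes on to accept separate `train_transform` and `test_transform` arguments (plus `batch_size` and `num_workers`), a usage sketch might look like:

```python
from torchvision import transforms

# Hypothetical transforms: augmentation for training, plain resize for testing
train_transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.TrivialAugmentWide(),  # data augmentation on training images only
    transforms.ToTensor(),
])
test_transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

train_dataloader, test_dataloader, class_names = create_dataloaders(
    train_dir="data/pizza_steak_sushi/train",
    test_dir="data/pizza_steak_sushi/test",
    train_transform=train_transform,
    test_transform=test_transform,  # no augmentation at evaluation time
    batch_size=32,
    num_workers=NUM_WORKERS,  # defined above as os.cpu_count()
)
```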
