From 06e36151f4146b5a092c636121ee6a81409134d9 Mon Sep 17 00:00:00 2001 From: Junghwan Park Date: Sat, 1 Aug 2020 23:41:25 +0900 Subject: [PATCH] Rebuild --- .../saving_multiple_models_in_one_file.ipynb | 29 ++-- .../saving_multiple_models_in_one_file.py | 141 ++++++++-------- ...ing_and_loading_a_general_checkpoint.ipynb | 18 +-- ...saving_and_loading_a_general_checkpoint.py | 150 +++++++++-------- docs/advanced/sg_execution_times.html | 4 +- docs/beginner/sg_execution_times.html | 8 +- docs/intermediate/sg_execution_times.html | 4 +- docs/objects.inv | Bin 5859 -> 5888 bytes ...ving_and_loading_a_general_checkpoint.html | 153 +++++++++--------- .../saving_multiple_models_in_one_file.html | 144 ++++++++--------- docs/recipes/recipes_index.html | 8 +- docs/searchindex.js | 2 +- 12 files changed, 324 insertions(+), 337 deletions(-) diff --git a/docs/_downloads/9b89023ea3fb5bf2511a9c08a4311cce/saving_multiple_models_in_one_file.ipynb b/docs/_downloads/9b89023ea3fb5bf2511a9c08a4311cce/saving_multiple_models_in_one_file.ipynb index 0d5abe910..97c6dcffb 100644 --- a/docs/_downloads/9b89023ea3fb5bf2511a9c08a4311cce/saving_multiple_models_in_one_file.ipynb +++ b/docs/_downloads/9b89023ea3fb5bf2511a9c08a4311cce/saving_multiple_models_in_one_file.ipynb @@ -15,14 +15,14 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "\nSaving and loading multiple models in one file using PyTorch\n============================================================\nSaving and loading multiple models can be helpful for reusing models\nthat you have previously trained.\n\nIntroduction\n------------\nWhen saving a model comprised of multiple ``torch.nn.Modules``, such as\na GAN, a sequence-to-sequence model, or an ensemble of models, you must\nsave a dictionary of each model\u2019s state_dict and corresponding\noptimizer. You can also save any other items that may aid you in\nresuming training by simply appending them to the dictionary.\nTo load the models, first initialize the models and optimizers, then\nload the dictionary locally using ``torch.load()``. From here, you can\neasily access the saved items by simply querying the dictionary as you\nwould expect.\nIn this recipe, we will demonstrate how to save multiple models to one\nfile using PyTorch.\n\nSetup\n-----\nBefore we begin, we need to install ``torch`` if it isn\u2019t already\navailable.\n\n::\n\n pip install torch\n \n\n" + "\nPyTorch\uc5d0\uc11c \uc5ec\ub7ec \ubaa8\ub378\uc744 \ud558\ub098\uc758 \ud30c\uc77c\uc5d0 \uc800\uc7a5\ud558\uae30 & \ubd88\ub7ec\uc624\uae30\n============================================================\n\uc5ec\ub7ec \ubaa8\ub378\uc744 \uc800\uc7a5\ud558\uace0 \ubd88\ub7ec\uc624\ub294 \uac83\uc740 \uc774\uc804\uc5d0 \ud559\uc2b5\ud588\ub358 \ubaa8\ub378\ub4e4\uc744 \uc7ac\uc0ac\uc6a9\ud558\ub294\ub370 \ub3c4\uc6c0\uc774 \ub429\ub2c8\ub2e4.\n\n\uac1c\uc694\n------------\nGAN\uc774\ub098 \uc2dc\ud000\uc2a4-\ud22c-\uc2dc\ud000\uc2a4(sequence-to-sequence model), \uc559\uc0c1\ube14 \ubaa8\ub378(ensemble of models)\uacfc\n\uac19\uc774 \uc5ec\ub7ec ``torch.nn.Modules`` \ub85c \uad6c\uc131\ub41c \ubaa8\ub378\uc744 \uc800\uc7a5\ud560 \ub54c\ub294 \uac01 \ubaa8\ub378\uc758 state_dict\uc640\n\ud574\ub2f9 \uc635\ud2f0\ub9c8\uc774\uc800(optimizer)\uc758 \uc0ac\uc804\uc744 \uc800\uc7a5\ud574\uc57c \ud569\ub2c8\ub2e4. \ub610\ud55c, \ud559\uc2b5 \ud559\uc2b5\uc744 \uc7ac\uac1c\ud558\ub294\ub370\n\ud544\uc694\ud55c \ub2e4\ub978 \ud56d\ubaa9\ub4e4\uc744 \uc0ac\uc804\uc5d0 \ucd94\uac00\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. 
\ubaa8\ub378\ub4e4\uc744 \ubd88\ub7ec\uc62c \ub54c\uc5d0\ub294, \uba3c\uc800\n\ubaa8\ub378\ub4e4\uacfc \uc635\ud2f0\ub9c8\uc774\uc800\ub97c \ucd08\uae30\ud654\ud558\uace0, ``torch.load()`` \ub97c \uc0ac\uc6a9\ud558\uc5ec \uc0ac\uc804\uc744 \ubd88\ub7ec\uc635\ub2c8\ub2e4.\n\uc774\ud6c4 \uc6d0\ud558\ub294\ub300\ub85c \uc800\uc7a5\ud55c \ud56d\ubaa9\ub4e4\uc744 \uc0ac\uc804\uc5d0 \uc870\ud68c\ud558\uc5ec \uc811\uadfc\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.\n\uc774 \ub808\uc2dc\ud53c\uc5d0\uc11c\ub294 PyTorch\ub97c \uc0ac\uc6a9\ud558\uc5ec \uc5ec\ub7ec \ubaa8\ub378\ub4e4\uc744 \ud558\ub098\uc758 \ud30c\uc77c\uc5d0 \uc5b4\ub5bb\uac8c \uc800\uc7a5\ud558\uace0\n\ubd88\ub7ec\uc624\ub294\uc9c0 \uc0b4\ud3b4\ubcf4\uaca0\uc2b5\ub2c8\ub2e4.\n\n\uc124\uc815\n---------\n\uc2dc\uc791\ud558\uae30 \uc804\uc5d0 ``torch`` \uac00 \uc5c6\ub2e4\uba74 \uc124\uce58\ud574\uc57c \ud569\ub2c8\ub2e4.\n\n::\n\n pip install torch\n\n\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "Steps\n-----\n\n1. Import all necessary libraries for loading our data\n2. Define and intialize the neural network\n3. Initialize the optimizer\n4. Save multiple models\n5. Load multiple models\n\n1. Import necessary libraries for loading our data\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nFor this recipe, we will use ``torch`` and its subsidiaries ``torch.nn``\nand ``torch.optim``.\n\n\n" + "\ub2e8\uacc4(Steps)\n-------------\n\n1. \ub370\uc774\ud130 \ubd88\ub7ec\uc62c \ub54c \ud544\uc694\ud55c \ub77c\uc774\ube0c\ub7ec\ub9ac\ub4e4 \ubd88\ub7ec\uc624\uae30\n2. \uc2e0\uacbd\ub9dd\uc744 \uad6c\uc131\ud558\uace0 \ucd08\uae30\ud654\ud558\uae30\n3. \uc635\ud2f0\ub9c8\uc774\uc800 \ucd08\uae30\ud654\ud558\uae30\n4. \uc5ec\ub7ec \ubaa8\ub378\ub4e4 \uc800\uc7a5\ud558\uae30\n5. \uc5ec\ub7ec \ubaa8\ub378\ub4e4 \ubd88\ub7ec\uc624\uae30\n\n1. \ub370\uc774\ud130 \ubd88\ub7ec\uc62c \ub54c \ud544\uc694\ud55c \ub77c\uc774\ube0c\ub7ec\ub9ac\ub4e4 \ubd88\ub7ec\uc624\uae30\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n\uc774 \ub808\uc2dc\ud53c\uc5d0\uc11c\ub294 ``torch`` \uc640 \uc5ec\uae30 \ud3ec\ud568\ub41c ``torch.nn`` \uc640 ``torch.optim` \uc744\n\uc0ac\uc6a9\ud558\uaca0\uc2b5\ub2c8\ub2e4.\n\n\n" ] }, { @@ -40,7 +40,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "2. Define and intialize the neural network\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nFor sake of example, we will create a neural network for training\nimages. To learn more see the Defining a Neural Network recipe. Build\ntwo variables for the models to eventually save.\n\n\n" + "2. \uc2e0\uacbd\ub9dd\uc744 \uad6c\uc131\ud558\uace0 \ucd08\uae30\ud654\ud558\uae30\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n\uc608\ub97c \ub4e4\uc5b4, \uc774\ubbf8\uc9c0\ub97c \ud559\uc2b5\ud558\ub294 \uc2e0\uacbd\ub9dd\uc744 \ub9cc\ub4e4\uc5b4\ubcf4\uaca0\uc2b5\ub2c8\ub2e4. \ub354 \uc790\uc138\ud55c \ub0b4\uc6a9\uc740\n\uc2e0\uacbd\ub9dd \uad6c\uc131\ud558\uae30 \ub808\uc2dc\ud53c\ub97c \ucc38\uace0\ud574\uc8fc\uc138\uc694. \ubaa8\ub378\uc744 \uc800\uc7a5\ud560 2\uac1c\uc758 \ubcc0\uc218\ub4e4\uc744 \ub9cc\ub4ed\ub2c8\ub2e4.\n\n\n" ] }, { @@ -58,7 +58,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "3. Initialize the optimizer\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nWe will use SGD with momentum to build an optimizer for each model we\ncreated.\n\n\n" + "3. 
\uc635\ud2f0\ub9c8\uc774\uc800 \ucd08\uae30\ud654\ud558\uae30\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n\uc0dd\uc131\ud55c \ubaa8\ub378\ub4e4 \uac01\uac01\uc5d0 \ubaa8\uba58\ud140(momentum)\uc744 \uac16\ub294 SGD\ub97c \uc0ac\uc6a9\ud558\uaca0\uc2b5\ub2c8\ub2e4.\n\n\n" ] }, { @@ -76,7 +76,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "4. Save multiple models\n~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nCollect all relevant information and build your dictionary.\n\n\n" + "4. \uc5ec\ub7ec \ubaa8\ub378\ub4e4 \uc800\uc7a5\ud558\uae30\n~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n\uad00\ub828\ub41c \ubaa8\ub4e0 \uc815\ubcf4\ub4e4\uc744 \ubaa8\uc544\uc11c \uc0ac\uc804\uc744 \uad6c\uc131\ud569\ub2c8\ub2e4.\n\n\n" ] }, { @@ -87,14 +87,14 @@ }, "outputs": [], "source": [ - "# Specify a path to save to\nPATH = \"model.pt\"\n\ntorch.save({\n 'modelA_state_dict': netA.state_dict(),\n 'modelB_state_dict': netB.state_dict(),\n 'optimizerA_state_dict': optimizerA.state_dict(),\n 'optimizerB_state_dict': optimizerB.state_dict(),\n }, PATH)" + "# \uc800\uc7a5\ud560 \uacbd\ub85c \uc9c0\uc815\nPATH = \"model.pt\"\n\ntorch.save({\n 'modelA_state_dict': netA.state_dict(),\n 'modelB_state_dict': netB.state_dict(),\n 'optimizerA_state_dict': optimizerA.state_dict(),\n 'optimizerB_state_dict': optimizerB.state_dict(),\n }, PATH)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "4. Load multiple models\n~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nRemember to first initialize the models and optimizers, then load the\ndictionary locally.\n\n\n" + "5. \uc5ec\ub7ec \ubaa8\ub378\ub4e4 \ubd88\ub7ec\uc624\uae30\n~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n\uba3c\uc800 \ubaa8\ub378\uacfc \uc635\ud2f0\ub9c8\uc774\uc800\ub97c \ucd08\uae30\ud654\ud55c \ub4a4, \uc0ac\uc804\uc744 \ubd88\ub7ec\uc624\ub294 \uac83\uc744 \uae30\uc5b5\ud558\uc2ed\uc2dc\uc624.\n\n\n" ] }, { @@ -105,14 +105,25 @@ }, "outputs": [], "source": [ - "modelA = Net()\nmodelB = Net()\noptimModelA = optim.SGD(modelA.parameters(), lr=0.001, momentum=0.9)\noptimModelB = optim.SGD(modelB.parameters(), lr=0.001, momentum=0.9)\n\ncheckpoint = torch.load(PATH)\nmodelA.load_state_dict(checkpoint['modelA_state_dict'])\nmodelB.load_state_dict(checkpoint['modelB_state_dict'])\noptimizerA.load_state_dict(checkpoint['optimizerA_state_dict'])\noptimizerB.load_state_dict(checkpoint['optimizerB_state_dict'])\n\nmodelA.eval()\nmodelB.eval()\n# - or -\nmodelA.train()\nmodelB.train()" + "modelA = Net()\nmodelB = Net()\noptimModelA = optim.SGD(modelA.parameters(), lr=0.001, momentum=0.9)\noptimModelB = optim.SGD(modelB.parameters(), lr=0.001, momentum=0.9)\n\ncheckpoint = torch.load(PATH)\nmodelA.load_state_dict(checkpoint['modelA_state_dict'])\nmodelB.load_state_dict(checkpoint['modelB_state_dict'])\noptimizerA.load_state_dict(checkpoint['optimizerA_state_dict'])\noptimizerB.load_state_dict(checkpoint['optimizerB_state_dict'])\n\nmodelA.eval()\nmodelB.eval()\n# - \ub610\ub294 -\nmodelA.train()\nmodelB.train()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "You must call ``model.eval()`` to set dropout and batch normalization\nlayers to evaluation mode before running inference. Failing to do this\nwill yield inconsistent inference results.\n\nIf you wish to resuming training, call ``model.train()`` to ensure these\nlayers are in training mode.\n\nCongratulations! 
You have successfully saved and loaded multiple models\nin PyTorch.\n\nLearn More\n----------\n\nTake a look at these other recipes to continue your learning:\n\n- TBD\n- TBD\n\n\n"
+        "\ucd94\ub860(inference)\uc744 \uc2e4\ud589\ud558\uae30 \uc804\uc5d0 ``model.eval()`` \uc744 \ud638\ucd9c\ud558\uc5ec \ub4dc\ub86d\uc544\uc6c3(dropout)\uacfc\n\ubc30\uce58 \uc815\uaddc\ud654 \uce35(batch normalization layer)\uc744 \ud3c9\uac00(evaluation) \ubaa8\ub4dc\ub85c \ubc14\uafd4\uc57c\ud55c\ub2e4\ub294\n\uac83\uc744 \uae30\uc5b5\ud558\uc138\uc694. \uc774\uac83\uc744 \ube7c\uba39\uc73c\uba74 \uc77c\uad00\uc131 \uc5c6\ub294 \ucd94\ub860 \uacb0\uacfc\ub97c \uc5bb\uac8c \ub429\ub2c8\ub2e4.\n\n\ub9cc\uc57d \ud559\uc2b5\uc744 \uacc4\uc18d\ud558\uae38 \uc6d0\ud55c\ub2e4\uba74 ``model.train()`` \uc744 \ud638\ucd9c\ud558\uc5ec \uc774 \uce35(layer)\ub4e4\uc774\n\ud559\uc2b5 \ubaa8\ub4dc\uc778\uc9c0 \ud655\uc778(ensure)\ud558\uc138\uc694.\n\n\ucd95\ud558\ud569\ub2c8\ub2e4! \uc9c0\uae08\uae4c\uc9c0 PyTorch\uc5d0\uc11c \uc5ec\ub7ec \ubaa8\ub378\ub4e4\uc744 \uc800\uc7a5\ud558\uace0 \ubd88\ub7ec\uc654\uc2b5\ub2c8\ub2e4.\n\n\ub354 \uc54c\uc544\ubcf4\uae30\n------------\n\n\ub2e4\ub978 \ub808\uc2dc\ud53c\ub97c \ub458\ub7ec\ubcf4\uace0 \uacc4\uc18d \ubc30\uc6cc\ubcf4\uc138\uc694:\n\n- :doc:`/recipes/recipes/saving_and_loading_a_general_checkpoint`\n- :doc:`/recipes/recipes/saving_multiple_models_in_one_file`\n\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "#"
   ]
  }
 ],
diff --git a/docs/_downloads/d449c5b77ef602a05bd35ac8cb38fbec/saving_multiple_models_in_one_file.py b/docs/_downloads/d449c5b77ef602a05bd35ac8cb38fbec/saving_multiple_models_in_one_file.py
index b2f38247b..1e0b631f2 100644
--- a/docs/_downloads/d449c5b77ef602a05bd35ac8cb38fbec/saving_multiple_models_in_one_file.py
+++ b/docs/_downloads/d449c5b77ef602a05bd35ac8cb38fbec/saving_multiple_models_in_one_file.py
@@ -1,52 +1,47 @@
 """
-Saving and loading multiple models in one file using PyTorch
+PyTorch에서 여러 모델을 하나의 파일에 저장하기 & 불러오기
 ============================================================
-Saving and loading multiple models can be helpful for reusing models
-that you have previously trained.
+여러 모델을 저장하고 불러오는 것은 이전에 학습했던 모델들을 재사용하는데 도움이 됩니다.
 
-Introduction
+개요
 ------------
-When saving a model comprised of multiple ``torch.nn.Modules``, such as
-a GAN, a sequence-to-sequence model, or an ensemble of models, you must
-save a dictionary of each model’s state_dict and corresponding
-optimizer. You can also save any other items that may aid you in
-resuming training by simply appending them to the dictionary.
-To load the models, first initialize the models and optimizers, then
-load the dictionary locally using ``torch.load()``. From here, you can
-easily access the saved items by simply querying the dictionary as you
-would expect.
-In this recipe, we will demonstrate how to save multiple models to one
-file using PyTorch.
-
-Setup
------
-Before we begin, we need to install ``torch`` if it isn’t already
-available.
+GAN이나 시퀀스-투-시퀀스(sequence-to-sequence model), 앙상블 모델(ensemble of models)과
+같이 여러 ``torch.nn.Modules`` 로 구성된 모델을 저장할 때는 각 모델의 state_dict와
+해당 옵티마이저(optimizer)의 사전을 저장해야 합니다. 또한, 학습을 재개하는데
+필요한 다른 항목들을 사전에 추가할 수 있습니다. 모델들을 불러올 때에는, 먼저
+모델들과 옵티마이저를 초기화하고, ``torch.load()`` 를 사용하여 사전을 불러옵니다.
+이후 원하는대로 저장한 항목들을 사전에 조회하여 접근할 수 있습니다.
+이 레시피에서는 PyTorch를 사용하여 여러 모델들을 하나의 파일에 어떻게 저장하고
+불러오는지 살펴보겠습니다.
+
+설정
+---------
+시작하기 전에 ``torch`` 가 없다면 설치해야 합니다. 
::
 
     pip install torch
-    
+
 """
 
 
 ######################################################################
-# Steps
-# -----
-#
-# 1. Import all necessary libraries for loading our data
-# 2. Define and intialize the neural network
-# 3. Initialize the optimizer
-# 4. Save multiple models
-# 5. Load multiple models
-#
-# 1. Import necessary libraries for loading our data
+# 단계(Steps)
+# -------------
+#
+# 1. 데이터 불러올 때 필요한 라이브러리들 불러오기
+# 2. 신경망을 구성하고 초기화하기
+# 3. 옵티마이저 초기화하기
+# 4. 여러 모델들 저장하기
+# 5. 여러 모델들 불러오기
+#
+# 1. 데이터 불러올 때 필요한 라이브러리들 불러오기
 # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-#
-# For this recipe, we will use ``torch`` and its subsidiaries ``torch.nn``
-# and ``torch.optim``.
-#
+#
+# 이 레시피에서는 ``torch`` 와 여기 포함된 ``torch.nn`` 와 ``torch.optim`` 을
+# 사용하겠습니다.
+#
 
 import torch
 import torch.nn as nn
@@ -54,13 +49,12 @@
 
 
 ######################################################################
-# 2. Define and intialize the neural network
+# 2. 신경망을 구성하고 초기화하기
 # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-#
-# For sake of example, we will create a neural network for training
-# images. To learn more see the Defining a Neural Network recipe. Build
-# two variables for the models to eventually save.
-#
+#
+# 예를 들어, 이미지를 학습하는 신경망을 만들어보겠습니다. 더 자세한 내용은
+# 신경망 구성하기 레시피를 참고해주세요. 모델을 저장할 2개의 변수들을 만듭니다.
+#
 
 class Net(nn.Module):
     def __init__(self):
@@ -86,25 +80,24 @@ def forward(self, x):
 
 
 ######################################################################
-# 3. Initialize the optimizer
+# 3. 옵티마이저 초기화하기
 # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-#
-# We will use SGD with momentum to build an optimizer for each model we
-# created.
-#
+#
+# 생성한 모델들 각각에 모멘텀(momentum)을 갖는 SGD를 사용하겠습니다.
+#
 
 optimizerA = optim.SGD(netA.parameters(), lr=0.001, momentum=0.9)
 optimizerB = optim.SGD(netB.parameters(), lr=0.001, momentum=0.9)
 
 
 ######################################################################
-# 4. Save multiple models
+# 4. 여러 모델들 저장하기
 # ~~~~~~~~~~~~~~~~~~~~~~~~~~~
-#
-# Collect all relevant information and build your dictionary.
-#
+#
+# 관련된 모든 정보들을 모아서 사전을 구성합니다.
+#
 
-# Specify a path to save to
+# 저장할 경로 지정
 PATH = "model.pt"
 
 torch.save({
@@ -116,12 +109,11 @@ def forward(self, x):
 
 
 ######################################################################
-# 4. Load multiple models
+# 5. 여러 모델들 불러오기
 # ~~~~~~~~~~~~~~~~~~~~~~~~~~~
-#
-# Remember to first initialize the models and optimizers, then load the
-# dictionary locally.
-#
+#
+# 먼저 모델과 옵티마이저를 초기화한 뒤, 사전을 불러오는 것을 기억하십시오.
+#
 
 modelA = Net()
 modelB = Net()
@@ -136,27 +128,26 @@ def forward(self, x):
 
 modelA.eval()
 modelB.eval()
-# - or -
+# - 또는 -
 modelA.train()
 modelB.train()
 
 
 ######################################################################
-# You must call ``model.eval()`` to set dropout and batch normalization
-# layers to evaluation mode before running inference. Failing to do this
-# will yield inconsistent inference results.
-#
-# If you wish to resuming training, call ``model.train()`` to ensure these
-# layers are in training mode.
-#
-# Congratulations! You have successfully saved and loaded multiple models
-# in PyTorch.
-#
-# Learn More
-# ----------
-#
-# Take a look at these other recipes to continue your learning:
-#
-# - TBD
-# - TBD
-#
+# 추론(inference)을 실행하기 전에 ``model.eval()`` 을 호출하여 드롭아웃(dropout)과
+# 배치 정규화 층(batch normalization layer)을 평가(evaluation) 모드로 바꿔야한다는
+# 것을 기억하세요. 이것을 빼먹으면 일관성 없는 추론 결과를 얻게 됩니다.
+#
+# 만약 학습을 계속하길 원한다면 ``model.train()`` 을 호출하여 이 층(layer)들이
+# 학습 모드인지 확인(ensure)하세요.
+#
+# 축하합니다! 
지금까지 PyTorch에서 여러 모델들을 저장하고 불러왔습니다. +# +# 더 알아보기 +# ------------ +# +# 다른 레시피를 둘러보고 계속 배워보세요: +# +# - :doc:`/recipes/recipes/saving_and_loading_a_general_checkpoint` +# - :doc:`/recipes/recipes/saving_multiple_models_in_one_file` +# \ No newline at end of file diff --git a/docs/_downloads/dbcf5e6a5e95bf9f7a0e49e123c13b60/saving_and_loading_a_general_checkpoint.ipynb b/docs/_downloads/dbcf5e6a5e95bf9f7a0e49e123c13b60/saving_and_loading_a_general_checkpoint.ipynb index 1bc754046..cd1c072c7 100644 --- a/docs/_downloads/dbcf5e6a5e95bf9f7a0e49e123c13b60/saving_and_loading_a_general_checkpoint.ipynb +++ b/docs/_downloads/dbcf5e6a5e95bf9f7a0e49e123c13b60/saving_and_loading_a_general_checkpoint.ipynb @@ -15,14 +15,14 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "\nSaving and loading a general checkpoint in PyTorch\n==================================================\nSaving and loading a general checkpoint model for inference or \nresuming training can be helpful for picking up where you last left off.\nWhen saving a general checkpoint, you must save more than just the\nmodel\u2019s state_dict. It is important to also save the optimizer\u2019s\nstate_dict, as this contains buffers and parameters that are updated as\nthe model trains. Other items that you may want to save are the epoch\nyou left off on, the latest recorded training loss, external\n``torch.nn.Embedding`` layers, and more, based on your own algorithm.\n\nIntroduction\n------------\nTo save multiple checkpoints, you must organize them in a dictionary and\nuse ``torch.save()`` to serialize the dictionary. A common PyTorch\nconvention is to save these checkpoints using the ``.tar`` file\nextension. To load the items, first initialize the model and optimizer,\nthen load the dictionary locally using torch.load(). From here, you can\neasily access the saved items by simply querying the dictionary as you\nwould expect.\n\nIn this recipe, we will explore how to save and load multiple\ncheckpoints.\n\nSetup\n-----\nBefore we begin, we need to install ``torch`` if it isn\u2019t already\navailable.\n\n::\n\n pip install torch\n\n\n\n" + "\nPyTorch\uc5d0\uc11c \uc77c\ubc18\uc801\uc778 \uccb4\ud06c\ud3ec\uc778\ud2b8(checkpoint) \uc800\uc7a5\ud558\uae30 & \ubd88\ub7ec\uc624\uae30\n===================================================================\n\ucd94\ub860(inference) \ub610\ub294 \ud559\uc2b5(training)\uc758 \uc7ac\uac1c\ub97c \uc704\ud574 \uccb4\ud06c\ud3ec\uc778\ud2b8(checkpoint) \ubaa8\ub378\uc744\n\uc800\uc7a5\ud558\uace0 \ubd88\ub7ec\uc624\ub294 \uac83\uc740 \ub9c8\uc9c0\ub9c9\uc73c\ub85c \uc911\ub2e8\ud588\ub358 \ubd80\ubd84\uc744 \uc120\ud0dd\ud558\ub294\ub370 \ub3c4\uc6c0\uc744 \uc904 \uc218 \uc788\uc2b5\ub2c8\ub2e4.\n\uccb4\ud06c\ud3ec\uc778\ud2b8\ub97c \uc800\uc7a5\ud560 \ub54c\ub294 \ub2e8\uc21c\ud788 \ubaa8\ub378\uc758 state_dict \uc774\uc0c1\uc758 \uac83\uc744 \uc800\uc7a5\ud574\uc57c \ud569\ub2c8\ub2e4.\n\ubaa8\ub378 \ud559\uc2b5 \uc911\uc5d0 \uac31\uc2e0\ub418\ub294 \ud37c\ubc84\uc640 \ub9e4\uac1c\ubcc0\uc218\ub4e4\uc744 \ud3ec\ud568\ud558\ub294 \uc635\ud2f0\ub9c8\uc774\uc800(Optimizer)\uc758\nstate_dict\ub97c \ud568\uaed8 \uc800\uc7a5\ud558\ub294 \uac83\uc774 \uc911\uc694\ud569\ub2c8\ub2e4. 
\uc774 \uc678\uc5d0\ub3c4 \uc911\ub2e8 \uc2dc\uc810\uc758 \uc5d0\ud3ec\ud06c(epoch),\n\ub9c8\uc9c0\ub9c9\uc73c\ub85c \uae30\ub85d\ub41c \ud559\uc2b5 \uc624\ucc28(training loss), \uc678\ubd80 ``torch.nn.Embedding`` \uacc4\uce35 \ub4f1,\n\uc54c\uace0\ub9ac\uc998\uc5d0 \ub530\ub77c \uc800\uc7a5\ud558\uace0 \uc2f6\uc740 \ud56d\ubaa9\ub4e4\uc774 \uc788\uc744 \uac83\uc785\ub2c8\ub2e4.\n\n\uac1c\uc694\n------------\n\uc5ec\ub7ec \uccb4\ud06c\ud3ec\uc778\ud2b8\ub4e4\uc744 \uc800\uc7a5\ud558\uae30 \uc704\ud574\uc11c\ub294 \uc0ac\uc804(dictionary)\uc5d0 \uccb4\ud06c\ud3ec\uc778\ud2b8\ub4e4\uc744 \uad6c\uc131\ud558\uace0\n``torch.save()`` \ub97c \uc0ac\uc6a9\ud558\uc5ec \uc0ac\uc804\uc744 \uc9c1\ub82c\ud654(serialize)\ud574\uc57c \ud569\ub2c8\ub2e4. \uc77c\ubc18\uc801\uc778\nPyTorch\uc5d0\uc11c\ub294 \uc774\ub7ec\ud55c \uc5ec\ub7ec \uccb4\ud06c\ud3ec\uc778\ud2b8\ub4e4\uc744 \uc800\uc7a5\ud560 \ub54c ``.tar`` \ud655\uc7a5\uc790\ub97c \uc0ac\uc6a9\ud558\ub294 \uac83\uc774\n\uc77c\ubc18\uc801\uc778 \uaddc\uce59\uc785\ub2c8\ub2e4. \ud56d\ubaa9\ub4e4\uc744 \ubd88\ub7ec\uc62c \ub54c\uc5d0\ub294, \uba3c\uc800 \ubaa8\ub378\uacfc \uc635\ud2f0\ub9c8\uc774\uc800\ub97c \ucd08\uae30\ud654\ud558\uace0,\ntorch.load()\ub97c \uc0ac\uc6a9\ud558\uc5ec \uc0ac\uc804\uc744 \ubd88\ub7ec\uc635\ub2c8\ub2e4. \uc774\ud6c4 \uc6d0\ud558\ub294\ub300\ub85c \uc800\uc7a5\ud55c \ud56d\ubaa9\ub4e4\uc744 \uc0ac\uc804\uc5d0\n\uc870\ud68c\ud558\uc5ec \uc811\uadfc\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.\n\n\uc774 \ub808\uc2dc\ud53c\uc5d0\uc11c\ub294 \uc5ec\ub7ec \uccb4\ud06c\ud3ec\uc778\ud2b8\ub4e4\uc744 \uc5b4\ub5bb\uac8c \uc800\uc7a5\ud558\uace0 \ubd88\ub7ec\uc624\ub294\uc9c0 \uc0b4\ud3b4\ubcf4\uaca0\uc2b5\ub2c8\ub2e4.\n\n\uc124\uc815\n-----\n\uc2dc\uc791\ud558\uae30 \uc804\uc5d0 ``torch`` \uac00 \uc5c6\ub2e4\uba74 \uc124\uce58\ud574\uc57c \ud569\ub2c8\ub2e4.\n\n\n::\n\n pip install torch\n\n\n\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "Steps\n-----\n\n1. Import all necessary libraries for loading our data\n2. Define and intialize the neural network\n3. Initialize the optimizer\n4. Save the general checkpoint\n5. Load the general checkpoint\n\n1. Import necessary libraries for loading our data\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nFor this recipe, we will use ``torch`` and its subsidiaries ``torch.nn``\nand ``torch.optim``.\n\n\n" + "\ub2e8\uacc4(Steps)\n------------\n\n1. \ub370\uc774\ud130 \ubd88\ub7ec\uc62c \ub54c \ud544\uc694\ud55c \ub77c\uc774\ube0c\ub7ec\ub9ac\ub4e4 \ubd88\ub7ec\uc624\uae30\n2. \uc2e0\uacbd\ub9dd\uc744 \uad6c\uc131\ud558\uace0 \ucd08\uae30\ud654\ud558\uae30\n3. \uc635\ud2f0\ub9c8\uc774\uc800 \ucd08\uae30\ud654\ud558\uae30\n4. \uc77c\ubc18\uc801\uc778 \uccb4\ud06c\ud3ec\uc778\ud2b8 \uc800\uc7a5\ud558\uae30\n5. \uc77c\ubc18\uc801\uc778 \uccb4\ud06c\ud3ec\uc778\ud2b8 \ubd88\ub7ec\uc624\uae30\n\n1. \ub370\uc774\ud130 \ubd88\ub7ec\uc62c \ub54c \ud544\uc694\ud55c \ub77c\uc774\ube0c\ub7ec\ub9ac\ub4e4 \ubd88\ub7ec\uc624\uae30\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n\uc774 \ub808\uc2dc\ud53c\uc5d0\uc11c\ub294 ``torch`` \uc640 \uc5ec\uae30 \ud3ec\ud568\ub41c ``torch.nn`` \uc640 ``torch.optim` \uc744\n\uc0ac\uc6a9\ud558\uaca0\uc2b5\ub2c8\ub2e4.\n\n\n" ] }, { @@ -40,7 +40,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "2. Define and intialize the neural network\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nFor sake of example, we will create a neural network for training\nimages. To learn more see the Defining a Neural Network recipe.\n\n\n" + "2. 
\uc2e0\uacbd\ub9dd\uc744 \uad6c\uc131\ud558\uace0 \ucd08\uae30\ud654\ud558\uae30\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n\uc608\ub97c \ub4e4\uc5b4, \uc774\ubbf8\uc9c0\ub97c \ud559\uc2b5\ud558\ub294 \uc2e0\uacbd\ub9dd\uc744 \ub9cc\ub4e4\uc5b4\ubcf4\uaca0\uc2b5\ub2c8\ub2e4. \ub354 \uc790\uc138\ud55c \ub0b4\uc6a9\uc740\n\uc2e0\uacbd\ub9dd \uad6c\uc131\ud558\uae30 \ub808\uc2dc\ud53c\ub97c \ucc38\uace0\ud574\uc8fc\uc138\uc694.\n\n\n" ] }, { @@ -58,7 +58,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "3. Initialize the optimizer\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nWe will use SGD with momentum.\n\n\n" + "3. \uc635\ud2f0\ub9c8\uc774\uc800 \ucd08\uae30\ud654\ud558\uae30\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n\ubaa8\uba58\ud140(momentum)\uc744 \uac16\ub294 SGD\ub97c \uc0ac\uc6a9\ud558\uaca0\uc2b5\ub2c8\ub2e4.\n\n\n" ] }, { @@ -76,7 +76,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "4. Save the general checkpoint\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nCollect all relevant information and build your dictionary.\n\n\n" + "4. \uc77c\ubc18\uc801\uc778 \uccb4\ud06c\ud3ec\uc778\ud2b8 \uc800\uc7a5\ud558\uae30\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n\uad00\ub828\ub41c \ubaa8\ub4e0 \uc815\ubcf4\ub4e4\uc744 \ubaa8\uc544\uc11c \uc0ac\uc804\uc744 \uad6c\uc131\ud569\ub2c8\ub2e4.\n\n\n" ] }, { @@ -87,14 +87,14 @@ }, "outputs": [], "source": [ - "# Additional information\nEPOCH = 5\nPATH = \"model.pt\"\nLOSS = 0.4\n\ntorch.save({\n 'epoch': EPOCH,\n 'model_state_dict': net.state_dict(),\n 'optimizer_state_dict': optimizer.state_dict(),\n 'loss': LOSS,\n }, PATH)" + "# \ucd94\uac00 \uc815\ubcf4\nEPOCH = 5\nPATH = \"model.pt\"\nLOSS = 0.4\n\ntorch.save({\n 'epoch': EPOCH,\n 'model_state_dict': net.state_dict(),\n 'optimizer_state_dict': optimizer.state_dict(),\n 'loss': LOSS,\n }, PATH)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "5. Load the general checkpoint\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nRemember to first initialize the model and optimizer, then load the\ndictionary locally.\n\n\n" + "5. \uc77c\ubc18\uc801\uc778 \uccb4\ud06c\ud3ec\uc778\ud2b8 \ubd88\ub7ec\uc624\uae30\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n\uba3c\uc800 \ubaa8\ub378\uacfc \uc635\ud2f0\ub9c8\uc774\uc800\ub97c \ucd08\uae30\ud654\ud55c \ub4a4, \uc0ac\uc804\uc744 \ubd88\ub7ec\uc624\ub294 \uac83\uc744 \uae30\uc5b5\ud558\uc2ed\uc2dc\uc624.\n\n\n" ] }, { @@ -105,14 +105,14 @@ }, "outputs": [], "source": [ - "model = Net()\noptimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)\n\ncheckpoint = torch.load(PATH)\nmodel.load_state_dict(checkpoint['model_state_dict'])\noptimizer.load_state_dict(checkpoint['optimizer_state_dict'])\nepoch = checkpoint['epoch']\nloss = checkpoint['loss']\n\nmodel.eval()\n# - or -\nmodel.train()" + "model = Net()\noptimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)\n\ncheckpoint = torch.load(PATH)\nmodel.load_state_dict(checkpoint['model_state_dict'])\noptimizer.load_state_dict(checkpoint['optimizer_state_dict'])\nepoch = checkpoint['epoch']\nloss = checkpoint['loss']\n\nmodel.eval()\n# - \ub610\ub294 -\nmodel.train()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "You must call ``model.eval()`` to set dropout and batch normalization\nlayers to evaluation mode before running inference. Failing to do this\nwill yield inconsistent inference results.\n\nIf you wish to resuming training, call ``model.train()`` to ensure these\nlayers are in training mode.\n\nCongratulations! 
You have successfully saved and loaded a general\ncheckpoint for inference and/or resuming training in PyTorch.\n\nLearn More\n----------\n\nTake a look at these other recipes to continue your learning:\n\n- TBD\n- TBD\n\n"
+        "\ucd94\ub860(inference)\uc744 \uc2e4\ud589\ud558\uae30 \uc804\uc5d0 ``model.eval()`` \uc744 \ud638\ucd9c\ud558\uc5ec \ub4dc\ub86d\uc544\uc6c3(dropout)\uacfc\n\ubc30\uce58 \uc815\uaddc\ud654 \uce35(batch normalization layer)\uc744 \ud3c9\uac00(evaluation) \ubaa8\ub4dc\ub85c \ubc14\uafd4\uc57c\ud55c\ub2e4\ub294\n\uac83\uc744 \uae30\uc5b5\ud558\uc138\uc694. \uc774\uac83\uc744 \ube7c\uba39\uc73c\uba74 \uc77c\uad00\uc131 \uc5c6\ub294 \ucd94\ub860 \uacb0\uacfc\ub97c \uc5bb\uac8c \ub429\ub2c8\ub2e4.\n\n\ub9cc\uc57d \ud559\uc2b5\uc744 \uacc4\uc18d\ud558\uae38 \uc6d0\ud55c\ub2e4\uba74 ``model.train()`` \uc744 \ud638\ucd9c\ud558\uc5ec \uc774 \uce35(layer)\ub4e4\uc774\n\ud559\uc2b5 \ubaa8\ub4dc\uc778\uc9c0 \ud655\uc778(ensure)\ud558\uc138\uc694.\n\n\ucd95\ud558\ud569\ub2c8\ub2e4! \uc9c0\uae08\uae4c\uc9c0 PyTorch\uc5d0\uc11c \ucd94\ub860 \ub610\ub294 \ud559\uc2b5 \uc7ac\uac1c\ub97c \uc704\ud574 \uc77c\ubc18\uc801\uc778 \uccb4\ud06c\ud3ec\uc778\ud2b8\ub97c\n\uc800\uc7a5\ud558\uace0 \ubd88\ub7ec\uc654\uc2b5\ub2c8\ub2e4.\n\n\ub354 \uc54c\uc544\ubcf4\uae30\n------------\n\n\ub2e4\ub978 \ub808\uc2dc\ud53c\ub97c \ub458\ub7ec\ubcf4\uace0 \uacc4\uc18d \ubc30\uc6cc\ubcf4\uc138\uc694:\n\n- :doc:`/recipes/recipes/saving_and_loading_a_general_checkpoint`\n- :doc:`/recipes/recipes/saving_multiple_models_in_one_file`\n\n"
   ]
  }
 ],
diff --git a/docs/_downloads/f1c4e73325cc32385d0f39145a3d83eb/saving_and_loading_a_general_checkpoint.py b/docs/_downloads/f1c4e73325cc32385d0f39145a3d83eb/saving_and_loading_a_general_checkpoint.py
index 6e0c490ec..5be01c48b 100644
--- a/docs/_downloads/f1c4e73325cc32385d0f39145a3d83eb/saving_and_loading_a_general_checkpoint.py
+++ b/docs/_downloads/f1c4e73325cc32385d0f39145a3d83eb/saving_and_loading_a_general_checkpoint.py
@@ -1,32 +1,29 @@
 """
-Saving and loading a general checkpoint in PyTorch
-==================================================
-Saving and loading a general checkpoint model for inference or
-resuming training can be helpful for picking up where you last left off.
-When saving a general checkpoint, you must save more than just the
-model’s state_dict. It is important to also save the optimizer’s
-state_dict, as this contains buffers and parameters that are updated as
-the model trains. Other items that you may want to save are the epoch
-you left off on, the latest recorded training loss, external
-``torch.nn.Embedding`` layers, and more, based on your own algorithm.
-
-Introduction
+PyTorch에서 일반적인 체크포인트(checkpoint) 저장하기 & 불러오기
+===================================================================
+추론(inference) 또는 학습(training)의 재개를 위해 체크포인트(checkpoint) 모델을
+저장하고 불러오는 것은 마지막으로 중단했던 부분을 선택하는데 도움을 줄 수 있습니다.
+체크포인트를 저장할 때는 단순히 모델의 state_dict 이상의 것을 저장해야 합니다.
+모델 학습 중에 갱신되는 버퍼와 매개변수들을 포함하는 옵티마이저(Optimizer)의
+state_dict를 함께 저장하는 것이 중요합니다. 이 외에도 중단 시점의 에포크(epoch),
+마지막으로 기록된 학습 오차(training loss), 외부 ``torch.nn.Embedding`` 계층 등,
+알고리즘에 따라 저장하고 싶은 항목들이 있을 것입니다.
+
+개요
 ------------
-To save multiple checkpoints, you must organize them in a dictionary and
-use ``torch.save()`` to serialize the dictionary. A common PyTorch
-convention is to save these checkpoints using the ``.tar`` file
-extension. To load the items, first initialize the model and optimizer,
-then load the dictionary locally using torch.load(). 
From here, you can
-easily access the saved items by simply querying the dictionary as you
-would expect.
-
-In this recipe, we will explore how to save and load multiple
-checkpoints.
-
-Setup
+여러 체크포인트들을 저장하기 위해서는 사전(dictionary)에 체크포인트들을 구성하고
+``torch.save()`` 를 사용하여 사전을 직렬화(serialize)해야 합니다. 일반적인
+PyTorch에서는 이러한 여러 체크포인트들을 저장할 때 ``.tar`` 확장자를 사용하는 것이
+일반적인 규칙입니다. 항목들을 불러올 때에는, 먼저 모델과 옵티마이저를 초기화하고,
+torch.load()를 사용하여 사전을 불러옵니다. 이후 원하는대로 저장한 항목들을 사전에
+조회하여 접근할 수 있습니다.
+
+이 레시피에서는 여러 체크포인트들을 어떻게 저장하고 불러오는지 살펴보겠습니다.
+
+설정
 -----
-Before we begin, we need to install ``torch`` if it isn’t already
-available.
+시작하기 전에 ``torch`` 가 없다면 설치해야 합니다.
+
 
 ::
 
     pip install torch
@@ -38,21 +35,21 @@
 
 
 ######################################################################
-# Steps
-# -----
-#
-# 1. Import all necessary libraries for loading our data
-# 2. Define and intialize the neural network
-# 3. Initialize the optimizer
-# 4. Save the general checkpoint
-# 5. Load the general checkpoint
-#
-# 1. Import necessary libraries for loading our data
+# 단계(Steps)
+# ------------
+#
+# 1. 데이터 불러올 때 필요한 라이브러리들 불러오기
+# 2. 신경망을 구성하고 초기화하기
+# 3. 옵티마이저 초기화하기
+# 4. 일반적인 체크포인트 저장하기
+# 5. 일반적인 체크포인트 불러오기
+#
+# 1. 데이터 불러올 때 필요한 라이브러리들 불러오기
 # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-#
-# For this recipe, we will use ``torch`` and its subsidiaries ``torch.nn``
-# and ``torch.optim``.
-#
+#
+# 이 레시피에서는 ``torch`` 와 여기 포함된 ``torch.nn`` 와 ``torch.optim`` 을
+# 사용하겠습니다.
+#
 
 import torch
 import torch.nn as nn
 import torch.optim as optim
@@ -60,12 +57,12 @@
 
 
 ######################################################################
-# 2. Define and intialize the neural network
+# 2. 신경망을 구성하고 초기화하기
 # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-#
-# For sake of example, we will create a neural network for training
-# images. To learn more see the Defining a Neural Network recipe.
-#
+#
+# 예를 들어, 이미지를 학습하는 신경망을 만들어보겠습니다. 더 자세한 내용은
+# 신경망 구성하기 레시피를 참고해주세요.
+#
 
 class Net(nn.Module):
     def __init__(self):
@@ -91,23 +88,23 @@ def forward(self, x):
 
 
 ######################################################################
-# 3. Initialize the optimizer
+# 3. 옵티마이저 초기화하기
 # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-#
-# We will use SGD with momentum.
-#
+#
+# 모멘텀(momentum)을 갖는 SGD를 사용하겠습니다.
+#
 
 optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
 
 
 ######################################################################
-# 4. Save the general checkpoint
+# 4. 일반적인 체크포인트 저장하기
 # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-#
-# Collect all relevant information and build your dictionary.
-#
+#
+# 관련된 모든 정보들을 모아서 사전을 구성합니다.
+#
 
-# Additional information
+# 추가 정보
 EPOCH = 5
 PATH = "model.pt"
 LOSS = 0.4
 
 torch.save({
@@ -121,12 +118,11 @@ def forward(self, x):
 
 
 ######################################################################
-# 5. Load the general checkpoint
+# 5. 일반적인 체크포인트 불러오기
 # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-#
-# Remember to first initialize the model and optimizer, then load the
-# dictionary locally.
-#
+#
+# 먼저 모델과 옵티마이저를 초기화한 뒤, 사전을 불러오는 것을 기억하십시오.
+#
 
 model = Net()
 optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
@@ -138,25 +134,25 @@ def forward(self, x):
 loss = checkpoint['loss']
 
 model.eval()
-# - or -
+# - 또는 -
 model.train()
 
 
 ######################################################################
-# You must call ``model.eval()`` to set dropout and batch normalization
-# layers to evaluation mode before running inference. Failing to do this
-# will yield inconsistent inference results. 
-# -# If you wish to resuming training, call ``model.train()`` to ensure these -# layers are in training mode. -# -# Congratulations! You have successfully saved and loaded a general -# checkpoint for inference and/or resuming training in PyTorch. -# -# Learn More -# ---------- -# -# Take a look at these other recipes to continue your learning: -# -# - TBD -# - TBD +# 추론(inference)을 실행하기 전에 ``model.eval()`` 을 호출하여 드롭아웃(dropout)과 +# 배치 정규화 층(batch normalization layer)을 평가(evaluation) 모드로 바꿔야한다는 +# 것을 기억하세요. 이것을 빼먹으면 일관성 없는 추론 결과를 얻게 됩니다. +# +# 만약 학습을 계속하길 원한다면 ``model.train()`` 을 호출하여 이 층(layer)들이 +# 학습 모드인지 확인(ensure)하세요. +# +# 축하합니다! 지금까지 PyTorch에서 추론 또는 학습 재개를 위해 일반적인 체크포인트를 +# 저장하고 불러왔습니다. +# +# 더 알아보기 +# ------------ +# +# 다른 레시피를 둘러보고 계속 배워보세요: +# +# - :doc:`/recipes/recipes/saving_and_loading_a_general_checkpoint` +# - :doc:`/recipes/recipes/saving_multiple_models_in_one_file` diff --git a/docs/advanced/sg_execution_times.html b/docs/advanced/sg_execution_times.html index 3a692b4df..ba73371ef 100644 --- a/docs/advanced/sg_execution_times.html +++ b/docs/advanced/sg_execution_times.html @@ -291,9 +291,9 @@

 Computation times
-00:00.056 total execution time for advanced files:
+00:00.058 total execution time for advanced files:
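
Both recipes patched above teach the same pattern: gather every model's and optimizer's ``state_dict`` (plus any extras such as the epoch or last loss) into one dictionary, serialize it with ``torch.save()``, and, after re-initializing the objects, restore each entry with ``load_state_dict()``. The sketch below condenses that pattern into one self-contained, runnable file. It is illustrative only: ``TinyNet`` and the checkpoint path are hypothetical stand-ins for the tutorials' ``Net`` and ``PATH``, and it deliberately keeps one consistent optimizer name per model — the loading snippet in the recipes constructs ``optimModelA``/``optimModelB`` but then restores state into ``optimizerA``/``optimizerB``, which only works because those earlier variables still exist in the tutorial session.

# Minimal sketch of the multi-model checkpoint pattern (assumed names:
# TinyNet and "checkpoint.tar" are illustrative, not from the recipes).
import torch
import torch.nn as nn
import torch.optim as optim


class TinyNet(nn.Module):
    """Stand-in for the tutorials' Net class."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(8, 2)

    def forward(self, x):
        return self.fc(x)


netA, netB = TinyNet(), TinyNet()
optimizerA = optim.SGD(netA.parameters(), lr=0.001, momentum=0.9)
optimizerB = optim.SGD(netB.parameters(), lr=0.001, momentum=0.9)

# The recipes save to "model.pt"; ".tar" is the convention the checkpoint
# recipe mentions for multi-item dictionaries.
PATH = "checkpoint.tar"

# Save: one dictionary holding every state_dict, plus any extra items.
torch.save({
    'epoch': 5,
    'modelA_state_dict': netA.state_dict(),
    'modelB_state_dict': netB.state_dict(),
    'optimizerA_state_dict': optimizerA.state_dict(),
    'optimizerB_state_dict': optimizerB.state_dict(),
}, PATH)

# Load: re-initialize the models and optimizers first, then restore each
# state_dict from the deserialized dictionary.
modelA, modelB = TinyNet(), TinyNet()
optimizerA = optim.SGD(modelA.parameters(), lr=0.001, momentum=0.9)
optimizerB = optim.SGD(modelB.parameters(), lr=0.001, momentum=0.9)

checkpoint = torch.load(PATH)
modelA.load_state_dict(checkpoint['modelA_state_dict'])
modelB.load_state_dict(checkpoint['modelB_state_dict'])
optimizerA.load_state_dict(checkpoint['optimizerA_state_dict'])
optimizerB.load_state_dict(checkpoint['optimizerB_state_dict'])
epoch = checkpoint['epoch']

modelA.eval()   # evaluation mode before inference (dropout / batch norm)
modelB.eval()
# -- or, to resume training --
# modelA.train(); modelB.train()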