Commit

Rebuild
9bow committed Aug 1, 2020
1 parent a04ca8d commit 06e3615
Showing 12 changed files with 324 additions and 337 deletions.
Original file line number Diff line number Diff line change
Expand Up @@ -15,14 +15,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"\nSaving and loading multiple models in one file using PyTorch\n============================================================\nSaving and loading multiple models can be helpful for reusing models\nthat you have previously trained.\n\nIntroduction\n------------\nWhen saving a model comprised of multiple ``torch.nn.Modules``, such as\na GAN, a sequence-to-sequence model, or an ensemble of models, you must\nsave a dictionary of each model's state_dict and the corresponding\noptimizer. You can also save any other items that may aid you in\nresuming training by simply appending them to the dictionary.\nTo load the models, first initialize the models and optimizers, then\nload the dictionary locally using ``torch.load()``. From here, you can\neasily access the saved items by simply querying the dictionary as you\nwould expect.\nIn this recipe, we will demonstrate how to save and load multiple models\nin one file using PyTorch.\n\nSetup\n-----\nBefore we begin, we need to install ``torch`` if it isn't already\navailable.\n\n::\n\n   pip install torch\n\n\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Steps\n-----\n\n1. Import all necessary libraries for loading our data\n2. Define and initialize the neural network\n3. Initialize the optimizer\n4. Save multiple models\n5. Load multiple models\n\n1. Import necessary libraries for loading our data\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nFor this recipe, we will use ``torch`` and its subsidiaries ``torch.nn``\nand ``torch.optim``.\n\n\n"
]
},
{
Expand All @@ -40,7 +40,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"2. Define and initialize the neural network\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nAs an example, we will create a neural network that trains on images. To\nlearn more, see the Defining a Neural Network recipe. Build two\nvariables for the models to eventually save.\n\n\n"
]
},
{
Expand All @@ -58,7 +58,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"3. Initialize the optimizer\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nWe will use SGD with momentum to build an optimizer for each model we\ncreated.\n\n\n"
]
},
{
Expand All @@ -76,7 +76,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"4. Save multiple models\n~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nCollect all relevant information and build your dictionary.\n\n\n"
]
},
{
Expand All @@ -87,14 +87,14 @@
},
"outputs": [],
"source": [
"# Specify a path to save to\nPATH = \"model.pt\"\n\ntorch.save({\n          'modelA_state_dict': netA.state_dict(),\n          'modelB_state_dict': netB.state_dict(),\n          'optimizerA_state_dict': optimizerA.state_dict(),\n          'optimizerB_state_dict': optimizerB.state_dict(),\n          }, PATH)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"5. Load multiple models\n~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nRemember to first initialize the models and optimizers, then load the\ndictionary locally.\n\n\n"
]
},
{
Expand All @@ -105,14 +105,25 @@
},
"outputs": [],
"source": [
"modelA = Net()\nmodelB = Net()\noptimizerA = optim.SGD(modelA.parameters(), lr=0.001, momentum=0.9)\noptimizerB = optim.SGD(modelB.parameters(), lr=0.001, momentum=0.9)\n\ncheckpoint = torch.load(PATH)\nmodelA.load_state_dict(checkpoint['modelA_state_dict'])\nmodelB.load_state_dict(checkpoint['modelB_state_dict'])\noptimizerA.load_state_dict(checkpoint['optimizerA_state_dict'])\noptimizerB.load_state_dict(checkpoint['optimizerB_state_dict'])\n\nmodelA.eval()\nmodelB.eval()\n# - or -\nmodelA.train()\nmodelB.train()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You must call ``model.eval()`` to set dropout and batch normalization\nlayers to evaluation mode before running inference. Failing to do this\nwill yield inconsistent inference results.\n\nIf you wish to resume training, call ``model.train()`` to ensure these\nlayers are in training mode.\n\nCongratulations! You have successfully saved and loaded multiple models\nin PyTorch.\n\nLearn More\n----------\n\nTake a look at these other recipes to continue your learning:\n\n- :doc:`/recipes/recipes/saving_and_loading_a_general_checkpoint`\n- :doc:`/recipes/recipes/saving_multiple_models_in_one_file`\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"#"
]
}
],
Expand Down
@@ -1,66 +1,60 @@
"""
Saving and loading multiple models in one file using PyTorch
============================================================
Saving and loading multiple models can be helpful for reusing models
that you have previously trained.
Introduction
------------
When saving a model comprised of multiple ``torch.nn.Modules``, such as
a GAN, a sequence-to-sequence model, or an ensemble of models, you must
save a dictionary of each model's state_dict and the corresponding
optimizer. You can also save any other items that may aid you in
resuming training by simply appending them to the dictionary.
To load the models, first initialize the models and optimizers, then
load the dictionary locally using ``torch.load()``. From here, you can
easily access the saved items by simply querying the dictionary as you
would expect.
In this recipe, we will demonstrate how to save and load multiple models
in one file using PyTorch.
Setup
-----
Before we begin, we need to install ``torch`` if it isn't already
available.
::
pip install torch
"""



######################################################################
# Steps
# -----
#
# 1. Import all necessary libraries for loading our data
# 2. Define and initialize the neural network
# 3. Initialize the optimizer
# 4. Save multiple models
# 5. Load multiple models
#
# 1. Import necessary libraries for loading our data
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# For this recipe, we will use ``torch`` and its subsidiaries ``torch.nn``
# and ``torch.optim``.
#

import torch
import torch.nn as nn
import torch.optim as optim


######################################################################
# 2. Define and initialize the neural network
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# As an example, we will create a neural network that trains on images.
# To learn more, see the Defining a Neural Network recipe. Build two
# variables for the models to eventually save.
#

class Net(nn.Module):
def __init__(self):
Expand All @@ -86,25 +80,24 @@ def forward(self, x):
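The body of ``Net`` and the creation of the two instances are collapsed in this diff hunk. A minimal sketch of what such a class might look like (the layer shapes below are illustrative assumptions, not the recipe's exact network, which comes from the Defining a Neural Network recipe):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # A tiny convolutional network for 1-channel 28x28 images;
        # the sizes here are assumptions for illustration only.
        self.conv1 = nn.Conv2d(1, 6, 3)
        self.fc1 = nn.Linear(6 * 26 * 26, 10)

    def forward(self, x):
        x = F.relu(self.conv1(x))   # 1x28x28 -> 6x26x26
        x = x.view(x.size(0), -1)   # flatten for the linear layer
        return self.fc1(x)

# Build the two instances whose parameters will eventually be saved together
netA = Net()
netB = Net()
```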


######################################################################
# 3. Initialize the optimizer
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# We will use SGD with momentum to build an optimizer for each model we
# created.
#

optimizerA = optim.SGD(netA.parameters(), lr=0.001, momentum=0.9)
optimizerB = optim.SGD(netB.parameters(), lr=0.001, momentum=0.9)


######################################################################
# 4. Save multiple models
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# Collect all relevant information and build your dictionary.
#

# Specify a path to save to
PATH = "model.pt"

torch.save({
Expand All @@ -116,12 +109,11 @@ def forward(self, x):
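The collapsed ``torch.save`` hunk stores only the four state_dicts, but as the introduction notes, any other items that help resume training can simply be appended to the same dictionary. A sketch under that assumption (the ``epoch`` and ``loss`` keys and the file name are hypothetical, and a stand-in model keeps the snippet self-contained):

```python
import torch
import torch.nn as nn
import torch.optim as optim

# Stand-in model and optimizer so the sketch runs on its own
net = nn.Linear(4, 2)
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

EXTRA_PATH = "model_with_extras.pt"  # hypothetical file name

# Plain Python values can sit alongside the state_dicts in the checkpoint
torch.save({
    'model_state_dict': net.state_dict(),
    'optimizer_state_dict': optimizer.state_dict(),
    'epoch': 5,
    'loss': 0.42,
}, EXTRA_PATH)

checkpoint = torch.load(EXTRA_PATH)
print(checkpoint['epoch'])  # the extra items come back exactly as saved
```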


######################################################################
# 5. Load multiple models
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# Remember to first initialize the models and optimizers, then load the
# dictionary locally.
#

modelA = Net()
modelB = Net()
Expand All @@ -136,27 +128,26 @@ def forward(self, x):

modelA.eval()
modelB.eval()
# - or -
modelA.train()
modelB.train()
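After switching back with ``model.train()``, training resumes with the restored optimizer state (including the SGD momentum buffers). A single update step might look like the sketch below; the batch, targets, and loss function are placeholders, and a stand-in linear model is used so the snippet runs on its own:

```python
import torch
import torch.nn as nn
import torch.optim as optim

# Stand-ins for the modelA / optimizerA restored from the checkpoint
modelA = nn.Linear(4, 2)
optimizerA = optim.SGD(modelA.parameters(), lr=0.001, momentum=0.9)

modelA.train()                       # put dropout/batchnorm layers into training mode

inputs = torch.randn(8, 4)           # placeholder batch
targets = torch.randint(0, 2, (8,))  # placeholder class labels
criterion = nn.CrossEntropyLoss()

optimizerA.zero_grad()
loss = criterion(modelA(inputs), targets)
loss.backward()
optimizerA.step()                    # reuses any momentum state loaded from the checkpoint
```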


######################################################################
# You must call ``model.eval()`` to set dropout and batch normalization
# layers to evaluation mode before running inference. Failing to do this
# will yield inconsistent inference results.
#
# If you wish to resume training, call ``model.train()`` to ensure these
# layers are in training mode.
#
# Congratulations! You have successfully saved and loaded multiple models
# in PyTorch.
#
# Learn More
# ----------
#
# Take a look at these other recipes to continue your learning:
#
# - :doc:`/recipes/recipes/saving_and_loading_a_general_checkpoint`
# - :doc:`/recipes/recipes/saving_multiple_models_in_one_file`
#