Merge branch 'dev' into unified_public
cyndwith committed Aug 27, 2024
1 parent 2f6dd68 commit 3a8ea56
Showing 1 changed file with 118 additions and 64 deletions.
182 changes: 118 additions & 64 deletions tutorial/hello_world/hello_world.ipynb
@@ -8,59 +8,31 @@
"\n",
"This is a simple Jupyter Notebook that walks through the 4 steps of compiling and running a PyTorch model on the embedded Neural Processing Unit (NPU) in your AMD Ryzen AI enabled PC. The steps are as follows:\n",
"\n",
"1. Get model - download or create a PyTorch model that we will run on the NPU\n",
"2. Export to ONNX - convert the PyTorch model to ONNX format.\n",
"3. Quantize - optimize the model for faster inference on the NPU by reducing its precision to INT8.\n",
"4. Run Model on CPU and NPU - compare performance between running the model on the CPU and on the NPU."
"1. Get model\n",
"2. Export to ONNX\n",
"3. Quantize\n",
"4. Run Model on CPU and NPU"
]
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Requirement already satisfied: torch in c:\\users\\vgods\\miniconda3\\envs\\ryzen-ai-1.2.0\\lib\\site-packages (from -r requirements.txt (line 1)) (2.4.0)\n",
"Requirement already satisfied: ipykernel in c:\\users\\vgods\\miniconda3\\envs\\ryzen-ai-1.2.0\\lib\\site-packages (from -r requirements.txt (line 2)) (6.29.5)\n",
"Requirement already satisfied: filelock in c:\\users\\vgods\\miniconda3\\envs\\ryzen-ai-1.2.0\\lib\\site-packages (from torch->-r requirements.txt (line 1)) (3.15.4)\n",
"Requirement already satisfied: typing-extensions>=4.8.0 in c:\\users\\vgods\\miniconda3\\envs\\ryzen-ai-1.2.0\\lib\\site-packages (from torch->-r requirements.txt (line 1)) (4.12.2)\n",
"Requirement already satisfied: sympy in c:\\users\\vgods\\miniconda3\\envs\\ryzen-ai-1.2.0\\lib\\site-packages (from torch->-r requirements.txt (line 1)) (1.13.2)\n",
"Requirement already satisfied: networkx in c:\\users\\vgods\\miniconda3\\envs\\ryzen-ai-1.2.0\\lib\\site-packages (from torch->-r requirements.txt (line 1)) (3.3)\n",
"Requirement already satisfied: jinja2 in c:\\users\\vgods\\miniconda3\\envs\\ryzen-ai-1.2.0\\lib\\site-packages (from torch->-r requirements.txt (line 1)) (3.1.4)\n",
"Requirement already satisfied: fsspec in c:\\users\\vgods\\miniconda3\\envs\\ryzen-ai-1.2.0\\lib\\site-packages (from torch->-r requirements.txt (line 1)) (2024.6.1)\n",
"Requirement already satisfied: comm>=0.1.1 in c:\\users\\vgods\\miniconda3\\envs\\ryzen-ai-1.2.0\\lib\\site-packages (from ipykernel->-r requirements.txt (line 2)) (0.2.2)\n",
"Requirement already satisfied: debugpy>=1.6.5 in c:\\users\\vgods\\miniconda3\\envs\\ryzen-ai-1.2.0\\lib\\site-packages (from ipykernel->-r requirements.txt (line 2)) (1.8.5)\n",
"Requirement already satisfied: ipython>=7.23.1 in c:\\users\\vgods\\miniconda3\\envs\\ryzen-ai-1.2.0\\lib\\site-packages (from ipykernel->-r requirements.txt (line 2)) (8.26.0)\n",
"Requirement already satisfied: jupyter-client>=6.1.12 in c:\\users\\vgods\\miniconda3\\envs\\ryzen-ai-1.2.0\\lib\\site-packages (from ipykernel->-r requirements.txt (line 2)) (8.6.2)\n",
"Requirement already satisfied: jupyter-core!=5.0.*,>=4.12 in c:\\users\\vgods\\miniconda3\\envs\\ryzen-ai-1.2.0\\lib\\site-packages (from ipykernel->-r requirements.txt (line 2)) (5.7.2)\n",
"Requirement already satisfied: matplotlib-inline>=0.1 in c:\\users\\vgods\\miniconda3\\envs\\ryzen-ai-1.2.0\\lib\\site-packages (from ipykernel->-r requirements.txt (line 2)) (0.1.7)\n",
"Requirement already satisfied: nest-asyncio in c:\\users\\vgods\\miniconda3\\envs\\ryzen-ai-1.2.0\\lib\\site-packages (from ipykernel->-r requirements.txt (line 2)) (1.6.0)\n",
"Requirement already satisfied: packaging in c:\\users\\vgods\\miniconda3\\envs\\ryzen-ai-1.2.0\\lib\\site-packages (from ipykernel->-r requirements.txt (line 2)) (24.1)\n",
"Requirement already satisfied: psutil in c:\\users\\vgods\\miniconda3\\envs\\ryzen-ai-1.2.0\\lib\\site-packages (from ipykernel->-r requirements.txt (line 2)) (6.0.0)\n",
"Requirement already satisfied: pyzmq>=24 in c:\\users\\vgods\\miniconda3\\envs\\ryzen-ai-1.2.0\\lib\\site-packages (from ipykernel->-r requirements.txt (line 2)) (26.1.0)\n",
"Requirement already satisfied: tornado>=6.1 in c:\\users\\vgods\\miniconda3\\envs\\ryzen-ai-1.2.0\\lib\\site-packages (from ipykernel->-r requirements.txt (line 2)) (6.4.1)\n",
"Requirement already satisfied: traitlets>=5.4.0 in c:\\users\\vgods\\miniconda3\\envs\\ryzen-ai-1.2.0\\lib\\site-packages (from ipykernel->-r requirements.txt (line 2)) (5.14.3)\n",
"Requirement already satisfied: decorator in c:\\users\\vgods\\miniconda3\\envs\\ryzen-ai-1.2.0\\lib\\site-packages (from ipython>=7.23.1->ipykernel->-r requirements.txt (line 2)) (5.1.1)\n",
"Requirement already satisfied: jedi>=0.16 in c:\\users\\vgods\\miniconda3\\envs\\ryzen-ai-1.2.0\\lib\\site-packages (from ipython>=7.23.1->ipykernel->-r requirements.txt (line 2)) (0.19.1)\n",
"Requirement already satisfied: prompt-toolkit<3.1.0,>=3.0.41 in c:\\users\\vgods\\miniconda3\\envs\\ryzen-ai-1.2.0\\lib\\site-packages (from ipython>=7.23.1->ipykernel->-r requirements.txt (line 2)) (3.0.47)\n",
"Requirement already satisfied: pygments>=2.4.0 in c:\\users\\vgods\\miniconda3\\envs\\ryzen-ai-1.2.0\\lib\\site-packages (from ipython>=7.23.1->ipykernel->-r requirements.txt (line 2)) (2.18.0)\n",
"Requirement already satisfied: stack-data in c:\\users\\vgods\\miniconda3\\envs\\ryzen-ai-1.2.0\\lib\\site-packages (from ipython>=7.23.1->ipykernel->-r requirements.txt (line 2)) (0.6.2)\n",
"Requirement already satisfied: exceptiongroup in c:\\users\\vgods\\miniconda3\\envs\\ryzen-ai-1.2.0\\lib\\site-packages (from ipython>=7.23.1->ipykernel->-r requirements.txt (line 2)) (1.2.2)\n",
"Requirement already satisfied: colorama in c:\\users\\vgods\\miniconda3\\envs\\ryzen-ai-1.2.0\\lib\\site-packages (from ipython>=7.23.1->ipykernel->-r requirements.txt (line 2)) (0.4.6)\n",
"Requirement already satisfied: python-dateutil>=2.8.2 in c:\\users\\vgods\\miniconda3\\envs\\ryzen-ai-1.2.0\\lib\\site-packages (from jupyter-client>=6.1.12->ipykernel->-r requirements.txt (line 2)) (2.9.0)\n",
"Requirement already satisfied: platformdirs>=2.5 in c:\\users\\vgods\\miniconda3\\envs\\ryzen-ai-1.2.0\\lib\\site-packages (from jupyter-core!=5.0.*,>=4.12->ipykernel->-r requirements.txt (line 2)) (4.2.2)\n",
"Requirement already satisfied: pywin32>=300 in c:\\users\\vgods\\miniconda3\\envs\\ryzen-ai-1.2.0\\lib\\site-packages (from jupyter-core!=5.0.*,>=4.12->ipykernel->-r requirements.txt (line 2)) (306)\n",
"Requirement already satisfied: MarkupSafe>=2.0 in c:\\users\\vgods\\miniconda3\\envs\\ryzen-ai-1.2.0\\lib\\site-packages (from jinja2->torch->-r requirements.txt (line 1)) (2.1.5)\n",
"Requirement already satisfied: mpmath<1.4,>=1.1.0 in c:\\users\\vgods\\miniconda3\\envs\\ryzen-ai-1.2.0\\lib\\site-packages (from sympy->torch->-r requirements.txt (line 1)) (1.3.0)\n",
"Requirement already satisfied: parso<0.9.0,>=0.8.3 in c:\\users\\vgods\\miniconda3\\envs\\ryzen-ai-1.2.0\\lib\\site-packages (from jedi>=0.16->ipython>=7.23.1->ipykernel->-r requirements.txt (line 2)) (0.8.4)\n",
"Requirement already satisfied: wcwidth in c:\\users\\vgods\\miniconda3\\envs\\ryzen-ai-1.2.0\\lib\\site-packages (from prompt-toolkit<3.1.0,>=3.0.41->ipython>=7.23.1->ipykernel->-r requirements.txt (line 2)) (0.2.13)\n",
"Requirement already satisfied: six>=1.5 in c:\\users\\vgods\\miniconda3\\envs\\ryzen-ai-1.2.0\\lib\\site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.12->ipykernel->-r requirements.txt (line 2)) (1.16.0)\n",
"Requirement already satisfied: executing>=1.2.0 in c:\\users\\vgods\\miniconda3\\envs\\ryzen-ai-1.2.0\\lib\\site-packages (from stack-data->ipython>=7.23.1->ipykernel->-r requirements.txt (line 2)) (2.0.1)\n",
"Requirement already satisfied: asttokens>=2.1.0 in c:\\users\\vgods\\miniconda3\\envs\\ryzen-ai-1.2.0\\lib\\site-packages (from stack-data->ipython>=7.23.1->ipykernel->-r requirements.txt (line 2)) (2.4.1)\n",
"Requirement already satisfied: pure-eval in c:\\users\\vgods\\miniconda3\\envs\\ryzen-ai-1.2.0\\lib\\site-packages (from stack-data->ipython>=7.23.1->ipykernel->-r requirements.txt (line 2)) (0.2.3)\n"
"Defaulting to user installation because normal site-packages is not writeable\n",
"Requirement already satisfied: torch in c:\\users\\vgods\\appdata\\roaming\\python\\python310\\site-packages (from -r requirements.txt (line 1)) (2.4.0)\n",
"Requirement already satisfied: filelock in c:\\users\\vgods\\appdata\\roaming\\python\\python310\\site-packages (from torch->-r requirements.txt (line 1)) (3.15.4)\n",
"Requirement already satisfied: typing-extensions>=4.8.0 in c:\\programdata\\anaconda3\\envs\\ryzen-ai-1.2.0\\lib\\site-packages (from torch->-r requirements.txt (line 1)) (4.11.0)\n",
"Requirement already satisfied: sympy in c:\\programdata\\anaconda3\\envs\\ryzen-ai-1.2.0\\lib\\site-packages (from torch->-r requirements.txt (line 1)) (1.13.2)\n",
"Requirement already satisfied: networkx in c:\\users\\vgods\\appdata\\roaming\\python\\python310\\site-packages (from torch->-r requirements.txt (line 1)) (3.3)\n",
"Requirement already satisfied: jinja2 in c:\\programdata\\anaconda3\\envs\\ryzen-ai-1.2.0\\lib\\site-packages (from torch->-r requirements.txt (line 1)) (3.1.4)\n",
"Requirement already satisfied: fsspec in c:\\users\\vgods\\appdata\\roaming\\python\\python310\\site-packages (from torch->-r requirements.txt (line 1)) (2024.6.1)\n",
"Requirement already satisfied: MarkupSafe>=2.0 in c:\\programdata\\anaconda3\\envs\\ryzen-ai-1.2.0\\lib\\site-packages (from jinja2->torch->-r requirements.txt (line 1)) (2.1.5)\n",
"Requirement already satisfied: mpmath<1.4,>=1.1.0 in c:\\programdata\\anaconda3\\envs\\ryzen-ai-1.2.0\\lib\\site-packages (from sympy->torch->-r requirements.txt (line 1)) (1.3.0)\n"
]
}
],
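
The source of this install cell is collapsed in the diff; based on the output above, it most likely runs a pip install against the tutorial's requirements.txt (the exact cell contents are an assumption):

```python
# Sketch of the collapsed install cell, inferred from the output above.
# requirements.txt appears to list at least torch and ipykernel.
%pip install -r requirements.txt
```
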
@@ -75,7 +47,7 @@
"source": [
"### 0. Imports & Environment Variables\n",
"\n",
"We'll use the following imports in our example. `torch` and `torch_nn` are used for building and running ML models. We'll use them to define a small neural network and to generate the model weights. `os` is used for interacting with the operating system and is used to manage our environment variables, file paths, and directories. `subprocess` allows us to retrieve the hardware information. `onnx` and `onnxruntime` are used to work with our model in the ONNX format and for running our inference. `vai_q_onnx` is part of the Vitis AI Quantizer for ONNX models. We use it to perform quantization, converting the model into an INT8 format that is optimized for the NPU."
"We'll use the following imports in our example."
]
},
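
The import cell itself is collapsed in this diff. Based on the description above, it likely contains something close to the following sketch (the exact import list is an assumption):

```python
# Sketch of the imports described above; the actual import cell is collapsed in this diff.
import os          # environment variables, file paths, directories
import subprocess  # query PCI device IDs to detect the NPU
import torch
import torch.nn as nn
import onnx
import onnxruntime
import vai_q_onnx  # Vitis AI Quantizer for ONNX models (part of the Ryzen AI SDK)
```
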
{
@@ -100,7 +72,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"As well, we want to set the environment variables based on the NPU device we have in our PC. For more information about NPU configurations, see: For more information about NPU configurations, refer to the official [AMD Ryzen AI Documentation](https://ryzenai.docs.amd.com/en/latest/runtime_setup.html)."
"As well, we want to set the environment variables based on the NPU device we have in our PC. For more information about NPU configurations, see: https://ryzenai.docs.amd.com/en/latest/runtime_setup.html"
]
},
{
@@ -192,6 +164,87 @@
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"APU Type: PHX/HPT\n"
]
}
],
"source": [
"# Test code to figure out what system I'm on\n",
"def get_apu_info():\n",
" # Run pnputil as a subprocess to enumerate PCI devices\n",
" command = r'pnputil /enum-devices /bus PCI /deviceids '\n",
" process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n",
" stdout, stderr = process.communicate()\n",
" # Check for supported Hardware IDs\n",
" apu_type = ''\n",
" if 'PCI\\\\VEN_1022&DEV_1502&REV_00' in stdout.decode(): apu_type = 'PHX/HPT'\n",
" if 'PCI\\\\VEN_1022&DEV_17F0&REV_00' in stdout.decode(): apu_type = 'STX'\n",
" if 'PCI\\\\VEN_1022&DEV_17F0&REV_10' in stdout.decode(): apu_type = 'STX'\n",
" if 'PCI\\\\VEN_1022&DEV_17F0&REV_11' in stdout.decode(): apu_type = 'STX'\n",
" return apu_type\n",
"\n",
"apu_type = get_apu_info()\n",
"print(f\"APU Type: {apu_type}\")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Setting environment for PHX/HPT\n",
"XLNX_VART_FIRMWARE= C:\\Program Files\\RyzenAI\\1.2.0\\voe-4.0-win_amd64\\xclbins\\phoenix\\1x4.xclbin\n",
"NUM_OF_DPU_RUNNERS= 1\n",
"XLNX_TARGET_NAME= AMD_AIE2_Nx4_Overlay\n"
]
}
],
"source": [
"def set_environment_variable(apu_type):\n",
"\n",
" install_dir = os.environ['RYZEN_AI_INSTALLATION_PATH']\n",
" match apu_type:\n",
" case 'PHX/HPT':\n",
" print(\"Setting environment for PHX/HPT\")\n",
" os.environ['XLNX_VART_FIRMWARE']= os.path.join(install_dir, 'voe-4.0-win_amd64', 'xclbins', 'phoenix', '1x4.xclbin')\n",
" os.environ['NUM_OF_DPU_RUNNERS']='1'\n",
" os.environ['XLNX_TARGET_NAME']='AMD_AIE2_Nx4_Overlay'\n",
" case 'STX':\n",
" print(\"Setting environment for STX\")\n",
" os.environ['XLNX_VART_FIRMWARE']= os.path.join(install_dir, 'voe-4.0-win_amd64', 'xclbins', 'strix', 'AMD_AIE2P_Nx4_Overlay.xclbin')\n",
" os.environ['NUM_OF_DPU_RUNNERS']='1'\n",
" os.environ['XLNX_TARGET_NAME']='AMD_AIE2_Nx4_Overlay'\n",
" case _:\n",
" print(\"Unrecognized APU type. Exiting.\")\n",
" exit()\n",
" print('XLNX_VART_FIRMWARE=', os.environ['XLNX_VART_FIRMWARE'])\n",
" print('NUM_OF_DPU_RUNNERS=', os.environ['NUM_OF_DPU_RUNNERS'])\n",
" print('XLNX_TARGET_NAME=', os.environ['XLNX_TARGET_NAME'])\n",
"\n",
"set_environment_variable(apu_type)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 1. Get Model\n",
"Here, we'll use the PyTorch library to define and instantiate a simple neural network model called `SmallModel`."
]
},
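
The cell defining `SmallModel` is collapsed in this diff. Purely as an illustration, a small PyTorch module of the kind described above could look like this (the architecture and layer sizes here are hypothetical, not the notebook's actual definition):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the SmallModel class referenced above;
# the real definition is collapsed in this diff.
class SmallModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(10, 32)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(32, 2)

    def forward(self, x):
        return self.fc2(self.relu(self.fc1(x)))

model = SmallModel()
dummy_input = torch.randn(1, 10)   # batch of one 10-feature input
print(model(dummy_input).shape)    # torch.Size([1, 2])
```
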
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
@@ -256,7 +309,7 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
@@ -295,7 +348,7 @@
},
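
The ONNX export step (step 2 in the introduction) is collapsed in this diff. A typical export with `torch.onnx.export` looks roughly like the sketch below; the model, input shape, file name, and opset are assumptions:

```python
import torch
import torch.nn as nn

# Sketch of the ONNX export step; the notebook's actual cell is collapsed
# in this diff, so the model, shapes, and output path are assumptions.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))  # stand-in for SmallModel
model.eval()
dummy_input = torch.randn(1, 10)

torch.onnx.export(
    model,
    dummy_input,                # example input used to trace the graph
    "helloworld.onnx",          # assumed output path
    input_names=["input"],
    output_names=["output"],
    opset_version=13,
)
```
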
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 6,
"metadata": {},
"outputs": [
{
@@ -308,24 +361,25 @@
"INFO:vai_q_onnx.quant_utils:Obtained calibration data with 1 iters\n",
"INFO:vai_q_onnx.quantize:Removed initializers from input\n",
"INFO:vai_q_onnx.quantize:Simplified model sucessfully\n",
"INFO:vai_q_onnx.quantize:Loading model...\n"
"INFO:vai_q_onnx.quantize:Loading model...\n",
"INFO:vai_q_onnx.quant_utils:The input ONNX model C:/Users/vgods/AppData/Local/Temp/vai.simp.av5354ht/model_simp.onnx can run inference successfully\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"[VAI_Q_ONNX_INFO]: Time information:\n",
"2024-08-23 10:12:35.362481\n",
"2024-08-16 10:37:50.999103\n",
"[VAI_Q_ONNX_INFO]: OS and CPU information:\n",
" system --- Windows\n",
" node --- vgodsoe-ryzen\n",
" node --- Mini-PC\n",
" release --- 10\n",
" version --- 10.0.26100\n",
" machine --- AMD64\n",
" processor --- AMD64 Family 25 Model 116 Stepping 1, AuthenticAMD\n",
"[VAI_Q_ONNX_INFO]: Tools version information:\n",
" python --- 3.10.14\n",
" python --- 3.10.6\n",
" onnx --- 1.16.2\n",
" onnxruntime --- 1.17.0\n",
" vai_q_onnx --- 1.17.0+511d6f4\n",
@@ -365,14 +419,13 @@
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:vai_q_onnx.quant_utils:The input ONNX model C:/Users/vgods/AppData/Local/Temp/vai.simp.kpf9kmm3/model_simp.onnx can run inference successfully\n",
"INFO:vai_q_onnx.quantize:optimize the model for better hardware compatibility.\n",
"INFO:vai_q_onnx.quantize:Start calibration...\n",
"INFO:vai_q_onnx.quantize:Start collecting data, runtime depends on your model size and the number of calibration dataset.\n",
"INFO:vai_q_onnx.calibrate:Finding optimal threshold for each tensor using PowerOfTwoMethod.MinMSE algorithm ...\n",
"INFO:vai_q_onnx.calibrate:Use all calibration data to calculate min mse\n",
"Computing range: 100%|██████████| 10/10 [00:04<00:00, 2.30tensor/s]\n",
"INFO:vai_q_onnx.quantize:Finished the calibration of PowerOfTwoMethod.MinMSE which costs 4.6s\n",
"Computing range: 100%|██████████| 10/10 [00:02<00:00, 3.78tensor/s]\n",
"INFO:vai_q_onnx.quantize:Finished the calibration of PowerOfTwoMethod.MinMSE which costs 2.8s\n",
"INFO:vai_q_onnx.qdq_quantizer:Remove QuantizeLinear & DequantizeLinear on certain operations(such as conv-relu).\n",
"INFO:vai_q_onnx.refine:Adjust the quantize info to meet the compiler constraints\n"
]
@@ -451,7 +504,7 @@
},
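
The `vai_q_onnx.quantize_static` call that produced the log output above is collapsed in this diff. A sketch consistent with those logs (QDQ format, PowerOfTwoMethod.MinMSE calibration) is shown below; the file paths and the calibration data reader are assumptions:

```python
import numpy as np
import vai_q_onnx
from onnxruntime.quantization import CalibrationDataReader

# Minimal random-data calibration reader (an assumption; the notebook's
# own reader is collapsed in this diff).
class RandomDataReader(CalibrationDataReader):
    def __init__(self, num_samples=10):
        self.data = iter(
            [{"input": np.random.rand(1, 10).astype(np.float32)} for _ in range(num_samples)]
        )

    def get_next(self):
        return next(self.data, None)

vai_q_onnx.quantize_static(
    "helloworld.onnx",            # float32 model from the export step (assumed path)
    "helloworld_quantized.onnx",  # INT8 output model (assumed path)
    RandomDataReader(),
    quant_format=vai_q_onnx.QuantFormat.QDQ,
    calibrate_method=vai_q_onnx.PowerOfTwoMethod.MinMSE,
    activation_type=vai_q_onnx.QuantType.QUInt8,
    weight_type=vai_q_onnx.QuantType.QInt8,
)
```
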
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 16,
"metadata": {},
"outputs": [],
"source": [
@@ -489,7 +542,7 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 8,
"metadata": {},
"outputs": [
{
@@ -526,7 +579,7 @@
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
@@ -567,15 +620,15 @@
},
{
"cell_type": "code",
"execution_count": 15,
"execution_count": 17,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"CPU Execution Time: 0.11257850000004055\n",
"NPU Execution Time: 0.08555689999997185\n"
"CPU Execution Time: 0.07306490000337362\n",
"NPU Execution Time: 0.06736009998712689\n"
]
}
],
@@ -601,7 +654,7 @@
},
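
The benchmarking code is collapsed in this diff, but the CPU and NPU execution times printed above come from timing two `onnxruntime` sessions: one on the default CPU execution provider and one on the NPU through the Vitis AI execution provider. A sketch under those assumptions:

```python
import timeit
import numpy as np
import onnxruntime as ort

quantized_model = "helloworld_quantized.onnx"     # assumed path from the quantization step
data = np.random.rand(1, 10).astype(np.float32)   # assumed input shape

# CPU session: default execution provider.
cpu_session = ort.InferenceSession(quantized_model, providers=["CPUExecutionProvider"])

# NPU session: VitisAIExecutionProvider, configured via the vaip_config.json
# shipped with the Ryzen AI installation (path is an assumption).
npu_session = ort.InferenceSession(
    quantized_model,
    providers=["VitisAIExecutionProvider"],
    provider_options=[{"config_file": "vaip_config.json"}],
)

cpu_time = timeit.timeit(lambda: cpu_session.run(None, {"input": data}), number=100)
npu_time = timeit.timeit(lambda: npu_session.run(None, {"input": data}), number=100)
print("CPU Execution Time:", cpu_time)
print("NPU Execution Time:", npu_time)
```
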
{
"cell_type": "code",
"execution_count": 16,
"execution_count": 19,
"metadata": {},
"outputs": [],
"source": [
@@ -616,7 +669,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"And there you have it. Your first model running on the NPU. We recommend trying a more complex model like ResNet50 or a custom model to compare performance and accuracy on the NPU.\n"
"And there you have it. Your first model running on the NPU."
]
}
],
@@ -636,8 +689,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.14"
}
"version": "3.10.6"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 4