From bdb9afaf91923011fea0db2b59190fac6d2b100e Mon Sep 17 00:00:00 2001 From: Suryaprakash Shanmugam Date: Mon, 16 Jan 2023 19:23:13 +0530 Subject: [PATCH] Disable Myriad Plugin (#415) * Disable Myriad Plugin * Changes to MYD documentation * Disable Myriad plugin for OpenVINO source builds * Add back required file plugins.xml --- README.md | 17 +++++++---------- README_cn.md | 4 ---- docs/INSTALL.md | 8 ++++---- docs/INSTALL_cn.md | 8 ++++---- ...VINO_TensorFlow_classification_example.ipynb | 4 ++-- ...NO_TensorFlow_object_detection_example.ipynb | 5 ++--- ...sorFlow_tfhub_object_detection_example.ipynb | 4 ++-- openvino_tensorflow/CMakeLists.txt | 9 +-------- python/CreatePipWhl.cmake | 10 ---------- python/README.md | 15 ++++++--------- tools/build_utils.py | 3 ++- 11 files changed, 30 insertions(+), 57 deletions(-) diff --git a/README.md b/README.md index 64dade8f..fc8af585 100644 --- a/README.md +++ b/README.md @@ -14,9 +14,9 @@ This repository contains the source code of **OpenVINO™ integration with Tenso This product delivers [OpenVINO™](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit.html) inline optimizations which enhance inferencing performance with minimal code modifications. **OpenVINO™ integration with TensorFlow accelerates** inference across many AI models on a variety of Intel® silicon such as: - Intel® CPUs -- Intel® integrated GPUs -- Intel® Movidius™ Vision Processing Units - referred to as VPU -- Intel® Vision Accelerator Design with 8 Intel Movidius™ MyriadX VPUs - referred to as VAD-M or HDDL +- Intel® integrated and discrete GPUs + +Note: Support for Intel Movidius™ MyriadX VPUs is no longer maintained. Consider previous releases for running on Myriad VPUs. [Note: For maximum performance, efficiency, tooling customization, and hardware control, we recommend the developers to adopt native OpenVINO™ APIs and its runtime.] 
@@ -33,8 +33,7 @@ Check our [Interactive Installation Table](https://openvinotoolkit.github.io/ope The **OpenVINO™ integration with TensorFlow** package comes with pre-built libraries of OpenVINO™ version 2022.3.0. The users do not have to install OpenVINO™ separately. This package supports: - Intel® CPUs -- Intel® integrated GPUs -- Intel® Movidius™ Vision Processing Units (VPUs) +- Intel® integrated and discrete GPUs pip3 install -U pip @@ -46,8 +45,6 @@ For installation instructions on Windows please refer to [**OpenVINO™ integrat To use Intel® integrated GPUs for inference, make sure to install the [Intel® Graphics Compute Runtime for OpenCL™ drivers](https://docs.openvino.ai/latest/openvino_docs_install_guides_installing_openvino_linux.html#install-gpu) -To leverage Intel® Vision Accelerator Design with Movidius™ (VAD-M) for inference, install [**OpenVINO™ integration with TensorFlow** alongside the Intel® Distribution of OpenVINO™ Toolkit](./docs/INSTALL.md#install-openvino-integration-with-tensorflow-pypi-release-alongside-the-intel-distribution-of-openvino-toolkit-for-vad-m-support). - For more details on installation please refer to [INSTALL.md](docs/INSTALL.md), and for build from source options please refer to [BUILD.md](docs/BUILD.md) ## Configuration @@ -68,11 +65,11 @@ This should produce an output like: CXX11_ABI flag used for this build: 1 -By default, Intel® CPU is used to run inference. However, you can change the default option to either Intel® integrated GPU or Intel® VPU for AI inferencing. Invoke the following function to change the hardware on which inferencing is done. +By default, Intel® CPU is used to run inference. However, you can change the default option to Intel® integrated or discrete GPUs (GPU, GPU.0, GPU.1, etc.). Invoke the following function to change the hardware on which inferencing is done. openvino_tensorflow.set_backend('') -Supported backends include 'CPU', 'GPU', 'GPU_FP16', 'MYRIAD', and 'VAD-M'. 
+Supported backends include 'CPU', 'GPU', and 'GPU_FP16'. To determine what processing units are available on your system for inference, use the following function: @@ -85,7 +82,7 @@ For further performance improvements, it is advised to set the environment varia To see what you can do with **OpenVINO™ integration with TensorFlow**, explore the demos located in the [examples](./examples) directory. ## Docker Support -Dockerfiles for Ubuntu* 18.04, Ubuntu* 20.04, and TensorFlow* Serving are provided which can be used to build runtime Docker* images for **OpenVINO™ integration with TensorFlow** on CPU, GPU, VPU, and VAD-M. +Dockerfiles for Ubuntu* 18.04, Ubuntu* 20.04, and TensorFlow* Serving are provided which can be used to build runtime Docker* images for **OpenVINO™ integration with TensorFlow** on CPU and GPU. For more details see [docker readme](docker/README.md). ### Prebuilt Images diff --git a/README_cn.md b/README_cn.md index 1d991a47..58693dee 100644 --- a/README_cn.md +++ b/README_cn.md @@ -15,8 +15,6 @@ - 英特尔® CPU - 英特尔® 集成 GPU -- 英特尔® Movidius™ 视觉处理单元 (VPU) -- 支持 8 颗英特尔 Movidius™ MyriadX VPU 的英特尔® 视觉加速器设计(称作 VAD-M 或 HDDL) [注:为实现最佳的性能、效率、工具定制和硬件控制,我们建议开发人员使用原生 OpenVINO™ API 及其运行时。] @@ -34,7 +32,6 @@ **OpenVINO™ integration with TensorFlow** 安装包附带 OpenVINO™ 2022.3.0 版本的预建库,用户无需单独安装 OpenVINO™。该安装包支持: - 英特尔® CPU - 英特尔® 集成 GPU -- 英特尔® Movidius™ 视觉处理单元 (VPU) pip3 install -U pip @@ -45,7 +42,6 @@ 如果您想使用Intel® 集成显卡进行推理,请确保安装[Intel® Graphics Compute Runtime for OpenCL™ drivers](https://docs.openvino.ai/latest/openvino_docs_install_guides_installing_openvino_linux.html#install-gpu) -如果您想使用支持 Movidius™ (VAD-M)进行推理的英特尔® 视觉加速器设计 (VAD-M) 进行推理,请安装 [**OpenVINO™ integration with TensorFlow** 以及英特尔® OpenVINO™ 工具套件发布版](docs/INSTALL_cn.md#安装-openvino-integration-with-tensorflow-pypi-发布版与独立安装intel-openvino-发布版以支持vad-m)。 更多安装详情,请参阅 [INSTALL.md](docs/INSTALL_cn.md), 更多源构建选项请参阅 [BUILD.md](docs/BUILD_cn.md) diff --git a/docs/INSTALL.md b/docs/INSTALL.md index 1ef9da7f..7a635290 100644 
--- a/docs/INSTALL.md +++ b/docs/INSTALL.md @@ -9,7 +9,7 @@ ### Install **OpenVINO™ integration with TensorFlow** PyPi release * Includes pre-built libraries of OpenVINO™ version 2022.3.0. The users do not have to install OpenVINO™ separately - * Supports Intel® CPUs, Intel® integrated GPUs, and Intel® Movidius™ Vision Processing Units (VPUs). No VAD-M support + * Supports Intel® CPUs, Intel® integrated and discrete GPUs pip3 install -U pip pip3 install tensorflow==2.9.3 @@ -19,7 +19,7 @@ ### Install **OpenVINO™ integration with TensorFlow** PyPi release alongside the Intel® Distribution of OpenVINO™ Toolkit for VAD-M Support * Compatible with OpenVINO™ version 2022.3.0 - * Supports Intel® Vision Accelerator Design with Movidius™ (VAD-M), it also supports Intel® CPUs, Intel® integrated GPUs and Intel® Movidius™ Vision Processing Units (VPUs) + * Supports Intel® CPUs, Intel® integrated and discrete GPUs * To use it: 1. Install tensorflow and openvino-tensorflow packages from PyPi as explained in the section above 2. Download & install Intel® Distribution of OpenVINO™ Toolkit 2022.3.0 release along with its dependencies from ([https://software.intel.com/en-us/openvino-toolkit/download](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit/download.html)). @@ -32,7 +32,7 @@ Install **OpenVINO™ integration with TensorFlow** PyPi release * Includes pre-built libraries of OpenVINO™ version 2022.3.0. The users do not have to install OpenVINO™ separately - * Supports Intel® CPUs, Intel®, and Intel® Movidius™ Vision Processing Units (VPUs). No VAD-M support + * Supports Intel® CPUs, Intel® integrated and discrete GPUs pip3 install -U pip pip3 install tensorflow==2.9.3 @@ -44,7 +44,7 @@ Install **OpenVINO™ integration with TensorFlow** PyPi release alongside TensorFlow released in Github * TensorFlow wheel for Windows from PyPi does't have all the API symbols enabled which are required for **OpenVINO™ integration with TensorFlow**. 
User needs to install the TensorFlow wheel from the assets of the Github release page * Includes pre-built libraries of OpenVINO™ version 2022.3.0. The users do not have to install OpenVINO™ separately - * Supports Intel® CPUs, Intel® integrated GPUs, and Intel® Movidius™ Vision Processing Units (VPUs). No VAD-M support + * Supports Intel® CPUs, Intel® integrated and discrete GPUs pip3.9 install -U pip pip3.9 install https://github.com/openvinotoolkit/openvino_tensorflow/releases/download/v2.2.0/tensorflow-2.9.2-cp39-cp39-win_amd64.whl diff --git a/docs/INSTALL_cn.md b/docs/INSTALL_cn.md index 235aae54..9d31db67 100644 --- a/docs/INSTALL_cn.md +++ b/docs/INSTALL_cn.md @@ -8,7 +8,7 @@ ### 安装 **OpenVINO™ integration with TensorFlow** PyPi 发布版 * 包括 Intel® OpenVINO™ 2022.3.0 版的预建库,用户无需单独安装 OpenVINO™。 - * 支持 Intel® CPU、Intel® 集成 GPU 和 Intel® Movidius™ 视觉处理单元 (VPU),但不支持 VAD-M。 + * 支持 Intel® CPU、Intel® 集成 GPU 和独立 GPU。 pip3 install -U pip pip3 install tensorflow==2.9.3 @@ -18,7 +18,7 @@ ### 安装 **OpenVINO™ integration with TensorFlow** PyPi 发布版与独立安装Intel® OpenVINO™ 发布版以支持VAD-M * 兼容 Intel® OpenVINO™ 2022.3.0版本 - * 支持 Intel® Movidius™ (VAD-M) 的视觉加速器设计 同时支持 Intel® CPU、Intel® 集成 GPU、Intel® Movidius™ 视觉处理单元 (VPU)。 + * 支持 Intel® CPU、Intel® 集成 GPU 和独立 GPU。 * 使用方法: 1. 按照上述方法从PyPi安装tensorflow 和 openvino-tensorflow。 2. 
下载安装Intel® OpenVINO™ 2022.3.0发布版,一并安装其依赖([https://software.intel.com/en-us/openvino-toolkit/download](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit/download.html))。 @@ -31,7 +31,7 @@ 安装 **OpenVINO™ integration with TensorFlow** PyPi 发布版 * 包括 Intel® OpenVINO™ 2022.3.0 版的预建库,用户无需单独安装Intel® OpenVINO™ - * 支持 Intel® CPU、Intel® 集成 GPU 和 Intel® Movidius™ 视觉处理单元 (VPU),但不支持 VAD-M。 + * 支持 Intel® CPU、Intel® 集成 GPU 和独立 GPU。 pip3 install -U pip pip3 install tensorflow==2.9.3 @@ -43,7 +43,7 @@ 安装 **OpenVINO™ integration with TensorFlow** PyPi 版本与独立安装TensorFlow Github版本 * 基于Windows 的TensorFlow PyPi 安装版并没有使能 **OpenVINO™ integration with TensorFlow** 需要的所有API。用户需要从Github 发布中安装TensorFlow wheel。 * 包括 OpenVINO™ 2022.3.0 版的预建库。 用户无需单独安装 Intel® OpenVINO™ 。 - * 支持 Intel® CPU、Intel® 集成 GPU 和 Intel® Movidius™ 视觉处理单元 (VPU),但不支持 VAD-M。 + * 支持 Intel® CPU、Intel® 集成 GPU 和独立 GPU。 pip3.9 install -U pip pip3.9 install https://github.com/openvinotoolkit/openvino_tensorflow/releases/download/v2.2.0/tensorflow-2.9.2-cp39-cp39-win_amd64.whl diff --git a/examples/notebooks/OpenVINO_TensorFlow_classification_example.ipynb b/examples/notebooks/OpenVINO_TensorFlow_classification_example.ipynb index 5b1daafe..286c037f 100644 --- a/examples/notebooks/OpenVINO_TensorFlow_classification_example.ipynb +++ b/examples/notebooks/OpenVINO_TensorFlow_classification_example.ipynb @@ -10,6 +10,7 @@ ] }, { + "attachments": {}, "cell_type": "markdown", "metadata": { "id": "1s7OK7vW3put" }, @@ -19,8 +20,7 @@ "\n", "OpenVINO™ integration with TensorFlow is designed for TensorFlow developers who want to get started with OpenVINO™ in their inferencing applications. This product effectively delivers OpenVINO™ inline optimizations which enhance inferencing performance with minimal code modifications. 
OpenVINO™ integration with TensorFlow accelerates inference across many AI models on a variety of Intel® silicon such as: \n", "* Intel® CPUs\n", - "* Intel® integrated GPUs\n", - "* Intel® Movidius™ Vision Processing Units - referred to as VPU\n", + "* Intel® integrated and discrete GPUs\n", "\n", "**Overview**\n", "\n", diff --git a/examples/notebooks/OpenVINO_TensorFlow_object_detection_example.ipynb b/examples/notebooks/OpenVINO_TensorFlow_object_detection_example.ipynb index 2420fa67..8feab7cc 100644 --- a/examples/notebooks/OpenVINO_TensorFlow_object_detection_example.ipynb +++ b/examples/notebooks/OpenVINO_TensorFlow_object_detection_example.ipynb @@ -10,6 +10,7 @@ ] }, { + "attachments": {}, "cell_type": "markdown", "metadata": { "id": "atwwZdgc3d3_" @@ -21,9 +22,7 @@ "\n", "OpenVINO™ integration with TensorFlow is designed for TensorFlow developers who want to get started with OpenVINO™ in their inferencing applications. This product effectively delivers OpenVINO™ inline optimizations which enhance inferencing performance with minimal code modifications. 
OpenVINO™ integration with TensorFlow accelerates inference across many AI models on a variety of Intel® silicon such as: \n", "* Intel® CPUs\n", - "* Intel® integrated GPUs\n", - "* Intel® Movidius™ Vision Processing Units - referred to as VPU\n", - "* Intel® Vision Accelerator Design with 8 Intel Movidius™ MyriadX VPUs - referred to as VAD-M or HDDL\n", + "* Intel® integrated and discrete GPUs\n", "\n", "**Overview**\n", "\n", diff --git a/examples/notebooks/OpenVINO_TensorFlow_tfhub_object_detection_example.ipynb b/examples/notebooks/OpenVINO_TensorFlow_tfhub_object_detection_example.ipynb index 1ad9a7b3..6a48573e 100644 --- a/examples/notebooks/OpenVINO_TensorFlow_tfhub_object_detection_example.ipynb +++ b/examples/notebooks/OpenVINO_TensorFlow_tfhub_object_detection_example.ipynb @@ -17,6 +17,7 @@ ] }, { + "attachments": {}, "cell_type": "markdown", "id": "898d9206", "metadata": {}, @@ -24,8 +25,7 @@ "[OpenVINO™ integration with TensorFlow](https://github.com/openvinotoolkit/openvino_tensorflow) is designed for TensorFlow developers who want to get started with [OpenVINO™](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/overview.html) in their inferencing applications. This product delivers OpenVINO™ inline optimizations, which enhance inferencing performance of popular deep learning models with minimal code changes and without any accuracy drop. 
OpenVINO™ integration with TensorFlow accelerates inference across many AI models on a variety of Intel® silicon such as:\n", "\n", " - Intel® CPUs\n", - " - Intel® integrated GPUs\n", - " - Intel® Movidius™ Vision Processing Units - referred to as VPU" + " - Intel® integrated and discrete GPUs" ] }, { diff --git a/openvino_tensorflow/CMakeLists.txt b/openvino_tensorflow/CMakeLists.txt index 82309d65..9008b895 100644 --- a/openvino_tensorflow/CMakeLists.txt +++ b/openvino_tensorflow/CMakeLists.txt @@ -145,9 +145,6 @@ if (APPLE) endif() set(IE_LIBS_PATH ${OPENVINO_ARTIFACTS_DIR}/runtime/lib/intel64/${CMAKE_BUILD_TYPE}) - set(IE_LIBS - "${IE_LIBS_PATH}/pcie-ma2x8x.mvcmd" - ) set(TBB_LIBS ${OPENVINO_ARTIFACTS_DIR}/runtime/3rdparty/tbb/lib/) install(FILES ${CMAKE_INSTALL_PREFIX}/../ocm/OCM/${OCM_LIB} DESTINATION "${OVTF_INSTALL_LIB_DIR}") install(FILES ${CMAKE_CURRENT_BINARY_DIR}/${TF_CONVERSION_EXTENSIONS_MODULE_NAME}/${TF_CONVERSION_EXTENSIONS_LIB} DESTINATION "${OVTF_INSTALL_LIB_DIR}") @@ -161,7 +158,6 @@ elseif(WIN32) set (IE_LIBS "${IE_LIBS_PATH}/${LIB_PREFIX}openvino_intel_gpu_plugin.${PLUGIN_LIB_EXT}" "${IE_LIBS_PATH}/cache.json" - "${IE_LIBS_PATH}/pcie-ma2x8x.elf" ) set(TBB_LIBS ${OPENVINO_ARTIFACTS_DIR}/runtime/3rdparty/tbb/bin/) install(FILES ${CMAKE_INSTALL_PREFIX}/../ocm/OCM/${CMAKE_BUILD_TYPE}/${OCM_LIB} DESTINATION "${OVTF_INSTALL_LIB_DIR}") @@ -183,7 +179,6 @@ else() set (IE_LIBS "${IE_LIBS_PATH}/${LIB_PREFIX}openvino_intel_gpu_plugin.${PLUGIN_LIB_EXT}" "${IE_LIBS_PATH}/cache.json" - "${IE_LIBS_PATH}/pcie-ma2x8x.mvcmd" ) set(TBB_LIBS ${OPENVINO_ARTIFACTS_DIR}/runtime/3rdparty/tbb/lib/) install(FILES ${CMAKE_INSTALL_PREFIX}/../ocm/OCM/${OCM_LIB} DESTINATION "${OVTF_INSTALL_LIB_DIR}") @@ -200,9 +195,7 @@ set (IE_LIBS "${IE_LIBS_PATH}/${LIB_PREFIX}openvino_c.${OV_LIB_EXT_DOT}" "${IE_LIBS_PATH}/${LIB_PREFIX}openvino_tensorflow_frontend.${OV_LIB_EXT_DOT}" "${IE_LIBS_PATH}/${LIB_PREFIX}openvino_intel_cpu_plugin.${PLUGIN_LIB_EXT}" - 
"${IE_LIBS_PATH}/${LIB_PREFIX}openvino_intel_myriad_plugin.${PLUGIN_LIB_EXT}" - "${IE_LIBS_PATH}/usb-ma2x8x.mvcmd" - "${IE_LIBS_PATH}/plugins.xml" + "${IE_LIBS_PATH}/plugins.xml" ) # Install Openvino and TBB libraries diff --git a/python/CreatePipWhl.cmake b/python/CreatePipWhl.cmake index 29f4e6c7..7c7ed248 100644 --- a/python/CreatePipWhl.cmake +++ b/python/CreatePipWhl.cmake @@ -91,10 +91,8 @@ if (PYTHON) if (APPLE) if(CMAKE_BUILD_TYPE STREQUAL "Debug") set(libMKLDNNPluginPath "${CMAKE_CURRENT_BINARY_DIR}/python/openvino_tensorflow/libopenvino_intel_cpu_plugind.so") - set(libmyriadPluginPath "${CMAKE_CURRENT_BINARY_DIR}/python/openvino_tensorflow/libopenvino_intel_myriad_plugind.so") else() set(libMKLDNNPluginPath "${CMAKE_CURRENT_BINARY_DIR}/python/openvino_tensorflow/libopenvino_intel_cpu_plugin.so") - set(libmyriadPluginPath "${CMAKE_CURRENT_BINARY_DIR}/python/openvino_tensorflow/libopenvino_intel_myriad_plugin.so") endif() # libMKLDNNPluginPath @@ -111,14 +109,6 @@ if (PYTHON) endif() # libmyriadPluginPath - execute_process(COMMAND - install_name_tool -add_rpath - @loader_path - ${libmyriadPluginPath} - RESULT_VARIABLE result - ERROR_VARIABLE ERR - ERROR_STRIP_TRAILING_WHITESPACE - ) if(${result}) message(FATAL_ERROR "Cannot add rpath") endif() diff --git a/python/README.md b/python/README.md index 221b2fb8..f34d5d9d 100644 --- a/python/README.md +++ b/python/README.md @@ -2,9 +2,9 @@ [**OpenVINO™ integration with TensorFlow**](https://github.com/openvinotoolkit/openvino_tensorflow/) is a product designed for TensorFlow* developers who want to get started with [OpenVINO™](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit.html) in their inferencing applications. This product delivers [OpenVINO™](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit.html) inline optimizations which enhance inferencing performance with minimal code modifications. 
**OpenVINO™ integration with TensorFlow** accelerates inference across many [AI models](https://github.com/openvinotoolkit/openvino_tensorflow/blob/master/docs/MODELS.md) on a variety of Intel® silicon such as: - Intel® CPUs -- Intel® integrated GPUs -- Intel® Movidius™ Vision Processing Units - referred to as VPU -- Intel® Vision Accelerator Design with 8 Intel Movidius™ MyriadX VPUs - referred to as VAD-M or HDDL +- Intel® integrated and discrete GPUs + +Note: Support for Intel Movidius™ MyriadX VPUs is no longer maintained. Consider previous releases for running on Myriad VPUs. [Note: For maximum performance, efficiency, tooling customization, and hardware control, we recommend the developers to adopt native OpenVINO™ APIs and its runtime.] @@ -22,14 +22,11 @@ This **OpenVINO™ integration with TensorFlow** package comes with pre-built li This package supports: - Intel® CPUs - Intel® integrated GPUs -- Intel® Movidius™ Vision Processing Units (VPUs) pip3 install -U pip pip3 install tensorflow==2.9.3 pip3 install openvino-tensorflow==2.3.0 -To leverage Intel® Vision Accelerator Design with Movidius™ (VAD-M) for inference, please refer to: [**OpenVINO™ integration with TensorFlow** alongside the Intel® Distribution of OpenVINO™ Toolkit](https://github.com/openvinotoolkit/openvino_tensorflow/blob/master/docs/INSTALL.md#install-openvino™-integration-with-tensorflow-pypi-release-alongside-the-intel®-distribution-of-openvino™-toolkit-for-vad-m-support). 
- For installation instructions on Windows please refer to [**OpenVINO™ integration with TensorFlow** for Windows ](https://github.com/openvinotoolkit/openvino_tensorflow/blob/master/docs/INSTALL.md#windows) For more details on installation please refer to [INSTALL.md](https://github.com/openvinotoolkit/openvino_tensorflow/blob/master/docs/INSTALL.md), and for build from source options please refer to [BUILD.md](https://github.com/openvinotoolkit/openvino_tensorflow/blob/master/docs/BUILD.md) @@ -53,11 +50,11 @@ This should produce an output like: ## Usage -By default, Intel® CPU is used to run inference. However, you can change the default option to either Intel® integrated GPU or Intel® VPU for AI inferencing. Invoke the following function to change the hardware on which inferencing is done. +By default, Intel® CPU is used to run inference. However, you can change the default option to Intel® integrated or discrete GPUs (GPU, GPU.0, GPU.1, etc.). Invoke the following function to change the hardware on which inferencing is done. openvino_tensorflow.set_backend('') -Supported backends include 'CPU', 'GPU', 'GPU_FP16', and 'MYRIAD'. +Supported backends include 'CPU', 'GPU', and 'GPU_FP16'. To determine what processing units are available on your system for inference, use the following function: @@ -72,7 +69,7 @@ For further performance improvements, it is advised to set the environment varia To see what you can do with **OpenVINO™ integration with TensorFlow**, explore the demos located in the [examples](https://github.com/openvinotoolkit/openvino_tensorflow/tree/master/examples) repository. ## Docker Support -Dockerfiles for Ubuntu* 18.04, Ubuntu* 20.04, and TensorFlow* Serving are provided which can be used to build runtime Docker* images for **OpenVINO™ integration with TensorFlow** on CPU, GPU, VPU, and VAD-M. 
+Dockerfiles for Ubuntu* 18.04, Ubuntu* 20.04, and TensorFlow* Serving are provided which can be used to build runtime Docker* images for **OpenVINO™ integration with TensorFlow** on CPU and GPU. For more details see [docker readme](https://github.com/openvinotoolkit/openvino_tensorflow/tree/master/docker/README.md). ### Prebuilt Images diff --git a/tools/build_utils.py b/tools/build_utils.py index 671e5a11..0e13a5b7 100755 --- a/tools/build_utils.py +++ b/tools/build_utils.py @@ -876,7 +876,8 @@ def build_openvino(build_dir, openvino_src_dir, cxx_abi, target_arch, "-DENABLE_TESTING=OFF", "-DENABLE_SAMPLES=OFF", "-DENABLE_FUNCTIONAL_TESTS=OFF", "-DCMAKE_CXX_FLAGS=-D_GLIBCXX_USE_CXX11_ABI=" + cxx_abi, - "-DCMAKE_INSTALL_RPATH=\"$ORIGIN\"", "-DTHREADING=" + threading + "-DCMAKE_INSTALL_RPATH=\"$ORIGIN\"", "-DTHREADING=" + threading, + "-DENABLE_INTEL_MYRIAD=OFF", "-DENABLE_INTEL_MYRIAD_COMMON=OFF" ] if (platform.system() == 'Windows'):