Merge pull request #309 from asuhov/2019-r31
Publishing 2019 R3.1 content
asuhov authored Oct 28, 2019
2 parents 1798ac0 + 6dfc778 commit fe3f978
Showing 33 changed files with 635 additions and 48 deletions.
70 changes: 43 additions & 27 deletions inference-engine/README.md
@@ -22,6 +22,7 @@
- [Build Steps](#build-steps-2)
- [Additional Build Options](#additional-build-options-3)
- [Use Custom OpenCV Builds for Inference Engine](#use-custom-opencv-builds-for-inference-engine)
- [Adding Inference Engine to your project](#adding-inference-engine-to-your-project)
- [(Optional) Additional Installation Steps for the Intel® Movidius™ Neural Compute Stick and Neural Compute Stick 2](#optional-additional-installation-steps-for-the-intel-movidius-neural-compute-stick-and-neural-compute-stick-2)
- [For Linux, Raspbian Stretch* OS](#for-linux-raspbian-stretch-os)
- [For Windows](#for-windows-1)
@@ -62,7 +63,13 @@ The software was validated on:
git submodule init
git submodule update --recursive
```
2. Install build dependencies using the `install_dependencies.sh` script in the project root folder.
2. Install build dependencies using the `install_dependencies.sh` script in the project root folder:
```sh
chmod +x install_dependencies.sh
```
```sh
./install_dependencies.sh
```
3. By default, the build enables the Inference Engine GPU plugin to infer models on your Intel® Processor Graphics. This requires you to [Install Intel® Graphics Compute Runtime for OpenCL™ Driver package 19.04.12237](https://github.com/intel/compute-runtime/releases/tag/19.04.12237) before running the build. If you don't want to use the GPU plugin, use the `-DENABLE_CLDNN=OFF` CMake build option and skip the installation of the Intel® Graphics Compute Runtime for OpenCL™ Driver.
4. Create a build folder:
```sh
@@ -90,33 +97,20 @@ You can use the following additional build options:

- If the CMake-based build script cannot find and download the OpenCV package that is supported on your platform, or if you want to use a custom build of the OpenCV library, refer to the [Use Custom OpenCV Builds](#use-custom-opencv-builds-for-inference-engine) section for details.

- To build the Python API wrapper, use the `-DENABLE_PYTHON=ON` option. To specify an exact Python version, use the following options:
```sh
-DPYTHON_EXECUTABLE=`which python3.7` \
-DPYTHON_LIBRARY=/usr/lib/x86_64-linux-gnu/libpython3.7m.so \
-DPYTHON_INCLUDE_DIR=/usr/include/python3.7
```
- To build the Python API wrapper:
1. Install all additional packages listed in the `/inference-engine/ie_bridges/python/requirements.txt` file:
```sh
pip install -r requirements.txt
```
  2. Use the `-DENABLE_PYTHON=ON` option. To specify an exact Python version, use the following options:
```sh
-DPYTHON_EXECUTABLE=`which python3.7` \
-DPYTHON_LIBRARY=/usr/lib/x86_64-linux-gnu/libpython3.7m.so \
-DPYTHON_INCLUDE_DIR=/usr/include/python3.7
```

- To switch the CPU and GPU plugins off or on, use the `cmake` options `-DENABLE_MKL_DNN=ON/OFF` and `-DENABLE_CLDNN=ON/OFF`, respectively; see the sketch below.
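
  A combined configuration might look like the following sketch (the build directory and the `Release` build type are assumptions; adjust them to your setup):
  ```sh
  cd dldt/inference-engine/build
  cmake -DCMAKE_BUILD_TYPE=Release \
        -DENABLE_MKL_DNN=ON \
        -DENABLE_CLDNN=OFF \
        -DENABLE_PYTHON=ON ..
  make -j$(nproc)
  ```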

5. Adding to your project

For CMake projects, set an environment variable `InferenceEngine_DIR`:

```sh
export InferenceEngine_DIR=/path/to/dldt/inference-engine/build/
```

Then you can find Inference Engine by `find_package`:

```cmake
find_package(InferenceEngine)
include_directories(${InferenceEngine_INCLUDE_DIRS})
target_link_libraries(${PROJECT_NAME} ${InferenceEngine_LIBRARIES} dl)
```

## Build for Raspbian Stretch* OS

> **NOTE**: Only the MYRIAD plugin is supported.
@@ -371,7 +365,13 @@ The software was validated on:
git submodule init
git submodule update --recursive
```
2. Install build dependencies using the `install_dependencies.sh` script in the project root folder.
2. Install build dependencies using the `install_dependencies.sh` script in the project root folder:
```sh
chmod +x install_dependencies.sh
```
```sh
./install_dependencies.sh
```
3. Create a build folder:
```sh
mkdir build
@@ -419,6 +419,22 @@ After you have built the custom OpenCV library, perform the following preparation steps:
1. Set the `OpenCV_DIR` environment variable to the directory where the `OpenCVConfig.cmake` file of your custom OpenCV build is located.
2. Disable automatic package downloading by passing the `-DENABLE_OPENCV=OFF` option to the CMake-based build script for the Inference Engine (see the sketch below).
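A minimal sketch of these two steps, assuming your custom OpenCV build is installed under `/opt/opencv-custom` (a hypothetical path):
```sh
# Point OpenCV_DIR at the directory that contains OpenCVConfig.cmake (hypothetical path)
export OpenCV_DIR=/opt/opencv-custom/lib/cmake/opencv4
# Skip the automatic OpenCV download when configuring the Inference Engine build
cmake -DENABLE_OPENCV=OFF ..
```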
## Adding Inference Engine to your project
For CMake projects, set the `InferenceEngine_DIR` environment variable:
```sh
export InferenceEngine_DIR=/path/to/dldt/inference-engine/build/
```
Then you can find Inference Engine by `find_package`:
```cmake
find_package(InferenceEngine)
include_directories(${InferenceEngine_INCLUDE_DIRS})
target_link_libraries(${PROJECT_NAME} ${InferenceEngine_LIBRARIES} dl)
```
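For example, a typical out-of-source configure-and-build sequence for a project that consumes the Inference Engine this way might look like the following sketch (paths are placeholders):
```sh
export InferenceEngine_DIR=/path/to/dldt/inference-engine/build/
cd /path/to/your/project
mkdir -p build && cd build
cmake ..
cmake --build .
```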
## (Optional) Additional Installation Steps for the Intel® Movidius™ Neural Compute Stick and Neural Compute Stick 2
> **NOTE**: These steps are only required if you want to perform inference on Intel® Movidius™ Neural Compute Stick or the Intel® Neural Compute Stick 2 using the Inference Engine MYRIAD Plugin. See also [Intel® Neural Compute Stick 2 Get Started](https://software.intel.com/en-us/neural-compute-stick/get-started)
@@ -461,7 +477,7 @@ For Intel® Movidius™ Neural Compute Stick and Intel® Neural Compute Stick 2,
1. Go to the `<DLDT_ROOT_DIR>/inference-engine/thirdparty/movidius/MovidiusDriver` directory, where the `DLDT_ROOT_DIR` is the directory to which the DLDT repository was cloned.
2. Right-click the `Movidius_VSC_Device.inf` file and choose **Install** from the pop-up menu.
You have installed the driver for your Intel® Movidius™ Neural Compute Stick or Intel® Neural Compute Stick 2.
## Next Steps
4 changes: 4 additions & 0 deletions inference-engine/include/builders/ie_layer_decorator.hpp
@@ -9,6 +9,10 @@
#include <vector>

namespace InferenceEngine {

/**
* @brief Neural network builder API
*/
namespace Builder {

/**
3 changes: 3 additions & 0 deletions inference-engine/include/cldnn/cldnn_config.hpp
@@ -15,6 +15,9 @@

namespace InferenceEngine {

/**
* @brief GPU plugin configuration
*/
namespace CLDNNConfigParams {

/**
6 changes: 6 additions & 0 deletions inference-engine/include/dlia/dlia_config.hpp
@@ -16,6 +16,9 @@

namespace InferenceEngine {

/**
* @brief DLIA plugin metrics
*/
namespace DliaMetrics {

/**
@@ -37,6 +40,9 @@ DECLARE_DLIA_METRIC_VALUE(INPUT_STREAMING);

} // namespace DliaMetrics

/**
* @brief DLIA plugin configuration
*/
namespace DLIAConfigParams {

/**
15 changes: 13 additions & 2 deletions inference-engine/include/gna/gna_config.hpp
@@ -3,10 +3,10 @@
//

/**
* @brief A header that defines advanced related properties for VPU plugins.
* @brief A header that defines advanced related properties for GNA plugin.
* These properties should be used in SetConfig() and LoadNetwork() methods of plugins
*
* @file vpu_plugin_config.hpp
* @file gna_config.hpp
*/

#pragma once
@@ -16,9 +16,20 @@

namespace InferenceEngine {

/**
* @brief GNA plugin configuration
*/
namespace GNAConfigParams {

/**
* @def GNA_CONFIG_KEY(name)
* @brief Shortcut for defining configuration keys
*/
#define GNA_CONFIG_KEY(name) InferenceEngine::GNAConfigParams::_CONFIG_KEY(GNA_##name)
/**
* @def GNA_CONFIG_VALUE(name)
* @brief Shortcut for defining configuration values
*/
#define GNA_CONFIG_VALUE(name) InferenceEngine::GNAConfigParams::GNA_##name

#define DECLARE_GNA_CONFIG_KEY(name) DECLARE_CONFIG_KEY(GNA_##name)
3 changes: 3 additions & 0 deletions inference-engine/include/hetero/hetero_plugin_config.hpp
@@ -18,6 +18,9 @@

namespace InferenceEngine {

/**
* @brief Heterogeneous plugin configuration
*/
namespace HeteroConfigParams {

/**
6 changes: 6 additions & 0 deletions inference-engine/include/ie_plugin_config.hpp
@@ -17,6 +17,9 @@

namespace InferenceEngine {

/**
* @brief %Metrics
*/
namespace Metrics {

#ifndef DECLARE_METRIC_KEY_IMPL
@@ -144,6 +147,9 @@ DECLARE_EXEC_NETWORK_METRIC_KEY(OPTIMAL_NUMBER_OF_INFER_REQUESTS, unsigned int);

} // namespace Metrics

/**
* @brief Generic plugin configuration
*/
namespace PluginConfigParams {

/**
3 changes: 3 additions & 0 deletions inference-engine/include/inference_engine.hpp
@@ -28,6 +28,9 @@
#include <cpp/ie_executable_network.hpp>
#include <ie_version.hpp>

/**
* @brief Inference Engine API
*/
namespace InferenceEngine {
/**
* @brief Gets the top n results from a tblob
3 changes: 3 additions & 0 deletions inference-engine/include/multi-device/multi_device_config.hpp
@@ -16,6 +16,9 @@

namespace InferenceEngine {

/**
* @brief Multi Device plugin configuration
*/
namespace MultiDeviceConfigParams {

/**
3 changes: 3 additions & 0 deletions inference-engine/include/vpu/vpu_plugin_config.hpp
@@ -37,6 +37,9 @@

namespace InferenceEngine {

/**
* @brief VPU plugin configuration
*/
namespace VPUConfigParams {

//
6 changes: 4 additions & 2 deletions inference-engine/tools/calibration_tool/README.md
@@ -136,7 +136,7 @@ Command line:
python collect_statistics.py --config ~/inception_v1.yml -d ~/defenitions.yml -M /home/user/intel/openvino/deployment_tools/model_optimizer --models ~/models --source /media/user/calibration/datasets --annotations ~/annotations --converted_models ~/models
```

Result model has statistics which allow you to infer this model in INT8 precision. To measure performance, you can use the [Benchmark App](./inference-engine/ie_bridges/python/sample/benchmark_app/README.md).
The resulting model has statistics that allow you to infer this model in INT8 precision. To measure performance, you can use the [Benchmark App](./inference-engine/tools/benchmark_tool/README.md).
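As a sketch, a basic run of the Python benchmark tool on the calibrated model might look like this (the model path is a placeholder; refer to the Benchmark App README for the full set of options):
```sh
python3 benchmark_app.py -m <path-to-calibrated-ir.xml> -d CPU
```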

### Calibrate the Model
During the calibration process, the model is adjusted for efficient quantization and a minimal accuracy drop on the calibration dataset. The Calibration Tool produces a calibrated model that is executed in low-precision 8-bit quantized mode after loading into the CPU plugin.
@@ -180,4 +180,6 @@ To run the Calibration Tool in the simplified mode, use the following command:
```sh
python3 calibrate.py -sm -m <path-to-ir.xml> -s <path-to-dataset> -ss <images-number> -e <path-to-extensions-folder> -td <target-device> -precision <output-ir-precision> --output-dir <output-directory-path>
```
It accepts models with FP32, FP16 precisions and image files as the dataset.
Input:
- FP32 and FP16 models
- image files as a dataset
Empty file.
98 changes: 98 additions & 0 deletions model-optimizer/extensions/analysis/inputs.py
@@ -0,0 +1,98 @@
"""
Copyright (c) 2019 Intel Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import logging as log

import numpy as np

from mo.graph.graph import Graph
from mo.utils.model_analysis import AnalyzeAction


class InputsAnalysis(AnalyzeAction):
"""
The analyser gets information about model inputs and their default values if any.
"""

@classmethod
def fifo_queue_analysis(cls, graph: Graph, inputs_desc: dict):
"""
The FIFOQueue with QueueDeque has a separate input that specifies the size of batch to extract from queue. This
        input is redundant and should be removed from the model analysis output.
"""
inputs_to_ignore = set()
for fifo_queue in graph.get_op_nodes(op='FIFOQueueV2'):
if len(fifo_queue.get_outputs({'out': 0})) != 1:
                log.debug('The FIFOQueue operation "{}" has more than 1 consumer'.format(fifo_queue.id))
continue
queue_deque = fifo_queue.out_node(0)
if queue_deque.op in ['QueueDequeueMany', 'QueueDequeueManyV2', 'QueueDequeueUpTo', 'QueueDequeueUpToV2']:
queue_deque_input_1 = queue_deque.in_node(1)
if queue_deque_input_1.op in ['Parameter', 'PlaceholderWithDefault']:
log.debug('Adding node "{}" to placeholder ignore list'.format(queue_deque_input_1.id))
inputs_to_ignore.add(queue_deque_input_1.id)

# create input per each QueueDeque output port
for port_ind in range(len(queue_deque.out_nodes())):
inputs_desc["{}:{}".format(queue_deque.id, port_ind)] = {'shape': fifo_queue.shapes[port_ind].tolist(),
'value': None,
'data_type': fifo_queue.types[port_ind]}
return inputs_to_ignore

@classmethod
def ignore_mxnet_softmax_inputs(cls, graph: Graph):
"""
MxNet Softmax layers may have additional inputs which should be ignored. Refer to the
extensions/front/mxnet/check_softmax_node_inputs.py.
"""
inputs_to_ignore = set()
        softmax_nodes = []
        for op in ('SoftMax', 'SoftmaxActivation', 'SoftmaxOutput'):
            softmax_nodes.extend(graph.get_op_nodes(op=op))
for softmax_node in softmax_nodes:
for i in range(1, len(softmax_node.in_nodes())):
if softmax_node.in_node(i).has_valid('op') and softmax_node.in_node(i).op == 'Parameter':
inputs_to_ignore.add(softmax_node.in_node(i).id)
return inputs_to_ignore

def analyze(self, graph: Graph):
inputs_desc = dict()

inputs_to_ignore = InputsAnalysis.fifo_queue_analysis(graph, inputs_desc)
if graph.graph['fw'] == 'mxnet':
inputs_to_ignore.update(InputsAnalysis.ignore_mxnet_softmax_inputs(graph))

inputs = graph.get_op_nodes(op='Parameter')
for input in inputs:
inputs_desc[input.name] = {'shape': input.soft_get('shape', None),
'data_type': input.soft_get('data_type', None),
'value': None,
}

placeholders_with_default = graph.get_op_nodes(op='PlaceholderWithDefault')
for input in placeholders_with_default:
inputs_desc[input.name] = {'shape': input.soft_get('shape', None),
'data_type': input.soft_get('data_type', None),
'value': input.in_node(0).value if 0 in input.in_nodes() and
input.in_node(0).has_valid('value') else None}

for input_to_ignore in inputs_to_ignore:
del inputs_desc[input_to_ignore]

# workaround for the ONNX models case where input shape is specified as string value like: "width", "height".
# In this case the string value is converted to 0, but in fact it is an arbitrary value so should be -1
if graph.graph['fw'] == 'onnx':
for inp in inputs_desc.values():
inp['shape'] = [-1 if item == 0 else item for item in inp['shape']]
return {'inputs': inputs_desc}
