@@ -213,9 +213,9 @@ For example, to run a benchmark on GPU, use:
213213 ./benchmark_app -m model.xml -d GPU
214214
215215
216- You may also specify ``AUTO`` as the device, in which case the ``benchmark_app`` will
217- automatically select the best device for benchmarking and support it with the
218- CPU at the model loading stage. You can use ``AUTO`` when you aim for better performance.
216+ You may also specify ``AUTO`` as the device to let ``benchmark_app``
217+ automatically select the best device for benchmarking and support it with
218+ CPU when loading the model. You can use ``AUTO`` when you aim for better performance.
219219For more information, see the
220220:doc:`Automatic device selection <../../openvino-workflow/running-inference/inference-devices-and-modes/auto-device-selection>` page.
221221
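The device selection described above can be tried directly; a minimal sketch, assuming ``model.xml`` stands in for your own IR file:

```shell
# Let AUTO choose the best available device for benchmarking;
# the CPU can serve early inferences while the target device
# is still compiling the model.
./benchmark_app -m model.xml -d AUTO
```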
@@ -279,8 +279,8 @@ the path to an image or a folder of images:
279279 ./benchmark_app -m model.xml -i test1.jpg
280280
281281
282- The tool will repeatedly loop through the provided inputs and run inference on
283- them for the specified amount of time or the number of iterations. If the ``-i``
282+ The tool will repeatedly loop through the provided inputs and run inference
283+ for the specified amount of time or the number of iterations. If the ``-i``
284284flag is not used, the tool will automatically generate random data to fit the
285285input shape of the model.
286286
@@ -300,10 +300,9 @@ Advanced Usage
300300 By default, OpenVINO samples, tools and demos expect input with BGR channels
301301 order. If you trained your model to work with RGB order, you need to manually
302302 rearrange the default channel order in the sample or demo application or reconvert
your model using model conversion API with ``reverse_input_channels`` argument
304- specified. For more information about the argument, refer to When to Reverse
305- Input Channels section of Converting a Model to Intermediate Representation (IR).
306-
303+ your model.
304+ For more information, refer to the **Color Conversion** section of
305+ :doc:`Preprocessing API <../../openvino-workflow/running-inference/optimize-inference/optimize-preprocessing/preprocessing-api-details>`.
307306
308307Per-layer performance and logging
309308+++++++++++++++++++++++++++++++++
@@ -312,27 +311,25 @@ The application also collects per-layer Performance Measurement (PM) counters fo
312311each executed infer request if you enable statistics dumping by setting the
313312``-report_type`` parameter to one of the possible values:
314313
315- * ``no_counters`` report includes configuration options specified, resulting
316- FPS and latency.
317- * ``average_counters`` report extends the ``no_counters`` report and additionally
318- includes average PM counters values for each layer from the network.
319- * ``detailed_counters`` report extends the ``average_counters`` report and
314+ * ``no_counters`` - includes the specified configuration options, resulting FPS and latency.
315+ * ``average_counters`` - extends the ``no_counters`` report and additionally
316+ includes average PM counter values for each layer of the model.
317+ * ``detailed_counters`` - extends the ``average_counters`` report and
320318 additionally includes per-layer PM counters and latency for each executed infer request.
321319
322- Depending on the type, the report is stored in ``benchmark_no_counters_report.csv``,
320+ Depending on the type, the report is saved to the ``benchmark_no_counters_report.csv``,
323321``benchmark_average_counters_report.csv``, or ``benchmark_detailed_counters_report.csv``
324- file located in the path specified in ``-report_folder``. The application also
325- saves executable graph information serialized to an XML file if you specify a
326- path to it with the ``-exec_graph_path`` parameter.
322+ file located in the path specified with ``-report_folder``. The application also
323+ saves executable graph information to the XML file
324+ specified with the ``-exec_graph_path`` parameter.
327325
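As a sketch of combining the reporting options above (``reports`` and ``exec_graph.xml`` are hypothetical output paths):

```shell
# Save average per-layer counters to
# reports/benchmark_average_counters_report.csv and serialize
# the executable graph to exec_graph.xml.
./benchmark_app -m model.xml -report_type average_counters \
                -report_folder reports -exec_graph_path exec_graph.xml
```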
328326.. _all-configuration-options-python-benchmark:
329327
330328All configuration options
331329+++++++++++++++++++++++++
332330
333- Running the application with the ``-h`` or ``--help`` option yields the
334- following usage message:
335-
331+ Run the application with the ``-h`` or ``--help`` flags to get information on
332+ available options and parameters:
336333
337334.. tab-set ::
338335
@@ -604,8 +601,7 @@ following usage message:
604601 }
605602
606603
607-
608- Running the application with the empty list of options yields the usage message given above and an error message.
604+ The help information is also displayed when you run the application without any parameters.
609605
610606More information on inputs
611607++++++++++++++++++++++++++
@@ -625,18 +621,18 @@ Examples of Running the Tool
625621############################
626622
627623This section provides step-by-step instructions on how to run the Benchmark Tool
628- with the ``asl-recognition`` Intel model on CPU or GPU devices. It uses random data as the input.
624+ with the ``asl-recognition`` Intel model on CPU or GPU devices. It uses random data as input.
629625
630626.. note::
631627
632628 Internet access is required to execute the following steps successfully. If you
633- have access to the Internet through a proxy server only, please make sure that
634- it is configured in your OS environment.
629+ have access to the Internet through a proxy server only, make sure
630+ it is configured in your OS.
635631
636- Run the tool, specifying the location of the OpenVINO Intermediate Representation
637- (IR) model ``.xml`` file, the device to perform inference on, and a performance hint.
638- The following commands demonstrate examples of how to run the Benchmark Tool
639- in latency mode on CPU and throughput mode on GPU devices:
632+ Run the tool, specifying the location of the ``.xml`` model file of OpenVINO Intermediate
633+ Representation (IR), the inference device, and a performance hint.
634+ The following examples show how to run the Benchmark Tool
635+ on CPU and GPU in latency and throughput mode respectively:
640636
641637* On CPU (latency mode):
642638
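A minimal sketch of such a run, assuming a hypothetical path to the downloaded model file:

```shell
# Latency-oriented benchmark on CPU; the model path is illustrative.
./benchmark_app -m asl-recognition.xml -d CPU -hint latency
```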
@@ -677,14 +673,15 @@ in latency mode on CPU and throughput mode on GPU devices:
677673
678674
679675The application outputs the number of executed iterations, total duration of execution,
680- latency, and throughput. Additionally, if you set the ``-report_type`` parameter,
681- the application outputs a statistics report. If you set the ``-pc`` parameter,
682- the application outputs performance counters. If you set ``-exec_graph_path``,
683- the application reports executable graph information serialized. All measurements
684- including per-layer PM counters are reported in milliseconds.
676+ latency, and throughput. Additionally, if you set the parameters:
677+
678+ * ``-report_type`` - the application outputs a statistics report,
679+ * ``-pc`` - the application outputs performance counters,
680+ * ``-exec_graph_path`` - the application saves serialized executable graph information.
681+
682+ All measurements including per-layer PM counters are reported in milliseconds.
685683
686- An example of the information output when running ``benchmark_app`` on CPU in
687- latency mode is shown below:
684+ An example of running ``benchmark_app`` on CPU in latency mode and its output are shown below:
688685
689686.. tab-set::
690687
@@ -826,11 +823,11 @@ latency mode is shown below:
826823 [ INFO ] Throughput: 91.12 FPS
827824
828825
829- The Benchmark Tool can also be used with dynamically shaped networks to measure
826+ The Benchmark Tool can also be used with dynamically shaped models to measure
830827expected inference time for various input data shapes. See the ``-shape`` and
831828``-data_shape`` argument descriptions in the :ref:`All configuration options <all-configuration-options-python-benchmark>`
832- section to learn more about using dynamic shapes. Here is a command example for
833- using ``benchmark_app`` with dynamic networks and a portion of the resulting output:
829+ section to learn more about using dynamic shapes. Below is an example of
830+ using ``benchmark_app`` with dynamic models and a portion of the resulting output:
834831
835832
836833.. tab-set::