
Commit 02374b7

committed
NXP Backend: Update documentation to the new scheme
1 parent 53c1a77 commit 02374b7

File tree

9 files changed: +255 −82 lines changed

docs/source/backends-nxp.md

Lines changed: 0 additions & 79 deletions
This file was deleted.

docs/source/backends-overview.md

Lines changed: 2 additions & 2 deletions
@@ -29,7 +29,7 @@ Backends are the bridge between your exported model and the hardware it runs on.
 | [ARM EthosU](backends-arm-ethos-u) | Embedded | NPU | ARM MCUs |
 | [ARM VGF](backends-arm-vgf) | Android | NPU | ARM platforms |
 | [OpenVINO](build-run-openvino) | Embedded | CPU/GPU/NPU | Intel SoCs |
-| [NXP](backends-nxp) | Embedded | NPU | NXP SoCs |
+| [NXP](backends/nxp/nxp-overview.md) | Embedded | NPU | NXP SoCs |
 | [Cadence](backends-cadence) | Embedded | DSP | DSP-optimized workloads |
 | [Samsung Exynos](backends-samsung-exynos) | Android | NPU | Samsung SoCs |

@@ -59,6 +59,6 @@ backends-mediatek
 backends-arm-ethos-u
 backends-arm-vgf
 build-run-openvino
-backends-nxp
+backends/nxp/nxp-overview
 backends-cadence
 backends-samsung-exynos
docs/source/backends/nxp/nxp-overview.md

Lines changed: 71 additions & 0 deletions

@@ -0,0 +1,71 @@
# NXP eIQ Neutron Backend

This page introduces the NXP eIQ Neutron backend.
NXP offers accelerated machine learning model inference on edge devices.
To learn more about NXP's machine learning acceleration platform, please refer to [the official NXP website](https://www.nxp.com/applications/technologies/ai-and-machine-learning:MACHINE-LEARNING).

<div class="admonition tip">
For the up-to-date status of running ExecuTorch on the Neutron backend, please visit the <a href="https://github.com/pytorch/executorch/blob/main/backends/nxp/README.md">manual page</a>.
</div>

## Features

ExecuTorch v1.0 supports running machine learning models on selected NXP chips (currently only the i.MXRT700).
The currently supported machine learning models include:
- Convolution-based neural networks
- Full support for MobileNetV2 and CifarNet

## Target Requirements

- Hardware with NXP's [i.MXRT700](https://www.nxp.com/products/i.MX-RT700) chip or an evaluation board such as the MIMXRT700-EVK.

## Development Requirements

- [MCUXpresso IDE](https://www.nxp.com/design/design-center/software/development-software/mcuxpresso-software-and-tools-/mcuxpresso-integrated-development-environment-ide:MCUXpresso-IDE) or the [MCUXpresso Visual Studio Code extension](https://www.nxp.com/design/design-center/software/development-software/mcuxpresso-software-and-tools-/mcuxpresso-for-visual-studio-code:MCUXPRESSO-VSC)
- [MCUXpresso SDK 25.06](https://mcuxpresso.nxp.com/mcuxsdk/25.06.00/html/index.html)
- eIQ Neutron Converter for MCUXpresso SDK 25.06, which you can download from the eIQ PyPI repository:

```commandline
$ pip install --index-url https://eiq.nxp.com/repository neutron_converter_SDK_25_06
```

Instead of installing the requirements manually (except for the MCUXpresso IDE and SDK), you can use the setup script:
```commandline
$ ./examples/nxp/setup.sh
```

## Using NXP eIQ Backend

To test converting a neural network model for inference on the NXP eIQ Neutron backend, you can use our example script:

```shell
# cd to the root of the executorch repository
./examples/nxp/aot_neutron_compile.sh [model (cifar10 or mobilenetv2)]
```

For a quick overview of how to convert a custom PyTorch model, take a look at our [example Python script](https://github.com/pytorch/executorch/tree/release/1.0/examples/nxp/aot_neutron_compile.py).

## Runtime Integration

To learn how to run the converted model on NXP hardware, use one of the ExecuTorch runtime example projects from the MCUXpresso IDE example projects list.
For a more fine-grained tutorial, visit [this manual page](https://mcuxpresso.nxp.com/mcuxsdk/latest/html/middleware/eiq/executorch/docs/nxp/topics/example_applications.html).

## Reference

**→{doc}`nxp-partitioner` — Partitioner options.**

**→{doc}`nxp-quantization` — Supported quantization schemes.**

**→{doc}`tutorials/nxp-tutorials` — Tutorials.**

```{toctree}
:maxdepth: 2
:hidden:
:caption: NXP Backend

nxp-partitioner
nxp-quantization
tutorials/nxp-tutorials
```
docs/source/backends/nxp/nxp-partitioner.rst

Lines changed: 43 additions & 0 deletions

@@ -0,0 +1,43 @@
===============
Partitioner API
===============

The Neutron partitioner API allows you to configure how the model is delegated to Neutron. Passing a ``NeutronPartitioner`` instance with no additional parameters runs as much of the model as possible on the Neutron backend; this is the most common use case, shown in the sketch below.

It has the following arguments:

* ``compile_spec`` - list of key-value pairs defining the compilation.
* ``custom_delegation_options`` - custom options for specifying node delegation.
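
A minimal sketch of this default usage (the import paths match the quantization example; ``exported_program`` stands for the result of ``torch.export.export``):

.. code-block:: python

   from executorch.backends.nxp.neutron_partitioner import NeutronPartitioner
   from executorch.backends.nxp.nxp_backend import generate_neutron_compile_spec
   from executorch.exir import to_edge_transform_and_lower

   # Compile spec for the i.MX RT700 Neutron NPU configuration.
   compile_spec = generate_neutron_compile_spec(
       "imxrt700",
       neutron_converter_flavor="SDK_25_06",
   )

   # No extra parameters: delegate as much of the model as possible.
   et_program = to_edge_transform_and_lower(
       exported_program,  # result of torch.export.export(model, sample_inputs)
       partitioner=[NeutronPartitioner(compile_spec=compile_spec)],
   ).to_executorch()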

--------------------
Compile Spec Options
--------------------

To generate the compile spec for the Neutron backend, use the ``generate_neutron_compile_spec`` function, or call ``NeutronCompileSpecBuilder().neutron_compile_spec()`` directly.
The following fields can be set:

* ``config`` - NXP platform defining the Neutron NPU configuration, e.g. "imxrt700".
* ``neutron_converter_flavor`` - flavor of the neutron-converter module to use. The neutron-converter module named ``neutron_converter_SDK_25_06`` has the flavor ``SDK_25_06``. Set the flavor to match the MCUXpresso SDK version you will use.
* ``extra_flags`` - extra flags for the Neutron compiler.
* ``operators_not_to_delegate`` - list of operators that will not be delegated; see the example below.
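
For example, a sketch that keeps softmax on the CPU (the operator-name format here is an assumption; check the partitioner source for the exact strings it expects):

.. code-block:: python

   compile_spec = generate_neutron_compile_spec(
       "imxrt700",
       neutron_converter_flavor="SDK_25_06",
       operators_not_to_delegate=["aten.softmax.int"],  # hypothetical name format
   )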

-------------------------
Custom Delegation Options
-------------------------

By default, the Neutron backend is defensive: it does not delegate operators whose delegability cannot be decided statically during partitioning. But as the model author you typically have insight into the model, so you can allow opportunistic delegation in some cases. For the list of options, see
`CustomDelegationOptions <https://github.com/pytorch/executorch/blob/release/1.0/backends/nxp/backend/custom_delegation_options.py#L11>`_.
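
A sketch of passing the options to the partitioner (default options are used here; the available fields are defined in the linked dataclass):

.. code-block:: python

   from executorch.backends.nxp.backend.custom_delegation_options import (
       CustomDelegationOptions,
   )

   partitioner = NeutronPartitioner(
       compile_spec=compile_spec,
       custom_delegation_options=CustomDelegationOptions(),
   )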

================
Operator Support
================

Operators are the building blocks of the ML model. See `IRs <https://docs.pytorch.org/docs/stable/torch.compiler_ir.html>`_ for more information on the PyTorch operator set.

This section lists the Edge operators supported by the Neutron backend.
For the detailed constraints of the operators, see the conditions in the ``is_supported_*`` functions in the `node converters <https://github.com/pytorch/executorch/blob/release/1.0/backends/nxp/neutron_partitioner.py#L192>`_.

.. csv-table:: Operator Support
   :file: op-support.csv
   :header-rows: 1
   :widths: 20 15 30 30
   :align: center
docs/source/backends/nxp/nxp-quantization.md

Lines changed: 84 additions & 0 deletions

@@ -0,0 +1,84 @@
# NXP eIQ Neutron Quantization

The eIQ Neutron NPU requires the delegated operators to be quantized. To quantize a PyTorch model for the Neutron backend, use the `NeutronQuantizer` from `backends/nxp/quantizer/neutron_quantizer.py`.
The `NeutronQuantizer` is configured to quantize the model with the quantization scheme supported by the eIQ Neutron NPU.

### Supported Quantization Schemes

The Neutron delegate supports the following quantization schemes:

- Static quantization with 8-bit symmetric weights and 8-bit asymmetric activations (via the PT2E quantization flow), per-tensor granularity.
- The following operators are currently supported:
  - `aten.abs.default`
  - `aten.adaptive_avg_pool2d.default`
  - `aten.addmm.default`
  - `aten.add.Tensor`
  - `aten.avg_pool2d.default`
  - `aten.cat.default`
  - `aten.conv1d.default`
  - `aten.conv2d.default`
  - `aten.dropout.default`
  - `aten.flatten.using_ints`
  - `aten.hardtanh.default`
  - `aten.hardtanh_.default`
  - `aten.linear.default`
  - `aten.max_pool2d.default`
  - `aten.mean.dim`
  - `aten.pad.default`
  - `aten.permute.default`
  - `aten.relu.default` and `aten.relu_.default`
  - `aten.reshape.default`
  - `aten.view.default`
  - `aten.softmax.int`
  - `aten.tanh.default`, `aten.tanh_.default`
  - `aten.sigmoid.default`

### Static 8-bit Quantization Using the PT2E Flow

To perform 8-bit quantization with the PT2E flow, perform the following steps prior to exporting the model to edge:

1) Create an instance of the `NeutronQuantizer` class.
2) Use `torch.export.export` to export the model to ATen Dialect.
3) Call `prepare_pt2e` with the instance of the `NeutronQuantizer` to annotate the model with observers for quantization.
4) As static quantization is required, run the prepared model with representative samples to calibrate the quantized tensor activation ranges.
5) Call `convert_pt2e` to quantize the model.
6) Export and lower the model using the standard flow.

The output of `convert_pt2e` is a PyTorch model which can be exported and lowered using the normal flow. As it is a regular PyTorch model, it can also be used to evaluate the accuracy of the quantized model using standard PyTorch techniques.

```python
import torch
import torchvision.models as models
from torchvision.models.mobilenetv2 import MobileNet_V2_Weights
from executorch.backends.nxp.quantizer.neutron_quantizer import NeutronQuantizer
from executorch.backends.nxp.neutron_partitioner import NeutronPartitioner
from executorch.backends.nxp.nxp_backend import generate_neutron_compile_spec
from executorch.exir import to_edge_transform_and_lower
from torchao.quantization.pt2e.quantize_pt2e import convert_pt2e, prepare_pt2e

model = models.mobilenetv2.mobilenet_v2(weights=MobileNet_V2_Weights.DEFAULT).eval()
sample_inputs = (torch.randn(1, 3, 224, 224),)

quantizer = NeutronQuantizer()  # (1)

training_ep = torch.export.export(model, sample_inputs).module()  # (2)
prepared_model = prepare_pt2e(training_ep, quantizer)  # (3)

for cal_sample in [torch.randn(1, 3, 224, 224)]:  # Replace with representative model inputs
    prepared_model(cal_sample)  # (4) Calibrate

quantized_model = convert_pt2e(prepared_model)  # (5)

compile_spec = generate_neutron_compile_spec(
    "imxrt700",
    operators_not_to_delegate=None,
    neutron_converter_flavor="SDK_25_06",
)

et_program = to_edge_transform_and_lower(  # (6)
    torch.export.export(quantized_model, sample_inputs),
    partitioner=[NeutronPartitioner(compile_spec=compile_spec)],
).to_executorch()
```

See [PyTorch 2 Export Post Training Quantization](https://docs.pytorch.org/ao/main/tutorials_source/pt2e_quant_ptq.html) for more information.
docs/source/backends/nxp/op-support.csv

Lines changed: 19 additions & 0 deletions

@@ -0,0 +1,19 @@
Operator,Compute DType,Quantization,Constraints
aten.abs.default,int8,static int8,
aten._adaptive_avg_pool2d.default,int8,static int8,"ceil_mode=False, count_include_pad=False, divisor_override=False"
aten.addmm.default,int8,static int8,2D tensor only
aten.add.Tensor,int8,static int8,"alpha = 1, input tensors of same rank"
aten.avg_pool2d.default,int8,static int8,"ceil_mode=False, count_include_pad=False, divisor_override=False"
aten.cat.default,int8,static int8,"input_channels % 8 == 0, output_channels % 8 == 0"
aten.clone.default,int8,static int8,
aten.constant_pad_nd.default,int8,static int8,"H or W padding only"
aten.convolution.default,int8,static int8,"1D or 2D convolution, constant weights, groups=1 or groups=channels_count (depthwise)"
aten.hardtanh.default,int8,static int8,"supported ranges: <0,6>, <-1,1>, <0,1>, <0,inf>"
aten.max_pool2d.default,int8,static int8,"dilation=1, ceil_mode=False"
aten.max_pool2d_with_indices.default,int8,static int8,"dilation=1, ceil_mode=False"
aten.mean.dim,int8,static int8,"4D tensor only, dims = [-1,-2] or [-2,-1]"
aten.mm.default,int8,static int8,2D tensor only
aten.relu.default,int8,static int8,
aten.tanh.default,int8,static int8,
aten.view_copy.default,int8,static int8,
aten.sigmoid.default,int8,static int8,
docs/source/backends/nxp/tutorials/nxp-basic-tutorial.md

Lines changed: 25 additions & 0 deletions

@@ -0,0 +1,25 @@
# Preparing a Model for NXP eIQ Neutron Backend

This guide demonstrates how to use the ExecuTorch AoT flow to convert a PyTorch model to the ExecuTorch
format and delegate the model computation to the eIQ Neutron NPU using the eIQ Neutron backend.

## Step 1: Environment Setup

This tutorial is intended to be run on Linux and uses Conda or a Python virtual environment for environment management. For full setup details and system requirements, see [Getting Started with ExecuTorch](/getting-started).

Create a Conda environment and install the ExecuTorch Python package.
```bash
conda create -y --name executorch python=3.12
conda activate executorch
pip install executorch
```

Run the setup.sh script to install the neutron-converter:
```commandline
$ ./examples/nxp/setup.sh
```

## Step 2: Model Preparation and Running the Model on Target

See the example `aot_neutron_compile.py` and its [README](https://github.com/pytorch/executorch/blob/release/1.0/examples/nxp/README.md) file; a quick invocation is sketched below.
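
For example, to convert and delegate the bundled CifarNet model (invocation taken from the wrapper script documented on the backend overview page; run from the repository root):

```commandline
$ ./examples/nxp/aot_neutron_compile.sh cifar10
```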
docs/source/backends/nxp/tutorials/nxp-tutorials.md

Lines changed: 10 additions & 0 deletions

@@ -0,0 +1,10 @@
# NXP Tutorials

**→{doc}`nxp-basic-tutorial` — Lower and run a model on the NXP eIQ Neutron backend.**

```{toctree}
:hidden:
:maxdepth: 1

nxp-basic-tutorial
```

docs/source/embedded-nxp.md

Lines changed: 1 addition & 1 deletion
@@ -1 +1 @@
-```{include} backends-nxp.md
+```{include} backends/nxp/nxp-overview.md
