Image classification getting started - NUCLEO-H743ZI2

The purpose of this package is to enable an image classification application on an STM32 board.

This project provides a real-time embedded environment for STM32 microcontrollers to execute an X-CUBE-AI generated model targeting image classification applications.

Directory contents

This repository is structured as follows:

| Directory | Content |
|-----------|---------|
| `Application\<STM32_Board_Name>\STM32CubeIDE` | STM32CubeIDE project files; IDE-related files only |
| `Application\<STM32_Board_Name>\Inc` | Application include files |
| `Application\<STM32_Board_Name>\Src` | Application source files |
| `Application\Network\*` | Placeholder for the AI C-model; files generated by STM32Cube.AI |
| `Drivers\CMSIS` | CMSIS drivers |
| `Drivers\BSP` | Board Support Package and drivers |
| `Drivers\STM32XXxx_HAL_Driver` | Hardware Abstraction Layer for STM32XXxx family products |
| `Middlewares\ST\STM32_AI_Runtime` | Placeholder for the AI runtime library |
| `Middlewares\ST\STM32_ImageProcessing_Library` | Common image processing functions |
| `Middlewares\ST\STM32_USB_Camera` | USB camera library |
| `Middlewares\ST\STM32_USB_Device` | USB device library |
| `Middlewares\ST\STM32_USB_Display` | USB display library |
| `Middlewares\ST\STM32_USB_Host` | USB host library |
| `Middlewares\Utilities\Fonts` | API to manage fonts |
| `Middlewares\Utilities\lcd` | API to manage the LCD screen |

Before you start

Hardware and Software environment

To run these image classification application examples, you need at least one hardware part for each of the following components.

STM32 board:

Camera:

Display:

  • USB - Any USB host able to display a camera/webcam output (most of the PCs can handle it)
  • SPI - An LCD screen driven by an ILI9341 controller

Only this hardware is supported for now.

Note: using jumper wires to connect the DCMI camera and the SPI display simultaneously may cause interference, as high-speed protocols are involved. Connecting the camera and the display on opposite sides of the board helps avoid this interference.

Tools installations

This getting started requires STM32CubeIDE as well as X-CUBE-AI v7.3.0 or later.

You can find the installation instructions in the parent README of the deployment part and in the general README of the model zoo.

Hardware layout

B-CAMS-OMV

The pinout of the NUCLEO-H743ZI2 board for the OV5640 sensor is described in the following table:

| NUCLEO-H743ZI2 pin | OV5640 I/O |
|--------------------|------------|
| PB8  | I2C SCL |
| PB9  | I2C SDA |
| PC6  | DCMI D0 |
| PC7  | DCMI D1 |
| PC8  | DCMI D2 |
| PC9  | DCMI D3 |
| PC11 | DCMI D4 |
| PD3  | DCMI D5 |
| PE5  | DCMI D6 |
| PE6  | DCMI D7 |
| PA6  | DCMI PXCLK |
| PA4  | DCMI HSYNC |
| PB7  | DCMI VSYNC |
| PF2  | Sensor RST |
| PF3  | Sensor PWDN |

Here is a scheme of the connections between the NUCLEO-H743ZI2 board and the B-CAMS-OMV camera module:

B-CAMS-OMV layout

As the B-CAMS-OMV module already provides XCLK to the OV5640 sensor through the oscillator X1, the XCLK pin of the module must be left unconnected.

ILI9341 SPI LCD screen

The pinout of the NUCLEO-H743ZI2 board for the ILI9341 SPI display is described in the following table:

| NUCLEO-H743ZI2 pin | ILI9341 I/O |
|--------------------|-------------|
| PB10 | SPI SCK |
| PB15 | SPI MOSI |
| PB1  | LED/BKL (backlight control) |
| PB6  | RST |
| PB11 | CS |
| PB12 | DC |

Here is a scheme of the connections between the NUCLEO-H743ZI2 board and the ILI9341 SPI display:

ILI9341 layout

Note: the ILI9341 LCD driver must receive new image data at a high rate to display a smooth moving image. As the image data is sent after each inference ends, the framerate is dictated by the inference time. For this reason the framerate is low, and flickering may appear on the screen.

Arducam SPI camera

The pinout of the NUCLEO-H743ZI2 board for the SPI camera is described in the following table:

| NUCLEO-H743ZI2 pin | Arducam I/O |
|--------------------|-------------|
| PB3  | SPI SCK |
| PB5  | SPI MISO |
| PB4  | SPI MOSI |
| PA15 | SPI CS |

Here is a scheme of the connections between the NUCLEO-H743ZI2 board and the Arducam Mega camera module:

Arducam layout

Note: the framerate of the application when using the Arducam SPI camera is low. This is due to the maximum SPI speed of the camera, which is 8 MHz.

USB display & USB camera

To allow a USB host or a USB device to communicate with the STM32, you need to connect it to the STM32 micro-USB port of the NUCLEO-H743ZI2. The STM32 micro-USB port is the following one:

USB port

Note: as there is only one USB port on the NUCLEO-H743ZI2 board, the USB camera option and the USB display option can't be selected at the same time.

Deployment

Generate C code from tflite file

This repository does not provide the AI C-model generated by X-CUBE-AI; you need to generate it yourself using the deployment script of the model zoo.

Build and deploy

You should use the deploy.py script to automatically build and deploy the program on the target (if the hardware is connected).

After the deployment script has been launched once, you can launch the Application\NUCLEO-H743ZI2\STM32CubeIDE\.project with STM32CubeIDE. With the IDE you can modify, build and deploy on the target.

USB Display

Launch the camera application on the host. On Windows, you can find it by typing "camera" in the search bar.

USB port

If needed, click on the camera switch button to switch to the STM32 Usb FS Display. Then reset the board. The welcome screen of the application should appear:

USB port

After a few seconds, the output of the neural network should be printed, with the camera input displayed:

USB port

Note: due to USB interrupts occurring during inference, the STM32 may take longer to run the inference. For example, the MobileNetV2 flower classifier shown in the illustrations goes from 93 ms/inference to 130 ms/inference when the USB display is used.

USB Camera

To connect a webcam to the STM32, you need a female-USB to male-micro-USB adapter. Connect the webcam to the female USB side and the STM32 to the male micro-USB side. Then reset the board. The main reasons why the application may not work are the following:

  • If the red LED next to the STM32 micro-USB port (LD7) lights up, the USB power supply part of the board has encountered an overcurrent issue and can't handle the webcam power supply. Your webcam is not operating properly and may be broken.
  • If the red LED next to the ST-Link micro-USB port (LD6) lights up, the overall board has encountered an overcurrent issue. This problem can be solved by using an external power supply connected to the Vin pin of the board and setting the power source jumper JP2 properly, as shown below. The power supply output should be set to 7 V and 800 mA.
  • If the red LED (LD3) lights up, the webcam may be unable to output an MJPEG stream in QVGA format. This problem may appear with the latest webcams. Please try another webcam.
  • If the application doesn't work but no red LED lights up, the application may have failed to enumerate the USB device. Please try another webcam.

Getting started deep dive

The purpose of this package is to enable an image classification application on an STM32 board.

This package also provides a feature-rich image processing library (STM32_ImageProcessing_Library software component).

Software Architecture

Processing workflow

The software executes an image classification on each image captured by the camera. The framerate depends on each step of the processing workflow. For the USB display, the framerate is limited by the USB bandwidth (12 Mbit/s for USB FS), as the program waits for the image to be sent to the host before capturing a new one.
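As a back-of-the-envelope check, the raw USB FS bandwidth caps the display framerate at roughly 9.8 fps. This sketch assumes QVGA (320x240) RGB565 frames and ignores USB protocol overhead, so the real framerate is lower:

```c
#include <stdio.h>

/* Upper bound on the USB Full-Speed display framerate, ignoring
 * protocol overhead. Assumes QVGA (320x240) frames in RGB565
 * (2 bytes per pixel); these figures are illustrative. */
static double usb_fs_max_fps(int width, int height, int bytes_per_pixel)
{
    const double usb_fs_bits_per_s = 12e6;  /* USB FS raw bandwidth */
    double bits_per_frame = (double)width * height * bytes_per_pixel * 8.0;
    return usb_fs_bits_per_s / bits_per_frame;  /* 320x240x2 -> ~9.77 fps */
}
```

With the framerate bound this far below the display refresh rate, the inference time, not the USB link, usually remains the dominant limit in practice.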

processing Workflow schema

Captured_image: image from the camera

Network_Preprocess - 3 steps:

  • ImageResize: rescale the image to fit the resolution needed by the network
  • PixelFormatConversion: convert image format (usually RGB565) to fit the network color channels (RGB888 or Grayscale)
  • PixelValueConversion: convert to pixel type used by the network (uint8 or int8)

HxWxC: Height, Width and Number of color channels, format defined by the neural network
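The PixelFormatConversion step can be sketched as follows. This is an illustrative RGB565-to-RGB888 expansion, not the exact routine from STM32_ImageProcessing_Library; it replicates the high bits into the low bits so that full-scale 5/6-bit values map to 255:

```c
#include <stdint.h>

/* Expand one RGB565 pixel (as produced by the camera) into RGB888
 * for the network input. Illustrative sketch. */
static void rgb565_to_rgb888(uint16_t pixel, uint8_t *r, uint8_t *g, uint8_t *b)
{
    uint8_t r5 = (pixel >> 11) & 0x1F;
    uint8_t g6 = (pixel >> 5)  & 0x3F;
    uint8_t b5 =  pixel        & 0x1F;

    *r = (uint8_t)((r5 << 3) | (r5 >> 2));  /* 5 bits -> 8 bits */
    *g = (uint8_t)((g6 << 2) | (g6 >> 4));  /* 6 bits -> 8 bits */
    *b = (uint8_t)((b5 << 3) | (b5 >> 2));  /* 5 bits -> 8 bits */
}
```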

Network_Inference: call AI C-model network

Network_Postprocess: call Output_Dequantize to convert the output type (only float32 for now)
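Dequantization of this kind typically applies the affine formula float = scale * (q - zero_point). The function below is an illustrative sketch of that formula, not the generated Output_Dequantize code; scale and zero_point come from the quantization parameters embedded in the C-model:

```c
#include <stdint.h>

/* Affine dequantization of an int8 output tensor to float32.
 * scale and zero_point are the tensor's quantization parameters;
 * names here are illustrative. */
static void dequantize_int8(const int8_t *q, float *out, int n,
                            float scale, int zero_point)
{
    for (int i = 0; i < n; i++)
        out[i] = scale * (float)(q[i] - zero_point);
}
```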

Memory layout

The application software uses several buffers. The following diagram describes how they are used and which functions interact with them.

Memory Layout schema

Unlike the STM32H747I-DISCO, the NUCLEO-H743ZI2 embeds no external SDRAM, so memory is limited to the internal RAM. The largest RAM block is a 512KB AXI SRAM, which needs to be shared between the Lcd_Display buffer and the activation_buffer. For this reason, the activation_buffer is limited to 360KB, and the models used in this example need to be small enough to fit in this buffer.
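This budget can be guarded at compile time. The sizes below mirror the figures above (a QVGA RGB565 display buffer plus a 360KB activation buffer); the macro names are illustrative, not the project's actual symbols:

```c
/* Compile-time check that the display and activation buffers together
 * fit in the 512KB AXI SRAM. Illustrative sizes and names. */
#define AXI_SRAM_SIZE        (512u * 1024u)
#define LCD_DISPLAY_SIZE     (320u * 240u * 2u)   /* QVGA, RGB565 */
#define ACTIVATION_BUF_SIZE  (360u * 1024u)

_Static_assert(LCD_DISPLAY_SIZE + ACTIVATION_BUF_SIZE <= AXI_SRAM_SIZE,
               "buffers exceed AXI SRAM");
```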

Model configuration

The '<getting-start-install-dir>/Application/NUCLEO-H743ZI2/Inc/CM7/ai_model_config.h' file contains configuration information.

This file is generated by the deploy.py script.

The number of output classes for the model:

#define NB_CLASSES          (5)

The dimension of the model input tensor:

#define INPUT_HEIGHT        (128)
#define INPUT_WIDTH         (128)
#define INPUT_CHANNELS      (3)

A table containing the list of the labels for the output classes:

#define CLASSES_TABLE const char* classes_table[NB_CLASSES] = {\
   "daisy" ,   "dandelion" ,   "roses" ,   "sunflowers" ,   "tulips"}\

The type of resizing algorithm that should be used by the preprocessing stage:

#define NO_RESIZE                   (0)
#define INTERPOLATION_NEAREST       (1)

#define PP_RESIZING_ALGO       INTERPOLATION_NEAREST

In version V1.0 of the package, only the nearest-neighbor algorithm is supported.
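Nearest-neighbor resizing picks, for each destination pixel, the closest source pixel. The following minimal single-channel sketch shows the idea; it is not the library routine itself:

```c
#include <stdint.h>

/* Nearest-neighbor rescale of an 8-bit single-channel image, the
 * algorithm selected by PP_RESIZING_ALGO = INTERPOLATION_NEAREST.
 * Illustrative sketch. */
static void resize_nearest(const uint8_t *src, int sw, int sh,
                           uint8_t *dst, int dw, int dh)
{
    for (int y = 0; y < dh; y++) {
        int sy = y * sh / dh;           /* nearest source row */
        for (int x = 0; x < dw; x++) {
            int sx = x * sw / dw;       /* nearest source column */
            dst[y * dw + x] = src[sy * sw + sx];
        }
    }
}
```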

Input frame aspect ratio algorithms:

#define ASPECT_RATIO_FIT            (0)
#define ASPECT_RATIO_CROP           (1)
#define ASPECT_RATIO_PADDING        (2)

#define ASPECT_RATIO_MODE ASPECT_RATIO_FIT

The pixel color format that is expected by the neural network model:

#define RGB_FORMAT        (1)
#define BGR_FORMAT        (2)
#define GRAYSCALE_FORMAT  (3)
#define PP_COLOR_MODE    RGB_FORMAT

Data format supported for the input and/or the output of the neural network model:

#define UINT8_FORMAT     (1)
#define INT8_FORMAT      (2)
#define FLOAT32_FORMAT   (3)

Data format that is expected by the input layer of the quantized neural network model (only UINT8 and INT8 formats are supported in V1.0):

#define QUANT_INPUT_TYPE    INT8_FORMAT
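For an INT8 input layer, the uint8 camera pixels [0..255] are typically remapped to int8 [-128..127] by subtracting 128, which is equivalent to a bitwise XOR with 0x80. This sketch assumes the common case of a quantized input whose zero-point corresponds to that offset; it is not the generated conversion code:

```c
#include <stdint.h>

/* Remap uint8 pixel values to int8 when QUANT_INPUT_TYPE is
 * INT8_FORMAT. Assumes an input zero-point of -128 (illustrative). */
static void pixels_u8_to_s8(const uint8_t *src, int8_t *dst, int n)
{
    for (int i = 0; i < n; i++)
        dst[i] = (int8_t)(src[i] ^ 0x80);  /* same as subtracting 128 */
}
```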

Data format that is provided by the output layer of the quantized neural network model (only FLOAT32 format is supported in V1.0):

#define QUANT_OUTPUT_TYPE    FLOAT32_FORMAT

Display interfaces supported for the application:

#define DISPLAY_INTERFACE_USB (1)
#define DISPLAY_INTERFACE_SPI (2)

Display interface selection:

#define DISPLAY_INTERFACE DISPLAY_INTERFACE_USB

Camera interfaces supported for the application:

#define CAMERA_INTERFACE_DCMI (1)
#define CAMERA_INTERFACE_USB  (2)
#define CAMERA_INTERFACE_SPI  (3)

Camera interface selection:

#define CAMERA_INTERFACE CAMERA_INTERFACE_DCMI

Camera DCMI sensors supported for the application:

#define CAMERA_SENSOR_OV5640 (1)

Camera DCMI sensor selection:

#define CAMERA_SENSOR CAMERA_SENSOR_OV5640

The rest of the model details will be embedded in the .c and .h files generated by the tool X-CUBE-AI.

Image processing

The frame captured by the camera is in a standard video format. As the neural network needs a square-shaped input image, three solutions are provided to reshape the captured frame before running the inference:

  • ASPECT_RATIO_FIT: the frame is squeezed to fit into a square with a side equal to the height of the captured frame. The aspect ratio is modified.

ASPECT_RATIO_FIT

  • ASPECT_RATIO_CROP: the frame is cropped to fit into a square with a side equal to the height of the captured frame. The aspect ratio is preserved, but some data is lost on each side of the image.

ASPECT_RATIO_CROP

  • ASPECT_RATIO_PADDING: the frame is padded with black borders to fit into a square with a side equal to the width of the captured frame. The aspect ratio is preserved.

ASPECT_RATIO_PADDING
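The crop and padding geometries described above reduce to simple rectangle computations for a landscape frame (width >= height). The helper below is illustrative, not the library API:

```c
/* Region of interest for the aspect-ratio modes; illustrative types. */
typedef struct { int x, y, w, h; } roi_t;

/* ASPECT_RATIO_CROP: centered square of side `height`. */
static roi_t crop_roi(int width, int height)
{
    roi_t r = { (width - height) / 2, 0, height, height };
    return r;
}

/* ASPECT_RATIO_PADDING: the frame sits in a square canvas of side
 * `width`, with black bars above and below. */
static roi_t padding_roi(int width, int height)
{
    roi_t r = { 0, (width - height) / 2, width, height };
    return r;
}
```

For a 320x240 QVGA frame, cropping keeps a centered 240x240 region (40 pixels discarded on each side), while padding places the frame at a 40-pixel vertical offset inside a 320x320 canvas.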

Limitations

  • Supports only X-CUBE-AI v7.3.0 or later
  • Supports only neural network models whose size fits in the SoC internal memory
  • Supports only 8-bit quantized models
  • Supports only models with an activation buffer smaller than 360KB
  • The input layer of the quantized model supports only data in UINT8 or INT8 format
  • The output layer of the quantized model provides data in FLOAT32 format only
  • Limited to the STM32CubeIDE / Arm GCC toolchain; IAR and Keil support is coming
  • Manageable through STM32CubeIDE (open, modify, debug)