A-Eye project, by Elsys DESIGN

An awesome project for embedded AI

Developers: Guilhem ROLLAND and Thomas DU BOISROUVRAY

This repository contains the documented source code for this project.

About The Project

The goal of the project was to build a system capable of sorting images into those containing boats and those without.

Use case

Use case diagram

The target for this system was a Zynq processor. Our system is split between an embedded system on a Zynq board and an application on a computer.

Application

A-Eye_Controller

Here you can see the A-Eye Controller app (on a Windows PC), which controls the embedded system. From there you can choose between auto mode (processing images with AI) and manual mode (where you take pictures manually).

Depending on the current mode, the application shows boat images (in auto mode) or the latest manually taken picture (in manual mode).

(back to top)

Built With

Hardware

This project was first developed for a Zynq target, and the system was later migrated to a Kria SOM. We used the ZYBO Z7 board on PetaLinux (DEPRECATED) for development. For the Kria, the system targets a KV260.

A convolution IP is developed in VHDL for hardware acceleration.

Software

  • TensorFlow: library for high-level AI
  • Colab: cloud platform providing GPUs for AI training
  • Cdeotte's C CNN implementation
  • TCP & MQTT: communication
  • C, C# and Python for software development
  • Gherkin: test automation
  • CMake: compiling and building the project
  • Doxygen: documentation of the project
  • .NET 6: framework used for A-Eye_Controller
  • Jenkins: continuous integration and test-driven development

(back to top)

Getting Started

Prerequisites

Before anything else, you need to set up your board. This can be done by creating and flashing a PetaLinux image (DEPRECATED), or, on the Kria, by using the provided Ubuntu 20.04.

For PetaLinux (DEPRECATED), please refer to this user guide. Here are the most important commands:

petalinux-create -t project --template zynq -n petaFromVivado # Create the workspace
petalinux-config --get-hw-description # Load the HDF file; it must be copied there beforehand!
petalinux-config # Enter global configuration
petalinux-config -c rootfs # Enter rootfs configuration
petalinux-build # Produce image.ub, system.dtb and the rootfs files
petalinux-package --boot --force --fsbl images/linux/zynq_fsbl.elf --fpga images/linux/*_wrapper.bit --u-boot # Produce BOOT.BIN

Then, partition the SD card and copy the files as described in the user guide above; a minimal sketch is given below.
Once you have a fully functional OS on the board, you can proceed with the installation.
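
As a rough illustration, here is one way to prepare the SD card from a Linux host. The device name /dev/sdX, the partition layout and the rootfs archive name are assumptions based on typical PetaLinux Zynq setups; the user guide remains the authoritative reference.

# Assumption: the SD card shows up as /dev/sdX -- check with lsblk before writing anything!
sudo fdisk /dev/sdX                                        # Create two partitions: a small FAT32 boot partition and an ext4 rootfs partition
sudo mkfs.vfat -n BOOT /dev/sdX1
sudo mkfs.ext4 -L rootfs /dev/sdX2
sudo mount /dev/sdX1 /mnt
sudo cp images/linux/BOOT.BIN images/linux/image.ub /mnt   # Boot artifacts produced by petalinux-build / petalinux-package
sudo umount /mnt
sudo mount /dev/sdX2 /mnt
sudo tar -xzf images/linux/rootfs.tar.gz -C /mnt           # Assumption: the rootfs was built as a tar.gz archive
sudo umount /mnt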

Install and run the A-Eye app

  1. Connect to the board using SSH and open a bash terminal
git clone https://github.com/GuilhemROLLAND/A-Eye.git #clone the repository

If the board isn't connected to the internet, you can clone the repository locally and transfer it to the board over SSH (with WinSCP or any tool you like), for example as sketched below.
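
A minimal sketch of such a transfer from a Linux or WSL shell; the board address 192.168.1.10 and the root user are assumptions to adapt to your network:

git clone https://github.com/GuilhemROLLAND/A-Eye.git   # Clone on the host machine
scp -r A-Eye [email protected]:/home/root/               # Copy the whole repository to the board (IP and user are assumptions)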

  2. Then, you have to put the dataset on the board. It must be unzipped and placed in the A-Eye folder, at the root of the project.

  3. Build the embedded project. From your working directory:

cd A_Eye/A_Eye_root/
./restart
  4. Once the embedded part is running, you can interact with it using A-Eye_Controller.
    Using Visual Studio: you need Visual Studio with C# support (WinForms, .NET 6). Then open the .sln in the A-Eye_Controller folder.
    You can easily build the application and use it! A command-line alternative is sketched after this list.
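
If you prefer the command line, the .NET 6 SDK can also build the controller from a Windows terminal. A minimal sketch, assuming the solution file sits directly in the A-Eye_Controller folder (an assumption):

cd A-Eye_Controller
dotnet build   # Requires the .NET 6 SDK; builds the solution/project found in this folder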

(back to top)

Usage

Here you can find different use cases of our project.

Receive boat images from a dataset:

  1. Put the dataset on the embedded filesystem. The dataset must be named "dataset" and contain two folders, one with boat images named "bateau" and one without boats named "pas_bateau" (see the layout sketched after this list). You can download an example dataset here
  2. Go to the folder A-Eye_root/ and run restart.sh
  3. On your host machine, run the application
  4. Set the correct IP and set the config as AUTO MODE (don't forget to click on the autoload button)
  5. Watch the boat images displayed on your application :)
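
A minimal sketch of putting the dataset in place on the board, assuming it was transferred as dataset.zip into the A-Eye folder (the archive name is an assumption):

cd A-Eye
unzip dataset.zip   # Must produce a folder named "dataset"
ls dataset          # Expected contents: bateau/  pas_bateau/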

(back to top)

Documentation

Introduction

An important part of the project is to have clear and complete documentation.
You can find a lot of UML documentation in the A-Eye_Documentation folder.

On Ubuntu

You can also generate the Doxygen documentation for all source code in the project by following these steps:

sudo apt install doxygen
git clone https://github.com/GuilhemROLLAND/A-Eye.git
cd A-Eye
doxygen ./Doxyfile

On Windows

To generate HTML pages for the source code documentation, we advise you to use Doxygen and load the Doxyfile located in the root folder of the project into doxywizard. You will then find index.html in the generated "doc" folder. Alternatively, you can run the above commands from Linux in a WSL prompt, as sketched below.
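
A minimal sketch of the WSL route from a Windows prompt opened at the repository root, assuming a WSL Ubuntu distribution is already installed (an assumption):

wsl sudo apt install doxygen   # Install Doxygen inside WSL
wsl doxygen ./Doxyfile         # Run it against the Doxyfile at the repository root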

(back to top)

Performance

Platform   AI Architecture   Software Architecture   Processing time (s)   Loading time (min)
Zybo       arch_1            monothread              50                    10
Kria       arch_1            monothread              35                    7
Kria       arch_1            multithread             10                    3
Kria       arch_2            multithread             3                     3
Kria       arch_3            multithread             1.3                   3

Architectures description

  • arch_1 : ["57600", "C3:32:1", "P2", "C3:32:1", "P2", "C3:32:1", "P2", "C3:32:1", "P2", "32", "2"] with 91 % accuracy
  • arch_2 : ["57600", "C3:8:1", "P2", "C3:16:1", "P2", "C3:32:1", "P2", "C3:32:1", "P2", "32", "2"] with 89 % accuracy
  • arch_3 : ["57600", "C3:4:1", "P2", "C3:8:1", "P2", "C3:32:1", "P2", "C3:32:1", "P2", "32", "2"] with 89 % accuracy

Comments

Loading time: time needed to load the weights and biases from a JSON file.
Processing time: time needed to process one 640x480 RGB BMP picture.
With multithreading, we need all of the CPU (400%, i.e. all four cores) to achieve these times. Thus, in the demo using VLC video rendering, which consumes 150% of the CPU, we can only achieve 2.5 seconds of processing time. Multithreading is used when loading the parameters and in the convolutional processing.
The server uses less than 50% of the CPU at peak.

(back to top)

Contact

Guilhem ROLLAND - [email protected]
Embedded developer | ELSYS DESIGN

Thomas DU BOISROUVRAY - [email protected]
Embedded developer | ELSYS DESIGN

Clément LEROY - [email protected]
FPGA developer | ELSYS DESIGN

Noémie ROLLAND - [email protected]
FPGA developer | ELSYS DESIGN

Arnaud DANIEL - Lead tech | ELSYS DESIGN

(back to top)