
Documentation

linusrj edited this page Aug 2, 2022 · 15 revisions

MicroGUI Core

MicroGUI-Embedded is currently built on the Arduino ESP32 Core. This is because the display (WT32-SC01) used for development of the MicroGUI MVP (minimum viable product) is based on an ESP32. The code for the MicroGUI Core is found here.

Features

  • Easily render a GUI created with the MicroGUI web application
  • Provides a touch interface to GUI objects
  • Allows for event-driven main sketch
  • Very little boilerplate code needed to get started

Core Dependencies

The MicroGUI Core library is built with a few library dependencies to achieve its functionality.

For rendering graphics, MicroGUI-Embedded makes use of LVGL by LVGL.

For display drivers, MicroGUI-Embedded currently makes use of LovyanGFX by lovyan03. This library is used solely because it works with the WT32-SC01 display; there are many other libraries that could provide display drivers. Depending on interest in and future development of this project, switching to another display driver library or digging deeper into LovyanGFX might be beneficial for supporting a wider range of displays.

For parsing JSON from the MicroGUI web application, this library makes use of ArduinoJson by bblanchon.

For managing objects created when rendering a GUI, this library uses LinkedList by ivanseidel.

How does it work?

Have a look at the code to get a proper picture of what is going on behind the scenes. However, the basic working principles of the MicroGUI-Embedded Core are described below.

The MicroGUI Core is built to provide users of the library with a simple interface to graphics and on-screen objects. Events are a big part of this interface; they are how users of the library know when an on-screen object has changed state or value. LVGL events are triggered when on-screen objects are clicked, and these events are then translated into MicroGUI events which are accessible from the user's main sketch. It is crucial that the mgui_run() function is called inside the main loop, both for the user to receive triggered events and for the GUI to update. It is also important that the user keeps their code non-blocking so as not to interrupt and block the GUI.

Whenever MicroGUI-Embedded is initialized with the mgui_init() function, it looks for a GUI to render. This GUI is parsed to extract its height and width, so that the display drivers know how to initialize the screen properly. After the screen has been initialized, the mgui_init() function proceeds to render the document. mgui_render(char *document) is the function for rendering GUI components from a JSON string. If the GUI JSON contains a button, the rendering function calls mgui_render_button(), and so on. Each function for rendering a specific object follows the same principles but differs slightly between types due to the underlying LVGL implementations.

MicroGUI Remote

MicroGUI Remote is the extension that enables displays to be connected to WiFi networks and later monitored and controlled remotely. The code for MicroGUI Remote is found here.

Features

  • Remote monitoring & control of GUI
  • Rendering remotely uploaded GUIs on the fly
  • Non-blocking WiFi handling
  • Automatic connection to known WiFi network
  • Custom light-weight captive portal for entering WiFi network credentials

Remote Dependencies

The Arduino ESP32 Core already includes many libraries for Arduino compatibility.

Beyond these, ESPAsyncWebserver is utilized by MicroGUI-Embedded to handle WiFi traffic asynchronously.

How does it work?

Have a look at the code to get a proper picture of what is going on behind the scenes. However, the basic working principles of the Remote extension are described below.

The Remote extension is built to be event-driven, meaning that code executes in response to events from different subsystems and there is no continuously running loop. The reason for this is to keep it non-blocking for the MicroGUI Core, so code execution proceeds with virtually no interruptions from the Remote extension.

Whenever the Remote extension is initialized with mgui_remote_init(), it starts an AsyncWebServer and attaches handlers for the WebSocket route as well as the DNS server for the captive portal. It then looks for WiFi network credentials stored in flash memory. If credentials are found, it immediately tries to connect to that network. If the connection succeeds, the user is ready to use all features included in the Remote extension. If it fails, however, the ESP32 is set to AP (access point) mode, which allows another device (mobile phone or computer) to connect to it directly. Upon connecting, the user is redirected to a captive portal where they can enter new WiFi network credentials. When the credentials are submitted, the display stores them and tries to connect to the new WiFi network. The ESP32 will not leave AP mode until it finds a known network; once it does, it switches to station mode. After WiFi is completely set up, the Remote extension handles disconnects and reconnections automatically.

Remote monitoring & control happens via WebSockets. It is assumed that these connections always come from the MicroGUI web application.

Whenever a WebSocket client connects to the display and requests the GUI document, the display sends it in chunks (due to limits in the library). The display then tells the client when the entire document has been sent, and the client can try to render the GUI. This is what happens in the MicroGUI web application, which mirrors exactly what is on the actual display. If an object changes state on the client side, that change is sent to the display, which makes the equivalent object on the display change state as well. This works both ways, hence monitor & control. Changes in the display's GUI are broadcast to all connected clients.

If the WebSocket client does not request the document, but instead tells the display that it wants to upload a new GUI, the display starts listening for incoming string data. This string data is appended to a new document string, since the GUI is sent in chunks from the client as well. Once the client signals that the entire new document has been sent, the display proceeds to render the new GUI.

A sort of custom handshake takes place when uploading a new GUI to a display. It works well for the MVP; however, if more WebSocket features are added in the future, it might need to be redone.
