Soletta Machine Learning
SML is an open source machine learning library for the development of IoT devices. It provides APIs to handle client-side AI and an easy-to-use flow-based Soletta module.
It initially supports neural networks and fuzzy logic learning and, since it builds on well-established open source libraries, it can easily be extended to support others.
See Getting Started on SML for a tutorial.
First of all, we need to understand the concepts of inputs and outputs used in SML.
Inputs are all variables that provide information about the current state of the environment being monitored, such as sensors attached to IoT devices. An input may be a light sensor, a device that monitors Bluetooth activity, or even a simple switch. Outputs are devices that are operated by users and that SML is expected to control.
SML works by learning how the current input values relate to user changes in outputs. Using this knowledge, SML can read the current input values to build a scenario, predict the expected output values, and control an output device. This way, devices adapt to users' needs without requiring them to set a configuration or even be aware of their own behavior.
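The read-inputs, build-scenario, predict-outputs cycle can be sketched as follows. This is a conceptual illustration in Python, not the real SML C API; all names here (`read_inputs`, `predict_outputs`, the `engine` lookup table) are hypothetical:

```python
# Conceptual sketch of SML's processing loop (hypothetical names, not the
# real SML C API). Each iteration reads input values, builds a scenario,
# and predicts the expected output values.

def read_inputs():
    # Stand-in for sensor reads, e.g. a light sensor and a presence switch.
    return {"light_level": 0.2, "presence": 1}

def predict_outputs(engine, scenario):
    # The engine (neural network or fuzzy) maps a scenario to output values.
    return engine.get(tuple(sorted(scenario.items())), {})

# Knowledge learned from earlier user actions: in a dark room with
# someone present, the user turned the lamp on.
engine = {(("light_level", 0.2), ("presence", 1)): {"lamp": 1}}

scenario = read_inputs()
outputs = predict_outputs(engine, scenario)
print(outputs)  # → {'lamp': 1}
```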
The SML neural network engine keeps a record of input and output values and uses them to train an artificial neural network. After training, SML feeds the current input values to the trained neural network to predict output values.
As user behavior changes, the neural network is periodically retrained.
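As a rough illustration of the idea (a toy perceptron in pure Python, not SML's actual neural network engine; the lamp/light-level scenario is invented for the example), the following learns from recorded user actions that the lamp goes on when the light level is low:

```python
# Toy single-input perceptron: record (input, output) pairs from user
# actions, train, then predict outputs for new input values.

def train(records, epochs=50, lr=0.5):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in records:
            pred = 1 if w * x + b > 0 else 0
            w += lr * (y - pred) * x  # perceptron update rule
            b += lr * (y - pred)
    return w, b

def predict(w, b, x):
    return 1 if w * x + b > 0 else 0

# Recorded behavior: user turns the lamp on (1) when the light level is low.
records = [(0.1, 1), (0.2, 1), (0.8, 0), (0.9, 0)]
w, b = train(records)
print(predict(w, b, 0.15))  # → 1 (lamp on in a dark room)
print(predict(w, b, 0.85))  # → 0 (lamp off in a bright room)
```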
The fuzzy logic engine uses a fuzzy logic model to represent inputs and outputs. In each iteration, the engine creates fuzzy rules to represent the current state and keeps them to predict future states. When the inputs read fit any previously recorded rule, the engine uses fuzzy math to compute the expected output values.
To keep SML up to date with users' needs, the fuzzy engine assigns higher weights to frequent rules, while non-recurring rules tend to be forgotten.
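The weighting-and-forgetting behavior can be sketched like this (illustrative Python only; the real engine uses fuzzy sets and fuzzy math, and the `DECAY` and `FORGET_THRESHOLD` values are invented for the example):

```python
# Sketch of the fuzzy engine's rule bookkeeping: rules map an observed
# input state to output values, frequent rules gain weight, and rules
# whose weight falls below a threshold are forgotten.

FORGET_THRESHOLD = 0.5  # invented value for illustration
DECAY = 0.8             # all weights decay each iteration; repeats boost a rule

class RuleStore:
    def __init__(self):
        self.rules = {}  # state -> [outputs, weight]

    def observe(self, state, outputs):
        for rule in self.rules.values():
            rule[1] *= DECAY
        entry = self.rules.setdefault(state, [outputs, 0.0])
        entry[1] += 1.0
        # Non-recurring rules fall below the threshold and are dropped.
        self.rules = {s: r for s, r in self.rules.items()
                      if r[1] >= FORGET_THRESHOLD}

    def predict(self, state):
        rule = self.rules.get(state)
        return rule[0] if rule else None  # None: no similar state recorded

store = RuleStore()
store.observe(("dark", "present"), {"lamp": 1})       # one-off event
for _ in range(5):
    store.observe(("bright", "absent"), {"lamp": 0})  # frequent event

print(store.predict(("bright", "absent")))  # → {'lamp': 0}
print(store.predict(("dark", "present")))   # → None (forgotten)
```

Note that `predict` returns nothing for a state that was never observed: this mirrors the limitation that the fuzzy engine only predicts when the current state resembles an already-recorded one.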
Neural network engine advantages:
- Better when interpolation is needed
- Low memory consumption
- Prediction is faster
- For a previously trained neural network that does not need retraining, memory consumption is even lower
- Always provides a prediction, even if the state is completely new

Neural network engine disadvantages:
- Training takes longer, and training time is unpredictable
- It does not work until enough data has been collected
- Tends to forget old events

Fuzzy engine advantages:
- Better results when using id fields as inputs or outputs
- Training is faster
- Starts to provide predictions right after the first read

Fuzzy engine disadvantages:
- Higher memory consumption
- Tends to forget non-recurring events
- Provides predictions only when the state is similar to a previously observed state; this is a problem only when a prediction is needed in every iteration
Code is hosted at GitHub under the BSD 3-Clause license.