TasterAI

TasterAI is a neural network model for food categorization. It recognizes 15 types of food with an accuracy of 85%. A database with calories and recipes for the foods predicted by the model has also been implemented.

The model can be retrained at any time with more food types, but the calorie and recipe data for the newly added foods must then be updated as well.

The model predicts the following foods: Carrot Cake, Chocolate Cake, Cheesecake, Hamburger, Hot Dog, Cup Cake, Guacamole, Nachos, Gyoza, Ice Cream, Paella, Donuts, Sushi, Macarons and Pizza.

Contents of this file

  1. Introduction
  2. Requirements
  3. Objectives
  4. Acknowledgments
  5. Author

Introduction

The database with which the model is trained is based on 15,000 photographs, 1,000 for each type of food.

The photographs were extracted from the Food-101 dataset, available at the link below:

https://www.kaggle.com/datasets/dansbecker/food-101
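
For illustration only, a folder-per-class dataset like this could be loaded with Keras roughly as follows (the food_images/ directory name, the 80/20 split and the batch size are assumptions, not details taken from this repository):

# Sketch: load a folder-per-class image dataset with tf.keras.
# "food_images/" (one subfolder per food type) and the split are assumptions.
import tensorflow as tf

IMG_SIZE = (224, 224)   # input size expected by MobileNetV2
BATCH_SIZE = 32

train_ds = tf.keras.utils.image_dataset_from_directory(
    "food_images/",
    validation_split=0.2,
    subset="training",
    seed=42,
    image_size=IMG_SIZE,
    batch_size=BATCH_SIZE,
)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "food_images/",
    validation_split=0.2,
    subset="validation",
    seed=42,
    image_size=IMG_SIZE,
    batch_size=BATCH_SIZE,
)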

After several tests, the model was trained with MobileNet v2.

The MobileNet v2 architecture is based on an inverted residual structure in which the input and output of the residual block are thin bottleneck layers, unlike traditional residual models that use expanded representations at the input. MobileNet v2 uses lightweight depthwise convolutions to filter features in the intermediate expansion layer. In addition, non-linearities in the narrow layers are removed to maintain representational power.

Training with MobileNet v2 gave good results from the very first attempts, even though food recognition is always somewhat tricky, since each type of food can appear in very different ways.
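
As a rough illustration, a transfer-learning setup with MobileNet v2 for 15 food classes could look like the sketch below; the classification head, learning rate and number of epochs are assumptions, not the values used for the published model:

# Sketch: transfer learning with MobileNetV2 for 15 food classes.
# The head layers, learning rate and epochs are illustrative assumptions.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pretrained backbone

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # scale pixels to [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(15, activation="softmax"),  # one unit per food type
])

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train_ds / val_ds as in the dataset-loading sketch above
# model.fit(train_ds, validation_data=val_ds, epochs=10)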

(Image: model accuracy during training)

As can be seen in the image, the model reaches an accuracy of almost 85%.

After training, two different datasets were created: the first with nutritional data for each of the foods recognized by the model, and the second with recipes for each of the foods.

All this has been implemented in Streamlit so that when a food is recognized, the nutritional data is automatically displayed and a recipe is suggested.
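
A simplified sketch of that lookup step is shown below; the file names (nutrition.csv, recipes.csv) and column names are assumptions for illustration, not the actual files in this repository:

# Sketch: show nutritional data and a recipe for the predicted food in Streamlit.
# File and column names are assumptions for illustration only.
import pandas as pd
import streamlit as st

nutrition = pd.read_csv("nutrition.csv")   # one row per food type
recipes = pd.read_csv("recipes.csv")

predicted_food = "Paella"  # would come from the model's prediction

st.header(predicted_food)
st.table(nutrition[nutrition["food"] == predicted_food])

recipe = recipes.loc[recipes["food"] == predicted_food, "recipe"]
if not recipe.empty:
    st.subheader("Suggested recipe")
    st.write(recipe.iloc[0])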

(Image: Streamlit dashboard showing nutritional data and a suggested recipe)

Within Streamlit, the model has also been integrated with OpenCV, so it can be seen working with a live camera.
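
A minimal sketch of such a live-camera loop is shown below; the saved model file name and the label list are assumptions, and the real TasterAI code may be organized differently:

# Sketch: classify live camera frames with OpenCV inside Streamlit.
# The model file name and the label order are illustrative assumptions.
import cv2
import numpy as np
import streamlit as st
import tensorflow as tf

CLASS_NAMES = [
    "Carrot Cake", "Chocolate Cake", "Cheesecake", "Hamburger", "Hot Dog",
    "Cup Cake", "Guacamole", "Nachos", "Gyoza", "Ice Cream",
    "Paella", "Donuts", "Sushi", "Macarons", "Pizza",
]  # must match the order used when training the model

model = tf.keras.models.load_model("taster_model.h5")  # assumed file name
frame_placeholder = st.empty()

cap = cv2.VideoCapture(0)  # default webcam
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    batch = cv2.resize(rgb, (224, 224))[np.newaxis, ...].astype("float32")
    probs = model.predict(batch, verbose=0)[0]
    label = CLASS_NAMES[int(np.argmax(probs))]
    frame_placeholder.image(rgb, caption=label)
cap.release()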

(Image: live food detection with the camera)

Finally, an administration section with username and password has been implemented, where recipes can be uploaded directly to the database so that the model can suggest them.

(Image: administration section of the dashboard)

Requirements

You need to have all the libraries from the requirements.txt file installed (for example with pip install -r requirements.txt). Once everything is installed, you can run the TasterAI dashboard.

To start, go to the Dashboard folder from the console and run the following command:

streamlit run main.py

To use the administrator section of the dashboard, you need to include a file with the users that will have these permissions. This file is placed inside the .streamlit folder, is named secrets.toml, and has the following format:

#.streamlit/secrets.toml

[passwords]
# Follow the rule: username = "password"
username = "password"
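
The dashboard can then check a submitted login against these secrets; a minimal sketch of such a check is shown below (the actual logic used by TasterAI may differ):

# Sketch: verify an admin login against .streamlit/secrets.toml.
# The real check in the TasterAI dashboard may be implemented differently.
import streamlit as st

def check_password(username: str, password: str) -> bool:
    # True only if the pair matches an entry in the [passwords] section
    users = st.secrets["passwords"]
    return username in users and users[username] == password

user = st.text_input("Username")
pwd = st.text_input("Password", type="password")
if st.button("Log in"):
    if check_password(user, pwd):
        st.success("Admin access granted")
    else:
        st.error("Invalid credentials")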

Objectives

The objectives for improving the model in the short term are as follows:

  • Detect food live through the camera.
  • Detect several foods at the same time.
  • Improve the accuracy of the model.
  • Increase the number of foods recognized by the model.
  • Implement it on a Raspberry Pi and test the system on the street or in stores.

Acknowledgments

  1. CORE Code School
  2. Marc Pomar
  3. Alvaro Lucas
  4. Daniel Alvarado
  5. Santino Lede

Author

Nacho Soria

[email protected]