ML2Scratch connects Machine Learning (TensorFlow.js) with Scratch.
If you capture a few images with a webcam, label them, and train on them, ML2Scratch can classify similar new images based on the training results. The captured images are not sent to any server; all training and classification are performed in the browser. (However, a network connection is required to load the application at startup and to download the learning model.)
The block languages are English, Japanese, Simplified Chinese (provided by 陶旭 https://twitter.com/taoxu_toukyoku), and Traditional Chinese (provided by CAVEDU Education https://github.com/cavedunissin). If you would like to help us translate into other languages, please open an issue or contact us.
Read this in other languages: English, 日本語, 简体中文.
- Rock/Scissors/Paper Demo YouTube | .mov file
- Control a toy robot, MiP, by hand gestures YouTube | .mov file
- OS
  - Windows 8
  - Windows 10
  - macOS
  - Chrome OS
  - iOS
- Browser
  - Chrome
  - Safari (iOS)
ML2Scratch sometimes does not work because of certain Chrome extensions. Please switch to Guest Mode in such cases.
- Open the "Choose an Extension" window and select "ML2Scratch".
- When Chrome asks you to allow access to the camera, click "Allow".
- Check the checkboxes beside the "label", "counts of label 1", "counts of label 2" and "counts of label 3" blocks.
- Show a "rock" hand sign to the camera and click the "train label 1" block. This trains the machine to recognize the "rock" sign as label 1.
- Keep clicking the block until you have captured about 20 images. The number of captured images is displayed in the "counts of label 1" field in the Stage window.
- Show a "paper" hand sign to the camera and keep clicking the "train label 2" block until "counts of label 2" reaches 20.
- Show a "scissors" hand sign to the camera and keep clicking the "train label 3" block until "counts of label 3" reaches 20.
- After training, the recognition result is shown in the "label" field in the Stage area. If you show "rock", the "label" should show "1"; if you show "paper", it should show "2"; and if you show "scissors", it should show "3".
- You can use the "when received label #" blocks to create a sample program like this:
You can switch which images are used for learning/classification.
By default, Scratch's Stage image is used for learning/classification.
If a webcam image is shown on the Stage, the webcam image is learned/classified; if the "turn off video" block hides the webcam image and a game or animation screen is shown instead, that screen is used for learning/classification.
If you want to learn/classify only the webcam image, you can use
This switches learning/classification to the webcam image. If you want to move a character by gestures in front of the camera, this is a more accurate way to judge.
With ML2Scratch, you can download and save the trained model to your PC by using the "download learning data" block.
Click the block, specify the download destination, and press the "Save" button. The learning data will be saved as a file named <numerical string>.json.
Unlike on the normal Scratch site, the project itself is not saved automatically, so select "File" > "Save to your computer" and save it on your PC as a .sb3 file.
To reopen a saved project, choose "File" > "Load from your computer" and select the saved .sb3 file. After that, upload the learning data.
Saved learning data can be uploaded with the "upload learning data" block.
When you click the block, a window called "upload learning data" opens; click the "Select file" button, select the training data file (<numerical sequence>.json), and upload it.
Be aware that any existing learning data will be overwritten at this point.
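The downloaded learning data is a plain JSON file, so you can also inspect it outside of Scratch, for example to check what a file contains before uploading it. Here is a minimal Python sketch; the file name is hypothetical and the internal keys are not documented here, so the script only reports what it finds rather than assuming a structure:

```python
import json


def inspect_learning_data(path):
    """Print the top-level structure of an ML2Scratch learning data file.

    ML2Scratch saves learning data as <numerical string>.json. The internal
    layout depends on the ML2Scratch version, so we only list the top-level
    keys and the type of each value instead of assuming specific fields.
    """
    with open(path, encoding="utf-8") as f:
        data = json.load(f)
    if isinstance(data, dict):
        for key, value in data.items():
            print(f"{key}: {type(value).__name__}")
    else:
        print(f"top-level value is a {type(data).__name__}")
    return data


# Example call (hypothetical file name):
# inspect_learning_data("1234567890.json")
```

This is only a convenience for checking files on disk; uploading into ML2Scratch itself is done with the "upload learning data" block as described above.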
- Set up LLK/scratch-gui on your computer.

      git clone --depth 1 [email protected]:LLK/scratch-gui.git
      cd scratch-gui
      npm install

- In the scratch-gui folder, clone ML2Scratch. You will have an ml2scratch folder under scratch-gui.

      git clone [email protected]:champierre/ml2scratch.git

- Run the install script.

      sh ml2scratch/install.sh

- Run Scratch, then go to http://localhost:8601/.

      npm start
This project was made possible by the contributions of the following people. I would also like to thank those who have reported bugs or suggested improvements not listed below, and those who have used the software in workshops and given me feedback.
- Banner images and icons: Yu Ishihara
- Simplified Chinese Translation: 陶旭
- Traditional Chinese Translation: CAVEDU Education
ML2Scratch is open source under the AGPL-3.0 license and freely available to anyone. You can use it in your classes and workshops, and commercial usage is also accepted. If you or your students have created something cool using ML2Scratch, please share it on social media using the hashtag #ml2scratch or let me know via any of these contacts. Interesting projects will be added to the "Examples of use".
- Try to avoid obstacles with machine learning # ML2Scratch # ev3(Google Translated)
- Control Wagara-saurus(Japanese style dinosaur) using ML2Scratch
- Control an electric fan with illustration
- Smart Trash Box(Japanese)
- Making a coin sorting AI robot with Scratch and micro:bit
- Go forward with jasmine bottle, go backward with canned coffee (movie)
- ML2Scratch bookshelf arrangement check (movie)
- ML2Scratch detects parking space fullness (movie)