
Tutorials on Quantized Neural Network using Tensorflow Lite


soon-yau/QNN


QNN

Quantized Neural Network

Traditionally, deep learning uses the single-precision floating-point (float32) data type. Recent research shows that using lower precision, such as half-precision floating point (float16) or even unsigned 8-bit integers (uint8), does not significantly impact neural network accuracy. Although there are now plenty of tutorials on machine learning frameworks like Tensorflow, I couldn't find many on Tensorflow Lite or quantization. Therefore, I decided to write some tutorials to explain quantization and fast inference.
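To make the float-to-uint8 idea concrete, here is a minimal sketch of affine (asymmetric) quantization, the general scheme Tensorflow Lite's uint8 path is based on. The function names and the toy array are illustrative, not part of any Tensorflow API: a float range is mapped onto [0, 255] with a scale and a zero point, and dequantization maps it back with a small, bounded error.

```python
# Sketch of affine uint8 quantization: x ≈ scale * (q - zero_point).
# Illustrative only; not a Tensorflow or Tensorflow Lite API.
import numpy as np

def quantize(x, num_bits=8):
    """Map a float array onto unsigned num_bits integers."""
    qmin, qmax = 0, 2 ** num_bits - 1
    # The range must include 0 so that zero is exactly representable.
    x_min, x_max = min(x.min(), 0.0), max(x.max(), 0.0)
    scale = (x_max - x_min) / (qmax - qmin)
    zero_point = int(round(qmin - x_min / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float values from the quantized array."""
    return scale * (q.astype(np.float32) - zero_point)

x = np.array([-1.0, 0.0, 0.5, 2.0], dtype=np.float32)
q, scale, zp = quantize(x)
x_hat = dequantize(q, scale, zp)
# Round-trip error is at most half a quantization step (scale / 2).
assert np.max(np.abs(x - x_hat)) <= scale / 2 + 1e-6
```

The zero point is what distinguishes this asymmetric scheme from simple symmetric scaling: it lets an asymmetric float range (here [-1, 2]) use all 256 levels while still representing 0.0 exactly, which matters for zero-padding in convolutions.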

You don't need prior knowledge of quantization, but I do expect you to be familiar with Tensorflow and the basics of deep neural networks. In these tutorials I will use Tensorflow Lite in Tensorflow 1.10 and Python 3.
