A simple framework for constructing machine learning models with TensorFlow.
SuperTF was initially conceived as a means of getting familiar with TensorFlow by constructing machine learning models and working through the TensorFlow tutorials.
I have expanded SuperTF over time, and it now includes a suite of tools to help with:
- Generation of datasets as TFRecords files (currently supports semantic segmentation, classification, and sequence generation)
- Rapid prototyping of deep learning models
- Network and data visualization via TensorBoard
- Session management for extended training sessions
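SuperTF's own dataset-generation API is not shown here, but as a rough illustration of what writing a classification dataset to a TFRecords file involves, here is a minimal sketch using plain TensorFlow; the feature keys (`image`, `height`, `width`, `label`) are my own choice for the sketch, not necessarily the ones SuperTF uses:

```python
import numpy as np
import tensorflow as tf

def write_classification_tfrecord(images, labels, path):
    """Serialize (image, label) pairs into a TFRecords file."""
    with tf.io.TFRecordWriter(path) as writer:
        for image, label in zip(images, labels):
            example = tf.train.Example(features=tf.train.Features(feature={
                # Store raw image bytes plus shape so the image can be decoded later.
                "image": tf.train.Feature(
                    bytes_list=tf.train.BytesList(value=[image.tobytes()])),
                "height": tf.train.Feature(
                    int64_list=tf.train.Int64List(value=[image.shape[0]])),
                "width": tf.train.Feature(
                    int64_list=tf.train.Int64List(value=[image.shape[1]])),
                "label": tf.train.Feature(
                    int64_list=tf.train.Int64List(value=[int(label)])),
            }))
            writer.write(example.SerializeToString())

# Two dummy 8x8 grayscale images with class labels 0 and 1.
images = [np.zeros((8, 8), np.uint8), np.ones((8, 8), np.uint8)]
write_classification_tfrecord(images, [0, 1], "toy_classification.tfrecords")
```

The same `tf.train.Example` pattern extends to segmentation (store the mask as a second bytes feature) and sequences (store token id lists as `Int64List`s).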
Please refer to the examples for:
- Classification dataset generation
- Classification dataset reading
- Training LeNet
- Training AlexNet (semantic segmentation examples will be added shortly)
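For orientation, reading a classification TFRecords file generally follows the `tf.data` pattern sketched below; the feature keys and the 8x8 image shape are illustrative assumptions, not necessarily what SuperTF's readers use. The sketch writes a one-record file first so it is self-contained:

```python
import numpy as np
import tensorflow as tf

# Create a one-record TFRecords file so the reader below has something to parse.
image = np.arange(64, dtype=np.uint8).reshape(8, 8)
example = tf.train.Example(features=tf.train.Features(feature={
    "image": tf.train.Feature(bytes_list=tf.train.BytesList(value=[image.tobytes()])),
    "label": tf.train.Feature(int64_list=tf.train.Int64List(value=[3])),
}))
with tf.io.TFRecordWriter("toy.tfrecords") as writer:
    writer.write(example.SerializeToString())

def parse_record(serialized):
    """Decode one serialized tf.train.Example back into (image, label)."""
    features = tf.io.parse_single_example(serialized, {
        "image": tf.io.FixedLenFeature([], tf.string),
        "label": tf.io.FixedLenFeature([], tf.int64),
    })
    decoded = tf.reshape(tf.io.decode_raw(features["image"], tf.uint8), (8, 8))
    return decoded, features["label"]

dataset = tf.data.TFRecordDataset("toy.tfrecords").map(parse_record)
decoded_image, decoded_label = next(iter(dataset))
```

In a real training loop the dataset would additionally be shuffled, batched, and prefetched before being fed to the model.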
I've added several neural network architectures:
- LeNet - Gradient based learning applied to document recognition
- AlexNet - ImageNet Classification with Deep Convolutional Neural Networks
- Vgg16 - Very Deep Convolutional Networks for Large-Scale Image Recognition
- Vgg19 - Very Deep Convolutional Networks for Large-Scale Image Recognition
- Inception-resnet-v2-paper - Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning
- Inception-resnet-v2-published - Improving Inception and Image Classification in TensorFlow
- Full-Resolution Residual Network-A - Full-Resolution Residual Networks for Semantic Segmentation in Street Scenes
- Im2txt: Caption generation model ported from Tensorflow im2txt
I've edited and extended certain network architectures to fill a particular niche or to improve their performance. These networks are:
- Unet1024 - U-Net: Convolutional Networks for Biomedical Image Segmentation
Unet1024 is a simple extension of the original U-Net architecture; the network accepts an input image of size 1024 x 1024 and has 7 encoder-decoder pairs.
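The building block that Unet1024 repeats seven times is a U-Net style encoder-decoder pair with a skip connection. This is not SuperTF's actual code, just a toy Keras sketch of a single pair, scaled down to a 64x64 input:

```python
import tensorflow as tf
from tensorflow.keras import layers

def encoder_decoder_pair(inputs, filters):
    """One U-Net style encoder-decoder pair with a skip connection."""
    # Encoder: convolve, keep the pre-pooling activations for the skip path.
    skip = layers.Conv2D(filters, 3, padding="same", activation="relu")(inputs)
    down = layers.MaxPooling2D(2)(skip)
    # Bottleneck convolution at the lower resolution.
    bottom = layers.Conv2D(filters * 2, 3, padding="same", activation="relu")(down)
    # Decoder: upsample back to full resolution and merge with the skip path.
    up = layers.Conv2DTranspose(filters, 2, strides=2, padding="same")(bottom)
    merged = layers.Concatenate()([up, skip])
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(merged)

# Toy 64x64 model with a single pair; Unet1024 nests 7 such pairs at 1024x1024.
inputs = tf.keras.Input((64, 64, 1))
outputs = layers.Conv2D(1, 1, activation="sigmoid")(encoder_decoder_pair(inputs, 16))
model = tf.keras.Model(inputs, outputs)
```

Going from 1 pair at 64x64 to 7 nested pairs at 1024x1024 only changes the depth of the recursion, not the structure of each pair.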
- Full-Resolution Residual Network-C - Full-Resolution Residual Networks for Semantic Segmentation in Street Scenes
FRRN-C is built upon FRRN-A. Here the central full-resolution residual block is replaced by a densely connected block of dilated convolutions. Moreover, the full-resolution residual network is enclosed in an encoder-decoder pair, which doubles the input and output resolution.
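My reading of "densely connected block of dilated convolutions" is a DenseNet-style block where each layer receives the concatenation of all earlier feature maps and uses a progressively larger dilation rate, enlarging the receptive field without losing resolution. A hedged Keras sketch (the growth rate and dilation schedule are assumptions, not FRRN-C's actual hyperparameters):

```python
import tensorflow as tf
from tensorflow.keras import layers

def dense_dilated_block(inputs, growth, dilation_rates=(1, 2, 4, 8)):
    """Densely connected dilated convolutions: each layer sees the
    concatenation of all previous feature maps and uses a larger
    dilation rate, so the receptive field grows while spatial
    resolution is preserved."""
    features = [inputs]
    for rate in dilation_rates:
        x = layers.Concatenate()(features) if len(features) > 1 else features[0]
        x = layers.Conv2D(growth, 3, padding="same",
                          dilation_rate=rate, activation="relu")(x)
        features.append(x)
    return layers.Concatenate()(features)

# Toy block: 8 input channels plus 4 layers of growth 4 -> 24 output channels.
inputs = tf.keras.Input((32, 32, 8))
outputs = dense_dilated_block(inputs, growth=4)
model = tf.keras.Model(inputs, outputs)
```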
- Attn-Lstm
Attn-Lstm is a multilayer long short-term memory network with Bahdanau attention. The initial state is set via feature vectors extracted from Inception-resnet-v2-a. Used for image-to-text generation.
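The distinctive piece here is conditioning the LSTM on an image: a CNN feature vector is projected into the LSTM's initial hidden and cell states, and the decoder then attends over encoder features while emitting tokens. The following is only a sketch of the initial-state mechanism in Keras, with made-up sizes, and with the Bahdanau attention step omitted for brevity:

```python
import tensorflow as tf
from tensorflow.keras import layers

units, vocab = 32, 100  # assumed sizes for the sketch

# Inputs: an image feature vector (e.g. from a CNN encoder)
# and a sequence of 10 token embeddings of width 16.
feature = tf.keras.Input((64,), name="image_feature")
tokens = tf.keras.Input((10, 16), name="token_embeddings")

# Project the image feature into the LSTM's initial hidden and cell states,
# so the caption decoder starts out conditioned on the image.
h0 = layers.Dense(units, activation="tanh")(feature)
c0 = layers.Dense(units, activation="tanh")(feature)

outputs = layers.LSTM(units, return_sequences=True)(tokens, initial_state=[h0, c0])
logits = layers.Dense(vocab)(outputs)  # per-step vocabulary logits
model = tf.keras.Model([feature, tokens], logits)
```

In the full model an attention layer would additionally re-weight encoder features at every decoding step rather than relying on the initial state alone.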
Upcoming additions:
- Upgrading architectures to individual classes
- Preparing the wrapper to work with both TensorFlow and PyTorch as backends