Commit c18324a

Merge branch 'main' into main-public
2 parents: 5084766 + c9472e4

96 files changed, +2341 -1244 lines


GetStarted.md

Lines changed: 59 additions & 45 deletions
@@ -3,7 +3,7 @@
 This is a guide for getting started with Intel® Transfer Learning Tool and will
 walk you through the steps to check system requirements, install, and then run
 the tool with a couple of examples showing no-code CLI and low-code API
-approaches.
+approaches.
 
 <p align="center"><b>Intel Transfer Learning Tool Get Started Flow</b></p>
 
@@ -90,7 +90,7 @@ approaches.
 
 ```
 python setup.py bdist_wheel
-pip install dist/intel_transfer_learning_tool-0.6.0-py3-none-any.whl
+pip install dist/intel_transfer_learning_tool-0.7.0-py3-none-any.whl
 ```
 
 
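One way to confirm the new wheel landed in the active environment is to read its version back with the standard library. A minimal sketch, assuming the distribution name `intel-transfer-learning-tool` matches the wheel file name above:

```python
# Minimal sketch: confirm the installed wheel version from the active environment.
# Assumes the distribution name "intel-transfer-learning-tool" matches the wheel file name.
from importlib.metadata import PackageNotFoundError, version

try:
    print("intel-transfer-learning-tool", version("intel-transfer-learning-tool"))
except PackageNotFoundError:
    print("Intel Transfer Learning Tool is not installed in this environment")
```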
@@ -131,38 +131,31 @@ tlt list models --use-case image_classification
 
 **Train a Model**
 
-In this example, we'll use the `tlt train` command to retrain the TensorFlow
-ResNet50v1.5 model using a flowers dataset from the
-[TensorFlow Datasets catalog](https://www.tensorflow.org/datasets/catalog/tf_flowers).
+In this example, we'll use the `tlt train` command to retrain the PyTorch
+efficientnet_b0 model using the Food101 dataset from the
+[PyTorch Datasets](https://pytorch.org/vision/stable/generated/torchvision.datasets.Food101.html).
 The `--dataset-dir` and `--output-dir` paths need to point to writable folders on your system.
 ```
-# Use the follow environment variable setting to reduce the warnings and log output from TensorFlow
-export TF_CPP_MIN_LOG_LEVEL="2"
 
-tlt train -f tensorflow --model-name resnet_v1_50 --dataset-name tf_flowers --dataset-dir "/tmp/data-${USER}" --output-dir "/tmp/output-${USER}"
+tlt train -f pytorch --model-name efficientnet_b0 --dataset-name Food101 --dataset-dir "/tmp/data-${USER}" --output-dir "/tmp/output-${USER}"
 ```
 ```
-Model name: resnet_v1_50
-Framework: tensorflow
-Dataset name: tf_flowers
+Model name: efficientnet_b0
+Framework: pytorch
+Dataset name: Food101
 Training epochs: 1
 Dataset dir: /tmp/data-user
 Output directory: /tmp/output-user
+
 ...
-Model: "sequential"
-_________________________________________________________________
-Layer (type)                 Output Shape              Param #
-=================================================================
-keras_layer (KerasLayer)     (None, 2048)              23561152
-dense (Dense)                (None, 5)                 10245
-=================================================================
-Total params: 23,571,397
-Trainable params: 10,245
-Non-trainable params: 23,561,152
-_________________________________________________________________
-Checkpoint directory: /tmp/output-user/resnet_v1_50_checkpoints
-86/86 [==============================] - 24s 248ms/step - loss: 0.4600 - acc: 0.8438
-Saved model directory: /tmp/output-user/resnet_v1_50/1
+Epoch 1/1
+----------
+100%|██████████████████████████████████████████████████| 1776/1776 [27:02<00:00, 1.09it/s]
+Performing Evaluation
+100%|██████████████████████████████████████████████████| 592/592 [08:33<00:00, 1.15it/s]
+Loss: 2.7038 - Acc: 0.3854 - Val Loss: 2.1242 - Val Acc: 0.4880
+Training complete in 35m 37s
+Saved model directory: /tmp/output-user/efficientnet_b0/1
 ```
 
 After training completes, the `tlt train` command evaluates the model. The loss and
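For reference, the dataset named by `--dataset-name Food101` is the standard torchvision dataset linked above; it can also be fetched directly with torchvision to inspect what `tlt train` downloads into `--dataset-dir`. A minimal sketch, assuming torchvision 0.12 or newer (where `Food101` was added):

```python
# Minimal sketch: fetch Food101 directly via torchvision to see the data that
# `tlt train --dataset-name Food101` trains on. Assumes torchvision >= 0.12
# and a writable dataset directory; the first call triggers a large download.
import os
from torchvision import datasets, transforms

dataset_dir = f"/tmp/data-{os.environ.get('USER', 'user')}"
food101 = datasets.Food101(
    root=dataset_dir,
    split="train",
    transform=transforms.ToTensor(),
    download=True,
)
print(len(food101), "training images across", len(food101.classes), "classes")
```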
@@ -181,8 +174,8 @@ Find more examples in our list of [Examples](examples/README.md).
 
 ### b) Run Using the Low-Code API
 
-The following Python code example trains an image classification model with the TensorFlow
-flowers dataset using API calls from Python. The model is
+The following Python code example trains an image classification model with the PyTorch
+RenderedSST2 dataset using API calls from Python. The model is
 benchmarked and quantized to INT8 precision for improved inference performance.
 
 You can run the API example using a Jupyter notebook. See the [notebook setup
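RenderedSST2 (used here in place of the former flowers example) is a small binary image-classification dataset from torchvision: SST-2 movie-review sentences rendered as images. To preview it outside the tool, a minimal sketch, assuming torchvision 0.14 or newer where `RenderedSST2` is available:

```python
# Minimal sketch: preview the RenderedSST2 dataset directly via torchvision.
# Assumes torchvision >= 0.14; valid splits are "train", "val", and "test".
from torchvision import datasets

sst2 = datasets.RenderedSST2(root="/tmp/data", split="train", download=True)
image, label = sst2[0]  # a PIL image and an integer class id
print(len(sst2), "training images; first sample size:", image.size, "label id:", label)
```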
@@ -191,9 +184,6 @@ notebook environment.
 
 ```python
 import os
-
-os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"
-
 from tlt.datasets import dataset_factory
 from tlt.models import model_factory
 from tlt.utils.types import FrameworkType, UseCaseType
@@ -211,24 +201,52 @@ if not os.path.exists(output_dir):
     os.makedirs(output_dir)
 
 # Get the model
-model = model_factory.get_model(model_name="resnet_v1_50", framework=FrameworkType.TENSORFLOW)
+model = model_factory.get_model(model_name="efficientnet_b0", framework=FrameworkType.PYTORCH)
 
-# Download and preprocess the flowers dataset from the TensorFlow datasets catalog
+# Download and preprocess the RenderedSST2 dataset from the torchvision datasets catalog
 dataset = dataset_factory.get_dataset(dataset_dir=dataset_dir,
-                                      dataset_name='tf_flowers',
+                                      dataset_name='RenderedSST2',
                                       use_case=UseCaseType.IMAGE_CLASSIFICATION,
-                                      framework=FrameworkType.TENSORFLOW,
-                                      dataset_catalog='tf_datasets')
+                                      framework=FrameworkType.PYTORCH,
+                                      dataset_catalog='torchvision')
 dataset.preprocess(image_size=model.image_size, batch_size=32)
 dataset.shuffle_split(train_pct=.75, val_pct=.25)
 
 # Train the model using the dataset
-model.train(dataset, output_dir=output_dir, epochs=1)
-
-# Evaluate the trained model
-metrics = model.evaluate(dataset)
-for metric_name, metric_value in zip(model._model.metrics_names, metrics):
-    print("{}: {}".format(metric_name, metric_value))
+model.train(dataset, output_dir=output_dir, epochs=1, ipex_optimize=False)
+
+# Visualize the trained model result
+import matplotlib.pyplot as plt
+import numpy as np
+images, labels = dataset.get_batch()
+
+# Predict with a single batch
+predictions = model.predict(images)
+
+# Map the predicted ids to the class names
+predictions = [dataset.class_names[id] for id in predictions]
+
+# Display the results
+plt.figure(figsize=(16,16))
+plt.subplots_adjust(hspace=0.5)
+for n in range(min(len(images), 30)):
+    plt.subplot(6,5,n+1)
+    inp = images[n]
+    inp = inp.numpy().transpose((1, 2, 0))
+    mean = np.array([0.485, 0.456, 0.406])
+    std = np.array([0.229, 0.224, 0.225])
+    inp = std * inp + mean
+    inp = np.clip(inp, 0, 1)
+    plt.imshow(inp)
+    correct_prediction = labels[n] == predictions[n]
+    color = "darkgreen" if correct_prediction else "crimson"
+    title = predictions[n].title() if correct_prediction else "{}\n({})".format(predictions[n], labels[n])
+    plt.title(title, fontsize=14, color=color)
+    plt.axis('off')
+_ = plt.suptitle("Model predictions", fontsize=16)
+plt.show()
+print("Correct predictions are shown in green")
+print("Incorrect predictions are shown in red with the actual label in parenthesis")
 
 # Export the model
 saved_model_dir = model.export(output_dir=output_dir)
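The `get_model`/`train` calls above hide the usual transfer-learning recipe: start from an ImageNet-pretrained efficientnet_b0, freeze the feature extractor, and fit a new classification head. The sketch below shows that idea in plain PyTorch/torchvision as a point of reference; it is a conceptual illustration, not how tlt implements training, and the weights enum it uses assumes torchvision 0.13 or newer.

```python
# Conceptual sketch of the transfer-learning recipe in plain PyTorch/torchvision,
# shown for reference only (not the tlt implementation). Assumes torchvision >= 0.13.
import torch
from torchvision import models

num_classes = 2  # e.g. RenderedSST2 has two classes

# Start from ImageNet-pretrained weights
model = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.DEFAULT)

# Freeze the convolutional feature extractor
for param in model.features.parameters():
    param.requires_grad = False

# Replace the final classifier layer with a head sized for the new dataset
in_features = model.classifier[1].in_features
model.classifier[1] = torch.nn.Linear(in_features, num_classes)

# Only the new head's parameters are handed to the optimizer
optimizer = torch.optim.Adam(model.classifier[1].parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()
```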
@@ -239,10 +257,6 @@ model.quantize(quantization_output, dataset, overwrite_model=True)
 
 # Benchmark the trained model using the Intel Neural Compressor config file
 model.benchmark(dataset, saved_model_dir=quantization_output)
-
-# Do graph optimization on the trained model
-optimization_output = os.path.join(output_dir, "optimized_model")
-model.optimize_graph(optimization_output, overwrite_model=True)
 ```
 
 For more information on the API, see the [API Documentation](/api.md).
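The `quantize` and `benchmark` steps rely on Intel Neural Compressor, as the code comment above notes. To make the INT8 idea itself concrete, here is a deliberately different and much simpler mechanism, PyTorch's built-in dynamic quantization of Linear layers; it only illustrates reduced-precision inference and is not the Neural Compressor flow that tlt uses.

```python
# Illustration only: INT8 dynamic quantization with plain PyTorch. This is a
# different mechanism from the Intel Neural Compressor flow used by tlt's
# quantize()/benchmark(); it is shown just to make reduced precision concrete.
import torch

fp32_model = torch.nn.Sequential(
    torch.nn.Linear(1280, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 2),
)

int8_model = torch.ao.quantization.quantize_dynamic(
    fp32_model, {torch.nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 1280)
print(fp32_model(x).shape, int8_model(x).shape)  # same interface, INT8 weights inside
```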
