The code examples demonstrate features which will enable you to make the most of the IPU. They are part of the Developer resources provided by Graphcore: https://www.graphcore.ai/developer.
Each of the examples contains its own README file with full instructions.
Efficiently use multiple IPUs and handle large models:
- Phased execution: this example shows how to run a network over two IPUs by splitting it into several execution phases.
- Pipelining: a simple model made of two dense layers, pipelined over two IPUs (see the sketch after this list).
- Recomputing: a demonstration of manual and automatic recomputing on the IPU.
- Sharding: a simple model sharded across two IPUs.
- PopDist: an example showing how to make an application ready for distributed training and inference using the PopDist library, and how to launch it with the PopRun distributed launcher.
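To make the pipelining idea concrete, here is a minimal sketch of pipelining a two-dense-layer model over two IPUs with PopTorch. It is an illustration under stated assumptions, not the example's own code: the framework choice, layer sizes, accumulation factor and batch shapes are all placeholders.

```python
# Minimal sketch: pipeline a model of two dense layers over two IPUs.
# Assumes the Poplar SDK's poptorch package is installed and two IPUs
# are available; all sizes here are illustrative.
import torch
import poptorch

class TwoLayerNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # Stage 0 on IPU 0: first dense layer.
        self.fc1 = poptorch.BeginBlock(torch.nn.Linear(784, 128), ipu_id=0)
        # Stage 1 on IPU 1: second dense layer plus the loss.
        self.fc2 = poptorch.BeginBlock(torch.nn.Linear(128, 10), ipu_id=1)
        self.loss = torch.nn.CrossEntropyLoss()

    def forward(self, x, labels=None):
        x = torch.relu(self.fc1(x))
        x = self.fc2(x)
        if labels is None:
            return x
        return x, self.loss(x, labels)

opts = poptorch.Options()
# Pipelined execution keeps several micro-batches in flight, so
# training needs gradient accumulation across the two stages.
opts.Training.gradientAccumulation(8)

model = TwoLayerNet()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
training_model = poptorch.trainingModel(model, options=opts, optimizer=optimizer)

# Host-side batch = micro-batch (8) x gradient accumulation (8).
x = torch.randn(64, 784)
labels = torch.randint(0, 10, (64,))
output, loss = training_model(x, labels)
```

Annotating layers with BeginBlock is what splits the model into stages; without gradient accumulation only one micro-batch would be in the pipeline at a time and the second IPU would sit idle.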
Exchange data between host and IPU efficiently:
- Callbacks: a simple computation graph that uses callbacks to feed data and retrieve the results.
- Prefetch: a demonstration of prefetching data when a program runs several times.
Define custom operators:
- Custom operators: two implementations of custom operators (leaky ReLU and cube).
- Custom operators (PopTorch): an example showing how to make a PopART custom operator available to PopTorch and how to use it in a model (see the sketch after this list).
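As a hedged sketch of the PopTorch flow, the code below loads a shared library that registers a custom PopART operator, then calls it from a model. The library path, operator name, domain, version and attribute value are placeholders and must match what the compiled operator actually registers.

```python
# Minimal sketch: use a PopART custom operator from PopTorch.
# "build/custom_ops.so", the op name "LeakyRelu", the domain
# "custom.ops" and the "alpha" attribute are all placeholders.
import ctypes
import torch
import poptorch

# Register the custom op with PopART by loading its shared library.
ctypes.cdll.LoadLibrary("build/custom_ops.so")

class ModelWithCustomOp(torch.nn.Module):
    def forward(self, x):
        # poptorch.custom_op maps this call onto the PopART operator;
        # example_outputs tells the compiler the output shapes/types.
        out = poptorch.custom_op(
            [x],
            "LeakyRelu",
            "custom.ops",
            1,
            example_outputs=[x],
            attributes={"alpha": 0.02},
        )[0]
        return out

inference_model = poptorch.inferenceModel(ModelWithCustomOp())
print(inference_model(torch.randn(4, 8)))
```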
Demonstrate advanced features of Poplar:
- Advanced example: an example demonstrating several advanced features of Poplar, including saving and restoring Poplar executables, moving I/O into separate Poplar programs, and using our PopLibs framework.
Debugging and analysis:
- Inspecting tensors: an example that shows how outfeed queues can be used to return activation and gradient tensors to the host for inspection (see the sketch below).
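The following minimal sketch shows the outfeed mechanism using Graphcore's TensorFlow 2 API. The configuration calls vary between SDK versions, and the model and tensor shapes are illustrative.

```python
# Minimal sketch: return an activation tensor to the host via an
# outfeed queue. Assumes Graphcore's TensorFlow 2 wheel; in older
# SDKs IPUOutfeedQueue requires a feed name argument.
import tensorflow as tf
from tensorflow.python import ipu

cfg = ipu.config.IPUConfig()
cfg.auto_select_ipus = 1
cfg.configure_ipu_system()

outfeed = ipu.ipu_outfeed_queue.IPUOutfeedQueue()
strategy = ipu.ipu_strategy.IPUStrategy()

with strategy.scope():
    dense = tf.keras.layers.Dense(10)

    @tf.function(experimental_compile=True)
    def step(x):
        activations = dense(x)
        # Queue the intermediate tensor for inspection on the host.
        outfeed.enqueue(activations)
        return tf.reduce_mean(activations)

    strategy.run(step, args=(tf.random.normal([4, 32]),))

# Back on the host: dequeue everything enqueued during execution.
print(outfeed.dequeue().shape)
```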
Use estimators:
- IPU Estimator: an example showing how to use the IPUEstimator to train and evaluate a simple CNN (see the sketch below).
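A minimal sketch of the IPUEstimator flow follows, with a single dense layer standing in for the CNN to keep it short. The module paths follow Graphcore's TensorFlow wheel and may differ between SDK versions.

```python
# Minimal sketch: train a tiny model with IPUEstimator.
# Assumes Graphcore's TensorFlow wheel (TF1-style Estimator API).
import tensorflow.compat.v1 as tf
from tensorflow.python import ipu

tf.disable_v2_behavior()

def model_fn(features, labels, mode):
    logits = tf.layers.dense(features, 10)
    loss = tf.losses.sparse_softmax_cross_entropy(labels, logits)
    train_op = tf.train.GradientDescentOptimizer(0.01).minimize(
        loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

def input_fn():
    ds = tf.data.Dataset.from_tensor_slices(
        (tf.random.normal([1024, 32]), tf.zeros([1024], tf.int32)))
    return ds.batch(16, drop_remainder=True).repeat()

# iterations_per_loop batches run on the IPU per host-side step.
ipu_options = ipu.utils.create_ipu_config()
ipu_options = ipu.utils.auto_select_ipus(ipu_options, 1)
config = ipu.ipu_run_config.RunConfig(
    ipu_run_config=ipu.ipu_run_config.IPURunConfig(
        iterations_per_loop=100, ipu_options=ipu_options))

estimator = ipu.ipu_estimator.IPUEstimator(model_fn=model_fn, config=config)
estimator.train(input_fn=input_fn, steps=1000)
```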
Specific layers:
- Embeddings: an example of a model with an embedding layer and an LSTM, trained on the IPU to predict the sentiment of an IMDB review (see the sketch after this list).
- Recomputation Checkpoints: an example demonstrating the checkpointing of intermediate values to reduce live memory peaks with a simple Keras LSTM model.
- Octconv: an example showing how to use Octave Convolutions in PopTorch training and inference models.
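As a rough sketch of the embeddings example, the following trains a small embedding-plus-LSTM sentiment model on the IMDB dataset using Graphcore's TensorFlow 2 Keras integration. The hyperparameters are illustrative, and the example itself may use an IPU-optimised LSTM implementation.

```python
# Minimal sketch: an embedding layer plus an LSTM trained on the IPU
# to classify IMDB review sentiment. Assumes Graphcore's TF2 wheel.
import tensorflow as tf
from tensorflow.python import ipu

cfg = ipu.config.IPUConfig()
cfg.auto_select_ipus = 1
cfg.configure_ipu_system()

strategy = ipu.ipu_strategy.IPUStrategy()
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(20000, 128),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    # steps_per_execution amortises host/IPU communication by running
    # many batches per call into the device.
    model.compile(loss="binary_crossentropy", optimizer="adam",
                  steps_per_execution=16)

    (x_train, y_train), _ = tf.keras.datasets.imdb.load_data(num_words=20000)
    x_train = tf.keras.preprocessing.sequence.pad_sequences(x_train, maxlen=80)

    ds = tf.data.Dataset.from_tensor_slices((x_train, y_train))
    ds = ds.shuffle(1000).batch(32, drop_remainder=True).repeat()
    model.fit(ds, steps_per_epoch=768, epochs=1)
```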