
Commit 59678bc

Update README and doc (bytedance#417)
1 parent 5be2968 commit 59678bc

File tree

28 files changed: +464 -906 lines changed

README.md

Lines changed: 220 additions & 79 deletions
Large diffs are not rendered by default.
File renamed without changes.

docs/examples.md

Lines changed: 33 additions & 0 deletions
# LightSeq Examples

## Table of Contents

- [Cpp Examples](#cpp-examples)
- [Python Examples](#python-examples)
  - [Train the models](#train-the-models)
  - [Export and infer the models](#export-and-infer-the-models)
- [Deploy using Tritonbackend](#deploy-using-tritonbackend)
## Cpp Examples

We provide multiple cpp examples of LightSeq inference.

First, use the training examples below to train a model, then export it to protobuf or HDF5 format.

Then use the cpp examples to run inference with the models:
1. Uncomment `add_subdirectory(examples/inference/cpp)` in [CMakeLists.txt](../CMakeLists.txt).
2. Build LightSeq. Refer to [build.md](./build.md) for more details.
3. Switch to `build/temp.linux-xxx/examples/inference/cpp`, then run `sudo make` to compile the cpp examples.
4. Run an example with `./xxx_example MODEL_PATH`.
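The four steps above can be condensed into a shell sketch. The build directory suffix (`temp.linux-xxx`), the example binary name (`xxx_example`) and `MODEL_PATH` are the placeholders from the steps above, not literal values:

```shell
# Sketch of the cpp-example workflow above; all paths are placeholders.
cd lightseq

# 1. Uncomment add_subdirectory(examples/inference/cpp) in CMakeLists.txt.
# 2. Build LightSeq following build.md.

# 3. Compile the cpp examples inside the generated build directory.
cd build/temp.linux-xxx/examples/inference/cpp
sudo make

# 4. Run an example on a model exported to protobuf or HDF5.
./xxx_example MODEL_PATH
```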
## Python Examples

We provide a series of Python examples showing how to use LightSeq for model training and inference.

### Train the models

Currently, LightSeq supports training from [Fairseq](../examples/training/fairseq/README.md), [Hugging Face](../examples/training/huggingface/README.md), [DeepSpeed](../examples/training/deepspeed/README.md) and [from scratch](../examples/training/custom/README.md). For more training details, please refer to the respective READMEs.
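To give a flavor of the from-scratch path, here is a minimal sketch of constructing a single LightSeq encoder layer as a drop-in PyTorch module. It assumes LightSeq and PyTorch are installed on a CUDA machine; the `get_config` parameter names follow the repo's custom training example and may differ across versions, so check them against your installation:

```python
# Sketch only: assumes lightseq and PyTorch are installed; the get_config
# parameter names follow LightSeq's custom training example and may vary
# between versions.
from lightseq.training import LSTransformerEncoderLayer

config = LSTransformerEncoderLayer.get_config(
    max_batch_tokens=4096,   # upper bound on tokens per batch
    max_seq_len=256,
    hidden_size=1024,
    intermediate_size=4096,
    nhead=16,
    attn_prob_dropout_ratio=0.1,
    activation_dropout_ratio=0.1,
    hidden_dropout_ratio=0.1,
    pre_layer_norm=True,
    fp16=False,
    local_rank=0,
)
layer = LSTransformerEncoderLayer(config)

# The layer behaves like a torch.nn.Module taking (batch, seq_len, hidden)
# inputs plus a padding mask, and trains with any PyTorch optimizer.
```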
### Export and infer the models

First export the models trained by Fairseq, Hugging Face or LightSeq to protobuf or HDF5 format. Then test the results and speeds using the test scripts.

Refer to [here](../examples/inference/python/README.md) for more details.
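Once a model is exported, inference from Python is short. This sketch follows the `lightseq.inference` API shown in the project README; the model file name and token ids are placeholders, and a CUDA-capable machine is assumed:

```python
# Sketch only: assumes a Transformer model already exported to
# transformer.pb and a CUDA machine with lightseq installed.
import numpy as np
import lightseq.inference as lsi

# Arguments: exported model path, max batch size.
model = lsi.Transformer("transformer.pb", 8)

# Placeholder source token ids, shape (batch, seq_len).
src_tokens = np.array([[4, 15, 78, 2]], dtype=np.int32)

output = model.infer(src_tokens)
print(output)  # generated target token ids
```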
## Deploy using Tritonbackend

Refer to [here](../examples/triton_backend/README.md) for more details.

docs/guide.md

Lines changed: 16 additions & 0 deletions
# A Guide of LightSeq Training and Inference

## Table of Contents

- [Introduction](#introduction)
- [Training](#training)
  - [Custom integration](#custom-integration)
  - [Hugging Face](#hugging-face)
  - [Fairseq](#fairseq)
  - [DeepSpeed](#deepspeed)
- [Inference](#inference)
  - [Export](#export)
    - [Fairseq](#fairseq-1)
    - [Hugging Face](#hugging-face-1)
    - [LightSeq Transformer](#lightseq-transformer)
    - [Custom models](#custom-models)
  - [Inference in three lines of codes!](#inference-in-three-lines-of-codes)

## Introduction

This document mainly introduces the detailed process of LightSeq training and inference. In short, the process can be divided into the following three steps:
1. Train models integrated with LightSeq training modules, and save the checkpoints.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
