From 661ca38219900c5cdc41a0442e17405d16642816 Mon Sep 17 00:00:00 2001
From: Zhimin Li <46835311+zml-ai@users.noreply.github.com>
Date: Fri, 14 Jun 2024 15:13:50 +0800
Subject: [PATCH] Update README.md

---
 README.md | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 72aac22..f7e15a5 100644
--- a/README.md
+++ b/README.md
@@ -374,10 +374,10 @@ All models will be automatically downloaded. For more information about the mode
 
   To leverage DeepSpeed in training, you have the flexibility to control **single-node** / **multi-node** training by adjusting parameters such as `--hostfile` and `--master_addr`. For more details, see [link](https://www.deepspeed.ai/getting-started/#resource-configuration-multi-node).
 
   ```shell
-  # Single Resolution Data Preparation
+  # Single Resolution Training
   PYTHONPATH=./ sh hydit/train.sh --index-file dataset/porcelain/jsons/porcelain.json
 
-  # Multi Resolution Data Preparation
+  # Multi Resolution Training
   PYTHONPATH=./ sh hydit/train.sh --index-file dataset/porcelain/jsons/porcelain.json --multireso --reso-step 64
   ```
 
@@ -385,6 +385,13 @@ All models will be automatically downloaded. For more information about the mode
 
   We provide training and inference scripts for LoRA, detailed in the [guidances](./lora/README.md).
 
+  ```shell
+  # Training for porcelain LoRA.
+  PYTHONPATH=./ sh lora/train_lora.sh --index-file dataset/porcelain/jsons/porcelain.json
+
+  # Inference using trained LoRA weights (prompt: "blue-and-white porcelain style, a puppy").
+  python sample_t2i.py --prompt "青花瓷风格,一只小狗" --no-enhance --lora_ckpt log_EXP/001-lora_porcelain_ema_rank64/checkpoints/0001000.pt
+  ```
 
 ## 🔑 Inference