From f20c7340f5ceac948e0a412b4ba2eedeb85e2b31 Mon Sep 17 00:00:00 2001
From: Jason Ren
Date: Thu, 30 Jan 2025 13:59:26 -0800
Subject: [PATCH] Fix bug in training command in README.md

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index d3d2834..ccb3612 100644
--- a/README.md
+++ b/README.md
@@ -75,12 +75,12 @@ DiT models, but it can be easily modified to support other types of conditioning
 To launch DiT-XL/2 (256x256) training with `1` GPUs on one node:
 
 ```bash
-accelerate launch --mixed_precision fp16 train.py --model DiT-XL/2 --features-path /path/to/store/features
+accelerate launch --mixed_precision fp16 train.py --model DiT-XL/2 --feature-path /path/to/store/features
 ```
 
 To launch DiT-XL/2 (256x256) training with `N` GPUs on one node:
 ```bash
-accelerate launch --multi_gpu --num_processes N --mixed_precision fp16 train.py --model DiT-XL/2 --features-path /path/to/store/features
+accelerate launch --multi_gpu --num_processes N --mixed_precision fp16 train.py --model DiT-XL/2 --feature-path /path/to/store/features
 ```
 
 Alternatively, you have the option to extract and train the scripts located in the folder [training options](train_options).