From 1f329f351aaf7f9c01b32b4e5db2db84e50a533a Mon Sep 17 00:00:00 2001
From: rpan
Date: Tue, 2 Apr 2024 19:54:27 +0800
Subject: [PATCH] Add explanation for LISA with model-parallelism

---
 README.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/README.md b/README.md
index 881602e8f..010805298 100644
--- a/README.md
+++ b/README.md
@@ -124,6 +124,8 @@ cd data && ./download.sh alpaca && cd -
 --lisa_interval_steps 20
 ```
 
+We are still working on integrating official model-parallelism support for LISA. Please stay tuned :smile:
+
 ### Finetuning (LoRA)
 LoRA is a parameter-efficient finetuning algorithm and is more efficient than full finetuning.
 ```sh