From 0a69c78e25b2a83495e784e316e18d98c9a85006 Mon Sep 17 00:00:00 2001
From: shizhediao <654745845@qq.com>
Date: Tue, 24 Dec 2024 10:43:41 -0800
Subject: [PATCH] update README
---
README.md | 39 +--------------------------------------
1 file changed, 1 insertion(+), 38 deletions(-)
diff --git a/README.md b/README.md
index 97be53b5..7669f930 100644
--- a/README.md
+++ b/README.md
@@ -70,28 +70,10 @@ To process your own dataset, please refer to our [doc](https://optimalscale.gith
LoRA is a parameter-efficient finetuning algorithm and is more efficient than full finetuning.
```sh
-cd data && ./download.sh alpaca && cd -
-
-bash ./scripts/run_finetune_with_lora.sh \
- --model_name_or_path facebook/galactica-1.3b \
- --dataset_path data/alpaca/train_conversation \
- --output_lora_path output_models/finetuned_galactica_lora
+bash run_finetune_with_lora.sh
```
> [!TIP]
-> Llama-2-7B conversation dataset example
->
->```bash
->cd data && ./download.sh alpaca && cd -
->
->bash ./scripts/run_finetune_with_lora.sh \
-> --model_name_or_path meta-llama/Llama-2-7b-hf \
-> --dataset_path data/alpaca/train_conversation \
-> --conversation_template llama2 \
-> --output_model_path output_models/finetuned_llama2_7b_lora
->```
->
->
> Merge LoRA Weight
>
>Merge LoRA weight and the base model into one using:
@@ -103,25 +85,6 @@ bash ./scripts/run_finetune_with_lora.sh \
>```
>
-### Inference
-After finetuning, you can run the following command to chat with the model.
-```sh
-bash ./scripts/run_chatbot.sh output_models/finetuned_gpt2
-```
-
-> [!TIP]
-> We recommend using vLLM for faster inference.
->
-> Faster inference using vLLM
->
->```bash
->bash ./scripts/run_vllm_inference.sh \
-> --model_name_or_path Qwen/Qwen2-0.5B \
-> --dataset_path data/alpaca/test_conversation \
-> --output_dir data/inference_results
->```
->
-
### Evaluation
[TODO]