From 1ccacacd5c4b7c27d6d34aed8754dbebd2379052 Mon Sep 17 00:00:00 2001
From: shizhediao <654745845@qq.com>
Date: Tue, 24 Dec 2024 10:44:23 -0800
Subject: [PATCH] update README

---
 README.md | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/README.md b/README.md
index 7669f930..7f95c192 100644
--- a/README.md
+++ b/README.md
@@ -9,8 +9,6 @@ An extensible, convenient, and efficient toolbox for finetuning large machine le
 - [Setup](#setup)
 - [Prepare Dataset](#prepare-dataset)
 - [Training](#training)
-  - [LoRA](#lora)
-  - [Inference](#inference)
 - [Evaluation](#evaluation)
 - [Support](#support)
 - [License](#license)
@@ -65,9 +63,6 @@ For sanity check, we provide [a small dataset](./data/wikitext-2-raw-v1/test) fo
 To process your own dataset, please refer to our [doc](https://optimalscale.github.io/LMFlow/examples/DATASETS.html).
 
 ### Training
-
-#### LoRA
-
 LoRA is a parameter-efficient finetuning algorithm and is more efficient than full finetuning.
 ```sh
 bash run_finetune_with_lora.sh
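
A patch in this `git format-patch` mailbox format is normally applied with `git am`, which replays it as a commit with the author and date preserved. A minimal sketch, assuming the patch above was saved to a file; the filename is hypothetical:

```sh
# Apply the mailbox-format patch as a commit on the current branch of an
# LMFlow checkout. The filename is hypothetical; use whatever name the
# patch was saved under.
git am 0001-update-README.patch
```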