
Feature/llama factory llama2 pipeline #89

Merged: 12 commits merged into main on Nov 8, 2023

Conversation

@Jasonqi146 (Member) commented Nov 7, 2023

Closes #88

📑 Description

  • Added a training script for llama-2-13b.
  • Added and registered a dummy dataset in the /data directory.
  • Added and registered the training dataset in the /data directory (registration sketch under Additional Information below).
  • Modified the original SFT script to report to wandb.
  • Added global-rank checking to decide which process configures the logger (see the sketch below).
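
The wandb and rank-handling changes are not shown inline in this thread. As a rough sketch of the pattern described, assuming torch.distributed and the RANK environment variable that torchrun/DeepSpeed launchers set (the function names and the project/tag values are illustrative, not the PR's actual code):

```python
import logging
import os

import torch.distributed as dist
import wandb


def get_global_rank() -> int:
    # Prefer the initialized process group; otherwise fall back to the
    # RANK environment variable that torchrun/DeepSpeed set per worker.
    if dist.is_available() and dist.is_initialized():
        return dist.get_rank()
    return int(os.environ.get("RANK", "0"))


def setup_logger_and_wandb(project: str = "llm_rl") -> logging.Logger:
    rank = get_global_rank()
    logger = logging.getLogger(__name__)
    # Only global rank 0 logs at INFO and starts a wandb run, so a
    # multi-GPU job produces one set of logs and one wandb run.
    logger.setLevel(logging.INFO if rank == 0 else logging.WARNING)
    if rank == 0:
        wandb.init(project=project, tags=["llama-2-13b", "sft"])
    return logger
```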

✅ Checks

  • My pull request adheres to the code style of this project
  • My code requires changes to the documentation
  • I have updated the documentation as required
  • All the tests have passed
  • Branch name follows type/description (e.g. feature/add-llm-agents)
  • Ready for code review

ℹ Additional Information
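
For context on the dataset items above: LLaMA-Factory registers datasets via entries in data/dataset_info.json. A minimal sketch of such a registration, where the dataset name, file name, and column mapping are illustrative assumptions rather than the PR's actual entries:

```python
import json
from pathlib import Path

# LLaMA-Factory looks up datasets by name in data/dataset_info.json.
# "dummy_convs" and its column mapping below are hypothetical; the
# alpaca-style keys (prompt/query/response) mirror LLaMA-Factory's schema.
info_path = Path("data/dataset_info.json")
dataset_info = json.loads(info_path.read_text())
dataset_info["dummy_convs"] = {
    "file_name": "dummy_convs.json",
    "columns": {
        "prompt": "instruction",
        "query": "input",
        "response": "output",
    },
}
info_path.write_text(json.dumps(dataset_info, indent=2))
```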

@Jasonqi146 merged commit 6a603ca into main on Nov 8, 2023
ruiyiw pushed a commit that referenced this pull request on Nov 13, 2023:
* added llama-factory under llm_rl

* added sft training bash

* added datasets from llama-factory; will delete later

* finished llama-2-13b train and inference

* fixed minor errors

* changed config

* added deepspeed config

* added more training config to train bash

* adding fix for wandb tags and distributed ranks

* added fastchat data to replicate training for 2k
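
One commit above mentions "added deepspeed config". As an illustration only (every value here is an assumption, not the PR's actual file), a minimal ZeRO stage-2 config of the kind commonly paired with the HuggingFace Trainer, where "auto" defers each value to the Trainer's own arguments:

```python
import json

# Minimal ZeRO stage-2 DeepSpeed config; "auto" lets the HuggingFace
# Trainer fill in batch sizes and precision from its own arguments.
ds_config = {
    "train_batch_size": "auto",
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
    "fp16": {"enabled": "auto"},
    "zero_optimization": {
        "stage": 2,
        "overlap_comm": True,
        "contiguous_gradients": True,
    },
}

with open("ds_config.json", "w") as f:
    json.dump(ds_config, f, indent=2)
```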
ruiyiw (Nov 14, 2023) and lwaekfjlk (Nov 16, 2023 through Mar 14, 2024) pushed further commits referencing this pull request, each carrying the same commit list as above; several of these were cherry-picks of commit c769638, some signed off by Haofei Yu.
Successfully merging this pull request may close issue #88: [FEAT]: Migrate llama-2-13b and mistral-7b to llama factory