SFT Training update tutorials #769

Merged
merged 14 commits into from
Jan 29, 2025
Conversation


@tengomucho tengomucho commented Jan 28, 2025

What does this PR do?

This PR revisits the Llama3-8B SFT training tutorial. A few highlights of the changes:

  • correct scripts, remove references to unused configurations and tools,
  • update TOC,
  • update wording, compile script and separate shell script to launch training,
  • add model merge after consolidation, to obtain a model that can be loaded for evaluation,
  • add dependencies on trl and peft.

Note that there is still an issue with the training: the loss does NOT decrease during fine-tuning (cc @michaelbenayoun).

Before submitting

  • This PR fixes a typo or improves the docs


@dacorvo dacorvo left a comment


Thanks for the pull request. Looks good to me, except for the dependencies.

setup.py Outdated
@@ -19,6 +19,8 @@
     "huggingface_hub >= 0.20.1",
     "numpy>=1.22.2, <=1.25.2",
     "protobuf>=3.20.3, <4",
+    "trl == 0.11.4",
Collaborator


These are the global, bare minimum dependencies: we should not add optional components here.

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

Comment on lines +59 to +62
5. Make sure you have the `training` extra installed, to get all the necessary dependencies:
```bash
python -m pip install .[training]
```
Member


Awesome!!
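A side note on the quoted install command: in some shells (zsh, for example) square brackets are glob characters, so the extras specifier may need quoting. This is general pip/shell behavior, not something stated in the thread:

```bash
# Quote the extras specifier so zsh does not try to glob the brackets.
python -m pip install ".[training]"
```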

@@ -63,7 +73,7 @@ Example:
     "context": "",
     "response": (
         "World of warcraft is a massive online multi player role playing game. "
-        "It was released in 2004 by blizarre entertainment"
+        "It was released in 2004 by bizarre entertainment"
Member


Suggested change
-        "It was released in 2004 by bizarre entertainment"
+        "It was released in 2004 by Blizzard Entertainment"

Collaborator Author


Nope, that is what is actually in the Dolly dataset! See here 🤷

Member


As a former big player of WoW I feel attacked.

--gradient_accumulation_steps $GRADIENT_ACCUMULATION_STEPS \
--gradient_checkpointing true \
--bf16 \
--zero_1 false \
Member


Can be removed

@@ -50,9 +50,15 @@
     "hf_doc_builder @ git+https://github.com/huggingface/doc-builder.git",
 ]

+TRAINING_REQUIRES = [
+    "trl == 0.11.4",
+    "peft == 0.14.0",
Member


Can we add neuronx_distributed as well?
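If the reviewer's suggestion were applied, the extras wiring in setup.py might look like the sketch below. The `neuronx_distributed` version pin is not stated in the thread, so it is left unpinned here; this is an illustration, not the merged code.

```python
# Hedged sketch of the setup.py "training" extra with the suggested
# neuronx_distributed dependency added.
TRAINING_REQUIRES = [
    "trl == 0.11.4",
    "peft == 0.14.0",
    "neuronx_distributed",  # suggested by the reviewer; exact pin unknown
]

EXTRAS_REQUIRE = {
    "training": TRAINING_REQUIRES,
}
```

With this wiring, `pip install .[training]` pulls in the optional fine-tuning dependencies without bloating the bare-minimum install.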

@tengomucho force-pushed the training-update-tutorials branch from 544da8d to 15408fe on January 29, 2025 at 12:45

@michaelbenayoun michaelbenayoun left a comment


LGTM!
That's awesome, thanks for taking care of that!

@tengomucho tengomucho merged commit 43ad4be into main Jan 29, 2025
9 of 11 checks passed
@tengomucho tengomucho deleted the training-update-tutorials branch January 29, 2025 15:51