Llama3 conversion scripts 🦙 #7

Merged
11 commits merged into swiss-ai:main on Jul 2, 2024

Conversation

TJ-Solergibert
Collaborator

From huggingface#174.

Message 1:

Hello,

In this PR, I include the scripts to convert the checkpoints of Llama3 8B & 70B to Nanotron. Although there are still some details to be polished, the current status is as follows:

  • Conversion from HF to Nanotron of the 8B model
  • Conversion from Nanotron to HF of the 8B model
  • Conversion from HF to Nanotron of the 70B model
  • Conversion from Nanotron to HF of the 70B model
  • Conversion on CPU
  • TP Topology agnostic

All conversions are carried out in BFLOAT16 and on the CPU, but we still need at least one GPU because the ParallelContext requires it. The 8B model fits on a GPU with 80GB, but the 70B model does not. Even so, in ALL conversions we set DP=PP=TP=1. I have confirmed that Nanotron supports changing the TP topology afterwards, although while waiting for GPUs in my cluster I also developed a fancy script with broadcasts, scatters, and gathers to perform the conversion with TP>1. I also ran a dummy finetune with TP=2 from the TP=1 8B converted checkpoint to store it back with TP=2, checked the results in Nanotron (correct, results below), and then converted it back to HF, with the result still being correct. I think I have covered all the relevant cases.
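
For illustration, this is roughly how the single-rank setup used during conversion looks; it is a minimal sketch assuming the script is launched under torchrun on a node with at least one GPU, not the exact code of the conversion scripts:

```python
# Minimal sketch of the single-rank setup used during conversion (DP = PP = TP = 1).
# Assumes this runs under `torchrun --nproc-per-node 1 ...`; the actual scripts may differ.
from nanotron.parallel import ParallelContext

parallel_context = ParallelContext(
    data_parallel_size=1,
    pipeline_parallel_size=1,
    tensor_parallel_size=1,
)
# The weights themselves are loaded and copied on the CPU in bfloat16; the GPU is only
# needed because ParallelContext initializes the distributed process groups.
```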

Included

  • convert_hf_to_nanotron.py to convert the weights from the HF checkpoint to Nanotron
  • convert_nanotron_to_hf.py to convert the weights from the Nanotron checkpoint to HF
  • generate_hf_predictions.py to test the logits of the HF model with a prompt
  • generate_nanotron_predictions.py to test the logits of the Nanotron model with a prompt
    • The generation scripts are just for debugging purposes; we should delete them before merging

Results & Precision

It is impossible for the two models (HF & Nanotron) to produce exactly the same logits with a precision tight enough to pass the assert_close test. This is true both at the model level and at the layer level because, despite having the same parameters, the two models perform different operations. Different in the following senses:

  • Shapes: For TP in the Attention Layer, the QKV matrices are fused into qkv_proj and the projections are computed with a single GEMM, whereas the HF implementation uses three separate GEMMs (the Meta model does it the same way, although it also has TensorParallelLayers). Changing the shape of the matrices changes the result because the order of operations inside the GEMM is non-deterministic, and with reduced 16-bit types the difference becomes more noticeable when accumulating the result. The same happens in the MLP layer with gate_up_proj.
  • Operators: RoPE and LayerNorm. Nanotron uses different implementations for both RoPE and LayerNorm (TritonRMSNorm), which produce results that are not exactly the same as those of the HF implementation.
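
As a toy illustration of the accumulation issue (my own sketch, not code from this PR), the same bf16 reduction computed in two different orders already gives slightly different values; a fused qkv_proj GEMM tiles, and therefore accumulates, differently than three separate projections:

```python
# Toy example (not from the PR): the same dot product accumulated in two different
# orders in bfloat16 typically does not match bit for bit.
import torch

torch.manual_seed(0)
a = torch.randn(4096, dtype=torch.bfloat16)
b = torch.randn(4096, dtype=torch.bfloat16)

full = (a * b).sum()  # one reduction over all elements
chunked = torch.stack([(ac * bc).sum()  # partial sums per chunk, then a second reduction
                       for ac, bc in zip(a.chunk(8), b.chunk(8))]).sum()

print(full.item(), chunked.item())  # close, but usually not identical
```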

I have a (somewhat catastrophic) notebook where the differences at each operation level are evident. But what is really important is not so much the logits as the predictions and their order. To verify this, I developed the generate_XXXX.py scripts that, from the same prompt, print the 10 most probable predictions for the desired tokens and report an accuracy value over the whole sequence (a sketch of this metric appears below the table). I chose a fixed prompt to 1. manually ensure that the predictions make sense and 2. compare across the different converted models. The following table shows the accuracy results for different configurations.

| Experiment | Backend | Size (B) | TP | Accuracy |
| --- | --- | --- | --- | --- |
| OG HF | HF | 8 | 1 | 0.83 |
| OG HF --> Nanotron | Nanotron | 8 | 1 | 0.83 |
| OG HF --> Nanotron --> HF | HF | 8 | 1 | 0.83 |
| OG HF | HF | 70 | 2 | 0.89 |
| OG HF --> Nanotron | Nanotron | 70 | 2 | 0.83 |
| OG HF --> Nanotron --> HF | HF | 70 | 2 | 0.89 |
| HF --> Nanotron --> Dummy Finetune to change TP=2 --> HF | HF | 8 | 1 --> 2 | 0.83 |

It is worth noting that:

  1. For the 70B model, when using the HF backend with AutoModelForCausalLM.from_pretrained() there is NO tensor parallelism, while in Nanotron there is.
  2. The accuracy values are from the prediction of 512 tokens.
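
For reference, here is a minimal sketch of how such a per-token accuracy and top-10 listing could be computed; it is my reconstruction of the idea, not the actual generate_XXXX.py scripts, and the prompt string is a placeholder:

```python
# Hedged sketch of the accuracy metric: top-1 teacher-forced prediction vs. the
# actual next token, over a fixed prompt. Not the actual generate_*.py scripts.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Meta-Llama-3-8B-Instruct"  # example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

prompt = "..."  # the fixed debugging prompt (placeholder here)
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits  # [1, seq_len, vocab_size]

predictions = logits[:, :-1].argmax(dim=-1)  # prediction for position t+1 made at position t
targets = input_ids[:, 1:]
accuracy = (predictions == targets).float().mean().item()

top10 = logits[0, -1].topk(10).indices.tolist()  # 10 most probable next tokens
print(f"accuracy: {accuracy:.4f}")
print(tokenizer.convert_ids_to_tokens(top10))
```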

Details

This PR is built on top of the FA2 kernel from huggingface#168, which is the same one used in the HF implementation.

After extensive reverse engineering, I found a critical point that was significantly different from the HuggingFace implementation: RoPE. After numerous tests, even transferring the RoPE from the HF implementation, it turns out that two fundamental parameters of the FlashRotaryEmbedding layer are responsible:

  • interleaved: The default value in Nanotron is True, but it must be False.
  • rope_theta: The default value is 10000.0, but for Llama3, it is 500000.0.

I have included both values in LlamaConfig, with the OLD values as defaults, although I propose at least changing the interleaved default to False.
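
As an illustration (the exact field names in LlamaConfig may differ from what I assume here), the Llama3 settings described above would look roughly like this:

```python
# Illustrative sketch only: field names follow the PR description and may not match
# the final LlamaConfig exactly.
from nanotron.config import LlamaConfig

def apply_llama3_rope_settings(model_config: LlamaConfig) -> LlamaConfig:
    model_config.rope_theta = 500000.0  # old default: 10000.0
    model_config.interleaved = False    # old default: True; Llama3/HF RoPE is non-interleaved
    return model_config
```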

In essence, to perform the conversions, we initialize the two implementations (HuggingFace & Nanotron) and copy the parameters layer by layer. After trying several methods to copy the weights, I opted for the copy_ method, because this way we preserve the ShardedInfo & TiedInfo of all the NanotronParameters.
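
Schematically, the copy looks something like the sketch below (assuming the HF-to-Nanotron name mapping and any re-sharding are handled elsewhere); the point is that the in-place copy_ keeps the NanotronParameter wrapper intact:

```python
# Rough sketch of the copy strategy: in-place copy_ preserves the NanotronParameter
# wrapper, so its ShardedInfo / TiedInfo metadata survives the conversion.
import torch

def copy_weights(nanotron_module: torch.nn.Module, hf_tensors: dict[str, torch.Tensor]) -> None:
    with torch.no_grad():
        for name, param in nanotron_module.named_parameters():
            source = hf_tensors[name]  # assumes names were already mapped HF -> Nanotron
            assert param.shape == source.shape, f"shape mismatch for {name}"
            param.copy_(source)  # in-place copy keeps parameter metadata
```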

The conversion from HF to Nanotron is fast, taking 2 and 16 minutes for the 8B and 70B models respectively. However, the conversion from Nanotron to HF extends to 5 and 51 minutes respectively. This is due to the initialization of the HF model (AutoModelForCausalLM.from_config()).

When converting to Nanotron, we also store the tokenizer (as in the HF models) and generate a config.yaml with the basic configurations and parameters to start training from the checkpoint. Additionally, the conversions include assertions on all parameters to ensure that we are copying the parameters correctly and making the process as explicit as possible for the conversion of future models.
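
The extra artifacts written next to the Nanotron weights could be produced roughly like this (an illustrative sketch, not the actual conversion code; save_extras and its arguments are made up for the example):

```python
# Illustrative sketch: store the HF tokenizer and a basic config.yaml alongside the
# converted Nanotron checkpoint. Not the actual conversion code.
from pathlib import Path
from transformers import AutoTokenizer

def save_extras(nanotron_ckpt_path: str, hf_model_name: str, config_yaml: str) -> None:
    ckpt = Path(nanotron_ckpt_path)
    AutoTokenizer.from_pretrained(hf_model_name).save_pretrained(ckpt)  # tokenizer, as in HF checkpoints
    (ckpt / "config.yaml").write_text(config_yaml)  # basic training config to resume from this checkpoint
```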

TODO

  • Check torch.no_grad() in conversions
  • Improve logging, log_rank of Nanotron was not working correctly
  • Add README
  • Add push_to_hub flag in the Nanotron to HF conversion script

Instructions

Each file contains instructions in its header; I recommend the following commands to launch the evaluations and conversions.

torchrun --nproc-per-node 1 tools/llama3/generate_hf_predictions.py --pretrained-model-name-or-path meta-llama/Meta-Llama-3-8B-Instruct
torchrun --nproc-per-node 1 tools/llama3/convert_hf_to_nanotron.py --nanotron-checkpoint-path nanotron_checkpoints/NanotronLlama3-8B --pretrained-model-name-or-path meta-llama/Meta-Llama-3-8B-Instruct
torchrun --nproc-per-node 2 tools/llama3/generate_nanotron_predictions.py --tp 2 --nanotron-checkpoint-path nanotron_checkpoints/NanotronLlama3-8B
torchrun --nproc-per-node 1 tools/llama3/convert_nanotron_to_hf.py --nanotron-checkpoint-path nanotron_checkpoints/NanotronLlama3-8B --hugging-face-checkpoint-path hf_checkpoints/ConvertedNanotronLlama3-8B
torchrun --nproc-per-node 1 tools/llama3/generate_hf_predictions.py --pretrained-model-name-or-path hf_checkpoints/ConvertedNanotronLlama3-8B

Message 2:

Hi @xrsrke ,

After your comments about exploding gradient issues, I've run the following:

  1. Preprocessed the DKYoon/SlimPajama-6B dataset to use Nanoset
  2. Changed the TXT prompt of the generate_XXX.py scripts to a prompt generated by meta-llama/Meta-Llama-3-8B. I do this to get a high baseline accuracy in the tests so that flaws are easier to detect (if we start out performing badly and then perform worse, it's hard to tell where the degradation comes from).
  3. Run generate_hf_predictions.py for the base Llama-3-8B model, which gets an accuracy of 0.888671875:
    torchrun --nproc-per-node 1 tools/llama3/generate_hf_predictions.py --pretrained-model-name-or-path models/Meta-Llama-3-8B
  4. Convert the checkpoint to Nanotron (takes 2 minutes):
    torchrun --nproc-per-node 1 tools/llama3/convert_hf_to_nanotron.py --nanotron-checkpoint-path nanotron_checkpoints/NanotronLlama-3-8B --pretrained-model-name-or-path models/Meta-Llama-3-8B
  5. Generate Nanotron predictions with generate_nanotron_predictions.py with TP = 1 & TP = 2:
    torchrun --nproc-per-node 1 tools/llama3/generate_nanotron_predictions.py --tp 1 --nanotron-checkpoint-path nanotron_checkpoints/NanotronLlama-3-8B
    We get 0.888671875 & 0.869140625 with TP = 1 & TP = 2 respectively. This difference is due to TP and what I explained about shapes and GEMMs.
  6. Run a fine-tune for 500 steps with TP = 2 and 256000 tokens. The logs of the run are here. I don't see any issues.
  7. Then I run generate_nanotron_predictions.py with the new checkpoint with TP = 2. The accuracy is very, very low. Something is off.
    • First I reran the experiment for just 5 steps. Accuracy was still very low.
    • I tried PP = 2 & TP = 1 to check whether it was a TP problem. This didn't seem like a likely culprit because, as I've said, we can run the Nanotron generations with different TP sizes, and the 70B model is converted to a TP = DP = PP = 1 checkpoint and works in both conversion directions plus the generations. The accuracy was still very poor.
    • Finally, I reduced the learning rate. This was the actual problem, as I was using the default one. I set a very low value and trained for 100 iterations. The logs are also in W&B.
  8. Run predictions with the fine-tuned model. We get 0.876953125 & 0.86328125 with TP = 1 and TP = 2 respectively.
    torchrun --nproc-per-node 2 tools/llama3/generate_nanotron_predictions.py --tp 2 --nanotron-checkpoint-path nanotron_checkpoints/NanotronLlama-3-8B-finetuned/100
  9. Convert back to HuggingFace:
    torchrun --nproc-per-node 1 tools/llama3/convert_nanotron_to_hf.py --nanotron-checkpoint-path nanotron_checkpoints/NanotronLlama-3-8B-finetuned/100 --hugging-face-checkpoint-path models/Meta-Llama-3-8B-finetuned
  10. Run HuggingFace generations, getting an accuracy of 0.880859375:
    torchrun --nproc-per-node 1 tools/llama3/generate_hf_predictions.py --pretrained-model-name-or-path models/Meta-Llama-3-8B-finetuned

So I haven't experienced any problems; let me know if I should look into anything else!

Toni

P.S.: We could upload the Nanotron Llama3 checkpoints to the Hub, right?
P.P.S.: In W&B I've included the results of a dummy run with 5000 steps.

@ischlag ischlag merged commit c104c34 into swiss-ai:main Jul 2, 2024
1 of 3 checks passed