llm_pipeline_static: Init eos token from tokenizer if not provided (#1222)

When using NPU, the eos token is not initialized correctly (at least for certain models). This causes the chat sample to have a conversation with itself:

```
>chat_sample.exe Meta-Llama-3-8B-Instruct
question:
hello!
Hello! It's nice to meet you! Is there something I can help you with, or would you like to chat?assistant

Nice to meet you too! I'm just a language model, I don't have personal experiences or emotions, but I'm here to help answer any questions you might have or engage in a fun conversation! What's on your mind? Want to talk about something in particular or just shoot the breeze?assistant

Sounds like fun! I
----------
question:
```

Borrowing some initialization code from *StatefulLLMPipeline*, where the eos token is initialized from the tokenizer within the constructor if it has not been provided, resolves the issue:

```
> chat_sample.exe Meta-Llama-3-8B-Instruct
question:
hello!
Hello! It's nice to meet you! Is there something I can help you with, or would you like to chat?
----------
question:
```