
fix(0.5.1): add tokenizer.get_vocab() | Diff generation endpoint / text.novelai.net or api.novelai.net #76

Merged
sudoskys merged 3 commits into main on Sep 26, 2024

Conversation

sudoskys
Member

No description provided.

🔧 refactor: add exception detail in decode error log in __init__.py
- Update status code check to include 201 in generate_stream.py
- Add endpoint normalization for specific models in generate/__init__.py (see the sketch after this list)
- Change logger level from debug to trace for out-of-range tokens
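
The endpoint normalization can be illustrated with a minimal sketch. The function name, the set of models routed to text.novelai.net, and the model identifiers below are assumptions for illustration, not the library's actual API; the idea is simply to rewrite api.novelai.net to text.novelai.net for models served from the dedicated text host.

```python
# Illustrative sketch only: names below (normalize_endpoint, TEXT_ONLY_MODELS,
# and the model ids) are assumptions, not identifiers from novelai-python.

API_ENDPOINT = "https://api.novelai.net"
TEXT_ENDPOINT = "https://text.novelai.net"

# Hypothetical set of models that must be served from text.novelai.net.
TEXT_ONLY_MODELS = {"kayra-v1", "llama-3-erato-v1"}


def normalize_endpoint(model: str, endpoint: str) -> str:
    """Route text-only models to text.novelai.net; leave other models
    on whatever endpoint was configured."""
    if model in TEXT_ONLY_MODELS and "api.novelai.net" in endpoint:
        return endpoint.replace("api.novelai.net", "text.novelai.net")
    return endpoint


if __name__ == "__main__":
    print(normalize_endpoint("kayra-v1", API_ENDPOINT))  # https://text.novelai.net
    print(normalize_endpoint("clio-v1", API_ENDPOINT))   # https://api.novelai.net
```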
Added a get_vocab() method to SimpleTokenizer and a total_tokens() method to NaiTokenizer for retrieving the vocabulary size. Updated schema validations in _schema.py for stricter type enforcement. Bumped the project version to 0.5.1 in pyproject.toml.
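
A minimal sketch of what the new tokenizer helpers might look like. Only the method names get_vocab and total_tokens come from the commit message; the class internals (an encoder dict mapping token strings to ids) are assumptions for illustration and may differ from the library's actual implementation.

```python
# Sketch under assumptions: the real SimpleTokenizer/NaiTokenizer in
# novelai-python are more involved; this only shows the shape of the
# two methods added in this PR.
from typing import Dict


class SimpleTokenizer:
    def __init__(self, encoder: Dict[str, int]):
        # encoder maps token string -> token id (assumed representation)
        self.encoder = encoder

    def get_vocab(self) -> Dict[str, int]:
        """Return the full token -> id mapping."""
        return dict(self.encoder)


class NaiTokenizer:
    def __init__(self, tokenizer: SimpleTokenizer):
        self.tokenizer = tokenizer

    def total_tokens(self) -> int:
        """Return the vocabulary size, i.e. the number of known tokens."""
        return len(self.tokenizer.get_vocab())


if __name__ == "__main__":
    tk = NaiTokenizer(SimpleTokenizer({"<pad>": 0, "hello": 1, "world": 2}))
    print(tk.total_tokens())  # -> 3
```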
@sudoskys sudoskys marked this pull request as ready for review September 26, 2024 12:26
@sudoskys sudoskys merged commit 7c71f0c into main Sep 26, 2024
4 checks passed