
Add support for local models for llama.cpp in text #8

Open
ixaxaar opened this issue Feb 27, 2024 · 3 comments
Labels
enhancement New feature or request good first issue Good for newcomers

Comments

@ixaxaar
Member

ixaxaar commented Feb 27, 2024

No description provided.

@ixaxaar ixaxaar converted this from a draft issue Feb 27, 2024
@ixaxaar ixaxaar added enhancement New feature or request good first issue Good for newcomers labels Feb 27, 2024
@jalotra

jalotra commented Feb 28, 2024

Is the requirement just to load models from a local path?

@ixaxaar
Member Author

ixaxaar commented Feb 28, 2024

Yep, currently we use `.from_pretrained`, which only loads models from the Hugging Face Hub.
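A minimal sketch of one way to support both sources: check whether the argument resolves to an existing local path before treating it as a hub id. The helper name and return values here are hypothetical, not from the geniusrise codebase.

```python
import os


def resolve_model_source(name_or_path: str) -> str:
    """Hypothetical helper: classify a model argument as a local
    path or a hub-style repo id (e.g. "org/model-name")."""
    if os.path.isdir(name_or_path) or os.path.isfile(name_or_path):
        return "local"
    return "hub"
```

A caller could then route "local" results to a local loader (e.g. llama-cpp-python's `Llama(model_path=...)` for GGUF files) and "hub" results to the existing `.from_pretrained` path.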

@ixaxaar ixaxaar closed this as completed Feb 28, 2024
@github-project-automation github-project-automation bot moved this from Todo to Done in geniusrise Feb 28, 2024
@ixaxaar ixaxaar reopened this Feb 28, 2024
@ixaxaar
Member Author

ixaxaar commented Feb 28, 2024

Closed by mistake :/

@ixaxaar ixaxaar moved this from Done to Todo in geniusrise Feb 28, 2024
Projects
Status: Todo
Development

No branches or pull requests

2 participants