Working with ollama or llama.cpp #60
Comments
hello @region23, yes it is possible to use a local model. What you'd need to do is change the endpoint setting to point at your local endpoint and make sure to update the prompt template to match your model.
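As a rough sketch of what that entails (this is not the extension's or llm-ls's actual code): the request goes to a locally running llama.cpp server instead of the hosted API, and the prompt is built with the model's own fill-in-the-middle template. The port, model, and the Code Llama sentinel tokens below are assumptions to illustrate the idea; check your model card for the exact template.

```python
# Sketch only: querying a local llama.cpp example server (assumed on localhost:8080)
# with Code Llama's fill-in-the-middle prompt template.
import requests

prefix = "def fibonacci(n):\n    "
suffix = "\n\nprint(fibonacci(10))"

# Code Llama's infill template; other models use different sentinel tokens,
# which is why the extension's prompt template has to be updated to match the model.
prompt = f"<PRE> {prefix} <SUF>{suffix} <MID>"

resp = requests.post(
    "http://localhost:8080/completion",  # local endpoint instead of the hosted API
    json={"prompt": prompt, "n_predict": 128, "temperature": 0.2, "stop": ["<EOT>"]},
    timeout=60,
)
print(resp.json()["content"])  # the generated middle section
```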
For now, ollama's API is not supported; it's on the to-do list though!
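For context, ollama exposes its own HTTP API whose request and response shape differs from what the extension currently speaks, which is why a dedicated adapter is needed. A minimal sketch of a raw call to a local ollama instance, assuming ollama is running on its default port and the codellama model has been pulled:

```python
# Sketch only: a raw request to a local ollama instance (default port 11434),
# assuming `ollama pull codellama` has already been run.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "codellama",
        "prompt": "# Write a function that reverses a string\n",
        "stream": False,  # return a single JSON object instead of a stream
    },
    timeout=120,
)
print(resp.json()["response"])
```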
Also created an issue for llama.cpp: huggingface/llm-ls#28
This issue is stale because it has been open for 30 days with no activity.
+1
This issue is stale because it has been open for 30 days with no activity.
Is there a timeline for when feat: Add adaptors for ollama and openai #117 might be merged?
Finishing the last touches of fixes on llm-ls and testing everything works as expected for
With the publication of Code Llama, it became possible to run an LLM on a local machine using ollama or llama.cpp.
How do you configure your extension to work with a local Code Llama model?