Explain dependency on llama3.1 and how to use other models #77

Closed
jeremycaine opened this issue Jan 16, 2025 · 3 comments

@jeremycaine

It is not clear from the bee-stack docs what the requirement or dependency on llama3.1 is, or how to use other models instead of the default llama.

From a clean macOS install, using Podman and Ollama:

  1. Clone bee-stack

  2. Default install - only llama3.1 downloaded:

ollama pull llama3.1 
./bee-stack.sh clean
./bee-stack.sh setup
# overwrite .env file
./bee-stack.sh start

Bee Stack comes up and, going to Test Bee Assistant, the prompt "hello" gets a response.
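
(As a sanity check that Ollama was up and serving the expected model at this point; these are standard Ollama commands, nothing bee-stack specific:)

ollama list                              # llama3.1 is listed
curl http://localhost:11434/api/version  # confirms the server is reachable on the default port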

  3. Attempt to use Granite - only granite3-dense:8b downloaded:

ollama rm llama3.1
ollama pull granite3-dense:8b
./bee-stack.sh clean
./bee-stack.sh setup
# overwrite .env file
./bee-stack.sh start

Bee Stack comes up, but the prompt "hello" now returns:

An error occurred
	[cause]: Error: Internal server error

.env file

AI_BACKEND=ollama
EMBEDDING_BACKEND=ollama
OLLAMA_URL=http://host.docker.internal:11434
FEATURE_FLAGS='{"Knowledge":false,"Files":true,"TextExtraction":false,"FunctionTools":true,"Observe":true,"Projects":true}'
EXTRACTION_BACKEND=wdu

Checking the logs in bee-api:

bee-api-1  | {"level":"error",...."runs","failedReason":"model 'llama3.1' not found","data ....
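
So bee-api is still requesting llama3.1 even though only the granite model is present. Checking what Ollama is actually serving at this point (standard Ollama commands) confirms the mismatch:

ollama list                           # only granite3-dense:8b is listed now
curl http://localhost:11434/api/tags  # the HTTP equivalent; lists the locally served models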
  4. Other scenarios
    There is a reference in the starter framework to the env var OLLAMA_MODEL, but this does not appear to work.
    With only the granite model on my system I have tried various values:
OLLAMA_MODEL=granite3-dense:8b
OLLAMA_MODEL=granite3-dense
OLLAMA_MODEL="granite3-dense"
OLLAMA_MODEL="granite3-dense:8b"
@psschwei
Contributor

There are instructions for using custom models here: https://github.com/i-am-bee/bee-stack?tab=readme-ov-file#custom-models

Though they are rather far down the page... perhaps we could add a link in the usage section to the custom-models instructions?

@psschwei
Contributor

#68 is already in flight to add this info

@jeremycaine
Author

To be honest, given its limited explanation, I did not understand what those instructions do.

Q. Is that a POST configuration to the bee-api server to point to a different model?

Q. I assume ${BEE_API_KEY:-sk-proj-testkey} comes from

BEE_API=http://localhost:4000
BEE_API_KEY=sk-proj-testkey

which needs to be put in your .env file?
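
If so, my reading of the custom-models section is that it is a call shaped roughly like the following. The exact path and payload are guesses on my part; only the BEE_API / BEE_API_KEY values are from the docs:

# my reading of the README, not verified; the endpoint path and JSON body are guesses
curl -X POST "${BEE_API}/..." \
  -H "Authorization: Bearer ${BEE_API_KEY:-sk-proj-testkey}" \
  -H "Content-Type: application/json" \
  -d '{"model": "granite3-dense:8b"}'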

Elsewhere, in previous commits in other repos, there is a reference to an env var OLLAMA_MODEL. I assume this is no longer a feature, but it looks like a simpler way for a user to configure a different model.

--
I see that the above is explained in #36, but it still isn't certain that solving this via the .env file is the intended resolution. I vote for that as a clean and simple solution.
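
Concretely, what I was hoping for is that a single line in .env would switch the chat model, something along these lines (assuming OLLAMA_MODEL were actually honoured, which today it does not appear to be):

AI_BACKEND=ollama
EMBEDDING_BACKEND=ollama
OLLAMA_URL=http://host.docker.internal:11434
OLLAMA_MODEL=granite3-dense:8b   # hypothetical override, not currently read by bee-api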
