It is not clear from the `bee-stack` docs what the requirement or dependency on `llama3.1` is, or how to use other models instead of the default llama.

From a clean macOS, using Podman and Ollama:
1. Clone `bee-stack`. Default install - only `llama3.1` downloaded.
Bee Stack comes up and, going to Test Bee Assistant, the prompt "hello" gets a response.
Next,

2. Attempt to use Granite - only `granite3-dense:8b` downloaded, with an unmodified `.env` file.
Bee Stack comes up, but the prompt "hello" returns an error (a sketch of what I think is happening follows the log). Checking the logs in `bee-api`:

bee-api-1 | {"level":"error",...."runs","failedReason":"model 'llama3.1' not found","data ....
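A rough sketch of the failure mode as I understand it - for illustration only, this is not the actual `bee-api` code, and `OLLAMA_URL` / `DEFAULT_MODEL` are just assumed names. It only relies on Ollama's standard REST endpoints (`GET /api/tags`, `POST /api/chat`):

```ts
// Illustrative sketch only - not bee-api source. Assumes Node 18+ (global fetch)
// and a local Ollama on the default port.
const OLLAMA_URL = process.env.OLLAMA_URL ?? "http://localhost:11434"; // assumed var name
const DEFAULT_MODEL = "llama3.1"; // the default the stack appears to assume

async function listPulledModels(): Promise<string[]> {
  // GET /api/tags lists the models that have been pulled locally.
  const res = await fetch(`${OLLAMA_URL}/api/tags`);
  const body = (await res.json()) as { models: { name: string }[] };
  return body.models.map((m) => m.name);
}

async function main(): Promise<void> {
  const pulled = await listPulledModels();
  console.log("pulled models:", pulled); // e.g. ["granite3-dense:8b"] in scenario 2

  if (!pulled.some((name) => name.startsWith(DEFAULT_MODEL))) {
    // Scenario 2: only granite3-dense:8b is present, so anything that insists on
    // llama3.1 fails - matching the "model 'llama3.1' not found" line in the log above.
    console.error(`model '${DEFAULT_MODEL}' not found`);
    return;
  }

  // Scenario 1: llama3.1 is present, so a simple "hello" round-trip succeeds.
  const res = await fetch(`${OLLAMA_URL}/api/chat`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: DEFAULT_MODEL,
      messages: [{ role: "user", content: "hello" }],
      stream: false,
    }),
  });
  console.log(await res.json());
}

main().catch((err) => console.error(err));
```

Running that with only granite pulled reproduces the same "not found" failure independently of bee-stack, which is why I think the fix is purely about how the model name gets configured.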
Other scenarios

There is a reference in the starter framework to the env var `OLLAMA_MODEL`, but it does not appear to work. With only the granite model on my system I have tried various settings, without success (see the sketch at the end of this issue).

Elsewhere, in previous commits in other repos, there is a reference to an env var `OLLAMA_MODEL` - I assume this is no longer a feature, but it looks like a simpler way for a user to configure a different model.

--

I see that ^^ is then explained in #36 - but it still doesn't look certain that solving this via the `.env` file is the resolution. I vote for that as a clean and simple solution.
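To make the ask concrete, this is roughly what I mean by configuring the model via the `.env` file / `OLLAMA_MODEL` - purely a sketch of the behaviour I'd expect, not a claim about how `bee-api` is wired today; the helper name and the fallback value are assumptions:

```ts
// Illustrative sketch only. docker compose would pass OLLAMA_MODEL through from
// the .env file (e.g. a line such as OLLAMA_MODEL=granite3-dense:8b).
export function resolveOllamaModel(
  env: Record<string, string | undefined> = process.env,
): string {
  const model = env.OLLAMA_MODEL?.trim();
  // Fall back to today's default so existing setups keep working untouched.
  return model ? model : "llama3.1";
}

// Whatever builds the chat request would then use the resolved name instead of
// a hard-coded "llama3.1":
console.log(`using model: ${resolveOllamaModel()}`);
```

With something like that in place, a user who only pulls `granite3-dense:8b` changes a single line in `.env` and the rest of the stack follows.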