Using local LLMs to run Multi-GPT #10
redfort1987 started this conversation in Ideas
As the title suggests, wouldn't it be smart to use local LLMs to run the program? I ran about 80 iterations and it has already cost me about 8 euros in API fees.
I use models like ggml-gpt4all-j-v1.3-groovy.bin and ggml-model-q4_0.bin.
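To illustrate the idea, here is a minimal sketch of calling one of those ggml models locally through the gpt4all Python bindings (pip install gpt4all) instead of the OpenAI API. This is not how Multi-GPT is wired up today; the model file name is taken from the models mentioned above, the prompt is just a placeholder, and the exact constructor and generate() arguments depend on the gpt4all version you have installed.

```python
# Rough sketch: run a completion against a local ggml model with the
# gpt4all bindings, so no paid API calls are involved. The surrounding
# glue (prompt construction, response parsing) that Multi-GPT would need
# is assumed, not shown.
from gpt4all import GPT4All

# Load a local model file; gpt4all will download it if it is not already
# on disk, or you can point model_path at a directory of existing .bin files.
model = GPT4All(model_name="ggml-gpt4all-j-v1.3-groovy.bin")

prompt = "List three sub-tasks for researching local LLM inference."

# Generate entirely on the local machine.
response = model.generate(prompt, max_tokens=200, temp=0.7)
print(response)
```

Another route would be to run an OpenAI-compatible local server (e.g. llama-cpp-python's server module) and point the program's API base URL at it, which would need fewer changes to Multi-GPT itself.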