
configure model options that can be used on the command line #11

Open · wants to merge 1 commit into main
Conversation


@mvtango mvtango commented Jun 8, 2024

Allows specifying max_output_tokens and other parameters on the command line (issue #3), like so:

llm prompt --model gemini-pro  --option max_output_tokens 100  "Help me to find names for my new bicycle!"
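A minimal sketch of what the plugin-side handling of such `--option key value` pairs might look like. This is illustrative, not the PR's actual code: the CLI delivers option values as strings, so they need to be coerced to the types the Gemini API expects before use. Only `max_output_tokens` is named in this PR; the other three option names here are assumptions based on common Gemini generation parameters.

```python
# Hedged sketch (not the plugin's actual implementation): coerce string
# values from "--option key value" CLI pairs into the types the API expects.
# Only max_output_tokens appears in the PR text; temperature, top_p, and
# top_k are hypothetical examples of other generation parameters.
OPTION_TYPES = {
    "max_output_tokens": int,
    "temperature": float,
    "top_p": float,
    "top_k": int,
}

def coerce_options(pairs):
    """Turn CLI pairs like [("max_output_tokens", "100")] into typed values."""
    options = {}
    for key, value in pairs:
        if key not in OPTION_TYPES:
            raise ValueError(f"Unknown option: {key}")
        options[key] = OPTION_TYPES[key](value)
    return options

print(coerce_options([("max_output_tokens", "100"), ("temperature", "0.5")]))
# {'max_output_tokens': 100, 'temperature': 0.5}
```

Rejecting unknown keys up front gives the user an immediate error on a typo instead of a confusing failure from the API.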


mvtango commented Jun 9, 2024

For a start, I've added only four of the available options, the ones I needed. If this is the right approach, I'll add the remaining options and update the documentation.


xeb commented Aug 17, 2024

+1 for this PR
