
LLM cost calculation problem #1449

Closed
jasonhp opened this issue Dec 30, 2024 · 5 comments

Comments

@jasonhp
Contributor

jasonhp commented Dec 30, 2024

I am trying to add a new model provider (Novita AI) to Skyvern-AI. While debugging with the model meta-llama/llama-3.1-70b-instruct, the llm_api_handler method called litellm.completion_cost, which threw this error:

This model isn't mapped yet. model=meta-llama/llama-3.1-70b-instruct, custom_llm_provider=openai. Add it here - https://github.com/BerriAI/litellm/blob/main/model_prices_and_context_window.json

Then I looked into LiteLLM and discovered that this JSON file is updated by a GitHub Action that requests https://openrouter.ai/api/v1/models to refresh model info. However, that action has been failing for a long time.

Can I work around this problem, or do my models need to be listed in LiteLLM before I can use them in Skyvern?

@suchintan
Contributor

@jasonhp I think they need to be listed in LiteLLM for them to work in Skyvern, although we shouldn't block Skyvern if cost information is not available.

Any chance you can add a try-catch around this code and continue gracefully?
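A minimal sketch of that guard, assuming the handler's cost lookup is a call like litellm.completion_cost (which raises for unmapped models). Here compute_cost is a stand-in for that call so the example is self-contained; the function name safe_completion_cost is hypothetical:

```python
def safe_completion_cost(compute_cost, response):
    """Return the cost of an LLM call, or 0.0 if the model is unmapped.

    compute_cost stands in for litellm.completion_cost; it raises an
    exception when the model is missing from
    model_prices_and_context_window.json.
    """
    try:
        return compute_cost(completion_response=response)
    except Exception:
        # Cost info is unavailable for this model; don't block the
        # run, just record zero cost and continue.
        return 0.0
```

With this wrapper, an unmapped model such as meta-llama/llama-3.1-70b-instruct would log a cost of 0 instead of aborting the request.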

@jasonhp
Contributor Author

jasonhp commented Jan 7, 2025

@suchintan Hi, thank you for the suggestion!

I’m not sure what the appropriate approach is here. Should I set llm_cost to 0 and update the database?

I think the long-term solution is to fetch model information from the /models API that LLM providers expose. For now, it looks like I need to specify the models manually in setup.sh, is that correct?

@suchintan
Contributor

#1 - I think setting it to 0 is a good workaround in this situation!
#2 - That's partially correct. The setup.sh config exists so others can use Skyvern out of the box; the biggest change that's needed is updating LLMConfig, e.g. around

if settings.ENABLE_AZURE_GPT4O_MINI:
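Self-contained sketch of the registration pattern that the settings flag above gates. The names LLMConfig, register_config, and ENABLE_NOVITA are illustrative assumptions modeled on Skyvern's config style, not verified signatures:

```python
from dataclasses import dataclass, field

@dataclass
class LLMConfig:
    # Illustrative fields only; the real Skyvern LLMConfig may differ.
    model_name: str
    required_env_vars: list[str] = field(default_factory=list)
    supports_vision: bool = False

# Stand-in for Skyvern's config registry.
registry: dict[str, LLMConfig] = {}

def register_config(key: str, config: LLMConfig) -> None:
    registry[key] = config

# Stand-in for a settings flag like ENABLE_AZURE_GPT4O_MINI above.
ENABLE_NOVITA = True

if ENABLE_NOVITA:
    register_config(
        "NOVITA_LLAMA_3_1_70B",
        LLMConfig(
            model_name="meta-llama/llama-3.1-70b-instruct",
            required_env_vars=["NOVITA_API_KEY"],
        ),
    )
```

The idea is that each provider/model pair gets registered behind its own settings flag, so enabling a new provider is a matter of adding a flag and a config entry rather than editing the handler code.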

@jasonhp
Contributor Author

jasonhp commented Jan 7, 2025

Done. I created PR #1508; please take a look, @suchintan.

@suchintan
Contributor

Got this merged in -- thank you so much @jasonhp
