Meet LLM cost problem #1449
Comments
@jasonhp I think they need to be listed in LiteLLM for them to work in Skyvern, although we shouldn't block Skyvern if cost information is not available. Any chance you can add a try-catch around this code and continue gracefully?
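The graceful-degradation idea suggested above could be sketched as a small wrapper like the following (a sketch only: `safe_completion_cost` is a hypothetical helper name, and in Skyvern the wrapped callable would be `litellm.completion_cost`):

```python
def safe_completion_cost(cost_fn, *args, **kwargs) -> float:
    """Return the cost of an LLM call, or 0.0 when the pricing lookup
    fails (e.g. the model is not listed in LiteLLM's cost table).

    In Skyvern this would wrap litellm.completion_cost so that a
    missing pricing entry never aborts the task.
    """
    try:
        return float(cost_fn(*args, **kwargs))
    except Exception:
        # Unknown models raise here; swallow the error and report zero
        # cost instead of blocking the run.
        return 0.0
```

The design choice is simply that cost tracking is best-effort telemetry, so a lookup failure should degrade to "cost unknown" rather than propagate.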
@suchintan Hi, thank you for the suggestion! I'm not sure what the appropriate approach is here. Should I set it to zero? I think the final solution is to fetch the model information from the provider.
Done, and created PR #1508. Please take a look, @suchintan.
Got this merged in -- thank you so much @jasonhp
I am trying to add a new model provider (Novita AI) to Skyvern-AI. While debugging with the model `meta-llama/llama-3.1-70b-instruct`, the `llm_api_handler` method called `litellm.completion_cost` and it threw an error. I then looked into LiteLLM and discovered that its pricing JSON file is updated by a GitHub Action that requests https://openrouter.ai/api/v1/models to refresh model info, but that action has been failing for a long time.
Can I work around this problem, or do my models have to be listed in LiteLLM before I can use them in Skyvern?
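For context, LiteLLM computes cost from a static per-model pricing table, and a model absent from that table raises an error. A minimal sketch of that mechanism under stated assumptions (the `MODEL_COST` map and `estimate_cost` helper below are illustrative, not LiteLLM's actual code, and the prices are made up):

```python
# Illustrative subset of a pricing table in the style of LiteLLM's
# model_prices_and_context_window.json (values are invented).
MODEL_COST = {
    "gpt-4o-mini": {
        "input_cost_per_token": 1.5e-07,
        "output_cost_per_token": 6.0e-07,
    },
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Cost lookup in the spirit of litellm.completion_cost: it fails
    loudly when the model has no pricing entry."""
    info = MODEL_COST.get(model)
    if info is None:
        # This is the failure mode hit with meta-llama/llama-3.1-70b-instruct:
        # the model simply is not in the pricing table yet.
        raise ValueError(f"model {model!r} is not mapped in the cost table")
    return (prompt_tokens * info["input_cost_per_token"]
            + completion_tokens * info["output_cost_per_token"])
```

This is why an unlisted provider breaks cost reporting even though the completion itself succeeds: the completion goes to the provider, but the cost lookup only consults the local table.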