feat: Add new model provider Novita AI #1508
Conversation
❌ Changes requested. Reviewed everything up to 2d35731 in 1 minute and 46 seconds.
More details:
- Looked at 263 lines of code in 6 files
- Skipped 0 files when reviewing
- Skipped posting 2 drafted comments based on config settings
1. skyvern/forge/sdk/api/llm/api_handler_factory.py:148
- Draft comment: The logging statement uses an incorrect keyword argument error. Use exc_info=True to log the exception details: LOG.error("Failed to calculate LLM cost", exc_info=True)
- Reason this comment was not posted: Confidence changes required: 50%. The try-except block for calculating LLM cost is correctly implemented, but the logging statement uses an incorrect keyword argument 'error'.
2. skyvern/forge/sdk/api/llm/api_handler_factory.py:287
- Draft comment: The logging statement uses an incorrect keyword argument error. Use exc_info=True to log the exception details: LOG.error("Failed to calculate LLM cost", exc_info=True)
- Reason this comment was not posted: Confidence changes required: 50%. The try-except block for calculating LLM cost is correctly implemented, but the logging statement uses an incorrect keyword argument 'error'.
Workflow ID: wflow_y5grCcWYZoI2ySC8
Want Ellipsis to fix these issues? Tag @ellipsis-dev in a comment. You can customize Ellipsis with 👍 / 👎 feedback, review rules, user-specific overrides, quiet mode, and more.
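For reference, a minimal standalone sketch of the behavior the drafted comments point at, using the stdlib logging module; the failing division is a stand-in for a cost-lookup failure and none of this is code from the PR:

```python
# Illustration of exc_info=True vs. passing only the error text.
import logging

logging.basicConfig(level=logging.INFO)
LOG = logging.getLogger(__name__)

try:
    llm_cost = 1 / 0  # stand-in for litellm.completion_cost(...) raising
except Exception as e:
    # Carries only the error text, no traceback:
    LOG.error("Failed to calculate LLM cost: %s", e)
    # Attaches the current exception and full traceback to the log record:
    LOG.error("Failed to calculate LLM cost", exc_info=True)
```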
@@ -142,7 +142,11 @@ async def llm_api_handler_with_router_and_fallback(
            observer_thought=observer_thought,
        )
        if step:
-           llm_cost = litellm.completion_cost(completion_response=response)
+           try:
+               llm_cost = litellm.completion_cost(completion_response=response)
This error handling block is duplicated. Consider extracting this into a helper function to avoid code duplication.
- LLM cost calculation error handling (api_handler_factory.py)
Should I fix this? The original code is duplicated, too.
no it's OK
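Though the duplication was left as-is, here is a hedged sketch of the extraction the reviewer describes; the helper name, placement, and call sites are assumptions for illustration, not code from the PR:

```python
# Hypothetical helper consolidating the duplicated try/except around
# litellm.completion_cost; requires the litellm and structlog packages.
import litellm
import structlog

LOG = structlog.get_logger()

def calculate_llm_cost(response) -> float:
    """Return the LiteLLM-estimated cost, or 0 when pricing info is unavailable."""
    try:
        return litellm.completion_cost(completion_response=response)
    except Exception:
        # Records the traceback, in line with the LOG.exception feedback below.
        LOG.exception("Failed to calculate LLM cost")
        return 0.0

# Both call sites in api_handler_factory.py could then reduce to one line each:
# llm_cost = calculate_llm_cost(response)
```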
            try:
                llm_cost = litellm.completion_cost(completion_response=response)
            except Exception as e:
                LOG.error("Failed to calculate LLM cost", error=str(e))
You should use LOG.exception here instead, as that will log the stack trace automatically.
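A small sketch of that difference, assuming a structlog-style logger (which the error=str(e) keyword usage suggests); the setup below is illustrative and not the project's logging configuration:

```python
# Contrast LOG.error(..., error=str(e)) with LOG.exception(...) under structlog.
import structlog

LOG = structlog.get_logger()

try:
    raise ValueError("missing model pricing info")  # stand-in for a cost-lookup failure
except Exception as e:
    # Emits the event plus the error text, but no traceback:
    LOG.error("Failed to calculate LLM cost", error=str(e))
    # Emits at error level and attaches the current traceback automatically:
    LOG.exception("Failed to calculate LLM cost", error=str(e))
```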
            try:
                llm_cost = litellm.completion_cost(completion_response=response)
            except Exception as e:
                LOG.error("Failed to calculate LLM cost", error=str(e))
-                LOG.error("Failed to calculate LLM cost", error=str(e))
+                LOG.exception("Failed to calculate LLM cost", error=str(e))
Fixed
            try:
                llm_cost = litellm.completion_cost(completion_response=response)
            except Exception as e:
                LOG.error("Failed to calculate LLM cost", error=str(e))
-                LOG.error("Failed to calculate LLM cost", error=str(e))
+                LOG.exception("Failed to calculate LLM cost", error=str(e))
Fixed
This is great
@jasonhp are you open to fixing the mypy + ruff errors?
I'll get this merged right after :) it's in a really good state
Ruff error fixed
Thank you for your first commit!!
Adds error handling for litellm.completion_cost. Will set the cost to 0 if model information from LiteLLM is missing.

Important

Adds Novita AI as a new model provider and implements error handling for missing model information in cost calculation.

- Adds Novita AI model configurations to config_registry.py.
- Adds error handling in api_handler_factory.py for litellm.completion_cost to set the cost to 0 if model info is missing.
- Updates .env.example and config.py to include Novita AI settings.
- Updates setup.sh to configure Novita AI during setup.
- Updates README.md to list Novita AI as a supported provider.

This description was created by Ellipsis for 2d35731. It will automatically update as commits are pushed.
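To make the shape of such a change concrete, here is a purely hypothetical sketch of registering an environment-gated provider; every name below (the flag, the registry key, the model route, the dataclass) is an assumption for illustration and is not taken from the PR or from Skyvern's actual config_registry.py API:

```python
# Hypothetical provider registration, gated on env vars as the bullets above describe.
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class LLMConfig:
    model_name: str        # identifier passed through to LiteLLM
    api_key_env: str       # env var holding the provider API key
    supports_vision: bool  # whether the model accepts image input

REGISTRY: dict[str, LLMConfig] = {}

# Only register the provider's models when it is enabled in the environment.
if os.getenv("ENABLE_NOVITA", "false").lower() == "true":  # hypothetical flag
    REGISTRY["NOVITA_DEEPSEEK_CHAT"] = LLMConfig(          # hypothetical key
        model_name="novita/deepseek-chat",                 # hypothetical route
        api_key_env="NOVITA_API_KEY",
        supports_vision=False,
    )
```

Under this sketch, a setup script would only need to prompt for the API key and flip the enable flag, which mirrors the .env.example and setup.sh bullets above.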