Model capability registry
I don't know if a crowdsourced source of truth for all model capabilities exists that could easily be used as a model registry.
litellm ships with an endpoint that serves lists of supported models, which can conveniently be consumed by tools. Ideally, we'd have something similar, hosted on S3 or a CDN.
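For illustration, here is a minimal sketch of how a tool could consume such a registry. It assumes litellm's crowdsourced model metadata file published in its repository (model_prices_and_context_window.json); the field names used below ("litellm_provider", "max_tokens") reflect that file at the time of writing and may change.

# Sketch: fetch litellm's crowdsourced model metadata and look up one model.
# The URL and field names are assumptions based on litellm's repository
# layout and may change; this is not an official or stable API.
import json
import urllib.request

REGISTRY_URL = (
    "https://raw.githubusercontent.com/BerriAI/litellm/main/"
    "model_prices_and_context_window.json"
)

def fetch_registry() -> dict:
    with urllib.request.urlopen(REGISTRY_URL) as resp:
        return json.load(resp)

registry = fetch_registry()
info = registry.get("o3-mini", {})
print(info.get("litellm_provider"), info.get("max_tokens"))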
Example errors
Note: output below is truncated for the extra models integrated via litellm.
$ llm -m o3-mini --option temperature 0.5 "write a poem"
OpenAIException - Error code: 400 - {'error': {'message': "Unsupported parameter: 'temperature' is not supported with this model.", 'type': 'invalid_request_error', 'param': 'temperature', 'code': 'unsupported_parameter'}}
$ llm -m o3-mini --option max_tokens 50 "write a poem"
OpenAIException - Error code: 400 - {'error': {'message': "Unsupported parameter: 'max_tokens' is not supported with this model. Use 'max_completion_tokens' instead.", 'type': 'invalid_request_error', 'param': 'max_tokens', 'code': 'unsupported_parameter'}}
Commit 2a355ea#diff-72376d0b487d7231cf31959bf266f6aea098e899743bae02e36d176cef4db476R528 added a copy of the parameter list from the o1 model.
OpenAI returns an exception if at least one of the following parameters is used (I may not have tested all combinations); see the sketch after this list:
temperature
max_tokens (instead, max_completion_tokens should be used, as mentioned in OpenAI o1 models require max_completion_tokens instead of max_tokens #724)
top_p
frequency_penalty
presence_penalty
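Until such a registry exists, a client-side workaround is to sanitize parameters per model family. The following is a hypothetical sketch, not llm's actual code; the blocked parameter set comes from the list above, and the model-name prefix check is an assumption standing in for a real registry lookup.

# Hypothetical sketch, not llm's actual implementation: drop or translate
# parameters that OpenAI's reasoning models (o1/o3-mini) reject.
UNSUPPORTED_REASONING_PARAMS = {
    "temperature", "top_p", "frequency_penalty", "presence_penalty",
}

def sanitize_params(model: str, params: dict) -> dict:
    # Prefix check is an assumption; a registry lookup would replace it.
    if not model.startswith(("o1", "o3")):
        return dict(params)
    clean = {k: v for k, v in params.items()
             if k not in UNSUPPORTED_REASONING_PARAMS}
    # Reasoning models require max_completion_tokens instead of max_tokens.
    if "max_tokens" in clean:
        clean["max_completion_tokens"] = clean.pop("max_tokens")
    return clean

print(sanitize_params("o3-mini", {"temperature": 0.5, "max_tokens": 50}))
# {'max_completion_tokens': 50}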