🚀 Describe the new functionality needed
Chances are the number of inference providers is beyond our capacity to support.
What if we provided an inference provider that routes through OpenRouter or LiteLLM?
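For illustration, here is a minimal sketch of what a single LiteLLM-backed provider could look like. The adapter class and the model strings are hypothetical placeholders; only `litellm.completion` and its OpenAI-compatible response shape are existing LiteLLM behavior.

```python
# Minimal sketch, assuming we wrap LiteLLM as one generic remote provider.
# `LiteLLMInferenceAdapter` is an illustrative name, not an existing API.
import litellm


class LiteLLMInferenceAdapter:
    """Routes chat completion requests through LiteLLM, which dispatches
    to whichever upstream provider the model string names."""

    def __init__(self, api_key: str | None = None):
        self.api_key = api_key

    def chat_completion(self, model: str, messages: list[dict]) -> str:
        # LiteLLM uses provider-prefixed model strings, e.g.
        # "openrouter/<vendor>/<model>" or "anthropic/<model>".
        response = litellm.completion(
            model=model,
            messages=messages,
            api_key=self.api_key,
        )
        return response.choices[0].message.content


# Usage: one adapter covers many upstream providers without bespoke code per provider.
adapter = LiteLLMInferenceAdapter(api_key="sk-...")
print(adapter.chat_completion(
    model="openrouter/meta-llama/llama-3.1-70b-instruct",  # example model id
    messages=[{"role": "user", "content": "Hello!"}],
))
```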
💡 Why is this needed? What if we don't build it?
This way users could access the majority of inference providers through one of these aggregators, without us having to support every single one ourselves, which would become unsustainable and make us the bottleneck of the ecosystem.
Other thoughts
Wdyt?