
Support LLM/Model Caching #3536

Open
1 of 5 tasks
warlockedward opened this issue Jan 7, 2025 · 0 comments

Routine checks

  • I have confirmed that no similar feature request currently exists
  • I have confirmed that I have upgraded to the latest version
  • I have read the project README in full and confirmed that the current version cannot meet this need
  • I understand and am willing to follow up on this feature request, assist with testing, and provide feedback
  • I understand and accept the above, and I understand that the maintainers' time is limited; feature requests that do not follow the rules may be ignored or closed outright

Feature description
A semantic cache for Large Language Models (LLMs): by caching previously generated model results, it reduces response time for similar requests and improves the user experience.
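To make the idea concrete, here is a minimal sketch of what a semantic cache could look like. It is illustrative only, not a proposed implementation: it assumes the sentence-transformers library with the all-MiniLM-L6-v2 embedding model, and the `SemanticCache` class, its methods, and the `answer` helper are hypothetical names invented for this example. Unlike an exact-match cache, lookups succeed when a new prompt is sufficiently similar (by cosine similarity) to a previously seen one.

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed embedding backend

class SemanticCache:
    """Cache LLM responses keyed by embedding similarity, not exact string match."""

    def __init__(self, threshold: float = 0.9):
        self.encoder = SentenceTransformer("all-MiniLM-L6-v2")
        self.threshold = threshold           # cosine-similarity cutoff for a "hit"
        self.embeddings: list[np.ndarray] = []
        self.responses: list[str] = []

    def _embed(self, text: str) -> np.ndarray:
        vec = self.encoder.encode(text)
        return vec / np.linalg.norm(vec)     # normalize so dot product = cosine sim

    def get(self, prompt: str) -> str | None:
        """Return a cached response if a semantically similar prompt was seen."""
        if not self.embeddings:
            return None
        query = self._embed(prompt)
        sims = np.stack(self.embeddings) @ query   # cosine similarity to all entries
        best = int(np.argmax(sims))
        return self.responses[best] if sims[best] >= self.threshold else None

    def put(self, prompt: str, response: str) -> None:
        self.embeddings.append(self._embed(prompt))
        self.responses.append(response)

# Usage: consult the cache before invoking the model, store the result afterwards.
cache = SemanticCache()

def answer(prompt: str, llm_call) -> str:
    hit = cache.get(prompt)
    if hit is not None:
        return hit                           # served from cache, no inference cost
    result = llm_call(prompt)
    cache.put(prompt, result)
    return result
```

A production version would replace the linear scan with a vector index (as gptcache and modelcache do) and add eviction and invalidation policies, but the lookup-before-inference flow above is the core of the request.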

Application scenario
Introducing a caching layer helps enterprises and research institutions reduce inference and deployment costs, improve model performance and efficiency, and provide scalable services for large models.

Related examples
Similar to open-source projects such as gptcache and modelcache.
