
Allow different model for Recorder post-processing #97

Open
ChrisBBBB opened this issue Jan 31, 2025 · 1 comment

Comments

@ChrisBBBB

I use the Recorder function with a second prompt for post-transcription processing. It hadn't occurred to me that this second pass uses whichever model is set for the brain/chat. So while I was experimenting with DeepSeek, all my transcriptions took much, much longer to come back, because the post-processing was being sent to DeepSeek, which was completely overthinking it.

In some future version, would it be possible to set a different model for the Recorder? We can already choose which transcription model it uses; it would be great if we could also choose the post-processing model (and perhaps some of its basic settings, such as temperature). For my use case, I'd want a fast, cheap LLM for the post-processing, while keeping something much more capable as my chat model.

Thanks.
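For what it's worth, here is a minimal sketch of what a per-task model setting could look like. All names here (`ModelSettings`, `RecorderConfig`, the model strings) are hypothetical and not taken from the project; the sketch only illustrates falling back to the chat model when no dedicated post-processing model is set, which mirrors the current behaviour described above.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ModelSettings:
    """Hypothetical per-model settings (name plus basic options)."""
    model: str
    temperature: float = 0.7


@dataclass
class RecorderConfig:
    """Hypothetical Recorder config with a separate post-processing slot."""
    chat: ModelSettings
    transcription: ModelSettings
    # When left unset, post-processing falls back to the chat model,
    # which is the current behaviour the issue describes.
    post_processing: Optional[ModelSettings] = None

    def model_for_post_processing(self) -> ModelSettings:
        return self.post_processing or self.chat


config = RecorderConfig(
    chat=ModelSettings("deepseek-chat"),            # heavyweight chat model
    transcription=ModelSettings("whisper-1"),       # already configurable today
    post_processing=ModelSettings("gpt-4o-mini",    # fast, cheap model
                                  temperature=0.2),
)

print(config.model_for_post_processing().model)
```

With `post_processing` left as `None`, the same call would return the chat model, so existing setups would keep working unchanged.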


mentatbot bot commented Jan 31, 2025

If you would like me to solve this issue, either tag me in a comment or check this box:

  • Solve Issue

You can disable automatic comments on my settings page
