I use the Recorder function with a second prompt for post-transcription processing. It hadn't occurred to me that this second step uses whatever model is set for the brain/chat. So while I was experimenting with DeepSeek, all my transcriptions were taking much, much longer to come back to me, because the post-processing was being sent to DeepSeek, which was completely overthinking it.
In some future version, would it be possible to set separate models for the Recorder function? We can already choose the transcription model; it would be great if we could also choose the post-processing model (and perhaps some of its basic settings, like temperature). For my use case, I'd want a fast, cheap LLM for post-processing while keeping something much more capable as my chat model.
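To illustrate what I'm imagining, here's a rough sketch of the settings shape (all names here are hypothetical, not the app's actual config):

```typescript
// Hypothetical Recorder settings -- field names are illustrative only.
interface RecorderSettings {
  transcriptionModel: string;          // already configurable today
  postProcessingModel?: string;        // proposed: fall back to the chat model if unset
  postProcessingTemperature?: number;  // proposed: e.g. low temp for deterministic cleanup
}

// Example: cheap/fast cleanup model, independent of the chat model.
const recorder: RecorderSettings = {
  transcriptionModel: "whisper-1",
  postProcessingModel: "gpt-4o-mini",
  postProcessingTemperature: 0.2,
};
```

The key point is the fallback: if no post-processing model is set, behave exactly as today and use the chat model, so existing setups aren't affected.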
Thanks.