Describe the bug
Hi, I'm using a model with LM Studio (Mistral) and get this error message:
"2024-12-17 06:18:03 [INFO] [LM STUDIO SERVER] Received OPTIONS request. {}
2024-12-17 06:18:03 [INFO]
Received POST request to /v1/chat/completions with body: {
"model": "mistral-7b-instruct-v0.3",
"messages": [
{
"role": "system",
"content": "You are the universe's most intelligent assistant. Your responses are succinct, problem solving, and concise. You don't limit yourself to something as unimportant as a simple AI; you're much more than that - an entity built on human history, ecompassing knowledge from all depths of what has been created thus far."
},
{
"role": "user",
"content": "Hello"
},
{
"role": "assistant",
"content": ""
},
{
"role": "user",
"content": "hi"
},
{
"role": "assistant",
"content": ""
},
{
"role": "user",
"content": "Hi"
}
],
"stream": true,
"temperature": 0.2,
"max_tokens": 4096
}
2024-12-17 06:18:03 [INFO] [LM STUDIO SERVER] Running chat completion on conversation with 6 messages.
2024-12-17 06:18:03 [INFO] [LM STUDIO SERVER] Streaming response...
2024-12-17 06:18:03 [ERROR] Only user and assistant roles are supported!. Error Data: n/a, Additional Data: n/a
2024-12-17 06:18:03 [INFO] [LM STUDIO SERVER] Client disconnected. Stopping generation..
2024-12-17 06:18:03 [DEBUG]
1 Error predicting: Error: Only user and assistant roles are supported!
at /Applications/LM Studio.app/Contents/Resources/app/.webpack/main/llmworker.js:28:121414
at J.value (/Applications/LM Studio.app/Contents/Resources/app/.webpack/main/llmworker.js:28:120496)
at te.evaluateCallExpression (/Applications/LM Studio.app/Contents/Resources/app/.webpack/main/llmworker.js:28:114734)
at te.evaluate (/Applications/LM Studio.app/Contents/Resources/app/.webpack/main/llmworker.js:28:119767)
at te.evaluateBlock (/Applications/LM Studio.app/Contents/Resources/app/.webpack/main/llmworker.js:28:114336)
at te.evaluateIf (/Applications/LM Studio.app/Contents/Resources/app/.webpack/main/llmworker.js:28:116793)
at te.evaluate (/Applications/LM Studio.app/Contents/Resources/app/.webpack/main/llmworker.js:28:119084)
at te.evaluateBlock (/Applications/LM Studio.app/Contents/Resources/app/.webpack/main/llmworker.js:28:114336)
at te.evaluateIf (/Applications/LM Studio.app/Contents/Resources/app/.webpack/main/llmworker.js:28:116793)
at te.evaluate (/Applications/LM Studio.app/Contents/Resources/app/.webpack/main/llmworker.js:28:119084)
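Judging by the stack trace (evaluateIf, evaluateCallExpression), the error appears to be raised while LM Studio renders the model's embedded Jinja chat template, which in this Mistral build seems to accept only user and assistant roles. A minimal client-side sketch of a workaround follows, assuming the plugin builds an OpenAI-style message array; the names below are illustrative, not the plugin's actual code. It folds the system prompt into the first user message and, on the assumption that a strictly alternating template may also choke on them, drops the empty assistant turns:

```typescript
// Hedged sketch: fold the "system" message into the first "user" message and
// drop empty assistant turns before sending to LM Studio's OpenAI-compatible
// endpoint. ChatMessage, toUserAssistantOnly, and chat are hypothetical names.

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function toUserAssistantOnly(messages: ChatMessage[]): ChatMessage[] {
  const system = messages.find((m) => m.role === "system");
  // Keep only non-empty user/assistant turns.
  const turns = messages.filter(
    (m) => m.role !== "system" && m.content.trim() !== ""
  );
  if (system && turns.length > 0 && turns[0].role === "user") {
    // Prepend the system prompt to the first user turn.
    turns[0] = { ...turns[0], content: `${system.content}\n\n${turns[0].content}` };
  }
  return turns;
}

async function chat(baseUrl: string, messages: ChatMessage[]): Promise<Response> {
  return fetch(`${baseUrl}/v1/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "mistral-7b-instruct-v0.3",
      messages: toUserAssistantOnly(messages),
      stream: true,
      temperature: 0.2,
      max_tokens: 4096,
    }),
  });
}
```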
Model Information
Model: lmstudio-community/Mistral-7B-Instruct-v0.3-GGUF
File: Mistral-7B-Instruct-v0.3-Q4_K_M.gguf
Format: GGUF
Quantization: Q4_K_M
Arch: llama
Domain: llm
Size on disk: 4.37 GB
API Usage
This model's API identifier: mistral-7b-instruct-v0.3
✅ The local server is reachable at http://192.168.1.82:1234
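To separate a template problem from a network or plugin problem, one can send a minimal request with only a user message straight to the address above; if that succeeds while the same request plus a system message fails, the chat template is the likely cause. A sketch, assuming the server address and model identifier shown in this report:

```typescript
// Minimal probe against the local LM Studio server from this report.
// If this request succeeds but adding a {"role": "system", ...} message
// makes it fail, the model's chat template is the likely culprit.
async function probe(): Promise<void> {
  const res = await fetch("http://192.168.1.82:1234/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "mistral-7b-instruct-v0.3",
      messages: [{ role: "user", content: "Hello" }],
      stream: false,
    }),
  });
  const data = await res.json();
  console.log(data.choices[0].message.content);
}

probe().catch(console.error);
```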
Environment
To Reproduce
Sending a chat prompt in Obsidian
Error is solved. Maybe it had to do with the network option or with other AI plugins in Obsidian? I now use Mistral Nemo and it works like a charm. Great work!

So glad to hear that it works. I'm doing a full refactor right now so problems like this won't occur in the future; that's why I've been somewhat radio silent here on the GitHub issues. If you think everything works well right now, wait until this next update.