[Bug] If the multi-turn conversation history contains tool_calls, the tool_calls arguments in the response are wrapped in an extra layer of string #3058
Comments
It should have no impact on function_call, since every JSON content should be passed through json.loads before it is used.
What I actually want to point out is that, according to the OpenAI API definition, `arguments` is already a JSON-encoded string, so it should not be serialized a second time.
This is because the original content is itself JSON that remains a str after being loaded. In theory, running json.loads and then json.dumps again will restore the original content.
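To make the double encoding concrete, here is a small standalone sketch (plain Python, not lmdeploy code) of the round trip described above versus what happens when an already-serialized string is dumped again:

```python
import json

# The client sends `arguments` already serialized as a JSON string,
# as the OpenAI API defines it.
arguments = '{"location": "Beijing, China"}'

# Round trip: loads then dumps restores the original content,
# which is what the comment above describes.
assert json.dumps(json.loads(arguments)) == '{"location": "Beijing, China"}'

# But dumping the string again, without loading it first, wraps it in
# another layer of quoting, which is what shows up in the response.
nested = json.dumps(arguments)
print(nested)              # "{\"location\": \"Beijing, China\"}"
print(type(json.loads(nested)))  # <class 'str'> -> still not a dict, so downstream parsing breaks
```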
Checklist
Describe the bug
Model: Qwen2.5-32B-Instruct-AWQ
I tried keeping messages containing tool_calls in the multi-turn conversation history, and in the new response the tool_calls arguments ended up wrapped in an extra layer of string.
I reproduced this following the Qwen2.5 demo in the Tools Calling documentation; a sketch of the request is shown below, and the observed output follows it:
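The sketch goes through the OpenAI-compatible server; the base_url/port, the model name, and the `get_current_temperature` tool schema are placeholders from my setup rather than anything lmdeploy prescribes:

```python
from openai import OpenAI

# Placeholder endpoint and key for a locally running api_server.
client = OpenAI(base_url="http://localhost:23333/v1", api_key="none")

# Hypothetical tool definition, following the OpenAI tools schema.
tools = [{
    "type": "function",
    "function": {
        "name": "get_current_temperature",
        "description": "Get the current temperature for a location.",
        "parameters": {
            "type": "object",
            "properties": {"location": {"type": "string"}},
            "required": ["location"],
        },
    },
}]

messages = [
    {"role": "user", "content": "What is the temperature in Beijing?"},
    # Assistant turn kept in history: `arguments` is already a JSON *string*.
    {"role": "assistant", "content": "", "tool_calls": [{
        "id": "call_0", "type": "function",
        "function": {"name": "get_current_temperature",
                     "arguments": '{"location": "Beijing, China"}'},
    }]},
    {"role": "tool", "tool_call_id": "call_0", "content": "25 C"},
    {"role": "user", "content": "And in Shanghai?"},
]

resp = client.chat.completions.create(model="Qwen2.5-32B-Instruct-AWQ",
                                      messages=messages, tools=tools)
print(resp.choices[0].message.tool_calls)
```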
History Messages:
Response:
You can see that
arguments='"{\\"location\\": \\"Beijing, China\\"}"'
, which causes the arguments to fail to parse when the tool is subsequently invoked. I looked at the source code and found that lmdeploy applies an extra json.dumps to the tool_calls arguments when assembling the prompt template; besides Qwen2.5, InternLM2 does something similar. Is this a bug?
https://github.com/InternLM/lmdeploy/blob/3f8b079224d109aaa1b512867203a47ea600aa7d/lmdeploy/model.py#L1066C11-L1077C34
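If the template really needs to serialize the arguments itself, one possible direction (just a sketch, assuming the arguments may arrive either as a dict or as an already-encoded string; the helper name is made up and this is not the actual lmdeploy code path) would be to serialize exactly once:

```python
import json

def serialize_arguments(arguments):
    """Hypothetical helper: return a JSON string exactly once.

    The OpenAI API already delivers `function.arguments` as a JSON-encoded
    string, so dumping it again would nest it inside another string.
    """
    if isinstance(arguments, str):
        # Assume it is already serialized; pass it through unchanged.
        return arguments
    return json.dumps(arguments, ensure_ascii=False)

# dict input -> serialized once
print(serialize_arguments({"location": "Beijing, China"}))
# string input (as the API sends it) -> unchanged, no extra nesting
print(serialize_arguments('{"location": "Beijing, China"}'))
```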
Reproduction
Environment
Error traceback