
Python: Bug: Kernel Function plugin not working with AzureAssistantAgent #10141

Open · vslepakov opened this issue Jan 9, 2025 · 1 comment
Labels: bug (Something isn't working), python (Pull requests for the Python Semantic Kernel)

@vslepakov (Member):
Describe the bug
Testing the setup described here, using the bugfix released in 1.18.0.

To Reproduce
See the setup here.

Expected behavior
AzureAssistantAgent with a kernel function plugin works as part of an AgentGroupChat.

Platform

  • OS: Windows
  • IDE: VS Code
  • Language: Python
  • Source: semantic-kernel==1.18.0

Additional context

ERROR:

semantic_kernel.exceptions.service_exceptions.ServiceResponseException: ("<class 'semantic_kernel.connectors.ai.open_ai.services.azure_chat_completion.AzureChatCompletion'> service failed to complete the prompt", BadRequestError('Error code: 400 - {\'error\': {\'message\': "An assistant message with \'tool_calls\' must be followed by tool messages responding to each \'tool_call_id\'. The following tool_call_ids did not have response messages: call_74vVFw3smVjsnsoCwcbrUNaN", \'type\': \'invalid_request_error\', \'param\': \'messages.[3].role\', \'code\': None}}'))

According to this, the tool_call_id should be included in messages with AuthorRole.TOOL. I believe this should be handled by Semantic Kernel.
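To illustrate the constraint behind the 400: a minimal sketch of the message shape the Chat Completions API expects. Only the tool_call_id is taken from the error above; the `get_weather` function and the payloads are hypothetical.

```python
messages = [
    {"role": "user", "content": "What's the weather in Berlin?"},
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [
            {
                "id": "call_74vVFw3smVjsnsoCwcbrUNaN",  # id from the error above
                "type": "function",
                "function": {"name": "get_weather", "arguments": '{"city": "Berlin"}'},
            }
        ],
    },
    # The message the 400 says is missing: a "tool" role entry answering the
    # tool_call_id above. If any id goes unanswered, the request is rejected.
    {
        "role": "tool",
        "tool_call_id": "call_74vVFw3smVjsnsoCwcbrUNaN",
        "content": '{"temperature_c": 3}',
    },
]

# Every id requested by an assistant message must appear in a tool message.
requested = {
    tc["id"]
    for m in messages
    if m["role"] == "assistant"
    for tc in m.get("tool_calls", [])
}
answered = {m["tool_call_id"] for m in messages if m["role"] == "tool"}
print(requested - answered)  # → set()
```

The error suggests the framework dropped (or never produced) the `role: "tool"` entry for that call id before re-sending the history.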

Part of the stack trace:

...
 File "c:\Users\<snip>\Projects\semantic_kernel_agents\.venv\Lib\site-packages\semantic_kernel\agents\group_chat\agent_group_chat.py", line 144, in invoke
    async for message in super().invoke_agent(selected_agent):
  File "c:\Users\<snip>\Projects\semantic_kernel_agents\.venv\Lib\site-packages\semantic_kernel\agents\group_chat\agent_chat.py", line 144, in invoke_agent
    async for is_visible, message in channel.invoke(agent):
  File "c:\Users\<snip>\Projects\semantic_kernel_agents\.venv\Lib\site-packages\semantic_kernel\agents\channels\chat_history_channel.py", line 71, in invoke
    async for response_message in agent.invoke(self):
  File "c:\Users\<snip>\Projects\semantic_kernel_agents\.venv\Lib\site-packages\semantic_kernel\agents\chat_completion\chat_completion_agent.py", line 111, in invoke
    messages = await chat_completion_service.get_chat_message_contents(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\<snip>\Projects\semantic_kernel_agents\.venv\Lib\site-packages\semantic_kernel\connectors\ai\chat_completion_client_base.py", line 142, in get_chat_message_contents
    return await self._inner_get_chat_message_contents(chat_history, settings)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\<snip>\Projects\semantic_kernel_agents\.venv\Lib\site-packages\semantic_kernel\utils\telemetry\model_diagnostics\decorators.py", line 83, in wrapper_decorator
    return await completion_func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\<snip>\Projects\semantic_kernel_agents\.venv\Lib\site-packages\semantic_kernel\connectors\ai\open_ai\services\open_ai_chat_completion_base.py", line 88, in _inner_get_chat_message_contents
    response = await self._send_request(settings)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\<snip>\Projects\semantic_kernel_agents\.venv\Lib\site-packages\semantic_kernel\connectors\ai\open_ai\services\open_ai_handler.py", line 59, in _send_request
    return await self._send_completion_request(settings)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\<snip>\Projects\semantic_kernel_agents\.venv\Lib\site-packages\semantic_kernel\connectors\ai\open_ai\services\open_ai_handler.py", line 99, in _send_completion_request
    raise ServiceResponseException(
semantic_kernel.exceptions.service_exceptions.ServiceResponseException: ("<class 'semantic_kernel.connectors.ai.open_ai.services.azure_chat_completion.AzureChatCompletion'> service failed to complete the prompt", BadRequestError('Error code: 400 - {\'error\': {\'message\': "An assistant message with \'tool_calls\' must be followed by tool messages responding to each \'tool_call_id\'. The following tool_call_ids did not have response messages: call_74vVFw3smVjsnsoCwcbrUNaN", \'type\': \'invalid_request_error\', \'param\': \'messages.[3].role\', \'code\': None}}'))
@vslepakov vslepakov added the bug Something isn't working label Jan 9, 2025
@markwallace-microsoft markwallace-microsoft added python Pull requests for the Python Semantic Kernel triage labels Jan 9, 2025
@moonbox3 (Contributor) commented Jan 9, 2025:

Hi @vslepakov, it looks like one of the tool calls may be failing and we're not sending back a result for that particular tool call? Are you able to enable logging so we can get some more information about the number of tool calls being made, and what else could be going on?
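For reference, Semantic Kernel logs through the standard Python `logging` module, so a sketch along these lines should surface the tool-call details being asked for (the `semantic_kernel` logger namespace is the only assumption here):

```python
import logging

# Raise the Semantic Kernel loggers to DEBUG so each tool call and its
# result (or failure) is printed, while keeping other packages quieter.
logging.basicConfig(
    format="%(asctime)s %(name)s %(levelname)s: %(message)s",
    level=logging.WARNING,
)
logging.getLogger("semantic_kernel").setLevel(logging.DEBUG)
```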

@moonbox3 moonbox3 self-assigned this Jan 10, 2025
@moonbox3 moonbox3 removed the triage label Jan 10, 2025
No branches or pull requests · 3 participants