Motivation - Chaining bot actions is currently needed for the LLM integration enhancement to make conversations feel more natural when we want to personalize a response from another service function. At the moment, passing the result of a previous service function to the OpenAI service function is difficult: even if the OpenAI service function is triggered later in the code, both functions make their requests asynchronously, so the first result is not guaranteed to be available.
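The ordering problem can be illustrated with a minimal asyncio sketch; the function names here are hypothetical stand-ins, not the actual service functions:

```python
import asyncio

async def fetch_user_data():
    """Stand-in for the first service function's request."""
    await asyncio.sleep(0.01)  # simulate network latency
    return {"name": "Alice"}

async def personalize_with_openai(context):
    """Stand-in for the OpenAI wrapper; receives the prior result."""
    await asyncio.sleep(0.01)
    return f"Hello {context['name']}, here is your personalized reply."

async def handle_message():
    # Awaiting the first call before issuing the second enforces the
    # ordering that firing both requests concurrently cannot guarantee.
    context = await fetch_user_data()
    return await personalize_with_openai(context)

print(asyncio.run(handle_message()))
```

If both coroutines were instead scheduled independently, `personalize_with_openai` could run before `fetch_user_data` has returned, which is exactly the problem described above.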
Specification - A suggestion was made to change the bot model so that a bot action can trigger another bot action after it has returned.
Finalised state - With the personalize OpenAI wrapper function, responses from other service functions can be personalized, in addition to simple text responses.
Currently, the bot model describes the sequential behaviour of IncomingMessage_A->BotAction->IncomingMessage_B
as IncomingMessage_A->BotAction and IncomingMessage_A->IncomingMessage_B. However, this suggests that IncomingMessage_B can also happen before BotAction, or even when BotAction fails, which is not the case.
Thus I think we should model this as IncomingMessage_A---leadsTo--->BotAction---leadsTo--->IncomingMessage_B. This would reflect the sequential behaviour and also allow us to chain BotActions as proposed in this enhancement issue.
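A minimal sketch of what such leadsTo chaining could look like, assuming a simple linked structure where each node only triggers its successor after its action returns successfully (all class and function names here are illustrative, not part of the actual bot model):

```python
class Node:
    """A bot-model node (e.g. IncomingMessage or BotAction) in a leadsTo chain."""

    def __init__(self, name, action):
        self.name = name
        self.action = action      # callable taking the previous node's result
        self.leads_to = None      # successor node in the chain

    def then(self, node):
        """Connect this node to its successor and return it, to allow chaining."""
        self.leads_to = node
        return node


def run_chain(start, payload=None):
    """Execute nodes strictly in leadsTo order; stop if an action fails."""
    node, result = start, payload
    while node is not None:
        try:
            result = node.action(result)
        except Exception:
            return None  # a failed BotAction must not trigger its successor
        node = node.leads_to
    return result


incoming_a = Node("IncomingMessage_A", lambda _: "raw result")
bot_action = Node("BotAction", lambda r: r.upper())
incoming_b = Node("IncomingMessage_B", lambda r: f"reply: {r}")
incoming_a.then(bot_action).then(incoming_b)

print(run_chain(incoming_a))  # IncomingMessage_B only fires after BotAction returns
```

This captures both properties argued for above: IncomingMessage_B cannot run before BotAction, and it is skipped entirely if BotAction raises.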