
Can we use chat history in workflow? #1868

Open
alaap001 opened this issue Jan 23, 2025 · 2 comments

@alaap001

Hey,

A simple question: I tried using `read_chat_history=True` in a workflow. In the playground I told the Agent my name, and in the next message I asked what my name was, but it couldn't answer.
It replied: "I'm sorry, I don't retain conversation-specific context between interactions. How can I assist you today?"

Is there any way we can maintain chat/message history for workflow sessions?

A user can have a complex run method with multiple agents communicating, where we define the sequence they work in, something like #1612 (comment).

But we also want it to work as a chatbot, with that complex logic in place. How can we do that in a workflow? Access to chat history is important for a chatbot to feel natural and work properly.
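
Roughly the kind of run method I mean (just a rough sketch, with placeholder agent names and models, not my actual code):

```python
# Rough sketch (placeholder names/models): a run() that chains two agents in a
# fixed sequence, but that we also want to behave like a chatbot across messages.
from phi.agent import Agent
from phi.model.openai import OpenAIChat
from phi.run.response import RunResponse
from phi.workflow.workflow import Workflow


class ChatbotWorkflow(Workflow):
    # First agent works out what the user is asking for
    intent_agent: Agent = Agent(
        name="Intent Agent",
        instructions=["Identify the intent behind the user's message."],
        model=OpenAIChat(id="gpt-4o-mini"),
    )
    # Second agent produces the final reply based on that intent
    answer_agent: Agent = Agent(
        name="Answer Agent",
        instructions=["Answer the user's question, given the detected intent."],
        model=OpenAIChat(id="gpt-4o-mini"),
    )

    def run(self, question: str) -> RunResponse:
        intent = self.intent_agent.run(question)
        answer = self.answer_agent.run(f"Intent: {intent.content}\nQuestion: {question}")
        return RunResponse(content=answer.content)
```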

Appreciate any help.
Thanks :)

@dirkbrnd (Contributor) commented Jan 24, 2025

Are you sure you are storing the chat history? Are you passing your request to the team leader?

If you can supply a code snippet we can check.

@alaap001 (Author) commented Jan 24, 2025

Sure, I'll explain in detail.
Let's say we have a simple workflow like the one below:

```python
from pydantic import BaseModel, Field
from phi.agent import Agent
from phi.model.openai import OpenAIChat
from phi.workflow.workflow import Workflow
from phi.tools.postgres import PostgresTools
from phi.run.response import RunResponse
from phi.utils.pprint import pprint_run_response
from phi.model.ollama import Ollama
from dotenv import load_dotenv
from phi.embedder.ollama import OllamaEmbedder
from phi.document.chunking.agentic import AgenticChunking
from marker_chunking import MarkerChunking
import re
from phi.storage.agent.postgres import PgAgentStorage

load_dotenv()


class AnalysisWorkflow(Workflow):
    description: str = "SQL Analysis Workflow"
    db_url: str = "postgresql+psycopg://ai:ai@localhost:5432/ai"

    intent_recog: Agent = Agent(
        name="Smart Assistant",
        description="""You are SmartAssistant.""",
        instructions=[
            "You will understand the user question and understand the intent behind the question",
            "then give the intent",
        ],
        read_chat_history=True,
        # add_datetime_to_instructions=True,
        # storage=PgAgentStorage(table_name="agent_sessions", db_url=db_url),
        # add_history_to_messages=True,
        # add_name_to_instructions=True,
        # model=Ollama(id="qwen2.5:32b-instruct-q8_0"),
        model=Ollama(id="deepseek-r1:32b-qwen-distill-q8_0"),
        # model=Ollama(id="phi4:14b-q8_0"),
        # model=OpenAIChat(id="gpt-4o-mini", temperature=0.1),
        # response_model=IntentRecog,
        # structured_outputs=True,
        markdown=True,
    )

    def run(self, question: str) -> RunResponse:
        intent_recog_result = self.intent_recog.run(question)
        return RunResponse(content=intent_recog_result.content)


from phi.playground import Playground, serve_playground_app
from phi.storage.workflow.sqlite import SqlWorkflowStorage

analysis = AnalysisWorkflow(
    debug_mode=True,
    workflow_id="analysis",
)

app = Playground(
    workflows=[analysis]
).get_app()

if __name__ == "__main__":
    serve_playground_app("playgroundHistCheck:app", reload=True)
```

If I run this with `read_chat_history=True`, below is the output:

[screenshot of the playground output]

If I instead use:

```python
storage=PgAgentStorage(table_name="agent_sessions", db_url=db_url),
add_history_to_messages=True,
```

Then it works.

[screenshot of the working output]
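
For reference, the full agent definition that works for me looks roughly like this (a sketch of my local setup; the table name and db_url are just what I happen to use):

```python
# Working variant (rough sketch): persist sessions in Postgres and inject the
# stored history into the prompt, instead of exposing it via a tool.
from phi.agent import Agent
from phi.model.ollama import Ollama
from phi.storage.agent.postgres import PgAgentStorage

db_url = "postgresql+psycopg://ai:ai@localhost:5432/ai"

intent_recog = Agent(
    name="Smart Assistant",
    description="You are SmartAssistant.",
    instructions=[
        "You will understand the user question and understand the intent behind the question",
        "then give the intent",
    ],
    # Persist messages per session in Postgres
    storage=PgAgentStorage(table_name="agent_sessions", db_url=db_url),
    # Prepend the stored history to the messages sent to the model
    add_history_to_messages=True,
    model=Ollama(id="deepseek-r1:32b-qwen-distill-q8_0"),
    markdown=True,
)
```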

So, two questions:

1. What is the difference between the two? And shouldn't `read_chat_history` work in the workflow playground as well, since it works in a normal chat?
2. `read_chat_history` relies on a tool call, which is a limitation: I can't use small but capable models like phi4 14b with `read_chat_history` because they don't support tools. That shuts many open-source models out of chat history.

Thanks for the quick assistance.
