James

Reputation: 1769

How to use StreamlitCallbackHandler with Langgraph?

I'm trying to use Streamlit to show the user the final output together with the agent's step-by-step thoughts. For that, I'm using StreamlitCallbackHandler, copied from the MRKL example.

However, it is not working, and the error messages are unclear to me.

The only other reference I found to this problem is a GitHub issue, but as pointed out in the comments there, that example code does not use LangGraph, so I'm not sure the problem is the same, even though the error messages are very similar.

This is the relevant part of my code (essentially the multi-agent collaboration example wrapped in the Streamlit scaffolding from the MRKL example):

# ...Other LangGraph code from the example; imports this snippet relies on:
import streamlit as st
from langchain_core.messages import HumanMessage
from langchain_core.runnables import RunnableConfig
from langchain_community.callbacks.streamlit import StreamlitCallbackHandler
from langgraph.graph import StateGraph, END

workflow = StateGraph(AgentState)

workflow.add_node("Researcher", research_node)
workflow.add_node("Chart Generator", chart_node)
workflow.add_node("call_tool", tool_node)

workflow.add_conditional_edges(
    "Researcher",
    router,
    {"continue": "Chart Generator", "call_tool": "call_tool", "end": END},
)
workflow.add_conditional_edges(
    "Chart Generator",
    router,
    {"continue": "Researcher", "call_tool": "call_tool", "end": END},
)

workflow.add_conditional_edges(
    "call_tool",
    # Each agent node updates the 'sender' field
    # the tool calling node does not, meaning
    # this edge will route back to the original agent
    # who invoked the tool
    lambda x: x["sender"],
    {
        "Researcher": "Researcher",
        "Chart Generator": "Chart Generator",
    },
)
workflow.set_entry_point("Researcher")
graph = workflow.compile()

#... Other Streamlit configurations

with st.form(key="form"):
    user_input = st.text_input("Define the task")
    submit_clicked = st.form_submit_button("Execute")

output_container = st.empty()
if with_clear_container(submit_clicked):
    output_container = output_container.container()
    output_container.chat_message("user").write(user_input)

    answer_container = output_container.chat_message("assistant", avatar="🦜")
    st_callback = StreamlitCallbackHandler(answer_container)
    cfg = RunnableConfig()
    cfg["callbacks"] = [st_callback]
    cfg["recursion_limit"] = 100

    answer = graph.invoke({
            "messages": [
                HumanMessage(
                    content=user_input
                )
            ],
        }, cfg)

    answer_container.write(answer["content"])

Error messages:

2024-02-18 13:30:17.030 Thread 'ThreadPoolExecutor-5_0': missing ScriptRunContext
Error in StreamlitCallbackHandler.on_llm_start callback: NoSessionContext()
Error in StreamlitCallbackHandler.on_llm_end callback: RuntimeError('Current LLMThought is unexpectedly None!')
Error in StreamlitCallbackHandler.on_tool_end callback: RuntimeError('Current LLMThought is unexpectedly None!')
2024-02-18 13:30:18.630 Thread 'ThreadPoolExecutor-5_0': missing ScriptRunContext
Error in StreamlitCallbackHandler.on_llm_start callback: NoSessionContext()
Error in StreamlitCallbackHandler.on_llm_end callback: RuntimeError('Current LLMThought is unexpectedly None!')

Any tips on how to use StreamlitCallbackHandler with LangGraph?

Upvotes: 1

Views: 2517

Answers (1)

shiv248

Reputation: 11

The issue you are facing is that StreamlitCallbackHandler loses its ScriptRunContext when the callbacks are invoked from the worker threads LangGraph uses, so LLMThought never receives the LLM events. You can fix it with the helper function below, which re-attaches the Streamlit context to every callback method:

from typing import Callable, TypeVar
import inspect

from streamlit.runtime.scriptrunner import add_script_run_ctx, get_script_run_ctx
from streamlit.delta_generator import DeltaGenerator

from langchain_core.callbacks.base import BaseCallbackHandler
from langchain_community.callbacks.streamlit import StreamlitCallbackHandler

def get_streamlit_cb(parent_container: DeltaGenerator) -> BaseCallbackHandler:
    fn_return_type = TypeVar('fn_return_type')

    def add_streamlit_context(fn: Callable[..., fn_return_type]) -> Callable[..., fn_return_type]:
        # Capture the ScriptRunContext of the main script thread...
        ctx = get_script_run_ctx()

        def wrapper(*args, **kwargs) -> fn_return_type:
            # ...and re-attach it to whichever thread ends up running
            # the callback, so Streamlit calls work there.
            add_script_run_ctx(ctx=ctx)
            return fn(*args, **kwargs)

        return wrapper

    st_cb = StreamlitCallbackHandler(parent_container)

    # Wrap every callback method (on_llm_start, on_llm_end, on_tool_end, ...)
    # so each one runs with the Streamlit context attached.
    for method_name, method_func in inspect.getmembers(st_cb, predicate=inspect.ismethod):
        if method_name.startswith('on_'):
            setattr(st_cb, method_name, add_streamlit_context(method_func))
    return st_cb

and then calling the function like this:

graph_runnable.invoke(
    {"messages": [HumanMessage(content="write me a story about Harry Potter in Hogsmeade")]},
    config={"callbacks": [get_streamlit_cb(st.empty())]},
)

Here is an example project of StreamlitCallbackHandler being used with LangGraph, with explanatory comments throughout.

Happy to answer any clarifying questions!

Upvotes: 1
