barteloma

Reputation: 6875

langchain chat chain invoke does not return an object?

I have a simple example using LangChain runnables, taken from https://python.langchain.com/v0.1/docs/expression_language/interface/:

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4")
prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")
chain = prompt | model
print(chain.invoke({"topic": "chickens"}))

According to the website, it should return something like:

AIMessage(content="Why don't bears wear shoes? \n\nBecause they have bear feet!")

But instead it returns an unstructured response:

content="Why don't bears wear shoes? \n\nBecause they have bear feet!" response_metadata={'token_usage': {'completion_tokens': 19, 'prompt_tokens': 13, 'total_tokens': 32}, 'model_name': 'gpt-4-0613', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None} id='run-bd7cda7e-dee2-4107-af3f-97282faa9fa4-0' usage_metadata={'input_tokens': 13, 'output_tokens': 19, 'total_tokens': 32}

How can I fix this?

Upvotes: 1

Views: 698

Answers (2)

Rodrigo Caus

Reputation: 1

The output you are observing is the str representation of the AIMessage object, which is in fact structured. If you want to see the constructor-style AIMessage(...) markup, you can use Python's built-in repr function. Here is an example:

# Using repr to get the structured representation of the AIMessage
print(repr(chain.invoke({"topic": "chickens"})))
# Expected Output: AIMessage(content="Why don't bears wear shoes? \n\nBecause they have bear feet!")

In this scenario, repr provides a more explicit representation of the AIMessage object, including both its type and its content.
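The str-vs-repr distinction above can be sketched without an API call. The Message class below is hypothetical, stdlib-only stand-in that mimics how AIMessage (a pydantic model) renders differently under print and repr:

```python
# Minimal, hypothetical sketch of the str-vs-repr distinction.
# AIMessage behaves similarly: print() uses __str__ (bare fields),
# while repr() shows a constructor-style form with the class name.
class Message:
    def __init__(self, content):
        self.content = content

    def __str__(self):
        # pydantic-style str: field listing without the class name
        return f"content={self.content!r}"

    def __repr__(self):
        # constructor-style repr, including the class name
        return f"Message(content={self.content!r})"

m = Message("Why don't bears wear shoes? Because they have bear feet!")
print(m)        # content="Why don't bears wear shoes? ..."
print(repr(m))  # Message(content="Why don't bears wear shoes? ...")
```

The same pattern explains the question's output: print(chain.invoke(...)) goes through __str__, so the AIMessage(...) wrapper never appears.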

For debugging purposes, it is often useful to inspect the type of the object you are dealing with. This can provide insights into the nature of the object and help confirm that it is indeed an instance of AIMessage. Here's how you can do that:

# Invoking the chain with a specified topic
response = chain.invoke({"topic": "chickens"})

# Printing out the type and the content of the response
print(type(response), "-", response)
# Expected Output: <class 'langchain_core.messages.ai.AIMessage'> - content="Why don't bears wear shoes? \n\nBecause they have bear feet!"

By using type(response), you can verify that the response is an instance of AIMessage, which can help you understand and debug the code more effectively.

To access the text generated by the language model, use the response.content attribute.

Upvotes: 0

j3ffyang

Reputation: 2470

One way to do this is to use StrOutputParser, which extracts the message's text content so the chain returns a plain string. For example:

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser

model = ChatOpenAI(model="gpt-4o")
prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")
chain = prompt | model | StrOutputParser()
print(chain.invoke({"topic": "chickens"}))

Doc reference > https://python.langchain.com/docs/concepts/#output-parsers

Upvotes: 0
