Reputation: 4575
I have the Python 3 LangChain code below that I'm using to create a conversational agent and define a tool for it to use. The tool returns the accuracy score for a pre-trained model saved at a given path, scored on data saved at another path. The score_tool is a tool I define for the LLM that uses a function named llm_model_score to return the accuracy score. The llm_model_score function takes a single string as input, which it parses by splitting on ",". Calculating the accuracy requires two inputs, the path to the saved model and the path to the data, but by default a langchain.agents Tool is defined to work with only one input. So I added the extra step of passing a single input string with the paths separated by a "," and parsing the string inside the function. Is there an alternative, possibly better, way to define a tool for langchain.agents that can take multiple inputs? Can you please suggest how I could modify my code below to accomplish that?
code:
# ### Basic Agent
# - example of basic chat agent using a custom tool
# - custom tool scores pre-trained classifier on previously unseen data
from sklearn.linear_model import LogisticRegression
import pandas as pd
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split, cross_val_score, cross_val_predict
from sklearn.metrics import make_scorer
# chat agent
from config import api_key
apikey=api_key
import os
os.environ['OPENAI_API_KEY'] = apikey
from langchain.agents import Tool
def llm_model_score(string):
    test_data_path, model_path = string.split(",")
    import pickle
    # load test data
    test_data = pickle.load(open(test_data_path, 'rb'))
    X_test = test_data[[x for x in test_data.columns if x != 'priceRange']]
    y_test = test_data['priceRange']
    # load model
    loaded_model = pickle.load(open(model_path, 'rb'))
    result = loaded_model.score(X_test, y_test)
    return str(result)
score_tool = Tool(
    name='llm_model_score',
    func=llm_model_score,
    description="Useful for when you need to score a trained model on test data. The input to this tool should be a comma separated list of length 2 of strings representing the path to the saved test data and model. For example 'saved_data.sav','saved_model.sav'."
)
from langchain import OpenAI
from langchain.chat_models import ChatOpenAI
# Set up the turbo LLM
turbo_llm = ChatOpenAI(
    temperature=0,
    model_name='gpt-3.5-turbo'
)
from langchain.agents import initialize_agent
from langchain.chains.conversation.memory import ConversationBufferWindowMemory
tools = [score_tool]
# conversational agent memory
memory = ConversationBufferWindowMemory(
    memory_key='chat_history',
    k=3,
    return_messages=True
)
# create our agent
conversational_agent = initialize_agent(
    agent='chat-conversational-react-description',
    tools=tools,
    llm=turbo_llm,
    verbose=True,
    max_iterations=3,
    early_stopping_method='generate',
    memory=memory,
    handle_parsing_errors=True
)
question="""What is the score of the pretrained model saved at path best_model.sav on the new test data saved at path test_data.sav?"""
manual_react = f"""Question: What is the score of the pretrained model on the new test data?
Thought: I need to score the pretrained model that is saved at model path best_model1.sav on the new test data saved at the path test_data1.sav.
Action: score_tool['test_data1.sav','best_model1.sav']
Observation: 0.75.
Thought: The score returned was 0.75 so the model score on the test data is 0.75.
Action: Finish[model score 0.75]
Question:{question}"""
conversational_agent(manual_react)
output:
> Entering new AgentExecutor chain...
{
    "action": "llm_model_score",
    "action_input": "test_data.sav,best_model.sav"
}
Observation: 0.608
Thought:{
    "action": "Final Answer",
    "action_input": "The score of the pretrained model saved at path best_model.sav on the new test data saved at path test_data.sav is 0.608."
}
> Finished chain.
{'input': "Question: What is the score of the pretrained model on the new test data?\nThought: I need to score the pretrained model that is saved at model path best_model1.sav on the new test data saved at the path test_data1.sav.\nAction: score_tool['test_data1.sav','best_model1.sav']\nObservation: 0.75.\nThought: The score returned was 0.75 so the model score on the test data is 0.75.\nAction: Finish[model score 0.75]\n\nQuestion:What is the score of the pretrained model saved at path best_model.sav on the new test data saved at path test_data.sav?",
'chat_history': [],
'output': 'The score of the pretrained model saved at path best_model.sav on the new test data saved at path test_data.sav is 0.608.'}
Upvotes: 1
Views: 5393
Reputation: 1
You can use agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION after defining llm_model_score as a tool that takes multiple arguments. You should try using agents and tools in LangChain. Hope this link will help: https://www.pinecone.io/learn/series/langchain/langchain-tools/
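For example, here is a rough sketch (untested, assuming the same file paths, the same 'priceRange' target column, and the turbo_llm chat model from your question) of how the tool could be rewritten with StructuredTool.from_function so the two paths become separate named arguments instead of one comma-separated string:
code:
from langchain.agents import AgentType, initialize_agent
from langchain.tools import StructuredTool
import pickle
def llm_model_score(test_data_path: str, model_path: str) -> str:
    """Score a saved model on saved test data and return the accuracy."""
    # load test data
    test_data = pickle.load(open(test_data_path, 'rb'))
    X_test = test_data[[x for x in test_data.columns if x != 'priceRange']]
    y_test = test_data['priceRange']
    # load model and score it on the test data
    loaded_model = pickle.load(open(model_path, 'rb'))
    return str(loaded_model.score(X_test, y_test))
# StructuredTool infers a multi-argument schema from the function signature,
# so no comma-splitting is needed
score_tool = StructuredTool.from_function(
    func=llm_model_score,
    name='llm_model_score',
    description="Scores a trained model on test data. Takes two arguments: test_data_path (path to the saved test data) and model_path (path to the saved model)."
)
# the structured-chat agent can pass multiple named inputs to the tool
agent = initialize_agent(
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    tools=[score_tool],
    llm=turbo_llm,
    verbose=True
)
agent("What is the score of the pretrained model saved at path best_model.sav on the new test data saved at path test_data.sav?")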
Upvotes: 0