I'm developing an interview-simulation script with CrewAI and LangChain, in which one agent generates questions and another evaluates the user's answers. However, the script doesn't handle user input correctly: after a question is asked and answered, the evaluation and follow-up steps don't behave as expected.
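For context, ChatGoogleGenerativeAI picks up the Gemini API key from the environment, so the script relies on a .env file alongside it, roughly like this (placeholder value, not a real key):

GOOGLE_API_KEY=your-gemini-api-key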
Code Snippet:
import os
from crewai import Agent, Task, Crew, Process
from langchain_google_genai import ChatGoogleGenerativeAI
from dotenv import load_dotenv
load_dotenv()
gemini_llm = ChatGoogleGenerativeAI(model="gemini-pro", temperature=0)
# Define agents
question_generator = Agent(
    role='Question Generator Agent',
    goal='Generate initial and follow-up questions based on user responses and behavioral analysis.',
    verbose=True,
    memory=True,
    backstory=(
        "Expert in behavioral analysis and interview techniques, always ready with probing and engaging questions."
    ),
    llm=gemini_llm,
)
evaluator = Agent(
    role='Evaluator Agent',
    goal="Evaluate the user's responses for relevance and sufficiency.",
    verbose=True,
    memory=True,
    backstory=(
        "Experienced evaluator with a keen eye for detail, ensuring responses meet the required criteria."
    ),
    llm=gemini_llm,
)
# Define tasks
initial_question_task = Task(
    description="Generate an initial question for the user based on behavioral analysis.",
    expected_output='A single question for the user to respond to.',
    agent=question_generator,
)
evaluate_response_task = Task(
    description="Evaluate the user's response for relevance and sufficiency.",
    expected_output='Feedback on whether the response is satisfactory or if a follow-up question is needed.',
    agent=evaluator,
)
follow_up_question_task = Task(
    description="Generate a follow-up question based on the user's previous response and the evaluator's feedback.",
    expected_output="A follow-up question that probes deeper into the user's previous answer.",
    agent=question_generator,
)
# Define the crew and process
crew = Crew(
    agents=[question_generator, evaluator],
    tasks=[initial_question_task, evaluate_response_task, follow_up_question_task],
    process=Process.sequential,
)
# Function to run the interview process
def run_interview():
    for question_number in range(10):
        print(f"\nQuestion {question_number + 1}:")
        # Generate the initial question
        crew.process = Process.sequential
        result = crew.kickoff(inputs={'task_name': 'initial_question_task'})
        initial_question = result['output']
        print(initial_question)
        user_response = input("Your response: ")
        # Evaluate the response
        crew.process = Process.sequential
        result = crew.kickoff(inputs={'task_name': 'evaluate_response_task', 'response': user_response})
        evaluator_feedback = result['output']
        if "satisfactory" in evaluator_feedback:
            print("Response is satisfactory.")
            continue
        # Handle follow-up questions
        for follow_up_number in range(2):
            crew.process = Process.sequential
            result = crew.kickoff(inputs={'task_name': 'follow_up_question_task', 'previous_response': user_response})
            follow_up_question = result['output']
            print(f"Follow-up question {follow_up_number + 1}: {follow_up_question}")
            user_response = input("Your response: ")
            crew.process = Process.sequential
            result = crew.kickoff(inputs={'task_name': 'evaluate_response_task', 'response': user_response})
            evaluator_feedback = result['output']
            if "satisfactory" in evaluator_feedback:
                print("Response is satisfactory.")
                break
            else:
                print("Response is not satisfactory. Generating another follow-up question...")
# Start the interview process
run_interview()
Issue Encountered:
The script fails to handle user input properly during the interview simulation. Specifically, after generating the initial question and receiving the user's response, it doesn't correctly evaluate that response or generate follow-up questions from the evaluator's feedback; instead it throws errors or produces output that doesn't match the task I intended to run. I suspect two problems: crew.kickoff() seems to run every task in the crew rather than just the one named in inputs (the inputs dict appears to be meant for interpolating placeholders into task descriptions, not for task selection), and the return value of kickoff() may not be a dict, in which case result['output'] would fail.
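A minimal probe to see what kickoff() actually returns (the exact type depends on the CrewAI version, so this is just a sanity check, not part of the intended flow):

result = crew.kickoff(inputs={'task_name': 'initial_question_task'})
print(type(result))  # not a dict as far as I can tell, so result['output'] would fail
print(result)        # raw crew output (a string or a CrewOutput, depending on version)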
Expected Outcome:
I expect the script to generate a question, evaluate the user's response, and only then, if the response is unsatisfactory, ask up to two follow-up questions before moving on, all in sequence.
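In rough pseudocode, this is the control flow I'm trying to express (run_single_task is a hypothetical helper, not a real CrewAI API; it stands for "run exactly one task and return its text output"):

for question_number in range(10):
    question = run_single_task(initial_question_task)
    print(question)
    answer = input("Your response: ")
    feedback = run_single_task(evaluate_response_task, response=answer)
    follow_ups = 0
    while "satisfactory" not in feedback and follow_ups < 2:  # at most two follow-ups
        question = run_single_task(follow_up_question_task, previous_response=answer)
        print(question)
        answer = input("Your response: ")
        feedback = run_single_task(evaluate_response_task, response=answer)
        follow_ups += 1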
What I've Tried: