I'm currently working on a local chatbot using LangChain and Ollama. The chatbot should answer questions based on the data on my FROST-Server.
My implementation so far is able to send a GET request based on the small API documentation I wrote, and it even returns the right data. The problem I face right now is that the LLM only uses the last four entries of the JSON for its answer.
Furthermore, the LLM has a hard time aggregating data: I get wrong answers to questions like "How many Things are in Pankow?" or "Which Things were registered in 2015?". I've tried different approaches with LangChain but got lost and frustrated with the documentation. Is there maybe a parameter or something limiting the entries?
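For what it's worth, the raw API side seems fine: FROST-Server paginates its responses and links the next page via `@iot.nextLink`, so a plain Python helper can collect every entry. This is a minimal sketch independent of LangChain; `get_json` stands in for any function that fetches a URL and returns the parsed JSON (e.g. `lambda u: requests.get(u).json()`):

```python
def fetch_all(get_json, url):
    """Collect all entries of a FROST-Server collection, following
    the @iot.nextLink pagination links until the last page."""
    items = []
    while url:
        page = get_json(url)             # parsed JSON of one result page
        items.extend(page.get("value", []))
        url = page.get("@iot.nextLink")  # absent on the final page
    return items
```

Fetching directly like this gives me the complete list, which makes me think the truncation happens on the LLM side rather than on the server side.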
I've tried different approaches, but this is the code that has worked best so far:
from langchain_ollama.chat_models import ChatOllama
from langchain_community.agent_toolkits.openapi.toolkit import RequestsToolkit
from langchain_community.utilities.requests import TextRequestsWrapper
from langgraph.prebuilt import create_react_agent
api_spec = """
info:
title: FROST-Server
version: 1.0.0
servers:
- http://localhost:8082/FROST-Server/v1.1
endpoints:
"/{entity}":
method: "GET"
description: "Uses GET request to retrieve information on any entities. The 'entity' is a URL parameter which is the actual name of the entity."
entities:
Things:
description: "Ein 'Thing' ist ein Objekt der physischen oder informationellen Welt, das identifiziert und in Kommunikationsnetzwerke integriert werden kann."
properties:
ID: "Eindeutiger Identifikator des Thing."
Description: "Eine lesbare Beschreibung des Thing."
Properties: "Zusätzliche strukturierte Metadaten, die mit dem Thing verbunden sind."
Locations:
description: "Repräsentiert den geografischen Standort eines Thing."
properties:
ID: "Eindeutiger Identifikator des Location.",
Name: "Ein lesbarer Name für den Standort.",
Location: "Die geografische Position (z.B. unter Verwendung von GeoJSON).",
EncodingType: "Gibt den Kodierungstyp für die Standortdaten an (z.B. 'application/vnd.geo+json')."
HistoricalLocations:
description: "Repräsentiert die Historie der Standorte eines Thing."
properties:
ID: "Eindeutiger Identifikator des Historical Location."
Time: "Der Zeitpunkt, zu dem das Thing an dem spezifischen Standort war."
Thing: "Referenz auf die zugehörige Thing-Entität."
Datastreams:
description: "Repräsentiert eine Sammlung von Observations, die nach dem gleichen ObservedProperty und Sensor gruppiert sind."
properties:
ID: "Eindeutiger Identifikator des Datastream."
Name: "Ein lesbarer Name für den Datastream."
ObservationType: "Der Typ der durchgeführten Beobachtung."
Sensor: "Referenz auf den Sensor, der mit diesem Datastream verbunden ist."
Sensors:
description: "Repräsentiert ein Instrument, das ein Phänomen beobachtet und einen beobachteten Wert zurückgibt."
properties:
ID: "Eindeutiger Identifikator des Sensors."
Name: "Ein lesbarer Name für den Sensor."
Metadata: "Zusätzliche strukturierte Metadaten über den Sensor."
EncodingType: "Gibt den Kodierungstyp für die Sensormetadaten an."
ObservedProperties:
description: "Repräsentiert das Phänomen, das von einem Sensor beobachtet wird."
properties:
ID: "Eindeutiger Identifikator der Observed Property."
Name: "Ein lesbarer Name für die Observed Property."
Definition: "Eine URI, die die Definition der Observed Property bereitstellt."
Observations:
description: "Repräsentiert den Akt des Messens oder Bestimmens des Wertes einer Observed Property."
properties:
ID: "Eindeutiger Identifikator der Observation."
PhenomenonTime: "Der Zeitpunkt, zu dem die Beobachtung gemacht wurde."
ResultTime: "Der Zeitpunkt, zu dem das Ergebnis verfügbar wurde."
Result: "Der geschätzte Wert der Observed Property."
Basic Requests:
Get all of type: /entity
Get one of type: /entity(id)
Get A Related Object: /entity(id)/entity
Entities are:
Things
Locations
Datastreams
Observations
"""
ALLOW_DANGEROUS_REQUEST = True
toolkit = RequestsToolkit(
    requests_wrapper=TextRequestsWrapper(headers={}),
    allow_dangerous_requests=ALLOW_DANGEROUS_REQUEST,
)

# Collect the request tools for the agent
tools = toolkit.get_tools()
system_message = """
You are an AI assistant with access to an API to help answer user queries.
Only use the data from the API call to answer questions.
Given the user's query, you must decide what to do with it based on the
list of tools provided to you. Always use 'http://localhost:8082/FROST-Server/v1.1' as base url.
"""
# Initialize the LLM
llm = ChatOllama(
    model="llama3.1",
    temperature=0,
    max_tokens=None,
    timeout=None,
)
query = "How many Things are there?"
agent_executor = create_react_agent(llm, tools, state_modifier=system_message)
events = agent_executor.stream(
{"messages": [("user", query)]},
stream_mode="values",
)
for event in events:
event["messages"][-1].pretty_print()
I want to add that I've tried to replace:

events = agent_executor.stream(
    {"messages": [("user", query)]},
    stream_mode="values",
)
for event in events:
    event["messages"][-1].pretty_print()
With:

chain = APIChain.from_llm_and_api_docs(
    llm,
    api_docs=api_spec,
    verbose=True,
    limit_to_domains=["*"],
    system=prompt,
)
response = chain.invoke(
    {"question": "What kind of things are there?"}
)
But that resulted in an error:

raise InvalidSchema(f"No connection adapters were found for {url!r}")
requests.exceptions.InvalidSchema: No connection adapters were found for 'Based on the provided documentation and the question "What kind of things are there?", I would construct the following API URL:\n\nhttp://localhost:8082/FROST-Server/v1.1/Things\n\nThis URL calls the "/{entity}" endpoint with the entity parameter set to "Things", which will return a list of all Things, along with their properties (ID, Description, and Properties). This should provide sufficient information to answer the question about what kind of things are there.\n\nNote that I've excluded unnecessary pieces of data by not including any specific IDs or related objects in the URL.'
Thanks for the help!