Sergey Agapov

Reputation: 1

How to use LangChain load_evaluator() with a local LLM?

I'm trying to use LangChain's load_evaluator() with a local LLM served by Ollama, but I don't understand which model or parameter I should use.

from langchain.evaluation import load_evaluator
from langchain.chat_models import ChatOllama
from langchain.llms import Ollama
from langchain.embeddings import HuggingFaceEmbeddings
# These work:
evaluator = load_evaluator("labeled_score_string", llm=ChatOllama(model="llama2"))
evaluator = load_evaluator("pairwise_string", llm=Ollama(model="llama2"))
# These do not:
evaluator = load_evaluator("pairwise_embedding_distance", llm=HuggingFaceEmbeddings())
evaluator = load_evaluator("pairwise_embedding_distance", llm=Ollama(model="llama2"))

Upvotes: 0

Views: 227

Answers (1)

Dehan

Reputation: 21

I think we have the same problem, and I found this:

from langchain.evaluation import load_evaluator
from langchain_community.embeddings import OllamaEmbeddings

embedding_function = OllamaEmbeddings(model="llama3.2:3b")
evaluator = load_evaluator("pairwise_embedding_distance", embeddings=embedding_function)

Instead of asking for an "llm" parameter, this evaluator should be given the "embeddings" parameter.

source: https://python.langchain.ac.cn/docs/guides/productionization/evaluation/comparison/pairwise_embedding_distance/
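For completeness, here is a minimal sketch of actually scoring a pair once the evaluator is built this way. The two prediction strings are made up for illustration; evaluate_string_pairs returns a dict whose "score" is an embedding distance, so a smaller value means the strings are more similar:

from langchain.evaluation import load_evaluator
from langchain_community.embeddings import OllamaEmbeddings

# Build the evaluator with embeddings=, as above
evaluator = load_evaluator(
    "pairwise_embedding_distance",
    embeddings=OllamaEmbeddings(model="llama3.2:3b"),
)

# Compare two candidate outputs; the distance is cosine by default,
# so closer-to-zero means the two strings are semantically closer.
result = evaluator.evaluate_string_pairs(
    prediction="Seattle is cloudy most of the year.",
    prediction_b="It is usually overcast in Seattle.",
)
print(result)  # e.g. {'score': 0.12}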

Upvotes: 0
