Yilmaz

Reputation: 49679

what is the difference between llm and llm chain in langchain?

this is llm:

import streamlit as st
from langchain.llms import OpenAI

prompt = st.text_input("your question")
llm = OpenAI(temperature=0.9)
if prompt:
    # pass the raw string straight to the model
    response = llm(prompt)
    st.write(response)

Then, if we need to execute a prompt template, we have to create an LLM chain:

import streamlit as st
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

question = st.text_input("your question")
llm = OpenAI(temperature=0.9)

template = "Write me something about {topic}"
topic_template = PromptTemplate(input_variables=["topic"], template=template)

topic_chain = LLMChain(llm=llm, prompt=topic_template)

if question:
    # the chain formats the template with the input, then calls the LLM
    response = topic_chain.run(question)
    st.write(response)

I am confused because we used llm(prompt) in the first example, but we created LLMChain(llm=llm,prompt=topic_template) in the second example. Could you please explain the difference between these two approaches and when it's appropriate to use one over the other?

Upvotes: 4

Views: 11001

Answers (3)

Yilmaz

Reputation: 49679

I was confused about LLMChain (I knew that a chain creates a complex pipeline) because it was always used with a prompt template. It turns out a chain requires two components:

1- Prompt template

2- Language model

From the LangChain docs:

An LLMChain consists of a PromptTemplate and a language model (either an LLM or chat model). It formats the prompt template using the input key values provided (and also memory key values, if available), passes the formatted string to LLM and returns the LLM output.

If you think about it, when you create a chain combining two tools, you have to pass the output of the first tool to the second tool as input, and that is why we use prompt templating.
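To make that concrete, here is a plain-Python sketch of the two steps the quoted docs describe; the names (`format_prompt`, `fake_llm`, `run_chain`) are hypothetical stand-ins, not LangChain APIs:

```python
def format_prompt(template: str, **inputs) -> str:
    # step 1: fill the prompt template with the input key values
    return template.format(**inputs)

def fake_llm(prompt: str) -> str:
    # step 2 stand-in: a real chain would call the language model here
    return f"[model output for: {prompt}]"

def run_chain(template: str, llm, **inputs) -> str:
    # what the chain does conceptually: format, pass to the LLM, return
    return llm(format_prompt(template, **inputs))

print(run_chain("Write me something about {topic}", fake_llm, topic="space"))
# [model output for: Write me something about space]
```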

Upvotes: 3

AHK

Reputation: 115

In the examples, llm is used for direct, simple interactions with a language model, where you send a prompt and receive a response directly.

On the other hand, LLMChain in LangChain is used for more complex, structured interactions: it lets you chain prompts and responses using a PromptTemplate, and it is especially useful when you need to maintain context or sequence between different prompts and responses.
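The "sequence between different prompts and responses" part can be sketched in plain Python (hypothetical `step`/`fake_llm` names, not LangChain APIs; LangChain wraps this pattern in chain classes such as SimpleSequentialChain):

```python
def fake_llm(prompt: str) -> str:
    # stand-in for a real model call
    return prompt.upper()

def step(template: str, llm, value: str) -> str:
    # one chain-style step: format the template, then call the model
    return llm(template.format(x=value))

# the first step's output becomes the second step's input
summary = step("Summarize: {x}", fake_llm, "langchain basics")
final = step("Refine: {x}", fake_llm, summary)
print(final)  # REFINE: SUMMARIZE: LANGCHAIN BASICS
```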

Upvotes: 0

ZKS

Reputation: 2876

In simple terms:

LLM is the base class for interacting with language models like GPT-3, BLOOM, etc. It handles lower-level tasks like tokenizing prompts, calling the API, handling retries, etc.

   from langchain import OpenAI

   llm = OpenAI() 
   llm("Hello world!")

LLMChain is a chain that wraps an LLM to add additional functionality. It handles prompt formatting, input/output parsing, conversations, etc., and is used extensively by higher-level LangChain tools.

    from langchain import OpenAI, PromptTemplate, LLMChain

    llm = OpenAI()
    template = "Hello {name}!"
    prompt = PromptTemplate(input_variables=["name"], template=template)
    llm_chain = LLMChain(llm=llm, prompt=prompt)

    llm_chain.run(name="Bot :)")

So in summary:

LLM -> Lower-level client for accessing a language model

LLMChain -> Higher-level chain that builds on LLM with additional logic

Upvotes: 8
