Daremitsu

Reputation: 643

ValueError: `run` not supported when there is not exactly one input key, got ['question', 'documents']

I am getting this error while trying to run my LangChain code.

ValueError: `run` not supported when there is not exactly one input key, got ['question', 'documents'].
Traceback:
File "c:\users\aviparna.biswas\appdata\local\programs\python\python37\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 565, in _run_script
    exec(code, module.__dict__)
File "D:\Python Projects\POC\Radium\Ana\app.py", line 49, in <module>
    answer = question_chain.run(formatted_prompt)
File "c:\users\aviparna.biswas\appdata\local\programs\python\python37\lib\site-packages\langchain\chains\base.py", line 106, in run
    f"`run` not supported when there is not exactly one input key, got ['question', 'documents']."

My code is as follows.

import os
from apikey import apikey

import streamlit as st
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, SequentialChain
#from langchain.memory import ConversationBufferMemory
from docx import Document

os.environ['OPENAI_API_KEY'] = apikey

# App framework
st.title('🦜🔗 Colab Ana Answering Bot..')
prompt = st.text_input('Plug in your question here')


# Upload multiple documents
uploaded_files = st.file_uploader("Choose your documents (docx files)", accept_multiple_files=True, type=['docx'])
document_text = ""

# Read and combine Word documents
def read_docx(file):
    doc = Document(file)
    full_text = []
    for paragraph in doc.paragraphs:
        full_text.append(paragraph.text)
    return '\n'.join(full_text)

for file in uploaded_files:
    document_text += read_docx(file) + "\n\n"

with st.expander('Contextual Prompt'):
    st.write(document_text)

# Prompt template
question_template = PromptTemplate(
    input_variables=['question', 'documents'],
    template='Given the following documents: {documents}. Answer the question: {question}'
)

# Llms
llm = OpenAI(temperature=0.9)
question_chain = LLMChain(llm=llm, prompt=question_template, verbose=True, output_key='answer')

# Show answer if there's a prompt and documents are uploaded
if prompt and document_text:
    formatted_prompt = question_template.format(question=prompt, documents=document_text)
    answer = question_chain.run(formatted_prompt)
    st.write(answer['answer'])

I have gone through the documentation, and I am still getting the same error. I have already seen demos where LangChain chains take multiple inputs.

Upvotes: 4

Views: 7673

Answers (2)

andrew_reece

Reputation: 21274

For a prompt with multiple inputs, use predict() instead of run(), or just call the chain directly. (Note: recent LangChain versions require Python 3.8+.)

prompt_template = "Tell me a {adjective} joke and make it include a {profession}"
llm_chain = LLMChain(
    llm=OpenAI(temperature=0.5),
    prompt=PromptTemplate.from_template(prompt_template)
)

# Option 1
llm_chain(inputs={"adjective": "corny", "profession": "plumber"})

# Option 2
llm_chain.predict(adjective="corny", profession="plumber")

Also note that you only need to supply the PromptTemplate when you instantiate the LLMChain. After that, you pass in the template variables themselves (in your case, documents and question) rather than the pre-formatted template string, as you are doing currently.
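To see why the positional-string form fails, the check that raises this error can be sketched roughly as follows. This is a hypothetical simplification modeled on the traceback in the question, not LangChain's actual source: run only knows which input key a single positional string belongs to when the chain has exactly one input key.

```python
# Hypothetical simplification of the check behind the error
# (modeled on the traceback in the question, not LangChain's source).
def run(input_keys, *args, **kwargs):
    if args:  # a single positional string, as in question_chain.run(formatted_prompt)
        if len(input_keys) != 1:
            raise ValueError(
                "`run` not supported when there is not exactly one "
                f"input key, got {input_keys}."
            )
        # With one key, the string can be assigned unambiguously.
        return {input_keys[0]: args[0]}
    # Keyword arguments map directly onto the input keys.
    return dict(kwargs)

run(["question"], "What is LangChain?")                      # OK: one key
run(["question", "documents"], question="q", documents="d")  # OK: keywords
# run(["question", "documents"], "one big string")           # raises ValueError
```

This is why predict(question=..., documents=...) (or calling the chain with an inputs dict) works: each value is tied to its key explicitly.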

Upvotes: 4

peebee

Reputation: 1

I got the same error on Python 3.7.1, but after upgrading Python to 3.10 and langchain to the latest version, the error went away. I noticed this because the same code ran fine on Colab but not locally.

Upvotes: 0
