Germule

Reputation: 21

LangChain.llms AttributeError when trying to prompt ChatGPT

I am trying to get a basic langchain.llms object to ask ChatGPT a simple question and receive a response back, but I keep getting this error:

ValueError: Argument `prompt` is expected to be a string. Instead found {type(prompt)}. If you want to run the LLM on multiple prompts, use `generate` instead.

Followed by this final error: AttributeError: module 'openai' has no attribute 'error'

Here is my code:

import os
os.environ["OPENAI_API_KEY"] = "my_api_key"

from langchain.llms import OpenAI
import openai

# Create the object; the client reads OPENAI_API_KEY from the environment
llm = OpenAI()

# Was object creation successful?
print(llm)

# Prompt ChatGPT
prompt = "Tell me a joke"
print(llm(prompt))

I am creating the llm object using OPENAI_API_KEY as an environment variable, which I set at the top of the script with Python's os library.
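As I understand it, the key can equivalently be passed straight to the constructor instead of through the environment. A minimal sketch of that form ("my_api_key" is a placeholder; this is not something I have confirmed changes the error):

from langchain.llms import OpenAI

# Pass the key explicitly rather than relying on the environment variable
llm = OpenAI(openai_api_key="my_api_key")
print(llm("Tell me a joke"))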

This is the output I get, which makes it seem like the llm object is fine, but the call is not:

Output:

OpenAI
Params: {'model_name': 'text-davinci-003', 'temperature': 0.7, 'max_tokens': 256, 
'top_p': 1, 'frequency_penalty': 0, 'presence_penalty': 0, 'n': 1, 'request_timeout': None, 'logit_bias': {}}

Cell In[4], line 12
 10 #prompt ChatGPT
 11 prompt = "Tell me a joke"
---> 12 print(llm(prompt))

raise ValueError(
    "Argument `prompt` is expected to be a string. Instead found "
    f"{type(prompt)}. If you want to run the LLM on multiple prompts, use "
    "`generate` instead."
)
Final error I get: 
AttributeError: module 'openai' has no attribute 'error'

Any help would be greatly appreciated, thanks!

All the langchain tutorials I have followed use a very similar structure, and I have been unable to get any of them working.

Upvotes: 2

Views: 1839

Answers (1)

Brian Horakh

Reputation: 617

This error is generic and unhelpful; it seems to be thrown whenever the API server sends back any bad response. Here is a full dump of the error I got.

I am not an Azure OpenAI expert, but I can say definitively that this code works fine in US East 1 on gpt-35-turbo-instruct version 0914. The basic issue I was having is that models like gpt-35-turbo and gpt-4 are all available in my Australia East region, but NOT the instruct one. I couldn't find any model in Australia that would work!

After two hours I finally broke down, created a new Cognitive Services account in US East, deployed the -instruct model, and it worked!

The important part of the error is OperationNotSupported; the link suggested in the error below is basically useless.

File /opt/conda/lib/python3.11/site-packages/langchain_core/language_models/llms.py:948, in BaseLLM.__call__(self, prompt, stop, callbacks, tags, metadata, **kwargs)
    941 if not isinstance(prompt, str):
    942     raise ValueError(
    943         "Argument `prompt` is expected to be a string. Instead found "
    944         f"{type(prompt)}. If you want to run the LLM on multiple prompts, use "
    945         "`generate` instead."
    946     )
    947 return (
--> 948     self.generate(
    949         [prompt],
    950         stop=stop,
    951         callbacks=callbacks,
    952         tags=tags,
    953         metadata=metadata,
...
   (...)
    940     stream_cls=stream_cls,
    941 )

BadRequestError: Error code: 400 - {'error': {'code': 'OperationNotSupported', 'message': 'The completion operation does not work with the specified model, gpt-4. Please choose different model and try again. You can learn more about which models can be used with each operation here: https://go.microsoft.com/fwlink/?linkid=2197993.'}}
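For reference, a rough sketch of the shape of the call that worked for me once the -instruct deployment existed in US East. The deployment name and endpoint below are placeholders for my own resource values, and this uses the langchain-openai package rather than the question's langchain.llms import:

import os
from langchain_openai import AzureOpenAI  # pip install langchain-openai

# Placeholders: substitute your own Azure resource values
os.environ["AZURE_OPENAI_API_KEY"] = "my_api_key"
os.environ["AZURE_OPENAI_ENDPOINT"] = "https://my-useast-resource.openai.azure.com/"

# The completion-style call needs a completions model, i.e. an -instruct
# deployment; chat-only models (gpt-35-turbo, gpt-4) raise OperationNotSupported
llm = AzureOpenAI(
    azure_deployment="gpt-35-turbo-instruct",  # your deployment name
    api_version="2023-05-15",
)
print(llm.invoke("Tell me a joke"))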

😁 Screenshot with a dad joke for proof!


Upvotes: 0
