Sameera Sadaqat

Reputation: 1

How to get the combined result from multiple vectors stored in Pinecone?

We have generated vector embeddings with OpenAI from a custom data file in .xlsx format and stored the vectors in Pinecone. We are now querying the Pinecone index to retrieve the best-matching answer from the stored vectors, but we are not getting an appropriate answer from the given context.

We are using Pinecone to store the vector embeddings, the embedding model "text-embedding-ada-002" (EMBEDDING_MODEL), and the OpenAI model "gpt-3.5-turbo-instruct" to refine the query.

We need to get a result that combines multiple vectors, together with the matching context.

import uuid

import openai
import pinecone

def create_embedding(text_to_embed):
    # Embed a line of text
    response = openai.embeddings.create(
        model=EMBEDDING_MODEL,
        input=text_to_embed
    )
    # Extract the AI output embedding as a list of floats
    embedding = response.data[0].embedding
    return embedding

pinecone.init(
    api_key=PINECONE_API_KEY,
    environment='gcp-starter'
)
index_name = "chatbot"
if index_name not in pinecone.list_indexes():
    # we create a new index
    pinecone.create_index(name=index_name, metric="cosine", dimension=1536)
index = pinecone.Index(index_name)

# Describe the index statistics and only upsert if the index is still empty
stats = index.describe_index_stats()
if stats['total_vector_count'] < 1:
    vectors = []
    for chunk in splitted_docs:
        vector_id = uuid.uuid4().hex
        embedding = create_embedding(chunk.page_content)
        # Each entry is an (id, values, metadata) tuple
        vectors.append((vector_id, embedding, {"text": chunk.page_content}))

    index.upsert(vectors)
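splitted_docs is not defined in the snippets above; it is assumed to come from loading the .xlsx file and splitting it into LangChain documents, roughly as in the sketch below (the file name, the row formatting and the splitter settings are placeholders, not the exact code):

import pandas as pd
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Load the Excel file and flatten every row into one line of text
# ("data.xlsx" and the " | " separator are placeholders)
df = pd.read_excel("data.xlsx")
raw_text = "\n".join(
    " | ".join(str(value) for value in row) for row in df.itertuples(index=False)
)

# Split into chunks whose .page_content is embedded above (no overlap here)
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
splitted_docs = splitter.create_documents([raw_text])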

def query_refiner(conversation, query):
    response = openai.completions.create(
        model="gpt-3.5-turbo-instruct",
        prompt=f"Given the following user query and conversation log, formulate a question that would be the most relevant to provide the user with an answer from a knowledge base.\n\nCONVERSATION LOG: \n{conversation}\n\nQuery: {query}\n\nRefined Query:",
        temperature=0.7,
        max_tokens=256,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0
    )
    return response.choices[0].text

def find_match(input):
    query_embedding = create_embedding(input)
    # Only the single best match is retrieved here (top_k=1)
    results = index.query(vector=query_embedding, top_k=1, include_metadata=True)
    for match in results["matches"]:
        return match['metadata']['text']
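For context, the two helpers are meant to be combined roughly as in the following sketch (simplified; conversation_log and user_query stand in for the actual chat state):

# Refine the raw user query using the conversation history, then fetch context
refined_query = query_refiner(conversation_log, user_query)
context = find_match(refined_query)

# Ask the completion model to answer strictly from the retrieved context
answer = openai.completions.create(
    model="gpt-3.5-turbo-instruct",
    prompt=f"Answer the question using only the context below.\n\nCONTEXT:\n{context}\n\nQUESTION: {refined_query}\n\nANSWER:",
    temperature=0,
    max_tokens=256
).choices[0].text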

Upvotes: 0

Views: 315

Answers (2)

sahil

Reputation: 9

Upvotes: 0

Mit Shah

Reputation: 117

You are facing this issue because in index.query() you are retrieving only one vector, and it is possible that the data you want is spread across several vectors.

The answer to your question is:

  • If you want the answer to draw on several different vectors, increase the top_k parameter in index.query() (see the first sketch below).
  • One more solution: if you want a complete answer from a single vector, set a chunkOverlap along with the chunkSize when splitting your data, for example

chunkSize: 1000, chunkOverlap: 100

This divides your data into overlapping chunks like 0-1000, 900-1900, 1800-2800, ..., so the continuity of the text is not broken and a single chunk is more likely to contain the whole answer (see the second sketch below).
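For the first point, find_match from the question could look roughly like this, with a bigger top_k and the text of all matches joined into one context (top_k=5 is just an example value):

def find_match(input, top_k=5):
    query_embedding = create_embedding(input)
    # Ask Pinecone for several candidate chunks instead of only one
    results = index.query(vector=query_embedding, top_k=top_k, include_metadata=True)
    # Join the matched chunks so the model sees the combined context
    return "\n\n".join(match["metadata"]["text"] for match in results["matches"])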
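For the second point, assuming the documents are split with LangChain's text splitter (which the chunk.page_content in the question suggests), the overlap is set like this; 1000/100 are the example values from above, adjust them to your data:

from langchain.text_splitter import RecursiveCharacterTextSplitter

# Overlapping chunks (0-1000, 900-1900, 1800-2800, ...) keep sentences that
# straddle a chunk boundary available in both neighbouring chunks
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
splitted_docs = splitter.split_documents(docs)  # docs = the loaded documents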

Upvotes: 0
