Reputation: 49
I just want to make sure I am following the idea of hybrid search correctly, and consequently its Qdrant implementation.
Hybrid search on a vector database: perform one search on dense embeddings and one keyword-match search; the latter can be done on sparse vectors.
With that simple definition in place, we can create a vector database in Qdrant. Since Qdrant does not support substring search, or at least is not optimized for it, we create a collection with both dense and sparse vectors, such as:
from qdrant_client import QdrantClient, models
from qdrant_client.models import PointStruct

baseURL = "http://localhost:6333"
client = QdrantClient(url=baseURL)
collection_name = "example_collection"
client.create_collection(
    collection_name=collection_name,
    vectors_config={
        "text-dense": models.VectorParams(
            size=1536,  # OpenAI embeddings dimensionality
            distance=models.Distance.COSINE,
        )
    },
    sparse_vectors_config={
        "text-sparse": models.SparseVectorParams(
            index=models.SparseIndexParams(
                on_disk=False,
            )
        )
    },
)
Therefore, assuming we have upserted some points with:
points = [
    PointStruct(
        id=1,
        vector={
            "text-dense": [0.1] * 1536,
            "text-sparse": models.SparseVector(indices=[6, 7], values=[1.0, 2.0]),
        },
        payload={"name": f"Example_{1}"},
    )
]
client.upsert(
    collection_name=collection_name,
    wait=True,
    points=points,
)
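In a real pipeline, the query and the documents would be encoded with the same models. Just to illustrate where such vectors could come from, here is a toy sketch; toy_sparse_encode and its vocabulary are hypothetical stand-ins (indices 6 and 7 are reused only to line up with the example vectors in this post), and embed_dense is a hypothetical dense model call, not a Qdrant or OpenAI API:
# Toy bag-of-words sparse encoder: maps known tokens to fixed sparse
# indices and uses term counts as values. Illustrative only.
def toy_sparse_encode(text, vocabulary):
    counts = {}
    for token in text.lower().split():
        if token in vocabulary:
            idx = vocabulary[token]
            counts[idx] = counts.get(idx, 0.0) + 1.0
    return counts

vocab = {"example": 6, "query": 7}  # assumed toy vocabulary
print(toy_sparse_encode("example query query", vocab))  # {6: 1.0, 7: 2.0}
# user_query_dense_vector = embed_dense(user_query)  # hypothetical dense encoder
The hardcoded vectors in the next snippet stand in for that kind of model output.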
We can now perform a hybrid search, which could be achieved in a very rudimentary way, for example with two independent queries to the database, one for the dense embeddings and the other for the sparse ones, such as:
# Query example once treated with specific models.
user_query_dense_vector = [0.099] * 1536
user_query_sparse_vector = {6: 2.0, 7: 3.0}

# Dense search
dense_results = client.search(
    collection_name=collection_name,
    query_vector=("text-dense", user_query_dense_vector),
    with_vectors=True,
)

# Sparse search
sparse_results = client.search(
    collection_name=collection_name,
    query_vector=models.NamedSparseVector(
        name="text-sparse",
        vector=models.SparseVector(
            indices=list(user_query_sparse_vector.keys()),
            values=list(user_query_sparse_vector.values()),
        ),
    ),
    with_vectors=True,
)
Assuming user_query_dense_vector and user_query_sparse_vector are the vectors produced from the user query by the respective models, we can apply a fusion method such as Reciprocal Rank Fusion (RRF), where each point contributes 1 / (k + rank) per result list: we iterate through both result sets, dense and sparse, and accumulate the score for each retrieved chunk.
combined_scores = {}
# k is the RRF constant (its definition is not shown here)
for rank, point in enumerate(dense_results, start=1):
    if point.id not in combined_scores:
        combined_scores[point.id] = 0
    combined_scores[point.id] += 1 / (k + rank)
for rank, point in enumerate(sparse_results, start=1):
    if point.id not in combined_scores:
        combined_scores[point.id] = 0
    combined_scores[point.id] += 1 / (k + rank)
fused_results = sorted(combined_scores.items(), key=lambda x: x[1], reverse=True)
Where the results are:
[(1, 0.03278688524590164)]
Which returns only the first point, as there is only one in the collection.
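As a quick sanity check, that score is exactly what the formula gives for k = 60 (the value implied by the output) with the point ranked first in both lists:
# One contribution of 1 / (k + rank) per result list, with k = 60 and rank = 1
k = 60
print(1 / (k + 1) + 1 / (k + 1))  # 0.03278688524590164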
Now, the question is: if we follow the Qdrant documentation, they use a prefetch to achieve a hybrid search, and if we omit the Matryoshka branch, the first integer search (for faster retrieval), and the final late-interaction reranking, we should basically achieve the same results as the code above, where we search separately and then fuse.
sparse_dense_rrf_prefetch = models.Prefetch(
    prefetch=[
        models.Prefetch(
            query=user_query_dense_vector,
            using="text-dense",
            limit=25,
        ),
        models.Prefetch(
            query=models.SparseVector(
                indices=list(user_query_sparse_vector.keys()),
                values=list(user_query_sparse_vector.values()),
            ),
            using="text-sparse",
            limit=25,
        ),
    ],
    # RRF fusion
    query=models.FusionQuery(
        fusion=models.Fusion.RRF,
    ),
)
client.query_points(
    collection_name=collection_name,
    prefetch=[sparse_dense_rrf_prefetch],
    query=user_query_dense_vector,
    using="text-dense",
    with_payload=True,
    limit=10,
)
Obviously, it returns the only point that is in the database.
QueryResponse(points=[ScoredPoint(id=1, version=0, score=0.9999997, payload={'name': 'point_1'}, vector=None, shard_key=None, order_value=None)])
I think I was expecting a score similar to the one computed by the RRF implementation above, not the cosine similarity between the query's sparse vector and the stored sparse vector, which is specifically what I am already getting with just the sparse search (sparse_results).
I don't know why, but it seems a little off to me. Has someone implemented something similar who can corroborate that the prefetch is indeed correctly implemented?
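For comparison, this is the flattened variant I would expect to return the RRF score directly; here the FusionQuery is the top-level query instead of being nested inside another prefetch that is then rescored (a sketch following the hybrid-search examples in the Qdrant documentation):
client.query_points(
    collection_name=collection_name,
    prefetch=[
        models.Prefetch(
            query=user_query_dense_vector,
            using="text-dense",
            limit=25,
        ),
        models.Prefetch(
            query=models.SparseVector(
                indices=list(user_query_sparse_vector.keys()),
                values=list(user_query_sparse_vector.values()),
            ),
            using="text-sparse",
            limit=25,
        ),
    ],
    # RRF fusion as the final, top-level query
    query=models.FusionQuery(fusion=models.Fusion.RRF),
    with_payload=True,
    limit=10,
)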
Thank you in advance!
Upvotes: 0
Views: 315
Reputation: 1
There are two issues with the code you provided.
First, the k value should be changed. You didn't share the part of your code where you defined k, but it appears you are using k = 60. That should be changed to k = 2.
Second, you are initializing the rank at 1, when it should start at 0.
To make these changes, take this code:
combined_scores = {}
k = ?????
for rank, point in enumerate(dense_results, start=1):
    if point.id not in combined_scores:
        combined_scores[point.id] = 0
    combined_scores[point.id] += 1 / (k + rank)
for rank, point in enumerate(sparse_results, start=1):
    if point.id not in combined_scores:
        combined_scores[point.id] = 0
    combined_scores[point.id] += 1 / (k + rank)
fused_results = sorted(combined_scores.items(), key=lambda x: x[1], reverse=True)
and change it to this:
combined_scores = {}
k = 2
for rank, point in enumerate(dense_results):
    if point.id not in combined_scores:
        combined_scores[point.id] = 0
    combined_scores[point.id] += 1 / (k + rank)
for rank, point in enumerate(sparse_results):
    if point.id not in combined_scores:
        combined_scores[point.id] = 0
    combined_scores[point.id] += 1 / (k + rank)
fused_results = sorted(combined_scores.items(), key=lambda x: x[1], reverse=True)
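With a single point ranked first in both lists, this version scores it 1 / (2 + 0) per list:
k = 2
print(1 / (k + 0) + 1 / (k + 0))  # 1.0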
Upvotes: 0