Reputation: 50372
I would like to search my database for sets that intersect with my search set. I would like for the results to be returned to me in order of the size of the intersection.
The sets inside the database row will be on the order of about 10,000. The search sets are on the order of about 500. The number of rows in the database is about 1,000,000.
EXAMPLE QUERY:
search_set = [ This set has 500 ids ]

SELECT rows
WHERE "find_set" INTERSECTS "search_set"
ORDER BY "size of the intersection"
EXAMPLE DATABASE:
index      find_set
1          [set with 10,000 ids]
2          [set with 5,000 ids]
...
1,000,000  [set with 15,000 ids]
Thanks so much!
Upvotes: 4
Views: 171
Reputation: 46408
The performance of this query depends strongly on the database's query optimizer and on how you phrase the query.
First of all, databases generally don't store 15,000 ids in a single column. Instead you'll need something like this pair of tables:
set
---
id
set_entry
-----------
id
set_id
entry
The first table will have a million rows. The second, more like 10 billion. Put an index on set_entry.entry.
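A minimal sketch of that layout using SQLite through Python's sqlite3 module (the table and index names are illustrative, and "set" is double-quoted because SET is an SQL keyword):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE "set" (
        id INTEGER PRIMARY KEY
    );
    CREATE TABLE set_entry (
        id     INTEGER PRIMARY KEY,
        set_id INTEGER REFERENCES "set"(id),
        entry  INTEGER
    );
    -- The index that makes per-element lookups fast.
    CREATE INDEX idx_set_entry_entry ON set_entry (entry);
""")

# Two tiny example sets: set 1 = {10, 20, 30}, set 2 = {20, 40}.
conn.executemany('INSERT INTO "set" (id) VALUES (?)', [(1,), (2,)])
conn.executemany(
    "INSERT INTO set_entry (set_id, entry) VALUES (?, ?)",
    [(1, 10), (1, 20), (1, 30), (2, 20), (2, 40)],
)
conn.commit()
```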
The best way generally to arrange your query is to have a temporary table of some sort whose rows are the values of your query set. Then execute a query like this:
SELECT set_entry.set_id, COUNT(*)
FROM set_entry
JOIN query_entry
  ON set_entry.entry = query_entry.entry
GROUP BY set_entry.set_id
ORDER BY COUNT(*) DESC
The query plan you want is: for each element of your query set, do a lookup on the index, pull back all matching rows, then do a grouping operation to count how many matches there are for each set. The first step is 500 index lookups, pulling back somewhere between 0 and 500 million rows; let's say you're pulling back 5 million. The grouping operation will be done either by building a hash or by sorting the data (databases can do it either way), both of which should be plenty fast.
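Here is a small end-to-end sketch of that query in Python with sqlite3 (names are illustrative; note that the aggregation runs over set_id, the set's identifier, so the counts come out one per set):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE set_entry (
        id     INTEGER PRIMARY KEY,
        set_id INTEGER,
        entry  INTEGER
    );
    CREATE INDEX idx_set_entry_entry ON set_entry (entry);
    -- Temporary table holding the query set's values.
    CREATE TEMP TABLE query_entry (entry INTEGER);
""")

# set 1 = {1,2,3,4,5}, set 2 = {1,2,3}, set 3 = {9}
rows = [(1, e) for e in (1, 2, 3, 4, 5)] \
     + [(2, e) for e in (1, 2, 3)] \
     + [(3, 9)]
conn.executemany("INSERT INTO set_entry (set_id, entry) VALUES (?, ?)", rows)

# search set = {1,2,3,4}
conn.executemany("INSERT INTO query_entry (entry) VALUES (?)",
                 [(e,) for e in (1, 2, 3, 4)])

# One row per set that intersects the search set, largest overlap first.
result = conn.execute("""
    SELECT set_entry.set_id, COUNT(*)
    FROM set_entry
    JOIN query_entry ON set_entry.entry = query_entry.entry
    GROUP BY set_entry.set_id
    ORDER BY COUNT(*) DESC
""").fetchall()
print(result)  # [(1, 4), (2, 3)]
```

Set 3 never appears because it has an empty intersection; sets 1 and 2 come back ordered by overlap size.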
There are a lot of unknowns, but this plan is likely to take a few seconds.
What you want to be careful about is a query like this:
SELECT set_entry.set_id, COUNT(*)
FROM set_entry
WHERE entry IN (id1, id2, ....)
GROUP BY set_entry.set_id
ORDER BY count(*) DESC
In my experience most database engines look at this and decide that they cannot use the index. Instead they will scan through all of set_entry (which has 10 billion rows) and, for each row, scan through that list of 500 elements, doing pairwise comparisons. That is an initial step of about 5 trillion pairwise comparisons, and this plan will easily keep your CPU busy for hours.
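Whether a particular engine falls into that trap is something you can check before running the real query, using its plan-inspection facility. A sketch with SQLite's EXPLAIN QUERY PLAN (SQLite happens to handle small IN lists through the index, so treat this as a template for the check rather than a reproduction of the bad plan):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE set_entry (id INTEGER PRIMARY KEY, set_id INTEGER, entry INTEGER);
    CREATE INDEX idx_set_entry_entry ON set_entry (entry);
""")
conn.executemany("INSERT INTO set_entry (set_id, entry) VALUES (?, ?)",
                 [(1, 10), (1, 20), (2, 20)])

sql = """
    SELECT set_id, COUNT(*)
    FROM set_entry
    WHERE entry IN (10, 20)
    GROUP BY set_id
    ORDER BY COUNT(*) DESC
"""

# Ask the engine how it intends to run the query before trusting it
# with 10 billion rows; look for an index search rather than a full scan.
for row in conn.execute("EXPLAIN QUERY PLAN " + sql):
    print(row)

print(conn.execute(sql).fetchall())  # [(1, 2), (2, 1)]
```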
Upvotes: 1