Reputation: 17798
I have the following problem:
I'm searching for similarities. I have a big source table with 200,000 entries and a second table with 10,000 entries. I retrieve a result set for each table and compare every row in the source table with every row in the second table in Java (using a Needleman-Wunsch/Gotoh algorithm and similar, more complex algorithms). That means 2 billion comparisons, and that's too much and too slow...
The goal is a table with all similarities (id from the source table, id from the second table, and a similarity value), or at least something like the best match (or best x matches) for every entry...
Could anyone give me some advice on doing such calculations in a "normal" amount of time?
EDIT
Main Table
+----+------+-------------+---------+-------+-----+
| id | name | address     | country | plz   | ... |
+----+------+-------------+---------+-------+-----+
| 20 | Sony | Main Str. 1 | US      | 10000 | ... |
+----+------+-------------+---------+-------+-----+
Second Table
+----+------+------------+---------+-------+-----+
| id | name | address    | country | plz   | ... |
+----+------+------------+---------+-------+-----+
| 30 | Soni | MainStr. 1 | US      | 10000 | ... |
+----+------+------------+---------+-------+-----+
Goal (similarity table):
+----+---------------+---------------+------------+
| id | id_source_tbl | id_second_tbl | similarity |
+----+---------------+---------------+------------+
|  1 | 20            | 30            | 0.99       |
+----+---------------+---------------+------------+
The similarity value indicates how likely it is that the company in the source table is the same as the company in the second table.
The result above indicates that the two rows represent the same company... the two entries differ only because of small typos (0.99 is the similarity and is very high => the companies are the same). Similarity is calculated with a Needleman-Wunsch/Gotoh algorithm (comparing char by char, considering position in the string, and so on... typos should still result in a high similarity value).
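For reference, a basic Needleman-Wunsch global alignment score can be sketched as below. The scoring constants and the normalization into [0, 1] are illustrative choices, not the asker's actual parameters; the Gotoh variant additionally uses affine gap penalties (separate gap-open and gap-extend costs), which this minimal sketch omits:

```java
// Minimal Needleman-Wunsch global alignment score, normalized to [0, 1].
// MATCH/MISMATCH/GAP values are illustrative; Gotoh's refinement would
// replace the flat GAP cost with affine gap penalties.
public class Similarity {
    static final int MATCH = 1, MISMATCH = -1, GAP = -1;

    public static double similarity(String a, String b) {
        int n = a.length(), m = b.length();
        if (n == 0 && m == 0) return 1.0;
        int[][] s = new int[n + 1][m + 1];
        for (int i = 1; i <= n; i++) s[i][0] = i * GAP;
        for (int j = 1; j <= m; j++) s[0][j] = j * GAP;
        for (int i = 1; i <= n; i++) {
            for (int j = 1; j <= m; j++) {
                int diag = s[i - 1][j - 1]
                        + (a.charAt(i - 1) == b.charAt(j - 1) ? MATCH : MISMATCH);
                s[i][j] = Math.max(diag,
                        Math.max(s[i - 1][j] + GAP, s[i][j - 1] + GAP));
            }
        }
        // Map the raw alignment score linearly into [0, 1]:
        // best case is all matches, worst case is all gaps.
        double best = MATCH * Math.max(n, m);
        double worst = GAP * (n + m);
        return (s[n][m] - worst) / (best - worst);
    }
}
```

With these parameters, `similarity("Sony", "Soni")` scores high (three matches, one mismatch), while unrelated strings score much lower.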
Upvotes: 1
Views: 215
Reputation: 17798
Actually, I caused the problem myself...
The solution for me was the following:
1) don't reuse connections; always close them together with the corresponding ResultSet
2) use transactions
3) split the work across threads
4) if you, like me, have results for single rows (ALL similarities for one single entry) and want to calculate something on this subresult (in my case, the rank over all similarities), do it in Java using the subresult instead of doing it afterwards in MySQL
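Point 3 can be sketched with a fixed thread pool. The `similarity` placeholder and the in-memory counter below stand in for the real alignment code and the database writes (which would be batched per task in practice):

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ParallelCompare {
    // Counts completed comparisons; in the real code this would be a
    // batched insert into the similarity table instead.
    static final AtomicInteger comparisons = new AtomicInteger();

    // Placeholder; the real code would call the Needleman-Wunsch/Gotoh
    // implementation here.
    static double similarity(String a, String b) {
        return a.equals(b) ? 1.0 : 0.0;
    }

    public static void compareAll(List<String> source, List<String> second) {
        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());
        for (String s : source) {
            // One task per source row: tasks are independent, so no
            // locking is needed around the comparison itself.
            pool.submit(() -> {
                for (String t : second) {
                    similarity(s, t);
                    comparisons.incrementAndGet();
                }
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(1, TimeUnit.HOURS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

Since every (source row, second row) pair is scored independently, this is trivially safe to parallelize; only the result writes need coordination.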
The result for me is about 1 day of calculation time instead of 3 weeks...
thanks for the help
Upvotes: 0
Reputation: 1269543
In SQL, you would express this as:
select t1.id as id1, t2.id as id2,
       calculate_similarity(t1.name, t2.name) as similarity
from t1 cross join
     t2
Now, you want to define the similarity table as:
create table similarity (
    SimilarityID int not null auto_increment primary key,
    id1 int,
    id2 int,
    similarity float
)
Then do the insert as:
insert into similarity(id1, id2, similarity)
select t1.id as id1, t2.id as id2,
calculate_similarity(t1.name, t2.name) as similarity
from t1 cross join
t2
The SQL engine should do the cross join in parallel as well as the similarity calculation. Perhaps you have a way to limit the query, such as requiring that the companies be in the same state or start with the same letter.
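The same "limit the comparisons" idea (often called blocking) can also be done on the Java side: bucket the rows by a cheap key and only run the expensive similarity on rows sharing a key. The key choice below (lowercased first letter of the name) is illustrative; country or postal code would work the same way:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class Blocking {
    // Cheap blocking key: lowercased first letter of the name.
    static String key(String name) {
        return name.isEmpty() ? "" : name.substring(0, 1).toLowerCase();
    }

    // Returns only the candidate pairs that share a blocking key;
    // only these pairs would then be scored with the expensive algorithm.
    public static List<String[]> candidatePairs(List<String> source,
                                                List<String> second) {
        Map<String, List<String>> buckets = new HashMap<>();
        for (String t : second) {
            buckets.computeIfAbsent(key(t), k -> new ArrayList<>()).add(t);
        }
        List<String[]> pairs = new ArrayList<>();
        for (String s : source) {
            for (String t : buckets.getOrDefault(key(s), List.of())) {
                pairs.add(new String[] { s, t });
            }
        }
        return pairs;
    }
}
```

The trade-off is recall: a typo in the blocking key itself (e.g. "Sony" vs "Zony") would hide a true match, so a forgiving key should be chosen.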
Upvotes: 0
Reputation: 6260
It usually makes more sense to allow MySQL to perform data selection rather than to retrieve a massive data set and then use your own algorithms to filter it. It sounds like all you're doing is a fairly simple join operation e.g.:
SELECT source_id_column, second_id_column, similarity_column
FROM source_table, second_table
WHERE source_table.similarity_column = second_table.similarity_column;
Upvotes: 1
Reputation: 2958
This sounds like an embarrassingly parallel problem, so as a first step, you could do your analyses on multiple cores and machines.
Upvotes: 1