ℕʘʘḆḽḘ

Reputation: 19405

How to reduce the processing cost of comparing many strings in Python?

I have two datasets, A and B, that contain a string variable similar to a headline.

Example: "this is a very nice string".

Both datasets are large (millions of observations).

I need to see whether the strings in A also appear somewhere in B. I was wondering if there is a specific Python library that would reduce the computational cost of comparing so many strings?

Maybe via some smart indexing of the datasets before running the comparison? Any idea/suggestion is welcome.

Important problem: the matching should be fuzzy, because I can have the following headlines:

A: "this is an apple" B: "this is a red apple"

They don't match perfectly, but they are really close. If there is no better match (such as an exact match), then I consider them to be the same.

Many thanks

Upvotes: 1

Views: 108

Answers (2)

Below the Radar

Reputation: 7635

Whoosh is a fast, featureful full-text indexing and searching library implemented in pure Python. Programmers can use it to easily add search functionality to their applications and websites. Every part of how Whoosh works can be extended or replaced to meet your needs exactly.

Documentation: Whoosh package documentation

Home Page: http://bitbucket.org/mchaput/whoosh
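For reference, here is a minimal sketch of how the headlines in B could be indexed and then queried with a headline from A. The directory name, field name, and sample strings are arbitrary; the OR-grouped query is one way to get "close but not exact" matches, scored by relevance.

```python
import os
from whoosh.index import create_in
from whoosh.fields import Schema, TEXT
from whoosh.qparser import QueryParser, OrGroup

# Index the headlines from dataset B (directory and field names are arbitrary).
schema = Schema(headline=TEXT(stored=True))
os.makedirs("indexdir", exist_ok=True)
ix = create_in("indexdir", schema)

writer = ix.writer()
for text in ["this is a red apple", "another headline from B"]:
    writer.add_document(headline=text)
writer.commit()

# Search the index with a headline from dataset A. Using OrGroup means the
# query matches documents that share some (not necessarily all) of the terms,
# and the best-scoring hits come first.
parser = QueryParser("headline", ix.schema, group=OrGroup)
with ix.searcher() as searcher:
    query = parser.parse("this is an apple")
    results = searcher.search(query, limit=5)
    for hit in results:
        print(hit["headline"], hit.score)
```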

Upvotes: 1

nikihub

Reputation: 321

One option is to convert the two datasets to Python sets and check whether the set of A is a subset of the set of B. You should experiment to measure the actual cost, but Python's built-in set operations are well optimized. See the sketch below.
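A rough sketch of the set idea (exact matching only; `headlines_a` and `headlines_b` are hypothetical iterables of strings from the two datasets):

```python
# Build sets of the headline strings from both datasets.
set_a = set(headlines_a)   # strings from dataset A
set_b = set(headlines_b)   # strings from dataset B

# Strings from A that appear verbatim somewhere in B.
common = set_a & set_b

# Or, if you only need a yes/no answer for the whole dataset:
all_present = set_a <= set_b   # equivalent to set_a.issubset(set_b)
```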

Another option is to build a trie of the strings in B. Building it takes O(|B| * max_str_len_in_B). After that, iterate over the strings in A and check whether each of them is in the trie, which costs O(|A| * max_str_len_in_A). A sketch follows.
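A minimal character-level trie along those lines might look like this. Note that, like the set approach, this sketch only handles exact matches, not the fuzzy case from the question; the variable names in the usage comment are made up.

```python
class TrieNode:
    __slots__ = ("children", "is_end")

    def __init__(self):
        self.children = {}   # maps a character to the next TrieNode
        self.is_end = False  # True if a string from B ends at this node

def build_trie(strings):
    """Insert every string: O(|B| * max_str_len_in_B) total."""
    root = TrieNode()
    for s in strings:
        node = root
        for ch in s:
            node = node.children.setdefault(ch, TrieNode())
        node.is_end = True
    return root

def contains(root, s):
    """Check membership of one string in O(len(s))."""
    node = root
    for ch in s:
        node = node.children.get(ch)
        if node is None:
            return False
    return node.is_end

# Usage sketch (headlines_a / headlines_b are hypothetical iterables of strings):
# trie = build_trie(headlines_b)
# matches = [s for s in headlines_a if contains(trie, s)]
```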

Upvotes: 1
