ChrisArmstrong

Reputation: 2521

Better fuzzy matching performance?

I'm currently using the get_close_matches method from difflib to iterate through a list of 15,000 strings and find the closest match for each against another list of approximately 15,000 strings:

import difflib

a = ['blah', 'pie', 'apple'...]
b = ['jimbo', 'zomg', 'pie'...]

for value in a:
    difflib.get_close_matches(value, b, n=1, cutoff=0.85)

It takes about 0.58 seconds per value, which means it will take 8,714 seconds, or roughly 145 minutes, to finish the loop. Is there another library/method that might be faster, or a way to improve the speed of this method? I've already tried converting both lists to lowercase, but that only resulted in a slight speed increase.

Upvotes: 7

Views: 16107

Answers (6)

do-me

Reputation: 2208

Benchmarks in 2022

tl;dr: RapidFuzz was fastest.

Test: pick the best string match out of 1,000,000 elements. Tested on my old i7 notebook with 32 GB RAM.

Best to worst:

  1. RapidFuzz (drop-in replacement for TheFuzz): ~20ms
  2. fuzzyset2: ~320ms
  3. TheFuzz (ex fuzzywuzzy): ~7s
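As a point of reference, here is a minimal sketch of such a lookup with RapidFuzz; the example strings and the 85 score cutoff are placeholders, not part of the benchmark:

from rapidfuzz import process, fuzz

choices = ["jimbo", "zomg", "pie"]  # the benchmark used ~1,000,000 strings here

# Best match for one query: returns (match, score, index), or None below the cutoff
best = process.extractOne("pies", choices, scorer=fuzz.ratio, score_cutoff=85)
print(best)  # ('pie', 85.7..., 2)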

Upvotes: 4

Alexey Trofimov

Reputation: 5027

RapidFuzz is a very fast library for fuzzy string matching. It has the same API as the well-known fuzzywuzzy, but is many times faster and MIT licensed.
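A minimal sketch of that familiar API (the example strings are made up for illustration):

from rapidfuzz import fuzz  # same function names as fuzzywuzzy's fuzz module

print(fuzz.ratio("apple pie", "applepie"))               # similarity score 0..100, here ~94
print(fuzz.token_sort_ratio("pie apple", "apple pie"))   # word order ignored: 100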

Upvotes: 5

Shalini Baranwal

Reputation: 3008

I tried a few methods for fuzzy matching. The best one was cosine similarity, with the threshold set to whatever you need (I kept an 80% fuzzy match).
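The answer doesn't include code; one common way to apply cosine similarity to strings (an assumption on my part, not necessarily what the author used) is over character n-gram TF-IDF vectors with scikit-learn:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

a = ["blah", "pie", "apple"]
b = ["jimbo", "zomg", "pie"]

# Character n-grams let short, slightly misspelled strings still overlap
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 3)).fit(a + b)
sims = cosine_similarity(vectorizer.transform(a), vectorizer.transform(b))

for i, row in enumerate(sims):
    j = row.argmax()
    if row[j] >= 0.8:  # 80% threshold, as in the answer
        print(a[i], "->", b[j], round(float(row[j]), 2))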

Upvotes: 0

hobs

Reputation: 19299

fuzzyset indexes strings by their bigrams and trigrams, so it finds approximate matches in O(log(N)) vs. O(N) for difflib. For my fuzzyset of 1M+ words and word pairs it can compute the index in about 20 seconds and find the closest match in under 100 ms.
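A minimal sketch of that usage, assuming the fuzzyset package (or its maintained fork fuzzyset2, which keeps the same interface):

import fuzzyset  # pip install fuzzyset2

b = ["jimbo", "zomg", "pie"]

fs = fuzzyset.FuzzySet()
for s in b:
    fs.add(s)          # the n-gram index is built as items are added

print(fs.get("pies"))  # list of (score, match) pairs, best first; None if nothing matches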

Upvotes: 9

tmyklebu

Reputation: 14225

Perhaps you can build an index of the trigrams (three consecutive letters) that appear in each list. Only check strings in a against strings in b that share a trigram.
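A rough sketch of that idea in plain Python on top of difflib (the list contents and the 0.85 cutoff are copied from the question; strings shorter than three characters would need separate handling):

import difflib
from collections import defaultdict

def trigrams(s):
    return {s[i:i + 3] for i in range(len(s) - 2)}

a = ["blah", "pie", "apple"]
b = ["jimbo", "zomg", "pie"]

# Index b by trigram so each value is only compared against candidates
# that share at least one trigram with it
index = defaultdict(set)
for s in b:
    for g in trigrams(s):
        index[g].add(s)

for value in a:
    candidates = set()
    for g in trigrams(value):
        candidates |= index[g]
    print(value, difflib.get_close_matches(value, list(candidates), n=1, cutoff=0.85))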

You might want to look at the BLAST bioinformatics tool; it does approximate sequence alignments against a sequence database.

Upvotes: 3

Igglyboo

Reputation: 845

Try this

https://code.google.com/p/pylevenshtein/

The Levenshtein Python C extension module contains functions for fast computation of:

- Levenshtein (edit) distance, and edit operations
- string similarity
- approximate median strings, and generally string averaging
- string sequence and set similarity

It supports both normal and Unicode strings.
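A minimal example, assuming the python-Levenshtein package on PyPI (the strings are placeholders):

import Levenshtein  # pip install python-Levenshtein

print(Levenshtein.distance("apple", "aple"))  # edit distance: 1
print(Levenshtein.ratio("apple", "aple"))     # normalized similarity in 0..1, here ~0.89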

Upvotes: 1
