baobobs

Reputation: 703

Slow Python regex without catastrophic backtracking

I have two CSV files. The first, input, consists of input street addresses with various errors. The second, ref, is a clean street address table. Records in input need to be matched to records in ref. Converting the files to lists of unique records is fast, but once I get to the matching process it's dreadfully slow, taking a full 85 seconds just to match two addresses from input to ref without any regular expressions! I realize that the size of ref is the issue here: it is over 1 million records long and the file is 30 MB. I was anticipating some performance issues at these sizes, but taking this long for only two records is unacceptable (realistically, I may have to match 10,000 records or more).

Additionally, I will eventually need to embed some regex in the ref items to allow for more flexible matching. Testing the new regex module is even worse, taking a whopping 185 seconds for the same two input records. Does anybody know the best way to speed things up substantially? Can I somehow index by zip code, for example?

Here are sample addresses from input and ref, respectively (after preprocessing):

60651 N SPRINGFIELD AVE CHICAGO
60061 BROWNING CT VERNON HILLS

Here is what I have so far. (Being a novice, I realize there are probably all kinds of inefficiencies in my code, but that's not the issue.)

import csv, re

# read the input file and pull the first column, minus the header
f = csv.reader(open('/Users/benjaminbauman/Documents/inputsample.csv','rU'))

columns = zip(*f)
l = list(columns)
inputaddr = l[0][1:]

# read the reference file, skipping the header and all-empty records;
# the first 7 columns of each row are flattened into one comma-separated string
f = csv.reader(open('/Users/benjaminbauman/Documents/navstreets.csv','rU'))
f.next()

reffull = []
for row in f:
    row = str(row[0:7]).strip('[]').replace("'", "")
    if not ", , , , ," in row: reffull.append(row)

# dedupe both lists
input = list(set(inputaddr))
ref1 = list(set(reffull))

# collapse commas and runs of whitespace to single spaces
input_scrub = []
for i in input:
    t = i.replace(',', ' ')
    input_scrub.append(' '.join(t.split()))

ref_scrub = []
for i in ref1:
    t = i.replace(',', ' ')
    ref_scrub.append(' '.join(t.split()))

# for each input address, collect every ref pattern that matches it
output_iter1 = dict([ (i, [ r for r in ref_scrub if re.match(r, i) ]) for i in input_scrub ])

unmatched_iter1 = [i for i, j in output_iter1.items() if len(j) < 1]
matched_iter1 = {i: j[0] for i, j in output_iter1.items() if len(j) == 1}
tied_iter1 = {k: tuple(v) for k, v in output_iter1.iteritems() if len(v) > 1}

Upvotes: 0

Views: 386

Answers (2)

baobobs

Reputation: 703

It occurred to me why the line

output_iter1 = dict([ (i, [ r for r in ref_scrub if re.match(r, i) ]) for i in input_scrub ])

was taking so long. For every item in the smaller list input, it tests every item of the exceptionally large list ref, as opposed to the other way around. Unfortunately, I wanted it structured this way so that I could embed regular expressions in the ref items, as those items are tokenized by address attribute to allow for easy anchoring. I suppose there are two workarounds, given my limited understanding of SQL. The first could use the idea brought up in my last comment, per eyquem's suggestion. The second could use a lookup (an index?) on the city and zip code attributes with an equality test, before doing more complicated matching with either regex or difflib.

I've split items within input and ref so that the city and zip code attributes are separate items within a list, such as the following:

ref ('COVE POINTE CT', 'BLOOMINGTON, 61704')
input ('S EBERHART', 'CHICAGO, 60628')
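
The split itself isn't shown here; one way to produce those pairs, assuming each scrubbed record still keeps its commas between the street, city and zip fields (split_addr is just an illustrative helper, not part of the code above):

def split_addr(addr):
    # assumes 'STREET, CITY, ZIP' ordering; rsplit leaves any commas
    # inside the street part intact
    street, city, zipcode = [p.strip() for p in addr.rsplit(',', 2)]
    return (street, city + ', ' + zipcode)

ref = [split_addr(r) for r in ref_scrub]
input = [split_addr(i) for i in input_scrub]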

The following narrows the search to the portion of ref that shares an input record's city and zip code. It brings the time down to 56 seconds for an input file of over 1000 records, which is substantially better.

matchaddr = []
refaddr = []
unmatched = []
for i in ref:
    for t in input:
        if t[1] == i[1]:                # exact city + zip match first
            if re.match(i[0], t[0]):    # then regex on the street part
                matchaddr.append(t[0] + ', ' + t[1])
                refaddr.append(i[0] + ', ' + i[1])

Now I can use my beloved regex again (provided the expressions don't cause additional problems, such as catastrophic backtracking). Also, this code is only fast because exact matches on the city and zip code attributes are found first. If I try to allow flexible matching on city and zip as well, speed will likely be greatly sacrificed. Unfortunately, it may have to come to that (input contains messy city and zip code attributes too).
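
Going one step further, the equality test could become a true index by grouping the ref entries in a dict keyed on the 'CITY, ZIP' string, so each input record only touches the candidates in its own bucket instead of scanning all of ref. A rough sketch, assuming the (street, 'CITY, ZIP') pairs shown above:

import re
from collections import defaultdict

# bucket the reference street patterns by their 'CITY, ZIP' key
ref_index = defaultdict(list)
for street, cityzip in ref:
    ref_index[cityzip].append(street)

matchaddr = []
refaddr = []
for t in input:
    for pattern in ref_index.get(t[1], []):  # O(1) bucket lookup
        if re.match(pattern, t[0]):
            matchaddr.append(t[0] + ', ' + t[1])
            refaddr.append(pattern + ', ' + t[1])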

Upvotes: 0

eyquem

Reputation: 27585

Instead of fuzzy regexes with the new regex module, maybe you could use the difflib module, if the execution time is acceptable:

import difflib


REF = ['455 Gateway Dr, Brooklyn, NY 11239',
       '10 Devoe St, Brooklyn, NY 11211',
       '8801 Queens Blvd, Elmhurst, NY 11373 ',
       '342 Wythe Ave, Brooklyn, NY 11249 ',
       '4488 E Live Oak Ave, Arcadia, CA 91006',
       '1134 N Vermont Ave, Los Angeles, CA 90029',
       '1101 17th St NW, Washington, DC 20036 ',
       '3001 Syringa St, Hopeful-City, AL 48798',
       '950 Laurel St, Minneapolis, KS 67467']


INPUT = ['4554 Gagate Dr, Brooklyn, NY 11239',
         '10 Devoe St, Brooklyn, NY 11211',
         '8801 Queens Blvd, Elmhurst, NY 11373 ',
         '342 Wythe Ave, Brooklyn, NY 11249 ',
         '4488 E Live Oak Ave, Arcadia, CA 91006',
         '1134 N Vermont Ave, Los Angeles, CA 90029',
         '1101 17th St NW, Washington, DC 20036 ',
         '3001 Syrinuy St, Hopeful Dam, AL 48798',
         '950 Laurel St, Minneapolis, KS 67467',
         '455 Gateway Doctor, Forgotten Place, NY 11239',
         '10 Devoe St, Brook., NY 11211',
         '82477 Queens Blvd, Elmerst, NY 11373 ',
         '342 Waithe Street, Brooklyn, MN 11249 ',
         '4488 E Live Poke Ave, Arcadia, CA 145',
         '1134 N Vermiculite Ave, Liz Angelicas, CA 90029',
         '1101 1st St NW, Washing, DC 20036 ']


def treatment(inp, reference, crit, gcm=difflib.get_close_matches):
    # for each input address, yield up to 1000 reference addresses whose
    # similarity ratio is at least crit, best matches first
    for input_item in inp:
        yield (input_item, gcm(input_item, reference, 1000, crit))


for a,b in treatment(INPUT,REF,0.65):
    print '\n- %s\n     %s' % (a, '\n     '.join(b))

The result is:

- 4554 Gagate Dr, Brooklyn, NY 11239
     455 Gateway Dr, Brooklyn, NY 11239
     342 Wythe Ave, Brooklyn, NY 11249 

- 10 Devoe St, Brooklyn, NY 11211
     10 Devoe St, Brooklyn, NY 11211

- 8801 Queens Blvd, Elmhurst, NY 11373 
     8801 Queens Blvd, Elmhurst, NY 11373 

- 342 Wythe Ave, Brooklyn, NY 11249 
     342 Wythe Ave, Brooklyn, NY 11249 
     455 Gateway Dr, Brooklyn, NY 11239

- 4488 E Live Oak Ave, Arcadia, CA 91006
     4488 E Live Oak Ave, Arcadia, CA 91006

- 1134 N Vermont Ave, Los Angeles, CA 90029
     1134 N Vermont Ave, Los Angeles, CA 90029

- 1101 17th St NW, Washington, DC 20036 
     1101 17th St NW, Washington, DC 20036 

- 3001 Syrinuy St, Hopeful Dam, AL 48798
     3001 Syringa St, Hopeful-City, AL 48798

- 950 Laurel St, Minneapolis, KS 67467
     950 Laurel St, Minneapolis, KS 67467

- 455 Gateway Doctor, Forgotten Place, NY 11239
     455 Gateway Dr, Brooklyn, NY 11239

- 10 Devoe St, Brook., NY 11211
     10 Devoe St, Brooklyn, NY 11211

- 82477 Queens Blvd, Elmerst, NY 11373 
     8801 Queens Blvd, Elmhurst, NY 11373 

- 342 Waithe Street, Brooklyn, MN 11249 
     342 Wythe Ave, Brooklyn, NY 11249 
     455 Gateway Dr, Brooklyn, NY 11239

- 4488 E Live Poke Ave, Arcadia, CA 145
     4488 E Live Oak Ave, Arcadia, CA 91006

- 1134 N Vermiculite Ave, Liz Angelicas, CA 90029
     1134 N Vermont Ave, Los Angeles, CA 90029

- 1101 1st St NW, Washing, DC 20036 
     1101 17th St NW, Washington, DC 20036 
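
For reference, gcm is just difflib.get_close_matches(word, possibilities, n, cutoff): it returns up to n strings from possibilities whose similarity ratio against word is at least cutoff, best matches first, so raising the 0.65 makes the matching stricter. A single lookup against the REF list above would look like:

import difflib

# best candidate for one messy address, requiring a ratio of at least 0.65
print difflib.get_close_matches('10 Devoe St, Brook., NY 11211', REF, 1, 0.65)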

Upvotes: 1
