Reputation: 16070
For very large strings (spanning multiple lines), is it faster to use Python's built-in string search or to split the large string (perhaps on '\n') and iteratively search the smaller strings?
E.g., for very large strings:
for l in get_mother_of_all_strings().split('\n'):
if 'target' in l:
return True
return False
or
return 'target' in get_mother_of_all_strings()
Upvotes: 5
Views: 15320
Reputation: 692
If you are only matching once to see whether the substring is in the string at all, both methods are about the same; splitting only adds the overhead of the separate line-by-line searches, so the large-string search is a bit faster.
If you have to do multiple matches, I would tokenize the string once, stuff the tokens into a set (or dictionary), and keep that in memory:
s = 'SOME REALLY LONG STRING'
tokens = set(s.split())   # build the lookup structure once

def has_token(substring):
    return substring in tokens   # average O(1) set lookup
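Every later lookup is then a cheap hash-based membership test instead of a fresh scan of the whole string; for example (the queried words below are just placeholders):
for word in ('REALLY', 'LONG', 'missing'):
    print("%s -> %s" % (word, word in tokens))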
Upvotes: 1
Reputation: 7780
Certainly the second. I don't see any difference between doing one search in a big string and doing many searches in small strings: you may skip some characters thanks to the shorter lines, but the split operation has its costs too (searching for '\n', creating n different strings, creating the list), and the loop is done in Python.
The string __contains__ method is implemented in C and is therefore noticeably faster.
Also consider that the second method aborts as soon as the first match is found, whereas the first one splits the whole string before it even starts searching inside it.
This is easily demonstrated with a simple benchmark:
import timeit
prepare = """
with open('bible.txt') as fh:
text = fh.read()
"""
presplit_prepare = """
with open('bible.txt') as fh:
text = fh.read()
lines = text.split('\\n')
"""
longsearch = """
'hello' in text
"""
splitsearch = """
for line in text.split('\\n'):
if 'hello' in line:
break
"""
presplitsearch = """
for line in lines:
if 'hello' in line:
break
"""
benchmark = timeit.Timer(longsearch, prepare)
print "IN on big string takes:", benchmark.timeit(1000), "seconds"
benchmark = timeit.Timer(splitsearch, prepare)
print "IN on splitted string takes:", benchmark.timeit(1000), "seconds"
benchmark = timeit.Timer(presplitsearch, presplit_prepare)
print "IN on pre-splitted string takes:", benchmark.timeit(1000), "seconds"
The result is:
IN on big string takes: 4.27126097679 seconds
IN on split string takes: 35.9622690678 seconds
IN on pre-split string takes: 11.815297842 seconds
The bible.txt file actually is the Bible; I found it here: http://patriot.net/~bmcgin/kjvpage.html (text version).
Upvotes: 14
Reputation: 96258
The second one is a lot faster; here is some measurement data:
def get_mother_of_all_strings():
return "abcdefg\nhijklmnopqr\nstuvwxyz\naatargetbb"
first: 2.00
second: 0.26
Upvotes: 1
Reputation: 6156
import timeit
a = "..."  # stand-in for a really long string with multiple lines and the target near the end
timeit.timeit(stmt='["target" in x for x in a.split("\\n")]', setup='a = """%s"""'%a)
23.499058284211792
timeit.timeit(stmt='"target" in a', setup='a = """%s"""'%a)
5.2557157624293325
So the large string is MUCH faster to search through than a split list of smaller ones.
Upvotes: 1
Reputation: 2046
The second way is faster. Splitting adds an extra O(n) pass to find the delimiters and allocates memory for each sublist, and then you still have to iterate over every sublist in a Python loop and search each one, whereas searching the bigger string directly is a single O(n) pass.
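As a rough illustration of the extra work the split does (a minimal sketch with an arbitrary synthetic string; the sizes and figures are machine-dependent assumptions):
import sys

text = "some line of text\n" * 1000000     # arbitrary large multi-line string

# the direct search touches the string in a single pass
found = "target" in text

# splitting first makes an extra full pass and allocates a million new string objects
parts = text.split("\n")
print(len(parts))            # 1000001
print(sys.getsizeof(parts))  # size of the list object alone; the substrings cost extra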
Upvotes: 0
Reputation: 97281
A for loop in Python is slow, and splitting a large string is also slow, so searching the large string directly is much faster.
Upvotes: 0