Reputation: 5759
I am currently creating a list from an input file like this:
list = inputFile.read().splitlines()
Then after that I manually iterate through it and make a second list of the items/lines I care about (which are lines 2, 6, 10, 14, 18, ...). Is there a faster way to do this just with splitlines(), so that list automatically contains only the lines I care about?
Upvotes: 3
Views: 1795
Reputation: 20339
You can also use enumerate:

to_read = {2, 6, 10, 14, 18}
for i, line in enumerate(inputFile, 1):
    if i in to_read:
        # your code here
enumerate is a built-in Python function that pairs each object in an iterable with an index:
>>> l = iter(["a", "b", "c"])
>>> [x for x in enumerate(l)]
[(0, 'a'), (1, 'b'), (2, 'c')]
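As a sketch of how this applies to the question (using io.StringIO as a stand-in for the real file object, and assuming the line numbers are 1-based):

```python
from io import StringIO

# StringIO stands in for a real file object opened with open()
fake_file = StringIO("alpha\nbeta\ngamma\ndelta\n")
to_read = {2, 4}  # 1-based line numbers to keep
# enumerate(..., 1) numbers lines starting at 1 to match to_read
kept = [line.rstrip("\n") for i, line in enumerate(fake_file, 1) if i in to_read]
print(kept)  # ['beta', 'delta']
```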
Upvotes: 1
Reputation: 3550
Try doing this if you want to read in the whole file:

lines = file.read().split("\n")
for x in range(2, len(lines), 4):
    line = lines[x]
    # Do stuff with line
Running through the file line by line instead of reading it all in is better for memory:

count = 1
for line in file:
    if count % 4 == 2:
        # Work with line
        pass
    count += 1
Upvotes: 1
Reputation: 107287
The more Pythonic way is to not read all the lines at once. You can use enumerate
to iterate over your file object and keep only the expected lines:
with open(file_name) as f:
    list_of_lines = [line for index, line in enumerate(f) if index in set_of_indices]
Note that it's better to put your line numbers in a set
object, whose membership check is O(1).
As mentioned in the comments, if you have a huge set of indices, a more memory-efficient option is to preserve your lines in a generator expression instead of a list comprehension:
with open(file_name) as f:
    list_of_lines = (line for index, line in enumerate(f) if index in set_of_indices)

(Note that the generator must be consumed before the file is closed, i.e. inside the with block.)
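A quick sketch of the lazy behaviour, with io.StringIO standing in for the opened file (the index set here is hypothetical):

```python
from io import StringIO

set_of_indices = {1, 5, 9}  # 0-based, matching enumerate's default start
f = StringIO("\n".join("row %d" % i for i in range(12)))
lines = (line for index, line in enumerate(f) if index in set_of_indices)
# Nothing has been read yet; lines are produced only as the generator is consumed
result = [line.strip() for line in lines]
print(result)  # ['row 1', 'row 5', 'row 9']
```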
Upvotes: 1
Reputation: 473833
itertools.islice(iterable, start, stop[, step])
is the tool for the job:
from itertools import islice
for line in islice(inputFile, 2, None, 4):
    print(line)
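For illustration, here is a minimal sketch with io.StringIO standing in for the input file; note that islice positions are 0-based, so start=2 yields the 3rd line, then every 4th after that:

```python
from io import StringIO
from itertools import islice

fake_file = StringIO("\n".join("line %d" % i for i in range(1, 11)))
selected = [line.strip() for line in islice(fake_file, 2, None, 4)]
print(selected)  # ['line 3', 'line 7']
```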
Upvotes: 6