Reputation: 15953
I have a data file in which certain lines need to be skipped:
(1 1),skip,this
skip,this,too
1,2,3
4,5,6
7,8,9
10,11,12
(1 2),skip,this
skip,this,too
...
The skip lines repeat after every 4 data entries. I tried the approaches from this post: Pandas: ignore all lines following a specific string when reading a file into a DataFrame, but nothing gets skipped and the dataframe ends up with a MultiIndex. I also tried looping with startswith() and appending to a list, but then all the data ends up in a single column.
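For reference, the loop was roughly along these lines (a reconstruction, with the file name assumed); appending each raw line as a single string is what produces the one-column frame:
import pandas as pd

rows = []
with open("data.csv") as f:  # file name assumed for illustration
    for line in f:
        if line.startswith("(") or line.startswith("skip"):
            continue  # drop the marker lines
        rows.append(line)  # the whole line goes in as one string -> single column
        # rows.append(line.rstrip("\n").split(","))  # splitting would give three columns

df = pd.DataFrame(rows)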
I'm trying to obtain this output:
1,2,3
4,5,6
7,8,9
10,11,12
There are multiple files, each containing over 7M rows, so I'm looking for a fast, memory-efficient way to accomplish this.
Something else I tried was creating a list of the rows to skip: 0 and 1, then 6 and 7, and so on. Is it possible to achieve it that way?
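To make that last idea concrete, here's a sketch of the pattern I have in mind (the row count and file name are placeholders):
import pandas as pd

nrows = 7000000  # placeholder: the total number of lines in the file
skip = [i for i in range(nrows) if i % 6 < 2]  # rows 0, 1, 6, 7, 12, 13, ...
df = pd.read_csv("data.csv", skiprows=skip, header=None)  # file name is a placeholder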
Upvotes: 2
Views: 1628
Reputation: 180461
Presuming you want to take sections of four lines coming after the two lines to skip, just skip two lines and then take a slice of four rows from a csv reader object:
from itertools import islice, chain
import pandas as pd
import csv

def parts(r):
    # skip the two marker lines, then yield slices of four data rows
    _, n = next(r), next(r)
    while n:
        yield islice(r, 4)
        _, n = next(r, ""), next(r, "")

with open("test.txt") as f:
    r = csv.reader(f)
    print(pd.DataFrame(list(chain.from_iterable(parts(r)))))
Output:
    0   1   2
0   1   2   3
1   4   5   6
2   7   8   9
3  10  11  12
Or pass the chain object to pd.DataFrame.from_records:
with open("test.txt")as f:
r = csv.reader(f)
print(pd.DataFrame.from_records(chain.from_iterable(parts(r))))
    0   1   2
0   1   2   3
1   4   5   6
2   7   8   9
3  10  11  12
Or, for a more general approach, use a function based on the itertools consume recipe to skip lines:
from itertools import islice, chain
from collections import deque
import pandas as pd
import csv

def consume(iterator, n):
    "Advance the iterator n-steps ahead. If n is None, consume entirely."
    # Use functions that consume iterators at C speed.
    if n is None:
        # feed the entire iterator into a zero-length deque
        deque(iterator, maxlen=0)
    else:
        # advance to the empty slice starting at position n
        next(islice(iterator, n, n), None)

def parts(r, sec_len, skip):
    # skip the first `skip` rows, then alternate: yield `sec_len` rows, skip `skip` rows
    consume(r, skip)
    for sli in iter(lambda: list(islice(r, sec_len)), []):
        yield sli
        consume(r, skip)

with open("test.txt") as f:
    r = csv.reader(f)
    print(pd.DataFrame.from_records(chain.from_iterable(parts(r, 4, 2))))
The last option is to write to a StringIO object and pass that:
from io import StringIO

def parts(r, sec_len, skip):
    # here r is the raw file object, so yield the joined lines as text
    consume(r, skip)
    for sli in iter(lambda: list(islice(r, sec_len)), []):
        yield "".join(sli)
        consume(r, skip)

with open("test.txt") as f:
    so = StringIO()
    so.writelines(parts(f, 4, 2))
    so.seek(0)
    print(pd.read_csv(so, header=None))
Upvotes: 1
Reputation: 690
I would pipe out to awk to do the filtering on the fly:
import subprocess
import pandas as pd

# keep only rows whose 0-based index modulo 6 is 2..5,
# i.e. drop the first two lines of every group of six
cmd = "awk '(NR - 1) % 6 > 1' test.csv"
proc = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
df = pd.read_csv(proc.stdout, header=None)
The awk command will skip the first two lines out of every group of 6. Since the filtering is done on a streaming basis, this is very memory efficient, and since the filtering is done in a separate process, it'll be fast as well.
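If you go this route, it's also worth reaping the child process once pandas has finished reading; a small sketch, assuming the proc object from above:
# after read_csv has drained the pipe, close our end and wait for awk to exit
proc.stdout.close()
proc.wait()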
Upvotes: -1
Reputation: 353309
Since you have a repeating pattern (toss two lines, keep four), I might do something like this:
from io import BytesIO
import pandas as pd

with open("skipper.csv", "rb") as fp:
    # keep only rows 2-5 of every group of six (drop the first two)
    lines = (line for i, line in enumerate(fp) if i % 6 >= 2)
    df = pd.read_csv(BytesIO(b''.join(lines)), header=None)
which gives me
>>> df
    0   1   2
0   1   2   3
1   4   5   6
2   7   8   9
3  10  11  12
Upvotes: 1
Reputation: 394159
One method is to just generate the list of row numbers to skip: determine the number of rows in the file using the method here: Count how many lines are in a CSV Python?, and then do the following:
In [16]:
import io
import pandas as pd
t="""(1 1),skip,this
skip,this,too
1,2,3
4,5,6
7,8,9
10,11,12
(1 2),skip,this
skip,this,too"""
# generate the initial list of row indices; 10 is used here, but you can get the real number of rows as described above
a = list(range(10))
# take every 6th index starting at 0 and at 1, i.e. the pairs of rows to skip
rows = sorted(a[::6] + a[1::6])
# now read it in
pd.read_csv(io.StringIO(t), skiprows=rows, header=None)
Out[16]:
    0   1   2
0   1   2   3
1   4   5   6
2   7   8   9
3  10  11  12
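To get the row count for a real file (rather than the hard-coded 10 above), a minimal sketch along the lines of the linked answer, with the file name as a placeholder:
# count the rows first, then build the skip list exactly as above
with open("data.csv") as f:
    num_rows = sum(1 for _ in f)

a = list(range(num_rows))
rows = sorted(a[::6] + a[1::6])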
Upvotes: 1
Reputation: 8769
My suggestion would be to just pre-scrub the file beforehand:
with open("file.csv") as rp, open("outfile.csv", 'w') as wp:
for line in rp:
if 'skip' not in line:
wp.write(line)
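The scrubbed file can then be read as usual; a minimal follow-up sketch using the outfile.csv written above:
import pandas as pd

df = pd.read_csv("outfile.csv", header=None)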
Upvotes: 1