Reputation: 11
I am dealing with an extremely large text file (around 3.77 GB), and I am trying to extract all the sentences a specific word occurs in and write them out to a text file.
So the large text file is just many lines of text:
line 1 text ....
line 2 text ....
I have also extracted the unique word list from the text file, and I want to extract all the sentences each word occurs in and write out the context associated with each word. Ideally, the output file will take the format of
word1 \t sentence 1\n sentence 2\n sentence N\n
word2 \t sentence 1\n sentence 2\n sentence M\n
The current code I have is something like this:
fout = open('word_context_3000_4000(4).txt', 'a')
for x in unique_word[3000:4000]:
    fout.write('\n' + x + '\t')
    fin = open('corpus2.txt')
    for line in fin:
        if x in line.strip().split():
            fout.write(line)
        else:
            pass
fout.close()
Since the unique word list is big, I process it chunk by chunk. But, somehow, the code fails to get the context for all the words, and only returns the context for the first few hundred words in the unique word list.
Has anyone worked on a similar problem before? I am using Python, btw.
Thanks a lot.
Upvotes: 1
Views: 860
Reputation: 80771
First problem: you never close fin. Since you reopen the corpus once per word without ever closing it, you eventually run out of file handles, which is why the script stops after the first few hundred words.
Maybe you should try something like this:
fout = open('word_context_3000_4000(4).txt', 'a')
fin = open('corpus2.txt')
for x in unique_word[3000:4000]:
    fout.write('\n' + x + '\t')
    fin.seek(0)  # go back to the beginning of the file
    for line in fin:
        if x in line.strip().split():
            fout.write(line)
fout.close()
fin.close()
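That said, for a 3.77 GB corpus, rescanning the whole file once per word (1000 passes per chunk) will be very slow. An alternative is to make a single pass over the corpus and build an inverted index from each word of interest to the lines it occurs in. Here is a minimal sketch; `build_contexts` is a hypothetical helper name, and the file names and chunk bounds in the commented usage are taken from your question:

```python
from collections import defaultdict

def build_contexts(corpus_lines, words):
    """Map each word in `words` to every line it occurs in (one pass)."""
    wanted = set(words)  # set membership test is O(1)
    contexts = defaultdict(list)
    for line in corpus_lines:
        # Intersect the line's tokens with the wanted words,
        # so memory stays bounded by the current chunk
        for word in set(line.strip().split()) & wanted:
            contexts[word].append(line.strip())
    return contexts

# Usage with your files (one pass per chunk instead of one pass per word):
# with open('corpus2.txt') as fin:
#     contexts = build_contexts(fin, unique_word[3000:4000])
# with open('word_context_3000_4000(4).txt', 'a') as fout:
#     for word in unique_word[3000:4000]:
#         fout.write('\n' + word + '\t' + '\n'.join(contexts[word]))
```

The `with` blocks also close both files automatically, which avoids the file-handle leak in the first place.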
Upvotes: 1