kabaname

Reputation: 265

Python: Multiple Text Files to Dataframe

I'm a little stuck on how exactly to proceed, so a little nudge would be very helpful.

I have ~1800 text files, emails actually, that are in a repeated format.

The structure of each file is as follows:

From: Person-1 [[email protected]]
Sent: Tuesday, April 18, 2017 11:24 AM
To: [email protected]
Subject: Important Subject

User, 

Below is your search alert.

Target: text

Attribute: text

Label: abcdef

Time: Apr 18, 2017 11:24 EDT

Full Text: Text of various length exists here. Some files even have links. I'm not sure how I would capture a varied length field.

Recording: abcde & fghijk lmnop

That's the gist of it.

I would like to write that into a DF I can store as a CSV.

I would like to end up with maybe something like this?

| Target | Attribute |  Label  |  Time  |  Full Text  | Recording | Filename |
|--------|-----------|---------|--------|-------------|-----------|----------|
|    text|       text|   abcdef| (date) |(Full text..)|abcde & f..| 1111.txt |
|   text2|      text2|  abcdef2| (date) |(Full text..)|abcde & f..| 1112.txt |

Where the second row comes from another text file.

I have code to go through all of the text files and print them. Here's that code:

# -*- coding: utf-8 -*-
import os
import sys

# Take all text files in workingDirectory and put them into a DF.
def convertText(workingDirectory, outputDirectory):
    if workingDirectory == "": workingDirectory = os.getcwd() + "\\" # Fall back to the current working directory if workingDirectory is empty.
    i = 0
    for txt in os.listdir(workingDirectory): # Iterate through the text files in workingDirectory
        print("Processing File: " + str(txt))
        fileExtension = txt.split(".")[-1]
        if fileExtension == "txt":
            textFilename = workingDirectory + txt # Becomes: \PATH\example.txt
            f = open(textFilename, "r")
            data = f.read() # read what is inside
            f.close()
            print(data) # print to show it is readable

            #RegEx goes here?

            i += 1 # counter
    print("Successfully read " + str(i) + " files.")


def main(argv):
    workingDirectory = "../Documents/folder//" # Put your source directory of text files here
    outputDirectory = "../Documents//" # Where you want your converted files to go.

    convertText(workingDirectory, outputDirectory)

if __name__ == "__main__":
    main(sys.argv[1:])

I guess I would need RegEx, maybe, to parse the files? What would you recommend?

I am not opposed to using R or something else, if it makes more sense.

Thank You.

Upvotes: 1

Views: 1336

Answers (1)

Alessi 42

Reputation: 1162

Regex should be sufficient for your use case. The expression r"\sTarget:(.*)" matches everything on the line following Target:, so by creating a list of all the fields you wish to match and iterating over them, you can build up a dictionary that stores the value of each field for a given file.
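For instance, here is a quick sketch of that match against a snippet shaped like your emails (the sample string below is made up for illustration):

import re

# Hypothetical snippet shaped like one of the emails, for illustration only.
sample = """
Target: text

Attribute: text

Full Text: Text of various length
exists here, possibly over several lines.

Recording: abcde & fghijk lmnop
"""

match = re.search(r"\sTarget:(.*)", sample)
if match:
    print(match.group(1).strip())  # prints: text

# (.*) stops at the end of the line, so the variable-length "Full Text"
# field needs a DOTALL pattern that reads up to the next label. One
# option (an assumption, not tested against the real emails):
full = re.search(r"Full Text:(.*?)(?=\n\s*Recording:)", sample, re.DOTALL)
if full:
    print(full.group(1).strip())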

Using the Python csv library you can create a CSV file and, for each .txt file in your directory, push a row of the matched fields with writer.writerow({'Target': ..., 'Attribute': ..., 'Label': ..., 'Time': ..., 'Full Text': ..., 'Filename': ...})

Example:

import os
import sys
import re
import csv 

# Take all text files in workingDirectory and parse them into one CSV.
def convertText(workingDirectory, outputDirectory):
    with open(outputDirectory + 'emails.csv', 'w') as csvfile: # opens the file \PATH\emails.csv
        fields = ['Target', 'Attribute', 'Label', 'Time', 'Full Text'] # fields you're searching for with regex
        csvfields = fields + ['Filename'] # the Filename column goes in the CSV header but isn't matched with regex
        writer = csv.DictWriter(csvfile, delimiter=',', lineterminator='\n', fieldnames=csvfields)
        writer.writeheader() # writes csvfields as the header row of the csv

        if workingDirectory == "": workingDirectory = os.getcwd() + "\\" # Fall back to the current working directory if workingDirectory is empty.
        i = 0
        for txt in os.listdir(workingDirectory): # Iterate through the text files in workingDirectory
            print("Processing File: " + str(txt))
            fileExtension = txt.split(".")[-1]
            if fileExtension == "txt":
                textFilename = workingDirectory + txt # Becomes: \PATH\example.txt
                f = open(textFilename, "r")
                data = f.read() # read what is inside
                f.close()

                fieldmatches = {'Filename': txt}
                for field in fields:
                    regex = "\\s" + field + ":(.*)" # e.g. r"\sTarget:(.*)" selects everything on the line after Target:
                    match = re.search(regex, data)
                    if match:
                        fieldmatches[field] = match.group(1)
                writer.writerow(fieldmatches) # one row per file: the matched fields plus the filename
                i += 1 # counter
    print("Successfully read " + str(i) + " files.")


def main(argv):
    workingDirectory = "../Documents/folder//" # Put your source directory of text files here
    outputDirectory = "../Documents//" # Where you want your converted files to go.

    convertText(workingDirectory, outputDirectory)

if __name__ == "__main__":
    main(sys.argv[1:])

This should be fast enough for your ~1800 files; on my machine it took less than a second:

Successfully read 1866 files.
Time: 0.6991933065852838
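
And since you mentioned wanting a DataFrame, the same parsed rows drop straight into pandas. Here is a minimal sketch of that variant (assuming pandas is installed, and reusing your source directory and the field list from above):

import os
import re

import pandas as pd

fields = ['Target', 'Attribute', 'Label', 'Time', 'Full Text', 'Recording']
workingDirectory = "../Documents/folder/" # same source directory as above

records = []
for txt in os.listdir(workingDirectory):
    if txt.endswith(".txt"):
        with open(os.path.join(workingDirectory, txt), "r") as f:
            data = f.read()
        row = {'Filename': txt}
        for field in fields:
            # re.escape keeps field names with spaces, like "Full Text", safe in the pattern
            match = re.search(r"\s" + re.escape(field) + ":(.*)", data)
        if match:
                row[field] = match.group(1).strip()
        records.append(row)

df = pd.DataFrame(records, columns=fields + ['Filename'])
df.to_csv("../Documents/emails.csv", index=False) # same output file as the csv version

Rows that are missing a field simply come out as NaN in the DataFrame, and you can inspect the table with df.head() before saving.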

Hope this helps!

Upvotes: 1
