slm

Reputation: 237

How to edit a .csv in Python for NLP

Hello, I am not very familiar with programming and found Stack Overflow while researching my task. I want to do natural language processing on a .csv file that looks like this and has about 15,000 rows:

    ID | Title        | Body
    ----------------------------------------
    1  | Who is Jack? | Jack is a teacher... 
    2  | Who is Sam?  | Sam is a dog.... 
    3  | Who is Sarah?| Sarah is a doctor...
    4  | Who is Amy?  | Amy is a wrestler... 

I want to read the .csv file, do some basic NLP operations, and write the results back to a new file or the same one. After some research, Python and NLTK seem to be the technologies I need (I hope that's right). After tokenizing, I want my .csv file to look like this:

    ID | Title                 | Body
    -----------------------------------------------------------
    1  | "Who" "is" "Jack" "?" | "Jack" "is" "a" "teacher"... 
    2  | "Who" "is" "Sam" "?"  | "Sam" "is" "a" "dog".... 
    3  | "Who" "is" "Sarah" "?"| "Sarah" "is" "a" "doctor"...
    4  | "Who" "is" "Amy" "?"  | "Amy" "is" "a" "wrestler"... 

What I have achieved after a day of research and putting pieces together looks like this:

    ID | Title                 | Body
    ----------------------------------------------------------
    1  | "Who" "is" "Jack" "?" | "Jack" "is" "a" "teacher"... 
    2  | "Who" "is" "Sam" "?"  | "Jack" "is" "a" "teacher"...
    3  | "Who" "is" "Sarah" "?"| "Jack" "is" "a" "teacher"...
    4  | "Who" "is" "Amy" "?"  | "Jack" "is" "a" "teacher"... 

My first idea was to read a specific cell in the .csv, do an operation, write it back to the same cell, and then somehow do that automatically for all rows. I managed to read a cell and tokenize it, but I could not manage to write it back into that specific cell, and I am far away from "do that automatically for all rows". I would appreciate some help if possible.

My code:

    import csv
    from nltk.tokenize import word_tokenize 

    ############Read CSV File######################
    ########## ID , Title, Body#################### 

    line_number = 1 #line to read (need some kind of loop here)
    column_number = 2 # column to read (need some kind of loop here)
    with open('test10in.csv', 'rb') as f:
        reader = csv.reader(f)
        reader = list(reader)
        text = reader[line_number][column_number] 


        stringtext = ''.join(text) #tokenizing just work on strings 
        tokenizedtext = (word_tokenize(stringtext))
        print(tokenizedtext)

    #############Write back in same cell in new CSV File######

    with open('test11out.csv', 'wb') as g:
        writer = csv.writer(g)
        for row in reader:
            row[2] = tokenizedtext
            writer.writerow(row)

I hope I asked the question correctly and someone can help me out.

Upvotes: 3

Views: 2811

Answers (2)

alexis

Reputation: 50220

You first need to parse your file, and then process (tokenize, etc.) each field separately.

If your file really looks like your sample, I wouldn't call it a CSV. Still, you could parse it with the csv module, which is made for reading all sorts of CSV-like files: add delimiter="|" to the arguments of csv.reader() to separate your rows into cells (and don't open the file in binary mode). But your file is easy enough to parse directly:

    with open('test10in.csv', encoding="utf-8") as fp:  # Or whatever encoding is right
        content = fp.read()
        lines = content.splitlines()
        allrows = [ [ fld.strip() for fld in line.split("|") ] for line in lines ]

        # Headers and data:
        headers = allrows[0]
        rows = allrows[2:]

You can then use nltk.word_tokenize() to tokenize each field of rows, and go on from there.
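For example, here is a minimal end-to-end sketch. The filenames are the ones from your question, the quote-around-each-token output format is just there to match your desired sample, and `tokenize()` is a plain `str.split()` stand-in so the sketch runs without NLTK's 'punkt' data; swap in `nltk.word_tokenize` for real tokenization:

```python
def tokenize(text):
    # Stand-in tokenizer so this runs without NLTK's 'punkt' data;
    # replace with nltk.word_tokenize for real tokenization.
    return text.split()

def process_file(infile, outfile):
    with open(infile, encoding="utf-8") as fp:
        lines = fp.read().splitlines()
    allrows = [[fld.strip() for fld in line.split("|")] for line in lines]
    headers, rows = allrows[0], allrows[2:]  # row 1 is the dashed separator

    with open(outfile, "w", encoding="utf-8") as out:
        out.write(" | ".join(headers) + "\n")
        for row in rows:
            # Tokenize every field except the ID, quoting each token
            # to match the desired output shown in the question.
            cells = [row[0]] + [
                " ".join('"%s"' % tok for tok in tokenize(cell))
                for cell in row[1:]
            ]
            out.write(" | ".join(cells) + "\n")
```

This loops over every data row, so there is no need to hard-code a line or column number.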

Upvotes: 1

Randy

Reputation: 14847

The pandas library will make all of this much easier.

pd.read_csv() will handle the input much more easily, and you can apply the same function to a column using pd.DataFrame.apply().

Here's a quick example of how the key parts work. In the .applymap() call, you can replace my lambda function with word_tokenize() to apply it across all elements instead.

In [58]: import pandas as pd

In [59]: pd.read_csv("test.csv")
Out[59]:
                     0                          1
0  wrestler Amy dog is         teacher dog dog is
1      is wrestler ? ?  Sarah doctor teacher Jack
2        a ? Sam Sarah           is dog Sam Sarah
3       Amy a a doctor             Amy a Amy Jack

In [60]: df = pd.read_csv("test.csv")

In [61]: df.applymap(lambda x: x.split())
Out[61]:
                          0                               1
0  [wrestler, Amy, dog, is]         [teacher, dog, dog, is]
1      [is, wrestler, ?, ?]  [Sarah, doctor, teacher, Jack]
2        [a, ?, Sam, Sarah]           [is, dog, Sam, Sarah]
3       [Amy, a, a, doctor]             [Amy, a, Amy, Jack]

Also see: http://pandas.pydata.org/pandas-docs/stable/basics.html#row-or-column-wise-function-application
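To round this out for the pipe-separated file in the question, here's a hedged sketch: the `sep="|"` and `skiprows=[1]` arguments assume the file really contains the dashed separator line shown in the sample, and the lambda again stands in for `word_tokenize`:

```python
import pandas as pd

def tokenize_csv(infile, outfile):
    # sep="|" and skiprows=[1] assume the file looks like the question's
    # sample (pipe-separated, with a dashed line under the header).
    df = pd.read_csv(infile, sep="|", skipinitialspace=True, skiprows=[1])
    df.columns = [c.strip() for c in df.columns]

    # Tokenize the text columns element-wise; swap the lambda for
    # nltk.word_tokenize once NLTK's 'punkt' data is installed.
    for col in ("Title", "Body"):
        df[col] = df[col].map(lambda x: x.strip().split())

    df.to_csv(outfile, index=False)
    return df
```

The to_csv() call writes the token lists back out to a new file, which covers the "write the results back" half of the question.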

Upvotes: 2
