JohnnySemicolon

Reputation: 265

Pandas NLTK - Tokenizing all rows in a column for natural language processing

==Using Jupyter Notebooks==

I got NLTK working on a single string of text.

    import nltk

    Text = 'Hey. I got some text here'

    def preprocess(sent):
        # Tokenize into words, then tag each token with its part of speech
        sent = nltk.word_tokenize(sent)
        sent = nltk.pos_tag(sent)
        return sent

    sent = preprocess(Text)
    sent

Output:

    [('Hey', 'NNP'),
     ('.', '.'),
     ('I', 'PRP'),
     ('got', 'VBD'),
     ('some', 'DT'),
     ('text', 'NN'),
     ('here', 'RB')]

This works, but it isn't that useful on its own, because I would like to automate this across many rows in a data frame.

Basically, I want to tokenize the words while maintaining an index key so I can reassemble the tokens I want in a new field. For example, I'm looking for human names in a particular Excel column that contains over 1,000 rows.

When I try this out on a DataFrame, this is the problem I run into.

    print(desdf)

               Description
    0  some text here John
    1      Other cool text
    2            John Paul

Running the code with this data frame, I get TypeError: expected string or bytes-like object.

    def preprocess(sent):
        sent = nltk.word_tokenize(sent)
        sent = nltk.pos_tag(sent)
        return sent

    sent = preprocess(desdf)
    sent

Is this not possible, or is there some conversion command that needs to happen? Thanks for the help.

ERROR:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-23-b7b2a604215b> in <module>
      3     sent = nltk.pos_tag(sent)
      4     return sent
----> 5 sent = preprocess(desdf)
      6 sent

<ipython-input-23-b7b2a604215b> in preprocess(sent)
      1 def preprocess(sent):
----> 2     sent = nltk.word_tokenize(sent)
      3     sent = nltk.pos_tag(sent)
      4     return sent
      5 sent = preprocess(desdf)

~\AppData\Local\Continuum\anaconda3\lib\site-packages\nltk\tokenize\__init__.py in word_tokenize(text, language, preserve_line)
    142     :type preserve_line: bool
    143     """
--> 144     sentences = [text] if preserve_line else sent_tokenize(text, language)
    145     return [
    146         token for sent in sentences for token in _treebank_word_tokenizer.tokenize(sent)

~\AppData\Local\Continuum\anaconda3\lib\site-packages\nltk\tokenize\__init__.py in sent_tokenize(text, language)
    104     """
    105     tokenizer = load('tokenizers/punkt/{0}.pickle'.format(language))
--> 106     return tokenizer.tokenize(text)
    107 
    108 

~\AppData\Local\Continuum\anaconda3\lib\site-packages\nltk\tokenize\punkt.py in tokenize(self, text, realign_boundaries)
   1275         Given a text, returns a list of the sentences in that text.
   1276         """
-> 1277         return list(self.sentences_from_text(text, realign_boundaries))
   1278 
   1279     def debug_decisions(self, text):

~\AppData\Local\Continuum\anaconda3\lib\site-packages\nltk\tokenize\punkt.py in sentences_from_text(self, text, realign_boundaries)
   1329         follows the period.
   1330         """
-> 1331         return [text[s:e] for s, e in self.span_tokenize(text, realign_boundaries)]
   1332 
   1333     def _slices_from_text(self, text):

~\AppData\Local\Continuum\anaconda3\lib\site-packages\nltk\tokenize\punkt.py in <listcomp>(.0)
   1329         follows the period.
   1330         """
-> 1331         return [text[s:e] for s, e in self.span_tokenize(text, realign_boundaries)]
   1332 
   1333     def _slices_from_text(self, text):

~\AppData\Local\Continuum\anaconda3\lib\site-packages\nltk\tokenize\punkt.py in span_tokenize(self, text, realign_boundaries)
   1319         if realign_boundaries:
   1320             slices = self._realign_boundaries(text, slices)
-> 1321         for sl in slices:
   1322             yield (sl.start, sl.stop)
   1323 

~\AppData\Local\Continuum\anaconda3\lib\site-packages\nltk\tokenize\punkt.py in _realign_boundaries(self, text, slices)
   1360         """
   1361         realign = 0
-> 1362         for sl1, sl2 in _pair_iter(slices):
   1363             sl1 = slice(sl1.start + realign, sl1.stop)
   1364             if not sl2:

~\AppData\Local\Continuum\anaconda3\lib\site-packages\nltk\tokenize\punkt.py in _pair_iter(it)
    316     it = iter(it)
    317     try:
--> 318         prev = next(it)
    319     except StopIteration:
    320         return

~\AppData\Local\Continuum\anaconda3\lib\site-packages\nltk\tokenize\punkt.py in _slices_from_text(self, text)
   1333     def _slices_from_text(self, text):
   1334         last_break = 0
-> 1335         for match in self._lang_vars.period_context_re().finditer(text):
   1336             context = match.group() + match.group('after_tok')
   1337             if self.text_contains_sentbreak(context):

TypeError: expected string or bytes-like object

Upvotes: 0

Views: 778

Answers (1)

jezrael

Reputation: 862641

Select the column and use Series.apply to run the processing function on each row:

    sent = desdf['Description'].apply(preprocess)
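
If the end goal is pulling human names out of the column, a possible next step (a minimal sketch, not part of the original answer; it assumes the column is named Description and treats proper-noun tags as a rough proxy for names) is to keep only NNP/NNPS tokens and join them into a new field:

    import nltk
    import pandas as pd

    # Assumes the required NLTK data is installed:
    # nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')

    desdf = pd.DataFrame(
        {'Description': ['some text here John', 'Other cool text', 'John Paul']}
    )

    def preprocess(sent):
        sent = nltk.word_tokenize(sent)
        sent = nltk.pos_tag(sent)
        return sent

    # apply runs preprocess on each row's string, preserving the index
    desdf['Tokens'] = desdf['Description'].apply(preprocess)

    # Keep only proper nouns (NNP/NNPS) as a rough stand-in for names
    desdf['Names'] = desdf['Tokens'].apply(
        lambda tags: ' '.join(w for w, t in tags if t in ('NNP', 'NNPS'))
    )

    print(desdf[['Description', 'Names']])

Because Series.apply preserves the index, each tagged result stays aligned with its source row, which covers the requirement of keeping an index key for reassembling tokens.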

Upvotes: 1
