itsMe

Reputation: 785

Tokenizing based on certain pattern with Python

I have to tokenize certain patterns from sentences such as abc ABC - - 12 V and ab abc 1,2W. Here both 12 V and 1,2W are values with units, so I want to tokenize the first sentence as abc, ABC and 12 V, and the second as ab, abc and 1,2W. How can I do that? nltk's word_tokenize(test_word) is an option, but I can't insert a custom pattern into it, or can I?

Upvotes: 1

Views: 126

Answers (1)

RMPR

Reputation: 3521

If your input is predictable, in the sense that you know which characters appear between your tokens (in this case I see a space and a hyphen), you can use a regex to extract what you want:

import re

def is_float(s):
    # Match numbers like "12", "-3", "1.2" or "1,2" (comma as decimal separator)
    return re.match(r'^-?\d+(?:[.,]\d+)?$', s)

def extract_tokens(phrase, noise="-"):
    # Replace noise characters with spaces, then split on whitespace
    phrase_list = re.split(r"\s+", re.sub(noise, " ", phrase).strip())
    phrase_tokenized = []
    i, n = 0, len(phrase_list)
    while i < n:
        phrase_tokenized.append(phrase_list[i])
        # If the current token is a number and another token follows,
        # merge the two into one "value unit" token
        if (phrase_list[i].isdigit() or is_float(phrase_list[i])) and i < n - 1:
            phrase_tokenized[-1] += " " + phrase_list[i+1]
            i += 1
        i += 1
    return phrase_tokenized

Sample test:

>>> extract_tokens("abc ABC - - 12 V")
['abc', 'ABC', '12 V']
>>> extract_tokens("ab abc 1,2W")
['ab', 'abc', '1,2W']

And to "insert a pattern", all you need to do is update the noise parameter to match whichever separator characters you want stripped out.
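As an alternative (not part of the original answer), the whole thing can be collapsed into a single regex with re.findall: match a number with an optional decimal part (dot or comma) and an optional trailing unit, or else a plain word. The function name extract_tokens_regex is just an illustrative choice:

```python
import re

# A token is either a number (int, or decimal with "." or ","),
# optionally followed by whitespace and an alphabetic unit, or a plain word.
TOKEN_RE = re.compile(r'-?\d+(?:[.,]\d+)?(?:\s*[A-Za-z]+)?|[A-Za-z]+')

def extract_tokens_regex(phrase):
    # findall returns each full match; hyphens and other noise
    # between tokens are simply never matched, so they drop out
    return TOKEN_RE.findall(phrase)

print(extract_tokens_regex("abc ABC - - 12 V"))  # ['abc', 'ABC', '12 V']
print(extract_tokens_regex("ab abc 1,2W"))       # ['ab', 'abc', '1,2W']
```

This avoids the explicit merge loop, at the cost of the pattern being a little harder to read; note that a standalone hyphen directly before a digit (e.g. "-12 V") would be kept as a sign, which may or may not be what you want.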

Upvotes: 2
