Reputation: 9869
I have some tweets which I wish to split into words. Most of it works fine, except when people run words together, like trumpisamoron or makeamericagreatagain. But then there are also words like password, which shouldn't be split up into pass and word.
I know that the nltk package has a punkt tokenizer module which splits sentences up in a smart way. Is there something similar for words? Even if it isn't in the nltk package?
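For reference, this is roughly how that tokenizer is used; nltk.sent_tokenize loads the pretrained punkt model (punkt is its standard downloader id):

import nltk

nltk.download('punkt')  # one-time download of the pretrained model
print(nltk.sent_tokenize("This is one sentence. Here is another one."))
# ['This is one sentence.', 'Here is another one.']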
Note: the over-splitting case (password -> pass + word) is much less of a problem than the failure to split combined words.
Upvotes: 1
Views: 160
Reputation: 3153
Ref: my answer on another question - Need to split #tags to text.
The changes I made in this answer are: (1) a different corpus to get WORDS, and (2) an added memo(f) decorator to speed up the process. You may need to add or swap corpora depending on the domain you are working with.
Check the Word Segmentation Task from Norvig's work.
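If the two corpora used below are not already installed, they can be fetched once through the nltk downloader (reuters and words are the standard resource ids):

import nltk

nltk.download('reuters')  # Reuters news corpus
nltk.download('words')    # Unix wordlist corpus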
from collections import Counter
import nltk

# Vocabulary from two corpora; list() makes WORDS a plain list so it can
# be extended with user-defined words later.
WORDS = list(nltk.corpus.reuters.words()) + list(nltk.corpus.words.words())
COUNTS = Counter(WORDS)

def memo(f):
    "Memoize function f, whose args must all be hashable."
    cache = {}
    def fmemo(*args):
        if args not in cache:
            cache[args] = f(*args)
        return cache[args]
    fmemo.cache = cache
    return fmemo

def pdist(counter):
    "Make a probability distribution, given evidence from a Counter."
    N = sum(counter.values())
    return lambda x: counter[x] / N

P = pdist(COUNTS)

def Pwords(words):
    "Probability of words, assuming each word is independent of others."
    return product(P(w) for w in words)

def product(nums):
    "Multiply the numbers together. (Like `sum`, but with multiplication.)"
    result = 1
    for x in nums:
        result *= x
    return result

def splits(text, start=0, L=20):
    "Return a list of all (first, rest) pairs; start <= len(first) <= L."
    return [(text[:i], text[i:])
            for i in range(start, min(len(text), L) + 1)]

@memo
def segment(text):
    "Return a list of words that is the most probable segmentation of text."
    if not text:
        return []
    else:
        candidates = ([first] + segment(rest)
                      for (first, rest) in splits(text, 1))
        return max(candidates, key=Pwords)

print(segment('password'))               # ['password']
print(segment('makeamericagreatagain'))  # ['make', 'america', 'great', 'again']
print(segment('trumpisamoron'))          # ['trump', 'is', 'a', 'moron']
print(segment('narcisticidiots'))        # ['narcistic', 'idiot', 's']
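To see what the memo decorator buys you, here is a rough timing sketch (exact numbers will vary by machine); the second call is answered from the cache that segment built during the first call:

from datetime import datetime

start = datetime.now()
segment('thequickbrownfox')   # first call: computed recursively
print('first call :', datetime.now() - start)

start = datetime.now()
segment('thequickbrownfox')   # second call: served from the memo cache
print('second call:', datetime.now() - start)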
Sometimes a word gets split into smaller tokens; when that happens, the likely cause is that the word is not present in our WORDS dictionary. In the segmentation examples above, segment broke narcisticidiots into three tokens because the token idiots was not in our WORDS.
# Check for sample word 'idiots'
if 'idiots' in WORDS:
    print("YES")
else:
    print("NO")
You can add new user-defined words to WORDS.

...
user_words = ['idiots']
WORDS += user_words
COUNTS = Counter(WORDS)
P = pdist(COUNTS)        # rebuild the distribution over the new counts
segment.cache.clear()    # drop memoized results computed with the old counts
...
print(segment('narcisticidiots'))  # ['narcistic', 'idiots']
For a better solution than this, you can use a bigram/trigram language model, which scores each word in the context of its predecessor instead of treating words as independent.
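Here is a rough sketch of the bigram idea, in the spirit of Norvig's segment2; the names tokens, COUNTS1, COUNTS2, and cPw, and the unigram-backoff scheme, are illustrative assumptions rather than part of the code above (it reuses the Counter and nltk imports from earlier):

tokens = [w.lower() for w in nltk.corpus.reuters.words()]
COUNTS1 = Counter(tokens)                   # unigram counts
COUNTS2 = Counter(zip(tokens, tokens[1:]))  # adjacent-pair (bigram) counts
N = sum(COUNTS1.values())

def cPw(word, prev):
    "P(word | prev), backing off to the unigram estimate when the pair is unseen."
    if COUNTS2[(prev, word)] > 0 and COUNTS1[prev] > 0:
        return COUNTS2[(prev, word)] / COUNTS1[prev]
    return COUNTS1[word] / N

def Pwords2(words, prev='<S>'):
    "Probability of a word sequence under the bigram model."
    result = 1
    for w in words:
        result *= cPw(w, prev)
        prev = w
    return result

Note that to use this for segmentation you would also thread the previous word through the recursion, as Norvig's segment2 does, rather than simply swapping Pwords2 into segment.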
More examples at: Word Segmentation Task.
Upvotes: 1