sten

Reputation: 7486

Force spacy not to parse punctuation?

Is there a way to force spaCy not to split punctuation off into separate tokens?

nlp = spacy.load('en')
doc = nlp(u'the $O is in $R')

[w for w in doc]
# [the, $, O, is, in, $, R]

I want:

# [the, $O, is, in, $R]

Upvotes: 1

Views: 1380

Answers (2)

Mankind_2000

Reputation: 2218

Customize the prefix_search function for spaCy's Tokenizer class (refer to the documentation). Something like:

import spacy
import re
from spacy.tokenizer import Tokenizer

# match '$' followed by an alphanumeric character; adapt the regex to your requirement
prefix_re = re.compile(r'''^\$[a-zA-Z0-9]''')

def custom_tokenizer(nlp):
    return Tokenizer(nlp.vocab, prefix_search=prefix_re.search)

nlp = spacy.load("en_core_web_sm")
nlp.tokenizer = custom_tokenizer(nlp)
doc = nlp(u'the $O is in $R')
print([t.text for t in doc])

# ['the', '$O', 'is', 'in', '$R']

Upvotes: 1

hkr

Reputation: 270

Yes, there is. For example,

import spacy
import regex as re
from spacy.tokenizer import Tokenizer

prefix_re = re.compile(r'''^[\[\+\("']''')
suffix_re = re.compile(r'''[\]\)"']$''')
infix_re = re.compile(r'''[\(\-\)\@\.\:\$]''')  # you need to change the infix tokenization rules
simple_url_re = re.compile(r'''^https?://''')

def custom_tokenizer(nlp):
    return Tokenizer(nlp.vocab, prefix_search=prefix_re.search,
                     suffix_search=suffix_re.search,
                     infix_finditer=infix_re.finditer,
                     token_match=simple_url_re.match)

nlp = spacy.load('en_core_web_sm')
nlp.tokenizer = custom_tokenizer(nlp)

doc = nlp(u'the $O is in $R')
print([w for w in doc])  # prints:

[the, $O, is, in, $R]

You just need to add the '$' character to the infix regex (escaped with a '\', obviously).

Aside: I have included the prefix and suffix regexes to showcase the flexibility of spaCy's tokenizer. In your case, just the infix regex will suffice.
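For comparison, here is a minimal sketch that keeps all of spaCy's default tokenization rules and only stops '$' from being split off as a prefix, assuming spaCy v2+ (the exact form of the '$' entry in nlp.Defaults.prefixes may differ across versions):

import spacy
from spacy.util import compile_prefix_regex

nlp = spacy.load('en_core_web_sm')

# Drop the '$' pattern from the default prefix rules; everything else stays intact.
# The entry may be stored as '\\$' (an escaped regex) or '$' depending on the spaCy version.
prefixes = [p for p in nlp.Defaults.prefixes if p not in ('$', '\\$')]
nlp.tokenizer.prefix_search = compile_prefix_regex(prefixes).search

doc = nlp(u'the $O is in $R')
print([t.text for t in doc])
# expected: ['the', '$O', 'is', 'in', '$R']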

Upvotes: 1
