Reputation: 915
I want to count the percentage split of POS tags in a sentence using spaCy, similar to
Count verbs, nouns, and other parts of speech with python's NLTK
I can currently detect and count the POS tags. How do I find the percentage split?
from __future__ import unicode_literals
import en_core_web_sm
from collections import Counter

nlp = en_core_web_sm.load()
print(Counter(token.pos_ for token in nlp('The cat sat on the mat.')))
Current output:
Counter({u'NOUN': 2, u'DET': 2, u'VERB': 1, u'ADP': 1, u'PUNCT': 1})
Expected output:
NOUN: 28.5%
DET: 28.5%
VERB: 14.28%
ADP: 14.28%
PUNCT: 14.28%
How do I write the output to a pandas DataFrame?
Upvotes: 0
Views: 2823
Reputation: 915
from __future__ import unicode_literals
import en_core_web_sm
from collections import Counter

nlp = en_core_web_sm.load()
c = Counter(token.pos_ for token in nlp('The cat sat on the mat.'))
sbase = sum(c.values())
for el, cnt in c.items():
    print(el, '{0:2.2f}%'.format((100.0 * cnt) / sbase))
Output:
NOUN 28.57%
DET 28.57%
VERB 14.29%
ADP 14.29%
PUNCT 14.29%
Upvotes: 0
Reputation: 16728
Something along these lines should give you what you need:
sbase = sum(c.values())
for el, cnt in c.items():
    print(el, '{0:2.2f}%'.format((100.0 * cnt) / sbase))
NOUN 28.57%
DET 28.57%
VERB 14.29%
ADP 14.29%
PUNCT 14.29%
Upvotes: 1
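The question also asked how to get the output into a pandas DataFrame, which neither answer addresses. A minimal sketch, starting from the Counter values shown in the question (spaCy is omitted here so the snippet stands alone; assumes pandas is installed, and the column names `pos`/`percent` are arbitrary choices):

```python
from collections import Counter

import pandas as pd

# Counts as produced by the question's spaCy snippet.
c = Counter({'NOUN': 2, 'DET': 2, 'VERB': 1, 'ADP': 1, 'PUNCT': 1})
sbase = sum(c.values())

# One row per POS tag, with its percentage share of all tokens.
df = pd.DataFrame(
    [(pos, round(100.0 * cnt / sbase, 2)) for pos, cnt in c.items()],
    columns=['pos', 'percent'],
)
print(df)
```

To plug spaCy back in, replace the literal Counter with `Counter(token.pos_ for token in nlp('The cat sat on the mat.'))` as in the answers above.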