CentAu

Reputation: 11160

Noun phrases with spaCy

How can I extract noun phrases from text using spaCy?
I am not referring to part-of-speech tags. In the documentation I cannot find anything about noun phrases or regular parse trees.

Upvotes: 38

Views: 43030

Answers (5)

Suzana

Reputation: 4410

If you want to specify more precisely which kind of noun phrase to extract, you can use textacy's matches function, which accepts any combination of POS tags. For example,

textacy.extract.matches(doc, "POS:ADP POS:DET:? POS:ADJ:? POS:NOUN:+")

will match noun sequences that are preceded by a preposition and, optionally, a determiner and/or an adjective.

Textacy is built on top of spaCy, so the two work well together.
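
A small end-to-end sketch of how this might look (the example sentence is mine, and the call shown matches older textacy releases; in newer releases the equivalent function is, I believe, textacy.extract.token_matches):

import spacy
import textacy.extract

nlp = spacy.load("en_core_web_sm")
doc = nlp("The dog slept in the warm basket near the door.")

# each match is a spaCy Span covering the whole ADP (DET) (ADJ) NOUN+ sequence,
# e.g. "in the warm basket" and "near the door" with a typical English model
for span in textacy.extract.matches(doc, "POS:ADP POS:DET:? POS:ADJ:? POS:NOUN:+"):
    print(span.text)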

Upvotes: 2

Talha Tayyab

Reputation: 27425

You can also get the nouns from a sentence like this:

    import spacy

    nlp = spacy.load("en_core_web_sm")
    # the example text is from the spaCy website
    doc = nlp("When Sebastian Thrun started working on self-driving cars at "
              "Google in 2007, few people outside of the company took him "
              "seriously. “I can tell you very senior CEOs of major American "
              "car companies would shake my hand and turn away because I wasn’t "
              "worth talking to,” said Thrun, in an interview with Recode earlier "
              "this week.")
    for x in doc:
        # keep nouns, proper nouns and pronouns
        if x.pos_ in ("NOUN", "PROPN", "PRON"):
            print(x)

Upvotes: 2

Talha Tayyab

Reputation: 27425

from spacy.en import English may give you an error

No module named 'spacy.en'

All language data has been moved to the submodule spacy.lang in spaCy 2.0+.

Please use from spacy.lang.en import English instead.

Then do all the remaining steps as @syllogism_ answered.
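
A minimal sketch of the updated import (note that English() on its own creates a blank pipeline without a parser, so for doc.noun_chunks you still need a trained model such as en_core_web_sm):

    # spaCy 2.0+ import path
    from spacy.lang.en import English

    nlp_blank = English()  # blank pipeline: tokenizer only, no parser

    # for noun_chunks a parsed Doc is required, so load a trained model instead
    import spacy
    nlp = spacy.load("en_core_web_sm")
    doc = nlp("The quick brown fox jumps over the lazy dog.")
    print([np.text for np in doc.noun_chunks])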

Upvotes: 0

Victoria Stuart

Reputation: 5082

import spacy
nlp = spacy.load("en_core_web_sm")
doc = nlp('Bananas are an excellent source of potassium.')
for np in doc.noun_chunks:
    print(np.text)
'''
  Bananas
  an excellent source
  potassium
'''

for word in doc:
    print('word.dep:', word.dep, ' | ', 'word.dep_:', word.dep_)
'''
  word.dep: 429  |  word.dep_: nsubj
  word.dep: 8206900633647566924  |  word.dep_: ROOT
  word.dep: 415  |  word.dep_: det
  word.dep: 402  |  word.dep_: amod
  word.dep: 404  |  word.dep_: attr
  word.dep: 443  |  word.dep_: prep
  word.dep: 439  |  word.dep_: pobj
  word.dep: 445  |  word.dep_: punct
'''

from spacy.symbols import *
np_labels = set([nsubj, nsubjpass, dobj, iobj, pobj])
print('np_labels:', np_labels)
'''
  np_labels: {416, 422, 429, 430, 439}
'''

See https://www.geeksforgeeks.org/use-yield-keyword-instead-return-keyword-python/ for the difference between yield and return.

def iter_nps(doc):
    for word in doc:
        if word.dep in np_labels:
            yield word.dep_

iter_nps(doc)
'''
  <generator object iter_nps at 0x7fd7b08b5bd0>
'''
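
Calling the function only creates the generator; to see the values you have to iterate over it (or wrap it in list()):

for dep_label in iter_nps(doc):
    print(dep_label)
'''
  nsubj
  pobj
'''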

## Modified method:
def iter_nps(doc):
    for word in doc:
        if word.dep in np_labels:
            print(word.text, word.dep_)

iter_nps(doc)
'''
  Bananas nsubj
  potassium pobj
'''

doc = nlp('BRCA1 is a tumor suppressor protein that functions to maintain genomic stability.')
for np in doc.noun_chunks:
    print(np.text)
'''
  BRCA1
  a tumor suppressor protein
  genomic stability
'''

iter_nps(doc)
'''
  BRCA1 nsubj
  that nsubj
  stability dobj
'''

Upvotes: 7

syllogism_

Reputation: 4287

If you want base NPs, i.e. NPs without coordination, prepositional phrases or relative clauses, you can use the noun_chunks iterator on the Doc and Span objects:

>>> from spacy.en import English
>>> nlp = English()
>>> doc = nlp(u'The cat and the dog sleep in the basket near the door.')
>>> for np in doc.noun_chunks:
...     print(np.text)
The cat
the dog
the basket
the door

If you need something else, the best way is to iterate over the words of the sentence and consider the syntactic context to determine whether the word governs the phrase-type you want. If it does, yield its subtree:

from spacy.symbols import *

np_labels = set([nsubj, nsubjpass, dobj, iobj, pobj]) # Probably others too
def iter_nps(doc):
    for word in doc:
        if word.dep in np_labels:
            yield word.subtree
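
For example, a small usage sketch (reusing nlp and iter_nps from above; each yielded subtree is an iterator over the tokens of the phrase, which you can join back into text or turn into a Span):

doc = nlp(u'The cat and the dog sleep in the basket near the door.')
for subtree in iter_nps(doc):
    # the full subtree keeps coordination and attached prepositional phrases,
    # unlike the base NPs returned by doc.noun_chunks
    print(' '.join(w.text for w in subtree))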

Upvotes: 66
