ℕʘʘḆḽḘ

Reputation: 19405

Understanding spaCy's noun-chunk parser

I am looking at spaCy's code for extracting noun chunks (reproduced below), and I do not understand the part guarded by this comment:

Prevent nested chunks from being produced

for i, word in enumerate(doclike):
    if word.pos not in (NOUN, PROPN, PRON):
        continue
    # Prevent nested chunks from being produced
    if word.left_edge.i <= prev_end:
        continue

I understand that we are trying to avoid nested chunks, but could someone please explain how this is achieved with the left_edge attribute? How does this keep track of the start/end indices of the noun chunk? (For concreteness, I have printed some left_edge values after the full listing below.)

Thanks!

https://github.com/explosion/spaCy/blob/260c29794a1caa70f8b0702c31fcfecad6bfdadc/spacy/lang/en/syntax_iterators.py

# coding: utf8
from __future__ import unicode_literals

from ...symbols import NOUN, PROPN, PRON
from ...errors import Errors


def noun_chunks(doclike):
    """
    Detect base noun phrases from a dependency parse. Works on both Doc and Span.
    """
    labels = [
        "nsubj",
        "dobj",
        "nsubjpass",
        "pcomp",
        "pobj",
        "dative",
        "appos",
        "attr",
        "ROOT",
    ]
    doc = doclike.doc  # Ensure works on both Doc and Span.

    if not doc.is_parsed:
        raise ValueError(Errors.E029)

    np_deps = [doc.vocab.strings.add(label) for label in labels]
    conj = doc.vocab.strings.add("conj")
    np_label = doc.vocab.strings.add("NP")
    prev_end = -1
    for i, word in enumerate(doclike):
        if word.pos not in (NOUN, PROPN, PRON):
            continue
        # Prevent nested chunks from being produced
        if word.left_edge.i <= prev_end:
            continue
        if word.dep in np_deps:
            prev_end = word.i
            yield word.left_edge.i, word.i + 1, np_label
        elif word.dep == conj:
            head = word.head
            while head.dep == conj and head.head.i < head.i:
                head = head.head
            # If the head is an NP, and we're coordinated to it, we're an NP
            if head.dep in np_deps:
                prev_end = word.i
                yield word.left_edge.i, word.i + 1, np_label


SYNTAX_ITERATORS = {"noun_chunks": noun_chunks}
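
To make the question concrete, here is the small probe I used (assuming the en_core_web_sm model is installed; the exact parse, and therefore the output, can vary by model):

import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The quick brown fox jumps over the lazy dog")
for token in doc:
    # left_edge is the leftmost token of this token's syntactic subtree
    print(token.i, token.text, "->", token.left_edge.i, token.left_edge.text)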

Upvotes: 2

Views: 640

Answers (1)

Sam H.

Reputation: 4359

Spans that would be valid noun chunks on their own can sit inside larger noun chunks. Example:

>>> list(nlp("We went to the clean grocery store").noun_chunks)
[We, the clean grocery store]
>>> list(nlp("We went to clean grocery store").noun_chunks)
[We, clean grocery store]
>>> list(nlp("We went to grocery store").noun_chunks)
[We, grocery store]

token.left_edge is the leftmost token of the token's syntactic subtree, i.e. the position where a chunk headed by that token would start. Whenever the loop yields a chunk, it sets prev_end = word.i, the index of the chunk's head (and last) token. Any later noun whose left_edge.i is <= prev_end would start inside a span that has already been emitted, so the check skips it. That is how the code you ask about prevents list(nlp("We went to the clean grocery store").noun_chunks) from returning [We, the clean grocery store, clean grocery store, grocery store]
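
You can watch the same decision procedure with a simplified trace of the loop (conj handling omitted for brevity; this assumes en_core_web_sm, so the parse may differ slightly under other models):

import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("We went to the clean grocery store")

np_deps = {"nsubj", "dobj", "nsubjpass", "pcomp", "pobj",
           "dative", "appos", "attr", "ROOT"}
prev_end = -1
for word in doc:
    if word.pos_ not in ("NOUN", "PROPN", "PRON"):
        continue
    if word.left_edge.i <= prev_end:
        # a chunk covering this position was already yielded
        print("skipped as nested:", word.text)
        continue
    if word.dep_ in np_deps:
        prev_end = word.i
        print("chunk:", doc[word.left_edge.i : word.i + 1].text)

With a typical parse this prints chunk: We and chunk: the clean grocery store. Note that "grocery" is a compound modifier, so the dep filter already keeps it from heading a chunk of its own, while "store" yields the full span starting at its left edge, "the".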

Upvotes: 3
