karlos

Reputation: 893

extract relationships using NLTK

This is a follow-up to my question. I am using NLTK to parse out persons, organizations, and their relationships. Using this example, I was able to create chunks of persons and organizations; however, I am getting an error from the nltk.sem.extract_rels command:

AttributeError: 'Tree' object has no attribute 'text'

Here is the complete code:

import nltk
import re
#billgatesbio from http://www.reuters.com/finance/stocks/officerProfile?symbol=MSFT.O&officerId=28066
with open('billgatesbio.txt', 'r') as f:
    sample = f.read()

sentences = nltk.sent_tokenize(sample)
tokenized_sentences = [nltk.word_tokenize(sentence) for sentence in sentences]
tagged_sentences = [nltk.pos_tag(sentence) for sentence in tokenized_sentences]
chunked_sentences = nltk.batch_ne_chunk(tagged_sentences)

# tried plain ne_chunk instead of batch_ne_chunk as given in the book
#chunked_sentences = [nltk.ne_chunk(sentence) for sentence in tagged_sentences]

# pattern to find <person> served as <title> in <org>
IN = re.compile(r'.+\s+as\s+')
for doc in chunked_sentences:
    for rel in nltk.sem.extract_rels('ORG', 'PERSON', doc,corpus='ieer', pattern=IN):
        print nltk.sem.show_raw_rtuple(rel)

This example is very similar to the one given in the book, but that example uses prepared "parsed docs," which appear out of nowhere, and I don't know where to find their object type. I scoured through the git libraries as well. Any help is appreciated.

My ultimate goal is to extract persons, organizations, and titles (dates) for some companies, and then create network maps of persons and organizations.

Upvotes: 10

Views: 10466

Answers (4)

user23299105

Reputation: 1

import nltk
from nltk import word_tokenize, pos_tag
from nltk.chunk import ne_chunk
from nltk.sem import relextract

def semantic_role_labeling(sentence):
    # Tokenize the sentence
    tokens = word_tokenize(sentence)

    # Perform part-of-speech tagging
    tagged_tokens = pos_tag(tokens)

    # Extract named entities
    entities = ne_chunk(tagged_tokens)

    # Perform semantic role labeling
    roles = relextract.extract_rels('PER', 'ORG', entities, corpus='ace', pattern='ie')

    # Print a message before the loop
    print("Semantic Roles:")

    for role in roles:
        subjtext = role['subjtext'][0] if 'subjtext' in role else None
        filler = role['filler'][0] if 'filler' in role else None
        objtext = role['objtext'][0] if 'objtext' in role else None

        print(f"Agent: {subjtext}, Action: {filler}, Patient: {objtext}")

# Example sentence
sentence = "John goes school"

# Perform semantic role labeling
semantic_role_labeling(sentence)

This is not giving a result.

Upvotes: 0

Mitu Vinci

Reputation: 465

This is an NLTK version problem. Your code should work in NLTK 2.x, but for NLTK 3 you should write it like this:

IN = re.compile(r'.*\bin\b(?!\b.+ing)')
for doc in nltk.corpus.ieer.parsed_docs('NYT_19980315'):
    for rel in nltk.sem.relextract.extract_rels('ORG', 'LOC', doc,corpus='ieer', pattern = IN):
         print (nltk.sem.relextract.rtuple(rel))
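As a side note (my sketch, not something the snippet above shows): parsed_docs only works if the IEER corpus data is installed locally, and you can walk every IEER file instead of hard-coding one:

import nltk
import re

# the IEER corpus data must be present; uncomment to fetch it once
# nltk.download('ieer')

IN = re.compile(r'.*\bin\b(?!\b.+ing)')

for fileid in nltk.corpus.ieer.fileids():
    for doc in nltk.corpus.ieer.parsed_docs(fileid):
        for rel in nltk.sem.relextract.extract_rels('ORG', 'LOC', doc, corpus='ieer', pattern=IN):
            print(nltk.sem.relextract.rtuple(rel))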

NLTK Example for Relation Extraction Does not work

Upvotes: 0

cuneyttyler

Reputation: 1355

Here is the source code of the nltk.sem.extract_rels function:

def extract_rels(subjclass, objclass, doc, corpus='ace', pattern=None, window=10):
    """
    Filter the output of ``semi_rel2reldict`` according to specified NE classes and a filler pattern.

    The parameters ``subjclass`` and ``objclass`` can be used to restrict the
    Named Entities to particular types (any of 'LOCATION', 'ORGANIZATION',
    'PERSON', 'DURATION', 'DATE', 'CARDINAL', 'PERCENT', 'MONEY', 'MEASURE').

    :param subjclass: the class of the subject Named Entity.
    :type subjclass: str
    :param objclass: the class of the object Named Entity.
    :type objclass: str
    :param doc: input document
    :type doc: ieer document or a list of chunk trees
    :param corpus: name of the corpus to take as input; possible values are
        'ieer' and 'conll2002'
    :type corpus: str
    :param pattern: a regular expression for filtering the fillers of
        retrieved triples.
    :type pattern: SRE_Pattern
    :param window: filters out fillers which exceed this threshold
    :type window: int
    :return: see ``mk_reldicts``
    :rtype: list(defaultdict)
    """
    ....

So if you pass the corpus parameter as 'ieer', the nltk.sem.extract_rels function expects the doc parameter to be an IEERDocument object. You should pass corpus as 'ace', or just don't pass it (the default is 'ace'). In that case it expects a list of chunk trees, which is what you want. I modified the code as below:

import nltk
import re
from nltk.sem import extract_rels,rtuple

#billgatesbio from http://www.reuters.com/finance/stocks/officerProfile?symbol=MSFT.O&officerId=28066
with open('billgatesbio.txt', 'r') as f:
    sample = f.read().decode('utf-8')

sentences = nltk.sent_tokenize(sample)
tokenized_sentences = [nltk.word_tokenize(sentence) for sentence in sentences]
tagged_sentences = [nltk.pos_tag(sentence) for sentence in tokenized_sentences]

# here I changed the regex, and below I swapped the subject and object classes
OF = re.compile(r'.*\bof\b.*')

for i, sent in enumerate(tagged_sentences):
    sent = nltk.ne_chunk(sent) # ne_chunk method expects one tagged sentence
    rels = extract_rels('PER', 'ORG', sent, corpus='ace', pattern=OF, window=7) # extract_rels method expects one chunked sentence
    for rel in rels:
        print('{0:<5}{1}'.format(i, rtuple(rel)))

And it gives the result:

[PER: u'Chairman/NNP'] u'and/CC Chief/NNP Executive/NNP Officer/NNP of/IN the/DT' [ORG: u'Company/NNP']
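One caveat from me rather than from the answer itself: sample = f.read().decode('utf-8') only works on Python 2, where read() returns undecoded bytes. On Python 3, open() already returns decoded text, so the equivalent read would be:

# Python 3: let open() handle the decoding instead of calling .decode() on a str
with open('billgatesbio.txt', 'r', encoding='utf-8') as f:
    sample = f.read()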

Upvotes: 5

bdk

Reputation: 4839

It looks like, to be a "Parsed Doc," an object needs to have a headline member and a text member, both of which are lists of tokens, where some of the tokens are marked up as trees. For example, this (hacky) example works:

import nltk
import re

IN = re.compile(r'.*\bin\b(?!\b.+ing)')

class doc():
  pass

doc.headline=['foo']
doc.text=[nltk.Tree('ORGANIZATION', ['WHYY']), 'in', nltk.Tree('LOCATION',['Philadelphia']), '.', 'Ms.', nltk.Tree('PERSON', ['Gross']), ',']

for rel in  nltk.sem.extract_rels('ORG','LOC',doc,corpus='ieer',pattern=IN):
   print nltk.sem.relextract.show_raw_rtuple(rel)

When run this provides the output:

[ORG: 'WHYY'] 'in' [LOC: 'Philadelphia']

Obviously you wouldn't actually code it like this, but it provides a working example of the data format expected by extract_rels; you just need to figure out how to preprocess your data into that format.
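Building on that, here is a minimal sketch (my assumption of how the preprocessing could look, written against NLTK 3 names such as Tree.label() and relextract.rtuple; the chunked_to_doc helper and FakeDoc class are hypothetical) that massages nltk.ne_chunk output into the headline/text shape shown above. Note that ne_chunk often emits labels like GPE that are not in the IEER label set, so the filter classes may need adjusting:

import nltk
import re

IN = re.compile(r'.*\bin\b(?!\b.+ing)')

class FakeDoc:
    # bare container exposing the headline/text members that extract_rels reads
    pass

def chunked_to_doc(chunked_sentence):
    # Flatten an ne_chunk tree: plain (word, tag) leaves become word strings,
    # named-entity subtrees are kept as Trees over their words.
    doc = FakeDoc()
    doc.headline = []
    doc.text = []
    for node in chunked_sentence:
        if isinstance(node, nltk.Tree):
            doc.text.append(nltk.Tree(node.label(), [word for word, tag in node.leaves()]))
        else:
            word, tag = node
            doc.text.append(word)
    return doc

tagged = nltk.pos_tag(nltk.word_tokenize("WHYY in Philadelphia interviewed Ms. Gross ."))
doc = chunked_to_doc(nltk.ne_chunk(tagged))

# whether anything prints depends on the labels ne_chunk assigns to these entities
for rel in nltk.sem.relextract.extract_rels('ORG', 'LOC', doc, corpus='ieer', pattern=IN):
    print(nltk.sem.relextract.rtuple(rel))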

Upvotes: 6
