Reputation: 81
Using NLTK's StanfordParser, I can parse a sentence like this:
import os
from nltk.parse import stanford

# Use raw strings for Windows paths so backslashes aren't treated as escapes
os.environ['STANFORD_PARSER'] = r'C:\jars'
os.environ['STANFORD_MODELS'] = r'C:\jars'
os.environ['JAVAHOME'] = r'C:\ProgramData\Oracle\Java\javapath'

parser = stanford.StanfordParser(model_path=r'C:\jars\englishPCFG.ser.gz')
sentences = parser.raw_parse_sents(("bring me a red ball",))
for sentence in sentences:
    print(sentence)
The result is:
Tree('ROOT', [Tree('S', [Tree('VP', [Tree('VB', ['Bring']),
Tree('NP', [Tree('DT', ['a']), Tree('NN', ['red'])]), Tree('NP',
[Tree('NN', ['ball'])])]), Tree('.', ['.'])])])
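(As an aside, the printed result is just the repr of an NLTK Tree, so you can rebuild it from its bracketed string form and inspect or draw it; the bracketing below is copied from the output above.)

```python
from nltk import Tree

# Rebuild the constituency tree shown above from its bracketed string form
tree = Tree.fromstring(
    "(ROOT (S (VP (VB Bring) (NP (DT a) (NN red)) (NP (NN ball))) (. .)))")

print(tree.label())   # top-level label: ROOT
print(tree.leaves())  # the sentence tokens
tree.pretty_print()   # ASCII drawing of the tree
```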
How can I use the Stanford parser to get typed dependencies in addition to the tree above?
Upvotes: 8
Views: 2064
Reputation: 2529
NLTK's StanfordParser module doesn't (currently) wrap the tree-to-Stanford-Dependencies conversion code. You can use my library, PyStanfordDependencies, which wraps the dependency converter.
If nltk_tree is the sentence variable from the question's code snippet, then this works:
#!/usr/bin/python3
import StanfordDependencies

# Use str() to convert the NLTK tree to Penn Treebank format
penn_treebank_tree = str(nltk_tree)

sd = StanfordDependencies.get_instance(jar_filename='point to Stanford Parser JAR file')
converted_tree = sd.convert_tree(penn_treebank_tree)

# Print typed dependencies: deprel(governor-index,dependent-index)
for node in converted_tree:
    print('{}({}-{},{}-{})'.format(
        node.deprel,
        converted_tree[node.head - 1].form if node.head != 0 else 'ROOT',
        node.head,
        node.form,
        node.index))
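To see what the printing loop produces without running Java, here is a minimal sketch with hand-built stand-ins for the converter's CoNLL-style Token objects (keeping only the fields the loop uses). The dependency analysis below is hypothetical, not real converter output:

```python
from collections import namedtuple

# Hypothetical stand-in for the converter's Token objects
Token = namedtuple('Token', ['index', 'form', 'head', 'deprel'])

# Hand-built example analysis (NOT real converter output)
converted_tree = [
    Token(1, 'Bring', 0, 'root'),
    Token(2, 'me', 1, 'iobj'),
    Token(3, 'a', 5, 'det'),
    Token(4, 'red', 5, 'amod'),
    Token(5, 'ball', 1, 'dobj'),
]

# Same printing loop as above: deprel(governor-index,dependent-index)
for node in converted_tree:
    print('{}({}-{},{}-{})'.format(
        node.deprel,
        converted_tree[node.head - 1].form if node.head != 0 else 'ROOT',
        node.head,
        node.form,
        node.index))
# First line printed: root(ROOT-0,Bring-1)
```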
Upvotes: 6