Hai

Reputation: 193

Natural logic (natlog) in Stanford CoreNLP

How does one use the natural logic component of Stanford CoreNLP?

I am using CoreNLP 3.9.1 and passed natlog as an annotator on the command line, but I don't see any natlog results in the output, i.e. OperatorAnnotation and PolarityAnnotation, according to this link. Does that have anything to do with the outputFormat? I've tried xml and json, but neither contains any natural logic output. The other annotations (tokenization, dependency parse) are there, though.

Here is my command:

./corenlp.sh -annotators tokenize,ssplit,pos,lemma,depparse,natlog -file natlog.test -outputFormat xml

Thanks in advance.

Upvotes: 0

Views: 217

Answers (2)

Hai

Reputation: 193

This code snippet works for me:

import edu.stanford.nlp.ling.CoreLabel;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import edu.stanford.nlp.util.CoreMap;
import edu.stanford.nlp.ling.CoreAnnotations.NamedEntityTagAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.PartOfSpeechAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.SentencesAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.TextAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.TokensAnnotation;
import edu.stanford.nlp.pipeline.Annotation;
// this is the polarity annotation!
import edu.stanford.nlp.naturalli.NaturalLogicAnnotations.PolarityDirectionAnnotation;
// not the one below!
// import edu.stanford.nlp.ling.CoreAnnotations.PolarityAnnotation;
import edu.stanford.nlp.util.PropertiesUtils;

import java.io.*;
import java.util.*;

public class test {

    public static void main(String[] args) throws FileNotFoundException, UnsupportedEncodingException {

        // code from: https://stanfordnlp.github.io/CoreNLP/api.html#generating-annotations
        StanfordCoreNLP pipeline = new StanfordCoreNLP(
                PropertiesUtils.asProperties(
                        // note: natlog is added at the end of the annotator list
                        "annotators", "tokenize,ssplit,pos,lemma,parse,depparse,natlog",
                        "ssplit.eolonly", "true",
                        "tokenize.language", "en"));

        // read some text in the text variable
        String text = "Every dog sees some cat";

        Annotation document = new Annotation(text);

        // run all Annotators on this text
        pipeline.annotate(document);

        // these are all the sentences in this document
        // a CoreMap is essentially a Map that uses class objects as keys and has values with custom types
        List<CoreMap> sentences = document.get(SentencesAnnotation.class);

        for(CoreMap sentence: sentences) {
            // traversing the words in the current sentence
            // a CoreLabel is a CoreMap with additional token-specific methods
            for (CoreLabel token: sentence.get(TokensAnnotation.class)) {
                // this is the text of the token
                String word = token.get(TextAnnotation.class);
                // this is the POS tag of the token
                String pos = token.get(PartOfSpeechAnnotation.class);
                // this would be the NER label of the token, but it stays null here
                // because the ner annotator is not in the pipeline above
                String ne = token.get(NamedEntityTagAnnotation.class);
                // this is the polarity direction of the token (e.g. "up" or "down")
                String pol = token.get(PolarityDirectionAnnotation.class);
                System.out.print(word + " [" + pol + "] ");
            }
            System.out.println();
        }
    }
}

The output will be: Every [up] dog [down] sees [up] some [up] cat [up]
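
The question also asks about OperatorAnnotation. That one lives in the same NaturalLogicAnnotations class and is read off the tokens in the same way. A minimal sketch, assuming it is dropped into the token loop above and that natlog sets the OperatorSpec only on the tokens of a detected quantifier (other tokens come back null):

import edu.stanford.nlp.naturalli.NaturalLogicAnnotations.OperatorAnnotation;
import edu.stanford.nlp.naturalli.OperatorSpec;

// inside the token loop above:
OperatorSpec op = token.get(OperatorAnnotation.class);
if (op != null) {
    // only quantifier tokens such as "Every" and "some" carry an OperatorSpec
    System.out.println(word + " is part of a natlog operator: " + op);
}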

Upvotes: 0

StanfordNLPHelp

Reputation: 8739

I don't think any of the output options show the natlog annotations. Natlog is designed more for the case where you have a Java system and are working with the Annotations themselves in Java code. You should be able to see them by looking at the CoreLabel for each token.
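
A minimal sketch of what that looks like, assuming the pipeline and annotated document from the other answer, and using the PolarityAnnotation from edu.stanford.nlp.naturalli (not the one in CoreAnnotations):

import edu.stanford.nlp.ling.CoreLabel;
import edu.stanford.nlp.ling.CoreAnnotations.SentencesAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.TokensAnnotation;
import edu.stanford.nlp.naturalli.NaturalLogicAnnotations.PolarityAnnotation;
import edu.stanford.nlp.naturalli.Polarity;
import edu.stanford.nlp.util.CoreMap;

// `document` is an Annotation produced by a pipeline that includes natlog
for (CoreMap sentence : document.get(SentencesAnnotation.class)) {
    for (CoreLabel token : sentence.get(TokensAnnotation.class)) {
        // the raw Polarity object; printing it gives a direction such as "up" or "down"
        Polarity polarity = token.get(PolarityAnnotation.class);
        System.out.println(token.word() + "\t" + polarity);
    }
}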

Upvotes: 1
