Reputation: 199
I am using the nrc, bing and afinn lexicons for sentiment analysis in R.
Now I would like to remove some specific words from these lexicons, but I don't know how to do that, since the lexicons are not saved in my environment.
My code looks like this (for nrc as an example):
MyTextFile %>%
  inner_join(get_sentiments("nrc")) %>%
  count(sentiment, sort = TRUE)
Upvotes: 0
Views: 941
Reputation: 714
If you can make a data frame of the words you'd like to remove, you can exclude them using an anti_join:
word_list <- c("words", "to", "remove")
# the column must be named "word" so it matches the lexicon's word column
words_to_remove <- data.frame(word = word_list)

MyTextFile %>%
  inner_join(get_sentiments("nrc")) %>%
  anti_join(words_to_remove, by = "word") %>%
  count(sentiment, sort = TRUE)
Upvotes: 0
Reputation: 4294
Here are two ways to do this (there are undoubtedly more). Note first that there are 13,901 word-sentiment pairs (rows) in the nrc lexicon:
> library(tidytext)
> library(dplyr)
> sentiments <- get_sentiments("nrc")
> sentiments
# A tibble: 13,901 x 2
word sentiment
<chr> <chr>
1 abacus trust
2 abandon fear
3 abandon negative
4 abandon sadness
5 abandoned anger
6 abandoned fear
... and so on
You can filter out all words in a particular sentiment category, which leaves 12,425 rows:
> sentiments <- get_sentiments("nrc") %>% filter(sentiment!="fear")
> sentiments
# A tibble: 12,425 x 2
word sentiment
<chr> <chr>
1 abacus trust
2 abandon negative
3 abandon sadness
4 abandoned anger
5 abandoned negative
6 abandoned sadness
Or you can create your own list of dropwords and remove them from the lexicon, which leaves 13,884 rows:
> dropwords <- c("abandon","abandoned","abandonment","abduction","aberrant")
> sentiments <- get_sentiments("nrc") %>% filter(!word %in% dropwords)
> sentiments
# A tibble: 13,884 x 2
word sentiment
<chr> <chr>
1 abacus trust
2 abba positive
3 abbot trust
4 aberration disgust
5 aberration negative
6 abhor anger
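You can also combine the two approaches in a single filter call, dropping a whole category and your own dropwords at the same time (a small sketch reusing the dropwords vector from above):
> sentiments <- get_sentiments("nrc") %>%
+     filter(sentiment != "fear", !word %in% dropwords)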
Then you would just do the sentiment analysis using the sentiments object you have created:
> library(gutenbergr)
> hgwells <- gutenberg_download(35) # loads "The Time Machine"
> hgwells %>% unnest_tokens(word, text) %>%
+     inner_join(sentiments) %>% count(word, sort = TRUE)
Joining, by = "word"
# A tibble: 1,077 x 2
word n
<chr> <int>
1 white 236
2 feeling 200
3 time 200
4 sun 145
5 found 132
6 darkness 108
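If you prefer counts per sentiment category rather than per word, as in the original question, just count on sentiment instead:
> hgwells %>% unnest_tokens(word, text) %>%
+     inner_join(sentiments) %>% count(sentiment, sort = TRUE)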
Hope this helps somewhat.
Upvotes: 1