A.joh

Reputation: 89

ggplot(): Error in FUN(X[[i]], ...) : object not found

I wish to do a graph comparing the word use of Trump with Hillary Clinton and Obama in R. For this purpose I have followed the approach from this site: https://www.tidytextmining.com/tidytext.html#word-frequencies

ggplot(frequency, aes(x = proportion, y = `Donald Trump`, color = abs(`Donald Trump` - proportion))) +
  geom_abline(color = "gray40", lty = 2) +
  geom_jitter(alpha = 0.1, size = 2.5, width = 0.3, height = 0.3) +
  geom_text(aes(label = word), check_overlap = TRUE, vjust = 1.5) +
  scale_x_log10(labels = percent_format()) +
  scale_y_log10(labels = percent_format()) +
  scale_color_gradient(limits = c(0, 0.001), low = "darkslategray4", high = "gray75") +
  facet_wrap(~author, ncol = 2) +
  theme(legend.position="none") +
  labs(y = "Donald Trump", x = NULL)

My data frame looks like this: [screenshot of the data frame omitted]. However, I keep getting the error

Error in FUN(X[[i]], ...) : object 'Donald Trump' not found

It seems like the error is related to ggplot(). I've tried changing the call in several ways, but I simply can't find the mistake. Hope you can help me out. Thanks in advance!

Upvotes: 2

Views: 24206

Answers (1)

m.evans

Reputation: 697

I think you just need to spread and gather the data again, as is done in the example you linked. Also note that a reprex would be helpful here, so answerers don't have to build one from the tutorial's example, which may not match your actual data.

# creating fake data
library(gutenbergr)
library(tidytext)
library(dplyr)
library(janeaustenr)
library(stringr)   # str_detect() and str_extract() below come from stringr, not stringi
library(tidyr)
library(ggplot2)
library(scales)    # for percent_format()

hgwells <- gutenberg_download(c(35, 36, 5230, 159))

tidy_hgwells <- hgwells %>%
  unnest_tokens(word, text) %>%
  anti_join(stop_words)


bronte <- gutenberg_download(c(1260, 768, 969, 9182, 767))

tidy_bronte <- bronte %>%
  unnest_tokens(word, text) %>%
  anti_join(stop_words)

original_books <- austen_books() %>%
  group_by(book) %>%
  mutate(linenumber = row_number(),
         chapter = cumsum(str_detect(text, regex("^chapter [\\divxlc]",
                                                 ignore_case = TRUE)))) %>%
  ungroup()

tidy_books <- original_books %>%
  unnest_tokens(word, text)

tidy_books <- tidy_books %>%
  anti_join(stop_words)

frequency <- bind_rows(mutate(tidy_bronte, author = "Hillary Clinton"),
                       mutate(tidy_hgwells, author = "Barack Obama"), 
                       mutate(tidy_books, author = "Donald Trump")) %>% 
  mutate(word = str_extract(word, "[a-z']+")) %>%
  count(author, word) %>%
  group_by(author) %>%
  mutate(proportion = n / sum(n)) %>% 
  select(-n) %>% 
  spread(author, proportion) %>% 
  gather(author, proportion, `Hillary Clinton`,`Barack Obama`)

The last two lines of the piped code are what reshape your data frame. spread() turns each author's proportions into its own column, and gather() then pulls the Clinton and Obama columns back into long form, so the result keeps a single `Donald Trump` column of proportions alongside an author/proportion pair for the other two.
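To see what those two verbs do, here is a minimal toy version of the same reshape (the data frame and author names are made up for illustration):

```r
library(tidyr)

# Toy long-format data: one proportion per (author, word) pair
toy <- data.frame(
  author = c("A", "A", "B", "B"),
  word = c("hello", "world", "hello", "world"),
  proportion = c(0.6, 0.4, 0.3, 0.7)
)

# spread(): one column of proportions per author
wide <- spread(toy, author, proportion)

# gather() only author B back into long form; author A keeps
# its own wide column, just like `Donald Trump` above
long <- gather(wide, author, proportion, B)
```

After the gather(), `long` has the columns word, A, author, proportion, which matches the shape shown in the head(frequency) output below.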

Here is an example of how your dataframe should look:

> head(frequency)
# A tibble: 6 x 4
  word       `Donald Trump` author          proportion
  <chr>               <dbl> <chr>                <dbl>
1 a              0.00000919 Hillary Clinton 0.0000319 
2 aback         NA          Hillary Clinton 0.00000398
3 abaht         NA          Hillary Clinton 0.00000398
4 abandon       NA          Hillary Clinton 0.0000319 
5 abandoned      0.00000460 Hillary Clinton 0.0000916 
6 abandoning    NA          Hillary Clinton 0.00000398

This will now plot fine.

ggplot(frequency, aes(x = proportion, y = `Donald Trump`, color = abs(`Donald Trump` - proportion))) +
  geom_abline(color = "gray40", lty = 2) +
  geom_jitter(alpha = 0.1, size = 2.5, width = 0.3, height = 0.3) +
  geom_text(aes(label = word), check_overlap = TRUE, vjust = 1.5) +
  scale_x_log10(labels = percent_format()) +
  scale_y_log10(labels = percent_format()) +
  scale_color_gradient(limits = c(0, 0.001), low = "darkslategray4", high = "gray75") +
  facet_wrap(~author, ncol = 2) +
  theme(legend.position="none") +
  labs(y = "Donald Trump", x = NULL)
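As a quick sanity check before plotting, you can confirm that the backticked column name in aes() exists exactly as spelled, spaces included. A small sketch with a stand-in data frame (in your case you would check the real frequency table):

```r
# Stand-in for the real frequency table; check.names = FALSE keeps
# the space in the column name instead of converting it to a dot
df <- data.frame(check.names = FALSE,
                 word = c("a", "abandoned"),
                 `Donald Trump` = c(0.00000919, 0.00000460))

names(df)                        # the exact column names ggplot() will see
"Donald Trump" %in% names(df)    # must be TRUE for the aes() call to work
```

If that check returns FALSE, the error `object 'Donald Trump' not found` is exactly what ggplot() will throw.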

Upvotes: 3
