Neep Hazarika

Reputation: 45

How to remove duplicates from a corpus using the tm package in R

I am trying to remove duplicates from a corpus using the tm package in R. For example, to remove ampersands, I use the following R statements:

removeAmp <- function(x) gsub("&amp;", "", x)

myCorpus <- tm_map(myCorpus, removeAmp)

I then try to remove duplicates using the following:

removeDup <- function(x) unique(x)

myCorpus <- tm_map(myCorpus, removeDup)

I get the error message:

Error in match.fun(FUN) : argument "FUN" is missing, with no default

I have also tried

removeDup <- function(x) as.list(unique(unlist(x)))

but still get an error. Any help would be very much appreciated.
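For reference, newer versions of tm expect custom string functions to be wrapped in content_transformer() before passing them to tm_map. A minimal sketch of the ampersand step under that assumption, using toy data rather than my real corpus:

library(tm)

# toy data, for illustration only
docs <- c("cats &amp; dogs", "cats &amp; dogs", "birds")
myCorpus <- VCorpus(VectorSource(docs))

# wrap the custom function so tm_map applies it to each document's content
removeAmp <- content_transformer(function(x) gsub("&amp;", "", x))
myCorpus <- tm_map(myCorpus, removeAmp)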

Upvotes: 0

Views: 3036

Answers (2)

tobiokanobi

Reputation: 21

This worked for me:

library(tm)

clean.corpus <- function(corpus, use.stopwords = "english") {
    # use.stopwords: stopword language passed to tm::stopwords(), e.g. "english"
    # remove "mc.cores=1" on Windows (it is only needed on Macintosh)
    removeURL <- function(x) gsub("http[[:alnum:]]*", "", x)
    myStopwords <- c(stopwords(use.stopwords), "twitter", "tweets", "tweet", "tweeting", "retweet", "followme", "account", "available", "via")
    myStopwords <- c(myStopwords, "melinafollowme", "voten", "samier", "zsm", "hpa", "geraus", "vote", "gevotet", "dagibee", "berlin")
    myStopwords <- c(myStopwords, "mal", "dass", "für", "votesami", "votedagi", "vorhersage", "\u2728\u2728\u2728\u2728\u2728", "\u2728\u2728\u2728")

    cleaned.corpus <- tm_map(corpus, stripWhitespace, lazy = TRUE)
    cleaned.corpus <- tm_map(cleaned.corpus, content_transformer(tolower), mc.cores = 1)
    cleaned.corpus <- tm_map(cleaned.corpus, content_transformer(function(x) iconv(x, to = "UTF-8-MAC", sub = "byte")), lazy = TRUE)
    cleaned.corpus <- tm_map(cleaned.corpus, removePunctuation, lazy = TRUE)
    cleaned.corpus <- tm_map(cleaned.corpus, removeNumbers, lazy = TRUE)
    cleaned.corpus <- tm_map(cleaned.corpus, removeURL)
    cleaned.corpus <- tm_map(cleaned.corpus, function(x) removeWords(x, myStopwords), mc.cores = 1)
    cleaned.corpus <- tm_map(cleaned.corpus, function(x) removeWords(x, stopwords(use.stopwords)), mc.cores = 1)

    # drop exact duplicates
    removeDup <- function(x) unique(x)
    cleaned.corpus <- tm_map(cleaned.corpus, removeDup, mc.cores = 1)

    cleaned.corpus <- tm_map(cleaned.corpus, PlainTextDocument)
    return(cleaned.corpus)
}
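To call it, something like the following should work (the corpus name and the "english" stopword language here are just placeholders for illustration, assuming a tm version whose tm_map accepts the lazy and mc.cores arguments):

library(tm)

# build a small corpus from raw text; tweet.corpus is a placeholder name
raw.tweets <- c("Follow me on twitter!", "Follow me on twitter!", "Vote now")
tweet.corpus <- VCorpus(VectorSource(raw.tweets))

cleaned <- clean.corpus(tweet.corpus, use.stopwords = "english")
inspect(cleaned)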

Upvotes: 0

tobiokanobi
tobiokanobi

Reputation: 21

Removing duplicate entries can be done with the following code.

First, convert the previously cleaned corpus back to a data frame.

df.tweets <- data.frame(text = unlist(sapply(tweet.corpus, `[`, "content")), stringsAsFactors = FALSE)

Second, remove duplicate entries from the data frame.

tweets.out.unique <- unique(df.tweets)

Third, convert it back to a corpus if needed (assuming the data frame has a single column).

tweet.corpus.clean <- Corpus(DataframeSource(tweets.out.unique[1]))

I don't know if this is more elegant, but it is quite easy!
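If you want this as a reusable step, the three lines can be wrapped in a small helper. This is only a sketch of the same approach (the function name remove.duplicates is made up here, and it assumes the Corpus/DataframeSource behaviour above, where a one-column data frame is accepted):

library(tm)

remove.duplicates <- function(corpus) {
    # flatten the corpus to its text, drop duplicate rows, then rebuild the corpus
    df <- data.frame(text = unlist(sapply(corpus, `[`, "content")),
                     stringsAsFactors = FALSE)
    df.unique <- unique(df)
    Corpus(DataframeSource(df.unique[1]))
}

tweet.corpus.clean <- remove.duplicates(tweet.corpus)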

Upvotes: 1
