I have a file with 5 reviews, as follows:
text <- c("Orange is the new black", " I love smoking Marlboro black",
"I love oranges before they go black", "My diary is black, so is my hair",
"Is it okay to drink and smoke black")
What I want to do is build a 5 x 5 matrix that tells me which words are common between each pair of reviews.
The solution should look something like a table/matrix with 5 rows and 5 columns (25 elements), whose diagonal elements are 0.
I have a basic idea of text mining, but how should I approach this particular task?
This is just a test run; I actually have to build a matrix with 100 rows and 100 columns.
Maybe something like this:
library(purrr)                                       # for map2_chr()/map2_int()
all_words <- stringr::str_extract_all(text, "\\w+")  # split each review into its words
I <- expand.grid(seq_along(text), seq_along(text))   # every pair of review indices
# for each pair, collapse the shared words into a single "|"-separated string
L <- map2_chr(I$Var1, I$Var2, ~paste(intersect(all_words[[.x]], all_words[[.y]]), collapse="|"))
mat <- matrix(L, nrow=5)
diag(mat) <- NA
mat
# [,1] [,2] [,3] [,4] [,5]
# [1,] NA "black" "black" "is|black" "black"
# [2,] "black" NA "I|love|black" "black" "black"
# [3,] "black" "I|love|black" NA "black" "black"
# [4,] "is|black" "black" "black" NA "black"
# [5,] "black" "black" "black" "black" NA
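One thing to note about the output above: intersect() is case-sensitive, so the capital "Is" in the fifth review does not match the lowercase "is" in the first and fourth. If you want those to count as the same word, one option (a minimal tweak, still relying on stringr) is to lowercase the reviews before extracting the words:

all_words <- stringr::str_extract_all(tolower(text), "\\w+")  # "Is" and "is" now match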
For counts of the common words, do:
# count the shared words for each pair instead of listing them
L <- map2_int(I$Var1, I$Var2, ~length(intersect(all_words[[.x]], all_words[[.y]])))
mat <- matrix(L, nrow=5)
diag(mat) <- NA
mat
# [,1] [,2] [,3] [,4] [,5]
# [1,] NA 1 1 2 1
# [2,] 1 NA 3 1 1
# [3,] 1 3 NA 1 1
# [4,] 2 1 1 NA 1
# [5,] 1 1 1 1 NA
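For the real 100 x 100 case, the matrix is symmetric, so you only need to compute each pair once and mirror it, and you can take the dimensions from the data instead of hard-coding nrow=5. A base-R sketch along those lines (just an illustration of the same idea, not tuned for your full data):

n <- length(text)                                   # works for 5 reviews or 100
all_words <- stringr::str_extract_all(text, "\\w+")
mat <- matrix(NA_integer_, nrow = n, ncol = n)      # diagonal stays NA
for (i in seq_len(n - 1)) {
  for (j in seq(i + 1, n)) {
    k <- length(intersect(all_words[[i]], all_words[[j]]))
    mat[i, j] <- k                                  # fill both halves of the symmetric matrix
    mat[j, i] <- k
  }
}
mat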