Reputation: 359
The goal is to scrape multiple tweets from Twitter — their text, authors, likes, etc. For a single tweet it works perfectly, but I somehow cannot find a way to do the same for multiple different tweets.
I have already set up the scraping for an individual tweet in R; the code is pasted below. However, I cannot get it to work for multiple pages.
library(rvest)
library(dplyr)   # for the pipe and, later in the answer, bind_rows()

site <- "https://twitter.com/btspavedyou/status/1146055736130019334"
page <- read_html(site)

# Author handles, stripped down to the part after "@"
handles <- page %>%
  html_nodes(".js-action-profile") %>%
  html_text() %>%
  sub(".*@", "", .) %>%
  print()

# Tweet text
text_new <- page %>%
  html_nodes("p.TweetTextSize") %>%
  html_text() %>%
  print()

# Timestamps
time <- page %>%
  html_nodes("._timestamp") %>%
  html_text() %>%
  print()

all_data_tweet <- data.frame(
  page   = site,
  author = handles,
  text   = text_new,
  time   = time
)
all_data_tweet
Now, when trying the same with the following ten pages, it does not work (I have tried for loops and apply in combination with functions).
multiple_pages <- c(
  "https://twitter.com/Swiftandoned/status/1146494919344717824",
  "https://twitter.com/Swiftandoned/status/1146149790016688128",
  "https://twitter.com/baylee_corbello/status/1146494887875022854",
  "https://twitter.com/angiegon00/status/1146494850486820864",
  "https://twitter.com/gallica_/status/1146494826289999872",
  "https://twitter.com/RomuHDV/status/1146494814604673029",
  "https://twitter.com/mathebula_boity/status/1146494779666178049",
  "https://twitter.com/mathebula_boity/status/1146487751774285825",
  "https://twitter.com/mathebula_boity/status/1146494417697681408",
  "https://twitter.com/mathebula_boity/status/1146494307324575744"
)
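For illustration, a loop of roughly this shape (a sketch, not my exact code) runs per page but leaves me with a list of per-tweet data frames that I do not know how to combine cleanly:
results <- list()
for (i in seq_along(multiple_pages)) {
  page <- read_html(multiple_pages[i])
  handles  <- page %>% html_nodes(".js-action-profile") %>% html_text() %>% sub(".*@", "", .)
  text_new <- page %>% html_nodes("p.TweetTextSize") %>% html_text()
  time     <- page %>% html_nodes("._timestamp") %>% html_text()
  results[[i]] <- data.frame(page = multiple_pages[i], author = handles, text = text_new, time = time)
}
# How do I combine `results` into one data frame like the single-tweet output below?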
The result should be that what I already get for one tweet is produced for all of these tweets:
page author text time
1 https://twitter.com/btspavedyou/status/1146055736130019334 KPOP_predict18 Sehun and Jisoo together in a drama, 2020. 2. Juli
2 https://twitter.com/btspavedyou/status/1146055736130019334 na1_27 Well i guess there is nothing about iKON AND HANBIN 2. Juli
3 https://twitter.com/btspavedyou/status/1146055736130019334 btspavedyou I'm sure he is 'okay' 2. Juli
4 https://twitter.com/btspavedyou/status/1146055736130019334 na1_27 I really hope so, thank you 2. Juli
Upvotes: 0
Views: 119
Reputation: 565
There are several ways to solve this, but with only minor modifications I'd use bind_rows from dplyr:
library(rvest)
library(dplyr)

# Wrap the single-tweet code in a function that takes a URL
# and returns one data frame per tweet page.
readTweet <- function(url) {
  page <- read_html(url)

  handles <- page %>%
    html_nodes(".js-action-profile") %>%
    html_text() %>%
    sub(".*@", "", .)

  text_new <- page %>%
    html_nodes("p.TweetTextSize") %>%
    html_text()

  time <- page %>%
    html_nodes("._timestamp") %>%
    html_text()

  all_data_tweet <- data.frame(
    page   = url,
    author = handles,
    text   = text_new,
    time   = time
  )
  return(all_data_tweet)
}

# list_of_urls is your vector of tweet URLs, e.g. multiple_pages from the question
df <- bind_rows(
  lapply(list_of_urls, readTweet)
)
You don't need to create an .id column, since you already have the page URL as a column.
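For completeness, if you did want an index for each source page, bind_rows() accepts an .id argument (a small sketch; the column name "source" is arbitrary):
# Adds a "source" column holding the list index ("1", "2", ...) of each input data frame
df <- bind_rows(
  lapply(list_of_urls, readTweet),
  .id = "source"
)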
Upvotes: 1