Reputation: 345
I am trying to extract the data that appears between the div tags from this site:
http://bigbashboard.com/rankings/bbl/batsmen
They appear on the left-hand side like this:
Batsmen
1 Matthew Wade 125
2 Marcus Stoinis 120
3 D'Arcy Short 116
I also need the data that appears in the table on the right; I can get that with the code below.
I have a CSV file with the dates, and my code cycles through them and binds the results together.
How can I extract the data between the div tags and then bind it together with the other data so that I have one data frame that looks like this:
Rank Name Points Dates I R HS Ave SR 4s 6s 100s 50s
1 Matthew Wade 125 22 Dec 2018 - 30 Jan 2020 23 943 130 44.90 155.10 78 36 1 9
2 Marcus Stoinis 120 21 Dec 2018 - 08 Feb 2020 30 1238 147 53.83 133.98 111 39 1 10
3 D'Arcy Short 116 22 Dec 2018 - 30 Jan 2020 24 994 103 49.70 137.10 93 36 1 9
The above is just a snapshot of the first three records, but I need all of the records that appear on each page.
I would also like to add the date from the page address to the table as the first column, so when the page address is, for example:
http://bigbashboard.com/rankings/bbl/batsmen/2018/01/24
I would like to add the date of 24/1/2018 to the table like so:
Date Rank Name Points Dates I R HS Ave SR 4s 6s 100s 50s
24/01/18 1 Chris Lynn 167 21 Dec 2016 - 05 Jan 2018 9 436 98 87.20 173.02 33 32 0 4
24/01/18 2 D'Arcy Short 166 23 Dec 2016 - 20 Jan 2018 17 702 122 43.88 152.28 70 31 1 5
24/01/18 4 Alex Carey 102 18 Jan 2017 - 22 Jan 2018 10 400 100 57.14 138.89 39 12 1 2
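I think I can pull the date out of the page address with something like this (just a rough attempt using the example address above), but I don't know how to attach it to the rest of the data:

url <- "http://bigbashboard.com/rankings/bbl/batsmen/2018/01/24"
parts <- strsplit(url, "/")[[1]]
ymd <- tail(parts, 3)                                        # "2018" "01" "24"
date <- format(as.Date(paste(ymd, collapse = "-")), "%d/%m/%y")
date                                                         # "24/01/18"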
My code:
library(rvest)

# load csv file with the dates
df <- read.csv('G:/dates.csv')
year  <- df[[2]]
month <- df[[3]]
day   <- df[[4]]

# add leading zeros to dates
month <- stringr::str_pad(month, 2, side = "left", pad = "0")
day   <- stringr::str_pad(day, 2, side = "left", pad = "0")

site <- paste('http://bigbashboard.com/rankings/bbl/batsmen', year, month, day, sep = "/")

# get contents from the first table, which appears on the right of the page
dfList1 <- lapply(site, function(i) {
  webpage <- read_html(i)
  draft_table <- html_nodes(webpage, 'table')
  draft <- html_table(draft_table)[[1]]
})

# attempt to get contents from the second table, which appears on the left between div tags
dfList2 <- lapply(site, function(i) {
  webpage <- read_html(i)
  draft_table <- html_nodes(webpage, 'div.col w25')
  #draft <- html_table(draft_table)[[1]]
})

# attempt to bind both tables together
finaldf <- do.call(rbind, dfList1, dfList2)
Upvotes: 0
Views: 444
Reputation: 8844
Consider the following workflow instead
library(rvest)
library(xml2)
library(dplyr)
library(furrr)

# Parse the left-hand rankings list (rank, name, points) out of the div.
batsmen <- function(x) {
  x <- html_nodes(x, "div.cf.rankings-page div div ol li a")
  xml_remove(html_nodes(x, "span.rank small, span[class^='pos'] em"))
  score <- html_text(html_nodes(x, "span.rank"))
  rank  <- html_text(html_nodes(x, "span[class^='pos']"), trim = TRUE)
  xml_remove(html_nodes(x, "span"))
  tibble(Rank = rank, Name = html_text(x), Points = score)
}

# Parse the stats table that appears on the right of the page.
stats_table <- function(x) {
  as_tibble(html_table(x)[[1L]])
}

# Read one rankings page: take the date from the end of the URL, then combine both parts.
read_rankings <- function(url) {
  ymd <- as.Date(paste0(tail(strsplit(url, "/")[[1L]], 3L), collapse = "-"))
  read_html(url) %>% {bind_cols(Date = ymd, batsmen(.), stats_table(.))}
}

# Collect the links to every dated rankings page from the timeline on the main page.
mas_url <- "http://bigbashboard.com/rankings/bbl/batsmen"
timeline <-
  read_html(mas_url) %>%
  html_nodes("div.timeline span a") %>%
  html_attr("href") %>%
  url_absolute(mas_url)

# Use parallel processing for speed.
plan(multiprocess)
future_map_dfr(timeline[1:100], read_rankings)  # I only scrape a few links as a test.
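If you would rather avoid the furrr/future dependency, the same mapping can be done sequentially with purrr; this is just an equivalent sketch (slower, same result), not part of the pipeline above, and the choice of the first 10 links is only an example:

library(purrr)
# Sequential equivalent of future_map_dfr(): map each URL and row-bind the results.
result <- map_dfr(timeline[1:10], read_rankings)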
Output
# A tibble: 9,250 x 14
Date Rank Name Points Dates I R HS Ave SR `4s` `6s` `100s` `50s`
<date> <chr> <chr> <chr> <chr> <int> <int> <int> <dbl> <dbl> <int> <int> <int> <int>
1 2020-02-08 1 Matthew Wade 125 22 Dec 2018 - 30 Jan 2020 23 943 130 44.9 155. 78 36 1 9
2 2020-02-08 2 Marcus Stoinis 120 21 Dec 2018 - 08 Feb 2020 30 1238 147 53.8 134. 111 39 1 10
3 2020-02-08 3 D'Arcy Short 116 22 Dec 2018 - 30 Jan 2020 24 994 103 49.7 137. 93 36 1 9
4 2020-02-08 4 Alex Hales 115 17 Dec 2019 - 06 Feb 2020 17 576 85 38.4 147. 59 23 0 6
5 2020-02-08 5 Aaron Finch 89 07 Jan 2019 - 27 Jan 2020 17 583 109 36.4 130. 41 24 1 4
6 2020-02-08 6 Josh Inglis 87 26 Dec 2018 - 26 Jan 2020 18 517 73 28.7 149. 53 19 0 5
7 2020-02-08 7 Travis Head 87 11 Jan 2019 - 01 Feb 2020 10 291 79 29.1 132. 22 13 0 1
8 2020-02-08 8 Josh Philippe 84 22 Dec 2018 - 08 Feb 2020 31 791 86 34.4 140. 76 23 0 7
9 2020-02-08 9 Shaun Marsh 82 24 Jan 2019 - 21 Jan 2020 15 547 96 39.1 128. 45 19 0 4
10 2020-02-08 10 Chris Lynn 78 19 Dec 2018 - 27 Jan 2020 27 772 94 32.2 137. 64 44 0 6
# ... with 9,240 more rows
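Note that Rank and Points come back as character columns (<chr> above), since they are scraped as text. If you want them numeric, a small follow-up step would be something like this (result is just a name used here for illustration):

result <- future_map_dfr(timeline[1:100], read_rankings)
# Convert the character columns produced by the HTML scrape to integers.
result <- mutate(result, Rank = as.integer(Rank), Points = as.integer(Points))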
The variable timeline looks like this:
> head(timeline)
[1] "http://bigbashboard.com/rankings/bbl/batsmen/2020/02/08" "http://bigbashboard.com/rankings/bbl/batsmen/2020/02/06"
[3] "http://bigbashboard.com/rankings/bbl/batsmen/2020/02/01" "http://bigbashboard.com/rankings/bbl/batsmen/2020/01/31"
[5] "http://bigbashboard.com/rankings/bbl/batsmen/2020/01/30" "http://bigbashboard.com/rankings/bbl/batsmen/2020/01/27"
It contains every rankings page you can get from that website, so you don't need a separate CSV file to store the year, month and day. You can also select just the days you want to scrape, as I did above with timeline[1:100].
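If you only want particular days, you can also subset timeline before mapping by parsing the trailing date out of each link; a small sketch on top of the code above (the cut-off dates are only examples):

# Parse the trailing YYYY/MM/DD from each link into a Date vector.
timeline_dates <- as.Date(sub(".*batsmen/", "", timeline), format = "%Y/%m/%d")
# Keep, say, only the pages that fall inside one season window.
wanted <- timeline[!is.na(timeline_dates) &
                   timeline_dates >= as.Date("2019-12-17") &
                   timeline_dates <= as.Date("2020-02-08")]
future_map_dfr(wanted, read_rankings)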
Upvotes: 1