njmcd

Reputation: 71

R: Web scraping: "XML content does not seem to be XML" error when using htmlParse

I am trying to scrape data across numerous years (each year is a separate web page). My 2019 code works exactly as I want, but I get an error when I try to prepare my 2016 data the same way.

url19 <- 'https://www.pro-football-reference.com/draft/2019-combine.htm'

get_pfr_HTML_file19 <- GET(url19)

combine.parsed19 <- htmlParse(get_pfr_HTML_file19)

page.tables19 <- readHTMLTable(combine.parsed19, stringsAsFactors = FALSE)

data19 <- data.frame(page.tables19[1])

cleanData19 <- data19[!rowSums(data19 == "") > 0, ]

cleanData19 <- filter(cleanData19, cleanData19$combine.Pos == 'CB' | cleanData19$combine.Pos == 'S')

cleanData19 is exactly what I want, but when I run the same steps on the 2016 data, I get the error: XML content does not seem to be XML: ''

url16 <- 'https://www.pro-football-reference.com/draft/2016-combine.htm'

get_pfr_HTML_file16 <- GET(url16)

combine.parsed16 <- htmlParse(get_pfr_HTML_file16)

page.tables16 <- readHTMLTable(combine.parsed16, stringsAsFactors = FALSE)

data16 <- data.frame(page.tables16[1])

cleanData16 <- data16[!rowSums(data16 == "") > 0, ]

cleanData16 <- filter(cleanData16, cleanData16$combine.Pos == 'CB' | cleanData16$combine.Pos == 'S')

The error occurs on the line combine.parsed16 <- htmlParse(get_pfr_HTML_file16).
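For completeness, one variant worth checking is to extract the response body as text before parsing, and to verify the request actually succeeded. This is a sketch, not a confirmed fix for this particular page; httr::content() and httr::stop_for_status() are standard httr calls:

```r
library(httr)
library(XML)

url16 <- 'https://www.pro-football-reference.com/draft/2016-combine.htm'
resp16 <- GET(url16)

# An empty or error body is one way to get
# "XML content does not seem to be XML: ''".
stop_for_status(resp16)

# Pass the body as text explicitly instead of the raw response object.
combine.parsed16 <- htmlParse(content(resp16, as = "text"), asText = TRUE)
```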

Upvotes: 0

Views: 303

Answers (1)

Johan Rosa

Reputation: 3152

I am not 100% sure of your desired output, since you did not include your library calls in your example. Anyway, with this code you can get the table:

library(rvest)
library(dplyr)

url <- 'https://www.pro-football-reference.com/draft/2016-combine.htm'

read_html(url) %>% 
  html_nodes(".stats_table") %>% 
  html_table() %>% 
  as.data.frame() %>% 
  filter(Pos == 'CB' | Pos == "S")
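As a side note, the two-condition filter can also be written with %in%, which is easier to extend if you later add positions. A minimal sketch on a toy data frame (hypothetical values; the column name Pos matches the scraped table):

```r
library(dplyr)

# Toy stand-in for the scraped combine table (hypothetical values).
combine <- data.frame(
  Player = c("A", "B", "C", "D"),
  Pos    = c("CB", "QB", "S", "WR")
)

# Equivalent to Pos == 'CB' | Pos == 'S', but easier to extend.
defensive_backs <- filter(combine, Pos %in% c("CB", "S"))
defensive_backs$Player
# "A" "C"
```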

Several years at once:

library(rvest)
library(magrittr)
library(dplyr)
library(purrr)

years <- 2013:2019
urls <- paste0(
  'https://www.pro-football-reference.com/draft/',
  years,
  '-combine.htm')

map(
  urls,
  ~read_html(.x) %>% 
    html_nodes(".stats_table") %>% 
    html_table() %>% 
    as.data.frame()
) %>%
  set_names(years) %>% 
  bind_rows(.id = "year") %>% 
  filter(Pos == 'CB' | Pos == "S")
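One caveat, as an assumption on my part since I don't know why your 2016 request failed: sites like pro-football-reference.com can throttle rapid scrapers, and a fast loop over seven pages may come back with empty bodies. Adding a short pause inside the mapped function is a cheap safeguard; only the lambda changes from the pipeline above:

```r
library(rvest)
library(dplyr)
library(purrr)

years <- 2013:2019
urls <- paste0(
  'https://www.pro-football-reference.com/draft/',
  years,
  '-combine.htm')

map(
  urls,
  ~{
    Sys.sleep(2)  # pause between requests to avoid being throttled
    read_html(.x) %>%
      html_nodes(".stats_table") %>%
      html_table() %>%
      as.data.frame()
  }
) %>%
  set_names(years) %>%
  bind_rows(.id = "year") %>%
  filter(Pos == 'CB' | Pos == "S")
```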

Upvotes: 1
