Reputation: 103
I am trying to extract multiple pieces of data from over 500 URLs that are all structured the same way: www.domain.com/something-else_uniqueID
The code I've tried is:
url <- c("www.domain.com/something-else_uniqueID",
"www.domain.com/something-else_uniqueID2",
"www.domain.com/something-else_uniqueID3")
lapply(url, function(x) {
data.frame(url=url,
category=category <- read_html(url) %>%
html_nodes(xpath = '//*[@id="content-anchor"]/div[1]/div[2]/div[1]/span[2]/a') %>%
html_text(),
sub_category=sub_category <- read_html(url) %>%
html_nodes(xpath = '//*[@id="content-anchor"]/div[1]/div[2]/div[1]/span[3]/a') %>%
html_text(),
section=section <- read_html(url) %>%
html_nodes(xpath = '//*[@id="content-anchor"]/div[1]/div[2]/div[1]/span[4]/a') %>%
html_text())
}) -> my_effort
write.csv(my_effort, "mydata.csv")
Really appreciate your help.
Upvotes: 0
Views: 780
Reputation: 19544
The problem is that you use url inside your function where you should use x, the current item being iterated. Inside the function, url still refers to the whole vector, so every iteration tries to read all of the URLs at once instead of just the current one. Try:
url <- c("www.domain.com/something-else_uniqueID",
"www.domain.com/something-else_uniqueID2",
"www.domain.com/something-else_uniqueID3")
Reduce(function(...) merge(..., all=T),
lapply(url, function(x) {
data.frame(url=x,
category=category <- read_html(x) %>%
html_nodes(xpath = '//*[@id="content-anchor"]/div[1]/div[2]/div[1]/span[2]/a') %>%
html_text(),
sub_category=sub_category <- read_html(x) %>%
html_nodes(xpath = '//*[@id="content-anchor"]/div[1]/div[2]/div[1]/span[3]/a') %>%
html_text(),
section=section <- read_html(x) %>%
html_nodes(xpath = '//*[@id="content-anchor"]/div[1]/div[2]/div[1]/span[4]/a') %>%
html_text())
})) -> my_effort
write.csv(my_effort, "mydata.csv")
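One more thing to consider: with 500+ pages, a single unreachable or malformed URL will abort the whole lapply run. Here is a minimal sketch of one way to guard against that, assuming you are happy to simply skip pages that fail to load (the scrape_page helper name is my own, not from your code):

library(rvest)

# Hypothetical helper: returns NULL instead of erroring when a page
# cannot be fetched or parsed, so one bad URL does not stop the loop.
scrape_page <- function(x) {
  tryCatch({
    page <- read_html(x)
    data.frame(url = x,
               category = page %>%
                 html_nodes(xpath = '//*[@id="content-anchor"]/div[1]/div[2]/div[1]/span[2]/a') %>%
                 html_text(),
               sub_category = page %>%
                 html_nodes(xpath = '//*[@id="content-anchor"]/div[1]/div[2]/div[1]/span[3]/a') %>%
                 html_text(),
               section = page %>%
                 html_nodes(xpath = '//*[@id="content-anchor"]/div[1]/div[2]/div[1]/span[4]/a') %>%
                 html_text())
  }, error = function(e) NULL)  # skip pages that fail rather than aborting
}

results <- lapply(url, scrape_page)
# Drop the failed pages, then stack the rest. Since every data frame has
# the same columns, rbind is a simpler alternative to Reduce/merge here.
my_effort <- do.call(rbind, Filter(Negate(is.null), results))
write.csv(my_effort, "mydata.csv", row.names = FALSE)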
Upvotes: 1