Reputation: 525
I am doing some web scraping with the XML and html packages, and I need to isolate the country name and the two numeric values that you see below:
<tr><td>Tonga</td>
<td class="RightAlign">3,000</td>
<td class="RightAlign">6,000</td>
</tr>
Here is the code I've written so far; I think I just need the right regexes?
library(XML)
# containers to store the results
pages <- list()
country_names <- character(0)
# go through all 6 pages containing the info we want, and store
# the parsed html in a list
for (page in 1:6) {
  who_search <- paste(who_url, page, '.html', sep = '')
  # use a new name so the loop counter isn't overwritten
  doc <- htmlTreeParse(who_search, useInternalNodes = TRUE)
  pages <- c(list(doc), pages)
  # extract the country names from each page
  country <- xpathSApply(doc, "????", xmlValue)
  country_names <- c(country, country_names)
}
Upvotes: 1
Views: 3728
Reputation: 121618
There is no need to use xpathSApply here; use readHTMLTable instead:
library(XML)
library(RCurl)
# parse the page, then extract every table on it as a data frame
page <- htmlParse('http://www.who.int/diabetes/facts/world_figures/en/index4.html')
readHTMLTable(page)
Country 2000 2030
1 Albania 86,000 188,000
2 Andora 6,000 18,000
3 Armenia 120,000 206,000
4 Austria 239,000 366,000
5 Azerbaijan 337,000 733,000
6 Belarus 735,000 922,000
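Since the question loops over 6 pages, you can apply readHTMLTable to each page and stack the results. A minimal sketch, assuming the other pages follow the same index1.html ... index6.html pattern as the URL above (that pattern is an assumption taken from the question's paste() call):
library(XML)
# base URL taken from the page parsed above; the per-page suffix is assumed
who_url <- 'http://www.who.int/diabetes/facts/world_figures/en/index'
tables <- lapply(1:6, function(i) {
  doc <- htmlParse(paste(who_url, i, '.html', sep = ''))
  # which = 1: take the first table on each page (an assumption)
  readHTMLTable(doc, which = 1, stringsAsFactors = FALSE)
})
# stack the six per-page data frames into one
all_countries <- do.call(rbind, tables)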
Using xpathSApply (note the use of gsub to clean the result):
country <- xpathSApply(page, '//*[@id="primary"]/table/tbody/tr',
                       function(x) gsub('\n', '', xmlValue(x)))
> country
[1] "Albania 86,000 188,000 "
[2] "Andora 6,000 18,000 "
[3] "Armenia 120,000 206,000 "
[4] "Austria 239,000 366,000 "
[5] "Azerbaijan 337,000 733,000 "
EDIT: As mentioned in the comments, we can use xpathSApply without gsub:
val <- xpathSApply(page, '//tbody/tr/td', xmlValue)  ## get a vector of table cells
as.data.frame(matrix(val, ncol = 3, byrow = TRUE))   ## reshape into a 3-column data frame
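The resulting data frame has default V1/V2/V3 column names and character values. An optional cleanup sketch; the column names below come from the table header shown earlier (Y2000/Y2030 are hypothetical names chosen to be syntactically valid):
df <- as.data.frame(matrix(val, ncol = 3, byrow = TRUE),
                    stringsAsFactors = FALSE)
names(df) <- c('Country', 'Y2000', 'Y2030')  # header names taken from the table above
# strip the thousands separators so the counts become numeric
df$Y2000 <- as.numeric(gsub(',', '', df$Y2000))
df$Y2030 <- as.numeric(gsub(',', '', df$Y2030))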
Upvotes: 4