Reputation: 30301
I have an HTML document in R, and I want to extract a list of unique tags from that document with a count of their frequency of occurrence.
I could loop through every possible tag as follows, but was hoping for a solution that didn't require a pre-defined list of tags:
library(XML)
url <- 'http://stackoverflow.com/questions/11227809/why-is-processing-a-sorted-array-faster-than-an-unsorted-array'
doc <- htmlParse(url)
# count occurrences of each tag in a pre-defined list of XPath queries
all_tags <- c('//p', '//a', '//b', '//u', '//i')
counts <- sapply(all_tags, function(x) length(xpathSApply(doc, x)))
free(doc)
Upvotes: 4
Views: 96
Reputation: 54237
A classic XML package version could look like this:
tab <- table(xpathSApply(doc, "//*", xmlName))
tab[c('p', 'a', 'b', 'u', 'i')]
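For a self-contained check of the same idea, here it is run end-to-end on a small made-up inline HTML string instead of the live URL (a sketch; the tag names and counts come from that string, not from the question's page):

```r
library(XML)

# a tiny made-up document for illustration
html <- "<html><body><p>one</p><p>two</p><a href='#'>link</a><b>bold</b></body></html>"
doc <- htmlParse(html, asText = TRUE)

# "//*" matches every element; xmlName extracts each element's tag name,
# and table() tallies the frequencies
tab <- table(xpathSApply(doc, "//*", xmlName))
tab

free(doc)
```

No pre-defined list of tags is needed: the wildcard XPath visits every element once.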
Upvotes: 3
Reputation: 78792
Hadleyverse version (with a base R fallback if you prefer):
library(xml2)
library(dplyr)
url <- 'http://stackoverflow.com/questions/11227809/why-is-processing-a-sorted-array-faster-than-an-unsorted-array'
doc <- read_html(url)
tags <- xml_name(xml_find_all(doc, "//*"))
# base version
sort(table(tags))
## tags
## body form h1 head html title sub h3 i noscript
## 1 1 1 1 1 1 2 3 3 3
## h4 h2 th link hr ol ul em input b
## 4 5 5 7 8 10 11 12 12 14
## script meta img br pre strong tbody table code li
## 16 17 26 27 41 43 55 79 104 115
## tr p td div a span
## 127 150 268 358 371 423
# hadleyverse
arrange(count(data_frame(tag=tags), tag), desc(n))
## Source: local data frame [36 x 2]
##
## tag n
## 1 span 423
## 2 a 371
## 3 div 358
## 4 td 268
## 5 p 150
## 6 tr 127
## 7 li 115
## 8 code 104
## 9 table 79
## 10 tbody 55
## .. ... ...
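The same pipeline can be checked self-contained on a small made-up inline document (the HTML string is invented for illustration; `tibble()` is used here since `data_frame()` has since been superseded in dplyr):

```r
library(xml2)
library(dplyr)

# a tiny made-up document for illustration
html <- "<html><body><p>one</p><p>two</p><a href='#'>link</a></body></html>"
doc <- read_html(html)

# element name of every node in the document
tags <- xml_name(xml_find_all(doc, "//*"))

# tally and sort by descending frequency
res <- arrange(count(tibble(tag = tags), tag), desc(n))
res
```

As with the XML-package answer, the wildcard `//*` means no tag list has to be supplied up front.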
Upvotes: 2