BlueFx

Reputation: 67

Scrape webpage using R and Chrome

I am trying to pull the table from this website into R using the selector path from Chrome's element inspector, but it does not work. Could you help me with that? Thanks.

library(rvest)
library(XML)

url <- "https://seekingalpha.com/symbol/MNHVF/profitability"
webpage <- read_html(url)
rank_data_html <- html_nodes(webpage, 'section#cresscap') # table.cresscap-table
rank_data <- html_table(rank_data_html)
rank_data1 <- rank_data[[1]]

Upvotes: 1

Views: 477

Answers (1)

QHarr

Reputation: 84465

The data comes from an additional XHR call that the page makes dynamically, so it is not present in the static HTML that rvest sees. You can request that endpoint directly and handle the JSON response with jsonlite. Extract the relevant list of lists and use dplyr's bind_rows to build your data frame. You can rename the columns to match those shown on the page if you want.

library(jsonlite)
library(dplyr)

# Request the XHR endpoint directly; read_json() returns nested lists by default
data <- jsonlite::read_json('https://seekingalpha.com/symbol/MNHVF/cresscap/fields_ratings?category_id=4&sa_pro=false')
# Each element of data$fields is a named list of values; bind them into one data frame
df <- bind_rows(data$fields)
head(df)
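
If you want the column headers to match the labels shown on the page, a minimal sketch of the renaming step is below. The column names used here are hypothetical; check names(df) to see which keys the JSON actually returns and substitute accordingly.

# Hypothetical old/new column names -- inspect names(df) for the real ones
df <- df %>%
  rename(Metric = name, Grade = grade)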

Upvotes: 1