Reputation: 21459
This R Shiny application appears to use DT
to display its tables. At least in the source code I see:
<script src="plotly-binding-4.10.0/plotly.js"></script>
<link href="datatables-css-0.0.0/datatables-crosstalk.css" rel="stylesheet" />
<script src="datatables-binding-0.20/datatables.js"></script>
<link href="crosstalk-1.2.0/css/crosstalk.min.css" rel="stylesheet" />
<script src="crosstalk-1.2.0/js/crosstalk.min.js"></script>
It also has a "CSV" button to download the data.
How can I download the data from this website myself without clicking? I suppose the button runs some JavaScript that makes a network call, but the Network tab of the Chrome debugger doesn't show any activity.
Ideally I could find a URL to the data, and then I could use the tool of my choice (e.g., wget, curl, Python, ...).
Upvotes: 0
Views: 315
Reputation: 3096
It looks like the raw data are coming from here, via the project's GitHub. A git clone of the repo could get it all for you pretty easily (see the sketch below the links).
https://github.com/Metropolitan-Council/covid-poops/blob/main/R/d_covid_cases.R
https://static.usafacts.org/public/data/covid-19/covid_confirmed_usafacts.csv?_ga=2.86006619.233414847.1642517751-2016304881.1642174657
And the other data are in this repo.
https://github.com/Metropolitan-Council/covid-poops/tree/main/data
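If you want to script the clone-and-read approach, here is a minimal sketch in Python. It assumes git is installed and on your PATH; the data/ directory comes from the repo link above, but I haven't checked which CSV files are actually in it.

import subprocess
from pathlib import Path

# Clone the whole repo once (creates a local "covid-poops" directory)
subprocess.run(
    ["git", "clone", "https://github.com/Metropolitan-Council/covid-poops.git"],
    check=True,
)

# List whatever CSV files the data/ directory contains
for csv_path in Path("covid-poops/data").glob("*.csv"):
    print(csv_path)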
I was able to use the Python requests library to pull the raw data:
import requests

# Pull the raw confirmed-cases CSV straight from USAFacts
x = requests.get('https://static.usafacts.org/public/data/covid-19/covid_confirmed_usafacts.csv?_ga=2.86006619.233414847.1642517751-2016304881.1642174657')
print(x.text)
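If you would rather work with parsed rows than raw text, the standard csv module handles it. This is just a sketch that re-downloads the same file and keys each row by the header line; I haven't verified which columns USAFacts ships.

import csv
import io
import requests

url = ('https://static.usafacts.org/public/data/covid-19/covid_confirmed_usafacts.csv'
       '?_ga=2.86006619.233414847.1642517751-2016304881.1642174657')
resp = requests.get(url)
resp.raise_for_status()

# DictReader keys each row by the CSV header line
rows = list(csv.DictReader(io.StringIO(resp.text)))
print(len(rows), "rows")
print(rows[0].keys())  # inspect the column names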
Edit: It looks like the Shiny app's data are coming from here. I would just grab them via git. The GitHub README states, "The Shiny app is located in ./metc-wastewater-covid-monitor. /data contains relevant CSV data and /www contains CSS, HTML, and relevant font files the app needs upon running."
https://github.com/Metropolitan-Council/covid-poops/tree/main/metc-wastewater-covid-monitor/data
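If you only want a single file from that directory, GitHub also serves raw file contents at raw.githubusercontent.com. The URL pattern below is standard; "SOME_FILE.csv" is a placeholder, so substitute whichever CSV you actually see in that directory (the main branch name comes from the links above).

import requests

# Raw-content URL pattern for a public GitHub repo:
#   https://raw.githubusercontent.com/<owner>/<repo>/<branch>/<path>
# "SOME_FILE.csv" is a placeholder; substitute a CSV from the directory above.
base = ("https://raw.githubusercontent.com/Metropolitan-Council/"
        "covid-poops/main/metc-wastewater-covid-monitor/data/")
resp = requests.get(base + "SOME_FILE.csv")
resp.raise_for_status()
print(resp.text[:500])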
Upvotes: 1
Reputation: 8395
You can write a script that drives a headless browser to navigate to the page and click the download button for you. Selenium is the usual first choice for this kind of work.
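A rough sketch of that approach with Python and Selenium, assuming Chrome is installed. The app URL is a placeholder, and the button selector is a guess (DT's Buttons extension usually tags its CSV button with a "buttons-csv" class), so inspect the page and adjust both.

import time

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By

options = Options()
options.add_argument("--headless=new")
# Route downloads somewhere predictable instead of the default Downloads folder
options.add_experimental_option("prefs", {"download.default_directory": "/tmp"})

driver = webdriver.Chrome(options=options)
driver.get("https://example.shinyapps.io/your-app/")  # placeholder: the Shiny app's URL
time.sleep(10)  # crude wait for the Shiny app to finish rendering

# Verify this selector against the actual page and adjust if it differs
driver.find_element(By.CSS_SELECTOR, "button.buttons-csv").click()
time.sleep(5)  # give the browser a moment to finish writing the file
driver.quit()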
Upvotes: 0