Reputation: 549
When I use a browser to call a web page, I can save the output with right-mouse 'Save as...'. To automate that process in a script I figured I would use the cURL command. For a normal, static web page it's straightforward, but this one is different. The web page probably redirects or calls scripts. I have set javascript.enabled=false in Firefox and the page still works. But when I run
curl 'https://www.brenneisen-capital.de/bcag/wi/neuemissionen'
nothing is returned. Can anyone help me with the correct cURL parameters to fetch what I see in the browser, please? Or is this the wrong approach?
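For reference, this is the kind of invocation I imagine might be needed, just a sketch that follows redirects and sends a browser-like User-Agent, since I don't know which options actually matter here (the User-Agent string and output filename are only placeholders):

# Sketch only: -L follows redirects, --compressed accepts gzip responses,
# -A sends a browser-like User-Agent, -o saves the output to a file.
curl -L --compressed \
     -A 'Mozilla/5.0 (X11; Linux x86_64; rv:115.0) Gecko/20100101 Firefox/115.0' \
     -o neuemissionen.html \
     'https://www.brenneisen-capital.de/bcag/wi/neuemissionen'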
At first it looked like wget would be the solution, but some content is missing.
Upvotes: 0
Views: 571
Reputation: 130917
You definitely could achieve that with wget. Just be sure you use the correct options:
$ wget \
--recursive \
--convert-links \
--no-clobber \
--page-requisites \
--html-extension \
--domains example.org \
--no-parent \
www.example.org/page
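In short: --recursive follows the links on the page, --page-requisites also downloads the CSS, images and scripts needed to render it, --convert-links rewrites the links so the local copy works offline, --html-extension saves pages with an .html suffix, --no-clobber avoids overwriting files that already exist, --domains example.org keeps the crawl on the listed domain, and --no-parent stops wget from climbing above the starting directory. Replace example.org and the URL with the site you actually want to save.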
Upvotes: 1