jamacoe

Reputation: 549

cURL parameters to fetch a web page

When I open a web page in a browser, I can save the output with right-click 'Save as...'. To automate that process in a script, I figured I would use the cURL command. For a normal, static web page this is straightforward, but this one is different: the page probably redirects and calls scripts. I have set javascript.enabled=false in Firefox and the page still works. But when I run

curl 'https://www.brenneisen-capital.de/bcag/wi/neuemissionen'

nothing is returned. Can anyone help me with the correct cURL parameters to fetch what I see in the browser, please? Or is this the wrong approach?
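
I suspect I need options like -L to follow redirects; as a first diagnostic I can at least look at the response headers:

curl -sI 'https://www.brenneisen-capital.de/bcag/wi/neuemissionen'

but I don't know which combination of parameters would actually reproduce what the browser saves.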

At first it looked like wget would be the solution, but some of the content is missing.

Upvotes: 0

Views: 571

Answers (1)

cassiomolin

Reputation: 130917

You can definitely achieve that with wget. Just be sure to use the correct options:

$ wget \
     --recursive \
     --convert-links \
     --no-clobber \
     --page-requisites \
     --html-extension \
     --domains example.org \
     --no-parent \
         www.example.org/page
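
If you would rather stick with cURL: it has no equivalent of wget's recursive mirroring, so it can only save the single page itself. A rough sketch for that case might be (the User-Agent string here is just an example of a browser-like value):

$ curl \
     --location \
     --compressed \
     --user-agent 'Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/115.0' \
     --output neuemissionen.html \
         'https://www.brenneisen-capital.de/bcag/wi/neuemissionen'

Here --location follows redirects, --compressed requests and decodes gzipped responses, and --output writes the body to a file instead of stdout.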

Upvotes: 1
