Tony

Reputation: 9571

How to capture cURL output to a file?

I have a text document that contains a bunch of URLs in this format:

URL = "sitehere.com"

What I'm looking to do is run curl -K myfile.txt and save the responses cURL returns into a file.

How can I do this?

Upvotes: 761

Views: 1321859

Answers (11)

Alex2php

Reputation: 11250

curl -K myconfig.txt -o output.txt 

Writes the first output received in the file you specify (overwrites if an old one exists).

curl -K myconfig.txt >> output.txt

Appends all output you receive to the specified file.

Note: -K is optional; -o and >> work the same way when the URL is given directly on the command line.
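Since the question's URLs already live in a -K config file, an output file can also be set per URL inside that same file. A minimal sketch of curl's config-file syntax (the URLs and file names here are placeholders):

# myfile.txt
url = "https://sitehere.com"
output = "sitehere.html"

url = "https://othersitehere.com"
output = "othersitehere.html"

Running curl -K myfile.txt then writes each response to its paired output file.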

If you are posting to a URL like https://example.org/?foo=1&baz=4 then you need to put double quotes around the URL:

curl \
   -X POST \
   -H "Content-Type: application/octet-stream" \
   --data-binary "@/home/path/file.xyz" \
   "https://xample.org:8080/v1/?filename=file.xyz&food=1&z=bee" \
  >out.txt 2>err.txt

Upvotes: 1031

SanthoshRam

Reputation: 41

If you want to store the output on your desktop, use a POST command like the one below in Git Bash. It worked for me.

curl https://localhost:8080 \
    --request POST \
    --header "Content-Type: application/json" \
    -o "C:\Desktop\test.json"

Upvotes: 4

AlexPixel

Reputation: 439

You need to put quotation marks around the URL, as in "url" -o "file_output"; otherwise the shell may break the URL apart (at characters like & or ?) and curl won't see the full URL or the output file name.

Format

curl "url" -o filename

Example

curl "https://en.wikipedia.org/wiki/Quotation_mark" -o output_file.txt

Example_2

curl "https://en.wikipedia.org/wiki/Quotation_mark" > output_file.txt  

Just make sure to add quotation marks.
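To see why the quotes matter, here is a sketch with an example URL containing a query string; without quotes, the shell treats & as "run in background" and splits the command:

# works: the full query string reaches curl
curl "https://en.wikipedia.org/w/index.php?search=quotation&title=Special:Search" -o output_file.txt

# broken: the shell cuts the command at '&', so the rest of the URL and the -o option are lost
curl https://en.wikipedia.org/w/index.php?search=quotation&title=Special:Search -o output_file.txt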

Upvotes: 7

RubenLaguna

Reputation: 24656

There are several options to make curl write its output to a file:

# saves it to myfile.txt
curl http://www.example.com/data.txt -o myfile.txt

# '#1' is replaced by the first globbing pattern ([] or {}) in the URL,
# so this saves file_1.txt, file_2.txt and file_3.txt
curl "http://www.example.com/data_[1-3].txt" -o "file_#1.txt"

# saves to data.txt, the filename extracted from the URL
curl http://www.example.com/data.txt -O 

# saves to filename determined by the Content-Disposition header sent by the server.
curl http://www.example.com/data.txt -O -J 

Upvotes: 87

Gabriel Staples

Reputation: 52449

Either curl or wget can be used in this case. All 3 of these commands do the same thing, downloading the file at http://path/to/file.txt and saving it locally into "my_file.txt".

Note that in all commands below, I also recommend using the -L or --location option with curl in order to follow HTTP 302 redirects to the new location of the file, if it has moved. wget requires no additional options to do this, as it follows redirects automatically.

# save the file locally as my_file.txt

wget http://path/to/file.txt -O my_file.txt  # my favorite--it has a progress bar
curl -L http://path/to/file.txt -o my_file.txt
curl -L http://path/to/file.txt > my_file.txt

Alternatively, to save the file as the same name locally as it is remotely, use either wget by itself, or curl with -O or --remote-name:

# save the file locally as file.txt

wget http://path/to/file.txt
curl -LO http://path/to/file.txt
curl -L --remote-name http://path/to/file.txt

Notice that the -O in all of the commands above is the capital letter "O".

The nice thing about the wget command is that it shows a progress bar by default.

You can verify that the files downloaded by the 3 techniques above are exactly identical by comparing their sha512 hashes: run sha512sum my_file.txt after each of the commands above and compare the results; matching hashes mean the files are identical, byte for byte.
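A sketch of that check, saving the three downloads under different (placeholder) names:

wget http://path/to/file.txt -O my_file_wget.txt
curl -L http://path/to/file.txt -o my_file_curl.txt
curl -L http://path/to/file.txt > my_file_redirect.txt

# identical hashes mean the files are byte-for-byte identical
sha512sum my_file_wget.txt my_file_curl.txt my_file_redirect.txt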

References

  1. I learned about the -L option with curl here: Is there a way to follow redirects with command line cURL?

See also: wget command to download a file and save as a different filename

Upvotes: 19

Marco

Reputation: 3641

My favorite is lwp-download, which can be found here: https://metacpan.org/dist/libwww-perl/view/bin/lwp-download

You can use it like this:

lwp-download http://www.perl.com/CPAN/src/latest.tar.gz

This will store the file as "latest.tar.gz" in your current directory, so no further option is needed.

Upvotes: 0

mmik

Reputation: 5961

For those of you who want to copy the cURL output to the clipboard instead of writing it to a file, you can pipe (|) the cURL output into pbcopy (on macOS).

Example: curl https://www.google.com/robots.txt | pbcopy. This will copy all the content from the given URL to your clipboard.

Linux version: curl https://www.google.com/robots.txt | xclip

Windows version: curl https://www.google.com/robots.txt | clip
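Note that plain xclip copies to the X primary selection (pasted with the middle mouse button). To put the output in the regular clipboard instead, xclip takes a -selection option; a sketch (-s just hides curl's progress meter):

curl -s https://www.google.com/robots.txt | xclip -selection clipboard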

Upvotes: 8

Cudox

Reputation: 21

Appends the output to the file you specify (the file is created if it does not exist).

curl -K myconfig.txt >> output.txt

Upvotes: -1

lca25er

Reputation: 49

A tad bit late, but I think the OP was looking for something like:

curl -K myfile.txt --trace-ascii output.txt
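Unlike -o, which saves only the response body, --trace-ascii records the full exchange (request and response headers plus data). The two can be combined; a sketch, assuming myfile.txt contains a single url entry:

curl -K myfile.txt -o body.txt --trace-ascii trace.txt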

Upvotes: 2

yumingtao

Reputation: 89

Use --trace-ascii output.txt to write a full trace of the request and response (headers and data) to the file output.txt.

Upvotes: 3

Greg Bray

Reputation: 15697

For a single file you can use -O instead of -o filename to use the last segment of the URL path as the filename. Example:

curl http://example.com/folder/big-file.iso -O

will save the result to a new file named big-file.iso in the current folder. In this way it works similarly to wget, but allows you to specify other curl options that are not available when using wget.
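For example, a sketch combining -O with curl options such as -L (follow redirects) and -C - (resume an interrupted download):

curl -L -C - -O http://example.com/folder/big-file.iso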

Upvotes: 306
