Reputation: 417
I have no real experience with bash scripts, but as part of a website project I am running a bash script that retrieves XML via an HTTP GET (using curl). I then want to output it to a file, but only if the XML doesn't contain a certain string. My data supplier only permits me to perform the GET once every 6 minutes.
I am able to retrieve the xml and output to a file as follows:
URL="insert get http URL here"
OUTPUT="insert filename and path here"
curl -s "$URL" -o "$OUTPUT"
To introduce an if statement to check for the string, I have assigned the output of the curl to a variable, which I can then search:
DATA=$(curl -s "$URL")
if [[ $DATA == mystring* ]]
then $DATA -o $OUTPUT
fi
However, my big problem is that when I have tried this, each time it looks at $DATA it reruns the curl which exceeds my permitted attempts within the time period.
How do I assign the output of the curl to a variable that can be reused without rerunning the curl each time the variable is referenced?
I would have thought I could just convert it to a string, but my searching hasn't come up with anything, so I fear I am using the wrong search terms.
Upvotes: 1
Views: 1926
Reputation: 22831
This should work:
URL="http://your.url/some/path"
OUTPUT="/path/to/output"
DATA=$(curl -s "$URL")       # curl runs once, here
if echo "$DATA" | grep -q "mystring"
then
    echo "$DATA" > "$OUTPUT"
fi
$DATA is a plain string once the curl call completes (assuming it returns text). Command substitution runs the command once, at assignment time; expanding the variable afterwards only reuses the captured output and never reruns curl. You then just need to quote it correctly when searching for your term and writing it to a file.
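To see that the command in a command substitution runs only once, here is a small sketch. The fetch function and the temporary counter file are hypothetical stand-ins for the rate-limited curl call; the file tracks how many times the "request" actually fires.

```shell
#!/bin/sh
# Hypothetical counter file standing in for the API quota.
counter=$(mktemp)
echo 0 > "$counter"

# Stand-in for: curl -s "$URL". Bumps the counter every time it runs.
fetch() {
    n=$(cat "$counter")
    echo $((n + 1)) > "$counter"
    echo "fake xml payload"
}

DATA=$(fetch)                  # fetch runs exactly once, here

echo "$DATA" | grep -q "fake"  # first use of $DATA: no new fetch
echo "$DATA" > /dev/null       # second use: still no new fetch

runs=$(cat "$counter")
echo "fetch ran $runs time(s)" # → fetch ran 1 time(s)
rm -f "$counter"
```

If curl were rerun on every expansion of $DATA, the counter would read 3 by the end; it reads 1.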
Upvotes: 1