Levi Noecker

Reputation: 3300

Passing certs to curl from an environment variable

I am working on a CORS monitor for my application, and need to do the following:

  1. Download certificates from our Hashicorp Vault instance
  2. Curl an endpoint using those certs
  3. Evaluate the response

I can do this currently as follows:

vault kv get -field crt my/cert/path/CACERT > CACERT.crt
vault kv get -field crt my/cert/path/TESTCERT > CERT.crt
vault kv get -field key my/cert/path/TESTCERT > KEY.key

curl -v \
    --cacert CACERT.crt \
    --cert CERT.crt \
    --key KEY.key \
    --location \
    --request GET 'https://my.end.point'
# <evaluate here>
rm CACERT.crt CERT.crt KEY.key

While this works, I would prefer to not write the certs to a file, and would rather keep them in memory, such as using environment variables or maybe some bash-isms I'm not aware of.

In my mind it would look something more like this:

CACERT=$(vault kv get -field crt my/cert/path/CACERT)
CERT=$(vault kv get -field crt my/cert/path/TESTCERT)
KEY=$(vault kv get -field key my/cert/path/TESTCERT)

curl -v \
    --cacert $CACERT \
    --cert $CERT \
    --key $KEY \
    --location \
    --request GET 'https://my.end.point'
# <evaluate here>

Obviously this won't work as written, since curl expects file paths, but for illustrative purposes: is something like this possible? If so, is it a poor approach? Perhaps there is another way to approach this that I haven't considered? I am aware that I could do the above relatively easily in Python, but I'd prefer to stick to bash + curl if at all possible.

Upvotes: 3

Views: 4328

Answers (2)

Léa Gris

Reputation: 19545

In a POSIX-only shell, process substitution can be replaced with explicitly created named pipes.

Background tasks stream the keys and certificates to their dedicated named pipes.

#!/usr/bin/env sh

# These are the randomly named pipes
cacert_pipe=$(mktemp -u)
cert_pipe=$(mktemp -u)
key_pipe=$(mktemp -u)

# Remove the named pipes on exit
trap 'rm -f -- "$cacert_pipe" "$cert_pipe" "$key_pipe"' EXIT

# Create the named pipes
mkfifo -- "$cacert_pipe" "$cert_pipe" "$key_pipe" || exit 1

# Start background shells to stream data to the respective named pipes
vault kv get -field crt my/cert/path/CACERT >"$cacert_pipe" &
vault kv get -field crt my/cert/path/TESTCERT >"$cert_pipe" &
vault kv get -field key my/cert/path/TESTCERT >"$key_pipe" &

curl -v \
    --cacert "$cacert_pipe" \
    --cert "$cert_pipe" \
    --key "$key_pipe" \
    --location \
    --request GET 'https://example.com/my.end.point'

The downside compared to Bash's process substitution is that a named pipe can only be read once, so the streaming tasks need to be restarted explicitly before each call to curl that uses the pipes as file arguments.
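One way around that restart caveat is to wrap the pipe setup and the consuming command in a function, so every invocation recreates the stream. This is a minimal single-pipe sketch; the printf and cat lines are stand-ins for the vault and curl commands, and the pattern extends to the three pipes above:

```shell
#!/usr/bin/env sh

fetch() {
    # Create a fresh named pipe for this invocation
    pipe=$(mktemp -u)
    mkfifo -- "$pipe" || return 1

    # Stand-in for: vault kv get -field crt my/cert/path/CACERT >"$pipe" &
    printf '%s\n' 'dummy-cert-data' >"$pipe" &

    # Stand-in for the curl call; reads the pipe exactly once
    cat "$pipe"

    # Clean up so the next call starts from scratch
    rm -f -- "$pipe"
}

fetch   # safe to call repeatedly; each call restarts the stream
```

Each call pays the cost of recreating the pipe and re-fetching from Vault, but the script stays POSIX-portable.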

Upvotes: 0

James Sneeringer

Reputation: 286

Bash supports process substitution using the <(cmd) syntax. The expression expands to a filename (something like /dev/fd/63) from which the output of cmd can be read. This allows you to pass command output as an argument where a filename is expected.
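The mechanism is easy to see in isolation: the substitution itself is just a path, and reading that path yields the command's output.

```shell
#!/usr/bin/env bash

# The <(...) expression expands to a path such as /dev/fd/63
echo <(echo hello)

# Reading that path yields the inner command's output
cat <(echo hello)     # prints: hello
```

Note this is a Bash (and Zsh/Ksh) feature, not POSIX sh, so the script must actually run under bash.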

In your example, you would do something like this:

CACERT=$(vault kv get -field crt my/cert/path/CACERT)
CERT=$(vault kv get -field crt my/cert/path/TESTCERT)
KEY=$(vault kv get -field key my/cert/path/TESTCERT)

curl -v \
    --cacert <(echo "$CACERT") \
    --cert <(echo "$CERT") \
    --key <(echo "$KEY") \
    --location \
    --request GET 'https://my.end.point'
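Since each value is read exactly once, the intermediate variables can also be dropped and the vault output fed straight into the substitutions. This is a sketch reusing the question's paths and endpoint, which are assumptions here:

```shell
#!/usr/bin/env bash

# Each <(...) runs vault and hands curl a /dev/fd path to read,
# so the secrets never touch the filesystem
curl -v \
    --cacert <(vault kv get -field crt my/cert/path/CACERT) \
    --cert <(vault kv get -field crt my/cert/path/TESTCERT) \
    --key <(vault kv get -field key my/cert/path/TESTCERT) \
    --location \
    --request GET 'https://my.end.point'
```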

Upvotes: 4
