Reputation: 72101
I'd like to export a key from one vault and import it into another vault.
It feels like there should be an easy way to do this from the command line, but I don't see a simple, high-level way to fully export and then import a key.
Is there any way to do this? I would prefer command-line solutions using the vault CLI.
Upvotes: 2
Views: 9464
Reputation: 1328002
In Lakhan Saiteja's answer, the step "getting all the keys in the given path from source vault" can be problematic:
keys=$(vault kv list ${secret_path}/ | sed '1,2d')
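To see why, here is a minimal sketch (assuming the default table output of vault kv list): the sed '1,2d' strips the two header lines, but any remaining entry ending in / is a sub-path that this one-level listing never descends into.

```shell
# One-level listing as used in that answer; a sketch, not a recursive fix.
# `sed '1,2d'` drops the "Keys" / "----" header of the table output.
# Entries ending in "/" are nested sub-paths whose keys are NOT returned.
list_one_level() {
    vault kv list "$1" | sed '1,2d'
}
```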
That problem is tracked in hashicorp/vault issue 5275, "KV Engine: Recursively list keys". The issue proposes various scripts, but in Q4 2024 Palash Das added the following comment:

End of 2024 and we still do not have this.
The scripts shared above work fine, but have certain limitations, such as:
- Continuing on errors like 403: in real production systems not all team members have access to all keys, so it is safe to ignore 403s while traversing.
- A huge number of keys: not every system is localhost:8200; production systems may contain 10,000+ keys, so performance matters.
- Not all companies are interested in paying a CA like Let's Encrypt, so they use self-signed certificates; VAULT_SKIP_VERIFY does the job.

I made a hacky script (only performance, no readability) to list 10K+ keys in just a few seconds.
List Recursive Parallel
list_recur_par() {
    local root="${1:-secret/}" conc="${2:-32}" out_file="${3:-all_paths}"
    local dq='"'
    echo "$root" > "$out_file"
    tail -n +1 -f "$out_file" \
        | (grep --line-buffered '/$' || grep '/$') \
        | xargs -P "$conc" -I _KEY bash -E -c "
            export data=\$(
                vault kv list '_KEY' \
                    | awk -v pref='_KEY' 'NR>=3 { print pref\$0 }' || true
            );
            flock $out_file -c 'echo $dq\$data$dq >> $out_file'
        "
}

export VAULT_ADDR=https://......:8200
export VAULT_TOKEN='hvs....'

# List all subkeys under "secret/" in 128 parallel processes and save them to
# /tmp/all_keys, then hang indefinitely (see TODO below).
list_recur_par secret/ 128 /tmp/all_keys
Notes for OSX users

- Install flock: brew install flock (I am not a skilled developer, hence I could not use mkfifo).
- Add -S 2048 to xargs; not every implementation is as good as the original Linux binaries. Large vault keys fail in OSX xargs, and it does not respect the ARG_MAX limit either.
- Use the latest version of bash or zsh; OSX's old bash is unusable in 2024.

Works on ~my~ ❌ every ✅ machine
Use docker
# --dns is only needed if your company's vault is inside a VPN
docker run -v $PWD:/work --rm -it --dns ... \
    -e VAULT_SKIP_VERIFY=true -e VAULT_ADDR=https://...:8200 -e VAULT_TOKEN='hvs....' \
    --entrypoint bash bitnami/vault:1.18.3-debian-12-r0
or the original alpine image (real devs never use alpine or busybox, which have pathetic implementations of grep and awk):

# --dns is only needed if your company's vault is inside a VPN
docker run -v $PWD:/work --rm -it --dns ... \
    -e VAULT_SKIP_VERIFY=true -e VAULT_ADDR=https://...:8200 -e VAULT_TOKEN='hvs....' \
    hashicorp/vault:1.18 sh
apk add bash
# ... the script goes here

(grep --line-buffered '/$' || grep '/$') is used to fall back to busybox grep; you may see an error about the unknown argument --line-buffered, but that is OK. busybox grep does not have --line-buffered, and busybox awk does not have -W interactive.
TODO
This script is incomplete: it is an infinite snake that eats its own tail. When there have been no child processes running the vault binary for a steady state of about 5s, you can kill the script and consider it complete. In the future it will be updated with proper multi-process queue management in shell script.
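Until then, a simpler hedged alternative is a plain sequential recursion: no tail -f, no flock, no parallelism, so it is much slower, but it terminates on its own. It assumes keys contain no whitespace and uses -format=yaml so each entry arrives as a "- item" line:

```shell
# Sequential recursive lister: walks the tree depth-first and prints one
# full key path per line. `2>/dev/null` plus the empty-result handling keeps
# it going past 403s on paths the token cannot read.
list_recur() {
    local prefix="$1" entries e
    entries=$(vault kv list -format=yaml "$prefix" 2>/dev/null | sed 's/^- //') || true
    for e in $entries; do
        if [ "${e%/}" != "$e" ]; then
            list_recur "$prefix$e"    # trailing "/" means folder: recurse
        else
            echo "$prefix$e"          # leaf key: print its full path
        fi
    done
}
```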
Upvotes: 1
Reputation: 532
The only way to export data from one vault to another is to do it individually for every key (and every path). I've written a small bash script to automate this for all keys in a given path.
This script pulls the data for each key (in the given path) from the source vault and inserts it into the destination vault.
You need to provide the vault URL, token, and CA certificate (for HTTPS authentication) for the source and destination vaults, as well as the path (which holds the keys), in the script below -
#! /usr/bin/env bash
source_vault_url="<source-vault-url>"
source_vault_token="<source_vault_token>"
source_vault_cert_path="<source_vault_cert_path>"
destination_vault_url="<destination_vault_url>"
destination_vault_token="<destination_vault_token>"
destination_vault_cert_path="<destination_vault_cert_path>"
# secret_path is the path from which the keys are to be exported from source vault to destination vault
secret_path="<path-without-slash>"
function _set_source_vault_env_variables() {
export VAULT_ADDR=${source_vault_url}
export VAULT_TOKEN=${source_vault_token}
export VAULT_CACERT=${source_vault_cert_path}
}
function _set_destination_vault_env_variables() {
export VAULT_ADDR=${destination_vault_url}
export VAULT_TOKEN=${destination_vault_token}
export VAULT_CACERT=${destination_vault_cert_path}
}
_set_destination_vault_env_variables
printf "Enabling kv-v2 secret at the path ${secret_path} in the destination vault -\n"
vault secrets enable -path=${secret_path}/ kv-v2 || true
_set_source_vault_env_variables
# getting all the keys in the given path from source vault
keys=$(vault kv list ${secret_path}/ | sed '1,2d')
# iterating though each key in source vault (in the given path) and inserting the same into destination vault
printf "Exporting keys from source vault ${source_vault_url} at path ${secret_path}/ ... \n"
for key in ${keys}
do
_set_source_vault_env_variables
key_data_json=$(vault kv get -format json -field data ${secret_path}/${key})
printf '%s %s\n' "${key}" "${key_data_json}"
_set_destination_vault_env_variables
echo "${key_data_json}" | vault kv put ${secret_path}/${key} -
done
printf "Export Complete!\n"
# listing all the keys (in the given path) in the destination vault
printf "Keys in the destination vault ${destination_vault_url} at path ${secret_path}/ -\n"
vault kv list ${secret_path}
Upvotes: 3
Reputation: 271
We are developing an open-source CLI tool that does exactly what you need.
The tool can handle a single secret or a full tree structure for both import and export. It also supports end-to-end encryption of your secrets between export and import across Vault instances.
https://github.com/jonasvinther/medusa
export VAULT_ADDR=https://192.168.86.41:8201
export VAULT_SKIP_VERIFY=true
export VAULT_TOKEN=00000000-0000-0000-0000-000000000000
./medusa export kv/path/to/secret --format="yaml" --output="my-secrets.txt"
./medusa import kv/path/to/new/secret ./my-secrets.txt
Upvotes: 2
Reputation: 3225
The only way to do that is by chaining two vault commands, which is effectively reading the value out of the first vault and then writing it to the second one. For example:
export VAULT_TOKEN=valid-token-for1
export VAULT_ADDR=https://vault1
JSON_DATA=$(vault kv get -format json -field data secret/foo)
export VAULT_TOKEN=valid-token-for2
export VAULT_ADDR=https://vault2
echo "$JSON_DATA" | vault kv put secret/foo -
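When copying many secrets, the token/address swap is easy to get wrong, so the two commands can be wrapped in a small function. This is a hedged sketch: SRC_ADDR, SRC_TOKEN, DST_ADDR, and DST_TOKEN are variable names of my own choosing, not standard Vault ones.

```shell
# Copy one KV secret between vaults. Per-command environment assignments keep
# the source and destination credentials from leaking into each other.
copy_secret() {
    local path="$1" data
    data=$(VAULT_ADDR="$SRC_ADDR" VAULT_TOKEN="$SRC_TOKEN" \
           vault kv get -format=json -field=data "$path") || return 1
    echo "$data" | VAULT_ADDR="$DST_ADDR" VAULT_TOKEN="$DST_TOKEN" \
           vault kv put "$path" - > /dev/null
}
```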
Upvotes: 5