Reputation: 14601
As part of a setup, I create TLS certs and store them in S3. The certs are generated via an external data source that runs the command to generate them, and I then use those outputs to create S3 bucket object resources.
This works very well the first time I run terraform apply. However, if I change any other (non-cert) variable or resource and rerun, Terraform reruns the external command, which generates a new key/cert pair, uploads it to S3, and breaks everything that already works.
Is there any way to create the resource conditionally? What pattern could I use to make the certs created only if they don't exist?
I did look at storing the generated keys/certs locally, but this is sensitive key material; I do not want it stored on local disk (and there are keys per environment).
Key/cert generation and storage:
data "external" "ca" {
program = ["sh","-c","jq '.root|fromjson' | cfssl gencert -initca -"]
#
query = {root = "${ data.template_file.etcd-ca-csr.rendered }"}
# the result will be saved in
# data.external.etcd-ca.result.key
# data.external.etcd-ca.result.csr
# data.external.etcd-ca.result.cert
}
resource "aws_s3_bucket_object" "ca_cert" {
bucket = "${aws_s3_bucket.my_bucket.id}"
key = "ca.pem"
content = "${data.external.ca.result.cert}"
}
resource "aws_s3_bucket_object" "ca_key" {
bucket = "${aws_s3_bucket.my_bucket.id}"
key = "ca-key.pem"
content = "${data.external.ca.result.key}"
}
Happy to look at using some form of conditional or entirely different generation pattern.
Upvotes: 3
Views: 1699
Reputation: 74789
The reason for this behavior is that external is a data source, so Terraform expects it to be read-only and side-effect-free, and re-reads data sources on every plan.
To do this via an external script, it would be necessary to use a resource provisioner to run the script and upload the results to S3, since there is currently no external equivalent for resources. Resources are allowed to have side-effects, while provisioners are side-effect-only (that is, they can't produce results to use elsewhere in the config).
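For illustration, a rough sketch of that provisioner-based approach could use a null_resource with a local-exec provisioner (the resource name here is hypothetical, and it assumes cfssl, jq, and the AWS CLI are available where Terraform runs). Note that the intermediate JSON file still touches local disk briefly, which may conflict with your goal of keeping key material off the local filesystem:

resource "null_resource" "ca" {
  # Re-run the script only when the rendered CSR changes,
  # rather than on every plan as a data source would.
  triggers = {
    csr = "${data.template_file.etcd-ca-csr.rendered}"
  }

  provisioner "local-exec" {
    # Generate the CA material and upload it straight to S3.
    # Nothing is returned to the rest of the configuration.
    command = <<EOF
echo '${data.template_file.etcd-ca-csr.rendered}' | cfssl gencert -initca - > ca.json
jq -r .cert ca.json | aws s3 cp - s3://${aws_s3_bucket.my_bucket.id}/ca.pem
jq -r .key ca.json | aws s3 cp - s3://${aws_s3_bucket.my_bucket.id}/ca-key.pem
rm ca.json
EOF
  }
}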
Another approach, though, would be to use Terraform's built-in TLS provider, which allows creation of certificates within Terraform itself. In this case it looks like you're trying to create a new CA cert and key, which could be done with tls_self_signed_cert like this:
resource "tls_private_key" "ca" {
algorithm = "RSA"
rsa_bits = 2048
}
resource "tls_self_signed_cert" "ca" {
key_algorithm = "RSA"
private_key_pem = "${tls_private_key.ca.private_key_pem}"
# ... subject and validity settings, as appropriate
is_ca_certificate = true
allowed_uses = ["cert_signing"]
}
resource "aws_s3_bucket_object" "ca_cert" {
bucket = "${aws_s3_bucket.my_bucket.id}"
key = "ca.pem"
content = "${resource.tls_self_signed_cert.ca.cert_pem}"
}
resource "aws_s3_bucket_object" "ca_key" {
bucket = "${aws_s3_bucket.my_bucket.id}"
key = "ca-key.pem"
content = "${resource.tls_self_signed_cert.ca.private_key_pem}"
}
The generated private key will be included in the state for use on future runs, so it's important to ensure that the state is stored securely. Note that this would also be true using the external data source, since data source results are also stored in state. Thus this approach is equivalent from the standpoint of where the secrets get stored.
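If you're using remote state, S3's server-side encryption can at least keep the state object encrypted at rest. A minimal sketch (the state bucket name here is hypothetical) would be:

terraform {
  backend "s3" {
    bucket  = "my-terraform-state"   # hypothetical bucket holding the state
    key     = "certs/terraform.tfstate"
    region  = "us-east-1"
    encrypt = true                   # server-side encryption for the state object
  }
}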
I wrote more details about using Terraform for TLS certificate management in an article on my website. Its scope is broader than your requirements here, but it may be of some interest.
Upvotes: 2