Reputation: 381
In my 10-machine bare-metal Kubernetes cluster, one service needs to call another HTTPS-based service that uses a self-signed certificate. However, since this self-signed certificate is not added to the pods' trusted root CAs, the call fails with an error saying the x.509 certificate can't be validated.
All pods are based on Ubuntu Docker images. However, the usual way to add a CA cert to the trust list on Ubuntu (using dpkg-reconfigure ca-certificates) no longer works in the pod. And of course, even if I succeeded in adding the CA cert to the trust root on one pod, it would be gone as soon as another pod is spun up.
I searched the Kubernetes documentation and was surprised to find nothing except how to configure certs for talking to the API server, which is not what I'm looking for. This should be a fairly common scenario whenever a secure channel is needed between pods. Any ideas?
Upvotes: 34
Views: 106229
Reputation: 5145
Update: answer edited; read option 3.
I can think of 3 options to solve your issue if I were in your scenario:
Option 1 (the only complete solution I can offer; my other solutions are unfortunately half solutions; credit to Paras Patidar / the site below):
Add the certificate to a ConfigMap:
Let's say your PEM file is my-cert.pem:
kubectl -n <namespace-for-config-map-optional> create configmap ca-pemstore --from-file=my-cert.pem
Mount the ConfigMap as a volume at the container's existing CA root location: mount the ConfigMap's file as a one-to-one file mapping via a volumeMount into the /etc/ssl/certs/ directory, for example:
apiVersion: v1
kind: Pod
metadata:
  name: cacheconnectsample
spec:
  containers:
    - name: cacheconnectsample
      image: cacheconnectsample:v1
      volumeMounts:
        - name: ca-pemstore
          mountPath: /etc/ssl/certs/my-cert.pem
          subPath: my-cert.pem
          readOnly: false
      ports:
        - containerPort: 80
      command: [ "dotnet" ]
      args: [ "cacheconnectsample.dll" ]
  volumes:
    - name: ca-pemstore
      configMap:
        name: ca-pemstore
So I believe the idea here is that /etc/ssl/certs/ is the location of TLS certs trusted by the pod, and the subPath method lets you add a file without wiping out the existing contents of that folder, which would include certs that Kubernetes and the system itself rely on.
If all pods share this mountPath, then you might be able to add a PodPreset and ConfigMap to every namespace, but PodPresets are in alpha and are only helpful for static namespaces. (If that worked, though, all your pods would trust that cert.)
Option 2 (a half solution/idea that doesn't exactly answer your question but solves your problem; I'm fairly confident it will work in theory, though it will require research on your part, and I think you'll find it's the best option):
In theory you should be able to leverage cert-manager + external-dns + Let's Encrypt (free) + a public domain name to replace the self-signed cert with a publicly trusted cert.
(cert-manager's end result is an auto-generated Kubernetes TLS secret signed by Let's Encrypt in your cluster; it supports a dns01 challenge that can be used to prove you own the domain, which means you should be able to leverage that solution even without an Ingress / even if the cluster is only meant for a private network.)
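As a rough sketch of what the cert-manager side could look like (assuming Cloudflare-managed DNS for the dns01 challenge; the domain, email, and resource/Secret names below are placeholders, not anything from the question):

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-dns01
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com              # placeholder contact address
    privateKeySecretRef:
      name: letsencrypt-account-key       # cert-manager stores the ACME account key here
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:
              name: cloudflare-api-token  # Secret you create beforehand with a DNS API token
              key: api-token
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: internal-service
spec:
  secretName: internal-service-tls        # resulting TLS secret for the HTTPS service to mount
  dnsNames:
    - internal-service.example.com        # placeholder public domain name
  issuerRef:
    name: letsencrypt-dns01
    kind: ClusterIssuer

The calling pods then need no CA changes at all, because the serving cert chains to a publicly trusted root.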
Option 3 / Edit (after gaining more hands-on experience with Kubernetes):
I believe that switchboard.op's answer is probably the best and should be the accepted answer. This "can" be done at runtime, but I'd argue it should never be done at runtime: doing it at runtime is super hacky and full of edge cases, and there's no universal way of doing it.
Also, it turns out that my option 1 is only half correct. Mounting the ca.crt on the pod alone isn't enough; after that file is mounted you'd need to run a command to trust it, which means you probably need to override the pod's startup command. For example, you can't just run the default startup command (connect to the database) and then run the command to update the trusted CA certs afterwards; you'd have to overwrite the default startup script with a hand-jammed one that first updates the trusted CA certs and then connects to the database. And the problem is that Ubuntu, RHEL, Alpine, and others have different locations where you have to mount the CA certs, and sometimes different commands to trust them, so a universal at-runtime solution you could apply to every pod in the cluster isn't impossible, but it would require tons of if statements and mutating webhooks/complexity. (A hand-crafted per-pod solution is very possible, though, if you just need to dynamically update the cert for a single pod.)
switchboard.op's answer is the way I'd do it if I had to do it: build a new custom Docker image with your custom CA cert baked in as trusted. This is a universal solution, it greatly simplifies the YAML side, and it's relatively easy to do on the Docker image side.
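For reference, a minimal sketch of such an image build, assuming a Debian/Ubuntu base image and a CA file named my-ca.crt sitting next to the Dockerfile (the image names and tag are placeholders):

FROM ubuntu:22.04
# make sure the CA tooling exists, copy in the custom CA, and rebuild the trust bundle
RUN apt-get update && apt-get install -y --no-install-recommends ca-certificates && rm -rf /var/lib/apt/lists/*
COPY my-ca.crt /usr/local/share/ca-certificates/my-ca.crt
RUN update-ca-certificates

Build and push it (e.g. docker build -t registry.example.com/my-app:with-ca .) and point your Deployments at that image instead of the original one.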
Upvotes: 27
Reputation: 483
You could update the entire trusted CA bundle by doing something like:
rootCA1="-----BEGIN CERTIFICATE-----
MIIEYzCCA0ugA ...
-----END CERTIFICATE-----
all my root certs ...
"
mkdir -p myhost-files
curl https://ccadb.my.salesforce-sites.com/mozilla/IncludedRootsPEMTxt?TrustBitsInclude=Websites -o myhost-files/ca-certificates.crt
echo "$rootCA1" >> myhost-files/ca-certificates.crt
kubectl create configmap myhost-files --from-file=myhost-files --save-config --dry-run=client -o yaml | kubectl apply -f -
Then update the Helm chart values.yaml:
extraVolumeMounts:
  - mountPath: /etc/ssl/certs/ca-certificates.crt
    subPath: ca-certificates.crt
    name: myhost-files
# extraVolumes: []
extraVolumes:
  - name: myhost-files
    configMap:
      name: myhost-files
See the other answers for a simple Deployment mount.
Note that Fedora/RHEL-based images use /etc/pki/ca-trust instead.
Here are some functions you can expand on for Debian/Fedora if you want to update the trust store with additional files. I can't remember whether the root CA trust gets blown away on apt upgrade; probably on updates, but I'm not sure.
add_ca_crt_debian(){
  sudo apt-get install -y ca-certificates  # this pkg is usually installed already
  echo "debian"
  echo "${old_root_ca_crt}" | sudo tee /usr/local/share/ca-certificates/${old_cert_file_name}
  echo "${root_ca_crt}" | sudo tee /usr/local/share/ca-certificates/${cert_file_name}
  sudo update-ca-certificates
}
add_ca_crt_fedora(){
  echo "fedora"
  echo "${old_root_ca_crt}" | sudo tee /etc/pki/ca-trust/source/anchors/${old_cert_file_name}
  echo "${root_ca_crt}" | sudo tee /etc/pki/ca-trust/source/anchors/${cert_file_name}
  sudo update-ca-trust
}
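A small usage sketch for those functions; the variable names are the ones the functions expect, while the file names here are placeholders:

# certificate contents and target file names (placeholders)
root_ca_crt="$(cat my-root-ca.pem)"
cert_file_name="my-root-ca.crt"
old_root_ca_crt="$(cat my-old-root-ca.pem)"
old_cert_file_name="my-old-root-ca.crt"

# call the function matching the distro
if [ -f /etc/debian_version ]; then
  add_ca_crt_debian
elif [ -f /etc/redhat-release ]; then
  add_ca_crt_fedora
fi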
Upvotes: -1
Reputation: 443
I came up against this same problem, and the options presented here were all good starting points. The initContainers method was promising, but some application-specific problems worked against me there. In the end, this method using a lifecycle hook is what worked for me.
kubectl -n my-namespace create configmap my-cert --from-file=root_ca_only.crt
...
volumes:
  - name: my-cert
    configMap:
      defaultMode: 420
      name: my-cert
...
...
volumeMounts:
  - mountPath: /usr/local/share/ca-certificates/root_ca_only.crt
    name: my-cert
    subPath: root_ca_only.crt
...
lifecycle:
  postStart:
    exec:
      command:
        - sh
        - -c
        - sleep 10; update-ca-certificates
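Putting those snippets together, a minimal sketch of a complete Pod (the name, namespace, and image are placeholders; the image must ship the ca-certificates package so that update-ca-certificates exists):

apiVersion: v1
kind: Pod
metadata:
  name: my-app
  namespace: my-namespace
spec:
  containers:
    - name: my-app
      image: registry.example.com/my-app:1.0   # placeholder image with ca-certificates installed
      volumeMounts:
        - mountPath: /usr/local/share/ca-certificates/root_ca_only.crt
          name: my-cert
          subPath: root_ca_only.crt
      lifecycle:
        postStart:
          exec:
            command:
              - sh
              - -c
              - sleep 10; update-ca-certificates
  volumes:
    - name: my-cert
      configMap:
        defaultMode: 420
        name: my-cert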
Upvotes: 5
Reputation: 1279
Just out of curiosity, here is an example manifest utilizing the init container approach.
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo
data:
  # in my case it is the Cloudflare CA used to sign certificates for origin servers
  origin_ca_rsa_root.pem: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
---
apiVersion: v1
kind: Pod
metadata:
  name: demo
  labels:
    name: demo
spec:
  nodeSelector:
    kubernetes.io/os: linux
  initContainers:
    - name: init
      # image: ubuntu
      # command: ["/bin/sh", "-c"]
      # args: ["apt -qq update && apt -qq install -y ca-certificates && update-ca-certificates && cp -r /etc/ssl/certs/* /artifact/"]
      # # alternative image with preinstalled ca-certificates utilities
      image: grafana/alpine:3.15.4
      command: ["/bin/sh", "-c"]
      args: ["update-ca-certificates && cp -r /etc/ssl/certs/* /artifact/"]
      volumeMounts:
        - name: demo
          # note - we need to change the extension to crt here
          mountPath: /usr/local/share/ca-certificates/origin_ca_rsa_root.crt
          subPath: origin_ca_rsa_root.pem
          readOnly: false
        - name: tmp
          mountPath: /artifact
          readOnly: false
  containers:
    - name: demo
      # note - even though the init container is Alpine-based and this one is Debian-based, everything still works
      image: nginx
      ports:
        - containerPort: 80
      volumeMounts:
        - name: tmp
          mountPath: /etc/ssl/certs
          readOnly: false
  volumes:
    - name: demo
      configMap:
        name: demo
    # will be used to pass files between the init container and the actual container
    - name: tmp
      emptyDir: {}
and its usage:
kubectl apply -f demo.yml
kubectl exec demo -c demo -- curl --resolve foo.bar.com:443:10.0.14.14 https://foo.bar.com/swagger/v1/swagger.json
kubectl delete -f demo.yml
notes:
Indeed, it is kind of ugly and monstrous, but at least it works and is a proof of concept. Workarounds with a simple ConfigMap mount do not work, because curl reads ca-certificates.crt, which is not modified in that approach.
Upvotes: 14
Reputation: 2288
If you want to bake the cert in at build time, edit your Dockerfile to add commands that copy the cert from the build context and update the trust store. You could even add this as a layer on top of an image from Docker Hub, etc.
COPY my-cert.crt /usr/local/share/ca-certificates/
RUN update-ca-certificates
If you're trying to update the trust store at runtime, things get more complicated. I haven't done this myself, but you might be able to create a ConfigMap containing the certificate, mount it into your container at the above path, and then use an entrypoint script to run update-ca-certificates before your main process.
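A minimal sketch of such an entrypoint wrapper, assuming the image has the ca-certificates package and the cert is mounted under /usr/local/share/ca-certificates/ as above (the script name and the final command are placeholders):

#!/bin/sh
# docker-entrypoint.sh: refresh the trust store, then hand off to the real process
set -e
update-ca-certificates
exec "$@"   # e.g. the image's original command, passed as arguments

In the Dockerfile (or in the pod spec's command/args) you would then set this script as the ENTRYPOINT and keep the original process as CMD, so the trust store is updated before the application starts.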
Upvotes: 20