Reputation: 1567
I have an Azure Kubernetes Service cluster running. As part of setting the environment variables for my pod, I installed the csi-secrets-store-provider-azure
Helm chart. The first deployment used the default values and everything was working just fine: I was able to access the Key Vault and store the secrets in a volume attached to the pod.
After some testing I realized that I cannot reload the volume attached to the pod when I need to add a new secret or remove one; I had to delete the volume and deploy it again, which is not ideal for my case. While reading the documentation I found that the installed Helm chart has some values that could help with this, such as:
csi-secrets-store-provider-azure/csi-secrets-store-provider-azure --set secrets-store-csi-driver.enableSecretRotation=true --set secrets-store-csi-driver.syncSecret.enabled=true -n istio-system
With those values enabled and configured, my understanding is that my secrets should be resynced every 2m, which is the driver's default rotation poll interval.
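For completeness, the full Helm command would look roughly like this (the release name csi is a placeholder, and the explicit rotationPollInterval only spells out the 2m default):
helm upgrade --install csi csi-secrets-store-provider-azure/csi-secrets-store-provider-azure \
  -n istio-system \
  --set secrets-store-csi-driver.enableSecretRotation=true \
  --set secrets-store-csi-driver.rotationPollInterval=2m \
  --set secrets-store-csi-driver.syncSecret.enabled=true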
Unfortunately, that was not the case, because I started seeing a random warning across my pods. The warning in the events is the following:
Warning MountRotationFailed 22s (x25 over 4m12s) csi-secrets-store-rotation failed to get node publish secret default/secrets-store-creds, err: secrets "default/secrets-store-creds" not found
My YAML deployments for this look like this:
SecretClassProvider.yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: pod-service-app-config
  namespace: default
spec:
  provider: azure
  secretObjects:
    - secretName: pod-service-app-config
      type: Opaque
      data:
        [.......] my configurations
In the deployment itself I have the following:
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pod-service
  template:
    metadata:
      labels:
        app: pod-service
    spec:
      nodeSelector:
        nodepool: linuxnode00
      containers:
        - name: pod-service
          image: my-image
          ports:
            - containerPort: 80
          env:
            - name: Auth__ClientId
              valueFrom:
                secretKeyRef:
                  name: pod-service-app-config
                  key: KeyToFetch
          volumeMounts:
            - name: pod-service-app-config
              mountPath: "/mnt/pod-service-app-config"
              readOnly: true
      restartPolicy: Always
      volumes:
        - name: pod-service-app-config
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: "pod-service-app-config"
            nodePublishSecretRef:
              name: secrets-store-creds
I have the secret secrets-store-creds set and it is in the same namespace. But I really don't understand what the root cause of the issue is here. If somebody can shed some light on this I would be grateful.
Please don't hesitate to ask if you need any details.
Upvotes: 1
Views: 2623
Reputation: 1991
According to the documentation here, there is a situation where, after v0.1.0, the secrets-store-csi-driver has secret filtering enabled by default. This is to avoid caching secrets unnecessarily, I believe.
To make sure the secret is visible to the driver, the node publish secret needs to be labeled as used, using the following command:
kubectl label secret <node publish secret ref name> secrets-store.csi.k8s.io/used=true
Optionally add -n <namespace> if needed.
If you do not label the secret as used, you will see errors like the following:
failed to get node publish secret , err: secrets "" not found
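You can verify the label was applied with (a quick check; the secret name and namespace here follow the question):
kubectl get secret secrets-store-creds -n default --show-labels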
Upvotes: 0
Reputation: 3861
It looks like you are set up correctly with your Azure Key Vault, AKS, and your Kubernetes secrets. The csi-secrets-store-provider-azure chart is installed and was working fine initially, but you encountered an issue with the secret rotation feature. The error MountRotationFailed suggests that the CSI driver is having trouble fetching the node publish secret, which is necessary for it to communicate with the Azure Key Vault.
Ensure role assignments, i.e. ensure you've assigned the necessary roles on the Azure Key Vault using RBAC for your service principal (arkokasp in my case). The service principal needs at minimum the Key Vault Secrets User role to read secrets.
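A sketch of that role assignment with the Azure CLI (the service principal client ID is a placeholder; the vault and resource group names reuse the examples below):
az role assignment create \
  --role "Key Vault Secrets User" \
  --assignee <service-principal-client-id> \
  --scope $(az keyvault show --name arkokv --resource-group rg-name --query id -o tsv)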
Verify that your SecretProviderClass (pod-service-app-config) has the correct reference to the Azure Key Vault and the correct tenantId. If all is good, you can proceed to ensure that your Kubernetes deployment successfully mounts these secrets into your pod using the Secrets Store CSI driver.
First, make sure that your SecretProviderClass is configured correctly. This class should reference the Azure Key Vault correctly and must specify the secret you want to access, which is dbusername in this case.
Example YAML; modify it as per your own environment.
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: azure-kvname
  namespace: default
spec:
  provider: azure
  secretObjects:
    - secretName: my-kv-secret
      type: Opaque
      data:
        - key: username
          objectName: dbusername
  parameters:
    usePodIdentity: "false"
    keyvaultName: "arkokv"
    objects: |
      array:
        - |
          objectName: dbusername
          objectType: secret
    tenantId: "id-value"
Apply it using:
kubectl apply -f secretproviderclass.yaml
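You can confirm the class was created (name and namespace taken from the example above):
kubectl get secretproviderclass azure-kvname -n default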
Now, deploy a Kubernetes pod that uses this SecretProviderClass.
Example YAML; you can modify it to match your own deployment file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-secrets-demo
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-secrets
  template:
    metadata:
      labels:
        app: nginx-secrets
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
          volumeMounts:
            - name: secret-volume
              mountPath: "/mnt/secrets"
              readOnly: true
      volumes:
        - name: secret-volume
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: "azure-kvname"
            nodePublishSecretRef:
              name: secrets-store-creds
Apply it the same way, using kubectl apply -f nginx-deployment.yaml
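Once the pod is running, you can check that the secrets were mounted and, with syncSecret enabled, that the Kubernetes secret was created (a sketch; the names follow the example above):
kubectl get pods -l app=nginx-secrets -n default
kubectl exec -it deploy/nginx-secrets-demo -n default -- ls /mnt/secrets
kubectl get secret my-kv-secret -n default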
Confirm the existence of the secrets in the Azure Key Vault. Also cross-check the SecretProviderClass YAML, assuming both dbusername and dbpassword exist in your Key Vault and the permissions are appropriately set.
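You can confirm the secrets exist in the vault with the Azure CLI (vault and secret names taken from the example):
az keyvault secret show --vault-name arkokv --name dbusername --query name
az keyvault secret show --vault-name arkokv --name dbpassword --query name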
Example SecretProviderClass YAML:
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: azure-kvname
  namespace: default
spec:
  provider: azure
  secretObjects:
    - secretName: example-secret
      type: Opaque
      data:
        - key: username            # Key used in your Kubernetes secret to reference dbusername
          objectName: dbusername   # Name of the secret in Azure Key Vault
        - key: password            # Key used in your Kubernetes secret to reference dbpassword
          objectName: dbpassword   # Name of the secret in Azure Key Vault
  parameters:
    usePodIdentity: "false"          # Set to "true" if using managed identities, otherwise ensure your service principal is configured correctly
    keyvaultName: "arkokv"           # Name of your Azure Key Vault
    cloudName: "AzurePublicCloud"    # Optional if default
    objects: |
      array:
        - |
          objectName: dbusername
          objectType: secret         # Object type in Key Vault (could be secret, key, or cert)
          objectVersion: ""          # Leave blank to always fetch the latest version
        - |
          objectName: dbpassword
          objectType: secret
          objectVersion: ""
    resourceGroup: "rg-name"                   # Resource group where your Key Vault is located
    subscriptionId: "abcd-44fb-efg-hijklmnop"  # Your Azure subscription ID
    tenantId: "id value"                       # Your Azure tenant ID
This configuration will allow the CSI driver to fetch the specified secrets from Azure Key Vault and present them as files inside the pods that use this SecretProviderClass.
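With syncSecret enabled, the mirrored Kubernetes secret and its keys can be checked as well (a sketch; names follow the example above):
kubectl get secret example-secret -n default -o jsonpath='{.data.username}' | base64 --decode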
Upvotes: 0