Reputation: 15
I deployed HashiCorp Vault with 3 replicas. Pod vault-0 is running, but the other two pods are in Pending status.
This is my override YAML:
# Vault Helm Chart Value Overrides
global:
  enabled: true
  tlsDisable: true

injector:
  enabled: true
  # Use the Vault K8s Image https://github.com/hashicorp/vault-k8s/
  image:
    repository: "hashicorp/vault-k8s"
    tag: "0.9.0"

  resources:
    requests:
      memory: 256Mi
      cpu: 250m
    limits:
      memory: 256Mi
      cpu: 250m

  affinity: ""

server:
  auditStorage:
    enabled: true

  standalone:
    enabled: false

  image:
    repository: "hashicorp/vault"
    tag: "1.6.3"

  resources:
    requests:
      memory: 4Gi
      cpu: 1000m
    limits:
      memory: 8Gi
      cpu: 1000m

  ha:
    enabled: true
    replicas: 3

    raft:
      enabled: true
      setNodeId: true

      config: |
        ui = true

        listener "tcp" {
          tls_disable = true
          address = "[::]:8200"
          cluster_address = "[::]:8201"
        }

        storage "raft" {
          path = "/vault/data"
        }

        service_registration "kubernetes" {}

    config: |
      ui = true

      listener "tcp" {
        tls_disable = true
        address = "[::]:8200"
        cluster_address = "[::]:8201"
      }

      service_registration "kubernetes" {}

# Vault UI
ui:
  enabled: true
  serviceType: "ClusterIP"
  externalPort: 8200
I ran kubectl describe on the pending pods and can see the following status message. I am not sure whether I am adding the correct affinity settings in the override file, and I'm not sure what I am doing wrong. I am using the Vault Helm chart to deploy to a Docker Desktop local cluster. Appreciate any help.
Upvotes: 0
Views: 1933
Reputation: 2229
There are a few problems in your values.yaml file.
1. You set

server:
  auditStorage:
    enabled: true

but you didn't specify how the PVC should be created or which storage class to use. The chart expects you to do that if you enable audit storage. Look at: https://github.com/hashicorp/vault-helm/blob/v0.9.0/values.yaml#L443 Turn it off if you are just testing on your local machine, or specify the storage config.
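If you do want to keep audit storage enabled, a minimal sketch would be something like the following; the size and the hostpath storage class (Docker Desktop's default provisioner) are assumptions, so substitute a storage class that actually exists in your cluster:

server:
  auditStorage:
    enabled: true
    size: 10Gi                 # assumed size, adjust as needed
    storageClass: hostpath     # assumption: default storage class on Docker Desktop
    accessMode: ReadWriteOnce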
2. You set an empty affinity variable for the injector but not for the server. By default the chart applies a podAntiAffinity rule that keeps Vault server pods on separate nodes, so on a single-node cluster like Docker Desktop only one replica can be scheduled and the other two stay Pending. Set

affinity: ""

for the server too. Look at: https://github.com/hashicorp/vault-helm/blob/v0.9.0/values.yaml#L338
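For reference, the default from the linked values.yaml looks roughly like this, which is what blocks more than one server pod per node:

server:
  affinity: |
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app.kubernetes.io/name: {{ template "vault.name" . }}
              app.kubernetes.io/instance: "{{ .Release.Name }}"
              component: server
          topologyKey: kubernetes.io/hostname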
3. An uninitialised and sealed Vault cluster is not really usable. You need to initialize and unseal Vault before it becomes ready. That means setting up a readinessProbe. Something like this:

server:
  readinessProbe:
    path: "/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204"
4. Last one, but this is kinda optional. Those memory requests:

resources:
  requests:
    memory: 4Gi
    cpu: 1000m
  limits:
    memory: 8Gi
    cpu: 1000m

are a bit on the higher side. Setting up an HA cluster of 3 replicas, each requesting 4Gi of memory, might result in Insufficient memory errors - most likely to happen when deploying on a local cluster.
But then again, your local machine might have 32 gigs of memory - I wouldn't know ;) If it doesn't, trim those down to fit on your machine.
So the following values work for me:
# Vault Helm Chart Value Overrides
global:
  enabled: true
  tlsDisable: true

injector:
  enabled: true
  # Use the Vault K8s Image https://github.com/hashicorp/vault-k8s/
  image:
    repository: "hashicorp/vault-k8s"
    tag: "0.9.0"

  resources:
    requests:
      memory: 256Mi
      cpu: 250m
    limits:
      memory: 256Mi
      cpu: 250m

  affinity: ""

server:
  auditStorage:
    enabled: false

  standalone:
    enabled: false

  image:
    repository: "hashicorp/vault"
    tag: "1.6.3"

  resources:
    requests:
      memory: 256Mi
      cpu: 200m
    limits:
      memory: 512Mi
      cpu: 400m

  affinity: ""

  readinessProbe:
    enabled: true
    path: "/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204"

  ha:
    enabled: true
    replicas: 3

    raft:
      enabled: true
      setNodeId: true

      config: |
        ui = true

        listener "tcp" {
          tls_disable = true
          address = "[::]:8200"
          cluster_address = "[::]:8201"
        }

        storage "raft" {
          path = "/vault/data"
        }

        service_registration "kubernetes" {}

    config: |
      ui = true

      listener "tcp" {
        tls_disable = true
        address = "[::]:8200"
        cluster_address = "[::]:8201"
      }

      service_registration "kubernetes" {}

# Vault UI
ui:
  enabled: true
  serviceType: "ClusterIP"
  externalPort: 8200
Upvotes: 2