Reputation: 1500
I am trying to perform a Kubernetes rolling update using Helm v2, but I'm unable to. When I perform a helm upgrade on a slow-starting Tomcat image, the original pod is destroyed before the new one is ready. I would like to achieve zero downtime by incrementally replacing Pod instances with new ones and draining the old ones.
To demonstrate, I created a sample slow Tomcat Docker image, and a Helm chart.
helm install https://github.com/h-q/slowtom/raw/master/docs/slowtom.tgz --name slowtom \
-f https://github.com/h-q/slowtom/raw/master/docs/slowtom/environments/initial.yaml
You can follow the logs by running kubectl logs -f slowtom-sf-0, and once ready you can access the application on http://localhost:30901.
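Not in the original post, but a quick way to tell when the slow Tomcat has finished booting is to poll the NodePort until the index page answers (assuming it is reachable on localhost, as the URL above implies):

# poll the NodePort until Tomcat responds, then report readiness
while ! curl -fsS http://localhost:30901/ >/dev/null; do sleep 5; done && echo "slowtom is ready"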
helm upgrade slowtom https://github.com/h-q/slowtom/raw/master/docs/slowtom.tgz \
-f https://github.com/h-q/slowtom/raw/master/docs/slowtom/environments/upgrade.yaml
The upgrade.yaml is identical to the initial.yaml deployment file except for the image tag version.

Here the original pod is destroyed and the new one starts; meanwhile, users are unable to access the application on http://localhost:30901.
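To watch the replacement happen during the upgrade, something like the following can run in a second terminal (not from the original post; the statefulset name and the app label come from the chart shown further down):

kubectl rollout status statefulset/slowtom-sf
kubectl get pods -l app=slowtom-sf -w

The commands below then reproduce the same scenario from a locally downloaded copy of the chart, with --debug output: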
helm del slowtom --purge
curl -LO https://github.com/h-q/slowtom/raw/master/docs/slowtom.tgz
tar vxfz ./slowtom.tgz
helm install --debug ./slowtom --name slowtom -f ./slowtom/environments/initial.yaml
helm upgrade --debug slowtom ./slowtom -f ./slowtom/environments/upgrade.yaml
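To confirm which chart revision and which values are live at any point, the standard Helm v2 release commands (not shown in the original post) can be used:

helm list
helm history slowtom
helm get values slowtom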
Dockerfile
FROM tomcat:8.5-jdk8-corretto

RUN mkdir /usr/local/tomcat/webapps/ROOT && \
    echo '<html><head><title>Slow Tomcat</title></head><body><h1>Slow Tomcat Now Ready</h1></body></html>' >> /usr/local/tomcat/webapps/ROOT/index.html

RUN echo '#!/usr/bin/env bash' >> /usr/local/tomcat/bin/slowcatalina.sh && \
    echo 'x=2' >> /usr/local/tomcat/bin/slowcatalina.sh && \
    echo 'secs=$(($x * 60))' >> /usr/local/tomcat/bin/slowcatalina.sh && \
    echo 'while [ $secs -gt 0 ]; do' >> /usr/local/tomcat/bin/slowcatalina.sh && \
    echo ' >&2 echo -e "Blast off in $secs\033[0K\r"' >> /usr/local/tomcat/bin/slowcatalina.sh && \
    echo ' sleep 1' >> /usr/local/tomcat/bin/slowcatalina.sh && \
    echo ' : $((secs--))' >> /usr/local/tomcat/bin/slowcatalina.sh && \
    echo 'done' >> /usr/local/tomcat/bin/slowcatalina.sh && \
    echo '>&2 echo "slow cataline done. will now start real catalina"' >> /usr/local/tomcat/bin/slowcatalina.sh && \
    echo 'exec catalina.sh run' >> /usr/local/tomcat/bin/slowcatalina.sh && \
    chmod +x /usr/local/tomcat/bin/slowcatalina.sh

ENTRYPOINT ["/usr/local/tomcat/bin/slowcatalina.sh"]
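For readability, the script that those echo lines assemble into /usr/local/tomcat/bin/slowcatalina.sh is simply a two-minute countdown before handing off to the real Tomcat start script:

#!/usr/bin/env bash
x=2
secs=$(($x * 60))
while [ $secs -gt 0 ]; do
 >&2 echo -e "Blast off in $secs\033[0K\r"
 sleep 1
 : $((secs--))
done
>&2 echo "slow cataline done. will now start real catalina"
exec catalina.sh run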
slowtom/Chart.yaml
apiVersion: v1
description: slow-tomcat Helm chart for Kubernetes
name: slowtom
version: 1.1.2 # whatever
slowtom/values.yaml
# Do not use this file; use the ones from the environments folder instead
slowtom/environments/initial.yaml
# Storefront
slowtom_sf:
  name: "slowtom-sf"
  hasHealthcheck: "true"
  isResilient: "false"
  replicaCount: 2
  aspect_values:
    - name: y_aspect
      value: "storefront"
image:
  repository: hqasem/slow-tomcat
  pullPolicy: IfNotPresent
  tag: 1
env:
  - name: y_env
    value: whatever
slowtom/environments/upgrade.yaml
# Storefront
slowtom_sf:
  name: "slowtom-sf"
  hasHealthcheck: "true"
  isResilient: "false"
  replicaCount: 2
  aspect_values:
    - name: y_aspect
      value: "storefront"
image:
  repository: hqasem/slow-tomcat
  pullPolicy: IfNotPresent
  tag: 2
env:
  - name: y_env
    value: whatever
slowtom/templates/deployment.yaml
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ .Values.slowtom_sf.name }}
  labels:
    chart: "{{ .Chart.Name | trunc 63 }}"
    chartVersion: "{{ .Chart.Version | trunc 63 }}"
    visualize: "true"
    app: {{ .Values.slowtom_sf.name }}
spec:
  replicas: {{ .Values.slowtom_sf.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Values.slowtom_sf.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.slowtom_sf.name }}
        visualize: "true"
    spec:
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: {{ .Values.slowtom_sf.name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          command: ["/usr/local/tomcat/bin/slowcatalina.sh"]
          args: ["whatever"]
          env:
{{ toYaml .Values.env | indent 12 }}
{{ toYaml .Values.slowtom_sf.aspect_values | indent 12 }}
          resources:
{{ toYaml .Values.resources | indent 12 }}
---
slowtom/templates/service.yaml
kind: Service
apiVersion: v1
metadata:
  name: {{ .Values.slowtom_sf.name }}
  labels:
    chart: "{{ .Chart.Name | trunc 63 }}"
    chartVersion: "{{ .Chart.Version | trunc 63 }}"
    app: {{ .Values.slowtom_sf.name }}
    visualize: "true"
    hasHealthcheck: "{{ .Values.slowtom_sf.hasHealthcheck }}"
    isResilient: "{{ .Values.slowtom_sf.isResilient }}"
spec:
  type: NodePort
  selector:
    app: {{ .Values.slowtom_sf.name }}
  sessionAffinity: ClientIP
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      name: http
      nodePort: 30901
---
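Not part of the original question, but to sanity-check what Helm v2 will actually submit to the cluster, the chart can be linted and rendered locally before installing (using the extracted copy from the commands above):

helm lint ./slowtom
helm template ./slowtom -f ./slowtom/environments/initial.yaml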
Upvotes: 4
Views: 2287
Reputation: 1500
I solved this problem by adding Readiness or Startup Probes to my deployment.yaml
slowtom/templates/deployment.yaml
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ .Values.slowtom_sf.name }}
  labels:
    chart: "{{ .Chart.Name | trunc 63 }}"
    chartVersion: "{{ .Chart.Version | trunc 63 }}"
    visualize: "true"
    app: {{ .Values.slowtom_sf.name }}
spec:
  replicas: {{ .Values.slowtom_sf.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Values.slowtom_sf.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.slowtom_sf.name }}
        visualize: "true"
    spec:
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: {{ .Values.slowtom_sf.name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          command: ["/usr/local/tomcat/bin/slowcatalina.sh"]
          args: ["whatever"]
          env:
{{ toYaml .Values.env | indent 12 }}
{{ toYaml .Values.slowtom_sf.aspect_values | indent 12 }}
          resources:
{{ toYaml .Values.resources | indent 12 }}
          readinessProbe:
            httpGet:
              path: /
              port: 8080
            initialDelaySeconds: 60
            periodSeconds: 30
            timeoutSeconds: 1
            failureThreshold: 3
---
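Since the answer mentions startup probes as an alternative: on Kubernetes 1.16+ the same check can be expressed as a startup probe, which holds off the other probes until the slow boot has finished. The numbers below are illustrative for this image's roughly two-minute delay, not taken from the original chart:

          startupProbe:
            httpGet:
              path: /
              port: 8080
            periodSeconds: 10
            failureThreshold: 18   # tolerate up to ~3 minutes of startup time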
Upvotes: 0
Reputation: 3208
Unlike a Deployment, a StatefulSet does not start a new pod before destroying the old one during a rolling update. Instead, the expectation is that you have multiple pods, and they will be replaced one by one. Since you only have 1 replica configured, it must destroy it first. Either increase your replica count to 2 or more, or switch to a Deployment template.
https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#rolling-update
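For reference, a minimal sketch of what switching to a Deployment could look like; only the kind and an explicit rolling-update strategy change, and the surge/unavailable numbers are illustrative rather than taken from the original chart:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.slowtom_sf.name }}
spec:
  replicas: {{ .Values.slowtom_sf.replicaCount }}
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # bring up one new pod before removing an old one
      maxUnavailable: 0  # never drop below the desired replica count
  selector:
    matchLabels:
      app: {{ .Values.slowtom_sf.name }}
  template:
    # ...same pod template (and readiness probe) as in the answer above

Combined with a readiness probe, maxUnavailable: 0 keeps the full replica count serving traffic throughout the upgrade.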
Upvotes: 2