Anil Kumar

Reputation: 461

k8s pods to schedule in both spot and on-demand instances in EKS

We are planning to introduce AWS spot instances in production (non-prod is already running on spot). To achieve HA, we run an HPA with a minimum of 2 replicas for all critical deployments. Because spot instances can be reclaimed at any time, we also want to run on-demand instances, with one pod of each deployment always running on on-demand.

Question:

Is there any way I can split the pods so that one pod of a deployment launches on on-demand and all the other pods (at least one more, since the minimum is 2, plus any added by the HPA) of the same deployment launch on spot instances?

We are already using nodeAffinity and podAntiAffinity since we have multiple node groups for different reasons. Below is a snippet.

        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: category
                operator: In
                values:
                - <some value>
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: <label key>
                operator: In
                values:
                - <label value>
            topologyKey: "kubernetes.io/hostname"
    

Upvotes: 1

Views: 1318

Answers (2)

mrbit01

Reputation: 201

I am using affinity together with topologySpreadConstraints. The following manifest balances pods between on-demand and spot instances, and prefers on-demand via node affinity. You can play with the maxSkew property to have more pods on on-demand than on spot; for example, setting maxSkew: 2 allows the pod counts in the two groups to differ by up to two.

One more thing: the podAntiAffinity rule prevents more than one of these pods from landing on the same node.

You will need to set a label on your nodes; in my case I'm using a node-lifecycle node label whose value is either on-demand or spot.
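If your node groups don't already attach such a label automatically, you can add it by hand for a quick test (the node names below are placeholders):

    # Label one node as on-demand and one as spot (hypothetical node names)
    kubectl label nodes ip-10-0-1-23.ec2.internal node-lifecycle=on-demand
    kubectl label nodes ip-10-0-2-45.ec2.internal node-lifecycle=spot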

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: default
  labels:
    app: nginx
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
      affinity:
        # Prefer on-demand nodes; the scheduler falls back to spot when needed
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            preference:
              matchExpressions:
              - key: node-lifecycle
                operator: In
                values:
                - on-demand
        # Never co-locate two of these pods on the same node
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - topologyKey: kubernetes.io/hostname
            labelSelector:
              matchLabels:
                app: nginx
      # Keep the pod counts on on-demand and spot within maxSkew of each other
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: node-lifecycle
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: nginx
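As an aside: on EKS managed node groups, nodes already carry the eks.amazonaws.com/capacityType label with the value ON_DEMAND or SPOT, so you can spread on that instead of maintaining your own label. A sketch of the changed stanza (the nodeAffinity values would change to ON_DEMAND accordingly):

      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: eks.amazonaws.com/capacityType
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: nginx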

Upvotes: 0

Narain

Reputation: 922

Short answer: No, there is no way to define this per replica. Since you are already using podAntiAffinity, just by adding the same pod labels you can ensure no two replicas stay on the same host (if that's not what you are already doing). Then use a spot interruption handler, such as the AWS Node Termination Handler, to drain and reschedule pods without abrupt downtime during spot interruptions.
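A minimal sketch of installing that handler from the aws/eks-charts Helm repository (assuming Helm 3; the release name and namespace are just examples):

    # Add the AWS EKS charts repo and install the termination handler
    helm repo add eks https://aws.github.io/eks-charts
    helm repo update
    helm install aws-node-termination-handler \
      eks/aws-node-termination-handler \
      --namespace kube-system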

Upvotes: 1
