IT_novice

Reputation: 1259

Kubernetes - Can you build a cluster with the master node on x86 and a worker node on ARM?

I have an Intel NUC (i5) and a Raspberry Pi Model B. I tried to create a Kubernetes cluster with the Intel NUC as the master node and the Raspberry Pi as a worker node. With this setup, the worker node keeps crashing. Here's the output. This only happens with this setup; if I create a cluster from two Raspberry Pis (one master, one worker), it works fine.

What am I doing wrong?

sudo kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY     STATUS             RESTARTS   AGE
kube-system   etcd-ubuntu                             1/1       Running            0          13h
kube-system   kube-apiserver-ubuntu                   1/1       Running            0          13h
kube-system   kube-controller-manager-ubuntu          1/1       Running            0          13h
kube-system   kube-dns-6f4fd4bdf-fqmmt                3/3       Running            0          13h
kube-system   kube-proxy-46ddk                        0/1       CrashLoopBackOff   5          3m
kube-system   kube-proxy-j48fc                        1/1       Running            0          13h
kube-system   kube-scheduler-ubuntu                   1/1       Running            0          13h
kube-system   kubernetes-dashboard-5bd6f767c7-nh6hz   1/1       Running            0          13h
kube-system   weave-net-2bnzq                         2/2       Running            0          13h
kube-system   weave-net-7hr54                         1/2       CrashLoopBackOff   3          3m

I examined the logs for kube-proxy and found the following entry:

standard_init_linux.go:178: exec user process caused "exec format error"

This seems to stem from the image picked up being the ARM build as opposed to the x86 one. Here's the pod spec:

{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "kube-proxy-5xc9c",
    "generateName": "kube-proxy-",
    "namespace": "kube-system",
    "selfLink": "/api/v1/namespaces/kube-system/pods/kube-proxy-5xc9c",
    "uid": "a227b43b-27ef-11e8-8cf2-b827eb03776e",
    "resourceVersion": "22798",
    "creationTimestamp": "2018-03-15T01:24:40Z",
    "labels": {
      "controller-revision-hash": "3203044440",
      "k8s-app": "kube-proxy",
      "pod-template-generation": "1"
    },
    "ownerReferences": [
      {
        "apiVersion": "extensions/v1beta1",
        "kind": "DaemonSet",
        "name": "kube-proxy",
        "uid": "361aca09-27c9-11e8-a102-b827eb03776e",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "kube-proxy",
        "configMap": {
          "name": "kube-proxy",
          "defaultMode": 420
        }
      },
      {
        "name": "xtables-lock",
        "hostPath": {
          "path": "/run/xtables.lock",
          "type": "FileOrCreate"
        }
      },
      {
        "name": "lib-modules",
        "hostPath": {
          "path": "/lib/modules",
          "type": ""
        }
      },
      {
        "name": "kube-proxy-token-kzt5h",
        "secret": {
          "secretName": "kube-proxy-token-kzt5h",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "kube-proxy",
        "image": "gcr.io/google_containers/kube-proxy-arm:v1.9.4",
        "command": [
          "/usr/local/bin/kube-proxy",
          "--config=/var/lib/kube-proxy/config.conf"
        ],
        "resources": {},
        "volumeMounts": [
          {
            "name": "kube-proxy",
            "mountPath": "/var/lib/kube-proxy"
          },
          {
            "name": "xtables-lock",
            "mountPath": "/run/xtables.lock"
          },
          {
            "name": "lib-modules",
            "readOnly": true,
            "mountPath": "/lib/modules"
          },
          {
            "name": "kube-proxy-token-kzt5h",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "IfNotPresent",
        "securityContext": {
          "privileged": true
        }
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "serviceAccountName": "kube-proxy",
    "serviceAccount": "kube-proxy",
    "nodeName": "udubuntu",
    "hostNetwork": true,
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "node-role.kubernetes.io/master",
        "effect": "NoSchedule"
      },
      {
        "key": "node.cloudprovider.kubernetes.io/uninitialized",
        "value": "true",
        "effect": "NoSchedule"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute"
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute"
      },
      {
        "key": "node.kubernetes.io/disk-pressure",
        "operator": "Exists",
        "effect": "NoSchedule"
      },
      {
        "key": "node.kubernetes.io/memory-pressure",
        "operator": "Exists",
        "effect": "NoSchedule"
      }
    ]
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2018-03-15T01:24:45Z"
      },
      {
        "type": "Ready",
        "status": "False",
        "lastProbeTime": null,
        "lastTransitionTime": "2018-03-15T01:35:41Z",
        "reason": "ContainersNotReady",
        "message": "containers with unready status: [kube-proxy]"
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2018-03-15T01:24:46Z"
      }
    ],
    "hostIP": "192.168.178.24",
    "podIP": "192.168.178.24",
    "startTime": "2018-03-15T01:24:45Z",
    "containerStatuses": [
      {
        "name": "kube-proxy",
        "state": {
          "waiting": {
            "reason": "CrashLoopBackOff",
            "message": "Back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-5xc9c_kube-system(a227b43b-27ef-11e8-8cf2-b827eb03776e)"
          }
        },
        "lastState": {
          "terminated": {
            "exitCode": 1,
            "reason": "Error",
            "startedAt": "2018-03-15T01:40:51Z",
            "finishedAt": "2018-03-15T01:40:51Z",
            "containerID": "docker://866dd8e7175bd71557b9dcfc84716a0f3abd634d5d78c94441f971b8bf24cd0d"
          }
        },
        "ready": false,
        "restartCount": 8,
        "image": "gcr.io/google_containers/kube-proxy-arm:v1.9.4",
        "imageID": "docker-pullable://gcr.io/google_containers/kube-proxy-arm@sha256:c6fa0de67fb6dbbb0009b2e6562860d1f6da96574d23617726e862f35f9344e7",
        "containerID": "docker://866dd8e7175bd71557b9dcfc84716a0f3abd634d5d78c94441f971b8bf24cd0d"
      }
    ],
    "qosClass": "BestEffort"
  }
}

Upvotes: 3

Views: 2685

Answers (2)

Ofir Makmal

Reputation: 501

Yes, it's possible, and I've just done that for one of my customers.

Basically, the issue is that the kube-proxy DaemonSet deployed automatically on the master is compiled for x64, since in your setup the master is x64 while the worker nodes are ARM.

When you add ARM nodes to the cluster, the DaemonSet tries to deploy the x64 image on them, and fails.

You'll need to edit the default DaemonSet after installation so it selects only x64 nodes, and deploy a second DaemonSet for the ARM nodes. This gist walks you through it: Multiplatform (amd64 and arm) Kubernetes cluster setup
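To sketch the idea (this is illustrative, not the gist verbatim: the exact image name and tag must match your cluster's Kubernetes version, and the remaining fields should be copied from your existing DaemonSet), you pin the default kube-proxy DaemonSet to amd64 nodes via a nodeSelector, then create an ARM twin that pulls the ARM image:

```yaml
# Abridged, hypothetical ARM variant of the kube-proxy DaemonSet.
# The default kube-proxy DaemonSet gets the same nodeSelector,
# but with the value "amd64" and the amd64 image.
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-proxy-arm
  namespace: kube-system
spec:
  template:
    metadata:
      labels:
        k8s-app: kube-proxy
    spec:
      nodeSelector:
        beta.kubernetes.io/arch: arm          # schedule only on ARM nodes
      containers:
      - name: kube-proxy
        image: gcr.io/google_containers/kube-proxy-arm:v1.9.4   # ARM build
        # ...remaining fields copied from the original kube-proxy DaemonSet
```

With both DaemonSets in place, each node gets a kube-proxy pod built for its own CPU architecture, and the exec format error goes away.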

Hope this helps, Ofir.

Upvotes: 6

It is perfectly possible to do that, but you need to ensure that pods for a given CPU architecture are launched on appropriate machines. Use nodeSelector or node affinity. Otherwise you get what you are experiencing: amd64 binaries failing to run on ARM, or vice versa.

Hint: use the beta.kubernetes.io/arch label to distinguish between CPU architectures.
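For example, a pod that must run on the ARM nodes can pin itself to them with a nodeSelector on that label (a minimal sketch; the pod name and image are placeholders, and the image must itself be an ARM build):

```yaml
# Hypothetical pod pinned to ARM nodes via the architecture label.
apiVersion: v1
kind: Pod
metadata:
  name: arm-only-example
spec:
  nodeSelector:
    beta.kubernetes.io/arch: arm    # only schedule on ARM nodes
  containers:
  - name: app
    image: arm32v7/busybox          # placeholder: must be an ARM image
    command: ["sleep", "3600"]
```

The scheduler will then refuse to place this pod on the x86 master, so the binary and the CPU always match.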

Upvotes: 1
