Silvan van der Veen

Reputation: 13

Does Google Kubernetes Engine support custom node images and/or 10Gbps networking?

We've been setting up a number of private GCP GKE clusters, which work quite well. Each currently has a single node pool of two Container-Optimized OS nodes.

We also have a non-K8s Compute Engine VM in the network that acts as a FreeBSD NFS server and is configured for 10Gbps networking.

When we log in to the K8s nodes, it appears that they do not support 10Gbps networking out of the box. We suspect this because "large-receive-offload" appears to be disabled on the network interface(s).
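
For reference, this is roughly how we checked the offload flags (the interface name eth0 is an assumption; on Container-Optimized OS, ethtool may need to be run from the toolbox):

# Show offload-related features on the node's primary interface
❯ sudo ethtool -k eth0 | grep -i offload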

We have created PersistentVolumeClaims inside the Kubernetes clusters for shares from this file server, and we would like them to benefit from the 10Gbps networking, but we worry that throughput is limited to 1Gbps by default.
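
For context, the PersistentVolume/PersistentVolumeClaim pair looks roughly like this (the server IP, export path, and names below are placeholders, not our real values):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-share
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.2      # placeholder: internal IP of the FreeBSD NFS server
    path: /export/share   # placeholder: exported path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-share
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""    # bind to the pre-provisioned PV above
  volumeName: nfs-share
  resources:
    requests:
      storage: 100Gi
EOF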

Google only seems to offer a few options for the image of its node pools (either Container-Optimized OS or Ubuntu). This is limited both in the GCP Console and in the cluster-creation command.
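
For example, the image type can only be chosen from the published list at creation time (the cluster, pool, and zone names here are placeholders):

# List the image types a zone actually offers:
❯ gcloud container get-server-config --zone us-central1-a

# Node pools accept only those published types, e.g. COS or UBUNTU:
❯ gcloud container node-pools create pool-1 \
    --cluster my-cluster \
    --zone us-central1-a \
    --image-type COS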

My question is:

  • Is it at all possible to support 10Gbps networking somehow in GCP GKE clusters?

Any help would be much appreciated.

Upvotes: 1

Views: 616

Answers (1)

Will R.O.F.

Reputation: 4148

  • Is it at all possible to support 10Gbps networking somehow in GCP GKE clusters?

Yes, GKE natively supports 10Gbps connections out of the box, just like Compute Engine instances, but it does not support custom node images.

A good way to test your bandwidth limit is with iperf3.

I created a GKE cluster with default settings to test the connectivity speed.

I also created a Compute Engine VM named Debian9-Client, which will host our test, as you can see below:

[Screenshot: Cloud Console showing the Debian9-Client VM instance]

  • First, we set up our VM with the iperf3 server running:
❯ gcloud compute ssh debian9-client-us --zone "us-central1-a"

user@debian9-client-us:~$ iperf3 -s -p 7777

-----------------------------------------------------------
Server listening on 7777
-----------------------------------------------------------
  • Then we move to our GKE cluster to run the test from a Pod:
❯ kubectl get nodes
NAME                                 STATUS   ROLES    AGE   VERSION
gke-cluster-1-pool-1-4776b3eb-16t7   Ready    <none>   16m   v1.15.7-gke.23
gke-cluster-1-pool-1-4776b3eb-mp84   Ready    <none>   16m   v1.15.7-gke.23

❯ kubectl run -i --tty --image ubuntu test-shell -- /bin/bash

root@test-shell-845c969686-6h4nl:/# apt update && apt install iperf3 -y

root@test-shell-845c969686-6h4nl:/# iperf3 -c 10.128.0.5 -p 7777

Connecting to host 10.128.0.5, port 7777
[  4] local 10.8.0.6 port 60946 connected to 10.128.0.5 port 7777
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec   661 MBytes  5.54 Gbits/sec  5273    346 KBytes       
[  4]   1.00-2.00   sec  1.01 GBytes  8.66 Gbits/sec  8159    290 KBytes       
[  4]   2.00-3.00   sec  1.08 GBytes  9.31 Gbits/sec  6381    158 KBytes       
[  4]   3.00-4.00   sec  1.00 GBytes  8.62 Gbits/sec  9662    148 KBytes       
[  4]   4.00-5.00   sec  1.08 GBytes  9.27 Gbits/sec  8892    286 KBytes       
[  4]   5.00-6.00   sec  1.11 GBytes  9.51 Gbits/sec  6136    532 KBytes       
[  4]   6.00-7.00   sec  1.09 GBytes  9.32 Gbits/sec  7150    755 KBytes       
[  4]   7.00-8.00   sec   883 MBytes  7.40 Gbits/sec  6973    177 KBytes       
[  4]   8.00-9.00   sec  1.04 GBytes  8.90 Gbits/sec  9104    212 KBytes       
[  4]   9.00-10.00  sec  1.08 GBytes  9.29 Gbits/sec  4993    594 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  9.99 GBytes  8.58 Gbits/sec  72723             sender
[  4]   0.00-10.00  sec  9.99 GBytes  8.58 Gbits/sec                  receiver

iperf Done.

The average transfer rate was 8.58 Gbits/sec in this test, showing that the cluster node is, by default, running on 10Gbps networking.
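
If you want to repeat this measurement against your FreeBSD NFS server itself rather than a test VM, the same approach should work (a sketch; iperf3 is available in FreeBSD's pkg repository, and NFS_SERVER_IP is a placeholder for the server's internal IP):

# On the FreeBSD NFS server:
❯ pkg install -y iperf3
❯ iperf3 -s -p 7777

# From the test Pod created above:
❯ iperf3 -c NFS_SERVER_IP -p 7777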

If I can help you further, just let me know in the comments.

Upvotes: 1
