Igor Stepin

Reputation: 398

How to allow access to kubernetes api using egress network policy?

An init container running a kubectl get pod command is used to get the ready status of another pod.
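For reference, the setup is roughly like this (the pod, image and service account names are placeholders, not the real manifest):

apiVersion: v1
kind: Pod
metadata:
  name: my-pod                    # placeholder name
spec:
  serviceAccountName: pod-reader  # assumes a service account allowed to get pods
  initContainers:
  - name: wait-for-other-pod
    image: bitnami/kubectl        # any image that ships kubectl; placeholder
    command:
    - sh
    - -c
    - |
      until kubectl get pod other-pod \
        -o jsonpath='{.status.containerStatuses[0].ready}' | grep -q true; do
        sleep 5
      done
  containers:
  - name: main
    image: busybox                # placeholder
    command: ["sleep", "3600"]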

After an egress NetworkPolicy was turned on, the init container can no longer access the Kubernetes API: Unable to connect to the server: dial tcp 10.96.0.1:443: i/o timeout. The CNI is Calico.

Several rules were tried, but none of them work (service and master host IPs, different CIDR masks):

...
  egress:
  - to:
    - ipBlock:
        cidr: 10.96.0.1/32
    ports:
    - protocol: TCP
      port: 443
...

or using a namespaceSelector (tried with the default and kube-system namespaces):

...
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          name: default
    ports:
    - protocol: TCP
      port: 443
...

It looks like the ipBlock rules just don't work, and the namespaceSelector rules don't work because the Kubernetes API server is not a standard pod.

Can it be configured? Kubernetes is 1.9.5, Calico is 3.1.1.

The problem still exists with GKE 1.13.7-gke.8 and Calico 3.2.7.

Upvotes: 27

Views: 7920

Answers (4)

aude

Reputation: 1872

You can allow egress traffic to the Kubernetes API endpoints' IPs and ports.

You can get the endpoints by running $ kubectl get endpoints kubernetes -oyaml.
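The output has roughly this shape (the address here is just an example, your cluster will show its own):

apiVersion: v1
kind: Endpoints
metadata:
  name: kubernetes
  namespace: default
subsets:
- addresses:
  - ip: 172.20.0.10    # example API server address; yours will differ
  ports:
  - name: https
    port: 443
    protocol: TCP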

I don't understand why it doesn't work to simply allow traffic to the cluster IP of the kubernetes service in the default namespace (the value of the KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT environment variables), but in any case, allowing traffic to the underlying endpoints does work.

To do this in a Helm chart template, you could do something like:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ...
spec:
  podSelector: ...
  policyTypes:
    - Egress
  egress:
    {{- range (lookup "v1" "Endpoints" "default" "kubernetes").subsets }}
    - to:
        {{- range .addresses }}
        - ipBlock:
            cidr: {{ .ip }}/32
        {{- end }}
      ports:
        {{- range .ports }}
        - protocol: {{ .protocol }}
          port: {{ .port }}
        {{- end }}
    {{- end }}

Upvotes: 2

adelmoradian

Reputation: 476

I had the same issue when using a CiliumNetworkPolicy with Helm. For anyone having a similar issue, something like this should work:

{{- $kubernetesEndpoint := lookup "v1" "Endpoints" "default" "kubernetes" -}}
{{- $kubernetesAddress := (first $kubernetesEndpoint.subsets).addresses -}}
{{- $kubernetesIP := (first $kubernetesAddress).ip -}}
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  ...
spec:
  ...
  egress:
    - toCIDRSet:
        - cidr: {{ $kubernetesIP }}/32
    ...

Upvotes: 2

Dave McNeill

Reputation: 473

You need to get the real IP of the master using kubectl get endpoints --namespace default kubernetes and create an egress policy that allows it.

---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1 
metadata:
  name: allow-apiserver
  namespace: test
spec:
  policyTypes:
  - Egress
  podSelector: {}
  egress:
  - ports:
    - port: 443
      protocol: TCP
    to:
    - ipBlock:
        cidr: x.x.x.x/32

Upvotes: 15

Christian

Reputation: 1697

We aren't on GCP, but the same should apply.

We query AWS for the CIDR of our master nodes and use this data as values for the Helm charts that create the NetworkPolicy for k8s API access.

In our case the masters are part of an auto-scaling group, so we need the CIDR. In your case the IP might be enough.
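A rough sketch of that approach, assuming the queried CIDR is passed to the chart as a value called apiServerCIDR (the name is just for illustration):

# values.yaml, filled in from the cloud query:
#   apiServerCIDR: 10.0.0.0/28

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-apiserver
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: {{ .Values.apiServerCIDR }}
    ports:
    - protocol: TCP
      port: 443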

Upvotes: 0
