Reputation: 1869
I'm trying to restrict my openvpn pod so that it can access internal infrastructure, but only within the 'develop' namespace. I started with a simple policy that denies all egress traffic, but I see no effect and no feedback from the cluster that it was applied. I've read the docs, both official and not, and didn't find a solution. Here is my policy:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: policy-openvpn
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: openvpn
  policyTypes:
  - Egress
  egress: []
I applied the network policy above with the kubectl apply -f policy.yaml
command, but I don't see any effect: I'm still able to connect to anything from my openvpn pod. How can I debug this and see what's wrong with my policy?
It seems like a black box to me, and all I can do is trial and error, which doesn't seem like how it should work.
How can I validate that the policy finds the pods and is applied to them?
I'm using the latest Kubernetes cluster provided by GKE.
I noticed that I hadn't checked 'Enable network policy' in the Google Cloud settings, and after I checked it my VPN just stopped working. But I don't know how to inspect what's happening, or why the VPN lets me connect yet blocks all network requests. Very strange. Is there a way to debug this instead of randomly changing stuff?
Upvotes: 20
Views: 59348
Reputation: 385
Even though the accepted answer tells you that the network policy was created, it doesn't show how to test that the policy is applied correctly and behaves as expected.
As an alternative to the exec approach mentioned above (when the image lacks the needed tools), you can use a debug container. Examples of testing egress policies:
Test public internet access:
kubectl -n my_namespace debug my_pod -it --image=quay.io/curl/curl:latest -- \
curl --connect-timeout 5 -I https://example.com
Test internal services access:
kubectl -n my_namespace debug my_pod -it --image=quay.io/curl/curl:latest -- \
curl --connect-timeout 5 -I -k https://kubernetes.default.svc.cluster.local
Depending on your network policies, you should see a successful request (even a non-200 response counts) if the policy allows traffic to that destination, or a connection timeout if your egress policy doesn't allow it.
Upvotes: 0
Reputation: 37
The way to check whether policies are indeed applied to a pod is by testing. A few things can go wrong: network policies are namespaced and rely on labels, and they are only enforced if a network plugin that supports them is installed. Labels are pod properties (in the observed case), and people often forget to check them, and voila, the pod is not restricted. So test it: attach to pods that should or should not be able to reach the target pod and try it out with curl, as sketched below. There are also other tools that may help, though I haven't had an opportunity to test them.
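A minimal sketch of such a test (my-service and the develop namespace are placeholders; the --labels value must match the policy's podSelector so the test pod is actually selected by the policy):
# Throwaway curl pod carrying the same label as the policy's podSelector,
# so an egress policy selecting app=openvpn applies to it too:
kubectl run netpol-test --rm -it --restart=Never \
  --image=curlimages/curl --labels="app=openvpn" --command -- \
  curl --connect-timeout 5 -I http://my-service.develop.svc.cluster.local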
Upvotes: 0
Reputation: 481
Debug with netcat (nc):
$ kubectl exec <openvpnpod> -- nc -zv -w 5 <domain> <port>
P.S.: To deny all egress traffic, you do not need to declare the spec.egress
key as an empty array; omitting it has the same effect:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: policy-openvpn
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: openvpn
  policyTypes:
  - Egress
ref: https://kubernetes.io/docs/reference/kubernetes-api/policy-resources/network-policy-v1/
- egress ([]NetworkPolicyEgressRule) ... If this field is empty then this NetworkPolicy limits all outgoing traffic (and serves solely to ensure that the pods it selects are isolated by default). ...
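And since the original goal was to allow egress only to the 'develop' namespace, a minimal sketch of the complementary allow rule could look like the following (it assumes the target namespace carries a name=develop label; adjust to whatever label your namespace actually has):
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: openvpn-allow-develop
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: openvpn
  policyTypes:
  - Egress
  egress:
  - to:
    # assumption: the develop namespace is labeled name=develop
    - namespaceSelector:
        matchLabels:
          name: develop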
Upvotes: 2
Reputation: 111
You can check the label used for the pod selector by running:
k describe netpol <networkpolicy-name>
Name: <networkpolicy-name>
Namespace: default
Created on: 2020-06-08 15:19:12 -0500 CDT
Labels: <none>
Annotations:
Spec:
  PodSelector: app=nginx
The pod selector shows you which labels this netpol applies to. You can then list all the pods with that label:
k get pods -l app=nginx
NAME READY STATUS RESTARTS AGE
nginx-deployment-f7b9c7bb-5lt8j 1/1 Running 0 19h
nginx-deployment-f7b9c7bb-cf69l 1/1 Running 0 19h
nginx-deployment-f7b9c7bb-cxghn 1/1 Running 0 19h
nginx-deployment-f7b9c7bb-ppw4t 1/1 Running 0 19h
nginx-deployment-f7b9c7bb-v76vr 1/1 Running 0 19h
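Applied to the policy from the question, the same check would be (assuming its selector is app=openvpn):
k get pods -n default -l app=openvpn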
Upvotes: 11
Reputation: 282
GKE uses Calico for implementing network policy. You need to enable network policy for both the master and the nodes before applying a network policy. You can verify whether Calico is enabled by looking for the Calico pods in the kube-system namespace.
kubectl get pods --namespace=kube-system
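If no Calico pods show up, enforcement is most likely not enabled. On GKE it can be turned on with gcloud (a sketch; CLUSTER_NAME is a placeholder, and note that enabling it recreates the node pools):
# Enable the network policy addon on the master, then enforcement on the nodes
gcloud container clusters update CLUSTER_NAME --update-addons=NetworkPolicy=ENABLED
gcloud container clusters update CLUSTER_NAME --enable-network-policy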
To inspect your network policies you can use the following commands:
kubectl get networkpolicy
kubectl describe networkpolicy <networkpolicy-name>
Upvotes: 21