Reputation: 13
How can egress from a Kubernetes pod be limited to only specific FQDN/DNS with Azure CNI Network Policies?
This is something that can be achieved with:
Istio
apiVersion: config.istio.io/v1alpha2
kind: EgressRule
metadata:
  name: googleapis
  namespace: default
spec:
  destination:
    service: "*.googleapis.com"
  ports:
    - port: 443
      protocol: https
Cilium
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
name: "fqdn"
spec:
endpointSelector:
matchLabels:
app: some-pod
egress:
- toFQDNs:
- matchName: "api.twitter.com"
- toEndpoints:
- matchLabels:
"k8s:io.kubernetes.pod.namespace": kube-system
"k8s:k8s-app": kube-dns
toPorts:
- ports:
- port: "53"
protocol: ANY
rules:
dns:
- matchPattern: "*"
OpenShift
apiVersion: network.openshift.io/v1
kind: EgressNetworkPolicy
metadata:
  name: default-rules
spec:
  egress:
    - type: Allow
      to:
        dnsName: www.example.com
    - type: Deny
      to:
        cidrSelector: 0.0.0.0/0
How can something similar be done with Azure CNI Network Policies?
Upvotes: 1
Views: 6686
Reputation: 11
As other answers have indicated, there is no official solution from Azure.
I developed a controller with a CRD that lets you create DNS-based egress rules, called FQDNNetworkPolicies. I have been using it successfully for a while on Azure AKS. It is forked from the archived project from Google mentioned in another answer, but significantly improved.
For this to work optimally, though, you need a CoreDNS plugin (see the README), which is difficult to install on managed AKS. I accomplished this with a Kyverno policy that mutates the CoreDNS pods (a rough sketch follows below). A bit hacky, but it works.
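For reference, a rough sketch of such a Kyverno mutation, assuming the CoreDNS plugin is baked into a custom CoreDNS image; the policy name, image, and label selector below are illustrative assumptions, not details taken from the project's README:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: coredns-custom-image        # hypothetical name
spec:
  rules:
    - name: swap-coredns-image
      match:
        any:
          - resources:
              kinds:
                - Pod
              namespaces:
                - kube-system
              selector:
                matchLabels:
                  k8s-app: kube-dns
      mutate:
        patchStrategicMerge:
          spec:
            containers:
              # (name) is a Kyverno conditional anchor: only the container
              # named "coredns" gets the patched image.
              - (name): coredns
                image: registry.example.com/coredns-with-fqdn-plugin:1.9.3  # illustrative image
Note that Kyverno's default resourceFilters typically exclude kube-system, so the Kyverno configuration has to be adjusted before a mutation like this will ever see the CoreDNS pods.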
Upvotes: 0
Reputation: 19
In case someone hits this page from Google:
I found a solution that works nicely on my cloud provider (OpenTelekomCloud) and probably will on many others.
There is a project called gke-fqdnnetworkpolicies-golang.
When you define a custom resource like this
apiVersion: networking.gke.io/v1alpha3
kind: FQDNNetworkPolicy
metadata:
  name: allow-test
  namespace: test1
spec:
  podSelector: {}
  egress:
    - to:
        - fqdns:
            - heise.de
      ports:
        - port: 443
          protocol: TCP
        - port: 80
          protocol: TCP
the controller will resolve the FQDNs, produce the final NetworkPolicy, and update the records every 30 seconds. This is what the final policy looks like:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-test
  namespace: test1
  annotations:
    fqdnnetworkpolicies.networking.gke.io/owned-by: allow-test
spec:
  podSelector: {}
  egress:
    - ports:
        - protocol: TCP
          port: 443
        - protocol: TCP
          port: 80
      to:
        - ipBlock:
            cidr: 128.65.210.8/32
  policyTypes:
    - Ingress
    - Egress
I had to append the following permissions to the ClusterRole fqdnnetworkpolicies-manager-role in the YAML (downloaded from the release page) to make it work outside GKE:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fqdnnetworkpolicies-manager-role
rules:
  ...
  - apiGroups:
      - networking.k8s.io
    resources:
      - networkpolicies
    verbs:
      - create
      - delete
      - get
      - list
      - patch
      - update
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - networkpolicies/status
    verbs:
      - get
      - patch
      - update
Upvotes: 1
Reputation: 2807
At the moment, network policies with FQDN/DNS rules are not supported on AKS.
If you use Azure CNI with the Azure network policy plugin, you get the standard Kubernetes Network Policies.
If you use Azure CNI with the Calico network policy plugin, you get advanced capabilities such as Global Network Policies (sketched below), but not FQDN/DNS rules; unfortunately, that is a paid feature of Calico Cloud.
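For illustration, a Calico GlobalNetworkPolicy (a cluster-wide, non-namespaced policy, typically applied with calicoctl) looks roughly like the sketch below; the name, selector, and rules are made-up examples, and note that destinations are still expressed as label selectors or CIDRs rather than FQDNs:
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: restrict-egress            # illustrative example
spec:
  # Applies to every workload endpoint in the cluster
  selector: all()
  types:
    - Egress
  egress:
    # Allow DNS lookups against kube-dns/CoreDNS
    - action: Allow
      protocol: UDP
      destination:
        selector: k8s-app == 'kube-dns'
        ports:
          - 53
    # Deny all other egress traffic
    - action: Deny
      destination:
        nets:
          - 0.0.0.0/0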
Upvotes: 3