Reputation: 33
How does cross-namespace communication between pods work in Kubernetes? Say the webserver & application pods are in namespace A and the DB is in namespace B. I have created an ExternalName service as well, but it still doesn't work.
Can we have multiple selectors in the deployments.yaml?
frontend-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: mongo-express-service
  namespace: db
spec:
  type: NodePort
  selector:
    app: mongo-express
  ports:
    - protocol: TCP
      port: 8081
      targetPort: 8081
DB-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
  namespace: db
spec:
  type: ExternalName
  externalName: mongo-express-service.frontend.svc.cluster.local
  selector:
    app: mongodb
  ports:
    - protocol: TCP
      port: 27017
      targetPort: 27017
$ kubectl get svc -n db
NAME                    TYPE           CLUSTER-IP     EXTERNAL-IP                                        PORT(S)          AGE
mongo-express-service   NodePort       10.103.8.140   <none>                                             8081:32468/TCP   5h20m
mongodb-service         ExternalName   <none>         mongo-express-service.frontend.svc.cluster.local   27017/TCP        5h19m
$ kubectl get svc -n frontend
NAME                    TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
mongo-express-service   NodePort   10.102.174.70   <none>        8081:30928/TCP   5h20m
Upvotes: 0
Views: 1540
Reputation: 33
Pods in one namespace can communicate with pods in other namespaces without any ExternalName service or network policies. By default, all pod-to-pod communication is allowed unless a denial of traffic is configured via network policies.
I created the DB service with a ClusterIP as usual, associating it with the mongodb app in the selector section:
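For reference, here is a minimal sketch of that Service manifest; the selector and port are taken from the describe output below, the rest is standard boilerplate:

apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
  namespace: db
spec:
  type: ClusterIP            # default type, stated explicitly for clarity
  selector:
    app: mongodb             # must match the labels on the MongoDB pods
  ports:
    - protocol: TCP
      port: 27017
      targetPort: 27017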
$ kubectl.exe describe svc mongodb-service -n db
Name:              mongodb-service
Namespace:         db
Labels:            <none>
Annotations:       <none>
Selector:          app=mongodb
Type:              ClusterIP
IP Families:       <none>
IP:                10.109.25.141
IPs:               10.109.25.141
Port:              <unset>  27017/TCP
TargetPort:        27017/TCP
Endpoints:         10.0.0.48:27017
Session Affinity:  None
Events:            <none>
$ kubectl.exe get svc -n db
NAME              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)     AGE
mongodb-service   ClusterIP   10.109.25.141   <none>        27017/TCP   47m
Then, in the frontend namespace, I updated the ConfigMap with the FQDN of the form <service-name>.<other-namespace>.svc.cluster.local (here mongodb-service.db.svc.cluster.local).
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb-configmap
  namespace: frontend
data:
  database_url: mongodb-service.db.svc.cluster.local
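For completeness, a hedged sketch of how the mongo-express Deployment could consume that ConfigMap value; the Deployment name and labels here are assumptions, while ME_CONFIG_MONGODB_SERVER is the environment variable mongo-express reads the DB host from:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-express        # assumed name
  namespace: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mongo-express
  template:
    metadata:
      labels:
        app: mongo-express
    spec:
      containers:
        - name: mongo-express
          image: mongo-express
          ports:
            - containerPort: 8081
          env:
            - name: ME_CONFIG_MONGODB_SERVER   # DB host consumed by mongo-express
              valueFrom:
                configMapKeyRef:
                  name: mongodb-configmap
                  key: database_url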
The pods in the frontend namespace are now working fine:
$ kubectl.exe get pods,svc -n frontend
NAME                                 READY   STATUS    RESTARTS   AGE
pod/mongo-express-78fcf796b8-m4xhj   1/1     Running   8          19m
pod/mongo-express-78fcf796b8-mf4bf   1/1     Running   8          19m
pod/mongo-express-78fcf796b8-zgjkh   1/1     Running   8          19m

NAME                            TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
service/mongo-express-service   NodePort   10.105.0.117   <none>        8081:31097/TCP   19m
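If name resolution is ever in doubt, a quick check (not from the original post) is to resolve the FQDN from a throwaway pod in the frontend namespace:

$ kubectl run -n frontend dns-test --rm -it --image=busybox:1.36 --restart=Never -- \
    nslookup mongodb-service.db.svc.cluster.local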
Upvotes: 0
Reputation: 5424
You should create a NetworkPolicy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: simple-policy
  namespace: db
spec:
  podSelector:
    matchLabels:
      app: mongodb           # the DB pods this policy applies to
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: frontend
        - podSelector:
            matchLabels:
              app: mongo-express
      ports:
        - protocol: TCP
          port: 27017
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              name: frontend
        - podSelector:
            matchLabels:
              app: mongo-express
      ports:
        - protocol: TCP
          port: 27017
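Note that namespaceSelector matches on namespace labels, so this policy only selects the frontend namespace if it actually carries a name=frontend label; you may need to add it first (on Kubernetes v1.21+ the automatically set kubernetes.io/metadata.name label could be matched instead):

$ kubectl label namespace frontend name=frontend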
You can slim it down to whatever restrictions you like, whether that is the entire namespace, just the two pods, a specific port, or all of them.
Check out more in the docs.
Upvotes: 1