DevOps guy

Reputation: 51

Can't connect Kubernetes pod to RDS Database

I have a frontend application running in a Kubernetes cluster in AWS. It communicates with a Postgres RDS instance running in the same VPC. The RDS instance and the k8s worker nodes are both attached to the same private subnets.

The RDS security group allows all access from the Kubernetes nodes (EC2 instances) to the database. The connection from the nodes is fine, because running telnet <rds-endpoint> 5432 returns Connected to <rds-endpoint>.

However, when the application is deployed to the pods, it is unable to reach the RDS instance. The pods are exposed on the node's private IP address at <private-ip-address>:8080. The worker node security group also allows all outbound traffic.
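For reference, a quick sketch of how to run the same test from inside a pod (using a throwaway busybox pod; adjust the image and endpoint to your setup):

    kubectl run -it --rm netcheck --image=busybox --restart=Never -- \
        telnet <rds-endpoint> 5432

From the pods this fails, while the same telnet from the node succeeds.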

I think it may be a port-forwarding issue, but I don't have much visibility into the connectivity problem. Any assistance is appreciated.

Upvotes: 1

Views: 4308

Answers (1)

welcomeboredom

Reputation: 635

You forgot to mention how you provisioned your Kubernetes cluster. There are two likely cases:

  1. You use the EKS service. In that case a pod IP address is really a secondary IP address of your EC2 instance (worker node), at least if you use the default AWS VPC CNI plugin. All that should be needed is to allow the worker nodes' security group in the security group attached to RDS (see the CLI sketch after this list).
  2. You set up the cluster yourself with kubeadm or similar and use a CNI plugin such as Calico, Flannel, or Weave. This creates the problem, because pod IP addresses are not routed in your VPC; they are completely different from the EC2 instance addresses. What you need in this case is an address translation (masquerade) rule that rewrites the source address of a pod to the IP address of its EC2 instance (see the second sketch below). Follow this guide: https://kubernetes.io/docs/tasks/administer-cluster/ip-masq-agent/
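For case 1, a minimal sketch of the security group rule using the AWS CLI; sg-rds and sg-nodes are placeholders for your RDS and worker node security group IDs:

    # Allow Postgres (5432) from anything carrying the node security group.
    # With the AWS VPC CNI, pod traffic leaves with a VPC-routable address
    # covered by the node security group, so this also covers the pods.
    aws ec2 authorize-security-group-ingress \
        --group-id sg-rds \
        --protocol tcp \
        --port 5432 \
        --source-group sg-nodes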
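For case 2, a sketch of an ip-masq-agent config, assuming a pod network of 192.168.0.0/16 (e.g. Calico's default) and an RDS instance with a 10.x VPC address; adjust both to your setup. Traffic to destinations not listed in nonMasqueradeCIDRs is masqueraded to the node IP, so RDS sees connections coming from the EC2 instance:

    # Only the pod network keeps its pod source address; everything else
    # (including the RDS endpoint in the VPC) is SNATed to the node IP.
    cat <<'EOF' > config
    nonMasqueradeCIDRs:
      - 192.168.0.0/16
    resyncInterval: 60s
    EOF
    kubectl create configmap ip-masq-agent --from-file=config --namespace=kube-system

The ip-masq-agent DaemonSet itself has to be deployed as well; the linked guide covers that.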

Upvotes: 1
