Greedence

Reputation: 41

Internal network container name resolution

I am currently trying to set up my homelab using Podman, hosting multiple containers on a Fedora server. The backend containers should have no internet/host access but should be able to communicate with one another by container name. Their services are then reachable via a reverse proxy (in my case nginx) that is connected to both the host and the various backend containers.

Self-hosted DNS provides the relevant subdomain routes. Side note: for reasons unrelated to the question, I have to do all the Podman stuff as sudo. Also, SELinux is active.
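To illustrate the routing, the proxy forwards requests to the backend containers by name, roughly along these lines (a minimal sketch; the server name and upstream port are placeholders, not my actual config):

server {
    listen 80;
    server_name a.example.lan;

    location / {
        # "A" is resolved via Podman's internal DNS (aardvark-dns)
        # on the shared backend network; port 8080 is a placeholder.
        proxy_pass http://A:8080;
    }
}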

To achieve this I created two networks, both of type bridge, with the backend one set to internal: true. Nginx is able to call the containers and the backend containers can't access the internet, but they also can't reach each other by name. Within the backend network they are pingable, so they are there, but name resolution fails. All docs/posts that I have read so far state that it should be possible to resolve the container names, so what's going on? (Running nslookup in a backend container shows that the gateway is not reachable; but since it is on the same subnet this should work[?])
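A quick sanity check for this symptom is to confirm that DNS is enabled on the backend network and that aardvark-dns actually answers on the network's gateway. A sketch; the network name prefix and the gateway address 10.89.0.1 are placeholders for whatever podman network inspect reports on your system:

# Check whether Podman enabled its internal DNS on the network
# (look for "dns_enabled": true in the output).
sudo podman network inspect homelab_no-internet

# From inside a backend container, query the gateway directly;
# aardvark-dns should answer on port 53 of the gateway IP.
sudo podman exec -it A nslookup B 10.89.0.1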

Stripped podman-compose file:

version: '3'

networks:
  no-internet:
    driver: bridge
    internal: true
  internet:
    driver: bridge
services:
  A:
    container_name: A
    networks:
      - no-internet
    depends_on:
      - B
    # Stuff related to A
  B:
    container_name: B
    networks:
      - no-internet
    # Stuff related to B
  proxy:
    networks:
      - no-internet
      - internet
    ports:
      - 80:80
    depends_on:
      - A
    # Stuff related to proxy

Upvotes: 4

Views: 2010

Answers (3)

AmanicA

Reputation: 5505

In my case the ufw firewall was blocking it, so this worked for me:

sudo ufw allow in on podman1

See also: https://stackoverflow.com/a/76326163/381083
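The interface name (podman1 above) depends on the network, so it may differ on your system. Two ways to find it; the second assumes the podman 4.x network inspect output format:

# List bridge interfaces on the host; Podman networks usually
# appear as podman0, podman1, ...
ip -brief link show type bridge

# Or read the interface name from the network definition itself.
sudo podman network inspect no-internet --format '{{.NetworkInterface}}'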

Upvotes: 1

Jim Paris

Reputation: 898

On my system, this happened because I had an instance of bind9 running on the host configured with listen-on port 53 { any };. This meant that named was binding to the host's IP on the podman-created internal network (10.88.0.1) on the same port 53 that aardvark-dns was trying to use, so it was preventing aardvark-dns from responding.

The fix was to change named.conf.options to not bind on any.
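For illustration, the change amounts to replacing the any match list with the specific addresses named should serve; a sketch of named.conf.options where the addresses are placeholders:

options {
    // Listen only where named should serve queries, leaving
    // 10.88.0.1:53 free for aardvark-dns (addresses are examples).
    listen-on port 53 { 127.0.0.1; 192.168.1.10; };
};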

Upvotes: 1

larsks

Reputation: 312138

I can't reproduce that problem.

First, I converted your example docker-compose.yaml into something we can actually run:

version: '3'

networks:
  no-internet:
    driver: bridge
    internal: true
  internet:
    driver: bridge

services:
  service1:
    image: docker.io/alpinelinux/darkhttpd
    networks:
      - no-internet
  service2:
    image: docker.io/alpinelinux/darkhttpd
    networks:
      - no-internet
  proxy:
    image: docker.io/alpinelinux/darkhttpd
    networks:
    - no-internet
    - internet

I'm using podman 4.5.1 and podman-compose 1.0.6.

If I bring up the above environment...

podman-compose up

It looks like name lookups work as expected:

$ podman-compose exec service1 sh
/ $ getent hosts service1
10.89.1.16        service1.dns.podman  service1.dns.podman service1
/ $ getent hosts service2
10.89.1.17        service2.dns.podman  service2.dns.podman service2
/ $ getent hosts proxy
10.89.1.15        proxy.dns.podman  proxy.dns.podman proxy

And just to verify things, I checked that I can successfully reach the web service running in each container:

/ $ wget -O /dev/null service1:8080
Connecting to service1:8080 (10.89.1.16:8080)
saving to '/dev/null'
null                 100% |*********************************************************|   191  0:00:00 ETA
'/dev/null' saved
/ $ wget -O /dev/null service2:8080
Connecting to service2:8080 (10.89.1.17:8080)
saving to '/dev/null'
null                 100% |*********************************************************|   191  0:00:00 ETA
'/dev/null' saved
/ $ wget -O /dev/null proxy:8080
Connecting to proxy:8080 (10.89.1.15:8080)
saving to '/dev/null'
null                 100% |*********************************************************|   191  0:00:00 ETA
'/dev/null' saved

I see the same behavior in the service2 and proxy containers.

From the "backend" containers (service1 and service2) I have no access to external sites:

$ podman-compose exec service1 sh
/ $ wget -O/dev/null google.com
Connecting to google.com ([2607:f8b0:4006:80d::200e]:80)
wget: can't connect to remote host: Network unreachable

But this works as expected in the proxy container:

$ podman-compose exec proxy sh
/ $ wget -O/dev/null google.com
Connecting to google.com (142.250.65.238:80)
Connecting to www.google.com (142.250.80.36:80)
saving to '/dev/null'
null                 100% |*********************************************************| 18138  0:00:00 ETA
'/dev/null' saved

Upvotes: 0
