Yogesh.Kathayat

Reputation: 1004

Container running on Docker Swarm not accessible from outside

I am running my containers on Docker Swarm. The asset-frontend service is my frontend application, which runs Nginx inside the container and exposes port 80. Now if I run

curl http://10.255.8.21:80

or

curl http://127.0.0.1:80

from the host where these containers are running, I can see my asset-frontend application, but it is not accessible from outside the host: I cannot reach it from another machine. My host machine's operating system is CentOS 8.
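A quick way to confirm that the routing mesh has actually bound the published port on the host (a generic check; ss ships with CentOS 8):

# in swarm mode, dockerd itself should be listening on the published port
ss -tlnp | grep ':80'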

This is my docker-compose file:

version: "3.3"
networks:
  basic:
services:
  asset-backend:
    image: asset/asset-management-backend
    env_file: .env
    deploy:
      replicas: 1
    depends_on:
      - asset-mongodb
      - asset-postgres
    networks:
      - basic
  asset-mongodb:
    image: mongo
    restart: always
    env_file: .env
    ports:
      - "27017:27017"
    volumes:
      - $HOME/asset/mongodb:/data/db
    networks:
      - basic
  asset-postgres:
    image: asset/postgresql
    restart: always
    env_file: .env
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=asset-management
    volumes:
      - $HOME/asset/postgres:/var/lib/postgresql/data
    networks:
      - basic
  asset-frontend:
    image: asset/asset-management-frontend
    restart: always
    ports:
      - "80:80"
    environment:
      - ENV=dev
    depends_on:
      - asset-backend
    deploy:
      replicas: 1
    networks:
      - basic
  asset-autodiscovery-cron:
    image: asset/auto-discovery-cron
    restart: always
    env_file: .env
    deploy:
      replicas: 1
    depends_on:
      - asset-mongodb
      - asset-postgres
    networks:
      - basic

This is the output of docker service ls:

ID                  NAME                                       MODE                REPLICAS            IMAGE                                         PORTS
auz640zl60bx        asset_asset-autodiscovery-cron   replicated          1/1                 asset/auto-discovery-cron:latest         
g6poofhvmoal        asset_asset-backend              replicated          1/1                 asset/asset-management-backend:latest    
brhq4g4mz7cf        asset_asset-frontend             replicated          1/1                 asset/asset-management-frontend:latest   *:80->80/tcp
rmkncnsm2pjn        asset_asset-mongodb              replicated          1/1                 mongo:latest                                  *:27017->27017/tcp
rmlmdpa5fz69        asset_asset-postgres             replicated          1/1                 asset/postgresql:latest                  *:5432->5432/tcp

Port 80 is open in my firewall; the following is the output of firewall-cmd --list-all:

public (active)
  target: default
  icmp-block-inversion: no
  interfaces: eth0
  sources: 
  services: cockpit dhcpv6-client ssh
  ports: 22/tcp 2376/tcp 2377/tcp 7946/tcp 7946/udp 4789/udp 80/tcp
  protocols: 
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules:
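For reference, the swarm-related ports in that list would typically have been opened along these lines (a sketch; the exact commands used are an assumption):

firewall-cmd --permanent --add-port=2377/tcp                       # cluster management
firewall-cmd --permanent --add-port=7946/tcp --add-port=7946/udp   # node communication
firewall-cmd --permanent --add-port=4789/udp                       # overlay network (VXLAN)
firewall-cmd --permanent --add-port=80/tcp                         # the published frontend port
firewall-cmd --reload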

If I inspect the created network, the output is the following:

[
    {
        "Name": "asset_basic",
        "Id": "zw73vr9xigfx7hy16u1myw5gc",
        "Created": "2019-11-26T02:36:38.241352385-05:00",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.3.0/24",
                    "Gateway": "10.0.3.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "9348f4fc6bfc1b14b84570e205c88a67aba46f295a5e61bda301fdb3e55f3576": {
                "Name": "asset_asset-frontend.1.zew1obp21ozmg8r1tzmi5h8g8",
                "EndpointID": "27624fe2a7b282cef1762c4328ce0239dc70ebccba8e00d7a61595a7a1da2066",
                "MacAddress": "02:42:0a:00:03:08",
                "IPv4Address": "10.0.3.8/24",
                "IPv6Address": ""
            },
            "943895f12de86d85fd03d0ce77567ef88555cf4766fa50b2a8088e220fe1eafe": {
                "Name": "asset_asset-mongodb.1.ygswft1l34o5vfaxbzmnf0hrr",
                "EndpointID": "98fd1ce6e16ade2b165b11c8f2875a0bdd3bc326c807ba6a1eb3c92f4417feed",
                "MacAddress": "02:42:0a:00:03:04",
                "IPv4Address": "10.0.3.4/24",
                "IPv6Address": ""
            },
            "afab468aefab0689aa3488ee7f85dbc2cebe0202669ab4a58d570c12ee2bde21": {
                "Name": "asset_asset-autodiscovery-cron.1.5k23u87w7224mpuasiyakgbdx",
                "EndpointID": "d3d4c303e1bc665969ad9e4c9672e65a625fb71ed76e2423dca444a89779e4ee",
                "MacAddress": "02:42:0a:00:03:0a",
                "IPv4Address": "10.0.3.10/24",
                "IPv6Address": ""
            },
            "f0a768e5cb2f1f700ee39d94e380aeb4bab5fe477bd136fd0abfa776917e90c1": {
                "Name": "asset_asset-backend.1.8ql9t3qqt512etekjuntkft4q",
                "EndpointID": "41587022c339023f15c57a5efc5e5adf6e57dc173286753216f90a976741d292",
                "MacAddress": "02:42:0a:00:03:0c",
                "IPv4Address": "10.0.3.12/24",
                "IPv6Address": ""
            },
            "f577c539bbc3c06a501612d747f0d28d8a7994b843c6a37e18eeccb77717539e": {
                "Name": "asset_asset-postgres.1.ynrqbzvba9kvfdkek3hurs7hl",
                "EndpointID": "272d642a9e20e45f661ba01e8731f5256cef87898de7976f19577e16082c5854",
                "MacAddress": "02:42:0a:00:03:06",
                "IPv4Address": "10.0.3.6/24",
                "IPv6Address": ""
            },
            "lb-asset_basic": {
                "Name": "asset_basic-endpoint",
                "EndpointID": "142373fd9c0d56d5a633b640d1ec9e4248bac22fa383ba2f754c1ff567a3502e",
                "MacAddress": "02:42:0a:00:03:02",
                "IPv4Address": "10.0.3.2/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4100"
        },
        "Labels": {
            "com.docker.stack.namespace": "asset"
        },
        "Peers": [
            {
                "Name": "8170c4487a4b",
                "IP": "10.255.8.21"
            }
        ]
    }
]

Upvotes: 7

Views: 9183

Answers (7)

EMR

Reputation: 29

My particular problem was that the hostname was resolving to an IPv6 address on the Docker host, while the iptables rules automatically installed by Docker Swarm are IPv4-only.

To diagnose:

  • iptables-save > rules.txt # inspect the rules to make sure everything is in order for the :FORWARD chain
  • netcat -vz myhost 80 # connected with no problems
  • wget http://myhost # resolved to an IPv6 address and just hung
  • wget http://10.20.30.40 # brought back my web-facing pod's port 80 response instead of the packets getting dropped

To resolve:

My clients were using IPv6 by default. Using a modified /etc/hosts, or connecting directly via the IP, worked. I redid the iptables rules with ip6tables, and all is good!
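For reference, a minimal sketch of the IPv6 side (the exact commands are assumptions; mirror whatever your IPv4 FORWARD chain does):

# check how the hostname resolves on a client
getent ahosts myhost

# mirror the IPv4 policy for IPv6 traffic, then inspect as with iptables-save
ip6tables -P FORWARD ACCEPT
ip6tables-save > rules6.txt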

Many thanks to @suyuan, in another answer here, for the suggestion to look at the forwarding rules in iptables.

Upvotes: 1

Suyuan Chang

Reputation: 839

I ran into this same issue. It turned out that my iptables filter was causing external connections to fail.

In swarm mode, Docker creates a virtual bridge device, docker_gwbridge, to give access to the overlay network. My iptables rules had the following line, which drops forwarded packets:

:FORWARD DROP

That meant network packets arriving on the physical NIC could not reach the Docker ingress network, so my Docker service only worked on localhost.

Changing the iptables rule to

:FORWARD ACCEPT

solved the problem without touching Docker at all.
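If you would rather change the live policy than edit the saved rules file, something like this should work (persistence is distro-specific; the save step assumes the iptables-services package on CentOS/RHEL):

# set the default policy of the FORWARD chain to ACCEPT
iptables -P FORWARD ACCEPT
# persist across reboots (CentOS/RHEL with iptables-services)
service iptables save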

Upvotes: 2

Sourav Debnath

Reputation: 51

When running Docker, provide a port mapping, like:

docker run -p 8081:8081 your-docker-image

Alternatively, provide the port mapping in Docker Desktop when starting the container.
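In a compose/stack file, the same mapping would look like this (the service and image names are placeholders mirroring the command above):

services:
  your-service:
    image: your-docker-image
    ports:
      - "8081:8081"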

Upvotes: 0

rmbrad

Reputation: 1102

I ran into this same issue, and it turned out to be a clash between my local network's subnet and the subnet of the automatically created ingress network. This can be verified using docker network inspect ingress and checking whether the IPAM.Config.Subnet value overlaps with your local network.
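A quick way to pull out just the subnet for comparison (the --format filter is a sketch; plain docker network inspect ingress shows the same data):

docker network inspect ingress -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
# compare against your local routes
ip route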

To fix it, you can update the configuration of the ingress network as described in Customize the default ingress network; in summary:

  1. Remove services that publish ports
  2. Remove the existing network: docker network rm ingress
  3. Recreate it using a non-conflicting subnet (use whatever other subnet you want):
    docker network create \
        --driver overlay \
        --ingress \
        --subnet 172.16.0.0/16 \
        --gateway 172.16.0.1 \
        ingress
    
  4. Restart the services (for this question's stack, see the sketch below)
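For the stack in the question, restarting the services would be a redeploy along these lines (the compose file name is an assumption; the stack name comes from the docker service ls output):

docker stack deploy -c docker-compose.yml asset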

You can avoid a clash in the first place by specifying the default subnet pool when initializing the swarm, using the --default-addr-pool option.
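For example, when first creating the swarm (the pool value here is purely illustrative):

docker swarm init --default-addr-pool 172.20.0.0/16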

Upvotes: 15

Yor Jaggy

Reputation: 435

I suggest you first verify the expected behavior using docker-compose. Then try docker swarm without a network specification, just to rule out network interface problems; a side-by-side check is sketched below.
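Something like this (the stack name is taken from the question; the exact invocations are a sketch):

# run the same file as a plain compose project, bypassing the swarm routing mesh
docker-compose -f docker-compose.yml up -d
curl http://127.0.0.1:80
docker-compose -f docker-compose.yml down

# then compare with the swarm deployment
docker stack deploy -c docker-compose.yml asset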

Also, you can use the command below to check your LISTEN ports:

netstat -tulpn

EDIT: I faced this same issue, but I was able to access my services through 127.0.0.1.

Upvotes: 0

Vincent

Reputation: 61

Can you try the URL host.docker.internal instead of the IP address, so something like http://host.docker.internal:80?

Upvotes: 0

Razvan I.

Reputation: 239

You can publish ports by updating the service:

docker service update your-service --publish-add 80:80
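With the frontend service name from the question's docker service ls output, that would be (only needed if the port was not already published in the stack file):

docker service update asset_asset-frontend --publish-add 80:80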

Upvotes: 0
