sectech3232

Reputation: 3

Docker using IP not within VPC

So I couldn't figure out why I couldn't connect to my containers from a public IP until I found out which IP the Docker ports were listening on. If you look at the ifconfig output below, it shows 172.x addresses, which are not valid within my VPC; as you can see from the subnet list, I'm not using any 172.x range in my VPC, so I'm not sure where this is coming from. Should I create a new subnet in a new VPC, make an AMI, and launch the instance in the new VPC with a conforming subnet? Can I change the IP/port Docker is listening on?

ifconfig
br-387bdd8b6fc4 Link encap:Ethernet HWaddr 02:42:69:A3:BA:A9
inet addr:172.18.0.1 Bcast:172.18.255.255 Mask:255.255.0.0
inet6 addr: fe80::42:69ff:fea3:baa9/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:114269 errors:0 dropped:0 overruns:0 frame:0
TX packets:83675 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:11431231 (10.9 MiB) TX bytes:36504449 (34.8 MiB)

docker0 Link encap:Ethernet HWaddr 02:42:65:A6:7C:B3
inet addr:172.17.0.1 Bcast:172.17.255.255 Mask:255.255.0.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)

eth0 Link encap:Ethernet HWaddr 02:77:F6:7A:50:A6
inet addr:10.0.140.193 Bcast:10.0.143.255 Mask:255.255.240.0
inet6 addr: fe80::77:f6ff:fe7a:50a6/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:9001 Metric:1
RX packets:153720 errors:0 dropped:0 overruns:0 frame:0
TX packets:65773 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:209782581 (200.0 MiB) TX bytes:5618173 (5.3 MiB)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:30 errors:0 dropped:0 overruns:0 frame:0
TX packets:30 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:2066 (2.0 KiB) TX bytes:2066 (2.0 KiB)

Chain OUTPUT (policy ACCEPT)
target prot opt source destination

Chain DOCKER (2 references)
target prot opt source destination
ACCEPT tcp -- 0.0.0.0/0 172.18.0.11 tcp dpt:389
ACCEPT tcp -- 0.0.0.0/0 172.18.0.13 tcp dpt:9043
ACCEPT tcp -- 0.0.0.0/0 172.18.0.13 tcp dpt:7777
ACCEPT tcp -- 0.0.0.0/0 172.18.0.3 tcp dpt:9443
ACCEPT tcp -- 0.0.0.0/0 172.18.0.7 tcp dpt:443
ACCEPT tcp -- 0.0.0.0/0 172.18.0.8 tcp dpt:443
ACCEPT tcp -- 0.0.0.0/0 172.18.0.9 tcp dpt:443

DockerSubnet1-Public 10.0.1.0/24
DockerSubnet2-Public 10.0.2.0/24
DockerSubnet3-Private 10.0.3.0/24
DockerSubnet4-Private 10.0.4.0/24
Private subnet 1A 10.0.0.0/19
Private subnet 2A 10.0.32.0/19
Public subnet 1 10.0.128.0/20
Public subnet 2 10.0.144.0/20 
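
(For what it's worth, the 172.18.0.0/16 range comes from Docker itself, not the VPC: Docker assigns it to a bridge network on the instance. If that range ever needed to change, the daemon's address pools can be set in /etc/docker/daemon.json; the values below are only illustrative, not recommendations.)

```
{
  "bip": "10.200.0.1/24",
  "default-address-pools": [
    { "base": "10.201.0.0/16", "size": 24 }
  ]
}
```

`bip` moves the default docker0 bridge; `default-address-pools` controls the ranges handed out to user-defined networks such as the br-387bdd8b6fc4 bridge above. The Docker daemon must be restarted for either to take effect.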

Upvotes: 0

Views: 283

Answers (2)

sectech3232

Reputation: 3

I never did figure out why. I did allow all traffic inbound and outbound in the security groups, network ACLs, etc. I made an AMI of my instance, copied it over to another region with a newly built VPC, and deployed it there. It works! Chalking it up to the AWS VPC. Thanks for the clarification on 172.x; I did not know that range was used between the Docker containers. It makes sense now.

Upvotes: 0

David Maze

Reputation: 158908

The standard way to use Docker networking is with the docker run -p command-line option. If you run:

docker run -p 8888:80 myimage

Docker will automatically set up a port forward from port 8888 on the host to port 80 in the container.
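
For example (the image name here is just a stand-in for whatever you're running):

```
# Publish host port 8888 -> container port 80
docker run -d --name web -p 8888:80 nginx

# Show what Docker bound on the host side
docker port web 80
# typically 0.0.0.0:8888, i.e. all host interfaces

# Reach the container through the host port
curl http://localhost:8888/
```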

If your host has multiple interfaces (you hint at a "public IP", though it's not shown separately in your ifconfig output), you can make it listen on only one of them by adding an IP address:

docker run -p 10.0.140.193:8888:80 myimage

The Docker-internal 172.18.0.0/16 addresses are essentially useless. They're an important implementation detail when talking between containers, but Docker provides an internal DNS service that will resolve container names to internal IP addresses. In figuring out how to talk to a container from "outside", you don't need these IP addresses.
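
For example, two containers on the same user-defined network can reach each other by name alone (the network and container names here are made up for illustration):

```
docker network create appnet
docker run -d --name web --network appnet nginx

# From another container on the same network, the name "web" resolves
docker run --rm --network appnet curlimages/curl -s http://web/
```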

The terminology in your question hints strongly at Amazon Web Services. A common problem here is that your EC2 instance is running under a security group (network-level firewall) that isn't allowing the inbound connection.
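
If that's the case, opening the published port in the instance's security group should fix it; a sketch with the AWS CLI (the group ID and port are placeholders):

```
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 8888 \
    --cidr 0.0.0.0/0
```

The network ACLs on the subnet need to allow the traffic as well.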

Upvotes: 1
