Reputation: 559
Simple use case: a production server running Nginx (listening on 0.0.0.0:80 and 443), responsible for handling the SSL certificates and, if they're valid, proxying to a "hidden" service in a Docker container. This service runs Gunicorn (serving a Django website) and listens on port 8000. That sounds standard, simple, tested, almost too beautiful to be true... but of course, it's not working as I'd like.
Because Gunicorn, running in its little Docker container, is accessible from the Internet. If you go to my hostname on port 8000, you get the Gunicorn website. Obviously, it's ugly, but the worst part is that it completely bypasses Nginx and the SSL certificates. So why would a Docker container be accessible from the Internet? I know that for some time Docker had the opposite problem. We need a proper balance!
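To make the symptom concrete, here's roughly what I see from a machine outside the server (example.com standing in for my real hostname, as in the config below):
# checked from outside the server
curl -I https://example.com        # served by Nginx, SSL terminated as intended
curl -I http://example.com:8000    # answered directly by Gunicorn, bypassing Nginx and SSL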
On further inspection of the problem: I do have a firewall and it's configured to be extremely restrictive. It only allows ports 22 (for ssh, waiting to be remapped), 80 and 443, so 8000 should absolutely not be reachable. But ufw is built on iptables, and Docker adds its own iptables rules that bypass that configuration whenever a container publishes a port.
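For reference, the firewall setup is essentially the stock ufw commands below, and the two inspection commands at the end are one way to see why they don't apply to the container's port (the exact chains vary with the Docker version):
# roughly how ufw is configured on the host
sudo ufw default deny incoming
sudo ufw allow 22/tcp     # ssh, waiting to be remapped
sudo ufw allow 80/tcp     # http
sudo ufw allow 443/tcp    # https
sudo ufw enable

# ufw filters the INPUT chain, but traffic to a published container port is
# DNATed and forwarded through chains that Docker manages itself, so the ufw
# rules effectively never see it:
sudo iptables -L FORWARD -n --line-numbers
sudo iptables -t nat -L -n | grep 8000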
I tried a lot of stupid things (that's part of the job). In the docker-compose.yml file that specifies the ports to bind, I tried removing them (of course, if I do, Nginx can't reach my hidden service anymore). I also tried binding to a specific IP (it seems that's allowed):
ports:
  - "127.0.0.1:8000:8000"
This had a weird result: Nginx wasn't able to connect, but Gunicorn was still visible through the Internet. So, exactly the opposite of what I want. I also tried manually changing the Docker service to add flags (not great) and adding a configuration file at /etc/docker/daemon.json (again setting "ip" to "127.0.0.1"). I'm fresh out of ideas. If anyone has a pointer... I wouldn't have thought this was an extremely rare use of Docker, after all.
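For reference, the daemon.json attempt looked roughly like this (the "ip" key is supposed to set the default host address used when publishing container ports):
# what I tried in /etc/docker/daemon.json
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "ip": "127.0.0.1"
}
EOF
sudo systemctl restart docker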
Specifics: I don't run containers with docker run directly. I have a docker-compose.yml file and run services with docker stack deploy, so in a swarm (although I only have one machine at the moment). That could be related, though again, I would think not.
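Deployment looks roughly like this (mysite is just a placeholder stack name); the Nginx configuration and docker-compose.yml follow below:
docker swarm init                                  # one-time setup of the single-node swarm
docker stack deploy -c docker-compose.yml mysite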
upstream gunicorn {
    server localhost:8000;
}

server {
    server_name example.com www.example.com;

    location / {
        proxy_pass http://gunicorn;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate ...;
    ssl_certificate_key ...;
    include ...;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = www.example.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    if ($host = example.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;
    return 404; # managed by Certbot
}
docker-compose.yml
version: '3.7'

services:
  gunicorn:
    image: gunicorn:latest
    command: gunicorn mysite.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - ./:/usr/src/app/
    ports:
      - "8000:8000"
    depends_on:
      - db
    networks:
      - webnet

  db:
    image: postgres:10.5-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    networks:
      - webnet

networks:
  webnet:

volumes:
  postgres_data:
Note: the gunicorn image was built beforehand, but there's no trick to it: just a python:3.7-slim image with everything set up for Gunicorn and a Django website under mysite/. It doesn't EXPOSE any port (not that I think that makes any difference here).
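If it helps, this is how the published port shows up on the host (exact output depends on the Docker version and on swarm mode):
docker service ls               # the PORTS column shows something like *:8000->8000/tcp
sudo ss -tlnp | grep ':8000'    # confirms a listener bound on all interfaces for port 8000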
Upvotes: 3
Views: 4465
Reputation: 559
Okay, after some digging, here's what I found: Docker makes sure to create iptables rules so that containers can be reached from outside the network. Telling Docker not to touch iptables at all is not a good strategy, because it needs them to forward connections from the containers to the outside world. Instead, the documentation recommends adding an iptables rule to the DOCKER-USER chain to restrict external access to the containers. That's what I did. Of course, the suggested command (slightly modified to forbid external access completely) doesn't persist across reboots, so I had to create a service just to add this rule. Probably not the best option, so don't hesitate to comment if you have a better choice to offer. Here's the iptables rule I added:
iptables -I DOCKER-USER -i ext_if ! -s 127.0.0.1 -j DROP
This rule (where ext_if stands for the host's external network interface) forbids external access to the Docker daemon while still allowing connections to individual containers. It didn't seem to solve anything for me, since it wasn't exactly an answer to my question. To forbid access from the Internet to my Docker container running on port 8000, I added yet another rule in the same script:
iptables -I DOCKER-USER -p tcp --destination-port 8000 -j DROP
This rule is a bit extreme: it completely forbids traffic to port 8000 over the Docker network. The host (localhost) is excluded from it, since a previous rule should allow local traffic no matter what (if you do your iptables configuration by hand, you will have to add such a rule yourself; it's not a given, which is one reason I switched to simpler tools like ufw). If you look at the firewall rules with iptables -L, you will see that the DOCKER-USER chain is evaluated before any of Docker's own rules. You might find another rule allowing traffic on the same port, but because we drop the traffic in a higher-priority rule, port 8000 is effectively hidden from the outside.
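For completeness, here is a minimal sketch of the script-plus-service I ended up with; the file names are my own choice and ext_if is still a placeholder for the real external interface:
# script that (re-)adds the DOCKER-USER rules
sudo tee /usr/local/sbin/docker-firewall.sh >/dev/null <<'EOF'
#!/bin/sh
iptables -I DOCKER-USER -i ext_if ! -s 127.0.0.1 -j DROP
iptables -I DOCKER-USER -p tcp --destination-port 8000 -j DROP
EOF
sudo chmod +x /usr/local/sbin/docker-firewall.sh

# oneshot systemd unit that runs the script once Docker is up
sudo tee /etc/systemd/system/docker-firewall.service >/dev/null <<'EOF'
[Unit]
Description=Add DOCKER-USER firewall rules
After=docker.service
Requires=docker.service

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/docker-firewall.sh
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now docker-firewall.service

# sanity check: both DROP rules should appear at the top of the chain
sudo iptables -L DOCKER-USER -n --line-numbers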
This solution, while it seems to solve the problem, is not exactly elegant or intuitive. Again, I can't help but wonder whether I'm the first person ever to use Docker to host a "hidden" service while keeping a different service in front of it. I guess the usual approach is to run Nginx in a Docker container itself, but that created other issues for me that I frankly decided outweighed the advantages.
Upvotes: 4