Reputation: 306
So, I'm working on a project with two Docker containers, one for the main app and one for Redis (using Docker Compose, btw). Naturally I wanted to connect the two and tried the default bind setting, but of course the app couldn't connect to Redis since they're in different containers. Then I just went with 0.0.0.0 after reading this. However, I still feel like asking whether there's a way to bind Redis to my local network, so that only machines inside it would be able to connect.
That isn't really what I want, though. Maybe I could incorporate something like this?
Does anyone have a good solution for making Redis accept connections only from the other container (linked by Docker Compose), or is binding Redis to 0.0.0.0 and adding strong security measures the only way?
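For reference, my setup is roughly this shape (image tags and service names are just placeholders):

    services:
      app:
        build: .
        depends_on:
          - redis
      redis:
        image: redis:7
        # what I have now: Redis binds 0.0.0.0 and the port is published to the host
        command: ["redis-server", "--bind", "0.0.0.0"]
        ports:
          - "6379:6379"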
Thanks in advance!
Upvotes: 4
Views: 5089
Reputation: 158647
It’s easy to make a Docker-hosted service only accessible to other containers on the same host. If you don't use a docker run -p option or Docker Compose ports: option, then the client containers can reach the server container using its container name as a host name, but non-Docker processes on the host and other hosts can’t reach the server.
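A minimal sketch of what that looks like in a Compose file (service names and the REDIS_URL variable are just illustrative, not from the question):

    services:
      app:
        build: .
        environment:
          # the app reaches Redis by its service name on the Compose network
          REDIS_URL: redis://redis:6379
        depends_on:
          - redis
      redis:
        image: redis:7
        # no ports: section, so nothing outside Docker can connect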
If your host has multiple network interfaces and binding to one of those would make a service “private”, then you can do the same thing with docker run -p. If your host has public IP address 10.20.30.40/16 and also private IP address 192.168.144.128/24, then docker run -p 192.168.144.128:6379:6379 will make it available to the private network (and other Docker containers as above) but not the public network. (The server itself, inside the container, still needs to bind to 0.0.0.0.)
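In Compose terms, publishing only on that private address would look something like this (the IP is the example address above, not something Docker picks for you):

    services:
      redis:
        image: redis:7
        ports:
          # published only on the private interface; other containers on the
          # same Compose network still reach it as redis:6379
          - "192.168.144.128:6379:6379"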
If you otherwise need the server to be visible off-host, but only to some IP addresses, I think you’re down to iptables magic that’s not native to Docker.
Upvotes: 1