Reputation: 46158
I'm getting into the habit of developing my applications with Docker-built environments.
I have a physical server on which I'm serving several (locally installed) web applications with Apache.
Apache listening on 80
    sites-available
        app1 -> locally installed in /apps/app1
        app2 -> locally installed in /apps/app2
        ...
Now I've just prepared a full-stack production environment with Docker for one of my applications. I'd like to plug it into my good old server alongside the locally installed applications:
Apache listening on 80
    sites-available
        app1 -> locally installed in /apps/app1
        app2 -> locally installed in /apps/app2
        app3 -> proxy to the related Docker service
        ...
And progressively Dockerize my other existing apps.
My main questions are:
Will I have to expose each dockerized service on a dedicated port?
Is there some networking technique I don't know of that I could use to proxy several services running on the same machine?
Also, could you point me to an Apache proxy example?
Eventually I'll switch to Nginx once I have a dedicated proxy.
Upvotes: 0
Views: 139
Reputation: 263479
In Docker, the preferred method to expose a specific container is via a dedicated published port. There are ways to connect directly to the container, especially when you're running on the same machine, but then you create the challenge of tracking the container's current IP whenever it gets rebuilt.
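For an Apache proxy example: a minimal vhost that forwards one site to a container published on a host port could look like the sketch below (the app3 hostname and port 8080 are placeholders, and it assumes mod_proxy and mod_proxy_http are enabled):

    <VirtualHost *:80>
        ServerName app3.example.com

        # Forward all traffic to the container's published host port
        ProxyPreserveHost On
        ProxyPass        / http://127.0.0.1:8080/
        ProxyPassReverse / http://127.0.0.1:8080/
    </VirtualHost>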
For your scenario, I'd recommend placing a second proxy inside a container. Since it's running as a container, it can connect to each of the other containers by name. My personal favorite implementation of this is nginx-proxy, which listens on the docker socket for containers starting and stopping and automatically updates its configuration.
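As a sketch, a docker-compose file wiring nginx-proxy up this way might look like the following (the app3 image name and hostname are placeholders; nginx-proxy routes requests based on each container's VIRTUAL_HOST environment variable):

    version: "2"
    services:
      proxy:
        image: jwilder/nginx-proxy
        ports:
          - "8000:80"    # a dedicated port while Apache still owns 80
        volumes:
          # read-only access to the docker socket so it can watch containers
          - /var/run/docker.sock:/tmp/docker.sock:ro
      app3:
        image: my-app3-image    # placeholder for your application image
        environment:
          # nginx-proxy generates a vhost for this name automatically
          - VIRTUAL_HOST=app3.example.com

In the meantime, your host Apache can ProxyPass app3's traffic to 127.0.0.1:8000 until the migration is complete.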
Then, once you've finished migrating into containers, you can turn off your existing Apache proxy process and add another listening port (80) on the nginx proxy to take its place.
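The cutover itself could be as simple as this (assuming Apache runs under systemd as apache2 and the compose file sketched above):

    # stop the host Apache, then republish the proxy on port 80
    sudo systemctl stop apache2
    # after editing the compose file to publish "80:80" instead of "8000:80"
    docker-compose up -d proxy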
Upvotes: 1