Reputation: 544
I have a Docker container running an application server. The application writes multiple log and trace files to a particular directory inside the container.
I understand that there are multiple ways to access those logs, such as:

- docker exec -it <container_id> bash
- ssh into the container (when the keys are set while creating the container)
Apart from the above two examples, I would like to access the logs through a different container (one running an nginx webserver). Theoretically I understand that this can be achieved with "--volumes-from" when creating the container for the app server (the first container), but I don't know how to get it working.
My requirement is that whenever a log is generated, it should be served via the other container's webserver. Is there a way to see a directory listing of the entire logs directory (through volumes?)?
I tried something like this, and it's not working as expected:
docker run --name=log_webserver -p 80 --volumes-from logvol:/website_files nginx
docker run --name appcontainer -m 1024 -p 9080 -v sysvol:/var/log/rsyslog -v logvol:/opt/servers/simpleappserver/logs krckumar/simpleappserver
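For reference, one likely issue is that --volumes-from expects a *container* name, not a volume name. A sketch of an alternative using a shared named volume (reusing the image and volume names from the question; the nginx web root path is the image default and an assumption here):

```shell
# App server writes its logs into the named volume "logvol"
docker run -d --name appcontainer -m 1024 -p 9080 \
  -v sysvol:/var/log/rsyslog \
  -v logvol:/opt/servers/simpleappserver/logs \
  krckumar/simpleappserver

# Mount the same named volume (read-only) into nginx's default web root,
# instead of passing a volume name to --volumes-from
docker run -d --name log_webserver -p 80:80 \
  -v logvol:/usr/share/nginx/html:ro \
  nginx
```

Note that a plain directory listing additionally requires `autoindex on;` in the nginx server configuration; by default nginx returns 403 for a directory without an index file.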
Are there any other ways to expose the logs? Please suggest.
Upvotes: 1
Views: 78
Reputation: 57185
docker run --name container1 -v /website_files
docker run --name container2 --volumes-from container1
To expose the logs, do:
docker logs -f container1
This works if you tail -f the logs in your ENTRYPOINT script. Since tail -f never ends, it also prevents the container from exiting (Docker is process-centric: the container stays alive as long as its main process runs). You can then investigate what's going on inside with:
docker exec -it container1 bash
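A minimal sketch of that entrypoint pattern (the start script and log path are illustrative assumptions, not from the question's image):

```shell
#!/bin/sh
# entrypoint.sh -- start the app server in the background, then tail
# the log file so it shows up in `docker logs` and keeps the
# container's main process (PID 1) alive
/opt/servers/simpleappserver/start.sh &   # hypothetical start script
exec tail -f /opt/servers/simpleappserver/logs/server.log
```

The exec at the end makes tail the container's main process, so stopping the container sends signals to tail directly.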
Any more questions, please ask.
PS: if you are using multiple containers, you might want to consider docker-compose, which is very easy to learn and based entirely on docker commands themselves.
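For example, a minimal docker-compose.yml for the setup in the question might look like this (service names and the shared-volume approach are assumptions based on the question, not a tested configuration):

```yaml
version: "2"
services:
  appserver:
    image: krckumar/simpleappserver
    ports:
      - "9080"
    volumes:
      # app server writes its logs into the shared named volume
      - logvol:/opt/servers/simpleappserver/logs
  log_webserver:
    image: nginx
    ports:
      - "80:80"
    volumes:
      # same volume mounted read-only into nginx's default web root
      - logvol:/usr/share/nginx/html:ro
volumes:
  logvol:
```

Then `docker-compose up -d` starts both containers with the volume wired up.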
Upvotes: 2