Fabio

Reputation: 47

Docker-compose pass stdout from a service to stdin in another service

I'm not sure whether what I'm looking for is even possible. I'm new to the docker-compose world, and although I've read a lot of documentation and posts, I haven't been able to find a solution.

I need to pass the stdout of a service defined in docker-compose to the stdin of another service. So the output of ServiceA will be the input of ServiceB.

Is it possible?

I see the stdin_open option, but I can't figure out how to use the stdout of the other service as the input.

Any suggestion?

Thanks

Upvotes: 1

Views: 1447

Answers (1)

David Maze

Reputation: 158858

You can't do this in Docker easily.

Container processes' stdin and stdout aren't usually used for much. Most often the stdout receives log messages that can get reviewed later, and containers actually communicate through network sockets. (A container would typically run Apache but not grep.)

Docker doesn't have a native cross-container pipe, beyond the networking setup. If you're launching containers with docker run from the shell, you can use an ordinary shell pipe there (the -i flag keeps the second container's stdin open so it can actually read from the pipe):

sudo sh -c 'docker run image-a | docker run -i image-b'

If it's practical to run both processes in the same container, you can use a shell pipe as the main container command:

docker run image sh -c 'process_a | process_b'

A different, equally hacky approach is to use a tool like netcat to bridge between stdin/stdout and a network port. For example, consider a "server":

#!/bin/sh
# server.sh
# (Note: this uses the busybox nc syntax.)
# Listen on port 12345 and feed whatever arrives to a process that
# reads from stdin (here just `cat`), saving its output to a file.
nc -l -p 12345 | cat > out.txt

And a matching "client":

#!/bin/sh
# client.sh
# Run a process that writes to stdout (here just `cat in.txt`) and
# send its output to the server's port.
cat in.txt | nc "$1" 12345

Build these into an image:

FROM busybox
COPY client.sh server.sh /bin/
EXPOSE 12345
WORKDIR /data
CMD ["server.sh"]

Now run both containers:

docker network create testnet
docker build -t testimg .
echo hello world > in.txt
docker run -d -v "$PWD:/data" --net testnet --name server testimg \
  server.sh
docker run -d -v "$PWD:/data" --net testnet --name client testimg \
  client.sh server
docker wait client
docker wait server
cat out.txt
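
If you'd rather drive this from docker-compose (as in the original question), a rough equivalent of those two docker run commands might look like the sketch below. The service and script names mirror the example above; note that depends_on only orders container startup, so in practice the client might need to retry until the server's nc is actually listening.

version: "3"
services:
  server:
    build: .
    command: server.sh
    volumes:
      - .:/data
  client:
    build: .
    command: client.sh server
    volumes:
      - .:/data
    depends_on:
      - server

Compose puts both services on a shared default network where they can reach each other by service name, which is why client.sh server works here; running docker-compose up and then checking out.txt mirrors the docker run sequence above.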

A more robust path would be to wrap the server process in a simple HTTP server that accepted an HTTP POST on some path and launched a subprocess to handle the request; then you'd have a single long-running server process instead of having to re-launch it for each request. The client would use a tool like curl or any other HTTP client.
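
As a rough illustration of that idea (not part of the original example), a minimal Python sketch could look like the following; the script name http_server.py, the port 8080, and the use of cat as the stdin-reading worker are all assumptions made for this example.

#!/usr/bin/env python3
# http_server.py -- hypothetical sketch of the HTTP-wrapper idea.
# Each POST body is piped into a fresh subprocess (here just `cat`,
# standing in for any process that reads stdin), and that process's
# stdout is sent back as the HTTP response.
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

class PipeHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        # Feed the request body to the worker process on its stdin.
        result = subprocess.run(["cat"], input=body, stdout=subprocess.PIPE)
        self.send_response(200)
        self.send_header("Content-Type", "application/octet-stream")
        self.end_headers()
        self.wfile.write(result.stdout)

if __name__ == "__main__":
    HTTPServer(("", 8080), PipeHandler).serve_forever()

A client could then send its data with something like curl --data-binary @in.txt http://server:8080/ and read the processed result from the response body.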

Upvotes: 2
