Reputation: 2439
I want to pipe both script and data over stdin to a shell and its child process. Basically, I'm starting a shell and sending a string like "exec wc -c;\n<data>" to the shell over stdin.
I'm looking to start a sub-process and pass data to said sub-process.
Using exec, I fully expect wc -c to replace my shell and count the number of bytes sent over stdin.
Examples:
echo -ne 'exec wc -c;\nabc' | dash
echo -ne 'exec wc -c;\nabc' | bash
echo -ne 'exec wc -c;\nabc' | bash --posix
echo -ne 'exec wc -c;\nabc' | busybox sh
It seems to work consistently with bash, but not with dash or busybox sh. They both seem to fail intermittently. If I sleep for 100ms before sending <data> over stdin, then it works. But sleeping is not a reliable fix.
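The workaround I tested looks roughly like this (assuming a sleep that accepts fractional seconds, which POSIX doesn't guarantee):
{ printf 'exec wc -c\n'; sleep 0.1; printf 'abc'; } | dash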
In practice I'm sending a non-trivial amount of data, so I don't want to encode it or somehow store it in memory. Any ideas?
Note: I'm sure there are many use cases where one could work around this. But I'm looking to find out why this doesn't work consistently. And/or what shell magic I could do to make it work reliably, preferably across platforms :)
Update: to clarify, I have a remote system where I run sh and expose stdin/stdout/stderr, and I would like to prove that such a setup can do anything. By sending "exec cat - > /myfile;\n<data>" one could imagine streaming <data> into /myfile so that the file ends up containing <data>. Again, imagine <data> being large.
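In other words, I'd like the sending side to be able to do something like this (big-data-file is just a placeholder for whatever large input I'm streaming):
{ printf 'exec cat - > /myfile\n'; cat big-data-file; } | sh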
Essentially, I'm looking to prove that a system can be controlled using just stdin for sh. Something else very simple instead of sh that is readily available across platforms as a static binary might also work.
I'll admit this might be totally crazy and that I should employ a protocol like SSH or something, but then I would likely have to implement that.
Upvotes: 0
Views: 1462
Reputation: 1766
This appears to work for me with bash, but not at all with dash or busybox sh. I'm surprised it ever works at all. I would expect the shell to read a big chunk of input from its stdin, including both the command on the first line and the data after it, before processing anything. That would leave nothing remaining on the exec'ed command's stdin.
In practice, bash seems to read 1 byte at a time until it sees a newline, and then immediately execs, so everything is good.
dash and busybox sh do exactly what I expected, reading the whole input first.
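If you're on Linux and have strace handy, you can see the difference by watching the read() calls each shell makes on its stdin (strace output goes to stderr):
echo -ne 'exec wc -c\nabc' | strace -e read dash
echo -ne 'exec wc -c\nabc' | strace -e read bash
bash should show a long run of 1-byte reads up to the newline, while dash should show one larger read that also swallows the data.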
Can you do
echo -ne "exec wc -c <<'END'\nabc" | sh
instead? Possibly replacing END with END_adjfioe38999f3jf_END if you're worried about it appearing in the input?
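Note that for the here-document to be terminated properly you'd also want to send the delimiter after the data, e.g.:
printf "exec wc -c <<'END'\nabc\nEND\n" | sh
(this counts the trailing newline as well, so it prints 4 rather than 3).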
Update: though come to think of it, that fails your "don't buffer in memory" criterion. Err... ok, that's hard. If you can guarantee the command to run fits within 4k, then something horrible like this would "work":
echo -ne "exec wc -c\nabc" | perl -e 'sysread(STDIN, $buf, 4096); $buf =~ s/([^\n]*)\n//; open(my $pipe, "|-", $1); syswrite($pipe, $buf); open(STDOUT, ">&", $pipe); exec("cat")'
but that's hardly the portable shell-based solution you were looking for. (It reads a 4k chunk of input, grabs the command from it, forks and spawns off a shell running that command connected by a pipe, sends the rest of the initial chunk down the pipe, dups its own stdout to the pipe, and then execs cat to copy the remaining stdin a piece at a time down the pipe, so the spawned command receives it on its stdin. I think that's what you want to end up happening, just with simpler shell-based syntax.)
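For the /myfile case from your update, the same trick would look something like this (the file names are just placeholders):
{ printf 'exec cat - > /myfile\n'; cat big-data-file; } | perl -e 'sysread(STDIN, $buf, 4096); $buf =~ s/([^\n]*)\n//; open(my $pipe, "|-", $1); syswrite($pipe, $buf); open(STDOUT, ">&", $pipe); exec("cat")'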
Upvotes: 1