rath

Reputation: 4069

Storing process output in files based on the output (BASH)

Here's the problem: My process writes a series of data to stdout in the following format:

[1] i: 0 X: 0 Y: 0
[1] i: 1 X: 1 Y: 0
[2] i: 0 X: 0 Y: 0
[2] i: 1 X: 1 Y: 0
[2] i: 2 X: 2 Y: 0
[4] i: 0 X: 0 Y: 0

It's meant to be a distributed solution written in MPI.

What I want to do is put the output into different files depending on the value between the brackets (that's the processor ID) so that I can more easily find where each process crashes.

My approach so far is to run

test.sh > out | grep '\[2\]'

I then use grep on out for each number of interest. Later I did this

cat out | grep '\[2\]' > out.2

to store the result of each process (hint: I am a bash novice). My question is this:

How can I do something like

test.sh > out | grep '\[${N}\]' > out.${N}

where the result of each process is sent to its own file? The processes don't write to files so the solution has to be in bash (or perhaps even awk).

EDIT 1:
The processes don't talk to each other, so there's no need to preserve the order in which each process wrote to stdout.

Upvotes: 2

Views: 123

Answers (1)

Kent

Reputation: 195209

You could pipe the output of your test.sh to awk:

test.sh | awk -F'[][]' '{print $0 > ("out."$2)}'

This one-liner generates the out.N files for you.
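A quick note on how the field separator works (my addition, not part of the answer): `-F'[][]'` is a regex character class matching either `[` or `]`, so for a line like `[2] i: 0 X: 0 Y: 0`, `$1` is the empty text before the opening bracket, `$2` is the processor ID, and `$3` is the rest of the line:

```shell
# Demonstrate how -F'[][]' splits the fields: $2 holds the bracketed ID.
echo "[2] i: 0 X: 0 Y: 0" | awk -F'[][]' '{print "id=" $2}'
# prints: id=2
```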

test with your example input:

kent$  echo "[1] i: 0 X: 0 Y: 0
[1] i: 1 X: 1 Y: 0
[2] i: 0 X: 0 Y: 0
[2] i: 1 X: 1 Y: 0
[2] i: 2 X: 2 Y: 0
[4] i: 0 X: 0 Y: 0"|awk -F'[][]' '{print $0 > ("out."$2)}'

kent$  head out*
==> out.1 <==
[1] i: 0 X: 0 Y: 0
[1] i: 1 X: 1 Y: 0

==> out.2 <==
[2] i: 0 X: 0 Y: 0
[2] i: 1 X: 1 Y: 0
[2] i: 2 X: 2 Y: 0

==> out.4 <==
[4] i: 0 X: 0 Y: 0
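Since the question also asked about a plain-bash solution, here is a sketch of an equivalent loop (my alternative, not part of the answer above) that extracts the bracketed ID with parameter expansion and appends each line to its own file:

```shell
#!/bin/sh
# Pure-shell alternative to the awk one-liner: strip the brackets
# around the processor ID and append each line to out.$id.
# Sample input stands in for test.sh's output; in real use you
# would pipe test.sh into the while loop instead.
printf '%s\n' \
  "[1] i: 0 X: 0 Y: 0" \
  "[2] i: 0 X: 0 Y: 0" \
  "[2] i: 1 X: 1 Y: 0" |
while IFS= read -r line; do
    id=${line#\[}     # drop the leading "["
    id=${id%%]*}      # drop "]" and everything after it
    printf '%s\n' "$line" >> "out.$id"
done
```

The awk version is still preferable for large outputs, since it opens each file once rather than re-opening it for every line.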

Upvotes: 2
