Reputation: 4034
I have a Perl program that takes two arguments: a dictionary file of English words, one per line, and a file of concatenated words, also one per line, something like this:
lovetoplayguitar
...
...
So normally the program is used like:
perl ./splitwords.pl words-en.txt bigfile.txt
It prints results to stdout.
I am trying to run it through GNU parallel like this:
time parallel -n 2 -j8 -k perl ./splitwords.pl {1} {2} ::: words-en.txt bigfile.txt > splitted.txt
but it doesn't work that way. I have tried many combinations so far but have been unable to run it using parallel.
EDIT
Actually this seems to be working, but it is using only one core. Why?
Upvotes: 1
Views: 684
Reputation: 33685
This will chop bigfile.txt into 1 MB chunks and run one job per chunk (--cat writes each chunk to a temporary file, which replaces {}):
cat bigfile.txt | parallel --pipe --cat -k perl ./splitwords.pl words-en.txt {}
If the Perl script only reads the file sequentially, then --fifo avoids writing the chunk to disk first and will be faster:
cat bigfile.txt | parallel --pipe --fifo -k perl ./splitwords.pl words-en.txt {}
Upvotes: 1