Reputation: 227
I have tried a few things but don't seem to be making any progress. I have a text file with lines of data, and I want to extract certain lines from it. Each line has a unique identifier that I can grep for.
If I use
grep 'name1\|name2\|name3\|name4' file.txt > newfile.txt
it does the job and extracts the lines I want. However, I want the lines in the order in which I specified the patterns: in this example, the name1 lines first, then the name2 lines, then name3, and finally name4.
But if, say, the lines in my original file appear in the order name2, then name4, then name3, then name1, the output file keeps that order too.
Is there a way to order the grep easily?
The ids are block-sorted, so all lines with name1 for example occur next to each other.
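To illustrate, file.txt might look something like this (made-up data, with the ids block-sorted):
name2 data-a
name2 data-b
name4 data-c
name3 data-d
name1 data-e
name1 data-f
and the output I want would have the two name1 lines first, then the name2 lines, then name3, then name4.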
Thanks for any advice!
Upvotes: 4
Views: 4359
Reputation: 159
I was looking for the same command, and I figured it out.
Step 1: Make a file for your grep search: put one keyword per line. For example, I made the file mySearch.txt:
more mySearch.txt
name2
name3
name1
name4
name2
name1
Step 2: Now use this command
grep -Fwf mySearch.txt file.txt > newfile.txt
or
cat file.txt | grep -Fwf mySearch.txt > newfile.txt
This command prints every matching line to newfile.txt. Note, though, that grep emits matches in the order they occur in file.txt, not in the order of the keywords in mySearch.txt.
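If the keyword order matters, a small loop over mySearch.txt preserves it (a sketch using the same filenames as above):
while read -r key; do
    grep -Fw -- "$key" file.txt
done < mySearch.txt > newfile.txt
Each pass prints one keyword's matches before moving on, so the output follows the order of mySearch.txt.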
Upvotes: 0
Reputation: 67231
Perl is well suited to this kind of search-and-print.
perl -lne '/name1/?push @a,$_:
(/name2/?push @b,$_:
(/name3/?push @c,$_:
/name4/?push @d,$_:next));
END{print join "\n",@a,@b,@c,@d}' your_file
Tested below:
> cat temp
1 name1
2 name2
3 name3
4 name1
5 name4
6 name2
7 name1
> perl -lne '/name1/?push @a,$_:(/name2/?push @b,$_:(/name3/?push @c,$_:/name4/?push @d,$_:next));END{print join "\n",@a,@b,@c,@d}' temp
1 name1
4 name1
7 name1
2 name2
6 name2
3 name3
5 name4
>
Good question though :)
Upvotes: -1
Reputation: 189427
You can use an Awk array.
awk 'BEGIN { k[1]="name1"; k[2]="name2"; k[3]="name3"; k[4]="name4" }
{ for (i=1; i<=4; ++i) if ($0 ~ k[i]) m[i]=(m[i]?m[i] RS:"") $0 }
END { for (i=1; i<=4; ++i) if (m[i]) print m[i] }' file
This will produce duplicates if a line matches multiple expressions. It could be optimized somewhat if you need it to be fast; just ask.
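For instance, a hypothetical line containing two of the keys gets collected (and later printed) under both; you can check this by running the same script with just two keys:
echo 'x name1 name2' | awk 'BEGIN { k[1]="name1"; k[2]="name2" }
{ for (i=1; i<=2; ++i) if ($0 ~ k[i]) m[i]=(m[i]?m[i] RS:"") $0 }
END { for (i=1; i<=2; ++i) if (m[i]) print m[i] }'
This prints the same line twice, once per key.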
Or in Perl:
perl -ne 'BEGIN { @k = qw( name1 name2 name3 name4 );
$k = join("", "(", join("|", @k), ")");
$r = qr($k); }
if(m/$r/) { push @{$m{$1}}, $_ }
END { for $i (@k) { if ($m{$i}) {
print join("", @{$m{$i}}); } } }' file
This is probably somewhat more efficient than the equivalent Awk script. It will only find one match per line, so it is not exactly equivalent.
Upvotes: 1
Reputation: 84363
Given a file of words to grep for such as the following:
root
lp
syslog
nobody
you can use a read loop to repeatedly grep for fixed strings in another file. For example, using the Bash shell's default REPLY variable and a word file stored in the /tmp directory, this will work:
while read -r; do
    grep --fixed-strings "$REPLY" /etc/passwd
done < /tmp/words
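Applied to the question's files, the same loop works; you only need a words file listing the names in the desired order (filenames taken from the question):
printf '%s\n' name1 name2 name3 name4 > /tmp/words
while read -r; do
    grep --fixed-strings "$REPLY" file.txt
done < /tmp/words > newfile.txt
Since the words file is read top to bottom, the matches land in newfile.txt in exactly that order.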
Upvotes: 7
Reputation: 1899
One admittedly lame solution: run the grep command multiple times, once per name (you can paste these into a shell script and run that):
grep 'name1' file.txt > newfile.txt
grep 'name2' file.txt >> newfile.txt
grep 'name3' file.txt >> newfile.txt
grep 'name4' file.txt >> newfile.txt
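The same idea as a loop, to avoid repeating yourself (a sketch; the pattern list comes from the question):
for name in name1 name2 name3 name4; do
    grep -- "$name" file.txt
done > newfile.txt
file.txt is still scanned once per pattern, but redirecting the whole loop means newfile.txt is opened only once.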
Hope this helps!
Upvotes: 0