Reputation: 13
I have several .seq files containing text. I want to get a single text file containing:
name_of_the_seq_file1
contents of file 1
name_of_the_seq_file2
contents of file 2
name_of_the_seq_file3
contents of file 3
...
All the files are in the same directory. Is it possible with awk or something similar? Thanks!
Upvotes: 1
Views: 50
Reputation: 3451
perl -lpe 'print $ARGV if $. == 1; close(ARGV) if eof' *.seq
$. is the line number.
$ARGV is the name of the current file.
close(ARGV) if eof resets the line number at the end of each file.
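To collect everything into the single text file the question asks for, just redirect the output (combined.txt is only a placeholder name):
perl -lpe 'print $ARGV if $. == 1; close(ARGV) if eof' *.seq > combined.txt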
Upvotes: 0
Reputation: 158020
You can use the following command:
awk 'FNR==1{print FILENAME}1' *.seq
FNR is the record number (the line number, by default) of the current input file. Each time awk starts reading another file, FNR==1 is true for that file's first line, so the current filename gets printed through {print FILENAME}.
The trailing 1 is an awk idiom: it always evaluates to true, which makes awk print every line of input.
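As a minimal sketch, assuming two hypothetical files a.seq (containing the line foo) and b.seq (containing the line bar), the command prints:
a.seq
foo
b.seq
bar
Appending > combined.txt writes that listing into a single text file.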
Note:
The above solution works only as long as you have no empty files in that folder. Check Ed Morton's great answer which points this out.
Upvotes: 2
Reputation: 203665
FNR==1 is never true for a file that has no lines, so an empty file's name would silently be skipped. If there can be empty files then you need:
with GNU awk (whose BEGINFILE rule runs for every input file, even an empty one):
awk 'BEGINFILE{print FILENAME}1' *.seq
with other awks (untested):
awk '
FNR==1 {
    # ARGIND is used here as a plain user variable that walks through ARGV:
    # any filenames skipped since the previous file (i.e. empty files) are
    # printed before the name of the file now being read
    for (++ARGIND;ARGV[ARGIND]!=FILENAME;ARGIND++) {
        print ARGV[ARGIND]
    }
    print FILENAME
}
{ print }
END {
    # print the names of any trailing empty files that were never read
    for (++ARGIND;ARGIND in ARGV;ARGIND++) {
        print ARGV[ARGIND]
    }
}' *.seq
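A quick sketch that shows the difference (hypothetical file names in a fresh directory; empty.seq deliberately has no lines):
printf 'data\n' > a.seq
: > empty.seq
awk 'FNR==1{print FILENAME}1' *.seq       # output: a.seq, data -- empty.seq never appears
gawk 'BEGINFILE{print FILENAME}1' *.seq   # output: a.seq, data, empty.seq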
Upvotes: 2