Reputation: 31
I want to remove all empty lines from a text file. For a single file I can do it with:
grep '[^[:blank:]]' < file1.dat > file1.dat.nospace
But I need to do it for n files in a directory. How can I do that?
Any help would be appreciated. Thanks!
Upvotes: 0
Views: 155
Reputation: 4340
Here is a way:
for filename in *.dat; do
grep '[^[:blank:]]' < "$filename" > "$filename.nospace"
done
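One caveat: if no .dat files exist, the loop runs once with the literal string *.dat. In bash this can be avoided with nullglob; a minimal sketch:
shopt -s nullglob
for filename in *.dat; do
grep '[^[:blank:]]' < "$filename" > "$filename.nospace"
done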
Here is a more robust way, one that works in a larger variety of circumstances:
find . -maxdepth 1 -type f -name "*.dat" | while IFS= read -r filename; do
grep '[^[:blank:]]' < "$filename" > "$filename.nospace"
done
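If the file names may contain newlines or other unusual characters, a null-delimited pipeline is safer still; here is a sketch, assuming GNU find and bash:
find . -maxdepth 1 -type f -name '*.dat' -print0 | while IFS= read -r -d '' filename; do
grep '[^[:blank:]]' < "$filename" > "$filename.nospace"
done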
Here is a much faster way (in execution time, but also in typing); this is the way I would actually do it:
find *.dat -printf "grep '[^[:blank:]]' < \"%f\" > \"%f.nospace\"\n" | sh
Here is a more robust version of that:
find . -maxdepth 1 -type f -name "*.dat" -printf "grep '[^[:blank:]]' < \"%f\" > \"%f.nospace\"\n" | sh
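If you would rather not generate shell code from file names (names containing quotes would break the pipelines above), roughly the same thing can be done with -exec; a sketch:
find . -maxdepth 1 -type f -name '*.dat' -exec sh -c '
for f; do grep "[^[:blank:]]" < "$f" > "$f.nospace"; done
' sh {} +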
P.S. Here is the grep that removes only truly empty lines (the pattern above also drops lines that contain nothing but whitespace):
grep -v '^$' < "$filename" > "$filename.nospace"
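To illustrate the difference on a small made-up sample file (test.dat is just an example name):
printf 'a\n\n \t \nb\n' > test.dat
grep '[^[:blank:]]' < test.dat   # prints a and b; both the empty and the whitespace-only line are dropped
grep -v '^$' < test.dat          # prints a, the whitespace-only line, and b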
Upvotes: 1
Reputation: 4612
This one-liner could probably help you:
for a in /path/to/file_pattern*; do sed '/^\s*$/d' "$a" > "$a.nospace"; done
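Note that \s in sed is a GNU extension; if this needs to run on BSD/macOS sed as well, the POSIX character class should work everywhere. A sketch:
for a in /path/to/file_pattern*; do sed '/^[[:space:]]*$/d' "$a" > "$a.nospace"; done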
Upvotes: 1
Reputation: 785196
You can use sed with find:
find . -name '*.dat' -exec sed -i.bak '/^[[:blank:]]*$/d' {} +
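The -i.bak leaves a .bak backup next to each edited file (and BSD/macOS sed requires a suffix argument with -i anyway, though it may be empty). Once you have checked the results, the backups can be removed, for example:
find . -name '*.dat.bak' -type f -delete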
Upvotes: 3