Reputation: 1441
I'm looking for a way to remove lines from multiple CSV files, in bash using sed, awk or anything appropriate, where the line ends in 0.
So there are multiple csv files, their format is:
EXAMPLEfoo,60,6
EXAMPLEbar,30,10
EXAMPLElong,60,0
EXAMPLEcon,120,6
EXAMPLEdev,60,0
EXAMPLErandom,30,6
So the file will be amended to:
EXAMPLEfoo,60,6
EXAMPLEbar,30,10
EXAMPLEcon,120,6
EXAMPLErandom,30,6
A problem I can see arising is distinguishing between multi-digit values that merely end in zero (like 30 or 60) and a final field that is exactly 0.
So any ideas?
Upvotes: 3
Views: 7695
Reputation: 36229
I would tend to sed, but there is an egrep (or grep -E) solution too:
egrep -v ",0$" example.csv
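Note that egrep only prints the kept lines to stdout; if the files really have to be rewritten, you still need to put the output back yourself. A minimal sketch for several files, assuming a temporary file per input:
for f in *.csv; do
    # keep only lines whose last field is not exactly 0
    grep -E -v ',0$' "$f" > "$f.tmp" && mv "$f.tmp" "$f"
done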
Upvotes: 2
Reputation: 3711
For this particular problem, sed is perfect, as the others have pointed out. However, awk is more flexible, i.e. you can filter on an arbitrary column:
awk -F, '$3!=0' test.csv
This will print the entire line if column 3 is not 0.
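To make the "arbitrary column" point concrete, the same test works against any field; the column and value here are purely illustrative, e.g. dropping rows whose second field is 60:
awk -F, '$2!=60' test.csv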
Upvotes: 5
Reputation: 342303
You can also use awk:
$ awk -F"," '$NF!=0' file
EXAMPLEfoo,60,6
EXAMPLEbar,30,10
EXAMPLEcon,120,6
EXAMPLErandom,30,6
This just says: check the last field for 0 and don't print the line if it's found.
Upvotes: 2
Reputation: 19717
Use sed to remove only lines ending with ",0":
sed '/,0$/d'
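Since the question asks for the files themselves to be amended, GNU sed's -i option can apply this in place across all the CSVs at once (BSD/macOS sed wants -i '' instead); a sketch:
sed -i '/,0$/d' *.csv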
Upvotes: 2
Reputation: 15157
Using your file, something like this?
$ sed '/,0$/d' test.txt
EXAMPLEfoo,60,6
EXAMPLEbar,30,10
EXAMPLEcon,120,6
EXAMPLErandom,30,6
Upvotes: 9