Reputation: 123
I want to delete specific strings from a file. I tried to use:
for line3 in $(cat 2.txt)
do
if grep -Fxq $line3 4.txt
then
sed -i /$line3/d 4.txt
fi
done
I want this code to delete lines from 4.txt if they also appear in 2.txt, but this loop deletes all lines from 4.txt and I have no idea why. Can someone tell me what is wrong with this code?
2.txt:
a
ab
abc
4.txt:
a
abc
abcdef
Upvotes: 1
Views: 294
Reputation: 785098
You can do this via a single awk command:
awk 'ARGV[1] == FILENAME && FNR==NR {a[$1];next} !($1 in a)' 2.txt 4.txt
abcdef
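The same command, spread over a few lines with comments to show what each part does:
awk '
  ARGV[1] == FILENAME && FNR==NR {   # true only while reading the first file, 2.txt
      a[$1]                          # remember its first field (the whole line here) as an array key
      next                           # and skip the filter below
  }
  !($1 in a)                         # for 4.txt: print lines whose first field is not a stored key
' 2.txt 4.txt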
To store the output back to 4.txt, use:
awk 'ARGV[1] == FILENAME && FNR==NR {a[$1];next} !($1 in a)' 2.txt 4.txt > _tmp && mv _tmp 4.txt
PS: Added ARGV[1] == FILENAME && to take care of the empty-file case, as noted by @pjh below.
Upvotes: 1
Reputation: 25023
Look ma', only sed...
sed $( sed 's,^, -e /^,;s,$,$/d,' 2.txt ) 4.txt
The inner sed wraps each line of 2.txt in a delete command, e.g., abc -> -e /^abc$/d, and the outer sed then applies all of those expressions to 4.txt.
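With the sample 2.txt above, the generated command line looks roughly like this:
sed -e /^a$/d -e /^ab$/d -e /^abc$/d 4.txt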
To store the output back to 4.txt, use:
sed -i $( sed 's,^, -e /^,;s,$,$/d,' 2.txt ) 4.txt
Edit: while I love my answer on an aesthetic basis, please don't try this at home! See pjh's comment below for a detailed rationale of the many ways in which my microscript may fail.
Upvotes: 0
Reputation: 8064
Using just Bash (4) builtins:
declare -A found
# Record every line of 2.txt as a key of the associative array
while IFS= read -r line || [[ $line ]] ; do found[$line]=1 ; done <2.txt
# Print only the lines of 4.txt that were not recorded above
while IFS= read -r line || [[ $line ]] ; do
    (( ${found[$line]-0} )) || printf '%s\n' "$line"
done <4.txt
The '[[ $line ]]' tests are to handle files with unterminated last lines.
Use 'printf' instead of 'echo' in case any of the output lines begin with 'echo' options.
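If you also want to overwrite 4.txt with the result, one option (a sketch; 4.txt.tmp is just an arbitrary temporary name) is to redirect the second loop and rename:
while IFS= read -r line || [[ $line ]] ; do
    (( ${found[$line]-0} )) || printf '%s\n' "$line"
done <4.txt >4.txt.tmp && mv 4.txt.tmp 4.txt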
Upvotes: 0
Reputation: 2376
grep -F -v -x -f 2.txt 4.txt
or
grep -Fvxf 2.txt 4.txt
or
fgrep -vxf 2.txt 4.txt
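All three are equivalent: -F matches fixed strings rather than regular expressions, -v inverts the match, -x requires whole-line matches, and -f reads the patterns from 2.txt. To write the result back to 4.txt you could go through a temporary file, e.g. (note that grep exits with status 1 when nothing matches, so the mv is not chained with &&):
grep -Fvxf 2.txt 4.txt > _tmp; mv _tmp 4.txt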
Upvotes: 1