Reputation: 1818
I have a directory of files, myFiles/, and a text file values.txt in which one column is a set of values to find, and the second column is the corresponding replace value.
The goal is to replace all instances of the find values (first column of values.txt) with the corresponding replace values (second column of values.txt) in all of the files located in myFiles/.
For example, values.txt:
Hello Goodbye
Happy Sad
Running the command would replace all instances of "Hello" with "Goodbye" in every file in myFiles/, as well as replace every instance of "Happy" with "Sad" in every file in myFiles/.
I've made as many attempts with awk, sed, and so on as I could think of, but have failed to produce a command that performs the desired action.
Any guidance is appreciated. Thank you!
Upvotes: 1
Views: 2038
Reputation: 7837
You can do what you want with awk.
#! /usr/bin/awk -f
# snarf in first file, values.txt
FNR == NR {
    subs[$1] = $2
    next
}
# apply replacements to subsequent files
{
    for (old in subs) {
        # rebuild the line left to right so replaced text is not re-matched
        out = ""
        rest = $0
        while (start = index(rest, old)) {
            out = out substr(rest, 1, start - 1) subs[old]
            rest = substr(rest, start + length(old))
        }
        $0 = out rest
    }
    print
}
When you invoke it, put values.txt as the first file to be processed.
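For instance, a full round trip might look like this (a sketch; the sample file names are illustrative, and because awk writes to stdout rather than editing in place, each file is rewritten through a temporary file):

```shell
# illustrative sample data; in practice values.txt and myFiles/ already exist
mkdir -p myFiles
printf 'Hello Goodbye\nHappy Sad\n' > values.txt
printf 'Hello world\nHappy day\n' > myFiles/a.txt

# values.txt must come first so the FNR == NR rule loads the substitution table
for f in myFiles/*; do
    awk 'FNR == NR { subs[$1] = $2; next }
         {
             for (old in subs) {
                 # rebuild the line left to right so replaced text is not re-matched
                 out = ""
                 rest = $0
                 while (start = index(rest, old)) {
                     out = out substr(rest, 1, start - 1) subs[old]
                     rest = substr(rest, start + length(old))
                 }
                 $0 = out rest
             }
             print
         }' values.txt "$f" > "$f.tmp" && mv "$f.tmp" "$f"
done
```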
Upvotes: 1
Reputation: 1005
Read values.txt line by line and run sed to replace the 1st word with the 2nd word in all files in the myFiles/ directory.
Note: I've used bash parameter expansion to split the line (${line% *} etc.), assuming values.txt is a space-separated, 2-column file. If that's not the case, you may use awk or cut to split the line.
while read -r line; do
    # '-i' edits files in place and 'g' replaces all occurrences of the pattern
    sed -i "s/${line% *}/${line#* }/g" myFiles/*
done < values.txt
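A variant that lets read do the splitting into two variables may be easier to follow (a sketch; the sample data below is illustrative, and GNU sed's -i is assumed):

```shell
# illustrative sample data; in practice values.txt and myFiles/ already exist
mkdir -p myFiles
printf 'Hello Goodbye\nHappy Sad\n' > values.txt
printf 'Hello world, Happy day\n' > myFiles/a.txt

# read splits each line at the first space into the find and replace columns
while read -r find repl; do
    sed -i "s/$find/$repl/g" myFiles/*   # '-i' edits in place, 'g' replaces all occurrences
done < values.txt
```

Note that sed interprets the find column as a regular expression, so metacharacters such as . or / would need escaping.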
Upvotes: 1
Reputation: 3373
Option One:
Option Two: for each find/replace pair, invoke

perl -p -i -e 's/from/to/g' dirname/*.txt

The second is probably easier to write but offers less exception handling. It's called 'perl pie' and it's a relatively famous hack for doing find/replace in lots of files at once.
Upvotes: 0