Reputation: 2777
For example, right now I'm using the following to change a couple of files whose Unix paths I wrote to a file:
cat file.txt | while read in; do chmod 755 "$in"; done
Is there a more elegant, safer way?
Upvotes: 267
Views: 300995
Reputation: 70852
Because the main purpose of a shell (sh, and other shells like bash) is to run other commands, there is not only one answer!
0. Shell command line expansion
1. xargs, the dedicated tool
2. while read and variants, with some remarks and considerations about parallel processing
3. while read -u, using a dedicated fd, for interactive processing (sample)
4. Running a shell with an inline generated script
Regarding the OP's request (running chmod on all targets listed in a file), xargs is the indicated tool. But for some other applications, small numbers of files, etc., the alternatives below may fit better.
0. Shell command line expansion
If your file is not too big and all names are well formed (without spaces or other special characters like quotes), you could use shell command line expansion. Simply:
chmod 755 $(<file.txt)
This command is the simplest one.
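Note that it relies on word splitting, so it breaks on names containing spaces. A quick illustration (with a hypothetical list.txt and filename):
printf '%s\n' 'my file.txt' > list.txt
chmod 755 $(<list.txt)    # expands to: chmod 755 my file.txt  (two separate arguments!)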
1. xargs is the right tool
For many binutils tools, like chown, chmod, rm, cp -t ...:
xargs chmod 755 <file.txt
It could be used after a pipe, on files found by find:
find /some/path -type f -uid 1234 -print | xargs chmod 755
If you have special chars and/or a lot of lines in file.txt:
xargs -0 chmod 755 < <(tr \\n \\0 <file.txt)
find /some/path -type f -uid 1234 -print0 | xargs -0 chmod 755
If your command needs to be run exactly once for each entry:
xargs -0 -n 1 chmod 755 < <(tr \\n \\0 <file.txt)
This is not needed for this sample, as chmod accepts multiple files as arguments, but it matches the title of the question.
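To see the batching difference, compare the two calls below (using echo as a stand-in command):
seq 1 3 | xargs echo        # one call:    echo 1 2 3
seq 1 3 | xargs -n 1 echo   # three calls: echo 1; echo 2; echo 3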
For some special cases, you could even define the location of the file argument in the commands generated by xargs:
xargs -0 -I '{}' -n 1 myWrapper -arg1 -file='{}' wrapCmd < <(tr \\n \\0 <file.txt)
Sample using seq 1 5 as input. Try this:
xargs -n 1 -I{} echo Blah {} blabla {}.. < <(seq 1 5)
Blah 1 blabla 1..
Blah 2 blabla 2..
Blah 3 blabla 3..
Blah 4 blabla 4..
Blah 5 blabla 5..
where your command is executed once per line.
Doing loops in the shell is generally a bad idea! There are a lot of warnings about doing loops in the shell! Before writing a loop, think about parallelisation and dedicated tools!
You could use bash to interact with and administer dedicated tools. Some samples: for and while loops, as in my Swap usage by process script.
2. while read and variants
For this, make sure to end the file with a newline character: read returns non-zero at end-of-file, so a final line without a trailing newline would be silently skipped.
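A quick demo of that last point (a sketch using printf): without the trailing newline, read returns non-zero on the final chunk, so the loop body never sees the last entry:
printf 'one\ntwo' | while read in; do echo "got: $in"; done
got: one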
As OP suggests,
cat file.txt |
while read in; do
    chmod 755 "$in"
done
will work, but there are 2 issues:
cat | is a useless fork, and
| while ... ; done will become a subshell whose environment will disappear after done.
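A quick demo of the subshell issue (with a hypothetical counter variable):
count=0
printf 'a\nb\n' | while read -r in; do ((count++)); done
echo "$count"    # prints 0: the increments happened in a subshell and were lost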
So this could be better written:
while read in; do
    chmod 755 "$in"
done < file.txt
But beware of $IFS and the read flags. See help read:
read: read [-r] ... [-d delim] ... [name ...]
    Reads a single line from the standard input...
    The line is split into fields as with word splitting, and the first word
    is assigned to the first NAME, the second word to the second NAME, and
    so on... Only the characters found in $IFS are recognized as word
    delimiters.
    ...
    Options:
      ...
      -d delim   continue until the first character of DELIM is read,
                 rather than newline
      ...
      -r         do not allow backslashes to escape any characters
      ...
    Exit Status:
      The return code is zero, unless end-of-file is encountered...
In some cases, you may need to use
while IFS= read -r in; do
    chmod 755 "$in"
done < file.txt
to avoid problems with strange filenames. And maybe, if you encounter problems with UTF-8:
while LANG=C IFS= read -r in; do
    chmod 755 "$in"
done < file.txt
While you use a redirection from standard input for reading file.txt, your script cannot read other input interactively (you cannot use standard input for anything else anymore).
while read, for a limited number of concurrent parallel tasks
If you plan to run a big number of repetitive tasks using multiprocessing, you could do something like:
maxProc=4
...
wait4oneTask() {
    wait -np epid              # Wait for any one job to end; store its PID in epid (bash >= 5.1)
    results[epid]=$?
    ...
    unset "running[$epid]"     # Drop the ended job from the list of running tasks
}
...
while read file; do
    ...
    exec {shC_Fd}>"${tmpLoc}_${file//\//_}"       # Open a per-file output; fd number stored in shC_Fd
    shellcheck -f gcc "$file" >&$shC_Fd 2>&1 &    # Run one task in the background
    lpid=$!
    ...
    running[lpid]=''
    ((${#running[@]}>=maxProc)) && wait4oneTask   # Throttle once maxProc tasks are running
done
while ((${#running[@]})); do    # Drain the remaining tasks
    wait4oneTask
done
This is extracted from my Parallel ShellCheck script, parShellCheck.sh, a sample of parallelized processing with overall statistics on the collected shellcheck remarks.
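For reference, here is a minimal, self-contained sketch of the same throttling pattern applied to the OP's chmod (my own illustration, not the original script; it assumes bash >= 5.1 for wait -n -p):
#!/usr/bin/env bash
maxProc=4
declare -A running results          # PIDs of running jobs / exit codes of finished jobs

wait4oneTask() {
    wait -n -p epid                 # Wait for any one background job; its PID lands in epid
    results[$epid]=$?
    unset "running[$epid]"
}

while IFS= read -r file; do
    chmod 755 "$file" &             # One background task per line
    running[$!]=''
    ((${#running[@]} >= maxProc)) && wait4oneTask
done < file.txt

while ((${#running[@]})); do        # Drain the remaining jobs
    wait4oneTask
done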
3. while read, using a dedicated fd, for interactive processing
The syntax while read ...; done <file.txt redirects standard input to come from file.txt. That means you won't be able to read anything else from standard input until the loop finishes.
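A sketch of the pitfall (with a hypothetical prompt inside the loop): both reads compete for the same redirected standard input, so the prompt silently consumes lines of file.txt instead of keyboard input:
while read -r file; do
    read -p "Process '$file' (y/n)? " ans   # BUG: this also reads from file.txt!
done < file.txt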
Using a dedicated fd instead will let you use more than one input simultaneously: you could merge two files (like here: scriptReplay.sh). Or maybe you plan to create an interactive tool: then you have to avoid using standard input and use some alternative file descriptor.
Constant file descriptors are: 0 for standard input, 1 for standard output and 2 for standard error.
You could see them by:
ls -l /dev/fd/
or
ls -l /proc/$$/fd/
From there, you have to choose an unused number between 0 and 63 (more, in fact, depending on the sysctl superuser tool) as your file descriptor.
For this demo, I will use file descriptor 7:
while read <&7 filename; do
    ans=
    while [ -z "$ans" ]; do
        read -p "Process file '$filename' (y/n)? " foo
        [ "$foo" ] && [ -z "${foo#[yn]}" ] && ans=$foo || echo '??'
    done
    if [ "$ans" = "y" ]; then
        echo Yes
        echo "Processing '$filename'."
    else
        echo No
    fi
done 7<file.txt
If you want to read your input file in several distinct steps, you have to use:
exec 7<file.txt      # Without spaces between `7` and `<`!
# ls -l /dev/fd/
read <&7 headLine
while read <&7 filename; do
    case "$filename" in
        *'----' ) break ;;   # break loop when line ends with four dashes.
    esac
    ....
done
read <&7 lastLine
exec 7<&-            # This will close file descriptor 7.
# ls -l /dev/fd/
Under bash, you could let it choose any free fd for you and store it into a variable:
exec {varname}</path/to/input
Then:
while read -ru ${fle} filename; do
    ans=
    while [ -z "$ans" ]; do
        read -rp "Process file '$filename' (y/n)? " -sn 1 foo
        [ "$foo" ] && [ -z "${foo/[yn]}" ] && ans=$foo || echo '??'
    done
    if [ "$ans" = "y" ]; then
        echo Yes
        echo "Processing '$filename'."
    else
        echo No
    fi
done {fle}<file.txt
Or
exec {fle}<file.txt
# ls -l /dev/fd/
read -ru ${fle} headline
while read -ru ${fle} filename; do
    [[ -n "$filename" ]] && [[ -z ${filename//*----} ]] && break
    ....
done
read -ru ${fle} lastLine
exec {fle}<&-
# ls -l /dev/fd/
4. Running a shell with an inline generated script
sed <file.txt 's/.*/chmod 755 "&"/' | sh
This won't optimise forks, but it could be useful for more complex (or conditional) operations:
sed <file.txt 's/.*/if [ -e "&" ];then chmod 755 "&";fi/' | sh
sed 's/.*/[ -f "&" ] \&\& echo "Processing: \\"&\\"" \&\& chmod 755 "&"/' \
file.txt | sh
This can be very useful if the sed input is a feed instead of a file. Practical sample: using rsync log output as sed input, for deleting the corresponding description file when a project file is deleted. See my answer to Remove file if a file with the same name but different extension doesn't exist in another directory, which differs a lot from what the SO asker expected.
Upvotes: 264
Reputation: 14360
Nowadays (in GNU/Linux), xargs is still the answer for this, but... you can now use the -a option to read input directly from a file:
xargs -a file.txt -n 1 -I {} chmod 755 {}
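Since chmod accepts multiple files as arguments, the -n 1 -I {} part can also be dropped for fewer forks:
xargs -a file.txt chmod 755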
Upvotes: 6
Reputation: 121407
Yes.
while read in; do chmod 755 "$in"; done < file.txt
This way you can avoid a cat process.
cat is almost always bad for a purpose such as this. You can read more about Useless Use of Cat.
Upvotes: 200
Reputation: 9556
If you have a nice selector (for example, all .txt files in a dir), you could do:
for i in *.txt; do chmod 755 "$i"; done
or a variant of yours:
while read line; do chmod 755 "$line"; done < file.txt
Upvotes: 24
Reputation:
The logic applies to many other objectives. For example: how to read the .sh_history of each user from the /home/ filesystem? What if there are thousands of them?
#!/bin/ksh
last | head -10 | awk '{print $1}' |
while IFS= read -r line
do
    su - "$line" -c 'tail .sh_history'
done
Here is the script https://github.com/imvieira/SysAdmin_DevOps_Scripts/blob/master/get_and_run.sh
Upvotes: 0
Reputation: 780
You can also use AWK, which can give you more flexibility to handle the file:
awk '{ print "chmod 755 "$0"" | "/bin/sh"}' file.txt
If your file has a field separator like:
field1,field2,field3
then, to use only the first field, do:
awk -F, '{ print "chmod 755 " $1 | "/bin/sh" }' file.txt
You can check more details in the GNU Awk documentation: https://www.gnu.org/software/gawk/manual/html_node/Very-Simple.html#Very-Simple
Upvotes: 5
Reputation: 6371
If you want to run your command in parallel for each line, you can use GNU Parallel:
parallel -a <your file> <program>
Each line of your file will be passed to the program as an argument. By default, parallel runs as many jobs as your CPU count, but you can specify the number with -j.
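For example, applied to the OP's case with 8 concurrent jobs (assuming GNU Parallel is installed):
parallel -j 8 -a file.txt chmod 755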
Upvotes: 19
Reputation: 247002
If you know you don't have any whitespace in the input:
xargs chmod 755 < file.txt
If there might be whitespace in the paths, and if you have GNU xargs:
tr '\n' '\0' < file.txt | xargs -0 chmod 755
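With GNU xargs you can also set the delimiter directly, which avoids the tr step:
xargs -d '\n' chmod 755 < file.txt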
Upvotes: 16
Reputation: 1765
I see that you tagged bash, but Perl would also be a good way to do this:
perl -p -e '`chmod 755 $_`' file.txt
You could also apply a regex to make sure you're getting the right files, e.g. to process only .txt files:
perl -p -e '`chmod 755 $_` if /\.txt$/' file.txt
To "preview" what's happening, just replace the backticks with double quotes and prepend print
:
perl -p -e 'if(/\.txt$/) print "chmod 755 $_"' file.txt
Upvotes: 3