Reputation: 109
I've seen a few answers regarding this, but as a newbie, I don't really understand how to implement that in my script.
It should be pretty easy (for those who know how to do this kind of thing).
I'm using a simple
for f in "/drive1/"images*.{jpg,png}; do
but this overloads the command line and gives me
Argument list too long
How is this easiest solved?
Upvotes: 3
Views: 4386
Reputation: 71027
The argument list length is limited by your system configuration.
getconf ARG_MAX
2097152
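If GNU xargs is available, it can also report the effective limits it will use when building command lines (a quick check, assuming GNU findutils):
# prints the system's argument-size limits and exits (no input consumed)
xargs --show-limits </dev/null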
But after discussing the differences between bash specifics and system (OS) limitations (see the comments), this question seems to be based on a misunderstanding:
Regarding the discussion in the comments, the OP tried something like:
ls "/simple path"/image*.{jpg,png} | wc -l
bash: /bin/ls: Argument list too long
This happens because of an OS limitation, not bash!
But tested with the OP's loop construct, this works fine:
for file in ./"simple path"/image*.{jpg,png} ;do echo -n a;done | wc -c
70980
Or more simply:
printf "%c" ./"simple path"/image*.{jpg,png} | wc -c
But to get a simple count of files by extension:
find "/simple path/" -maxdepth 1 -type f \( -iname *.jpg -printf JPG\\n \) -o \
\( -iname *.gif -printf GIF\\n \) |
sort |
uniq -c
May output something like:
1358 GIF
1105 JPG
Or recursively, counting folders too:
find "/simple path/" -type d -printf DIR\\n -o -type f \( -iname *.jpg -printf JPG\\n \) -o \
\( -iname *.gif -printf GIF\\n \) |
sort |
uniq -c
738 DIR
1358 GIF
1105 JPG
Or, to show a quick statistic of all extensions, you could run:
find "/simple path/" -type f -print |
sed -ne 's/^.*\.//gp' |
tr a-z A-Z |
sort |
uniq -c |
sort -n
Another quick, immediate workaround: you could reduce the argument length by changing into the directory first:
cd "/drive1/"
ls images*.{jpg,png} | wc -l
But when the number of files grows, you'll hit the limit again...

The robust way is to let find build the command lines for you:
find "/drive1/" -type f \( -name '*.jpg' -o -name '*.png' \) -exec myscript {} +
If you want this to NOT be recursive, you may add -maxdepth 1 as the first option:
find "/drive1/" -maxdepth 1 -type f \( -name '*.jpg' -o -name '*.png' \) \
-exec myscript {} +
There, myscript will be run with the filenames as arguments. The command line for myscript is built up until it reaches the system-defined limit, for example:
myscript /drive1/file1.jpg '/drive1/File Name2.png' /drive1/...
From man find:
-exec command {} +
    This variant of the -exec action runs the specified command on the selected files, but the command line is built by appending each selected file name at the end; the total number of invocations of the command will be much less than the number of matched files. The command line is built in much the same way that xargs builds its command lines. Only one instance of `{}' ...
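To see this batching in action, here is a small sketch (assuming GNU find and a POSIX sh) that prints how many arguments each invocation receives; with a very large number of files you would see more than one line of output:
# each batch started by find reports how many file names it was given
find "/drive1/" -maxdepth 1 -type f \( -name '*.jpg' -o -name '*.png' \) \
    -exec sh -c 'echo "one invocation with $# files"' sh {} +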
You could write your script like this:
#!/bin/bash

# Directories to search (spaces in names are fine)
target=( "/drive1" "/Drive 2/Pictures" )

# When called with --run, re-exec this same script through find,
# which passes the matching files as arguments, in batches
[[ $1 == --run ]] &&
    exec find "${target[@]}" -type f \( -name '*.jpg' -o -name '*.png' \) \
        -exec "$0" {} +

# Otherwise, process every filename received as an argument
for file; do
    echo Process "$file"
done
Then you have to run this with --run as the argument.
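For example, assuming the script is saved as myscript and is executable:
./myscript --run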
This will:
- work with any number of files! (recursively; see the -maxdepth option)
- permit many targets
- permit spaces and special characters in file and directory names
- let you run the same script directly on files, without --run:
./myscript hello world 'hello world'
Process hello
Process world
Process hello world
Using arrays, you could do things like:
# Expand the globs into an array (no exec, so no ARG_MAX limit)
allfiles=( "/drive 1"/images*.{jpg,png} )

# "$allfiles" is the first element: if it is not a regular file,
# the globs did not match anything
[ -f "$allfiles" ] || { echo No file found.; exit ;}

echo Number of files: ${#allfiles[@]}
for file in "${allfiles[@]}"; do
    echo Process "$file"
done
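If you then need to hand those filenames to an external command, a minimal sketch (using a hypothetical external command named process-images) is to slice the array into batches small enough to stay under the limit:
batch=1000    # files per invocation, comfortably under ARG_MAX
for (( i = 0; i < ${#allfiles[@]}; i += batch )); do
    # "${allfiles[@]:i:batch}" expands to at most $batch elements
    process-images "${allfiles[@]:i:batch}"    # hypothetical command
done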
Upvotes: 3
Reputation: 141930
There's also a while read loop:
find "/drive1/" -maxdepth 1 -mindepth 1 -type f \( -name '*.jpg' -o -name '*.png' \) |
while IFS= read -r file; do
or with zero terminated files:
find "/drive1/" -maxdepth 1 -mindepth 1 -type f \( -name '*.jpg' -o -name '*.png' \) -print0 |
while IFS= read -r -d '' file; do
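Note that piping into while runs the loop in a subshell, so variables set inside it are lost afterwards; a minimal sketch of the usual workaround, using process substitution, could look like this:
count=0
while IFS= read -r -d '' file; do
    echo Process "$file"
    (( count++ ))
done < <(find "/drive1/" -maxdepth 1 -mindepth 1 -type f \
             \( -name '*.jpg' -o -name '*.png' \) -print0)
echo "Processed $count files"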
Upvotes: 0