Reputation: 767
What is the fastest way to find duplicate folders in a Linux file system?
This is my idea; I know that it's not efficient.
Here is a rough shell sketch of it:
# start from one directory, named d1
# put the file names in d1 into an array
files=( d1/* )
for f in "${files[@]}"; do
    name=$(basename "$f")
    # look elsewhere in the tree for files with the same name
    find . -iname "$name" | while read -r node; do
        father=$(dirname "$node")
        # check whether $father is that specific directory (d1)
        [[ "$father" != "./d1" ]] && echo "possible duplicate of $name in $father"
    done
done
It's not complete, and it reruns find for every single file name, so it will be slow on a large tree.
What is the most efficient way to do this?
Upvotes: 0
Views: 1393
Reputation: 15310
Try this:
#!/bin/bash
shopt -s dotglob    # include dotfiles in the globs
# Collect the content hash of every regular file in each directory
for file in "$1"/*; do [[ -f "$file" ]] && d1+=( "$(md5sum < "$file")" ); done
for file in "$2"/*; do [[ -f "$file" ]] && d2+=( "$(md5sum < "$file")" ); done
# Compare the sorted hash lists (one hash per line, so sort really reorders them)
[[ "$(printf '%s\n' "${d1[@]}" | sort)" == "$(printf '%s\n' "${d2[@]}" | sort)" ]] && echo "Same" || echo "Different"
Here is how it works:
$ mkdir 1 2
$ ./comparedirs 1 2
Same
$ cat > 1/1 <<< foo
$ cat > 2/1 <<< foo
$ ./comparedirs 1 2
Same
$ cat > 2/1 <<< bar
$ ./comparedirs 1 2
Different
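This compares exactly the two directories given as arguments. To find duplicate folders anywhere under a tree, which is closer to what the question asks, one option is to compute a single signature per directory from the sorted hashes of its files and report directories that share a signature. A minimal sketch, assuming GNU coreutils and bash 4 for associative arrays (the script name dupdirs and the signature scheme are my own, not part of the answer above):

#!/bin/bash
# dupdirs: report directories under $1 whose direct (non-recursive) file contents are identical
root=${1:-.}
declare -A seen    # content signature -> first directory seen with it

while IFS= read -r dir; do
    # signature = hash of the sorted content hashes of the regular files directly in $dir
    # note: empty directories all share the same signature
    sig=$(find "$dir" -maxdepth 1 -type f -exec md5sum {} + 2>/dev/null |
          awk '{print $1}' | sort | md5sum | awk '{print $1}')
    if [[ -n "${seen[$sig]}" ]]; then
        echo "duplicate: $dir == ${seen[$sig]}"
    else
        seen[$sig]=$dir
    fi
done < <(find "$root" -type d)

Run as ./dupdirs /some/path. Hashing each file once and comparing per-directory signatures avoids rerunning find for every file name, which is the main cost in the pseudocode from the question; like the answer above, it only looks at the files directly inside each directory, not at nested subdirectories.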
Upvotes: 1