Reputation: 213
I have a solution for my question:
find . -type d -exec sh -c 'test $(find "$0" -maxdepth 1 -type d | wc -l) -eq 1' {} \; -print
I wonder whether there is a better (faster) method to do this. I don't really like starting another find process inside a find.
Upvotes: 4
Views: 128
Reputation: 123448
man find
lists this option:
-links n
File has n links.
You're looking for directories that have exactly two links (namely their own .
entry and their name). The following returns directories without subdirectories:
find . -type d -links 2
Each directory on a normal Unix filesystem has at least 2 hard links: the entry for its name
in its parent directory, and its own . entry. Each of its subdirectories (if any) adds one
more link through its .. entry, so a directory with exactly 2 links has no subdirectories.
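To see the counting in action, here is a minimal sketch (assuming GNU coreutils stat; the demo paths are made up for illustration):
mkdir -p demo/leaf demo/parent/child
stat -c '%h %n' demo/leaf      # 2 demo/leaf   (its name + its own .)
stat -c '%h %n' demo/parent    # 3 demo/parent (its name + . + child's ..)
find demo -type d -links 2     # prints demo/leaf and demo/parent/child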
Upvotes: 4
Reputation: 784928
With a little more coding, the following command should also work:
find . -type d|awk 'NR>1{a[c++]=$0; t=t $0 SUBSEP} END{for (i in a) {if (index(t, a[i] "/") > 0) delete a[i]} for (i in a) print a[i]}'
Making it more readable:
find . -type d | awk 'NR > 1 {                  # skip line 1, which is "." itself
    a[c++] = $0                                 # remember every directory seen
    t = t $0 SUBSEP                             # concatenate all paths into one lookup string
}
END {
    for (i in a)                                # a directory that has a subdirectory shows up
        if (index(t, a[i] "/") > 0)             # elsewhere in t as a prefix followed by "/"
            delete a[i]
    for (i in a)                                # whatever survived has no subdirectories
        print a[i]
}'
While this solution involves more code, on a big directory this awk-based command should run much faster than the embedded find | wc
solution from the question.
Performance Testing:
I ran it on a directory tree containing 15k+ nested subdirectories and found this awk command considerably faster (250-300% faster) than the OP's find | wc
command.
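If you want to reproduce such a comparison on your own tree, a rough harness would be the two commands from this page, with output discarded so only the traversal is timed (absolute numbers will vary with filesystem and cache state):
time find . -type d -exec sh -c 'test $(find "$0" -maxdepth 1 -type d | wc -l) -eq 1' {} \; -print > /dev/null
time find . -type d | awk 'NR>1{a[c++]=$0; t=t $0 SUBSEP} END{for (i in a) if (index(t, a[i] "/") > 0) delete a[i]; for (i in a) print a[i]}' > /dev/null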
Upvotes: 1