Reputation: 7705
I'm using find to list all files in a directory, so I get a list of paths. However, I need only the file names: I get ./dir1/dir2/file.txt and I want to get file.txt.
Upvotes: 385
Views: 426085
Reputation: 13079
As others have pointed out, you can combine find and basename, but by default the basename program will only operate on one path at a time, so the executable will have to be launched once for each path (using either find ... -exec or find ... | xargs -n 1), which may potentially be slow.
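For reference, the per-path forms mentioned above would look something like this (a sketch, reusing the /dir1 example from below):
find /dir1 -type f -exec basename {} \;
find /dir1 -type f -print0 | xargs -0 -n 1 basename
Both print the same names, just with one basename process per file.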
If you use the -a option on basename, then it can accept multiple filenames in a single invocation, which means that you can then use xargs without the -n 1, grouping the paths together into a far smaller number of invocations of basename, which should be more efficient.
Example:
find /dir1 -type f -print0 | xargs -0 basename -a
Here I've included the -print0 and -0 options (which should be used together) in order to cope with any whitespace inside the names of files and directories.
Here is a timing comparison between the xargs basename -a and xargs -n1 basename versions. (For the sake of a like-for-like comparison, the timings reported here are from after an initial dummy run, so that both are done with the file metadata already in the I/O cache.) I have piped the output to cksum in both cases, just to demonstrate that the output is independent of the method used.
$ time sh -c 'find /usr/lib -type f -print0 | xargs -0 basename -a | cksum'
2532163462 546663
real 0m0.063s
user 0m0.058s
sys 0m0.040s
$ time sh -c 'find /usr/lib -type f -print0 | xargs -0 -n 1 basename | cksum'
2532163462 546663
real 0m14.504s
user 0m12.474s
sys 0m3.109s
As you can see, it really is substantially faster to avoid launching basename every time.
Upvotes: 11
Reputation: 12728
Honestly, the basename and dirname solutions are easier, but you can also check these out:
find . -type f | grep -oP "[^/]*$"
or
find . -type f | rev | cut -d '/' -f1 | rev
or
find . -type f | sed "s/.*\///"
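or, if you prefer awk, a sketch in the same spirit (like the pipelines above, it assumes the names contain no newlines):
find . -type f | awk -F/ '{print $NF}'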
Upvotes: 6
Reputation: 1373
-exec and -execdir are slow; xargs is king.
$ alias f='time find /Applications -name "*.app" -type d -maxdepth 5'; \
f -exec basename {} \; | wc -l; \
f -execdir echo {} \; | wc -l; \
f -print0 | xargs -0 -n1 basename | wc -l; \
f -print0 | xargs -0 -n1 -P 8 basename | wc -l; \
f -print0 | xargs -0 basename | wc -l
139
0m01.17s real 0m00.20s user 0m00.93s system
139
0m01.16s real 0m00.20s user 0m00.92s system
139
0m01.05s real 0m00.17s user 0m00.85s system
139
0m00.93s real 0m00.17s user 0m00.85s system
139
0m00.88s real 0m00.12s user 0m00.75s system
xargs's parallelism also helps.
Funnily enough, I cannot explain the last case of xargs without -n1. It gives the correct result and it's the fastest ¯\_(ツ)_/¯
(basename takes only 1 path argument, but xargs will send them all (actually 5000) without -n1. It does not work on Linux and OpenBSD, only macOS...)
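If you want the same single-invocation speed on Linux, a likely equivalent (a sketch, not timed here) is to add -a so GNU basename accepts many paths at once, reusing the alias above:
f -print0 | xargs -0 basename -a | wc -l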
Some bigger numbers from a Linux system to see how -execdir helps, though it is still much slower than a parallel xargs:
$ alias f='time find /usr/ -maxdepth 5 -type d'
$ f -exec basename {} \; | wc -l; \
f -execdir echo {} \; | wc -l; \
f -print0 | xargs -0 -n1 basename | wc -l; \
f -print0 | xargs -0 -n1 -P 8 basename | wc -l
2358
3.63s real 0.10s user 0.41s system
2358
1.53s real 0.05s user 0.31s system
2358
1.30s real 0.03s user 0.21s system
2358
0.41s real 0.03s user 0.25s system
Upvotes: 5
Reputation: 166419
Use -execdir, which automatically holds the current file in {}, for example:
find . -type f -execdir echo '{}' ';'
You can also use $PWD instead of . (on some systems it won't produce an extra dot in the front). If you still get an extra dot, you can alternatively run:
find . -type f -execdir basename '{}' ';'
-execdir utility [argument ...] ;
The -execdir primary is identical to the -exec primary with the exception that utility will be executed from the directory that holds the current file.
When + is used instead of ;, {} is replaced with as many pathnames as possible for each invocation of utility. In other words, it will print all the filenames on one line.
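For example, a minimal sketch of the + form (depending on your find, the names may still carry a leading ./ prefix):
find . -type f -execdir echo '{}' +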
Upvotes: 43
Reputation: 19
I've found a solution (on a makandracards page) that gives just the newest file name:
ls -1tr * | tail -1
(thanks go to Arne Hartherz)
I used it for cp:
cp $(ls -1tr * | tail -1) /tmp/
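A slightly safer variant (my sketch, same idea) quotes the command substitution so a newest file whose name contains spaces still gets copied:
cp -- "$(ls -1tr * | tail -1)" /tmp/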
Upvotes: -3
Reputation: 5303
If you want to run some action against the filename only, using basename can be tough. For example, this:
find ~/clang+llvm-3.3/bin/ -type f -exec echo basename {} \;
will just echo basename /my/found/path. Not what we want if we want to execute on the filename.
But you can then xargs the output, for example to remove the files in a dir based on names in another dir:
cd dirIwantToRMin;
find ~/clang+llvm-3.3/bin/ -type f -exec basename {} \; | xargs rm
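With GNU coreutils you can make this more robust against odd file names (a sketch, assuming GNU basename for its -a and -z options):
cd dirIwantToRMin;
find ~/clang+llvm-3.3/bin/ -type f -exec basename -az {} + | xargs -0 rm -f --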
Upvotes: 14
Reputation: 140327
In GNU find you can use the -printf action for that, e.g.:
find /dir1 -type f -printf "%f\n"
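If you then want the names sorted and de-duplicated, a simple follow-up (my sketch) is:
find /dir1 -type f -printf "%f\n" | sort -u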
Upvotes: 516
Reputation: 3654
If your find doesn't have a -printf option, you can also use basename:
find ./dir1 -type f -exec basename {} \;
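If your basename supports -a (GNU coreutils and recent BSDs do), a likely faster variant runs it once per batch of paths instead of once per file:
find ./dir1 -type f -exec basename -a {} +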
Upvotes: 224
Reputation: 25599
If you are using GNU find
find . -type f -printf "%f\n"
Or you can use a programming language such as Ruby (1.9+):
$ ruby -e 'Dir["**/*"].each{|x| puts File.basename(x)}'
If you fancy a bash (at least 4) solution:
shopt -s globstar
for file in **; do echo "${file##*/}"; done
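A variant of the same loop (my sketch) that also skips directories, since ** matches those too:
shopt -s globstar
for file in **; do [[ -f $file ]] && echo "${file##*/}"; done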
Upvotes: 38