Marco Roy

Reputation: 5285

How to find the timestamp of the latest modified file in a directory (recursively)?

I'm working on a process that needs to be restarted upon any change to any file in a specified directory, recursively.

I want to avoid using anything heavy, like inotify. I don't need to know which files were updated, but rather only whether or not files were updated at all. Moreover, I don't need to be notified of every change, but rather only to know if any changes have happened at a specific interval, dynamically determined by the process.

There has to be a way to do this with a fairly simple bash command. I don't mind having to execute the command multiple times; performance is not my primary concern for this use case. However, it would be preferable for the command to be as fast as possible.

The only output I need is the timestamp of the last change, so I can compare it to the timestamp that I have stored in memory.

I'm also open to better solutions.

Upvotes: 8

Views: 7968

Answers (4)

Scz

Reputation: 699

Instead of remembering the timestamp of the last change, you could remember the last file that changed and find newer files using

find . -type f -newer "$lastfilethatchanged"

This does not work, however, if the same file changes again. Thus, you might need to create a temporary file with touch first:

touch --date="$previoustimestamp" "$tempfile"
find . -type f -newer "$tempfile"

where "$tempfile" could be, for example, in the memory at /dev/shm/.
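The approach above can be sketched as a small helper; `changed_since` is a hypothetical function name, and GNU `touch`/`find` are assumed:

```shell
#!/usr/bin/env bash
# Sketch: report whether any file under $1 was modified after the
# timestamp in $2, using a marker file as the comparison point.
changed_since() {
    local dir=$1 previoustimestamp=$2
    local tempfile
    # Prefer the memory-backed tmpfs at /dev/shm; fall back to the default
    # temp directory if it is unavailable.
    tempfile=$(mktemp -p /dev/shm 2>/dev/null || mktemp)
    touch --date="$previoustimestamp" "$tempfile"
    # -newer matches files with a strictly newer mtime than the marker;
    # -print -quit stops at the first match.
    local changed
    changed=$(find "$dir" -type f -newer "$tempfile" -print -quit)
    rm -f "$tempfile"
    [ -n "$changed" ]
}

# Example: did anything under the current directory change since 2024?
if changed_since . "2024-01-01 00:00:00"; then
    echo "changes detected"
fi
```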

Upvotes: 1

Marco Roy

Reputation: 5285

I just thought of an even better solution than the previous one, which also allows me to know about deleted files.

The idea is to use a checksum, but not a checksum of the file contents; instead, we can checksum only the timestamps. If anything changes at all (new files, deleted files, modified files), the checksum will change as well!

find . -type f -printf '%T@,' | cksum

  1. '%T@,' prints the modification time of each file as a Unix timestamp, all on the same line.
  2. cksum calculates the checksum of the timestamps.
  3. ????
  4. Profit!!!!

It's actually even faster than the previous solution (by ~20%), because we don't need to sort (one of the slowest operations). And checksumming such a small amount of data (~22 bytes per timestamp) is much faster than checksumming each file's contents.
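A polling loop built on this checksum might look like the sketch below; `dir_checksum` is a hypothetical helper name, and GNU `find` is assumed for `-printf`:

```shell
#!/usr/bin/env bash
# Sketch: detect changes by comparing checksums of the mtimes of all
# files under a directory.
dir_checksum() {
    find "$1" -type f -printf '%T@,' | cksum
}

# Store the checksum from the previous poll...
last=$(dir_checksum .)
# ...then, at whatever interval the process chooses, compare:
current=$(dir_checksum .)
if [ "$current" != "$last" ]; then
    echo "changes detected"
fi
```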

Upvotes: 4

user3156262

Reputation: 356

You can use this command to get info for your files, then use filters to extract the timestamp:

$ find ./ -name "*.sqlite" -ls

Upvotes: -1

Marco Roy

Reputation: 5285

I actually found a good answer from another closely related question.

I've only modified the command a little to adapt it to my needs:

find . -type f -printf '%T@\n' | sort -n | tail -1

  1. %T@ prints the modification time as a Unix timestamp, which is just what I need.
  2. sort -n sorts the timestamps numerically.
  3. tail -1 only keeps the last/highest timestamp.

It runs fairly quickly; ~400ms on my entire home directory, and ~30ms on the intended directory (measured using time [command]).
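Comparing that output against the timestamp stored in memory could look like the following sketch; `$stored` is an assumed variable holding the previous check's value. Note that `%T@` prints fractional seconds, so a floating-point comparison is needed:

```shell
#!/usr/bin/env bash
# Sketch: find the newest mtime under the watched directory and compare
# it against the previously stored value.
latest=$(find . -type f -printf '%T@\n' | sort -n | tail -1)
stored=${stored:-0}   # assumed to be set from the previous check

# %T@ values are fractional (e.g. 1700000000.1234567890), so use awk
# instead of the integer-only [ -gt ] test.
if awk -v a="$latest" -v b="$stored" 'BEGIN { exit !(a > b) }'; then
    echo "changes detected"
    stored=$latest
fi
```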

Upvotes: 9
