Jukka Suomela

Reputation: 12338

List last commit dates for a large number of files, quickly

I would like to list the last commit date for a large number of files in a git repository.

For the sake of concreteness, let us assume that I want to get the last commit dates of all *.txt files inside a particular subdirectory. There are tens of thousands of files in the repository in total, and the number of relevant *.txt files is in the ballpark of several hundred. There are already thousands of commits in the repository.

I tried three different approaches.


Solution 1. This question gives one answer, based on git log. However, if I try to do something like this, it is very slow:

find . -name '*.txt' |
    xargs -n1 git log --format=format:%ai -n1 --all -- '{}'

In my test case, it took several minutes – far too slow for my purposes.


Solution 2. Something like this would be much faster, less than one second:

git log --format=format:%ai --name-only .

However, then I would have to write a script that post-processes the output. Moreover, the above command prints out lots of information that is never needed: irrelevant files and old commits.
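
Roughly, the post-processing would have to do something like this (a quick sketch only; it keeps the newest date seen for each *.txt file and ignores complications such as renames):

git log --format=format:%ai --name-only . |
awk '
    # a commit date line, e.g. "2012-05-28 10:21:18 +0300"
    /^[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9] / { date = $0; next }
    # a file name: remember the first (i.e. newest) date seen for it
    /\.txt$/ && !($0 in last) { last[$0] = date }
    END { for (f in last) print last[f], f }
' | sort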


Solution 3. I also tried something like this, in order to get rid of the irrelevant files:

git log --format=format:%ai --name-only `find . -name '*.txt'`

However, this turned out to be slower than solution 2. (There was a factor-of-3 difference in the running time.) Moreover, it still prints old commits that are no longer needed.


Question. Am I missing something? Is there a fast and convenient approach? Preferably something that works not only right now but also in the future, when we have a much larger number of commits?

Upvotes: 15

Views: 2843

Answers (4)

majkinetor

Reputation: 9046

Here is a PowerShell function:

function Get-GitRevisionDates($Path='.', $Ext='.md')
{
    [array] $log = git --no-pager log --format=format:%ai --name-only $Path

    # author-date lines, e.g. "2012-05-28 10:21:18 +0300", together with their line numbers
    $date_re = "^\d{4}-\d\d-\d\d \d\d:\d\d:\d\d .\d{4}$"
    [array] $dates = $log | Select-String $date_re | select LineNumber, Line

    # every other non-date line is a file name; keep those with the wanted extension
    $files = $log -notmatch $date_re | ? { $_.EndsWith($Ext) } | sort -unique

    $res = @()
    foreach ($file in $files) {
        # the first occurrence of a file is its newest commit;
        # its date is the nearest date line above that position
        $iFile = $log.IndexOf($file) + 1
        $fDate = $dates | ? LineNumber -lt $iFile | select -Last 1
        $res += [PSCustomObject]@{ File = $file; Date = $fDate.Line }
    }

    $res | sort Date -Desc
}

Upvotes: 0

AKX

Reputation: 169184

I'm somewhat late to the party here, but here's a little Bash script that uses the invocation in OP's solution #2 and does the post-processing in awk. (For my use, I didn't need to see files that no longer exist in the current tree, so there's the existence check too.)

#!/bin/bash
(
    git ls-files | sed 's/^/+ /'
    git log --format=format:"~ %aI" --name-only .
) | gawk '
/^~/ {date=$2;}                                   # commit line: remember its author date
/^+/ {extant[$2] = 1;}                            # a file that still exists in the work tree
/^[^~+]/ {if (!($1 in dates)) dates[$1] = date;}  # keep the first (newest) date seen per file
END { for (file in dates) if(extant[file]) print(dates[file], file); }
' | sort

Upvotes: 0

Borealid

Reputation: 98509

Try this.

In git, each commit references a tree object which has pointers to the state of each file (the files being blob objects).

So, what you want to do is write a program which starts out with a list of all the files you're interested in and begins at the HEAD commit (the SHA1 obtained via git rev-parse HEAD). It checks whether any of the "files of interest" were modified in that commit's tree (the tree is found in the "tree" attribute of git cat-file commit <SHA1>); note that you'll have to descend into the subtrees for each directory. If a file was modified (meaning it has a different SHA1 hash from the one it had in the "previous" revision), the program removes it from the interest set and prints the appropriate information. It then continues to each parent of the current commit. This continues until the set of interest is empty.

If you want maximal speed, you'll use the git C API. If you don't need that much speed, you can use git cat-file tree <SHA1> (or, easier, git ls-tree <SHA1> [files]), which performs the minimal amount of work needed to read a particular tree object (it's part of the plumbing layer).
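
For illustration, a rough shell sketch of that walk (comparing blob hashes with git ls-tree) might look like this; it is only a sketch, not a tuned implementation:

#!/bin/bash
# Rough sketch of the walk described above, using plumbing commands for the
# tree reads.  Requires bash 4+ for associative arrays, assumes a linear
# history (merges would need the per-parent comparison described above),
# and does not report files untouched since the root commit.

declare -A want prev      # want = files still of interest, prev = blob hashes in the newer commit
while read -r f; do want[$f]=1; done < <(git ls-files -- '*.txt')

newer_date=""
git log --format='%H %ai' |
while read -r commit date time tz; do
    # blob hashes of the remaining files of interest in this commit's tree
    declare -A cur=()
    while read -r mode type hash path; do
        cur[$path]=$hash
    done < <(git ls-tree -r "$commit" -- "${!want[@]}")

    for f in "${!want[@]}"; do
        if [[ -n "$newer_date" && "${cur[$f]-}" != "${prev[$f]-}" ]]; then
            # the blob changed between this commit and the newer one,
            # so the newer commit is the last one that touched the file
            echo "$newer_date $f"
            unset "want[$f]"
        fi
    done
    [[ ${#want[@]} -eq 0 ]] && break

    prev=()
    for k in "${!cur[@]}"; do prev[$k]=${cur[$k]}; done
    newer_date="$date $time $tz"
done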

It's questionable how well this will continue to work in the future; if forward compatibility is a bigger issue you can move up a level from git cat-file, but as you already discovered, git log is comparatively slow, as it is part of the porcelain, not the plumbing.

See here for a pretty good resource on how git's object model works.

Upvotes: 5

steabert

Reputation: 6888

I also think your solution #2 is the fastest; you can find several scripts that use this method to set file access times. A way to avoid printing the older dates is to use e.g. a hash of the files already seen.

I wrote a script in Perl to modify access times, and after some modifications, this is a version which should print what you're after:

#!/usr/bin/perl
my $commit = $ARGV[0];

$commit = 'HEAD' unless $commit;

# get a list of commit timestamps and changed files
my @logbook = `git whatchanged --pretty=%ai $commit`;

my %seen;
my $timestamp;
my $filename;
foreach (@logbook) {
    next if /^$/; # skip empty lines
    if (/^:/) {
        next unless /\.txt$/;
        chomp ($filename = (split /\t/)[1]);
        next if $seen{$filename};
        print "$timestamp $filename\n";
        $seen{$filename} = 1;
    } else {
        chomp ($timestamp = $_);
    }
}

I used git whatchanged instead of git log to have a convenient format with non-time lines beginning with :, so I can easily separate the lines with files from the last modification times.
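
For example, assuming the script is saved as last-txt-dates.pl (the name is just for illustration), it can be run from the top of the repository as:

perl last-txt-dates.pl HEAD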

Upvotes: 0
