escapecharacter

Reputation: 965

Tool for high-performance search for binary file matches in a directory of thousands of files on OSX

I'm merging two large sets of photos (thousands of files each) with different directory structures, where many photos already exist in both sets. I was going to write a script such that:

For a given photo in set B,
Check if a binary match for it exists in set A.
If there's a match, delete the copy in set B.

After all of the files in set B have been reviewed, I'll then merge the (now unique) remainders of set B into set A.

There may be binary matches with different file names, so file names should be ignored when testing.

Also, I'll be searching set A for every single file in set B, so I'd prefer a tool that builds an index of set A as part of an initial scan. Fortunately, that index only needs to be built once and never needs to be updated.
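
Roughly, the kind of index I have in mind maps a file's hash to its path, built once over set A. A minimal sketch of the idea (untested; set_a_root is a placeholder path):

#sketch only: build a hash -> path index of set A once, reuse it for every lookup from set B
import os, hashlib

def build_index(set_a_root):
    index = {}
    for root, dirs, files in os.walk(set_a_root):
        for name in files:
            path = os.path.join(root, name)
            with open(path, "rb") as f:
                #fine for a sketch; very large files would need chunked reads
                digest = hashlib.md5(f.read()).hexdigest()
            index[digest] = path
    return index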

I was going to use an OSX shell script, but Python is fine too.

Upvotes: 1

Views: 141

Answers (1)

escapecharacter

Reputation: 965

I solved my problem by writing a pair of Python scripts, based on Mark's suggestions.

md5index.py:

#given a folder path, makes a hash index of every file, recursively
import sys, os, hashlib, io

#some files need to be hashed incrementally as they may be too big to fit in memory
#http://stackoverflow.com/a/40961519/2518451
def md5sum(src, length=io.DEFAULT_BUFFER_SIZE):
    md5 = hashlib.md5()
    with io.open(src, mode="rb") as fd:
        for chunk in iter(lambda: fd.read(length), b''):
            md5.update(chunk)
    return md5

#this project was done on macOS; other platforms may have other housekeeping files worth ignoring
ignore_files = [".DS_Store"]

def index(source, index_output):

    index_output_f = open(index_output, "wt")
    index_count = 0

    for root, dirs, filenames in os.walk(source):

        for f in filenames:
            if f in ignore_files:
                continue

            #print f
            fullpath = os.path.join(root, f)
            #print fullpath

            md5 = md5sum(fullpath)
            md5string = md5.hexdigest()
            line = md5string + ":" + fullpath
            index_output_f.write(line + "\n")
            print(line)
            index_count += 1

    index_output_f.close()
    print("Index Count: " + str(index_count))


if __name__ == "__main__":
    index_output = "index_output.txt"

    if len(sys.argv) < 2:
        print("Usage: md5index [path]")
    else:
        index_path = sys.argv[1]
        print("Indexing... " + index_path)
        index(index_path, index_output)
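
Each line of index_output.txt is just the hex MD5 digest and the full path, separated by a colon. For example (made-up hash and path):

9e107d9d372bb6826bd81d3542a419d6:/Users/me/folderA/IMG_0001.JPG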

And uniquemerge.py:

#given an index_output.txt in the same directory and an input path,
#remove all files that already have a hash in index_output.txt

import sys, os
from md5index import md5sum
from send2trash import send2trash
SENDING_TO_TRASH = True

def load_index():
    index_output = "index_output.txt"
    index = []
    with open(index_output, "rt") as index_output_f:
        for line in index_output_f:
            line_split = line.split(':')
            md5 = line_split[0]
            index.append(md5)
    return index

#traverse file, compare against index
def traverse_merge_path(merge_path, index):
    found = 0
    not_found = 0

    for root, dirs, filenames in os.walk(merge_path):
        for f in filenames:
            #print f
            fullpath = os.path.join(root, f)
            #print fullpath

            md5 = md5sum(fullpath)
            md5string = md5.hexdigest()

            if md5string in index:
                if SENDING_TO_TRASH:
                    send2trash(fullpath)

                found += 1
            else:
                print "\t NON-DUPLICATE ORIGINAL: " + fullpath
                not_found += 1


    print "Found Duplicates: " + str(found) + " Originals: " + str(not_found)


if __name__ == "__main__":
    index = load_index()
    print "Loaded index with item count: " + str(len(index))

    print "SENDING_TO_TRASH: " + str(SENDING_TO_TRASH) 

    merge_path = sys.argv[1]
    print "Merging To: " + merge_path

    traverse_merge_path(merge_path, index)
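
One small tweak worth noting (not in the script above): since every file in the merge path is checked against the index, loading the hashes into a set instead of a list makes each "in" test a constant-time lookup rather than a linear scan. Something like:

#variant of load_index that returns a set for constant-time membership tests
def load_index_set():
    with open("index_output.txt", "rt") as index_output_f:
        return set(line.split(':')[0] for line in index_output_f)

traverse_merge_path() works unchanged, since the "in" check behaves the same on a set.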

Assuming I want to merge folderB into folderA, I do:

python md5index.py folderA
# creates index_output.txt with all the hashes from folderA

python uniquemerge.py folderB
# deletes all files in folderB that already existed in folderA
# I can now manually merge folderB into folderA

Upvotes: 1
