BubbleMonster

Reputation: 1416

Group files by size, then find hash duplicates

This follows on from my question yesterday: Finding duplicate files via hashlib?

I now realize I need to group the files by file size. So, say I had 10 files in a folder and 3 of them were 50 bytes each: I would group those 3 files together.

I've found that I can find the size in bytes of a file by using:

print(os.stat("/Users/simon/Desktop/file1.txt").st_size)

or:

print(os.path.getsize("/Users/simon/Desktop/file1.txt"))

Which is great. But how would I scan a folder with os.walk and group the files by size using one of the methods above?

After that, I want to hash them via hashlib's MD5 to find duplicates.

Upvotes: 1

Views: 1060

Answers (2)

Ankur Agarwal

Reputation: 24758

This sample code builds a dictionary whose keys are file sizes and whose values are lists of the files that have that size.

#!/usr/bin/env python

import os

d = {}
for dirname, dirlist, filelist in os.walk(os.getcwd()):
    for f in filelist:
        fullname = os.path.join(dirname, f)
        sz = os.path.getsize(fullname)
        # setdefault creates the list on first sight of this size
        d.setdefault(sz, []).append(fullname)

print(d)
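From there, a sketch of the MD5 step the question mentions: only sizes with more than one file can contain duplicates, so hash just those groups. The function names here are illustrative, not part of any library.

```python
import hashlib


def md5_of(path, chunk_size=65536):
    """Hash a file in chunks so large files don't exhaust memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def find_duplicates(size_groups):
    """size_groups: dict mapping size -> list of paths, as built above.

    Returns a dict mapping an MD5 digest to the paths sharing it.
    """
    dupes = {}
    for size, paths in size_groups.items():
        if len(paths) < 2:
            continue  # a unique size can't have duplicates
        by_hash = {}
        for p in paths:
            by_hash.setdefault(md5_of(p), []).append(p)
        for digest, group in by_hash.items():
            if len(group) > 1:
                dupes[digest] = group
    return dupes
```

Two files of the same size but different content will land under different digests, so only true byte-for-byte duplicates are reported.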

Upvotes: 1

Kevin

Reputation: 76194

Sort the filenames by size, then use itertools.groupby to collect same-sized files together. (groupby only groups adjacent items, which is why the sort comes first.)

import os
import os.path
import itertools

#creates a dummy file with a given number of bytes
def create_file(name, size):
    if os.path.isfile(name): return
    with open(name, "w") as file:
        file.write("X" * size)

#create some sample files 
create_file("foo.txt", 4)
create_file("bar.txt", 4)
create_file("baz.txt", 4)
create_file("qux.txt", 8)
create_file("lorem.txt", 8)
create_file("ipsum.txt", 16)

#get the filenames in this directory
filenames = [filename for filename in os.listdir(".") if os.path.isfile(filename)]

#sort by size
filenames.sort(key=lambda name: os.stat(name).st_size)

#group by size and iterate
for size, items_iterator in itertools.groupby(filenames, key=lambda name: os.stat(name).st_size):
    items = list(items_iterator)
    print("{} item(s) of size {}:".format(len(items), size))
    #insert hashlib code here, or whatever else you want to do
    for item in items:
        print(item)

Result:

3 item(s) of size 4:
bar.txt
baz.txt
foo.txt
2 item(s) of size 8:
lorem.txt
qux.txt
1 item(s) of size 16:
ipsum.txt
1 item(s) of size 968:
test.py
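To fill in the "#insert hashlib code here" step, one possible sketch: within each size group, sub-group by MD5 digest using the same groupby pattern. The file_md5 helper is illustrative; a real script would cache each digest rather than recompute it in the sort and group keys.

```python
import hashlib
import itertools
import os


def file_md5(name, chunk_size=65536):
    """Hash a file in chunks to keep memory use flat."""
    h = hashlib.md5()
    with open(name, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


filenames = [n for n in os.listdir(".") if os.path.isfile(n)]
filenames.sort(key=lambda name: os.stat(name).st_size)

for size, items_iterator in itertools.groupby(filenames, key=lambda name: os.stat(name).st_size):
    items = list(items_iterator)
    if len(items) < 2:
        continue  # a lone file of this size can't have a duplicate
    # sort by digest so groupby can collect identical files together
    items.sort(key=file_md5)
    for digest, dupes_iterator in itertools.groupby(items, key=file_md5):
        dupes = list(dupes_iterator)
        if len(dupes) > 1:
            print("duplicates (md5 {}): {}".format(digest, dupes))
```

With the sample files above, foo.txt, bar.txt, and baz.txt (all four "X" bytes) would be reported as one duplicate group.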

Upvotes: 3
