Reputation: 963
I am having trouble coming up with an algorithm. Can you guys help me out here?
I have a file which is huge and thus cannot be loaded into memory at once. It contains duplicate data (generic data, possibly strings), and I need to remove the duplicates.
Upvotes: 2
Views: 1491
Reputation: 14558
Depending on how the input is laid out in the file (if each line can be represented as a row of data), another way is to use a database server: read from the file and insert each line into a table that has a unique-value column. At the end, the database will contain only the unique lines/rows.
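As an illustration, here is a minimal Java sketch of that idea, assuming an SQLite database reached through JDBC (the sqlite-jdbc driver on the classpath) and hypothetical file names huge_input.txt / unique_output.txt; the UNIQUE constraint plus INSERT OR IGNORE does the de-duplication:

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.PrintWriter;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class DedupWithDatabase {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:dedup.db")) {
            try (Statement st = conn.createStatement()) {
                st.execute("CREATE TABLE IF NOT EXISTS lines (line TEXT UNIQUE)");
            }
            conn.setAutoCommit(false);

            // Insert every line; the UNIQUE constraint silently drops duplicates.
            try (BufferedReader in = new BufferedReader(new FileReader("huge_input.txt"));
                 PreparedStatement ins = conn.prepareStatement(
                         "INSERT OR IGNORE INTO lines(line) VALUES (?)")) {
                String line;
                while ((line = in.readLine()) != null) {
                    ins.setString(1, line);
                    ins.executeUpdate();
                }
            }
            conn.commit();

            // Stream the unique lines back out to a new file.
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery("SELECT line FROM lines");
                 PrintWriter out = new PrintWriter("unique_output.txt")) {
                while (rs.next()) {
                    out.println(rs.getString(1));
                }
            }
        }
    }
}
```

For a really large file you would probably want to batch the inserts (addBatch/executeBatch) rather than execute them one by one, but the structure stays the same.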
Upvotes: 0
Reputation: 3724
Second solution:
Upvotes: 0
Reputation: 48216
You can calculate a hash for each record and keep it in a Map<hash, Set<position>>.
Read through the file building the map; if you find that the hash key already exists in the map, seek to the stored position(s) to double-check whether the records are really equal (and if they are not equal, add the new location to the mapped set).
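A rough, untuned Java sketch of that technique, assuming a line-oriented text file, String.hashCode() as the record hash, and hypothetical file names; only the small hash-to-offsets index lives in memory:

```java
import java.io.IOException;
import java.io.PrintWriter;
import java.io.RandomAccessFile;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DedupWithHashIndex {
    public static void main(String[] args) throws IOException {
        try (RandomAccessFile file = new RandomAccessFile("huge_input.txt", "r");
             PrintWriter out = new PrintWriter("unique_output.txt")) {

            // Map from record hash to the file offsets of records already seen with that hash.
            Map<Integer, List<Long>> index = new HashMap<>();

            long pos = file.getFilePointer();
            String line;
            while ((line = file.readLine()) != null) {
                long nextPos = file.getFilePointer();
                int hash = line.hashCode();

                boolean duplicate = false;
                List<Long> candidates = index.get(hash);
                if (candidates != null) {
                    // Same hash seen before: seek back and compare the actual records,
                    // since it could be a genuine duplicate or just a hash collision.
                    for (long candidate : candidates) {
                        file.seek(candidate);
                        String earlier = file.readLine();
                        if (line.equals(earlier)) {
                            duplicate = true;
                            break;
                        }
                    }
                    file.seek(nextPos); // restore the read position
                }

                if (!duplicate) {
                    index.computeIfAbsent(hash, k -> new ArrayList<>()).add(pos);
                    out.println(line);
                }
                pos = nextPos;
            }
        }
    }
}
```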
Upvotes: 1
Reputation: 3724
One easy but slow solution is to read the first gigabyte into a HashSet, then read through the rest of the file sequentially and remove the duplicate strings it already contains. Then read the second gigabyte into memory (a HashSet) and remove its duplicates from the file, and so on, again and again. It's quite easy to program, and if you only need to do it once it could be enough.
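A minimal Java sketch of this pass-by-pass approach, assuming a line-oriented file and using a line count as a stand-in for "one gigabyte"; it repeatedly rewrites the file, so run it on a copy (file names are hypothetical):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.util.HashSet;
import java.util.Set;

public class DedupInChunks {
    // Stand-in for "one gigabyte": however many lines fit comfortably in memory.
    static final int CHUNK_LINES = 1_000_000;

    public static void main(String[] args) throws IOException {
        Path file = Paths.get("huge_input.txt"); // work on a copy of the original
        Path temp = Paths.get("dedup_work.tmp");

        long chunkStart = 0;          // lines before this index are already final
        boolean moreChunks = true;

        while (moreChunks) {
            Set<String> chunk = new HashSet<>();
            long nextChunkStart = chunkStart;
            long lineNo = 0;
            moreChunks = false;

            try (BufferedReader in = Files.newBufferedReader(file);
                 PrintWriter out = new PrintWriter(Files.newBufferedWriter(temp))) {
                String line;
                while ((line = in.readLine()) != null) {
                    if (lineNo++ < chunkStart) {
                        out.println(line);              // finished in earlier passes
                    } else if (chunk.size() < CHUNK_LINES) {
                        // Fill the in-memory chunk, dropping duplicates inside the chunk itself.
                        if (chunk.add(line)) {
                            out.println(line);
                            nextChunkStart++;
                        }
                    } else {
                        // Past the chunk: drop anything the chunk already contains,
                        // keep the rest for a later pass.
                        moreChunks = true;
                        if (!chunk.contains(line)) {
                            out.println(line);
                        }
                    }
                }
            }
            Files.move(temp, file, StandardCopyOption.REPLACE_EXISTING);
            chunkStart = nextChunkStart;
        }
    }
}
```

Each pass removes all duplicates of the current chunk from the rest of the file, so the number of passes is roughly the file size divided by the chunk size, which is why it is easy but slow.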
Upvotes: 2