Rich

Reputation: 121

Deduplicate Delta Lake Table

I have a Delta Lake table in Azure. I'm using Databricks. When we add new entries, we use MERGE INTO to prevent duplicates from getting into the table. However, duplicates did get into the table. I'm not sure how it happened; maybe the MERGE INTO conditions weren't set up properly.

However it happened, the duplicates are there. Is there any way to detect and remove the duplicates from the table? All the documentation I've found shows how to deduplicate the dataset before merging, but nothing for once the duplicates are already in the table. How can I remove them?

Thanks

Upvotes: 2

Views: 5068

Answers (4)

Elisabetta

Reputation: 358

I would suggest the following SOP (a sketch of steps 2 and 3 follows the list):

  1. Fix the existing job (streaming or batch) to handle duplicates
  2. Change the job configuration to write into a _recovery table (for a streaming job, also change the checkpoint path to _recovery)
  3. Run the job and validate its output
  4. Rename the original folder to _backup and rename _recovery to the original name (do the same with the checkpoints directory)
  5. Restore the original job configuration.
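
A minimal PySpark sketch of steps 2 and 3, assuming a streaming job reading from a Delta source; all paths, table names, and the event_id key column are hypothetical:

    # Hedged sketch: repoint the streaming job at a _recovery table with a
    # fresh checkpoint. On Databricks the `spark` session already exists.
    deduped = (
        spark.readStream
            .format("delta")
            .load("/mnt/source/events")      # hypothetical source path
            .dropDuplicates(["event_id"])    # step 1: handle duplicates in the job
    )

    (
        deduped.writeStream
            .format("delta")
            .option("checkpointLocation", "/mnt/checkpoints/events_recovery")  # _recovery checkpoint
            .start("/mnt/tables/events_recovery")                              # _recovery table
    )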

Upvotes: 0

Nikunj Kakadiya

Reputation: 3008

To remove the duplicates, you can follow the approach below:

  1. Create a separate table that is a replica of the table that has duplicate records.
  2. Drop the first table, the one with duplicate records (metadata plus physical files).
  3. Write a Python script or Scala code that reads the data from the table created in step 1, removes the duplicate records (using the dropDuplicates function or any custom logic that defines a unique record), and recreates the table you deleted in step 2 (see the sketch below).

Once you follow the above steps, your table will no longer have duplicate rows. This is just a workaround to make your table consistent, though, not a permanent solution.
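
A minimal PySpark sketch of the three steps above, assuming a managed Delta table; the events table name and the event_id key column are hypothetical:

    # Step 1: create a replica of the table that has duplicates.
    spark.read.table("events").write.format("delta").saveAsTable("events_replica")

    # Step 2: drop the original table (for an external table, also delete its files).
    spark.sql("DROP TABLE events")

    # Step 3: recreate the table from the replica, keeping one row per unique key.
    (
        spark.read.table("events_replica")
            .dropDuplicates(["event_id"])    # hypothetical unique-key column
            .write.format("delta")
            .saveAsTable("events")
    )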

Before or after following the above steps, review your MERGE INTO statement to confirm it is written correctly and cannot insert duplicate records. If the MERGE INTO statement is correct, make sure the dataset you are processing does not already contain duplicate records from the source you are reading from.
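
For reference, a hedged example of such a MERGE INTO, keyed on a single unique column and with the incoming batch deduplicated first so a single run cannot insert the same key twice (the staging_events and events names are hypothetical):

    # Deduplicate the incoming batch before merging; names are hypothetical.
    updates = spark.read.table("staging_events").dropDuplicates(["event_id"])
    updates.createOrReplaceTempView("updates")

    spark.sql("""
        MERGE INTO events AS t
        USING updates AS s
        ON t.event_id = s.event_id
        WHEN MATCHED THEN UPDATE SET *
        WHEN NOT MATCHED THEN INSERT *
    """)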

Upvotes: 0

Ehab

Reputation: 1

You can use Dataset.dropDuplicates to delete duplicates based on specific columns.

https://spark.apache.org/docs/2.3.2/api/java/org/apache/spark/sql/Dataset.html#dropDuplicates-java.lang.String-java.lang.String...-
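
For example, in PySpark (the id and date column names are placeholders for whatever defines a unique record in your table):

    # Keep one row per (id, date) pair; the column names are hypothetical.
    df = spark.read.table("events")
    deduped = df.dropDuplicates(["id", "date"])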

Upvotes: 0

Arjoon

Reputation: 210

If the duplicates already exist in the target table, your only options are:

  1. Delete the duplicated rows from the target table manually using SQL DELETE statements
  2. Create a deduplicated replica of your target table and rename both tables (the deduplicated replica and the original target) so that the replica becomes the main table (see the sketch below).
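
A sketch of option 2 on managed Delta tables, assuming the duplicates are exact copies across all columns (the target table name is hypothetical):

    # Build a deduplicated replica, then swap names; `target` is hypothetical.
    spark.sql("CREATE TABLE target_dedup AS SELECT DISTINCT * FROM target")

    # Validate row counts before swapping.
    spark.sql("ALTER TABLE target RENAME TO target_backup")
    spark.sql("ALTER TABLE target_dedup RENAME TO target")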

Upvotes: 0
