irshmd2

Reputation: 1

Increase Speed In SQL Big Data

I have a table of 50 million records with 17 columns, and I want to publish its data into several other tables that I have built. I wrote a SQL script for this, but it runs very slowly. The main problem is that before inserting a record into a target table, I must check that the record does not already exist there. I have already done some optimization in my code, for example replacing a cursor with a WHILE loop, but the script is still very slow. What can I do to speed it up?
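
For reference, the pattern described is roughly the following (a minimal T-SQL-style sketch; the table names source_table and target_table, the id column, and the key columns col1, col2, col3 are assumptions, since the actual script was not posted):

-- Hypothetical sketch of the row-by-row approach described above.
-- Assumes source_table has a contiguous integer id column.
DECLARE @id INT = 1;
DECLARE @max_id INT;

SELECT @max_id = MAX(id) FROM source_table;

WHILE @id <= @max_id
BEGIN
    -- Check whether the record already exists before inserting it.
    IF NOT EXISTS (SELECT 1
                   FROM target_table t
                   JOIN source_table s
                     ON t.col1 = s.col1 AND t.col2 = s.col2 AND t.col3 = s.col3
                   WHERE s.id = @id)
    BEGIN
        INSERT INTO target_table (col1, col2, col3)
        SELECT col1, col2, col3
        FROM source_table
        WHERE id = @id;
    END

    SET @id = @id + 1;
END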

Upvotes: 0

Views: 47

Answers (1)

Gordon Linoff

Reputation: 1269543

I must check that the record does not already exist there.

Let the database do the work via a unique constraint or index. Decide which combination of columns defines a duplicate and run something like:

create unique index unq_t_col1_col2_col3 on t(col1, col2, col3);

The database will then return an error if you attempt to insert a duplicate.
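
If you would rather skip duplicates than handle insert errors, you can also make the insert itself set-based. A hedged sketch, reusing the hypothetical table and column names from the question (this is standard SQL and should work on most databases):

-- Set-based insert that skips rows already present in target_table.
-- source_table, target_table, and col1..col3 are assumed names.
INSERT INTO target_table (col1, col2, col3)
SELECT s.col1, s.col2, s.col3
FROM source_table s
WHERE NOT EXISTS (
    SELECT 1
    FROM target_table t
    WHERE t.col1 = s.col1
      AND t.col2 = s.col2
      AND t.col3 = s.col3
);

A single set-based statement like this is typically far faster than a row-by-row loop, because the database can use the unique index above to perform the existence check.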

This is standard functionality and should be available in any database. But you should tag your question with the database you are using and provide more information about what you mean by a duplicate.

Upvotes: 1
