Reputation: 164
This is my first question here.
I am facing issues with an UPDATE statement.
I am using one server (Source Server - SS) and want to update a table that is on another server (Target Server - TS) on a daily basis, but the update is taking a long time to process.
SS.Table1 - Row count over 5 Million
TS.Table2 - Row count over 50 Million
Every day I want to update TS.Table2 with some 500 records from SS.Table1.
Since both tables are big, I have extracted the records from SS.Table1 into a temp table, #Temp.
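For reference, the extraction step can be sketched roughly like this (ModifiedDate is a hypothetical column used only to illustrate selecting the daily delta; substitute whatever actually identifies the 500 rows):

```sql
-- Pull only the ~500 changed rows from the source table into a local temp table.
-- ModifiedDate is an assumed column name; use whatever marks the daily delta.
SELECT Col1, Col2, ColumnName
INTO #Temp
FROM SS.Table1
WHERE ModifiedDate >= CAST(GETDATE() AS date);
```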
UPDATE T
SET T.ColumnName = S.ColumnName
FROM TS.Table2 T
JOIN #Temp S
  ON T.Col1 = S.Col1
 AND T.Col2 = S.Col2
Note 1: Col1 and Col2 form the clustered index on both the target table and the source table.
Note 2: A SELECT with the same join condition completes in milliseconds.
Note 3: I did try updating the target table using a CTE, but there was no performance improvement.
Note 4: Even updating one record takes the same time as 500 records.
Note 5: I have read in some online articles that dropping the indexes on the target table and recreating them may improve performance; I tried that, but there was no improvement.
If I have not provided enough information, please let me know.
Regards
Upvotes: 2
Views: 1129
Reputation: 391
I had a similar problem in the past. I was using SQL Server 2008; that version requires sysadmin rights to access remote statistics. Without the remote statistics, the risk is that SQL Server will pull the remote data (all 50M rows) across the link and execute the join locally. http://www.sqlbadpractices.com/linked-servers-and-distributed-queries/ I bypassed the problem using OPENQUERY, which forces remote execution.
In your case, you could write the 500 rows into a staging table on the remote server and then execute the update entirely on the remote server, so the 50M-row table never crosses the link.
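A minimal sketch of that approach, assuming TS is the linked-server name and DB.dbo.Staging500 is a staging table you create on the remote side (all of these names are placeholders). This sketch uses EXEC ... AT, which, like OPENQUERY, ships the statement to the remote server for execution (it requires the RPC Out option to be enabled on the linked server):

```sql
-- 1. Push the ~500 rows from the local temp table into a remote staging table.
INSERT INTO TS.DB.dbo.Staging500 (Col1, Col2, ColumnName)
SELECT Col1, Col2, ColumnName
FROM #Temp;

-- 2. Run the UPDATE entirely on the remote server.
--    Both tables in the join now live on TS, so no 50M-row transfer occurs.
EXEC ('
    UPDATE T
    SET T.ColumnName = S.ColumnName
    FROM DB.dbo.Table2 T
    JOIN DB.dbo.Staging500 S
      ON T.Col1 = S.Col1
     AND T.Col2 = S.Col2;
') AT TS;
```

Afterwards, truncate the staging table so the next daily load starts clean.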
Upvotes: 1