Reputation: 1740
We're tracking down a nasty timeout bug in one of our applications.
I wanted to know what differences there are (if any) between managing the SQL transaction in the application versus writing it into the stored procedure with T-SQL statements. We would need to restructure the stored procs and VB code to get this to work, and I'm not sure it would be worth the effort at this time.
    ' Copies the data for cID from DB1 into DB2 under a single app-side
    ' transaction; returns True once the working set is in place.
    Public Function RetrieveTData(ByVal cID As String, ByVal cnn As SqlConnection) As Boolean
        Dim sqlTran As SqlTransaction
        cnn.Open()
        sqlTran = cnn.BeginTransaction()
        Try
            If Not DataAlreadyTransferred(cID) Then
                Dim Command As New SqlCommand("usp_BigNasty_CopyDataFromDB1toDB2")
                Command.CommandType = CommandType.StoredProcedure
                Command.Transaction = sqlTran
                Command.Parameters.AddWithValue("@cID", cID)
                Command.Connection = cnn
                Command.ExecuteNonQuery()
                sqlTran.Commit()
            End If
            Return True
        Catch ex As Exception
            sqlTran.Rollback()
            Throw
        Finally
            cnn.Close()
        End Try
    End Function
We aren't certain whether the timeout is occurring in DataAlreadyTransferred() or in usp_BigNasty_CopyDataFromDB1toDB2, because of how the try/catch is written. We can restructure this code, but it will take a week or so to get it to production (no errors occur on test/dev today).
DB1 - permanent storage, used by other applications as well
DB2 - working set, used only by Web App
DataAlreadyTransferred(cID)
First checks whether DB2 has any copies of the records. If it does and those records are clean, it deletes them (the data could have changed in DB1 and we want the most up-to-date version). If DB2's data is dirty, it is left alone and nothing is deleted.
usp_BigNasty_CopyDataFromDB1toDB2
Copies rows from approximately 20-30 different tables, pulling the permanent copies from DB1 over into DB2, essentially creating a working set that the Web App can access.
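For a concrete picture, each per-table step inside usp_BigNasty_CopyDataFromDB1toDB2 presumably boils down to an INSERT...SELECT across the two databases, something like the following (the Orders table and its columns are invented for illustration and are not our real schema):

    -- Hypothetical shape of one of the 20-30 per-table copy steps;
    -- DataAlreadyTransferred() has already deleted any stale clean rows.
    INSERT INTO DB2.dbo.Orders (OrderID, cID, OrderDate, Amount)
    SELECT o.OrderID, o.cID, o.OrderDate, o.Amount
    FROM DB1.dbo.Orders AS o
    WHERE o.cID = @cID;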
We are aware this is inefficient and are examining ways to improve it; we just haven't had time yet...
I believe that by keeping the transactions in the app code we're locking many more tables than is really needed. If we move them into the stored procs, fewer tables will be locked at any one time, improving our chances of eliminating the deadlock/timeout conditions we're seeing today. I'm just not sure about this...
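One way we could test that hypothesis (a diagnostic sketch, not from our code): while the app-side transaction is open, query sys.dm_tran_locks from a second connection to see exactly what that session is holding. The DMV exists on SQL Server 2005 and later; the session id comes from sp_who2:

    -- Run from a separate connection while the app's transaction is open.
    DECLARE @spid int
    SET @spid = 53  -- replace with the app connection's session id (see sp_who2)

    SELECT resource_type, request_mode, request_status,
           resource_associated_entity_id
    FROM sys.dm_tran_locks
    WHERE request_session_id = @spid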
Thanks for any input.
Upvotes: 3
Views: 1305
Reputation: 1740
Upon further investigation we discovered that many of the tables involved had no indexes on them. After creating the recommended indexes for the queries involved in this process, our timeout issue seems to have gone away (for now).
I'm still going to be looking into optimizing this entire process, as it's still pretty slow for the amount of data it's churning through...
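For anyone hitting the same thing, the fix was plain missing-index work. The indexes looked roughly like this (table and column names are placeholders here; the real ones came from the recommendations for our queries):

    -- Representative example only: every table touched by the copy
    -- filters on cID, so each one needed an index keyed on cID.
    CREATE NONCLUSTERED INDEX IX_Orders_cID
        ON DB2.dbo.Orders (cID)
        INCLUDE (OrderDate, Amount)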
Upvotes: 0
Reputation: 432662
Your transaction is held open longer than needed because of the round trips and multiple calls.
I'd suggest a single stored proc that tests, cleans, and writes: one atomic call to the server.
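A minimal sketch of that shape, reusing your existing proc for the copy step (the wrapper name, the WorkingSet table, the IsDirty flag, and the parameter type are all assumptions for illustration):

    CREATE PROCEDURE usp_RefreshWorkingSet  -- hypothetical wrapper name
        @cID varchar(20)
    AS
    BEGIN
        SET NOCOUNT ON
        SET XACT_ABORT ON  -- any error dooms the whole transaction

        BEGIN TRY
            BEGIN TRANSACTION

            -- Test: leave DB2's copy alone if it has dirty edits.
            IF NOT EXISTS (SELECT 1 FROM DB2.dbo.WorkingSet
                           WHERE cID = @cID AND IsDirty = 1)
            BEGIN
                -- Clean: remove the stale rows...
                DELETE FROM DB2.dbo.WorkingSet WHERE cID = @cID

                -- Write: ...and re-copy the current data from DB1,
                -- i.e. what usp_BigNasty_CopyDataFromDB1toDB2 does today.
                EXEC usp_BigNasty_CopyDataFromDB1toDB2 @cID = @cID
            END

            COMMIT TRANSACTION
        END TRY
        BEGIN CATCH
            IF XACT_STATE() <> 0
                ROLLBACK TRANSACTION;
            THROW  -- SQL Server 2012+; use RAISERROR on older versions
        END CATCH
    END

Everything then commits or rolls back as one unit on the server, and locks are held only for the life of that single call instead of across the app's round trips.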
I'd also consider replication to keep the data current in near real time, or even mirroring if DB2 contains a large portion of DB1. Let the DB engine do it?
Upvotes: 1