I Love Stackoverflow

Reputation: 6868

Timeout exception even after setting timeout property for operation

Timeout issue:

Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. The statement has been terminated.

I have 17 million records to dump into my application database. These records are the result of a comparison operation between the records of 2 databases.

I compare the records of the 2 databases, populate the mismatched records (based on some criteria) into a DataTable, and once that DataTable reaches some limit like 1000 or 500 rows I send it to SqlBulkCopy for bulk import and then clear the DataTable.

I do this whole operation inside a transaction so that, if any error occurs during the comparison process after X records have been inserted, I can roll back those X records.

But because of this, I am getting a timeout error during the bulk copy.

I have tried various batch sizes like 5000, 1000, 500, 300, etc. I get timeout errors with all of these batch sizes.

I once set the bulk copy timeout to 0, but then I got the error below:

The transaction log for my database is full.

With a batch size of 1000 it reaches 2.7 million records and then throws a timeout error.

With 500 it reaches some 2.1 million records and then throws an error.

With 300, 200, and 100 it also throws timeout errors.

I have also set the connection timeout in my connection string to 30 minutes.
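
Assuming it is set via the Connect Timeout keyword (which is specified in seconds, so 30 minutes = 1800), the connection string contains something like this (the server and database names here are placeholders):

Server=myServer;Database=XYZ;Integrated Security=True;Connect Timeout=1800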

Code:

using System;
using System.Data;
using System.Data.SqlClient;

public class SaveRepo : IDisposable
{
    DataTable dataTable;
    SqlConnection connection;
    string connectionString;
    SqlTransaction transaction;
    SqlBulkCopy bulkCopy;
    int testId;

    public SaveRepo(string connectionString, int testId) // testId = 10364
    {
        this.connectionString = connectionString;
        this.testId = testId;
        dataTable = new DataTable();

        // one connection and one transaction for the whole import,
        // so that everything can be rolled back if the comparison fails
        connection = new SqlConnection(connectionString);
        connection.Open();
        transaction = connection.BeginTransaction();

        bulkCopy = new SqlBulkCopy(connection, SqlBulkCopyOptions.Default, transaction);
        bulkCopy.BulkCopyTimeout = 60;          // seconds; this is the timeout that expires
        bulkCopy.EnableStreaming = true;
        bulkCopy.DestinationTableName = "dbo.Sales";
        bulkCopy.BatchSize = 100;
        bulkCopy.SqlRowsCopied += new SqlRowsCopiedEventHandler(OnSqlRowsCopied);
        bulkCopy.NotifyAfter = 100;
    }

    void OnSqlRowsCopied(object sender, SqlRowsCopiedEventArgs e)
    {
        // progress notification raised every NotifyAfter rows
    }

    void Dump()
    {
        try
        {
            bulkCopy.WriteToServer(dataTable);
        }
        catch (Exception) // timeout error surfaces here
        {
            throw; // rethrow without resetting the stack trace
        }
    }

    void FillDatatable(object[] row)
    {
        // flush the buffered rows to the server once the limit is reached
        if (dataTable.Rows.Count == 100)
        {
            Dump();
            dataTable.Clear();
        }
        dataTable.Rows.Add(row);
    }

    public void End()
    {
        transaction.Commit();
        // disposal is handled in Dispose()
    }

    public void Dispose()
    {
        bulkCopy.Close();
        transaction.Dispose();
        connection.Dispose();
    }
}

Is there any other way or solution that I am missing which could solve this timeout issue?

Update: After setting BulkCopyTimeout to 0 and using a batch size of 1000, the bulk copy got to 3,593,000 records and then I got this error:

Could not allocate space for object 'dbo.Sales'.'PK_dbo.Sales' in database 'XYZ' because the 'PRIMARY' filegroup is full. Create disk space by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.

Update 2: I removed the transaction and now open and close the connection for each batch; if an error occurs while dumping any batch, I remove all the previously saved data using testId. This now works up to about 3 million records and then I get this error:

Could not allocate space for object 'dbo.Sales'.'PK_dbo.Sales' in database 'XYZ' because the 'PRIMARY' filegroup is full. Create disk space by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.

Execution then goes into the catch block, where I try to remove the old data based on testId, but it takes so long that it then throws this error:

The transaction log for my database is full.

void Dump()
{
    // Update 2: no transaction; each batch gets its own connection and SqlBulkCopy
    using (SqlConnection connection = new SqlConnection(connectionString))
    {
        connection.Open();
        using (SqlBulkCopy bulkCopy = new SqlBulkCopy(connection))
        {
            bulkCopy.DestinationTableName = "dbo.Sales";
            bulkCopy.EnableStreaming = true;
            try
            {
                bulkCopy.WriteToServer(dataTable);
            }
            catch (Exception)
            {
                // on failure, remove everything previously saved for this test run
                connection.Close();
                SalesRepo.Delete(connectionString, testId);
            }
        }
    }
}

Upvotes: 6

Views: 2197

Answers (2)

Walter Verhoeven

Reputation: 4411

So with your 2nd approach you were actually able to "dump" the data into the database using your code, only to find out that you are running out of file space in the database.

This can happen when auto-growth is set to false and/or you have reached the maximum file size available on your disk.

Your first attempt failed because the transaction became too big to maintain with the resources your server had.

First 2 things:

  1. Back up your transaction log if your database is in full recovery mode, or check the database to make sure you have enough log space.
  2. Stripe your table over several files. You do this by making one filegroup that contains several files; best is to spread them over several disk arrays/controllers so the writes can run in parallel (see the sketch after this list).
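
A rough T-SQL sketch of point 2, assuming the database 'XYZ' from the error message; the logical names, paths and sizes are placeholders:

-- add extra data files so the writes can be spread over more disks/controllers
ALTER DATABASE [XYZ]
ADD FILE
    (NAME = N'XYZ_data2', FILENAME = N'E:\SQLData\XYZ_data2.ndf', SIZE = 10GB, FILEGROWTH = 1GB),
    (NAME = N'XYZ_data3', FILENAME = N'F:\SQLData\XYZ_data3.ndf', SIZE = 10GB, FILEGROWTH = 1GB)
TO FILEGROUP [PRIMARY];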

Then,

  • You will have to re-create the indexes, as the indexes become disabled after such an error (a maintenance sketch follows this list).
  • The data statistics are going to be really bad after such a large insert.
  • Each index slows the insert down by a factor; if the indexes are in bad shape this will really slow things down, depending on how many index splits you get (whether your data gets inserted in the sort order of the index or against it; against it is like stacking crates of beer from the bottom instead of on the top, if you understand what I mean).
  • If you have an Enterprise edition, use a partition function; this will greatly speed up the operation, as you can truly process the data in parallel and reduce locking to the partitioned data.
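
A minimal sketch of that index/statistics maintenance, assuming the dbo.Sales table from the question:

-- rebuild any disabled or fragmented indexes on the target table, then refresh its statistics
ALTER INDEX ALL ON dbo.Sales REBUILD;
UPDATE STATISTICS dbo.Sales WITH FULLSCAN;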

Try to back up the transaction log in the middle of your "import"; the backup will persist the data that is transactionally committed and your LDF file will be less stressed.
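
A mid-import log backup might look like this (the database name comes from the error message; the backup path is an assumption):

-- backing up the log lets SQL Server reuse the committed portion instead of growing the LDF
BACKUP LOG [XYZ] TO DISK = N'D:\Backups\XYZ_log.trn';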

Upvotes: 3

Brad

Reputation: 3591

Preface/NOTE: This is a solution that works well for some needs, but may not be recommended for all situations; test and evaluate whether it is the best solution for what you are doing.

This is to solve the issue of the transaction log filling up:

I had a similar issue where I was working on something that was log-file intensive and I filled the log up a couple of times. The log file will shrink back down after the data is shipped/removed from it, but this takes 3-8 minutes (depending on the DB and server settings). To alleviate this issue I created an SP that checks the log file and, if it reaches a certain size, WAITs for a given time period. All these values are variables you pass to the SP.

The way I used this was to put the SP call in my script; it would run, and if the log file got too big it would WAIT, giving the log file time to shrink back down before proceeding.

You call it like this:

EXEC dbo.LogFileFullCheckAndWaitFor 
     @DBNameToCheck = 'DBALocal', 
     @WaitForDealyToUse = '00:00:05.00', 
     @LogFileUsedPercentToCheck = '10'

@DBNameToCheck = The database whose log file you want to check.

@WaitForDealyToUse = The time you want to WAIT before resuming your script (the script uses WAITFOR DELAY). It must be in the format '00:00:05.00' (HH:MM:SS.MM); you can leave off the MM (milliseconds).

@LogFileUsedPercentToCheck = A number with 2 decimal places; if the log file's used percentage exceeds this value, the WAIT is triggered. It also instantly displays a message in the SQL output window (without having to wait for anything to buffer). It does this using RAISERROR, but NOTE that it uses a low severity number so it will not trigger TRY/CATCH blocks (this was the only way I found to display the message instantly without the normal buffering delay). This may not be needed if you are not executing in Management Studio.

Depending on your permission level this may or may not work.

USE [DBALocal]
GO

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO


CREATE PROCEDURE [dbo].[LogFileFullCheckAndWaitFor] (
    @DBNameToCheck VARCHAR(250),
    @WaitForDealyToUse VARCHAR(50),
    @LogFileUsedPercentToCheck DECIMAL(10,2)
)

AS
BEGIN

    SET NOCOUNT ON;


    BEGIN TRY
            -- table to hold the data returned from DBCC SQLPERF(LOGSPACE)
            DECLARE @LogSize AS TABLE (
                DatabaseName VARCHAR(250), 
                LogSize DECIMAL(10,2), 
                LogUsedPercent DECIMAL(10,2), 
                Status INT
            )

            DECLARE @LogUsedPercent AS DECIMAL(10,2)
            DECLARE @RaiseErrorMessage AS VARCHAR(1000)

            -- build out the error message here
            SET @RaiseErrorMessage = 'LOG FILE REACHED ' + CAST(@LogFileUsedPercentToCheck AS VARCHAR(50)) + '% full, so pausing for ' + CAST(@WaitForDealyToUse AS VARCHAR(50))

            /*
                -- removed the below because it may need higher permissions, so using the query below this instead

                INSERT INTO @LogSize
                EXEC('DBCC SQLPERF(LOGSPACE) WITH NO_INFOMSGS;')    

                SELECT @LogUsedPercent = LogUsedPercent
                --select *,  CAST(LogSize*(LogUsedPercent * .01) AS DECIMAL(10,2)) AS TotalSizeUsed, CAST(LogSize - (LogSize*(LogUsedPercent * .01)) AS DECIMAL(10,2)) AS LogSizeLeft
                FROM @LogSize 
                WHERE DatabaseName = @DBNameToCheck 
            */

                -- this has lower required permissions than the above
                -- this gets the log file used percent
                SELECT @LogUsedPercent = cast(pc2.cntr_value*100.0/pc1.cntr_value as dec(5,2))
                FROM sys.dm_os_performance_counters (NOLOCK) AS pc1
                INNER JOIN sys.dm_os_performance_counters (NOLOCK) AS pc2 ON pc1.instance_name = pc2.instance_name
                WHERE  pc1.object_name LIKE '%Databases%'
                AND pc2.object_name LIKE '%Databases%'
                AND pc1.counter_name = 'Log File(s) Size (KB)'
                AND pc2.counter_name = 'Log File(s) Used Size (KB)'
                AND pc1.instance_name not in ('_Total', 'mssqlsystemresource')
                AND pc1.cntr_value > 0
                AND pc1.instance_name = @DBNameToCheck


            -- now if the current log file used percent is > what is passed, it displays a message, and waits for the time passed
            IF (@LogUsedPercent > @LogFileUsedPercentToCheck)
                BEGIN
                    SET @RaiseErrorMessage += ' Current Log Used Percent is: ' + CAST(@LogUsedPercent AS VARCHAR(50)) + ' '

                    -- Do this so it displays the message immediately; it is a low severity error number so it will not be caught by the TRY/CATCH blocks,
                    -- but using "WITH NOWAIT" displays the message instantly instead of waiting for the buffer to flush
                    RAISERROR(@RaiseErrorMessage, 0, 1) WITH NOWAIT

                    -- now wait for the allotted time
                    WAITFOR DELAY @WaitForDealyToUse 
                END

            -- return the percent if they want to capture it
            SELECT @LogUsedPercent



    END TRY
    BEGIN CATCH

        -- run your catch logic here


    END CATCH
END
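
If you want to capture the percentage the procedure returns (it comes back as a single-column result set), one way, sketched here with an arbitrary @Result table variable, is INSERT ... EXEC:

-- capture the returned log-used percentage so the calling script can act on it
DECLARE @Result TABLE (LogUsedPercent DECIMAL(10,2));

INSERT INTO @Result
EXEC dbo.LogFileFullCheckAndWaitFor
     @DBNameToCheck = 'DBALocal',
     @WaitForDealyToUse = '00:00:05.00',
     @LogFileUsedPercentToCheck = '10';

SELECT LogUsedPercent FROM @Result;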

Upvotes: 3
