Demir

Reputation: 1837

Microsoft SQL Server: inserting data into a large table every second

I have a data acquisition system that reads values from some industrial devices and records them into a Microsoft SQL Server 2008 R2 database. The record interval is approximately 20 seconds, and each record contains approximately 600 bytes of data.

Now I need to insert data from a new piece of hardware, but this time the record interval has to be 1 second. In other words, I will insert one 600-byte record into the SQL Server database every second.

I have two questions:

  1. Is there any possible problem that I may run into while inserting data every second? I think Microsoft SQL Server is quite OK with this insertion frequency, but I am not sure over a long period.

  2. The program is a long-running application, and I clear the data table approximately every week. Recording one row per second gives 3,600 rows per hour, 86,400 rows per day, and roughly 600K rows by the end of the week. Is this OK for a good level of read performance? Or should I change my approach so the table does not accumulate that many rows?

By the way, I use LINQ to SQL for all my database operations and C# for programming.
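For context, the per-second insert currently looks roughly like this (a minimal sketch; AcquisitionDataContext, DeviceRecord, and the property names are illustrative placeholders, not my real schema):

        // Called once per second from a timer; one INSERT round-trip each time.
        private void InsertRecord(byte[] payload)
        {
            using (var db = new AcquisitionDataContext(connectionString))
            {
                db.DeviceRecords.InsertOnSubmit(new DeviceRecord
                {
                    Timestamp = DateTime.UtcNow,
                    Payload = payload // ~600 bytes per record
                });
                db.SubmitChanges();
            }
        }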

Upvotes: 2

Views: 2375

Answers (4)

Costas

Reputation: 181

Although I think you should be OK, since you are apparently on the .NET platform you could also check out StreamInsight: http://technet.microsoft.com/en-us/library/ee391416.aspx

Upvotes: 0

Mojtaba

Reputation: 1630

Based on my thesis experience in college: if your system is fully stable and doesn't crash, overflow, etc., you can buffer the rows in memory and use SqlBulkCopy to avoid an I/O operation per record. Here is sample bulk-copy code for a DataTable; this method could be called every hour:

        private void SaveNewData()
        {
            // Release the shared command's connection so the bulk copy
            // does not contend with it.
            if (cmdThesis.Connection.State == ConnectionState.Open)
            {
                cmdThesis.Connection.Close();
            }

            // SqlBulkCopy opens its own connection from this connection string;
            // the using block ensures it is disposed after the flush.
            using (SqlBulkCopy bulkCopy = new SqlBulkCopy(@"Data Source=.;Initial Catalog=YourDb;Integrated Security=True"))
            {
                bulkCopy.BatchSize = 3000;
                bulkCopy.DestinationTableName = "DestinationTable";
                bulkCopy.ColumnMappings.Add(new SqlBulkCopyColumnMapping("Col1", "Col1"));
                bulkCopy.ColumnMappings.Add(new SqlBulkCopyColumnMapping("Col2", "Col2"));
                bulkCopy.ColumnMappings.Add(new SqlBulkCopyColumnMapping("Col3", "Col3"));
                bulkCopy.ColumnMappings.Add(new SqlBulkCopyColumnMapping("Col4", "Col4"));

                // Result is the DataTable that buffers the rows collected
                // since the last flush.
                bulkCopy.WriteToServer(Result);
            }

            Result.Rows.Clear();
        }
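For completeness, the buffering side could look like this (a sketch; BufferReading is a hypothetical helper, and Result is the DataTable from the sample above):

        // Called once per second by the acquisition loop; no database I/O here.
        private void BufferReading(object col1, object col2, object col3, object col4)
        {
            Result.Rows.Add(col1, col2, col3, col4);

            // About one hour of 1-second readings, then flush in a single bulk copy.
            if (Result.Rows.Count >= 3600)
            {
                SaveNewData();
            }
        }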

Upvotes: 0

veljasije

Reputation: 7092

First of all, you must think about the existing indexes on the tables you insert into, because indexes slow down the insert process. Second, if the database uses the FULL recovery model, every insert is written to the transaction log, and your log file will grow rapidly.

Consider changing your recovery model to SIMPLE and disabling your indexes, as sketched below.

Of course, selecting rows from that table will then be slower, but I don't know what your query requirements are.
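For illustration, both changes could be applied once from C# like this (a sketch; YourDb, dbo.YourTable, and IX_YourIndex are placeholders, and this applies to nonclustered indexes only, since a disabled clustered index makes the table inaccessible):

        // One-time setup before heavy inserting; requires ALTER permissions.
        using (var conn = new SqlConnection(@"Data Source=.;Initial Catalog=YourDb;Integrated Security=True"))
        {
            conn.Open();
            using (var cmd = conn.CreateCommand())
            {
                // Keep the transaction log from growing with every insert.
                cmd.CommandText = "ALTER DATABASE YourDb SET RECOVERY SIMPLE";
                cmd.ExecuteNonQuery();

                // Skip index maintenance during inserts; re-enable later with
                // ALTER INDEX IX_YourIndex ON dbo.YourTable REBUILD.
                cmd.CommandText = "ALTER INDEX IX_YourIndex ON dbo.YourTable DISABLE";
                cmd.ExecuteNonQuery();
            }
        }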

Upvotes: 0

Ehsan

Reputation: 32701

Is there any possible problem that I may run into while inserting data every second? I think Microsoft SQL Server is quite OK with this insertion frequency, but I am not sure over a long period.

If the database is properly designed, then you should not run into any problems. We save GIS data at a much higher rate without any issues.

Is this OK for a good level of read performance? Or should I change my approach so the table does not accumulate that many rows?

It depends: if you need all the data, how could you change the approach? And if you don't need it, why save it in the first place?

Upvotes: 1
