Reputation: 51
My question is more of a design question than a code question. I'm trying to INSERT trades on stocks, as they come in, into SQL Server.
Trades can come in many times a second, and I'm receiving trades from 15 stocks at the same time, so there could potentially be a lot of trades in the same second. My question is: what is the best way to do the INSERT?
1. Open one connection at the start of the trading session and continuously insert trades as they are received. Will this slow down my app? The trading session is 12 hours; does SQL Server allow a connection to last that long?
2. Collect trades in memory and do a bulk insert once every x minutes (a rough sketch of what I mean is below)? I'd rather not keep any trades in memory, as this will slow things down and use up a lot of RAM.
Is there a better way to do this?
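To make option 2 concrete, here is a rough sketch of what I mean (using pyodbc and a made-up Trades table purely for illustration; not my actual code or schema):

```python
# Option 2 sketch: buffer trades briefly in memory, flush them in batches.
# pyodbc, the connection string, and the Trades table are placeholders.
import time
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
    "DATABASE=Trading;Trusted_Connection=yes;"
)
cursor = conn.cursor()
cursor.fast_executemany = True  # send each batch in as few round trips as possible

buffer = []
FLUSH_ROWS = 500        # flush when this many trades are buffered...
FLUSH_SECONDS = 1.0     # ...or when this much time has passed
last_flush = time.monotonic()

def on_trade(symbol, price, qty, ts):
    """Called by the feed handler for every incoming trade."""
    buffer.append((symbol, price, qty, ts))
    flush_if_needed()

def flush_if_needed(force=False):
    global last_flush
    if not buffer:
        return
    if force or len(buffer) >= FLUSH_ROWS or time.monotonic() - last_flush >= FLUSH_SECONDS:
        cursor.executemany(
            "INSERT INTO dbo.Trades (Symbol, Price, Quantity, TradeTime) "
            "VALUES (?, ?, ?, ?)",
            buffer,
        )
        conn.commit()
        buffer.clear()
        last_flush = time.monotonic()
```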
Upvotes: 0
Views: 836
Reputation: 5458
You don't say exactly how many rows per second are being inserted. SQL Server on a good machine (whatever that means) can easily handle hundreds of sequential inserts per second, depending of course on what else the machine is doing, how fast the disks are (SSDs?), and how much memory is available. So what you are asking is, essentially, "can someone drive from NY to California in a week?"
If you find that your machine cannot handle tens or hundreds of thousands of inserts per second, you may try inserting rows into a narrow table (as few, narrow columns as possible) without keys. I have seen SQL Server successfully handle enormous amounts of inserts into such a table. At the end of the day I would copy the data into another, clustered table on which I run all my processing. A rough sketch of that pattern is below.
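Here is roughly what that looks like (shown with pyodbc, but the same SQL applies from any client; server, table, and column names are made up, so adapt them to your own schema):

```python
# Sketch of the narrow, keyless staging table and the end-of-day copy.
# Server, database, table, and column names are illustrative only.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
    "DATABASE=Trading;Trusted_Connection=yes;"
)
cursor = conn.cursor()

# Staging table: a heap with no primary key and no indexes, so each
# insert does as little work as possible.
cursor.execute("""
    IF OBJECT_ID('dbo.TradesStaging') IS NULL
        CREATE TABLE dbo.TradesStaging (
            Symbol    varchar(10)   NOT NULL,
            Price     decimal(18,4) NOT NULL,
            Quantity  int           NOT NULL,
            TradeTime datetime2(3)  NOT NULL
        );
""")
conn.commit()

# ... during the session, all inserts go into dbo.TradesStaging ...

# End of day: move the rows into the clustered table used for processing.
cursor.execute("""
    INSERT INTO dbo.Trades (Symbol, Price, Quantity, TradeTime)
    SELECT Symbol, Price, Quantity, TradeTime
    FROM dbo.TradesStaging;

    TRUNCATE TABLE dbo.TradesStaging;
""")
conn.commit()
```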
Finally, you may want to look at Service Broker, which can use messaging to handle very large volumes asynchronously.
An active connection will not time out, so holding one open for the whole 12-hour session is not a problem.
Upvotes: 1
Reputation: 2350
The short answer (and a rather ambiguous one, given your question) is "it depends". What it depends on is what you're using to build the client application.
If it's a web application built with PHP/ASP, there shouldn't really be any problem with multiple inserts. SQL Server intrinsically supports this due to the way it's designed, and the same goes for most RDBMSs. Of course, a bulk insert puts less strain on the server, since it's one operation as opposed to many separate ones, but with 15 stocks you're still talking about a small-time workload for something that's designed to handle BIG data. A sketch of the straightforward per-trade approach is below.
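As a rough illustration (pyodbc and placeholder names here, but any client library works much the same way), inserting each trade as it arrives over one open connection looks something like this, and is unlikely to be a bottleneck at this volume:

```python
# Option 1 sketch: one connection held open for the whole session,
# one INSERT per trade as it arrives. Names are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
    "DATABASE=Trading;Trusted_Connection=yes;",
    autocommit=True,  # each insert commits on its own
)
cursor = conn.cursor()

def on_trade(symbol, price, qty, ts):
    # Called for every incoming trade; a few hundred of these per
    # second is comfortably within what SQL Server can sustain.
    cursor.execute(
        "INSERT INTO dbo.Trades (Symbol, Price, Quantity, TradeTime) "
        "VALUES (?, ?, ?, ?)",
        (symbol, price, qty, ts),
    )
```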
What I would probably suggest is that you have an intermediary that "listens" for data and opens a connection to the database when a transaction is in progress. In PHP we usually call this "long-polling", which is a ubiquitous term for any exchange of data that is "pushed" to and from a server. There's plenty of documentation online to help you out with this, including a Wikipedia page (if you're into that kind of stuff).
Upvotes: 0