Reputation: 2098
I have an HTML5 front end that uses an HTML5 canvas. The canvas displays a set of images which the user can manipulate (move them around) and then click 'Save'. 'Save' then sends a list of image data, such as x,y coordinates, to an asmx web service.
At the moment I am deleting all records for a session and reinserting new records; if the session was never saved earlier, then I am simply inserting.
My first question: is it recommended to delete and insert, or simply run an update? Which one is computationally less intensive?
Second, I want to make sure that all of the session data is committed, i.e. if I send 46 rows of data I want to make sure they are all written. In the delete/insert scenario it's possible that the delete might succeed whereas the insert might fail.
How does one accomplish this using transactions?
At the moment I am just using Entity Framework: selecting the session data, deleting it, and then reinserting all the data again.
Any pointers would be most appreciated.
Upvotes: 2
Views: 2753
Reputation: 5697
Check out the MERGE statement in conjunction with TVPs (table-valued parameters).
I've also created a temp table and used the SqlBulkCopy class (followed by a MERGE, of course) to good effect. SqlBulkCopy is usually used for many more rows, but it's simple to use (more so than TVPs), so no harm done using it under these conditions.
Another alternative is to create a temp table, do lots of inserts into it, and follow that with the MERGE; there's a sketch of that approach below. By sending all the instructions in a single round trip you'll get OK performance. You could also run 46 MERGE statements if you wanted, I guess - you'd have to do an inline select of your values.
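As a rough illustration, here is a minimal sketch of the temp table + MERGE approach. The table and column names (dbo.SessionImage, ImageId, X, Y) are assumptions for illustration, not taken from the question; with a TVP the MERGE is the same, just reading from the parameter instead of #Positions.

-- Minimal sketch: stage the rows in a temp table, then MERGE once.
-- dbo.SessionImage(SessionId, ImageId, X, Y) is a hypothetical target table.
CREATE TABLE #Positions (ImageId INT NOT NULL, X INT NOT NULL, Y INT NOT NULL);

-- In practice these rows come from SqlBulkCopy or a multi-row INSERT
-- sent in the same batch as the MERGE.
INSERT INTO #Positions (ImageId, X, Y)
VALUES (1, 10, 20), (2, 30, 40);  -- ... one row per image

DECLARE @SessionId INT = 42;      -- illustrative session key

MERGE dbo.SessionImage AS target
USING #Positions AS source
   ON target.SessionId = @SessionId
  AND target.ImageId = source.ImageId
WHEN MATCHED THEN
    UPDATE SET X = source.X, Y = source.Y
WHEN NOT MATCHED BY TARGET THEN
    INSERT (SessionId, ImageId, X, Y)
    VALUES (@SessionId, source.ImageId, source.X, source.Y);

DROP TABLE #Positions;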
In all cases a transaction would be appropriate, though not strictly necessary for the first two because the MERGE is atomic. Still a good habit to get into in case you start doing more stuff.
Upvotes: 0
Reputation: 10516
My first question: is it recommended to delete and insert, or simply run an update? Which one is computationally less intensive?
An update is cheaper than a delete and insert. The reason is that the bottleneck is the disk, not computation. Inserts and deletes allocate and free bytes in a page on disk and probably need to update an index or two. An update leaves the rows in place and only changes a few bytes, so that's less I/O. On top of that, the insert and delete are two operations (plus any indexes this affects) versus only one for the update. (There's also some extra overhead for logging, btw.)
If you are on SQL Server 2008 or above you can use table-valued parameters (TVPs); these should work pretty well for what you describe, because you can send your entire update set to SQL in one round trip. If you use regular UPDATE statements you would need to run 46 of them. A sketch of the TVP approach follows.
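For example (a sketch only; the type, procedure, and table names here are assumptions for illustration, not from the question), a single set-based UPDATE driven by a TVP could look like this:

-- Hypothetical TVP type carrying one row per image position.
CREATE TYPE dbo.ImagePositionType AS TABLE
    (ImageId INT NOT NULL, X INT NOT NULL, Y INT NOT NULL);
GO
CREATE PROCEDURE dbo.UpdateSessionImages
    @SessionId INT,
    @Positions dbo.ImagePositionType READONLY
AS
BEGIN
    -- One statement updates all 46 rows in a single round trip.
    UPDATE t
    SET t.X = p.X, t.Y = p.Y
    FROM dbo.SessionImage AS t
    JOIN @Positions AS p
      ON p.ImageId = t.ImageId
    WHERE t.SessionId = @SessionId;
END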
If you really want to tune this, you may also want to look at combining the TVPs with the MERGE statement to handle a mix of updates and inserts in a single operation, as sketched below.
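Reusing the hypothetical type from the previous sketch, a MERGE version that handles both updates and inserts in one statement might look like:

CREATE PROCEDURE dbo.UpsertSessionImages
    @SessionId INT,
    @Positions dbo.ImagePositionType READONLY
AS
BEGIN
    -- Matching rows get updated, missing rows get inserted, atomically.
    MERGE dbo.SessionImage AS target
    USING @Positions AS source
       ON target.SessionId = @SessionId
      AND target.ImageId = source.ImageId
    WHEN MATCHED THEN
        UPDATE SET X = source.X, Y = source.Y
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (SessionId, ImageId, X, Y)
        VALUES (@SessionId, source.ImageId, source.X, source.Y);
END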
Having said that, you may want to just use whatever is easiest in your code. If you don't have a performance problem, you don't have to fix it.
Second, I want to make sure that all of the session data is committed, i.e. if I send 46 rows of data I want to make sure they are all written. In the delete/insert scenario it's possible that the delete might succeed whereas the insert might fail.
You can indeed use a transaction, either for the updates or for the deletes/inserts:
set xact_abort on -- ensure any error rolls the whole transaction back
begin tran
delete from SessionData where SessionId = @sessionId
insert into SessionData (value1, value2) values (1, 2), (3, 4) -- ...
commit tran
This will either succeed as a whole or leave the database unchanged. The same goes for the table-valued parameter approach.
Upvotes: 1