Reputation: 2022
I have searched all afternoon and don't believe this is a duplicate of the other questions floating around, such as ExecuteNonQuery() and SET NOCOUNT ON.
I also found an interesting blog post, Finding code smells using SQL Prompt: the SET NOCOUNT problem (PE008 and PE009), which goes into some interesting detail but still doesn't answer my question.
I have a database server where the admins have ticked the 'no count' option for the server connections (in SSMS, right-click the server in Object Explorer, go to Properties, select the Connections page, and under 'Default connection options' scroll down until you find the 'no count' selector).
Based on all the reading I've done, more and more admins may well be ticking the 'no count' option to increase performance, since the row-count messages will no longer be sent back to the client.
My question therefore mainly focuses on the SqlCommand.ExecuteNonQuery method in C#, since it relies on the fact that it:
Executes a Transact-SQL statement against the connection and returns the number of rows affected.
But if SET NOCOUNT ON is enforced, then the result will always be -1. I've read that people recommend using select @rowcount = @@ROWCOUNT instead, but this doesn't make up for the fact that you're losing functionality from the SqlCommand.ExecuteNonQuery method, to the point that you might as well just start using SqlCommand.ExecuteScalar instead.
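To illustrate that last point, here is a sketch of the @@ROWCOUNT workaround in C#; the table, column and variable names are hypothetical, and the point is only that the count comes back as a result set (unaffected by NOCOUNT) rather than as a done-message:

```csharp
// Sketch: obtain the affected-row count via SELECT @@ROWCOUNT, which works
// regardless of the connection's NOCOUNT setting, because the count is
// returned as a one-row result set rather than a row-count message.
using (var cmd = new SqlCommand(
    "UPDATE dbo.Orders SET Status = @status WHERE OrderId = @id; " +
    "SELECT @@ROWCOUNT;",
    connection))
{
    cmd.Parameters.AddWithValue("@status", "Shipped");
    cmd.Parameters.AddWithValue("@id", orderId);

    // ExecuteScalar reads the first column of the first row of the result.
    int rowsAffected = (int)cmd.ExecuteScalar();
}
```

With this shape, ExecuteNonQuery's return value is never consulted, which is exactly the trade-off described above.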
So my question is: as best practice, should we start setting SQL Servers to 'no count' (or at the very least expect that's how they're going to be configured in the next couple of years)? And if so, should we be forcing SET NOCOUNT OFF, or dropping SqlCommand.ExecuteNonQuery in favour of select @rowcount = @@ROWCOUNT?
Upvotes: 0
Views: 1162
Reputation: 5763
About a decade ago I started using SqlCop, which used to highlight "Procedures without SET NOCOUNT ON". Over time my approach has evolved and I no longer use it, but I liked the concept, so I now have a set of unit tests which run metadata queries to find the things SqlCop used to highlight (and a whole lot more).
Each of my tests allows me to add exclusions to the rule, which lets me document the justification for each exclusion in a comment.
My current test for that (which I don't have to hand) probably highlights stored procs which contain SELECT but not SET NOCOUNT ON. This way, INSERTs, UPDATEs and DELETEs are all left with the default behaviour, allowing my code to use the result of ExecuteNonQuery. Any fringe cases can be excluded from the rule.
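A minimal version of such a metadata check might look like the following T-SQL; the crude LIKE matching and the exclusion list are illustrative only (a real rule would need to cope with comments and string literals in the proc body):

```sql
-- Flag stored procedures that contain a SELECT but no SET NOCOUNT ON.
SELECT OBJECT_SCHEMA_NAME(m.object_id) AS SchemaName,
       OBJECT_NAME(m.object_id)        AS ProcName
FROM sys.sql_modules AS m
JOIN sys.procedures  AS p ON p.object_id = m.object_id
WHERE m.definition LIKE '%SELECT%'
  AND m.definition NOT LIKE '%SET NOCOUNT ON%'
  -- Documented exclusions (hypothetical name):
  AND OBJECT_NAME(m.object_id) NOT IN ('usp_KnownExclusion');
```

A unit test can then simply assert that this query returns zero rows.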
This allows me to leave SQL Server with the default behaviour, with my application's SPs choosing when to vary from Microsoft's default.
Notice that if someone modified that setting away from the default, my code wouldn't work, because I haven't followed the advice at the very end of the RedGate article: "If you need to ensure that a query returns a rowcount message, you should specify it rather than assume the current setting."
It's very sensible advice, in the sense that hindsight is a wonderful thing: you would only know that you needed to set it if you knew the default could be changed. It's a bit like learning that scripting SPs as "IF EXISTS, DROP and CREATE" (so helpfully offered by SSMS) is the wrong way to do it: better to just IF NOT EXISTS CREATE a placeholder, and then ALTER it to the actual definition, which avoids losing the metadata, like permissions.
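That placeholder-then-ALTER pattern can be sketched like this (the proc name is hypothetical; on SQL Server 2016 SP1 and later, CREATE OR ALTER achieves the same thing in one statement):

```sql
-- Create a stub only if the proc doesn't exist yet, then ALTER it to the
-- real definition. Permissions and other metadata on the object survive,
-- unlike with DROP and CREATE.
IF NOT EXISTS (SELECT 1 FROM sys.procedures
               WHERE object_id = OBJECT_ID(N'dbo.usp_Example'))
    EXEC ('CREATE PROCEDURE dbo.usp_Example AS RETURN 0;');
GO
ALTER PROCEDURE dbo.usp_Example
AS
BEGIN
    SET NOCOUNT ON;
    SELECT 1 AS Result;
END
GO
```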
Of course I could fix my SPs, and add a new or changed unit test to ensure that I always set the NOCOUNT option one way or the other. But the point I'm making is that it wouldn't surprise me if setting that behaviour differently from the default also broke any third-party products you install on that server. (Maybe that doesn't matter in your scenario.)
Whichever way you decide to go, I've found this technique of having unit tests to enforce your chosen approach invaluable. Checking that NOCOUNT is always set explicitly, one way or the other, by each SP would seem sensible.
Upvotes: 1