Reputation: 2906
So, I'm looking at a stored procedure that has more than one check like the following pseudocode:
if(select count(*) > 0)
...
on tables having a unique id (or identifier, to make it more general).
Now, in terms of performance, is it better to change this clause to
if(select count([uniqueId]) > 0)
...
where uniqueId is, e.g., an Idx column containing double values?
An example:
Consider a table like Idx (double) | Name (String) | Address (String)
Now, 'Idx' is a foreign key which I want to join on in a stored procedure.
So, in terms of performance: what is better here?
if(select count(*) > 0)
...
or
if(select count(Idx) > 0)
...
Or does the SQL engine change select count(*) to select count(Idx) internally, so that we do not have to bother about this? At first sight, I'd say that select count(Idx) would be more performant.
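Just to make the pseudocode concrete, here is roughly what such a check looks like in actual T-SQL; the Customer table and @Idx parameter are only placeholders:
IF (SELECT COUNT(*) FROM Customer WHERE Idx = @Idx) > 0
BEGIN
    -- variant 1: count all matching rows
    PRINT 'found (count(*))';
END

IF (SELECT COUNT(Idx) FROM Customer WHERE Idx = @Idx) > 0
BEGIN
    -- variant 2: count matching rows with a non-NULL Idx
    PRINT 'found (count(Idx))';
END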
Upvotes: 2
Views: 204
Reputation: 1269563
The two are slightly different. count(*) counts rows. count([uniqueid]) counts the number of non-NULL values for uniqueid. Because a unique constraint allows a NULL value, SQL Server actually needs to read the column. This could add microseconds of time to a query, particularly if the page with the id is not already in memory. This also gives SQL Server more opportunities to optimize count(*).
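To see the difference concretely, here is a minimal sketch with a throwaway table (the names are only for illustration); one NULL in the column makes the two counts disagree:
-- a unique constraint still allows a (single) NULL value
CREATE TABLE #Demo (uniqueid int NULL, CONSTRAINT UQ_Demo UNIQUE (uniqueid));
INSERT #Demo (uniqueid) VALUES (1), (2), (NULL);
-- count(*) = 3 (all rows), count(uniqueid) = 2 (non-NULL values only)
SELECT COUNT(*) AS cnt_rows, COUNT(uniqueid) AS cnt_nonnull FROM #Demo;
DROP TABLE #Demo;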
As @lad2025 writes in a comment, the performant solution is to use IF EXISTS (. . .).
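For completeness, a sketch of that rewrite (table and column names are placeholders): EXISTS lets SQL Server stop at the first matching row instead of counting them all.
IF EXISTS (SELECT 1 FROM Customer WHERE Idx = @Idx)
BEGIN
    -- at least one matching row; no full count needed
    PRINT 'found';
END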
Upvotes: 2
Reputation: 6405
SELECT t1.*
FROM Table1 t1
JOIN Table2 t2 ON t2.idx = t1.idx
will give you only the rows in t1 that match an idx value in Table2. I'm not sure there is a good reason to do an if(select count...).
If you are really interested in the performance of something like this, just create a temp table with a million rows and give it a go:
-- temp table with an identity column to hold the test rows
CREATE TABLE #TempTable (id int identity, txt varchar(50))
GO
-- "GO 1000000" repeats the previous batch a million times,
-- so this single-row INSERT runs once per repetition
INSERT #TempTable (txt) VALUES (@@IDENTITY)
GO 1000000
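If you do that, one way to compare the variants is to switch on timing statistics and run each form against the populated temp table (a sketch only; the timings depend on your machine and on what is already cached):
SET STATISTICS TIME ON;

IF (SELECT COUNT(*) FROM #TempTable) > 0
    PRINT 'count(*): rows exist';

IF (SELECT COUNT(id) FROM #TempTable) > 0
    PRINT 'count(id): rows exist';

IF EXISTS (SELECT 1 FROM #TempTable)
    PRINT 'exists: rows exist';

SET STATISTICS TIME OFF;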
Upvotes: -1