Karim

Reputation: 18597

Using "IN" in a WHERE clause where the number of items in the set is very large

I have a situation where I need to do an update on a very large set of rows that I can only identify by their IDs (the target records are selected by the user and have nothing in common other than being the set of records the user wants to modify). The same property is being updated on all these records, so I would like to make a single UPDATE call.

Is it bad practice or is there a better way to do this update than using "WHERE IN (1,2,3,4,.....10000)" in the UPDATE statement?

Would it make more sense to use individual update statements for each record and stick them into a single transaction? Right now I'm working with SQL Server and Access but, if possible, I'd like to hear broader best-practice solutions for any kind of relational database.

Upvotes: 17

Views: 54207

Answers (10)

Aswath

Reputation: 986

There are multiple ways to accommodate a large set of values in a WHERE condition:

  1. Using Temp Tables

Insert the values into a temp table with a single column.

Create a UNIQUE INDEX on that particular column.

INNER JOIN the required table with the newly created temp table

  2. Using array-like functionality in SQL Server

SQL Server does support array-like functionality: pass the IDs down as a delimited string and shred it into rows with XML.

SAMPLE SYNTAX:

CREATE TABLE #IDs (id int NOT NULL)
DECLARE @x varchar(max) = '1,2,3'  -- sample CSV of IDs
DECLARE @xParam XML;
SELECT @xParam = CAST('<i>' + REPLACE(@x, ',', '</i><i>') + '</i>' AS XML)
INSERT INTO #IDs
SELECT x.i.value('.', 'int') AS id FROM @xParam.nodes('//i') x(i)
CREATE UNIQUE INDEX IX_#IDs ON #IDs (id ASC)

Query using

SELECT A.Name, A.Age FROM MyTable A INNER JOIN #IDs ids ON ids.id = A.[Key]
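For a concrete, runnable illustration of the temp-table approach (option 1 above), here is a minimal sketch using Python's sqlite3 module; the table and column names are made up for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (pid INTEGER PRIMARY KEY, name TEXT, age INTEGER)")
conn.executemany("INSERT INTO people VALUES (?, ?, ?)",
                 [(1, "Ann", 30), (2, "Bob", 41), (3, "Cid", 25), (4, "Dee", 52)])

# Step 1: insert the selected IDs into a single-column temp table.
conn.execute("CREATE TEMP TABLE ids (id INTEGER NOT NULL)")
conn.executemany("INSERT INTO ids VALUES (?)", [(2,), (4,)])

# Step 2: a unique index on that column keeps the join lookups fast.
conn.execute("CREATE UNIQUE INDEX ix_ids ON ids (id)")

# Step 3: inner join the target table against the temp table instead of
# writing a huge IN list.
rows = conn.execute(
    "SELECT p.name, p.age FROM people p INNER JOIN ids i ON i.id = p.pid "
    "ORDER BY p.pid"
).fetchall()
print(rows)  # [('Bob', 41), ('Dee', 52)]
```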

Upvotes: 1

Tooony

Reputation: 3889

Without knowing what a "very large" number of ID's might be, I'd venture a guess. ;-)

Since you are using Access as a database, the number of IDs can't be that high. Assuming we're talking about fewer than, say, 10,000 numbers, and allowing for the limitations of the containers that hold the IDs (what language is used for the front end?), I'd stick to one UPDATE statement if that is the most readable and the easiest to maintain later. Otherwise I'd split it into multiple statements using some clever logic, e.g. statements with one, ten, a hundred, a thousand... IDs each.

Then, I'd leave it to the DB optimiser to execute the statement(s) as efficiently as possible. I would probably do an 'explain' on the query or queries to make sure nothing silly is going on, though.

But in my experience, it is quite often OK to leave this kind of optimisation to the database manager itself. The thing that takes the most time is usually the actual connection to the database, so if you can execute all queries within the same connection there are normally no problems. Make sure you send off all the UPDATE statements before you start to look into and wait for any result sets coming back, though. :-)

Upvotes: 2

David-W-Fenton

Reputation: 23067

I don't know the type of values in your IN list. If they are most of the values from 1 to 10,000, you might be able to process them to get something like:

WHERE MyID BETWEEN 1 AND 10000 AND MyID NOT IN (3,7,4656,987)

Or, if the NOT IN list would still be long, processing the list and generating a bunch of BETWEEN statements:

WHERE MyID BETWEEN 1 AND 343 OR MyID BETWEEN 344 AND 400 ...

And so forth.
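The range-compression step can be automated. Here is a hypothetical Python helper (not part of the original answer) that folds an ID list into contiguous ranges and renders them as BETWEEN predicates:

```python
# Compress a list of IDs into contiguous (lo, hi) ranges, which can then be
# rendered as "MyID BETWEEN lo AND hi" predicates joined with OR.
def to_ranges(ids):
    ranges = []
    for n in sorted(set(ids)):
        if ranges and n == ranges[-1][1] + 1:
            ranges[-1] = (ranges[-1][0], n)  # extend the current range
        else:
            ranges.append((n, n))            # start a new range
    return ranges

ids = [1, 2, 3, 4, 7, 8, 9, 500]
print(to_ranges(ids))  # [(1, 4), (7, 9), (500, 500)]

where = " OR ".join(
    f"MyID BETWEEN {lo} AND {hi}" if lo != hi else f"MyID = {lo}"
    for lo, hi in to_ranges(ids)
)
print(where)  # MyID BETWEEN 1 AND 4 OR MyID BETWEEN 7 AND 9 OR MyID = 500
```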

Last of all, you don't have to worry about how Jet will process an IN clause if you use a passthrough query. You can't do that in code, but you could have a saved QueryDef that is defined as a passthrough and alter the WHERE clause in code at runtime to use your IN list. Then it's all passed off to SQL Server, and SQL Server will decide best how to process it.

Upvotes: 0

jimmyorr

Reputation: 11688

If you were on Oracle, I'd recommend using table functions, similar to Marc Gravell's post.

-- first create a user-defined collection type, a table of numbers
create or replace type tbl_foo as table of number;

declare
  temp_foo tbl_foo;
begin
  -- this could be passed in as a parameter; for simplicity it is hardcoded
  temp_foo := tbl_foo(7369, 7788);

  -- here I use a table function to treat my temp_foo variable as a table,
  -- and I join it to the emp table as an alternative to a massive "IN" clause
  -- (in PL/SQL a bare SELECT needs a cursor or an INTO clause, hence the loop)
  for r in (select e.empno, e.ename
              from emp e,
                   table(temp_foo) foo
             where e.empno = foo.column_value)
  loop
    dbms_output.put_line(r.empno || ' ' || r.ename);
  end loop;
end;

Upvotes: 1

JosephStyons

Reputation: 58685

How do you generate the IN clause?

If there is another SELECT statement that generates those values, you could simply plug it into the UPDATE like so:

UPDATE TARGET_TABLE T
SET
  SOME_VALUE = 'Whatever'
WHERE T.ID_NUMBER IN(
                    SELECT ID_NUMBER  --this SELECT generates your ID #s.
                    FROM SOURCE_TABLE
                    WHERE SOME_CONDITIONS
                    )

In some RDBMSes, you'll get better performance by using the EXISTS syntax, which would look like this:

UPDATE TARGET_TABLE T
SET
  SOME_VALUE = 'Whatever'
WHERE EXISTS (
             SELECT ID_NUMBER  --this SELECT generates your ID #s.
             FROM SOURCE_TABLE S
             WHERE SOME_CONDITIONS
               AND S.ID_NUMBER =  T.ID_NUMBER
             )
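The EXISTS variant above can be tried end-to-end with SQLite; a minimal sketch, with made-up table names and `flagged = 1` standing in for SOME_CONDITIONS:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (id INTEGER PRIMARY KEY, val TEXT)")
conn.execute("CREATE TABLE source (id INTEGER PRIMARY KEY, flagged INTEGER)")
conn.executemany("INSERT INTO target VALUES (?, ?)",
                 [(1, "old"), (2, "old"), (3, "old")])
conn.executemany("INSERT INTO source VALUES (?, ?)",
                 [(1, 0), (2, 1), (3, 1)])

# Correlated EXISTS: update only the target rows whose id appears in the
# source table under the stand-in condition (flagged = 1).
conn.execute("""
    UPDATE target
    SET val = 'Whatever'
    WHERE EXISTS (SELECT 1 FROM source s
                  WHERE s.flagged = 1 AND s.id = target.id)
""")

rows = conn.execute("SELECT id, val FROM target ORDER BY id").fetchall()
print(rows)  # [(1, 'old'), (2, 'Whatever'), (3, 'Whatever')]
```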

Upvotes: 5

Mr. Shiny and New 安宇

Reputation: 13908

In general there are several things to consider.

  1. The statement parsing cache in the DB. Each statement, with a different number of items in the IN clause, has to be parsed separately. You ARE using bound variables instead of literals, right?
  2. Some Databases have a limit on the number of items in the IN clause. For Oracle it's 1000.
  3. When updating you lock records. If you have multiple separate update statements you can have deadlocks. This means you have to be careful about the order in which you issue your updates.
  4. Round-trip latency to the database can be high, even for a very fast statement. This means it's often better to manipulate lots of records at once to save trip-time.

We recently changed our system to limit the size of the in-clauses and always use bound variables because this reduced the number of different SQL statements and thus improved performance. Basically we generate our SQL statements and execute multiple statements if the in-clause exceeds a certain size. We don't do this for updates so we haven't had to worry about the locking. You will.
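The chunking scheme described above can be sketched as follows (Python with sqlite3 for the demo; `MAX_IN` is an assumed cap, far smaller than a real limit like Oracle's 1000):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany("INSERT INTO t VALUES (?, 'pending')",
                 [(i,) for i in range(1, 11)])

MAX_IN = 3  # assumed cap for the demo
ids = [1, 2, 5, 6, 7, 9, 10]

# Chunk the ID list and bind each chunk as parameters: only a handful of
# distinct statement shapes (1..MAX_IN placeholders) ever hit the parse cache,
# instead of one statement per distinct literal list.
for i in range(0, len(ids), MAX_IN):
    chunk = ids[i:i + MAX_IN]
    placeholders = ",".join("?" * len(chunk))
    conn.execute(f"UPDATE t SET status = 'done' WHERE id IN ({placeholders})",
                 chunk)

done = [r[0] for r in
        conn.execute("SELECT id FROM t WHERE status = 'done' ORDER BY id")]
print(done)  # [1, 2, 5, 6, 7, 9, 10]
```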

Using a temp table may not improve performance because you have to populate the temp table with the IDs. Experimentation and performance tests can tell you the answer here.

A single IN clause is very easy to understand and maintain. This is probably what you should worry about first. If you find that the performance of the queries is poor you might want to try a different strategy and see if it helps, but don't optimize prematurely. The IN-clause is semantically correct so leave it alone if it isn't broken.

Upvotes: 1

DanSingerman

Reputation: 36502

I would always use

WHERE id IN (1,2,3,4,.....10000)

unless your IN clause was stupidly large, which shouldn't really happen from user input.

edit: For instance, Rails does this a lot behind the scenes

It would definitely not be better to do separate update statements in a single transaction.

Upvotes: 10

Marc Gravell

Reputation: 1062745

I would use a table-variable / temp-table; insert the values into this, and join to it. Then you can use the same set multiple times. This works especially well if you are (for example) passing down a CSV of IDs as varchar. As a SQL Server example:

DECLARE @ids TABLE (id int NOT NULL)

INSERT @ids
SELECT value
FROM dbo.SplitCsv(@arg) -- need to define separately

UPDATE t
SET    t. -- etc
FROM   [TABLE] t
INNER JOIN @ids i ON i.id = t.id
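For a runnable approximation of this pattern, here is a Python/SQLite sketch; the CSV is split client-side (dbo.SplitCsv is a server-side helper the reader must define), and SQLite's IN-subquery stands in for SQL Server's UPDATE ... FROM join:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, state TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, 'open')",
                 [(i,) for i in range(1, 6)])

csv_arg = "2,3,5"  # the varchar CSV of IDs passed down from the client

# Split the CSV and load the values into a temp table, so the same set can
# be reused by several statements.
conn.execute("CREATE TEMP TABLE ids (id INTEGER NOT NULL)")
conn.executemany("INSERT INTO ids VALUES (?)",
                 [(int(s),) for s in csv_arg.split(",")])

# Set-based update driven by the temp table.
conn.execute("UPDATE orders SET state = 'closed' "
             "WHERE id IN (SELECT id FROM ids)")

rows = conn.execute("SELECT id, state FROM orders ORDER BY id").fetchall()
print(rows)  # [(1, 'open'), (2, 'closed'), (3, 'closed'), (4, 'open'), (5, 'closed')]
```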

Upvotes: 2

HeDinges

Reputation: 4607

In Oracle there is a limit (1,000) on the number of values you can put into an IN clause. So you are better off using OR instead: x = 1 OR x = 2 ... Those are not limited, as far as I know.

Upvotes: 2

Otávio Décio

Reputation: 74250

Another alternative is to store those numbers in a temp table and use it in a join to do the update. If you are able to execute a single update statement, it is definitely better than executing one statement per record.

Upvotes: 16
