Reputation: 2544
I have a file containing a list of 5000 ids of records in a table.
Example: id_list.txt
('12345', '23456', '34567', ..., '65432')
I want to select the records using the following command in Linux:
id_list=`cat id_list.txt`
sqlplus -s user/password@server <<END
select company_name, company_status, count(*)
from printing_request
where request_id in ${id_list}
group by company_name, company_status;
END
To explain: the printing_request table is where many companies send requests to print their documents. When a request is complete, its status is changed from I to C.
I expect the output to be something like this:
company_name | company_status | count
AA | I | 1000
AA | C | 1234
BB | I | 2334
...
But I get an error: Input is too long (> 2499 characters)
The only workaround I can think of is to split the id list into many smaller lists and loop over them, running the select once per chunk. But when I think of how many runs that would take, and how I would then have to do the grouping and counting manually afterwards, I give up on this solution.
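Just to illustrate what I mean, the chunked approach would look roughly like this (the chunk size and file names are only placeholders, and it assumes the ids are first written one per line):

split -l 200 ids_one_per_line.txt chunk_
for f in chunk_*; do
  id_list="($(sed "s/.*/'&'/" "$f" | paste -sd, -))"
  sqlplus -s user/password@server <<END
select company_name, company_status, count(*)
from printing_request
where request_id in ${id_list}
group by company_name, company_status;
END
done

Each run only prints partial counts for its own chunk, so I would still have to add up the totals per company and status by hand afterwards.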
Upvotes: 0
Views: 1103
Reputation: 2138
1) Create a global temporary table with an ID column.
2) Generate 5000 INSERT statements into this table and run them from SQL*Plus.
3) Modify your query to join with this table (rough sketch below).
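Roughly like this (the table name, column type, and the way the inserts are generated are only examples, not tested against your schema):

-- 1) a global temporary table to hold the ids
create global temporary table tmp_request_ids (
  request_id varchar2(20)
) on commit preserve rows;

-- 2) generate the inserts from the file in the shell
--    (assuming one id per line in ids_one_per_line.txt):
--      sed "s/.*/insert into tmp_request_ids values ('&');/" ids_one_per_line.txt > load_ids.sql
--    then run load_ids.sql and the query below in the same SQL*Plus session,
--    because rows in a temporary table only live for that session.

-- 3) join with the temporary table instead of the huge IN list
select p.company_name, p.company_status, count(*)
from printing_request p
join tmp_request_ids t on t.request_id = p.request_id
group by p.company_name, p.company_status;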
Upvotes: 0
Reputation: 5243
If you run into "Input list too long", then the IDs were certainly not entered manually. They must have been deduced from something else and possibly this "something else" is already in your database. Try to rephrase your select taking this into account.
If that is not the case and the IDs magically arrive from elsewhere, then you'll have to put them into a table of their own. If the file is formatted consistently, you can use an external table, as Justin Cave pointed out.
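A rough sketch of the external table route (the directory path, object names, and column type are my own assumptions, and it expects one id per line in the file rather than the quoted list shown in the question):

create directory id_dir as '/path/to/the/folder';  -- needs the CREATE ANY DIRECTORY privilege

create table ext_request_ids (
  request_id varchar2(20)
)
organization external (
  type oracle_loader
  default directory id_dir
  access parameters (
    records delimited by newline
    fields terminated by ','
  )
  location ('id_list.txt')
);

select p.company_name, p.company_status, count(*)
from printing_request p
join ext_request_ids t on t.request_id = p.request_id
group by p.company_name, p.company_status;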
How do these IDs get into the file in the first place? Whoever puts them there, couldn't they put them into the DB right away? That would solve all your problems.
Upvotes: 1