Reputation: 4984
I prepared a fiddle:
CREATE TABLE t_process
(process_number int, process_status varchar2(12));

INSERT ALL
  INTO t_process (process_number, process_status) VALUES (1, 'PROCESSING')
  INTO t_process (process_number, process_status) VALUES (2, 'PROCESSING')
  INTO t_process (process_number, process_status) VALUES (3, 'TO_BE_KILLED')
  INTO t_process (process_number, process_status) VALUES (4, 'PROCESSING')
  INTO t_process (process_number, process_status) VALUES (5, 'PROCESSING')
  INTO t_process (process_number, process_status) VALUES (6, 'TO_BE_KILLED')
  INTO t_process (process_number, process_status) VALUES (7, 'TO_BE_KILLED')
  INTO t_process (process_number, process_status) VALUES (8, 'WAITING')
  INTO t_process (process_number, process_status) VALUES (9, 'KILLED')
SELECT * FROM dual;
This is my processing procedure:
CREATE OR REPLACE PROCEDURE MY_PROCEDURE(IN_ID IN NUMBER) IS
BEGIN
  UPDATE T_PROCESS SET process_status = 'KILLING' WHERE process_number = IN_ID;
  COMMIT;
  -- DO SOME STUFF
  UPDATE T_PROCESS SET process_status = 'KILLED' WHERE process_number = IN_ID;
  COMMIT;
END MY_PROCEDURE;
Now I want to use DBMS_PARALLEL_EXECUTE to run this custom SQL:
DECLARE
  id1 number := :id1;
  id2 number := :id2;
BEGIN
  MY_PROCEDURE(id1);
END;
So, my question is: can I use DBMS_PARALLEL_EXECUTE to execute the above statement? I have only found UPDATE examples. Maybe CREATE_CHUNKS_BY_SQL to select only the TO_BE_KILLED rows and then RUN_TASK with the above statement?
Upvotes: 6
Views: 5940
Reputation: 1163
Yes, you can do that from 11g onwards. I am surprised it was not suggested here.
You can execute a procedure inside run_task, for example: begin MY_PROCEDURE( :start_id, :end_id ); end;
You may need to modify the procedure to accept the two bind parameters :start_id and :end_id; one possible sketch follows.
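A minimal sketch of such a modification, assuming the ROWID-based chunking used in the sample below (so :start_id/:end_id are ROWIDs) and assuming each chunk should only touch TO_BE_KILLED rows:

-- Sketch only: start_id/end_id are ROWIDs because the chunks are created by ROWID;
-- the status filters are an assumption about the intended logic.
CREATE OR REPLACE PROCEDURE MY_PROCEDURE(start_id IN ROWID, end_id IN ROWID) IS
BEGIN
  UPDATE t_process
     SET process_status = 'KILLING'
   WHERE ROWID BETWEEN start_id AND end_id
     AND process_status = 'TO_BE_KILLED';
  COMMIT;
  -- DO SOME STUFF
  UPDATE t_process
     SET process_status = 'KILLED'
   WHERE ROWID BETWEEN start_id AND end_id
     AND process_status = 'KILLING';
  COMMIT;
END MY_PROCEDURE;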
Here is the sample code (based on create_chunks_by_rowid).
DECLARE
  l_task     VARCHAR2(30) := 'parallel_processing';
  l_sql_stmt VARCHAR2(32767);
  l_try      NUMBER;
  l_status   NUMBER;
BEGIN
  DBMS_PARALLEL_EXECUTE.create_task(task_name => l_task);

  DBMS_PARALLEL_EXECUTE.create_chunks_by_rowid(task_name   => l_task,
                                               table_owner => 'SCHEMANAME',
                                               table_name  => 'T_PROCESS',
                                               by_row      => TRUE,
                                               chunk_size  => 10000);

  l_sql_stmt := 'begin MY_PROCEDURE( :start_id, :end_id ); end;';

  DBMS_PARALLEL_EXECUTE.run_task(task_name      => l_task,
                                 sql_stmt       => l_sql_stmt,
                                 language_flag  => DBMS_SQL.NATIVE,
                                 parallel_level => 10);

  -- If there is an error, resume the task at most 2 times.
  l_try    := 0;
  l_status := DBMS_PARALLEL_EXECUTE.task_status(l_task);
  WHILE (l_try < 2 AND l_status != DBMS_PARALLEL_EXECUTE.FINISHED)
  LOOP
    l_try    := l_try + 1;
    DBMS_PARALLEL_EXECUTE.resume_task(l_task);
    l_status := DBMS_PARALLEL_EXECUTE.task_status(l_task);
  END LOOP;

  DBMS_PARALLEL_EXECUTE.drop_task(l_task);
END;
/
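If a chunk fails, the error is recorded per chunk and can be inspected before resuming the task. A quick way to look at it (not part of the sample above, but using the standard user_parallel_execute_chunks view and the task name from the sample):

SELECT chunk_id, status, start_rowid, end_rowid, error_code, error_message
  FROM user_parallel_execute_chunks
 WHERE task_name = 'parallel_processing'
 ORDER BY chunk_id;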
You can also create chunks by the following means, depending on your situation:
CREATE_CHUNKS_BY_NUMBER_COL
-- if you want to chunk by the process_number column
CREATE_CHUNKS_BY_SQL
-- if you expect your own query to produce a much smaller set of chunks to process. Beware that with this approach each chunk processes just one row (start_id and end_id will be the same for each chunk); a sketch of this variant follows.
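For the question's original idea (chunk only the TO_BE_KILLED rows), a minimal sketch of the BY_SQL variant could look like the following. The task name and the chunk query are illustrative; because the chunk query returns each process_number as both start_id and end_id, every chunk covers exactly one row. Note that in this variant the binds are numbers, so MY_PROCEDURE would take two NUMBER parameters rather than ROWIDs.

DECLARE
  l_task VARCHAR2(30) := 'kill_task';  -- illustrative task name
BEGIN
  DBMS_PARALLEL_EXECUTE.create_task(task_name => l_task);

  -- The chunk SQL must return start_id/end_id pairs; here each pair is one process_number.
  DBMS_PARALLEL_EXECUTE.create_chunks_by_sql(
    task_name => l_task,
    sql_stmt  => 'SELECT process_number AS start_id, process_number AS end_id
                    FROM t_process
                   WHERE process_status = ''TO_BE_KILLED''',
    by_rowid  => FALSE);

  -- The PL/SQL block must reference both binds, even if the procedure only needs one of them.
  DBMS_PARALLEL_EXECUTE.run_task(
    task_name      => l_task,
    sql_stmt       => 'begin MY_PROCEDURE( :start_id, :end_id ); end;',
    language_flag  => DBMS_SQL.NATIVE,
    parallel_level => 4);

  DBMS_PARALLEL_EXECUTE.drop_task(l_task);
END;
/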
Upvotes: 5
Reputation: 4262
No (or at least not without really dirty hacks). If you want to execute PL/SQL in parallel, these are the options I have learned:
When you are a programmer, there is no standard in place yet, and there is no need for heavier parallel processing, you could go for dbms_job (a sketch follows). When volume increases or there are more functional requirements, go for a better solution.
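A minimal sketch of the dbms_job route with the original one-parameter MY_PROCEDURE; the loop over TO_BE_KILLED rows is illustrative and not part of this answer:

DECLARE
  l_job BINARY_INTEGER;
BEGIN
  -- Submit one background job per process that should be killed.
  FOR r IN (SELECT process_number
              FROM t_process
             WHERE process_status = 'TO_BE_KILLED')
  LOOP
    DBMS_JOB.submit(job  => l_job,
                    what => 'BEGIN MY_PROCEDURE(' || r.process_number || '); END;');
  END LOOP;
  COMMIT;  -- the submitted jobs only start running after the session commits
END;
/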
Upvotes: 2