Reputation: 48733
The product design uses separate tables with identical signatures to hold different kinds of data,
so the calculation can run on each table separately, in parallel, without locks.
I have optimized the calculation code so it runs 7 times faster, but I estimate the next 10% of performance gain would require 10 times more coding/testing effort.
The calculation must be performed on all tables, so it is logical to run the calculation in parallel (especially since the server has 32 cores). For this purpose I found the DBMS_SCHEDULER package useful:
declare
  job1 varchar2(30) := dbms_scheduler.generate_job_name('REPAYMENT_');
  job2 varchar2(30) := dbms_scheduler.generate_job_name('REPAYMENT_');
begin
  dbms_scheduler.create_job(job_name   => job1,
                            job_type   => 'PLSQL_BLOCK',
                            job_action => 'begin calc_rep(30, DATE ''2012-01-01'', DATE ''2012-12-31''); end;',
                            enabled    => true,
                            auto_drop  => true);
  dbms_scheduler.create_job(job_name   => job2,
                            job_type   => 'PLSQL_BLOCK',
                            job_action => 'begin calc_rep(31, DATE ''2012-01-01'', DATE ''2012-12-31''); end;',
                            enabled    => true,
                            auto_drop  => true);
  dbms_output.put_line('jobs was finished');
end;
/
But I don't know how to wait for the jobs to finish before printing the jobs was finished message...
Constantly polling scheduler_jobs in a loop is an inefficient solution, but it is the only one that comes to mind. Neither searching nor the official docs turned up anything relevant.
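For reference, the polling workaround I mean would look roughly like this (a sketch; it assumes auto_drop => true so completed jobs disappear from user_scheduler_jobs, and that no other jobs share the REPAYMENT_ prefix):

declare
  cnt pls_integer;
begin
  loop
    -- count jobs that still exist, i.e. have not yet finished and been auto-dropped
    select count(*)
      into cnt
      from user_scheduler_jobs
     where job_name like 'REPAYMENT\_%' escape '\';
    exit when cnt = 0;
    dbms_lock.sleep(1); -- wait one second between polls
  end loop;
  dbms_output.put_line('jobs was finished');
end;
/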
Upvotes: 2
Views: 1757
Reputation: 15094
Jobs are not used for parallelism (at least that's not what they're for). Jobs are used to invoke a procedure at a particular time and on a particular schedule. So you don't start a job and wait for it to finish.
Anyway, what you've done is create the jobs but never invoke them to start running. You'd have to call dbms_scheduler.run_job( job_name => job1 )
to do that. That call would return immediately, and your long-running job would continue to run in the background.
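A minimal sketch of that call; note that run_job executes in the caller's session (and therefore blocks) unless use_current_session is set to false:

begin
  -- run the previously created job in a background job slave;
  -- with the default use_current_session => true this call would
  -- instead wait until the job body completes
  dbms_scheduler.run_job(job_name => job1, use_current_session => false);
end;
/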
If I understand your question correctly, what you're trying to do is break up some complicated calculation and run it in parallel, but not necessarily run it repeatedly on a schedule. What you want is DBMS_PARALLEL_EXECUTE, where you break a large job up into chunks based on some criteria and run those individually.
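A sketch of that approach for the two tables from the question (the task name and chunk query are illustrative; the chunk query simply yields one chunk per table id). Conveniently, run_task blocks until every chunk has been processed, which also solves the waiting problem:

declare
  l_task varchar2(30) := 'CALC_REP_TASK';
begin
  dbms_parallel_execute.create_task(task_name => l_task);

  -- one chunk per table id: (30,30) and (31,31)
  dbms_parallel_execute.create_chunks_by_sql(
    task_name => l_task,
    sql_stmt  => 'select 30, 30 from dual union all select 31, 31 from dual',
    by_rowid  => false);

  -- the statement must contain :start_id and :end_id placeholders;
  -- run_task returns only after all chunks have finished
  dbms_parallel_execute.run_task(
    task_name      => l_task,
    sql_stmt       => 'begin
                         for i in :start_id .. :end_id loop
                           calc_rep(i, DATE ''2012-01-01'', DATE ''2012-12-31'');
                         end loop;
                       end;',
    language_flag  => dbms_sql.native,
    parallel_level => 2);

  dbms_parallel_execute.drop_task(task_name => l_task);
end;
/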
Upvotes: 1