Reputation: 23347
I run rufus-scheduler in a rake task on a Heroku worker. I regularly get a SIGTERM exception because of the routine Heroku dyno restarts (see the Heroku dyno docs). I'd like to implement the graceful shutdown shown in the docs mentioned above and shut down rufus-scheduler during this process:
trap('TERM') do
  scheduler.shutdown(:kill)
  exit
end
However, when I send SIGTERM to the process running this task, I get an error:
can't be called from trap context
Is there any way to gracefully shut down rufus-scheduler on SIGTERM? I use Ruby 2.0, rake 10.0.4, rufus-scheduler 3.0.2.
P.S. No, I can't use the Heroku Scheduler, because I need to run this task every minute ;-).
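For context, a minimal sketch of the kind of Rakefile involved (the task name, interval, and job body are hypothetical):

require 'rufus-scheduler'

desc 'hypothetical worker task'
task :worker do
  scheduler = Rufus::Scheduler.new

  # This handler fires when Heroku restarts the dyno.
  trap('TERM') do
    scheduler.shutdown(:kill) # raises ThreadError on Ruby 2.0
    exit
  end

  scheduler.every '1m' do
    # the actual work goes here
  end

  scheduler.join
end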
EDIT (jmettraux)
test code: https://gist.github.com/jmettraux/a4c00374f58e9f7affa8
Ruby 2.0.0-p247, rufus-scheduler 3.0.5 on Debian GNU/Linux yields:
/home/jmettraux/w/rufus-scheduler/lib/rufus/scheduler/job_array.rb:74:in `synchronize': can't be called from trap context (ThreadError)
from /home/jmettraux/w/rufus-scheduler/lib/rufus/scheduler/job_array.rb:74:in `to_a'
from /home/jmettraux/w/rufus-scheduler/lib/rufus/scheduler.rb:276:in `jobs'
from /home/jmettraux/w/rufus-scheduler/lib/rufus/scheduler.rb:127:in `shutdown'
from t.rb:8:in `block in <main>'
from t.rb:18:in `call'
from t.rb:18:in `sleep'
from t.rb:18:in `<main>'
Same platform, but with Ruby 1.9.3-p392, it gracefully shuts down.
Upvotes: 1
Views: 970
Reputation: 14061
The issue you're having is due to thread synchronization in trap context: since Ruby 2.0, a Mutex can't be acquired inside a trap handler, and scheduler.shutdown needs to do exactly that. I would work around it this way:
p "#{RUBY_VERSION}-p#{RUBY_PATCHLEVEL}"
p $$
require 'rufus-scheduler'
s = Rufus::Scheduler.new
trap('TERM') do
$quit = true
end
s.every '10s' do
p :hello
end
s.every '1s' do
if $quit
p :bye
s.shutdown(:kill)
end
end
s.join
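Setting a global flag is lock-free and therefore legal in trap context; the actual shutdown(:kill) happens in an ordinary job thread a moment later. The 1s polling job bounds the shutdown latency to about a second, comfortably within the grace period Heroku gives a dyno between SIGTERM and SIGKILL.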
shell1:
$ ruby exit_scheduler.rb
"2.0.0-p0"
60580
shell2:
$ kill -s TERM 60580
shell1:
:bye
Upvotes: 1
Reputation: 3551
OK, it's Ruby 2.0-specific (https://www.ruby-forum.com/topic/4411227).
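A minimal repro of the underlying restriction, independent of rufus-scheduler (my own sketch, assuming Ruby 2.0):

m = Mutex.new

trap('TERM') do
  # On Ruby 2.0 this raises ThreadError: can't be called from trap context.
  # On 1.9.3 it simply takes the lock and prints.
  m.synchronize { p :locked }
end

Process.kill('TERM', $$) # signal ourselves
sleep 1                  # give the trap handler a chance to run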
The next version of rufus-scheduler will include a workaround.
https://github.com/jmettraux/rufus-scheduler/issues/98
Thanks for reporting the issue.
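Until the fixed version ships, one possible interim pattern (a sketch of a common trick, not necessarily the library's actual fix; scheduler is the variable from the question) is to hop out of the trap context via a short-lived thread, since thread bodies run as regular threads where locking is allowed:

trap('TERM') do
  # The block runs in a normal thread, not in the trap context,
  # so rufus-scheduler may take its mutexes there.
  Thread.new { scheduler.shutdown(:kill) }.join
  exit
end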
Upvotes: 1