Reputation: 101595
What is the idiomatic Ruby analog of a pattern that represents a potentially deferred asynchronous computation with the possibility to subscribe to its completion? i.e. something along the lines of .NET System.Threading.Task, or Python 3.x concurrent.futures.Future.
Note that this does not necessarily imply multithreading - the actual implementation of the "future" object might just as well use some other way of scheduling the work and obtaining the result, and that is out of the scope of this question. The question is strictly about the API presented to the user of the object.
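For illustration of the kind of interface meant, here is a minimal hand-rolled sketch (the class and method names SimpleFuture and on_complete are invented for this example, and the "scheduler" is just a plain Thread - it could equally be an event loop or a C callback):

require 'thread'

class SimpleFuture
  def initialize
    @mutex     = Mutex.new
    @cond      = ConditionVariable.new
    @done      = false
    @callbacks = []
  end

  # subscribe to completion; fires immediately if the value is already there
  def on_complete(&block)
    immediate = false
    @mutex.synchronize do
      if @done
        immediate = true
      else
        @callbacks << block
      end
    end
    block.call(@value) if immediate
    self
  end

  # called by whatever actually performs the work
  def resolve(value)
    to_run = @mutex.synchronize do
      @value = value
      @done  = true
      @cond.broadcast
      @callbacks.dup.tap { @callbacks.clear }
    end
    to_run.each { |cb| cb.call(value) }
    self
  end

  # block until the result is available
  def value
    @mutex.synchronize do
      @cond.wait(@mutex) until @done
      @value
    end
  end
end

f = SimpleFuture.new
f.on_complete { |v| puts "completed with #{v}" }
worker = Thread.new { sleep 1; f.resolve(42) }
puts f.value   # blocks for about a second, then prints 42
worker.join    # make sure the callback has run before the program exits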
Upvotes: 16
Views: 6840
Reputation: 12916
The thread gem might be of interest. You can make a thread pool that processes work in the background. The gem also supports a whole lot of other features like futures, delays, etc. Have a look at the GitHub repo.
It appears to work with a wide range of Ruby versions, not just 1.9+, which is why I use it.
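If memory serves, usage of the gem's future support looks roughly like this (treat the require path and the ~ operator as assumptions to be checked against the gem's README, since the API has changed between versions):

require 'thread/future'

future = Thread.future do
  sleep 2      # stand-in for a slow computation
  42
end

# ... do other work here ...

puts ~future   # blocks until the computation finishes, then prints 42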
Upvotes: 1
Reputation: 61
I found this to be extremely helpful:
https://github.com/wireframe/backgrounded
It is a gem that simply allows pushing methods onto a background task.
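From memory of the README (verify against the repo - this is an assumption, not the authoritative API), declaring a method as backgrounded generates a *_backgrounded variant that enqueues the call instead of running it inline:

class User
  backgrounded :do_stuff

  def do_stuff
    # long-running work goes here
  end
end

user = User.new
user.do_stuff_backgrounded   # runs do_stuff via the configured background handler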
Upvotes: 1
Reputation:
Maybe I'm missing something, but if the situation is as you describe in your response to deepak, then why not wrap the C API as a Ruby extension and provide a Ruby method that accepts a block corresponding to your needed callback? That would also be very idiomatic Ruby.
Here's a sample chapter dealing with extending Ruby with C from the "Pickaxe" Book updated for Ruby 1.9: http://media.pragprog.com/titles/ruby3/ext_ruby.pdf.
Update: Here are some links dealing with exceptions in Ruby and in its C interface.
Upvotes: 1
Reputation: 6675
lazy.rb provides "futures", but they don't seem to be exactly what you describe (or what I would expect):
Additionally, the library provides futures, where a computation is run immediately in a background thread.
So, you can't compute them later, or insert values into them (from the network perhaps) by other means.
Upvotes: 1
Reputation: 8100
You can use a job queue like Resque.
I have coded some quick examples in pure Ruby.
By forking a child process:
rd, wr = IO.pipe

p1 = fork do
  # the forked child process gets a copy of the open pipe handles,
  # so we close the unused end in both the parent and the child
  rd.close
  # sleep is for demonstration purposes only
  sleep 10
  wr.write "1"
  wr.close
end

wr.close
puts "Process detaching | #{Time.now}"
Process.detach(p1)
puts "Woot! did not block | #{Time.now}"

1.upto(10) do
  begin
    result = rd.read_nonblock(1)
  rescue EOFError
    break
  rescue IO::WaitReadable
    # no data available yet; keep polling
  end
  puts "result: #{result.inspect}"
  system("ps -ho pid,state -p #{p1}")
  sleep 2
end

rd.close
__END__
ruby 1.9.2p180 (2011-02-18 revision 30909) [x86_64-darwin10.6.0]
Process detaching | 2012-02-28 17:05:49 +0530
Woot! did not block | 2012-02-28 17:05:49 +0530
result: nil
PID STAT
5231 S+
result: nil
PID STAT
5231 S+
result: nil
PID STAT
5231 S+
result: nil
PID STAT
5231 S+
result: nil
PID STAT
5231 S+
result: "1"
PID STAT
By having a callback on a thread:
require 'thread'

Thread.abort_on_exception = true

module Deferrable
  def defer(&block)
    # returns a thread
    Thread.new do
      # sleep is for demonstration purposes only
      sleep 10
      val = block.call
      # this is one way to do it, but it pollutes the thread-local hash
      # and you will have to poll the thread-local value;
      # you can get this value by asking the thread instance
      Thread.current[:human_year] = val
      # notice that the block itself updates its state after completion
    end
  end
end

class Dog
  include Deferrable

  attr_accessor :age, :human_age
  attr_accessor :runner

  def initialize(age = nil)
    @age = age
  end

  def calculate_human_age_as_deferred!
    self.runner = defer do
      # can do stuff with the values here
      human_age = dog_age_to_human_age
      # and finally publish the final value
      after_defer { self.human_age = human_age }
      # return value of the block, used in setting the thread local
      human_age
    end
  end

  protected

  def dog_age_to_human_age
    (self.age / 7.0).round(2)
  end

  def after_defer(&block)
    block.call
  end
end

dog = Dog.new(8)
dog.calculate_human_age_as_deferred!
1.upto(10) do
  sleep 2
  puts "status: #{dog.runner.status} | human_age: #{dog.human_age.inspect}"
  break unless dog.runner.status
end

puts "== using thread local"
dog = Dog.new(8)
dog.calculate_human_age_as_deferred!
1.upto(10) do
  sleep 2
  puts "status: #{dog.runner.status} | human_age: #{dog.runner[:human_year].inspect}"
  break unless dog.runner.status
end
__END__
ruby 1.9.2p180 (2011-02-18 revision 30909) [x86_64-darwin10.6.0]
status: sleep | human_age: nil
status: sleep | human_age: nil
status: sleep | human_age: nil
status: sleep | human_age: nil
status: false | human_age: 1.14
== using thread local
status: sleep | human_age: nil
status: sleep | human_age: nil
status: sleep | human_age: nil
status: sleep | human_age: nil
status: false | human_age: 1.14
Threads consume less memory than forking a child process, but forking is more robust: an unhandled error in a thread can bring down the whole process, while an unhandled error in a child process will only bring down that child process.
Other people have pointed out that fibers and EventMachine (using EM::Deferrable and EM.defer) are other options (see the EM.defer sketch below).
Fibers and threads need careful coding; the code can go wrong in subtle ways.
Also, fibers use cooperative (not pre-emptive) multitasking, so the codebase has to be well behaved and yield control explicitly.
EventMachine is fast, but it is an exclusive world (like Twisted in Python). It has its own separate IO stack, so libraries have to be written to support EventMachine. Having said that, I do not think library support is a problem for EventMachine.
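A minimal sketch of the EM.defer variant mentioned above - the work proc runs on EventMachine's internal thread pool and the callback runs back on the reactor thread (the computation here is just a stand-in):

require 'eventmachine'

EM.run do
  work     = proc { (1..1_000_000).reduce(:+) }          # runs on EM's thread pool
  callback = proc { |sum| puts "sum: #{sum}"; EM.stop }  # runs on the reactor thread
  EM.defer(work, callback)
end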
Upvotes: 1
Reputation: 230386
I am not sure about vanilla Ruby, but EventMachine has deferrables.
Also, check out this article.
EM.run {
  detector = LanguageDetector.new("Sgwn i os yw google yn deall Cymraeg?")
  detector.callback { |lang| puts "The language was #{lang}" }
  detector.errback { |error| puts "Error: #{error}" }
}
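The LanguageDetector above comes from the linked article. As a rough, hedged sketch of how such a deferrable might be put together (the detection logic here is a made-up placeholder), a class includes EM::Deferrable and calls succeed or fail once it has a result:

require 'eventmachine'

class LanguageDetector
  include EM::Deferrable

  def initialize(text)
    # placeholder "detection", run off the reactor thread via EM.defer
    detect = proc { text =~ /\b(yw|yn|os)\b/ ? "Welsh" : nil }
    done   = proc { |lang| lang ? succeed(lang) : fail("could not detect language") }
    EM.defer(detect, done)
  end
end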
Upvotes: 9
Reputation: 21791
Fiber?
Fibers are primitives for implementing light weight cooperative concurrency in Ruby. Basically they are a means of creating code blocks that can be paused and resumed, much like threads. The main difference is that they are never preempted and that the scheduling must be done by the programmer and not the VM. link
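A tiny example of the pause/resume behaviour described above:

fiber = Fiber.new do
  puts "step 1"
  Fiber.yield          # pause here; control returns to the caller
  puts "step 2"
end

fiber.resume           # prints "step 1", then the fiber pauses at Fiber.yield
puts "caller does some other work"
fiber.resume           # prints "step 2" and the fiber finishes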
Upvotes: 1