Reputation: 41
Currently I have a "streamgenerator" on Linux which outputs data to stdout/a named pipe, etc. I found websocketd online, which is close to what I need, but the problem is that it spawns a new process for every client that connects.
What I am looking for is to stream the same data (a single process producing data to stdout) to multiple clients.
$ websocketd --port=8080 ./streamgenerator
will create a new streamgenerator process for every new connection to the websocket. What I want is one instance of streamgenerator, with its output copied to all clients.
Is there some simple way to do this? The only thing that comes to mind right now is writing a C program that reads STDIN into a buffer of size X and keeps, for each client, a pointer into that buffer (marking how far that client has read). A client that connects will start getting just the new data, and if a client is too slow and its read pointer falls out of the buffer, its connection will simply be dropped, since it can't keep up.
My question is: is there any way to do this without developing such a tool? At first I thought of piping to a named pipe and making websocketd read from it, but of course that will not work, because the first websocket client would read the data and consume it...
Upvotes: 0
Views: 704
Reputation: 19221
According to the websocketd source code, it seems to use a process-based server (a new process per connection)... I'm not a Go programmer, so I'm not sure I'm reading it right, but the README seems to indicate the same concept.
Hence...
problem is it spawns new process for every client that connects
This can't be avoided. The design means that new connections are inherently forked, along with their STDIN and STDOUT. Data streamed to the original STDIN isn't accessible from within the new connection...
... so using a single streamgenerator with websocketd isn't an option.
Only thing that comes into my mind right now is writing C program...
I think this might be the only way to avoid the multi-process design.
You don't have to use C for this. You could probably use Ruby or node.js just as easily (they will probably be easier to author, while you'll pay for convenience with a performance hit).
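For reference, the buffer-with-read-pointers design from the question might look roughly like this in Ruby (an illustrative sketch only: the buffer size is arbitrary, and accepting connections is left out):

# A rough sketch of the idea: keep the last BUFFER_SIZE bytes of the stream
# together with an absolute write offset, and track, per client, an absolute
# read offset. A client whose read offset falls behind the window is dropped.
BUFFER_SIZE = 1 << 20     # keep the last 1 MiB of history (arbitrary)

buffer  = String.new      # the most recent BUFFER_SIZE bytes of the stream
start   = 0               # absolute stream offset of buffer's first byte
clients = {}              # socket => absolute read offset

loop do
  buffer << STDIN.readpartial(4096)   # new data from streamgenerator
  if buffer.bytesize > BUFFER_SIZE    # discard the oldest bytes
    drop = buffer.bytesize - BUFFER_SIZE
    start += drop
    buffer = buffer.byteslice(drop, BUFFER_SIZE)
  end
  dead = []
  clients.each do |sock, read_at|
    if read_at < start
      dead << sock                    # too slow: fell out of the window
    else
      sock.write(buffer.byteslice(read_at - start, buffer.bytesize))
      clients[sock] = start + buffer.bytesize
    end
  end
  dead.each { |sock| sock.close; clients.delete(sock) }
end

Note that a real version needs non-blocking writes; with blocking writes, one slow client stalls the loop and its read pointer never gets the chance to fall behind.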
My question is is there any way without developing this tool?
I don't think so.
However, facil.io could make it fairly simple to author a C websocket tool that broadcasts data using its native Pub/Sub API (allowing you to easily scale in the future using Redis)... but being the author, I'm biased.
Here's a short Ruby script that will broadcast data from the pipe to every connected websocket client (data separated by lines).
File script.rb:
#!/usr/bin/env ruby
require 'iodine'

# Rack app that upgrades incoming requests to websocket connections.
class Example
  def self.call(env)
    if env['upgrade.websocket?'.freeze] && env["HTTP_UPGRADE".freeze] =~ /websocket/i
      env['upgrade.websocket'.freeze] = Example.new
      return [0, {}, []] # It's possible to set cookies for the response.
    end
    [404, { "Content-Length" => "12" }, ["Bad Request."]]
  end

  def on_open
    # every connection joins the same channel, so publishing once
    # reaches all connected clients.
    subscribe channel: :stream, force: :text
  end

  def on_message data
    close # are we expecting any messages?
  end

  def on_close
    # nothing to do
  end

  def on_shutdown
    # server is going away, notify client.
  end
end

Iodine::Rack.app = Example

# remove these two lines for automatic, core-related detection
Iodine.processes = 1
Iodine.threads = 1

# initialize the Redis engine for each iodine process.
require 'uri'
if ENV["REDIS_URL"]
  uri = URI(ENV["REDIS_URL"])
  Iodine.default_pubsub = Iodine::PubSub::RedisEngine.new(uri.host, uri.port, 0, uri.password)
else
  puts "* No Redis, it's okay, pub/sub will still run on the whole process cluster."
end

# Create the loop that reads from ARGF (the pipe).
# Defer threading because we might fork the main server;
# only the root process reads the pipe and publishes.
root_pid = Process.pid
Iodine.run do
  puts "Starting to listen to pipe"
  if root_pid == Process.pid
    Thread.new do
      ARGF.each_line do |s|
        Iodine.publish channel: :stream, message: s
        puts "read:", s
      end
    end
  end
end

# start iodine
Iodine.start
You can use it from the terminal using:
$ streamgenerator | ruby script.rb
This is just a quick-and-dirty example, but it shows how easy this could be.
Oh, it requires the iodine gem, which is facil.io's port to Ruby (also mine).
EDIT 2
I added a bit of code that allows you to use Redis with the example code I provided and limits the publishing to a single process.
The Redis engine is native to facil.io (it's in C, with a Ruby bridge for iodine) and you can use it to send commands as well as Pub/Sub.
If you're using Redis to scale across a number of machines, I would consider splitting the script into a publisher script and a server application.
Also, you only need Redis if you're using more than one machine. If you run iodine with Iodine.processes = 8, the pub/sub engine will still work across the whole process cluster.
Iodine has a bunch of other features, if you need them, such as static file service, etc.
You can also package the whole thing as a middleware and make it part of an existing Rails/Sinatra/Rack project using iodine as the server (for websocket support).
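As a sketch of that last point (assuming the Example handler class from script.rb above; the inner app here is a stand-in for your real Rails/Sinatra/Rack application):

# config.ru -- run with: iodine -p 3000
# Packages the websocket upgrade as a Rack middleware; anything that
# isn't a websocket upgrade request falls through to the normal app.
require 'iodine'

class StreamUpgrade
  def initialize(app)
    @app = app
  end

  def call(env)
    if env['upgrade.websocket?'] && env['HTTP_UPGRADE'] =~ /websocket/i
      env['upgrade.websocket'] = Example.new  # handler class from script.rb
      return [0, {}, []]
    end
    @app.call(env)  # not a websocket request; pass it along
  end
end

use StreamUpgrade
run proc { |env| [200, { 'Content-Type' => 'text/plain' }, ['Hello from the app']] }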
...
As for:
$ ./streamgenerator | pub --channel "redisstream"
$ websocketd --port=8080 sub --channel "redisstream"
It sounds like this will mitigate the issue, although I think websocketd will still open a new process per connection, which uses more resources than an evented "reactor pattern" design such as the one used by servers like nginx (and iodine, passenger, puma, and a bunch of others).
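For what it's worth, pub and sub aren't standard tools; if you were to write them as small Ruby scripts on top of the redis gem, they might look roughly like this (a hypothetical sketch, with the script's basename deciding which role it plays):

#!/usr/bin/env ruby
# Hypothetical sketch of the `pub` / `sub` commands from the example above,
# built on the `redis` gem. `pub --channel NAME` pipes STDIN into a Redis
# channel; `sub --channel NAME` prints everything published to it.
require 'redis'

channel = ARGV[ARGV.index('--channel') + 1]

case File.basename($PROGRAM_NAME)
when 'pub'
  redis = Redis.new
  STDIN.each_line { |line| redis.publish(channel, line) }
else # 'sub'
  Redis.new.subscribe(channel) do |on|
    on.message do |_channel, message|
      STDOUT.write(message)
      STDOUT.flush
    end
  end
end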
Upvotes: 1
Reputation: 2191
I don't understand the problem. Shouldn't this do the trick:
streamgenerator | tee fifo1 | tee fifo2 | tee fifo3
Upvotes: 0