Reputation: 7099
In my application, I'm trying to create a handler that streams large files out to the client. These files are created by another module (tarfile, to be exact).
What I want is a file-like object that, instead of writing to a socket or an actual file on disk, proxies to the RequestHandler.write method.
Here's what my current naive implementation looks like:
import tornado.gen
import tornado.ioloop
import tornado.web


class HandlerFileObject(object):
    def __init__(self, handler):
        self.handler = handler

    @tornado.gen.coroutine
    def write(self, data):
        self.handler.write(data)
        yield self.handler.flush()

    def close(self):
        self.handler.finish()


class DownloadHandler(tornado.web.RequestHandler):
    def get(self):
        self.set_status(200)
        self.set_header("Content-Type", "application/octet-stream")
        fp = HandlerFileObject(self)
        with open('/dev/zero', 'rb') as devzero:
            for _ in range(100 * 1024):
                fp.write(devzero.read(1024))
        fp.close()


if __name__ == '__main__':
    app = tornado.web.Application([
        (r"/", DownloadHandler)
    ])
    app.listen(8888)
    tornado.ioloop.IOLoop.instance().start()
It works, but the problem is that all of the data is loaded into RAM and is not released until I stop the application. What would be a better, more idiomatic, and more resource-friendly way of going about this?
Upvotes: 1
Views: 770
Reputation: 22134
get() also needs to be a coroutine and must yield when calling fp.write(). By making write a coroutine you've made your object less file-like: most callers will simply ignore its return value, masking exceptions and interfering with flow control. The file-like interface is synchronous, so you'll probably need to do these operations in other threads so you can block them as needed.
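For reference, here is a minimal sketch of that first fix, assuming the same generator-based tornado.gen coroutine API used in the question; only get() and the write() call sites change, the rest is carried over from the original code:

    import tornado.gen
    import tornado.ioloop
    import tornado.web


    class HandlerFileObject(object):
        def __init__(self, handler):
            self.handler = handler

        @tornado.gen.coroutine
        def write(self, data):
            # Buffer the chunk, then wait until flush() has drained it
            # to the socket before letting the caller produce more data.
            self.handler.write(data)
            yield self.handler.flush()

        def close(self):
            self.handler.finish()


    class DownloadHandler(tornado.web.RequestHandler):
        @tornado.gen.coroutine  # get() is now a coroutine as well
        def get(self):
            self.set_status(200)
            self.set_header("Content-Type", "application/octet-stream")
            fp = HandlerFileObject(self)
            with open('/dev/zero', 'rb') as devzero:
                for _ in range(100 * 1024):
                    # Yielding suspends get() until the chunk has been
                    # flushed, which is what keeps memory use bounded.
                    yield fp.write(devzero.read(1024))
            fp.close()


    if __name__ == '__main__':
        app = tornado.web.Application([
            (r"/", DownloadHandler)
        ])
        app.listen(8888)
        tornado.ioloop.IOLoop.instance().start()

Note that this only helps callers that know to yield the result of write(). Handing this object to tarfile, which calls write() synchronously, would silently discard those futures, which is exactly the less-file-like behavior described above and why the threaded approach may be necessary.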
Upvotes: 3