Non-Blocking WebSocketHandler while receiving jobs from a queue


Setup:

  • Tornado HTTP/WebSocket server. The WebSocketHandler reacts to messages from the client (e.g. puts them into the job queue); see the sketch after this list.
  • A beanstalk job queue which distributes jobs to the different components.
  • Some other components that communicate over beanstalk, but those are unrelated to my problem.
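
For concreteness, a minimal sketch of what the handler-to-queue side could look like (the beanstalkc client, the JobSubmitHandler name, the 'jobs' tube and the localhost connection are assumptions for illustration, not part of the setup above):

import beanstalkc
from tornado.websocket import WebSocketHandler

class JobSubmitHandler(WebSocketHandler):
    def open(self):
        # one beanstalk connection per websocket keeps the sketch simple
        self.queue = beanstalkc.Connection(host='localhost', port=11300)
        self.queue.use('jobs')       # tube the worker components listen on

    def on_message(self, message):
        self.queue.put(message)      # forward the client's message as a job

    def on_close(self):
        self.queue.close()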

Problem:

  • The WebSocketHandler should react to jobs, but if it listens on beanstalk it blocks. A job could be e.g. 'send data xy to client xyz'.

How can this be solved nicely? My first approach was to run a job-queue listener in a separate thread that held a list of pickled WebSocketHandlers, all stored in a redis DB. Since WebSocketHandler instances can't be pickled (and this approach seems very ugly anyway), I'm searching for another solution.

Any ideas?

1 Answer

Answered by aychedee (accepted):

Instead of trying to pickle your WebSocketHandler instances, you could store them in a class-level (or simply global) dictionary.

from tornado.websocket import WebSocketHandler

class MyHandler(WebSocketHandler):
    connections = {}  # class-level registry of live handlers

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.key = str(id(self))          # unique key for this connection
        self.connections[self.key] = self

    def on_close(self):
        self.connections.pop(self.key, None)  # drop the entry on disconnect

Then you would pass self.key along with the job to beanstalk, and when a result comes back you look up which connection to send the output to by that key and write to it. Something like (pseudo-code):

def beanstalk_listener():
    while True:
        job = beanstalk.reserve()                  # blocks until a job arrives
        key, _, payload = job.body.partition(':')  # body assumed to be "key:payload"
        MyHandler.connections[key].write_message(payload)
        job.delete()
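
If beanstalk_listener runs in its own thread (as in the question's first approach), one caveat: Tornado handlers should only be touched from the IOLoop thread, and IOLoop.add_callback is the thread-safe way to hand the write over. A minimal sketch along those lines, assuming a beanstalkc connection, a 'results' tube (assumed name) and the "key:payload" body format used above:

import threading

import beanstalkc
from tornado.ioloop import IOLoop

def beanstalk_listener(io_loop):
    queue = beanstalkc.Connection(host='localhost', port=11300)
    queue.watch('results')                # tube carrying job results (assumed name)
    while True:
        job = queue.reserve()             # blocks this worker thread, not Tornado
        key, _, payload = job.body.partition(':')
        handler = MyHandler.connections.get(key)
        if handler is not None:
            # write_message must run on the IOLoop thread; add_callback is thread-safe
            io_loop.add_callback(handler.write_message, payload)
        job.delete()

# start once, before IOLoop.current().start()
threading.Thread(target=beanstalk_listener, args=(IOLoop.current(),), daemon=True).start()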

I don't think there is any value in trying to persist your WebSocketHandler connections in redis. They are by nature ephemeral: if your Tornado process restarts or dies, they are of no use. If what you are trying to do is keep a record of which user is waiting for the output of which job, then you'll need to track that separately.
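
A hedged sketch of that separate bookkeeping, assuming a local redis instance and a 'user_for_job' hash (both names are made up for illustration): record job_id -> user_id when the job is enqueued and look it up when the result comes back, so pending work survives a Tornado restart even though the sockets themselves do not.

import redis

r = redis.StrictRedis(host='localhost', port=6379)

def remember_job(job_id, user_id):
    # record which user is waiting for which job; survives a Tornado restart
    r.hset('user_for_job', job_id, user_id)

def user_for_job(job_id):
    value = r.hget('user_for_job', job_id)
    return value.decode() if value is not None else None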