Celery and Redis keep running out of memory


I have a Django app deployed to Heroku, with a worker process running celery (+ celerycam for monitoring). I am using RedisToGo's Redis database as a broker. I noticed that Redis keeps running out of memory.

This is what my procfile looks like:

web: python app/manage.py run_gunicorn -b "0.0.0.0:$PORT" -w 3
worker: python lipo/manage.py celerycam & python app/manage.py celeryd -E -B --loglevel=INFO

Here's the output of KEYS '*':

  1. "_kombu.binding.celeryd.pidbox"
  2. "celeryev.643a99be-74e8-44e1-8c67-fdd9891a5326"
  3. "celeryev.f7a1d511-448b-42ad-9e51-52baee60e977"
  4. "_kombu.binding.celeryev"
  5. "celeryev.d4bd2c8d-57ea-4058-8597-e48f874698ca"
  6. `_kombu.binding.celery"

celeryev.643a99be-74e8-44e1-8c67-fdd9891a5326 is getting filled up with these messages:

{"sw_sys": "Linux", "clock": 1, "timestamp": 1325914922.206671, "hostname": "064d9ffe-94a3-4a4e-b0c2-be9a85880c74", "type": "worker-online", "sw_ident": "celeryd", "sw_ver": "2.4.5"}

Any idea what I can do to purge these messages periodically?
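In the meantime, I could presumably trim the event lists myself on a schedule. A rough sketch, assuming redis-py and the REDISTOGO_URL config var that Heroku sets for RedisToGo (the 100-entry cutoff is an arbitrary choice):

    # purge_celeryev.py -- rough stopgap sketch, run on a schedule
    # (e.g. via Heroku Scheduler); trims each celeryev event list so
    # the monitor queues can't grow without bound.
    import os
    import redis

    r = redis.from_url(os.environ["REDISTOGO_URL"])

    for key in r.keys("celeryev.*"):
        # keep only the most recent 100 events in each list
        r.ltrim(key, -100, -1)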

1 Answer

Boris S

Would this be a solution?

  1. In addition to the _kombu.binding.celeryev set, keep per-consumer keys such as celeryev.i-am-alive.<id>, each with a TTL set (e.g. 30 seconds).
  2. The celeryev process adds itself to the binding set and periodically (e.g. every 5 seconds) touches its celeryev.i-am-alive.<id> key to reset the TTL.
  3. Before sending an event, the worker process checks not only SMEMBERS on _kombu.binding.celeryev but also the individual celeryev.i-am-alive.<id> keys; if a key is not found (i.e. it has expired), the consumer is removed from _kombu.binding.celeryev (and maybe DEL celeryev.<id> or EXPIRE celeryev.<id> is executed as well). A sketch of the whole scheme follows this list.
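Roughly, in redis-py terms (the key names, intervals, and the queue push are illustrative; this is not what kombu actually does internally):

    # Rough sketch of the heartbeat/TTL scheme above.
    import time
    import redis

    r = redis.from_url("redis://localhost:6379/0")  # assumed broker URL

    BINDING_SET = "_kombu.binding.celeryev"
    ALIVE_TTL = 30      # seconds of silence before a consumer counts as dead
    REFRESH_EVERY = 5   # heartbeat interval

    def consumer_heartbeat(consumer_id):
        """Steps 1 + 2: register in the binding set and keep an
        i-am-alive key refreshed so its TTL never expires while we run."""
        r.sadd(BINDING_SET, consumer_id)
        while True:
            r.set("celeryev.i-am-alive.%s" % consumer_id, "1", ex=ALIVE_TTL)
            time.sleep(REFRESH_EVERY)

    def prune_and_publish(event):
        """Step 3: before sending, drop consumers whose alive key has
        expired, then push the event only to the survivors' queues."""
        for member in r.smembers(BINDING_SET):
            cid = member.decode()
            if not r.exists("celeryev.i-am-alive.%s" % cid):
                r.srem(BINDING_SET, member)
                r.delete("celeryev.%s" % cid)   # or EXPIRE -- see below
            else:
                r.rpush("celeryev.%s" % cid, event)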

We can't just use the KEYS command, because it is O(N), where N is the total number of keys in the DB. TTLs can be tricky on Redis < 2.1, though.
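(On newer Redis -- 2.8 and later -- SCAN walks the keyspace incrementally instead of blocking the server the way KEYS does; redis-py exposes it as scan_iter:)

    # incremental, non-blocking alternative to KEYS (requires Redis >= 2.8)
    for key in r.scan_iter(match="celeryev.*"):
        print(key)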

EXPIRE celeryev.<id> instead of DEL celeryev.<id> can be used in order to allow a temporarily offline celeryev consumer to revive, but I don't know whether it's worth it.
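(The difference in redis-py terms, reusing the names from the sketch above, with an illustrative 60-second grace period:)

    r.delete("celeryev.%s" % cid)      # queue is gone immediately
    r.expire("celeryev.%s" % cid, 60)  # lingers briefly, so a revived
                                       # consumer can pick up where it left off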
