I have a Django app deployed to Heroku, with a worker process running celery (+ celerycam for monitoring). I am using RedisToGo's Redis database as a broker. I noticed that Redis keeps running out of memory.
This is what my Procfile looks like:
web: python app/manage.py run_gunicorn -b "0.0.0.0:$PORT" -w 3
worker: python lipo/manage.py celerycam & python app/manage.py celeryd -E -B --loglevel=INFO
Here's the output of KEYS '*':
- "_kombu.binding.celeryd.pidbox"
- "celeryev.643a99be-74e8-44e1-8c67-fdd9891a5326"
- "celeryev.f7a1d511-448b-42ad-9e51-52baee60e977"
- "_kombu.binding.celeryev"
- "celeryev.d4bd2c8d-57ea-4058-8597-e48f874698ca"
- "_kombu.binding.celery"
The key celeryev.643a99be-74e8-44e1-8c67-fdd9891a5326 is getting filled up with messages like this:
{"sw_sys": "Linux", "clock": 1, "timestamp": 1325914922.206671, "hostname": "064d9ffe-94a3-4a4e-b0c2-be9a85880c74", "type": "worker-online", "sw_ident": "celeryd", "sw_ver": "2.4.5"}
Any idea what I can do to purge these messages periodically?
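Something like this is what I have in mind (a rough sketch, assuming the redis-py client and the REDISTOGO_URL config var that the Redis To Go add-on sets on Heroku):

    import os

    import redis

    # Connect to the Redis To Go instance (assumes the standard
    # REDISTOGO_URL config var provided by the Heroku add-on).
    r = redis.from_url(os.environ['REDISTOGO_URL'])

    # Drop every key written by celery's event system. Note that KEYS
    # walks the entire keyspace, so it blocks the server while it runs.
    for key in r.keys('celeryev.*'):
        r.delete(key)

Run from a scheduled job (e.g. the Heroku Scheduler add-on), that would keep the event keys from piling up.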
Is that a solution?
We can't just use the KEYS command, because it is O(N) where N is the total number of keys in the DB; an incremental scan (the SCAN command, Redis 2.8+) avoids blocking the server. TTLs can be tricky on Redis < 2.1, though.
EXPIRE celeryev.* instead of DEL celeryev.* can be used in order to allow a temporarily offline celeryev consumer to revive, but I don't know if it's worth it.
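Putting the two comments above together, a sketch of the non-blocking variant, again assuming redis-py, the REDISTOGO_URL config var, and a Redis server new enough to support SCAN (2.8+):

    import os

    import redis

    r = redis.from_url(os.environ['REDISTOGO_URL'])

    # scan_iter uses SCAN under the hood (Redis 2.8+), walking the keyspace
    # incrementally instead of blocking the server the way KEYS does.
    for key in r.scan_iter(match='celeryev.*'):
        # EXPIRE instead of DEL: a celeryev consumer that comes back online
        # within the window can still claim its key before it disappears.
        r.expire(key, 60)

The 60-second TTL is arbitrary; it just needs to cover however long a celerycam restart is expected to take.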