How to configure and run a remote Celery worker correctly?


I'm new to Celery and may be doing something wrong, but I've already spent a lot of time trying to figure out how to configure it correctly.

So, in my environment I have two remote servers: the main one (it has a public IP address and hosts most of the stuff, like the database server, the RabbitMQ server, and the web server running my web application), and a second one used for specific tasks that I want to invoke asynchronously from the main server using Celery.

I was planning to use RabbitMQ as the broker and as the result backend. My Celery config is very basic:

CELERY_IMPORTS = ("main.tasks", )            # task modules to import at worker startup
BROKER_HOST = "Public IP of my main server"  # remote workers connect back to this host
BROKER_PORT = 5672
BROKER_USER = "guest"
BROKER_PASSWORD = "guest"
BROKER_VHOST = "/"
CELERY_RESULT_BACKEND = "amqp"               # store task results in RabbitMQ as well
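
For reference, a matching main/tasks.py in the Celery 2.x style might look like this (a minimal sketch; add is just the two-number test task mentioned below):

# main/tasks.py -- minimal sketch; module path matches CELERY_IMPORTS above
from celery.task import task  # Celery 2.x task decorator

@task
def add(x, y):
    """Trivial test task: add two numbers."""
    return x + y

From the main server it is then invoked with add.delay(2, 3).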

When I run a worker on the main server, tasks are executed just fine, but when I run it on the remote server, only a few tasks are executed and then the worker gets stuck, unable to execute any more tasks. When I restart the worker it executes a few more tasks and gets stuck again. There is nothing special inside the tasks; I even tried a test task that just adds two numbers. I tried running the worker in different ways (daemonized and not, with different concurrency settings, and via celeryd-multi), but nothing really helped.
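
For example, this is a quick way to observe the hang from the main server (a sketch using the add test task above; with the amqp result backend, get() raises TimeoutError once the worker stops consuming):

# sketch: fire the test task repeatedly; get() times out once the worker hangs
from celery.exceptions import TimeoutError
from main.tasks import add

for i in range(20):
    result = add.delay(i, i)
    try:
        print("task %d -> %s" % (i, result.get(timeout=10)))
    except TimeoutError:
        print("task %d never completed -- worker appears stuck" % i)
        break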

What could be the reason? Did I miss something? Do I have to run something on the main server other than the broker (RabbitMQ)? Or is it a bug in Celery (I tried a few versions: 2.2.4, 2.3.3, and dev, but none of them worked)?

Hm... I've just reproduced the same problem with a local worker, so I don't really know what's going on... Is it required to restart the Celery worker after every N tasks executed?

Any help will be very much appreciated :)


There are 2 answers

Trent Gm

I don't know if you ended up solving the problem, but I had similar symptoms. It turned out that (for whatever reason) print statements inside tasks were causing tasks not to complete (maybe some sort of deadlock?). Only some of the tasks had print statements, so as those tasks executed, the pool processes (the number set by the concurrency option) were gradually all exhausted, which caused task execution to stop entirely.
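
If print is indeed the culprit, routing output through the task logger instead avoids it. A minimal sketch in the Celery 2.x style (using the trivial add task from the question):

from celery.task import task

@task
def add(x, y):
    # Celery 2.x: the task can obtain a logger wired into the worker's log handlers
    logger = add.get_logger()
    logger.info("Adding %s + %s", x, y)
    return x + y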

Most Wanted

Try setting the following in your Celery config:

CELERYD_PREFETCH_MULTIPLIER = 1  # each worker process reserves only one message at a time
CELERYD_MAX_TASKS_PER_CHILD = 1  # replace the pool process after every task

The first stops the worker from prefetching a large batch of messages, and the second recycles each pool process after every task, which can work around per-process leaks or deadlocks (at some cost in throughput). See the Celery configuration docs for details.