I'm experiencing an issue with Celery where the worker that starts automatically at system boot does not process tasks. Here's a breakdown of my setup and the problem:
Setup:
- Django project using Celery for asynchronous tasks.
- Tasks defined with the `@shared_task` decorator (a minimal sketch of the wiring follows this list).
- Using Redis as the broker with the CELERY_BROKER_URL set to 'redis://localhost'.
- Celery workers configured to run as a systemd service on Ubuntu.
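For context, here is roughly how the project is wired together. Only the `febiox` module path comes from my actual project; the app name and the example task are placeholders:

```python
# febiox/celery.py -- standard Celery-in-Django bootstrap, matching the
# "-A febiox.celery" target used in the commands below.
import os
from celery import Celery

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "febiox.settings")

app = Celery("febiox")
app.config_from_object("django.conf:settings", namespace="CELERY")  # reads CELERY_* settings
app.autodiscover_tasks()  # picks up tasks.py modules from installed apps

# febiox/__init__.py also does the usual:
#   from .celery import app as celery_app
#   __all__ = ("celery_app",)


# myapp/tasks.py -- example task; "myapp" and "example_task" are placeholders.
from celery import shared_task

@shared_task
def example_task(x, y):
    return x + y
```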
Problem:
- When the system boots up, Celery starts automatically, but asynchronous tasks are not executed (a snippet showing how I send a test task follows this list).
- Manually starting Celery with the command `/home/ubuntu/.venv/bin/celery -A febiox.celery worker -l info -E` works fine, and tasks are executed as expected.
- Checking the status of the Celery service with `sudo systemctl status celery.service` shows that the service is active and running.
- However, when attempting to inspect active Celery nodes with `celery -A febiox.celery inspect active`, I get the error message `Error: No nodes replied within time constraint`.
- Interestingly, running the same command after manually starting Celery returns the expected result showing the correct node.
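For reference, this is roughly how I send a task while testing; the import path refers to the placeholder task from the sketch above:

```python
# Run inside `python manage.py shell` on the same machine.
# example_task is the placeholder task from the sketch above.
from myapp.tasks import example_task

result = example_task.delay(2, 3)
print(result.id)  # an id comes back and a message lands in Redis,
                  # but the boot-started worker never executes the task
```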
Systemd Service Configuration:
```
[Unit]
Description=Celery Task Worker
After=network.target

[Service]
Type=simple
User=ubuntu
WorkingDirectory=/home/ubuntu/febiox
ExecStart=/home/ubuntu/.venv/bin/python3 /home/ubuntu/.venv/bin/celery -A febiox.celery worker -l info -E
Restart=always
StandardOutput=file:/var/log/celery/celery.log
StandardError=file:/var/log/celery/celery_error.log

[Install]
WantedBy=multi-user.target
```
Additional Information:
- Redis receives data when I attempt to execute tasks (a quick check is sketched after this list).
- `sudo systemctl reload celery.service` doesn't produce any errors, but it also doesn't resolve the issue.
- `sudo systemctl start celery.service` starts the service without errors.
- `sudo systemctl status celery.service` shows two instances of the Celery worker process, which seems unusual.
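For what it's worth, this is roughly how I checked that task messages actually reach Redis; it assumes the default `celery` queue name and Redis on localhost:6379, db 0:

```python
# Quick sanity check: pending task messages sit in a Redis list named after the queue.
# Assumes the default "celery" queue and redis://localhost:6379/0.
import redis

r = redis.Redis(host="localhost", port=6379, db=0)
print(r.llen("celery"))  # grows each time a task is sent while no worker is consuming
```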