I've installed and configured Django-Q 1.3.5 (on Django 3.2 with Redis 3.5.3 and Python 3.8.5). This is my Cluster configuration:
# redis defaults
Q_CLUSTER = {
    'name': 'my_broker',
    'workers': 4,
    'recycle': 500,
    'timeout': 60,
    'retry': 65,
    'compress': True,
    'save_limit': 250,
    'queue_limit': 500,
    'cpu_affinity': 1,
    'redis': {
        'host': 'localhost',
        'port': 6379,
        'db': 0,
        'password': None,
        'socket_timeout': None,
        'charset': 'utf-8',
        'errors': 'strict',
        'unix_socket_path': None
    }
}
where I have deliberately chosen timeout: 60 and retry: 65 to illustrate my problem.
I created this simple function to call via Admin Scheduled Task:
import time

def test_timeout_task():
    time.sleep(61)
    return "Result of task"
And this is my "Scheduled Task page" (localhost:8000/admin/django_q/schedule/)
ID | Name         | Func                            | Success
---|--------------|---------------------------------|--------
1  | test timeout | mymodel.tasks.test_timeout_task | ?
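For reference, the same schedule can also be created programmatically instead of through the admin (a minimal sketch, assuming the function lives in mymodel/tasks.py):

from django_q.models import Schedule
from django_q.tasks import schedule

# creates the same 'test timeout' entry shown in the admin table above
schedule(
    'mymodel.tasks.test_timeout_task',
    name='test timeout',
    schedule_type=Schedule.ONCE,
)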
When I run this task, I get the following warning:
10:18:21 [Q] INFO Process-1 created a task from schedule [test timeout]
10:19:22 [Q] WARNING reincarnated worker Process-1:1 after timeout
10:19:22 [Q] INFO Process-1:7 ready for work at 68301
and the task never completes.
So, my question is: is there a way to correctly handle a task whose duration cannot be predicted and may exceed the timeout?
You can set your timeout to:
'timeout': None
and the cluster should then handle your task without killing the worker.
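For example, a minimal sketch of the relevant settings (the rest of your configuration stays as above):

Q_CLUSTER = {
    'name': 'my_broker',
    'workers': 4,
    'timeout': None,  # workers are no longer reincarnated for running too long
    'retry': 65,
    # ... remaining settings unchanged
}

If you later switch back to a finite timeout, keep retry larger than timeout (as you already do with 65 > 60), otherwise the broker may redeliver tasks that are still running.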