I have a GPU-bound task that is triggered by messages in a RabbitMQ queue. The queue is sometimes empty and sometimes contains messages waiting to be processed.
The task runs as a Kubernetes Deployment. Because it is computationally expensive, I use KEDA for auto-scaling, so pods only start when there are messages in the queue. However, I've noticed that one pod stays up continuously. Inspecting the RabbitMQ management UI, I found that this is caused by 19 unacknowledged messages in the queue that are never consumed or removed.
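Besides the UI, I've been checking the queue state through the management HTTP API. A minimal sketch of that check (host, credentials, vhost, and queue name are placeholders for my setup):

```python
import requests

# Placeholders for my actual broker host, credentials, vhost ("/" -> %2F) and queue name.
RABBITMQ_API = "http://localhost:15672/api/queues/%2F/gpu-tasks"
AUTH = ("guest", "guest")

# The management API reports ready and unacknowledged counts separately,
# which is how I can see the 19 stuck messages.
resp = requests.get(RABBITMQ_API, auth=AUTH, timeout=10)
resp.raise_for_status()
queue = resp.json()

print("ready:", queue["messages_ready"])
print("unacked:", queue["messages_unacknowledged"])
print("consumers:", queue["consumers"])
```

This consistently shows `messages_ready` at 0 while `messages_unacknowledged` stays at 19.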
Even after I manually removed all the consumers, these unacknowledged messages persisted, which surprises me because unacknowledged messages are normally requeued (returned to the ready state) once their consumer's channel closes.
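For context, the consumer follows the standard manual-acknowledgement pattern with pika; this is a rough sketch rather than my exact code, and the queue name, connection settings, prefetch value, and task function are placeholders:

```python
import pika

def run_gpu_task(body):
    """Placeholder for the actual GPU-bound processing."""
    ...

# Placeholder connection settings and queue name.
connection = pika.BlockingConnection(pika.ConnectionParameters(host="rabbitmq"))
channel = connection.channel()
channel.queue_declare(queue="gpu-tasks", durable=True)

# Take one message at a time, since each job monopolises the GPU.
channel.basic_qos(prefetch_count=1)

def on_message(ch, method, properties, body):
    run_gpu_task(body)
    # The message stays "unacked" until this explicit ack; if the process dies
    # before acking, the broker should requeue it when the channel closes.
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="gpu-tasks", on_message_callback=on_message, auto_ack=False)
channel.start_consuming()
```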
What might be causing this behavior, and what steps can I take to prevent it?
RabbitMQ version: 3.11.16