I'm currently facing an issue with Celery 5.3.4 that asks me to install the backports module. The application runs in a container (python:3.10.13-bullseye) with Python 3.10.13 on a Debian 11 host. When I run celery -A app beat -l INFO, I encounter the error shown below.

Here are the details of my setup:

Host OS: Debian 11

  • Python version inside the container: 3.10.13
  • Celery version: 5.3.4
  • Docker: running on both Debian 11 and Windows 10 (Docker Desktop)

On the Debian 11 host

pip version:

root@5b45db7349aa:/app# pip3 --version
pip 23.3.1 from /usr/local/lib/python3.10/site-packages/pip (python 3.10)
root@5b45db7349aa:/app# python --version
Python 3.10.13

Error:

root@5b45db7349aa:/app# celery -A app beat -l info
celery beat v5.3.4 (emerald-rush) is starting.
__    -    ... __   -        _
LocalTime -> 2023-11-03 10:48:13
Configuration ->
    . broker -> redis://redis:6379/0
    . loader -> celery.loaders.app.AppLoader
    . scheduler -> celery.beat.PersistentScheduler
    . db -> celerybeat-schedule
    . logfile -> [stderr]@%DEBUG
    . maxinterval -> 5.00 minutes (300s)
[2023-11-03 10:48:13,939: DEBUG/MainProcess] Setting default socket timeout to 30
[2023-11-03 10:48:13,940: INFO/MainProcess] beat: Starting...
[2023-11-03 10:48:13,945: CRITICAL/MainProcess] beat raised exception <class 'ModuleNotFoundError'>: ModuleNotFoundError("No module named 'backports'")
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/shelve.py", line 111, in __getitem__
    value = self.cache[key]
KeyError: 'entries'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/celery/apps/beat.py", line 113, in start_scheduler
    service.start()
  File "/usr/local/lib/python3.10/site-packages/celery/beat.py", line 634, in start
    humanize_seconds(self.scheduler.max_interval))
  File "/usr/local/lib/python3.10/site-packages/kombu/utils/objects.py", line 31, in __get__
    return super().__get__(instance, owner)
  File "/usr/local/lib/python3.10/functools.py", line 981, in __get__
    val = self.func(instance)
  File "/usr/local/lib/python3.10/site-packages/celery/beat.py", line 677, in scheduler
    return self.get_scheduler()
  File "/usr/local/lib/python3.10/site-packages/celery/beat.py", line 668, in get_scheduler
    return symbol_by_name(self.scheduler_cls, aliases=aliases)(
  File "/usr/local/lib/python3.10/site-packages/celery/beat.py", line 513, in __init__
    super().__init__(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/celery/beat.py", line 264, in __init__
    self.setup_schedule()
  File "/usr/local/lib/python3.10/site-packages/celery/beat.py", line 541, in setup_schedule
    self._create_schedule()
  File "/usr/local/lib/python3.10/site-packages/celery/beat.py", line 570, in _create_schedule
    self._store['entries']
  File "/usr/local/lib/python3.10/shelve.py", line 114, in __getitem__
    value = Unpickler(f).load()
ModuleNotFoundError: No module named 'backports'
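
For context on where this error comes from: beat's PersistentScheduler keeps its schedule in a shelve (pickle-backed) database, and unpickling data that names a module not importable in the current environment fails at load time, exactly as in the traceback above. A minimal sketch of the failure mode, using a deliberately made-up module name (missing_backports_demo is not a real package):

```python
import pickle

# A pickle records classes by module path, so loading data written in one
# environment can fail in another that lacks the module.  This hand-built
# protocol-0 pickle (GLOBAL opcode + STOP) references a nonexistent module:
blob = b"cmissing_backports_demo\nFoo\n."

try:
    pickle.loads(blob)
except ModuleNotFoundError as exc:
    print(exc)  # No module named 'missing_backports_demo'
```

This is why the error points at the celerybeat-schedule file rather than at the application code: the pickled schedule, not the installed Celery, references the missing backports module.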

Additional Information:

  • The same container, when run on Windows 10 with Docker Desktop, works without any issues.

  • I have checked that the pip, Python and Celery versions are the same.

    FROM python:3.10.13-slim-bullseye
    
     # With this line Python prints directly to the console without buffering messages
     ENV PYTHONUNBUFFERED 1

     # Passed in from the Docker Compose file; during development it is overridden to true
     ARG DEV=false
     # Environment creation and pip upgrade
     RUN apt-get update
     RUN apt-get install -y gcc python3-dev build-essential curl nano
    
     RUN apt-get install -y postgresql-client libjpeg-dev libpq-dev 
    
     #Download the desired package(s)
     RUN curl https://packages.microsoft.com/keys/microsoft.asc |  tee /etc/apt/trusted.gpg.d/microsoft.asc
     RUN curl https://packages.microsoft.com/config/debian/11/prod.list | tee /etc/apt/sources.list.d/mssql-release.list
     RUN apt-get update
     RUN ACCEPT_EULA=Y apt-get install -y msodbcsql17
     # optional: for unixODBC development headers
     RUN apt-get install -y unixodbc-dev
     # optional: kerberos library for debian-slim distributions
     RUN apt-get install -y libgssapi-krb5-2
    
    
     RUN mv /etc/localtime /etc/localtime.old
     RUN ln -s /usr/share/zoneinfo/Europe/Rome /etc/localtime
    
     # locales
     #RUN echo "it_IT.UTF-8 UTF-8" >> /etc/locale.gen
     #RUN locale-gen
    
     COPY ./requirements.txt /tmp/requirements.txt
     COPY ./requirements.dev.txt /tmp/requirements.dev.txt
     RUN pip install --no-cache-dir -r /tmp/requirements.txt
    
     COPY ./compose/celery/celery_worker_start.sh /celery_worker_start.sh
     RUN sed -i 's/\r$//g' /celery_worker_start.sh
     RUN chmod +x /celery_worker_start.sh
    
     COPY ./compose/celery/celery_beat_start.sh /celery_beat_start.sh
     RUN sed -i 's/\r$//g' /celery_beat_start.sh
     RUN chmod +x /celery_beat_start.sh
    
     COPY ./compose/celery/celery_flower_start.sh /celery_flower_start.sh
     RUN sed -i 's/\r$//g' /celery_flower_start.sh
     RUN chmod +x /celery_flower_start.sh
    
     RUN mkdir /app
     WORKDIR /app
     EXPOSE 8000
     # Run the container as the django user
    
     #CMD ["run.sh"]
    
uname -a #from the container:
Linux 3865cf0f97ec 4.19.0-25-amd64 #1 SMP Debian 4.19.289-2 (2023-08-08) x86_64 GNU/Linux

uname -a #from the Host:
Linux VSRVDEB01 4.19.0-25-amd64 #1 SMP Debian 4.19.289-2 (2023-08-08) x86_64 GNU/Linux
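
(Note that identical uname output on host and container proves nothing about the image: inside a container, uname -a always reports the host kernel. The distro an image was built from is recorded in /etc/os-release, so a quick check, assuming a typical Debian-based image, is:)

```shell
# uname -a reports the *host* kernel even inside a container;
# the distro the image was built from is in /etc/os-release
grep PRETTY_NAME /etc/os-release
```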

There are 3 answers

0
isaacparrot

Also, when you say "the same container", do you mean the same source?

Yes, the same source.

The same source for a container isn't guaranteed (and is in fact somewhat unlikely) to produce the same image. There are two pretty reasonable possibilities here:

  • The base image, python:3.10.13-slim-bullseye, was broken and a fix was pushed. This is not uncommon, but I'd also expect other people to report issues; a new revision can be (and likely would be) pushed under the same tag if a glitch was fixed.
  • There's some corruption of your particular image on your build host.

In this case I'd go with the latter explanation.

Short solution (assuming this is the issue): Remove the source container images and re-pull them before re-building

# Remove image
$ docker image rm python:3.10.13-slim-bullseye
# Pull image again
$ docker pull python:3.10.13-slim-bullseye
3.10.13-slim-bullseye: Pulling from library/python
0bc8ff246cb8: Pull complete
ea6c70a3b047: Pull complete
94293398fb1c: Pull complete
2b0c0766ac49: Pull complete
5e93a6aa11f2: Pull complete
Digest: sha256:829bfd6812e20121a26a14bc419b375ee405d1c24dc252f9b740637c4701b122
Status: Downloaded newer image for python:3.10.13-slim-bullseye
docker.io/library/python:3.10.13-slim-bullseye

BEWARE! The output must include Pull complete for each layer! Otherwise, the Docker daemon is re-using cached layers, which is what we want to avoid. If you're having trouble removing all the layers, try 'pruning' the Docker images (see https://stackoverflow.com/a/44791684/1120802) or, in the most extreme case, remove all the docker engine resources and start again (see Docker image corruption? Remove layers?). Be careful, both are destructive actions!

Next, re-build and set the --no-cache option

$ docker build -t your-image-name-here --no-cache .

If this doesn't resolve the issue, see the "Not an image issue?" section below.

Full Explanation:

This is an explanation and demonstration of how this happens, NOT steps to fix this class of issue!

To troubleshoot an issue with a bad image, we can use docker image ls to find the ID of our target image:

$ sudo docker image ls
REPOSITORY    TAG                     IMAGE ID       CREATED          SIZE
redis         latest                  7f27d60cb8e0   12 days ago      138MB
python        3.10.13-bullseye        1a28f256af27   4 weeks ago      911MB
python        3.10.13-slim-bullseye   ee6be26d226b   4 weeks ago      126MB
ubuntu        latest                  e4c58958181a   5 weeks ago      77.8MB

The image ID is the more precise identifier for the set of bytes composing a given Docker image. In my case, 3.10.13-slim-bullseye has the (shortened) ID of ee6be26d226b. If this were to differ between machines, we'd know that the image is different. For the sake of example, we can see how the ubuntu:latest image is different (on a different machine) before and after pulls:

$ docker image ls
REPOSITORY    TAG           IMAGE ID       CREATED        SIZE
ubuntu        latest        6a47e077731f   2 months ago   69.2MB
$ docker pull ubuntu:latest
latest: Pulling from library/ubuntu
bfbe77e41a78: Pull complete
Digest: sha256:2b7412e6465c3c7fc5bb21d3e6f1917c167358449fecac8176c6e496e5c1f05f
Status: Downloaded newer image for ubuntu:latest
$ docker image ls
REPOSITORY    TAG           IMAGE ID       CREATED        SIZE
ubuntu        latest        e343402cadef   5 weeks ago    69.2MB

A given docker image is composed of 'layers' (you've probably seen them referenced with pushing to or pulling from a registry). After pulling an image, these layers reside on the local filesystem. Unfortunately, the contents of these layers can be corrupted when the image is first written to disk, when pulling into the final image, or any time in-between. As an example of how this could happen, let's intentionally introduce corruption and examine how this appears. We'll use the base image referenced in this post, python:3.10.13-slim-bullseye as an example.

First, find the UpperDir of the image we're using. This is a one-liner that packs a bit together, but outputs the location on our filesystem where some of our base container contents are stored:

$ docker image inspect python:3.10.13-slim-bullseye --format="{{json .GraphDriver.Data.UpperDir}}" | jq -r | sed 's/\diff//'
/var/lib/docker/overlay2/a9c2824fa4a07f72827b358a7a92480975e9243b30158e4dabf8c5808ed65928/

Neat, now let's corrupt our image:

$ touch /var/lib/docker/overlay2/a9c2824fa4a07f72827b358a7a92480975e9243b30158e4dabf8c5808ed65928/hello-from-corruption-town

Finally, build a container using the referenced image. In this example I'm using the Dockerfile included below. Note the --no-cache!

$ docker build -t so-hack --no-cache .
[+] Building 5.6s (9/9) FINISHED                                                                                                                                                                                                                                     docker:default
 => [internal] load build definition from Dockerfile                                                                                                                                                                                                                           0.0s
 => => transferring dockerfile: 746B                                                                                                                                                                                                                                           0.0s
 => [internal] load .dockerignore                                                                                                                                                                                                                                              0.0s
 => => transferring context: 2B                                                                                                                                                                                                                                                0.0s
 => [internal] load metadata for docker.io/library/python:3.10.13-slim-bullseye                                                                                                                                                                                                0.0s
 => CACHED [1/5] FROM docker.io/library/python:3.10.13-slim-bullseye                                                                                                                                                                                                           0.0s
 => [2/5] RUN <<EOF cat >> /tmp/requirements.txt                                                                                                                                                                                                                               0.4s
 => [3/5] RUN pip install --no-cache-dir -r /tmp/requirements.txt                                                                                                                                                                                                              4.6s
 => [4/5] WORKDIR /app                                                                                                                                                                                                                                                         0.0s
 => [5/5] RUN <<EOF cat >> app.py                                                                                                                                                                                                                                              0.3s
 => exporting to image                                                                                                                                                                                                                                                         0.2s
 => => exporting layers                                                                                                                                                                                                                                                        0.2s
 => => writing image sha256:8bcd9ca6e43b5d94644e3c284b99c83a156f6fdfe0fb3d81a3a4ce3bebb9c763                                                                                                                                                                                   0.0s
 => => naming to docker.io/library/so-hack                                                                                                                                                                                                                                     0.0s

Finally, run the image and list the contents of the root filesystem.

$ docker run -it --rm so-hack /bin/bash -c 'ls /'
app  bin  boot  dev  etc  hello-from-corruption-town  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var

Our flag, hello-from-corruption-town, is present in our image! Let's see what happens if we try re-pulling the image:

$ docker pull python:3.10.13-slim-bullseye
3.10.13-slim-bullseye: Pulling from library/python
Digest: sha256:829bfd6812e20121a26a14bc419b375ee405d1c24dc252f9b740637c4701b122
Status: Image is up to date for python:3.10.13-slim-bullseye
docker.io/library/python:3.10.13-slim-bullseye
$ ls /var/lib/docker/overlay2/a9c2824fa4a07f72827b358a7a92480975e9243b30158e4dabf8c5808ed65928/
committed  diff  hello-from-corruption-town  link  lower  work

We can see that re-pulling a corrupt image will not fix the image. In this example we have added a flag (hello-from-corruption-town) but this could also be corruption of some other form, whether that's a bad binary, truncated file, or missing directory. The best solution is to remove all related images and download the image layers again.

Wait, not an image issue?

Try building a minimal reproduction of the issue, starting with a minimal Dockerfile like the following:

FROM python:3.10.13-slim-bullseye

RUN <<EOF cat >> /tmp/requirements.txt
celery[redis]==5.3.4
EOF

RUN pip install --no-cache-dir -r /tmp/requirements.txt

WORKDIR /app

RUN <<EOF cat >> main.py
# Taken from https://docs.celeryq.dev/en/stable/userguide/periodic-tasks.html#entries
from celery import Celery
from celery.schedules import crontab

app = Celery()

@app.on_after_configure.connect
def setup_periodic_tasks(sender, **kwargs):
    # Calls test('hello') every 10 seconds.
    sender.add_periodic_task(10.0, test.s('hello'), name='add every 10')

@app.task
def test(arg):
    print(arg)

@app.task
def add(x, y):
    z = x + y
    print(z)
EOF

CMD ["python", "/app/main.py"]

From here, iteratively tweak the Dockerfile until you can reproduce the issue, at which point you'll know which step caused it.

1
JanMalte

Maybe you have some "old" scheduler data?

For me, removing the celerybeat-schedule.bak, celerybeat-schedule.dat and celerybeat-schedule.dir files and restarting celery beat solved the issue.
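
If stale state is the culprit, the fix is just deleting the shelve files that beat created, then restarting it so the schedule is rebuilt from the app configuration. A sketch, assuming the files live in beat's working directory (the base name is whatever -s/--schedule points at, celerybeat-schedule by default):

```shell
# Remove the persistent scheduler state; depending on the platform's
# dbm backend, shelve may have created any of these files
rm -f celerybeat-schedule \
      celerybeat-schedule.bak \
      celerybeat-schedule.dat \
      celerybeat-schedule.dir

# then restart beat so it rebuilds its schedule:
# celery -A app beat -l INFO
```

Pointing beat at a fresh path with -s /tmp/celerybeat-schedule is another quick way to rule stale state in or out without touching the old files.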

0
ssgakhal

The problem was the base image. I used python:3.10.13-slim-bullseye, but my host runs Debian Buster (10.13), so inside the container Celery wanted all the *-backports modules installed. I changed the base image from python:3.10.13-slim-bullseye to debian:10.13-slim and compiled Python 3.10.13 manually:

It is now working. The new version:

On the Host

uname -a

Linux VSRV01 4.19.0-25-amd64 #1 SMP Debian 4.19.289-2 (2023-08-08) x86_64 GNU/Linux

lsb_release -a

No LSB modules are available.
Distributor ID: Debian
Description:    Debian GNU/Linux 10 (buster)
Release:        10
Codename:       buster

On the Container

uname -a

Linux 471c2e5c311e 4.19.0-25-amd64 #1 SMP Debian 4.19.289-2 (2023-08-08) x86_64 GNU/Linux

lsb_release -a

No LSB modules are available.
Distributor ID: Debian
Description:    Debian GNU/Linux 10 (buster)
Release:        10
Codename:       buster

My New Dockerfile:

# Use Debian as the base image
FROM debian:10.13-slim AS builder

# Install necessary dependencies
RUN apt-get update && \
    apt-get install -y build-essential libssl-dev zlib1g-dev libbz2-dev \
                       libreadline-dev libsqlite3-dev wget curl llvm libncurses5-dev \
                       libncursesw5-dev xz-utils tk-dev libffi-dev liblzma-dev nano

# Set working directory
WORKDIR /usr/src/python

# Download and extract Python source
RUN curl -O https://www.python.org/ftp/python/3.10.13/Python-3.10.13.tgz && \
    tar -xzf  Python-3.10.13.tgz

# Build and install Python
WORKDIR /usr/src/python/Python-3.10.13
RUN ./configure --prefix=/usr/local --enable-optimizations --enable-shared LDFLAGS="-Wl,-rpath /usr/local/lib" && \
    make -j$(nproc) && \
    make altinstall

# Clean up unnecessary files
WORKDIR /
RUN rm -rf /usr/src/python
# Verify Python installation
RUN python3.10 --version

# Switch to a new image for the final build
FROM debian:10.13-slim

# Copy Python installation from the builder image
COPY --from=builder /usr/local/ /usr/local/

# Set aliases for python3.10 and pip3.10
RUN ln -s /usr/local/bin/python3.10 /usr/local/bin/python3 && \
    ln -s /usr/local/bin/python3.10 /usr/local/bin/python && \
    ln -s /usr/local/bin/pip3.10 /usr/local/bin/pip3 && \
    ln -s /usr/local/bin/pip3.10 /usr/local/bin/pip

# Verify Python and pip installation
RUN python3 --version && \
    pip3 --version

RUN apt-get update

ENV PYTHONUNBUFFERED=1 PYTHONDONTWRITEBYTECODE=1
ENV PYTHON_VERSION=3.10.13
ENV PYTHON_PIP_VERSION=23.0.1
ENV PYTHON_GET_PIP_URL=https://github.com/pypa/get-pip/raw/0d8570dc44796f4369b652222cf176b3db6ac70e/public/get-pip.py

RUN apt-get install -y python3-dev curl nano 
 
RUN apt-get install -y postgresql-client libjpeg-dev libpq-dev 

#Download the desired package(s)
RUN curl https://packages.microsoft.com/keys/microsoft.asc |  tee /etc/apt/trusted.gpg.d/microsoft.asc
RUN curl https://packages.microsoft.com/config/debian/11/prod.list | tee /etc/apt/sources.list.d/mssql-release.list
RUN apt-get update
RUN ACCEPT_EULA=Y apt-get install -y msodbcsql17
# optional: for unixODBC development headers
RUN apt-get install -y unixodbc-dev
# optional: kerberos library for debian-slim distributions
RUN apt-get install -y libgssapi-krb5-2
 

RUN mv /etc/localtime /etc/localtime.old
RUN ln -s /usr/share/zoneinfo/Europe/Rome /etc/localtime

# locales
#RUN echo "it_IT.UTF-8 UTF-8" >> /etc/locale.gen
#RUN locale-gen

COPY ./requirements.txt /tmp/requirements.txt
COPY ./requirements.dev.txt /tmp/requirements.dev.txt
RUN pip install --no-cache-dir -r /tmp/requirements.txt

COPY ./compose/celery/celery_worker_start.sh /celery_worker_start.sh
RUN sed -i 's/\r$//g' /celery_worker_start.sh
RUN chmod +x /celery_worker_start.sh

COPY ./compose/celery/celery_beat_start.sh /celery_beat_start.sh
RUN sed -i 's/\r$//g' /celery_beat_start.sh
RUN chmod +x /celery_beat_start.sh

COPY ./compose/celery/celery_flower_start.sh /celery_flower_start.sh
RUN sed -i 's/\r$//g' /celery_flower_start.sh
RUN chmod +x /celery_flower_start.sh
 
RUN mkdir /app
WORKDIR /app
EXPOSE 8000