I have a GitHub repository with a `.devcontainer` configuration to customize the Codespace image, but the Codespace fails to start. The repository has the following structure:
```
.
├── .devcontainer
│   └── devcontainer.json
├── .github
│   └── workflows
│       └── build_docker_image.yml
└── ingredients
    └── .devcontainer
        └── devcontainer.json
```
The workflow in `build_docker_image.yml` simply builds a Docker image (based on the containers.dev tutorial on prebuilt dev containers) and pushes it to Docker Hub:

```sh
devcontainer build --workspace-folder ingredients --image-name ${{ github.repository }}
docker push ${{ github.repository }}
```
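For anyone who wants to reproduce this outside of Actions, the build amounts to roughly the following (a sketch: the npm install step is how the containers.dev tutorial sets up the CLI, and `kostrykin/mobi-devcontainer-python` stands in for `${{ github.repository }}`):

```sh
# Install the devcontainer CLI (as in the containers.dev tutorial)
npm install -g @devcontainers/cli

# Build the image from the ingredients folder and push it to Docker Hub
devcontainer build --workspace-folder ingredients --image-name kostrykin/mobi-devcontainer-python
docker push kostrykin/mobi-devcontainer-python
```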
This is the image on Docker Hub: https://hub.docker.com/layers/kostrykin/mobi-devcontainer-python/latest/images/sha256-8158e3aaf271db4d2401563b1996ca9f1cf538c8cbdb3b3d372a697d10ebb199?context=repo
The contents of the `ingredients/.devcontainer/devcontainer.json` file, which is used to build the image:

```json
{
    "name": "xxxx xxxx",
    "image": "mcr.microsoft.com/devcontainers/universal:2"
}
```
The contents of the `.devcontainer/devcontainer.json` file, which is used to run the Codespace:

```json
{
    "image": "docker.io/kostrykin/mobi-devcontainer-python:latest"
}
```
Shouldn't this be equivalent to running a Codespace with the default image, i.e. the `mcr.microsoft.com/devcontainers/universal:2` image? However, when running the Codespace with the custom `docker.io/kostrykin/mobi-devcontainer-python:latest` image, the Codespace fails to start:
```
2023-11-22 11:16:32.738Z: docker: failed to register layer: ApplyLayer exit status 1 stdout: stderr: write /home/codespace/.local/lib/python3.10/site-packages/nvidia/cudnn/lib/libcudnn_adv_infer.so.8: no space left on device.
```
The full log is very long, so I posted it here.
My actual question is: What takes up the extra space when using the custom image, compared to using the original image? I see that there should be an extra layer, but shouldn't it be close to zero in size? Also, using `docker inspect`, I can confirm that both images are actually equal in size.
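For reference, this is roughly how I compared them (a sketch: `docker image inspect` prints the total size in bytes, and `docker history` lists the per-layer sizes, where the extra layer should show up):

```sh
# Total image size in bytes; both report the same value for me
docker image inspect --format '{{.Size}}' mcr.microsoft.com/devcontainers/universal:2
docker image inspect --format '{{.Size}}' docker.io/kostrykin/mobi-devcontainer-python:latest

# Per-layer sizes; the layer added by `devcontainer build` appears here
docker history docker.io/kostrykin/mobi-devcontainer-python:latest
```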
And the natural follow-up question is: Is there any elegant way to solve this issue, aside from raising the host requirements from 32 GB to 64 GB, or running `docker system prune --all --force` as the `initializeCommand` of the devcontainer?
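To be concrete, the prune workaround amounts to a `.devcontainer/devcontainer.json` along these lines (a sketch of the variant I used for the `df -h` output below):

```json
{
    "image": "docker.io/kostrykin/mobi-devcontainer-python:latest",
    "initializeCommand": "docker system prune --all --force"
}
```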
In case it matters, here is the output of `df -h`, run inside the Codespace after creating it with the `initializeCommand` workaround mentioned above:
```
/workspaces/mobi-devcontainer-python (main) $ df -h
Filesystem      Size  Used  Avail Use% Mounted on
overlay          32G   13G   18G  43% /
tmpfs            64M     0   64M   0% /dev
shm              64M     0   64M   0% /dev/shm
/dev/root        29G   22G  7.2G  76% /usr/sbin/docker-init
/dev/loop3       32G   13G   18G  43% /workspaces
/dev/sda1        44G  148K   42G   1% /tmp
```