Using host SSL cert across Docker containers


Goal

I want to make sure the SSL cert that is installed on my host machine can be used by my Docker containers running on that machine.

My Setup

VM: My virtual machine is a Google Cloud Compute Engine instance running Linux (Debian 11). Until now, I've been running my webapp on this VM (which is also where the SSL cert is installed).

SSL Certs: I used LetsEncrypt to install a certificate for my domain on the VM. The install was successful, and the files are stored at /root/etc/letsencrypt/live on the VM. I cannot find a way to download them because they are password protected and I don't know what password to use with sudo while SSH'd into the VM.

Docker: I'm migrating to Docker containers (from my current basic "run it directly on the VM" setup). I'm using a docker-compose file with all my services defined (a webapp and a backend service). The webapp will need to access the SSL cert. The Dockerfile for each service is pretty basic: it copies my local repo and sets a CMD that runs startup_script.sh, which downloads keys from GCP Secret Manager and then starts the app. I'm building the Docker images successfully (running docker compose build locally).

Docker-compose:

version: '3.8'

services:
  app:
    build:
      context: .
      dockerfile: docker/app/Dockerfile
    ports:
      - "8000"

App Dockerfile:

FROM python:3.12.0-bullseye
WORKDIR /home
COPY repo /home/repo
WORKDIR /home/repo
COPY /scripts/startup_script.sh /usr/local/bin
RUN chmod +x /usr/local/bin/startup_script.sh

CMD ["/usr/local/bin/startup_script.sh"]

Startup script:

#!/bin/bash

#Download secrets
mkdir -p secrets
gcloud secrets versions access "latest" --secret=google_workspace_credentials --out-file=secrets/credentials.json
gcloud secrets versions access "latest" --secret=credentials_webapp --out-file=secrets/credentials_webapp

#Copy over the google workspace credentials (paths are relative to the repo WORKDIR)
cp secrets/credentials.json auth/credentials.json
cp secrets/credentials_webapp auth/credentials_web.json

#Acquire SSL Certificate
...don't know what to do here

#Setup the runtime packages
pip install -r requirements.txt

#Run the programs
python3 app.py

What I've tried/considered

1. Copying the cert to my local machine and baking it into the Docker image at build time. This did not work because I need a password to sudo-copy the cert off the VM, and I don't have a sudo password (I never set one up; the VM was pre-built by Google Compute Engine). So I was stuck there. I also think it's risky to have the keys stored in the image itself.

2. Running certbot inside the container as part of the startup script to issue a fresh SSL cert every time. I've read this is possible, but it seems like a bad idea because 1. if the issuance fails you may never know, 2. you're putting cert keys directly into the container, which seems sketchy, and 3. LetsEncrypt (and other CAs) have rate limits. However, it would be convenient because I wouldn't have to deal with the keys on the VM and could spin up my container on any host without caring about the SSL certs.

3. Mounting the certificate files into the container with docker run. I've read about this in a Stack Overflow answer. If I'm interpreting it correctly, the method bind-mounts the cert directory from the host into the container when you run it on the VM (see the sketch after this list). I'm a bit worried this won't work since the certs are password protected, but I haven't tried it yet because I don't fully understand how it works.

4. I may have read about a way to open a port on the container so it can listen for the host to send over the cert. I don't know how to do this, but it seems ideal for security purposes.
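
If I understand option 3 correctly, it would look something like the sketch below. The paths are my guesses (the host path would be wherever certbot actually put the files), and webapp is just my image name:

# Bind-mount the host's LetsEncrypt directory into the container, read-only.
# Mounting all of /etc/letsencrypt (not just live/) keeps certbot's symlinks
# from live/ into archive/ working inside the container.
docker run -d \
  -p 8000:8000 \
  -v /etc/letsencrypt:/etc/letsencrypt:ro \
  webapp

In docker-compose, the equivalent would be a volumes: entry on the service instead of the -v flag.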

Question

What approach should I take, and what are the steps to implement it? Sample code would be appreciated, as I'm a new dev!


Edit:

I'm using Traefik, but I need help double-checking my docker compose setup. Currently it defines the traefik service and my webapp that uses the certs. I'm copying the certs over at runtime (in the container command) because I won't be building this image on the VM that has the certs (I build it locally).

services:
  traefik:
    image: traefik:v2.5
    command:
      - sh
      - -c
      - |
        mkdir -p /certs
        cp -r /etc/letsencrypt/live/dreamdai.io /certs/
        traefik \
          --api.insecure=true \
          --providers.docker=true \
          --providers.docker.exposedbydefault=false \
          --entrypoints.web.address=:80 \
          --entrypoints.websecure.address=:443
    ports:
      - "80:80"
      - "8080:8080"
      - "443:443"

  webapp:
    image: webapp
    build:
      context: .
      dockerfile: docker/webapp.dockerfile
    labels:
      - "traefik.http.routers.webapp.rule=Host(`dreamdai.io`)"
      - "traefik.http.routers.webapp.entrypoints=websecure"
      - "traefik.http.routers.webapp.tls=true"
      - "traefik.http.routers.webapp.tls.certresolver=myresolver"

I can't get the traefik service to build an image. I'm running docker compose build traefik and the log says "[+] Building 0.0s (0/0)", then it immediately exits.
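
Separately, I'm wondering whether bind-mounting the host's cert directory into the traefik container would be cleaner than copying at runtime. A minimal sketch of what I mean, assuming the certs actually live under /etc/letsencrypt on the host (that path is a guess on my part):

services:
  traefik:
    image: traefik:v2.5
    volumes:
      # read-only bind mount of the host's cert directory (host path is my guess)
      - /etc/letsencrypt:/etc/letsencrypt:ro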


1 Answer

Answered by Chris Becke:

Use Traefik as an ingress router for your swarm. Configure Traefik to fetch the cert directly from LetsEncrypt as part of its SSL offloading setup.

Now you've set up one container, you don't have to manage the cert yourself, and you have TLS support for a myriad of Docker-based services.
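
For example, a minimal compose sketch of that setup could look like the following. The resolver name (myresolver), the email address, and the acme.json storage path are placeholders to swap for your own; dreamdai.io and port 8000 are taken from your question:

services:
  traefik:
    image: traefik:v2.5
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
      - --entrypoints.websecure.address=:443
      # Let Traefik obtain and renew the cert itself via the ACME TLS-ALPN challenge (uses port 443)
      - --certificatesresolvers.myresolver.acme.tlschallenge=true
      - --certificatesresolvers.myresolver.acme.email=you@example.com
      - --certificatesresolvers.myresolver.acme.storage=/letsencrypt/acme.json
    ports:
      - "80:80"
      - "443:443"
    volumes:
      # persist issued certs across container restarts
      - letsencrypt:/letsencrypt
      # needed so the docker provider can discover your services
      - /var/run/docker.sock:/var/run/docker.sock:ro

  webapp:
    image: webapp
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.webapp.rule=Host(`dreamdai.io`)"
      - "traefik.http.routers.webapp.entrypoints=websecure"
      - "traefik.http.routers.webapp.tls.certresolver=myresolver"
      # tell Traefik which port the app listens on inside the container
      - "traefik.http.services.webapp.loadbalancer.server.port=8000"

volumes:
  letsencrypt:

With this, Traefik obtains and renews the certificate itself and stores it in the named volume, so neither the webapp image nor the host's /etc/letsencrypt directory is involved.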