As the most important benefit of using Docker is to keep the dev and prod environments the same, let's rule out the option of using two different docker-compose.yml files.

Let's say we have a Django application, we use gunicorn to serve it in production, and we have a dedicated Apache2 as a reverse proxy (this Apache2 is outside Docker by design). So this application (docker-compose) has only two parts, web (Django) and db (MySQL). There's nothing wrong with the db part.
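
For concreteness, a minimal sketch of what such a docker-compose.yml might look like (the image, WSGI module, ports, and credentials below are placeholders, not the actual project's):

version: "3"
services:
  web:
    build: .
    command: gunicorn myproject.wsgi:application --bind 0.0.0.0:8000
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
      MYSQL_DATABASE: myproject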

For the Django part, the dev routine without Docker would be using a venv and python3 manage.py runserver, or whatever shortcut an IDE provides. We can happily change our code, and the dev server is smart enough to pick up the change and reflect it in no time.

Things get tricky when Docker comes in, since all source code should be packed into the image. This gives our dev workflow the big overhead of recreating the image and container again and again. One might come up with the following solutions (which I find inelegant):

  • In docker-compose.yml, use a volume to mount the source code folder into the container, so that all changes in the host source code folder are automatically reflected in the container, and gunicorn will then pick up the change. --- This does remove most of the container-recreation overhead, but we can't use the same docker-compose.yml in production, as this introduces a dependency on the source code living on the host server (see the sketch after this list).

  • I know there is a command-line option to mount a host folder into the container, but to my knowledge this option only exists for docker run, not docker-compose. So using a different command to bring the service up in different environments is another dead end. (I am not 100% sure about this as I'm still quite new to Docker, please correct me if I'm wrong.)
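
For reference, the bind-mount approach from the first bullet would look roughly like this in docker-compose.yml (the host and container paths are placeholders):

services:
  web:
    build: .
    volumes:
      - ./src:/app    # host folder mounted over the code baked into the image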

TL;DR: How can I set up my environment so that

  • I use only a single docker-compose.yml for both dev and prod
  • I can develop with live changes easily, without recreating the Docker container

Thanks a lot!


There are 2 answers

Siyu (best answer)

Define your Django service in docker-compose.yml as:

services:
  backend:
    image: backend

Then add a file for dev: docker-compose.dev.yml

services:
  backend:
    extends:
      file: docker-compose.yml
      service: backend
    volumes:
      - local_path:path

To launch for prod, just run docker-compose up.

To launch for dev, run docker-compose -f docker-compose.yml -f docker-compose.dev.yml up.
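
If you want to check what the merge produces, docker-compose can print the effective combined configuration:

docker-compose -f docker-compose.yml -f docker-compose.dev.yml config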

To hot-reload the dev Django app, just reload gunicorn: ps aux | grep gunicorn | grep greencar_proj | awk '{ print $2 }' | xargs kill -HUP
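
As a side note (not part of the answer above), gunicorn also has a --reload flag, so a dev override could run it with reloading enabled and skip the manual HUP; the module name here is just a placeholder:

command: gunicorn myproject.wsgi:application --bind 0.0.0.0:8000 --reload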

maxm

I have also liked to jam as much functionality as possible into a single docker-compose.yml file. A few strategies I would consider:

  1. Define different services for prod and dev, so you'll run docker-compose up dev or docker-compose up prod (or docker-compose run dev). There is some duplication here, but usually not a lot (see the sketch after this list).

  2. Use multiple docker-compose.yml files and merge them, e.g. docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d. More details here: https://docs.docker.com/compose/extends/
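
A rough sketch of option 1, with separate dev and prod services sharing one docker-compose.yml (the service names, WSGI module, and paths are hypothetical):

version: "3"
services:
  prod:
    build: .
    command: gunicorn myproject.wsgi:application --bind 0.0.0.0:8000
  dev:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./src:/app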

I usually just comment out my volumes section, but that's probably not the best solution.