I am currently deploying a Django application to an AWS EC2 instance using Docker Compose. The setup containerizes both the Django app and NGINX, with the database hosted on AWS RDS. There is also a Redis service planned for a future Celery worker. The deployment works, but I have several questions and areas I'd like to improve.
Current Setup:
- Dockerfiles are used to create images for the Django app and NGINX, each requiring a 'TAG' variable.
- Docker Compose uses environment variables and defines three services: the Django app, NGINX, and Redis (for a future Celery worker); a rough sketch of this layout is included after this list.
- A GitHub Actions workflow has two jobs: 'build', which builds and pushes the Docker images (tagged with the short SHA-1 of the latest commit), and 'deploy', which copies the Compose file to EC2 and runs the deployment; a workflow sketch also follows below.
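For reference, here is a simplified sketch of the Compose layout; service names, image names, and the contents of the `.env` file are placeholders rather than my exact configuration:

```yaml
# docker-compose.yml (simplified; service and image names are placeholders)
services:
  app:
    image: myregistry/django-app:${TAG}   # TAG is supplied by the environment
    env_file: .env                        # DB settings here point at the RDS instance
    expose:
      - "8000"

  nginx:
    image: myregistry/nginx:${TAG}
    ports:
      - "80:80"                           # HTTP only for now (see questions below)
    depends_on:
      - app

  redis:
    image: redis:7-alpine                 # reserved for the future Celery worker
```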
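And a simplified sketch of the workflow; the registry, action version pins, host path, and secret names are placeholders for my actual values:

```yaml
# .github/workflows/deploy.yml (simplified sketch)
name: deploy
on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Use the short commit SHA as the image tag
        run: echo "TAG=${GITHUB_SHA::7}" >> "$GITHUB_ENV"
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.REGISTRY_USER }}
          password: ${{ secrets.REGISTRY_TOKEN }}
      - name: Build and push the Django image (the NGINX image is analogous)
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: myregistry/django-app:${{ env.TAG }}

  deploy:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Use the short commit SHA as the image tag
        run: echo "TAG=${GITHUB_SHA::7}" >> "$GITHUB_ENV"
      - name: Copy docker-compose.yml to the EC2 host
        uses: appleboy/scp-action@v0.1.7
        with:
          host: ${{ secrets.EC2_HOST }}
          username: ${{ secrets.EC2_USER }}
          key: ${{ secrets.EC2_SSH_KEY }}
          source: docker-compose.yml
          target: ~/app/
      - name: Pull the new images and restart the stack
        uses: appleboy/ssh-action@v1.0.3
        with:
          host: ${{ secrets.EC2_HOST }}
          username: ${{ secrets.EC2_USER }}
          key: ${{ secrets.EC2_SSH_KEY }}
          script: |
            cd ~/app
            export TAG=${{ env.TAG }}
            docker compose pull
            docker compose up -d
```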
Questions and Areas for Improvement:
- HTTPS Implementation: The setup currently only serves HTTP. What is the recommended way to add HTTPS?
- Resource Limitation: Is it advisable to limit resources in Docker Compose? Locally, I noticed high CPU usage initially. How can I control this on EC2 to prevent performance issues? (The first sketch after this list shows the kind of limits I mean.)
- Multiple Environments: I want to set up separate staging and production environments. Should I use AWS RDS for staging as well, or would a PostgreSQL service inside Docker Compose suffice? (See the staging override sketch after this list.)
- Scalability Concerns: How can I ensure the system can handle increased traffic? What strategies should I consider for scaling resources?
- Handling Secrets: Currently, secrets are stored as GitHub secrets and passed to EC2 via the GitHub Actions workflow (the last sketch below illustrates the kind of approach I mean). Is this secure and recommended?
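To make the resource question concrete, this is the kind of limit I have been considering. As far as I know, `docker compose` (v2) applies `deploy.resources.limits` even outside Swarm mode; the values here are placeholders I would still need to tune:

```yaml
# Possible limits for the Django service (values are placeholders to tune)
services:
  app:
    image: myregistry/django-app:${TAG}
    deploy:
      resources:
        limits:
          cpus: "1.0"       # cap the container at one CPU core
          memory: 512M      # hard memory cap for the container
        reservations:
          memory: 256M      # soft guarantee, relevant when services share the host
```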
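For the staging question, the alternative to RDS would be an override file along these lines. The credentials, database name, and the assumption that the app reads `DATABASE_URL` are placeholders for my actual settings:

```yaml
# docker-compose.staging.yml (sketch of a local Postgres instead of RDS)
services:
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: app_staging
      POSTGRES_USER: app
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - pgdata:/var/lib/postgresql/data   # keep data across container restarts

  app:
    environment:
      DATABASE_URL: postgres://app:${POSTGRES_PASSWORD}@db:5432/app_staging
    depends_on:
      - db

volumes:
  pgdata:
```

The idea would be to start staging with `docker compose -f docker-compose.yml -f docker-compose.staging.yml up -d`, which is part of what I am trying to decide between.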
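And to illustrate the secrets question, the kind of approach I mean is steps like the following inside the deploy job, where GitHub Actions secrets are rendered into an `.env` file and copied to the host. Secret names and the target path are placeholders:

```yaml
# Steps that would sit inside the deploy job shown earlier (sketch)
- name: Render .env from GitHub Actions secrets
  run: |
    cat > .env <<'EOF'
    DJANGO_SECRET_KEY=${{ secrets.DJANGO_SECRET_KEY }}
    DATABASE_URL=${{ secrets.DATABASE_URL }}
    EOF
- name: Copy the .env file to the EC2 host
  uses: appleboy/scp-action@v0.1.7
  with:
    host: ${{ secrets.EC2_HOST }}
    username: ${{ secrets.EC2_USER }}
    key: ${{ secrets.EC2_SSH_KEY }}
    source: .env
    target: ~/app/
```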
Conclusion:
I seek advice on optimizing and refining my deployment setup for better security, scalability, and maintainability. Any suggestions or insights would be greatly appreciated. Thank you!