Access .env variables during build time in Docker for AWS Elastic Beanstalk


I am deploying a Laravel application using the Docker platform on AWS Elastic Beanstalk. One of the steps in my Dockerfile is bundling assets with Laravel Mix by running npm run production, as seen below.

# Dockerfile
...
# Install Node.js and npm from the Alpine edge repositories,
# then install JS dependencies and build the production assets
RUN echo "${ALPINE_MIRROR}/edge/main" >> /etc/apk/repositories \
  && apk add --no-cache nodejs nodejs-npm --repository="http://dl-cdn.alpinelinux.org/alpine/edge/community" \
  && npm install \
  && npm run production

When this command runs, one of the steps it triggers is copying the produced assets to Amazon S3, as shown below:

mix.webpackConfig({
    plugins: [
        // Upload the compiled assets in public/ to S3 once the build finishes
        new s3Plugin({
            exclude: /.*\.(html|php|htaccess|txt|json)/,
            s3Options: {
                // These credentials must be present as environment variables
                // at build time, i.e. while npm run production is executing
                accessKeyId: process.env.AWS_ACCESS_KEY_ID,
                secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
                region: process.env.AWS_DEFAULT_REGION,
            },
            s3UploadOptions: {
                Bucket: process.env.AWS_BUCKET,
            },
            directory: 'public',
        })
    ]
});

As seen above, the script needs the AWS environment variables to work. However, the build script does not pick them up even though they are configured in the Elastic Beanstalk console (under Configuration > Software > Environment properties); as far as I can tell, those values only reach the container at runtime, not during the docker build. One quick fix would be to define ARG variables with the values in the Dockerfile, but I do not want this information to be visible in plain text.
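For reference, this is the kind of quick fix I want to avoid (a rough sketch only; the values below are placeholders). The credentials would sit in plain text in the repository and can also be read back from the image metadata with docker history:

# Dockerfile - the workaround I want to avoid
ARG AWS_ACCESS_KEY_ID=AKIA_PLACEHOLDER
ARG AWS_SECRET_ACCESS_KEY=SECRET_PLACEHOLDER
ARG AWS_DEFAULT_REGION=eu-west-1
ARG AWS_BUCKET=my-assets-bucket
# ARG values behave like environment variables in later RUN steps,
# so npm run production could read them, but they stay visible in plain text
RUN npm install && npm run production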

I have tried searching around for solutions but have had no luck. Any advice would be really helpful. Thank you.

There are 2 answers

Answer from paradox

If you want these environment variables to be available during the build step, add them in the environment variables section of the corresponding CodeBuild project.
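For example (a sketch only, assuming the image is built by CodeBuild; the image tag is illustrative), the variables defined in the project's environment are available to the build shell and can be forwarded to the Docker build as build args:

# In the CodeBuild build phase
docker build \
  --build-arg AWS_ACCESS_KEY_ID="$AWS_ACCESS_KEY_ID" \
  --build-arg AWS_SECRET_ACCESS_KEY="$AWS_SECRET_ACCESS_KEY" \
  --build-arg AWS_DEFAULT_REGION="$AWS_DEFAULT_REGION" \
  --build-arg AWS_BUCKET="$AWS_BUCKET" \
  -t my-laravel-app .

Keep in mind that values passed with --build-arg can still be recovered from the image history with docker history, so this is best suited to non-sensitive values or build args consumed only in an earlier stage of a multi-stage build.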


If you are using ECS with Elastic Beanstalk, then these environment variables should be defined in the task definition's environment section.

Answer from realnsleo

I managed to achieve what I needed with help from this guide:

https://aws.amazon.com/blogs/security/how-to-manage-secrets-for-amazon-ec2-container-service-based-applications-by-using-amazon-s3-and-docker/

Basically:

  1. Create a bucket to store a file containing the environment variables you need.
  2. Set a bucket policy that limits access to requests coming from within the same VPC.
  3. In the Dockerfile, execute a script that downloads this file and loads the environment variables before the asset build runs (see the sketch below).
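A rough sketch of step 3, assuming an Alpine-based image where the aws-cli package is available, that the build can reach S3 through the VPC endpoint allowed in step 2, and that the file is stored as KEY=value lines at a placeholder location s3://my-config-bucket/build.env:

# Dockerfile (sketch)
RUN apk add --no-cache aws-cli \
# download the env file from the restricted bucket
  && aws s3 cp s3://my-config-bucket/build.env /tmp/build.env \
# export every variable in the file into the current shell
  && set -a && . /tmp/build.env && set +a \
  && npm install \
  && npm run production \
# delete the file in the same RUN step so it is not baked into the image
  && rm /tmp/build.env

Downloading, sourcing, and deleting the file inside a single RUN instruction keeps the variables out of the final image layers.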

Thanks!