I am deploying a Laravel application using the Docker platform on AWS Elastic Beanstalk. One of the steps in my Dockerfile bundles assets with Laravel Mix by running npm run production, as seen below.
# Dockerfile
...
# install Node.js and npm from the Alpine edge repositories,
# then install JS dependencies and bundle the assets
RUN echo "${ALPINE_MIRROR}/edge/main" >> /etc/apk/repositories \
    && apk add --no-cache nodejs nodejs-npm --repository="http://dl-cdn.alpinelinux.org/alpine/edge/community" \
    && npm install \
    && npm run production
When this command runs, one of the steps it triggers copies the produced assets to Amazon S3, as configured below:
// webpack.mix.js
const mix = require('laravel-mix');
const s3Plugin = require('webpack-s3-plugin'); // assuming the webpack-s3-plugin package

mix.webpackConfig({
    plugins: [
        // upload everything under public/ except templates and config files
        new s3Plugin({
            exclude: /.*\.(html|php|htaccess|txt|json)/,
            s3Options: {
                accessKeyId: process.env.AWS_ACCESS_KEY_ID,
                secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
                region: process.env.AWS_DEFAULT_REGION,
            },
            s3UploadOptions: {
                Bucket: process.env.AWS_BUCKET,
            },
            directory: 'public',
        }),
    ],
});
As seen above, the script needs the AWS environment variables to work. However, the build script is not picking them up, even though they are configured in the Elastic Beanstalk dashboard (under Configuration). One quick fix would be to define ARG variables in the Dockerfile, but I do not want these credentials to be visible in plain text.
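For reference, that quick fix would look roughly like the sketch below; values passed with --build-arg are recorded in the image metadata and can be recovered with docker history, which is exactly the exposure I want to avoid:

# Dockerfile (the quick fix I want to avoid)
# built with: docker build --build-arg AWS_ACCESS_KEY_ID=... --build-arg AWS_SECRET_ACCESS_KEY=... .
ARG AWS_ACCESS_KEY_ID
ARG AWS_SECRET_ACCESS_KEY
ARG AWS_DEFAULT_REGION
ARG AWS_BUCKET
# ARGs are exposed as environment variables to RUN steps in this build stage,
# so `npm run production` would see them, but they stay visible via `docker history`
RUN npm install && npm run production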
I have tried searching for solutions but have had no luck. Any advice would be really helpful. Thank you.
If you want these environment variables to be available during the build step, add them in the environment section of the corresponding CodeBuild project (or its buildspec).
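A minimal sketch of what that could look like in a buildspec.yml, assuming the image is built by a CodeBuild project and the credentials live in SSM Parameter Store so they never appear in plain text (the parameter names, region, and bucket below are hypothetical):

# buildspec.yml (sketch; parameter names and values are placeholders)
version: 0.2
env:
  parameter-store:
    AWS_ACCESS_KEY_ID: /myapp/aws-access-key-id
    AWS_SECRET_ACCESS_KEY: /myapp/aws-secret-access-key
  variables:
    AWS_DEFAULT_REGION: us-east-1
    AWS_BUCKET: my-asset-bucket
phases:
  build:
    commands:
      - docker build
        --build-arg AWS_ACCESS_KEY_ID="$AWS_ACCESS_KEY_ID"
        --build-arg AWS_SECRET_ACCESS_KEY="$AWS_SECRET_ACCESS_KEY"
        --build-arg AWS_DEFAULT_REGION="$AWS_DEFAULT_REGION"
        --build-arg AWS_BUCKET="$AWS_BUCKET"
        -t laravel-app .

Note that the Dockerfile still needs matching ARG declarations for these values to reach the npm run production step; variables set in the CodeBuild shell do not propagate into docker build on their own.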
If you are using ECS with Elastic Beanstalk, these environment variables should be defined in the task definition's environment section.
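A rough sketch of the relevant fragment of such a task definition (on the multicontainer Docker platform this lives in Dockerrun.aws.json; the container name and values below are placeholders). Keep in mind that variables defined here are injected when the container runs, not while docker build executes, so on their own they will not reach a build-time npm run production:

{
  "containerDefinitions": [
    {
      "name": "laravel-app",
      "environment": [
        { "name": "AWS_DEFAULT_REGION", "value": "us-east-1" },
        { "name": "AWS_BUCKET", "value": "my-asset-bucket" }
      ]
    }
  ]
}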