Using Packer, how does one build an Amazon ECR image remotely

My problem:

I want a Docker image saved as an artifact in Amazon ECR (EC2 Container Registry), built by Packer (and Ansible).

My limitations: The build needs to be triggered by Bitbucket Pipelines. Therefore the build steps need to be executed either in Bitbucket Pipelines itself or in an AWS EC2 instance/container.

This is because not all dev machines necessarily have the permissions/packages to build from their local environment. I only want these images to be built as a result of an automated CI process.

What I have tried:

Using Packer, I am able to build AMIs remotely. And I am able to build Docker images using Packer (built locally and pushed remotely to Amazon ECR).
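
For reference, the template I use for the local Docker build and ECR push looks roughly like this (the account ID, region and repository name are placeholders, and the Ansible provisioner section is omitted for brevity):

{
  "builders": [{
    "type": "docker",
    "image": "ubuntu:16.04",
    "commit": true
  }],
  "post-processors": [[
    {
      "type": "docker-tag",
      "repository": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app",
      "tag": "latest"
    },
    {
      "type": "docker-push",
      "ecr_login": true,
      "login_server": "https://123456789012.dkr.ecr.us-east-1.amazonaws.com/"
    }
  ]]
}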

However, Bitbucket Pipelines, which already executes the build steps inside a Docker container, does not allow access to the Docker daemon, so Packer cannot run 'docker run'.

The error I receive in Bitbucket Pipelines:

+ packer build ${BITBUCKET_CLONE_DIR}/build/pipelines_builder/template.json
docker output will be in this color.
==> docker: Creating a temporary directory for sharing data...
==> docker: Pulling Docker image: hashicorp/packer
    docker: Using default tag: latest
    docker: latest: Pulling from hashicorp/packer
    docker: 88286f41530e: Pulling fs layer
    ...
    ...
    docker: 08d16a84c1fe: Pull complete
    docker: Digest: sha256:c093ddf4c346297598aaa13d3d12fe4e9d39267be51ae6e225c08af49ec67fc0
    docker: Status: Downloaded newer image for hashicorp/packer:latest
==> docker: Starting docker container...
    docker: Run command: docker run -v /root/.packer.d/tmp/packer-docker426823595:/packer-files -d -i -t hashicorp/packer /bin/bash
==> docker: Error running container: Docker exited with a non-zero exit status.
==> docker: Stderr: docker: Error response from daemon: authorization denied by plugin pipelines: Command not supported..
==> docker: See 'docker run --help'.
==> docker:
Build 'docker' errored: Error running container: Docker exited with a non-zero exit status.
Stderr: docker: Error response from daemon: authorization denied by plugin pipelines: Command not supported..
See 'docker run --help'.
==> Some builds didn't complete successfully and had errors:
--> docker: Error running container: Docker exited with a non-zero exit status.
Stderr: docker: Error response from daemon: authorization denied by plugin pipelines: Command not supported..
See 'docker run --help'.
==> Builds finished but no artifacts were created.

The following quote says it all (taken from link):

Other commands, such as docker run, are currently forbidden for security reasons on our shared build infrastructure.

So I know why the above is happening. It is a limitation I am faced with, and I am aware that I need to find an alternative.

A Possible Solution: The only solution I can think of at the moment is a Bitbucket Pipeline, using an image with Terraform and Ansible installed, containing the following steps (a rough sketch of such a pipeline follows the list):

  • ansible-local:

    • terraform apply (spins up an instance/container from AMI with ansible and packer installed)
  • ansible-remote (to the instance mentioned above)

    • clone devops repo with packer build script on it
    • execute packer build command (build command depends on ansible, build creates ec2 container registry image)
  • ansible-local

    • terraform destroy
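
A rough sketch of such a bitbucket-pipelines.yml (the image name, directory and playbook names are hypothetical):

image: my-org/terraform-ansible

pipelines:
  default:
    - step:
        script:
          # spin up a build instance from an AMI that has Ansible and Packer installed (hypothetical Terraform config dir)
          - terraform apply -auto-approve ./build/infra
          # run the packer build on that instance (hypothetical inventory and playbook)
          - ansible-playbook -i ./build/inventory run_packer_build.yml
          # tear the build instance down again
          - terraform destroy -auto-approve ./build/infra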

Is the above solution a viable option? Are there alternatives? Can Packer not run commands and commit from a container running remotely in ECS?

My long-term solution will be to use Bitbucket Pipelines only to trigger Lambda functions in AWS, which will spin up containers from our EC2 Container Registry and perform the builds there. More control, and we can have devs trigger the Lambda functions from their machines (with more bespoke dynamic variables).

There are 3 answers

Answer by BMW

My understanding of what is blocking you is that the Bitbucket Pipelines build agents do not have enough permission to do the job (terraform apply, packer build) against your AWS account.

Since Bitbucket Pipelines agents run in the Bitbucket cloud, not in your AWS account (where you could assign an IAM role to them), you should create an IAM user with the policies and permissions listed below, and set its AWS API keys (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and optionally AWS_SESSION_TOKEN) as environment variables in your pipeline.

You can refer to this document on how to add your AWS credentials to Bitbucket Pipelines:

https://confluence.atlassian.com/bitbucket/deploy-to-amazon-aws-875304040.html

With that, you can run packer or terraform commands without issues.
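
For example, with those environment variables set on the pipeline, an amazon-ebs builder needs no credentials hard-coded in the template (the region, source AMI and instance type below are only placeholders):

{
  "builders": [{
    "type": "amazon-ebs",
    "region": "us-east-1",
    "source_ami": "ami-0123456789abcdef0",
    "instance_type": "t2.micro",
    "ssh_username": "ubuntu",
    "ami_name": "bitbucket-built-ami {{timestamp}}"
  }]
}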

For the minimum policy you need to assign to run packer build, refer to this document:

https://www.packer.io/docs/builders/amazon.html#using-an-iam-task-or-instance-role

{
  "Version": "2012-10-17",
  "Statement": [{
      "Effect": "Allow",
      "Action" : [
        "ec2:AttachVolume",
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:CopyImage",
        "ec2:CreateImage",
        "ec2:CreateKeypair",
        "ec2:CreateSecurityGroup",
        "ec2:CreateSnapshot",
        "ec2:CreateTags",
        "ec2:CreateVolume",
        "ec2:DeleteKeypair",
        "ec2:DeleteSecurityGroup",
        "ec2:DeleteSnapshot",
        "ec2:DeleteVolume",
        "ec2:DeregisterImage",
        "ec2:DescribeImageAttribute",
        "ec2:DescribeImages",
        "ec2:DescribeInstances",
        "ec2:DescribeRegions",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeSnapshots",
        "ec2:DescribeSubnets",
        "ec2:DescribeTags",
        "ec2:DescribeVolumes",
        "ec2:DetachVolume",
        "ec2:GetPasswordData",
        "ec2:ModifyImageAttribute",
        "ec2:ModifyInstanceAttribute",
        "ec2:ModifySnapshotAttribute",
        "ec2:RegisterImage",
        "ec2:RunInstances",
        "ec2:StopInstances",
        "ec2:TerminateInstances"
      ],
      "Resource" : "*"
  }]
}

For terraform plan/apply, you need to assign broader permissions, because Terraform can manage nearly all AWS resources.

Second, for your existing requirement you only need to run packer and terraform commands; you don't need to run any docker commands in the Bitbucket pipeline.

So a normal pipeline with the above AWS API environment variables should work directly:

image: hashicorp/packer

pipelines:
  default:
    - step:
        script:
          - packer build <your_packer_json_file>

You should be fine running terraform commands in the hashicorp/terraform image as well.
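
For example, a separate Terraform step could look roughly like this (the ./infra directory is a placeholder):

image: hashicorp/terraform

pipelines:
  default:
    - step:
        script:
          - terraform init ./infra
          - terraform apply -auto-approve ./infra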

Answer by Rob Lockwood-Blake

I think I would approach it like this:

  1. Have a Packer build that produces a "Docker build AMI" that you can run on EC2. Essentially it would just be an AMI with Docker pre-installed, plus anything else you need. This Packer build can be stored in another BitBucket Git repo, and you can have the image built and registered as an AMI through another BitBucket pipeline, so that any changes to your build AMI are picked up automatically. As you already suggest, you would use the Packer AWS builder for that.
  2. Have a Terraform script as part of your current project that is called by your BitBucket pipeline to spin up an instance of the above "Docker build" AMI when your pipeline starts, e.g. terraform apply
  3. Use the Packer Docker Builder on the above EC2 instance to build and push your Docker image to ECR (applying your Ansible scripts).
  4. Terraform destroy the environment once the build is complete

Doing that keeps everything in infrastructure as code, and should make it fairly trivial for you to move the Docker build into the BitBucket pipeline itself if they offer support for running docker run at any point.
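
A minimal sketch of the "Docker build AMI" template from step 1 might look like this (the region, source AMI, instance type and Packer version in the download URL are placeholders):

{
  "builders": [{
    "type": "amazon-ebs",
    "region": "eu-west-1",
    "source_ami": "ami-0123456789abcdef0",
    "instance_type": "t2.micro",
    "ssh_username": "ubuntu",
    "ami_name": "docker-build-ami {{timestamp}}"
  }],
  "provisioners": [{
    "type": "shell",
    "inline": [
      "sudo apt-get update",
      "sudo apt-get install -y docker.io unzip curl",
      "curl -fsSL https://releases.hashicorp.com/packer/1.1.0/packer_1.1.0_linux_amd64.zip -o /tmp/packer.zip",
      "sudo unzip /tmp/packer.zip -d /usr/local/bin"
    ]
  }]
}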

Answer by dnk8n

I set up some Terraform scripts which can be executed from any CI tool, with a few prerequisites:

  • CI tool must have API access tokens to AWS (the only cloud provider supported so far)
  • CI tool must be able to run Terraform or dockerized Terraform container

This will spin up a fresh EC2 instance in a VPC of your choosing and execute a script.

For this Stack Overflow question, that script would contain some Packer commands to build and push a Docker image. The AMI for the EC2 instance would need Packer and Docker installed.

Find more info at: https://github.com/dnk8n/remote-provisioner