Using a built docker image as the docker image for my test stage (GitLab CI/CD)


Problem

The following is to do with GitLab CI/CD.

I have two stages: `build` and `test`.

In the `build` stage, I am building a Docker image and pushing it to my GCP Artifact Registry, with $CI_COMMIT_SHA as its tag and $CI_PROJECT_NAME as its image name. This works great and pushes just fine.

I am having trouble with the test stage. I have the following issues:

  1. How can I authorise pulling the newly built image from my Artifact Registry so it can be used to run the test job?

  2. How do I specify which image tag to use?

NB

I do not want to use Docker-in-Docker (DinD), as it is considered bad practice and requires my runners to run in privileged mode. Instead, I will be building my images with GCP's Cloud Build.
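For context, the working `build` job looks roughly like this (a sketch; the registry path mirrors the variables used later in this question):

build:
  stage: build
  image: google/cloud-sdk
  script:
    # Cloud Build builds and pushes remotely, so the runner itself
    # never needs Docker or privileged mode.
    - gcloud builds submit
      --tag "${LOCATION}-docker.pkg.dev/${PROJECT_ID}/docker-repo/${CI_PROJECT_NAME}:${CI_COMMIT_SHA}"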

What doesn't work

I have found a closed GitLab issue discussing transferring environment variables across jobs.

Authorising the pull of the image from GCP Artifact Registry poses similar issues. In the past, I have authorised applications to pull Docker images via a short-lived GCP access token obtained with:

export ACCESS_TOKEN=$(gcloud auth print-access-token)

However, this won't work either, for the same reason as above (the variable isn't available across jobs):

test:
  stage: test
  image: https://oauth2accesstoken:${ACCESS_TOKEN}@${LOCATION}-docker.pkg.dev/${PROJECT_ID}/docker-repo/${IMAGE_NAME}:${IMAGE_TAG}
  ...

Summary

In other words, I want to achieve the following, except in a way that actually works:

build: # builds image to GCP Artifact Registry. This already works.
  stage: build
  image: google/cloud-sdk
  ...

test: # run some unit tests using the image built in the `build` stage to mimic the production environment
  stage: test
  image: https://oauth2accesstoken:${ACCESS_TOKEN}@${LOCATION}-docker.pkg.dev/${PROJECT_ID}/docker-repo/${IMAGE_NAME}:${IMAGE_TAG}
  ...

How do I do this? Please help. Many thanks in advance!

1 answer

aljaxus (accepted answer):

"prerequisite" answers to unasked questions I assume will help you out:

  1. You can "export" the access token using the dotenv artifact report that you then "import" in other jobs using dependencies. Afaik these env vars are only present in the "script execution" context, not while "compiling" the CI schema/script. So they would not be added in the URL (looking at your example).
  2. As far as I am aware, you cannot authenticate to a container registry like that (user:pass@host/path).
  3. If I were you, I would push the container image to GitLab's own container registry, to which CI jobs have implicit temporary access; that removes the need to explicitly authenticate in order to use the just-built image (see the second sketch after this list). One caveat: if you do not tag test images with commit hashes (which differ on every commit), the worker running the test job might not pull the very latest image and could end up running the tests against an "old" one.
  4. You do not need DinD to build images on unprivileged runners. You can use GoogleContainerTools/kaniko (see the third sketch after this list). To simplify your life, you can take a look at a "wrapper" I made for it, cts/build-oci.
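A minimal sketch of point 1, passing a token between jobs via a dotenv artifact (job names mirror the question's):

build:
  stage: build
  script:
    # Write the variable into a dotenv file GitLab picks up as a report.
    - echo "ACCESS_TOKEN=$(gcloud auth print-access-token)" >> build.env
  artifacts:
    reports:
      dotenv: build.env

test:
  stage: test
  dependencies:
    - build
  script:
    # ACCESS_TOKEN is available here, in the script context,
    # but NOT to the `image:` keyword of this job.
    - echo "ACCESS_TOKEN is set in the script context"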
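A sketch of point 3, using GitLab's own registry via the predefined CI_REGISTRY_* variables, so the test job's image needs no extra credentials:

test:
  stage: test
  # CI jobs can pull from the project's own registry without extra
  # configuration; tagging with the commit SHA avoids the stale-image
  # caveat mentioned above.
  image: ${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA}
  script:
    # run-tests.sh is a hypothetical test entrypoint
    - ./run-tests.sh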
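And a sketch of point 4, building with kaniko on an unprivileged runner (destination shown for GitLab's registry; an Artifact Registry path works the same way):

build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    # Give kaniko push credentials (the job credentials work for GitLab's registry).
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"${CI_REGISTRY}\":{\"auth\":\"$(printf '%s:%s' "${CI_REGISTRY_USER}" "${CI_REGISTRY_PASSWORD}" | base64 | tr -d '\n')\"}}}" > /kaniko/.docker/config.json
    # kaniko builds and pushes entirely in userspace, so no privileged
    # mode or docker-in-docker is required.
    - /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA}"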

With that out of the way...

How do I specify which image tag to use?

Depends on how "granular" you want to be. If you want absolutely always-repeatable pipeline runs without ANY possibility of clashes, use the short commit SHA (CI_COMMIT_SHORT_SHA) as the container tag. Though I would recommend CI_COMMIT_REF_SLUG, to avoid ending up with hundreds of tags in the registry while still differentiating between refs (branches and tags), as sketched below.
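A sketch of wiring that choice into the ${IMAGE_TAG} variable from the question (the variable name comes from the question; pick whichever predefined variable suits):

variables:
  # One unique tag per commit (fully repeatable pipelines, many tags):
  # IMAGE_TAG: ${CI_COMMIT_SHORT_SHA}
  # One tag per ref (fewer tags, still distinguishes branches/tags):
  IMAGE_TAG: ${CI_COMMIT_REF_SLUG}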

How can I authorise pulling the newly built image from my Artifact Registry so it can be used to run the test job?

You can authenticate the image pull for specific CI jobs by setting the DOCKER_AUTH_CONFIG environment variable; a sketch of producing its value follows. Combine this with dynamic child pipelines and you're good to go.
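A sketch of producing that value for Artifact Registry. Assumptions flagged here: the service-account key lives in key.json, the repository is in europe-west1, and _json_key is used as the username (Artifact Registry's convention for key-file auth), since the runner needs a credential that is still valid when the job starts, which rules out short-lived access tokens:

# Assumed inputs: key.json (service-account key), europe-west1 region.
# The "auth" field is base64("<username>:<password>").
AUTH=$(printf '_json_key:%s' "$(cat key.json)" | base64 | tr -d '\n')
printf '{"auths":{"europe-west1-docker.pkg.dev":{"auth":"%s"}}}\n' "${AUTH}"

Store the output as a DOCKER_AUTH_CONFIG CI/CD variable; the runner can then pull image: ${LOCATION}-docker.pkg.dev/${PROJECT_ID}/docker-repo/${IMAGE_NAME}:${IMAGE_TAG} without any credentials embedded in the URL.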