/busybox/sh: eval: line 109: apk: not found while running build_backend on GitLab


I'm trying to get the build_backend stage (which builds from a Dockerfile) to pass before build_djangoapp on GitLab, but it fails with this error:

/busybox/sh: eval: line 111: apk: not found
Cleaning up file based variables
00:01
ERROR: Job failed: exit code 127

GitLab CI/CD project

.gitlab-ci.yml

# Official image for Hashicorp's Terraform. It uses light image which is Alpine
# based as it is much lighter.
#
# Entrypoint is also needed as image by default set `terraform` binary as an
# entrypoint.
image:
  name: hashicorp/terraform:light
  entrypoint:
    - '/usr/bin/env'
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'

# Default output file for Terraform plan
variables:
  GITLAB_TF_ADDRESS: ${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/terraform/state/${CI_PROJECT_NAME}
  PLAN: plan.tfplan
  PLAN_JSON: tfplan.json
  TF_ROOT: ${CI_PROJECT_DIR}
  GITLAB_TF_PASSWORD: ${CI_JOB_TOKEN}

cache:
  paths:
    - .terraform

before_script:
  - apk --no-cache add jq
  - alias convert_report="jq -r '([.resource_changes[]?.change.actions?]|flatten)|{\"create\":(map(select(.==\"create\"))|length),\"update\":(map(select(.==\"update\"))|length),\"delete\":(map(select(.==\"delete\"))|length)}'"
  - cd ${TF_ROOT}
  - terraform --version
  - echo ${GITLAB_TF_ADDRESS}
  - terraform init -backend-config="address=${GITLAB_TF_ADDRESS}" -backend-config="lock_address=${GITLAB_TF_ADDRESS}/lock" -backend-config="unlock_address=${GITLAB_TF_ADDRESS}/lock" -backend-config="username=${MY_GITLAB_USERNAME}" -backend-config="password=${MY_GITLAB_ACCESS_TOKEN}" -backend-config="lock_method=POST" -backend-config="unlock_method=DELETE" -backend-config="retry_wait_min=5"

stages:
  - validate
  - build
  - test
  - deploy
  - app_deploy

validate:
  stage: validate
  script:
    - terraform validate

plan:
  stage: build
  script:
    - terraform plan -out=$PLAN
    - terraform show --json $PLAN | convert_report > $PLAN_JSON
  artifacts:
    name: plan
    paths:
      - ${TF_ROOT}/plan.tfplan
    reports:
      terraform: ${TF_ROOT}/tfplan.json

# Separate apply job for manual launching Terraform as it can be destructive
# action.
apply:
  stage: deploy
  environment:
    name: production
  script:
    - terraform apply -input=false $PLAN
  dependencies:
    - plan
  when: manual
  only:
    - master

build_backend:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - echo "{\"auths\":{\"https://gitlab.amixr.io:4567\":{\"username\":\"gitlab-ci-token\",\"password\":\"$CI_JOB_TOKEN\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --cache=true --context ./djangoapp --dockerfile ./djangoapp/Dockerfile --destination $CONTAINER_IMAGE:$CI_COMMIT_REF_NAME

# https://github.com/GoogleContainerTools/kaniko#pushing-to-google-gcr
build_djangoapp:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  before_script:
    - echo 1
  script:
    - export GOOGLE_APPLICATION_CREDENTIALS=$TF_VAR_gcp_creds_file
    - /kaniko/executor --cache=true --context ./djangoapp --dockerfile ./djangoapp/Dockerfile --destination gcr.io/${TF_VAR_gcp_project_name}/djangoapp:$CI_COMMIT_REF_NAME
  when: manual
  only:
    - master
  needs: []

app_deploy:
  image: google/cloud-sdk
  stage: app_deploy
  before_script:
    - echo 1
  environment:
    name: production
  script:
    - gcloud auth activate-service-account --key-file=${TF_VAR_gcp_creds_file}
    - gcloud container clusters get-credentials my-cluster --region us-central1 --project ${TF_VAR_gcp_project_name}
    - kubectl apply -f hello-kubernetes.yaml
  when: manual
  only:
    - master
  needs: []

1 Answer

Jenneth (Best answer)

I looked at your project, and it appears you've already figured this one out.

Your .gitlab-ci.yml has a global before_script block that tries to install packages with apk. But your build_backend job is based on the kaniko image, which uses BusyBox and has neither apk nor apt-get (hence exit code 127, "command not found"). In a later commit you overrode the before_script for build_backend, and that took the problem away.
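For reference, a job-level before_script completely replaces the global one. A minimal sketch of that kind of override (the echo placeholder is illustrative; your actual commit may differ):

build_backend:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  # Job-level before_script replaces the global one, so the Alpine-only
  # `apk --no-cache add jq` line never runs in the BusyBox-based kaniko image.
  before_script:
    - echo "skipping global before_script"
  script:
    - echo "{\"auths\":{\"https://gitlab.amixr.io:4567\":{\"username\":\"gitlab-ci-token\",\"password\":\"$CI_JOB_TOKEN\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --cache=true --context ./djangoapp --dockerfile ./djangoapp/Dockerfile --destination $CONTAINER_IMAGE:$CI_COMMIT_REF_NAME

Any other job that doesn't run on an Alpine-based image needs a similar override (as build_djangoapp and app_deploy already do with their echo 1 lines), or the global apk line will fail there too.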