I have a plugin named "inventory-backend". It includes a Knex migrations folder, "migrations". Everything works fine locally.
I then created a Docker image, following the Backstage docs exactly (https://backstage.io/docs/deployment/k8s/ and https://backstage.io/docs/deployment/docker/). Here is my Dockerfile:
FROM node:18-bookworm-slim

RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,target=/var/lib/apt,sharing=locked \
    apt-get update && \
    apt-get install -y --no-install-recommends python3 g++ build-essential && \
    yarn config set python /usr/bin/python3

USER node
WORKDIR /app

ENV NODE_ENV=production

COPY --chown=node:node yarn.lock package.json packages/backend/dist/skeleton.tar.gz ./
RUN tar xzf skeleton.tar.gz && rm skeleton.tar.gz

RUN --mount=type=cache,target=/home/node/.cache/yarn,sharing=locked,uid=1000,gid=1000 \
    yarn install --frozen-lockfile --production --network-timeout 300000

COPY --chown=node:node packages/backend/dist/bundle.tar.gz app-config*.yaml ./
RUN tar xzf bundle.tar.gz && rm bundle.tar.gz

CMD ["node", "packages/backend", "--config", "app-config.yaml"]
Next, I deploy PostgreSQL and Backstage, again in the way the docs suggest. But the pod created by the deployment fails with this error:
Backend failed to start up [Error: ENOENT: no such file or directory, scandir '/app/plugins/inventory-backend/migrations'] { errno: -2, code: 'ENOENT', syscall: 'scandir', path: '/app/plugins/inventory-backend/migrations' }
When I look inside the container, I don't see my "migrations" folder, so I added this line to the Dockerfile:
COPY plugins/inventory-backend/migrations /app/plugins/inventory-backend/migrations
Now when I check inside the container, the migrations folder is there, but I still get the same error. How can this happen?
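One thing worth checking is whether the pod is really running the image that was just rebuilt. Something along these lines compares the two (the pod name and the "backstage" tag below are placeholders, not my real names):

# image and image ID the pod is actually running (pod name is a placeholder)
kubectl describe pod backstage-<pod-id> | grep -i image

# ID of the image built against the local Docker daemon
docker image inspect backstage --format '{{.Id}}'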
My package.json file:
{
  "name": "root",
  "version": "1.0.0",
  "private": true,
  "engines": {
    "node": "16 || 18"
  },
  "scripts": {
    "dev": "concurrently \"yarn start\" \"yarn start-backend\"",
    "start": "yarn workspace app start",
    "start-backend": "yarn workspace backend start",
    "build:backend": "yarn workspace backend build",
    "build:all": "backstage-cli repo build --all",
    "build-image": "yarn workspace backend build-image",
    "tsc": "tsc",
    "tsc:full": "tsc --skipLibCheck false --incremental false",
    "clean": "backstage-cli repo clean",
    "test": "backstage-cli repo test",
    "test:all": "backstage-cli repo test --coverage",
    "lint": "backstage-cli repo lint --since origin/main",
    "lint:all": "backstage-cli repo lint",
    "prettier:check": "prettier --check .",
    "new": "backstage-cli new --scope internal"
  },
  "workspaces": {
    "packages": [
      "packages/*",
      "plugins/*"
    ]
  },
  "devDependencies": {
    "@backstage/cli": "^0.22.13",
    "@spotify/prettier-config": "^12.0.0",
    "concurrently": "^6.0.0",
    "lerna": "^4.0.0",
    "prettier": "^2.3.2",
    "typescript": "~5.0.0"
  },
  "resolutions": {
    "@types/react": "^17",
    "@types/react-dom": "^17"
  },
  "prettier": "@spotify/prettier-config",
  "lint-staged": {
    "*.{js,jsx,ts,tsx,mjs,cjs}": [
      "eslint --fix",
      "prettier --write"
    ],
    "*.{json,md}": [
      "prettier --write"
    ]
  },
  "dependencies": {
    "@backstage/errors": "^1.2.2",
    "@manypkg/get-packages": "^1.1.3",
    "@types/pg": "^8.10.2",
    "@types/uuid": "^9.0.3",
    "express": "^4.18.2",
    "express-promise-router": "^4.1.1",
    "node-gyp": "^9.4.0",
    "p5": "^1.7.0",
    "pg": "^8.11.3"
  }
}
And my tsconfig file:
{
  "extends": "@backstage/cli/config/tsconfig.json",
  "include": [
    "packages/*/src",
    "plugins/*/src",
    "plugins/*/dev",
    "plugins/*/migrations"
  ],
  "exclude": ["node_modules"],
  "compilerOptions": {
    "jsx": "react",
    "outDir": "dist-types",
    "rootDir": "."
  }
}
ls output of the /app directory inside the container:
app-config.production.yaml app-config.yaml node_modules package.json packages plugins yarn.lock
As a note, app-config.production.yaml does not override app-config.yaml here, since the CMD only passes --config app-config.yaml. Thank you.
SOLVED: It turned out that my minikube was using its own Docker daemon, not the local Docker daemon I was building the image with, so the cluster kept running an older image that did not contain the migrations folder. Once I switched the runtime environment to my local Docker, the problem was solved.
FYI for anyone who runs into the same case.
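In case it helps, the two usual ways to get a locally built image in front of minikube look roughly like this (the "backstage" tag is just an example):

# option 1: point the local docker CLI at minikube's Docker daemon, then rebuild so the cluster sees the fresh image
eval $(minikube docker-env)
docker image build . -f packages/backend/Dockerfile --tag backstage

# option 2: keep building against the host daemon as before and load the result into minikube afterwards
minikube image load backstage

Depending on the Deployment, imagePullPolicy may also need to be IfNotPresent or Never so Kubernetes does not try to pull the tag from a registry.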