Can't run Bazel nodejs_image with Puppeteer (Error: libgobject-2.0.so.0)


I am using Bazel to build Docker containers:

ts_config(
    name = "tsconfig",
    src = "tsconfig.lib.json",
)

ts_project(
    name = "lib",
    srcs = ["index.ts"],
    declaration = True,
    tsconfig = "tsconfig",
    deps = [
        "@npm//@types/node",
        "@npm//puppeteer",
    ],
)

nodejs_binary(
    name = "server",
    data = [
        "lib",
    ],
    entry_point = "index.ts",
)

nodejs_image(
    name = "image",
    binary = "server",
)
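
For context, the two targets are invoked roughly like this (the labels assume the BUILD file sits in the workspace root):

bazel run //:server   # plain nodejs_binary
bazel run //:image    # nodejs_image built from the same binary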

Running the nodejs_binary works fine.

But running the nodejs_image "image" throws an error:

(node:44) UnhandledPromiseRejectionWarning: Error: Failed to launch the browser process!
/app/server.runfiles/node_puppeteer/node_modules/puppeteer/.local-chromium/linux-901912/chrome-linux/chrome: error while loading shared libraries: libgobject-2.0.so.0: cannot open shared object file: No such file or directory

TROUBLESHOOTING: https://github.com/puppeteer/puppeteer/blob/main/docs/troubleshooting.md

Hence I've tried to add a custom base image (this one) like this:

nodejs_image(
    name = "base_image",
    base = "@nodejs_puppeteer//image",
    binary = "server",
)

and in WORKSPACE:

load("@io_bazel_rules_docker//container:container.bzl", "container_pull")

container_pull(
    name = "nodejs_puppeteer",
    digest = "sha256:22ec485fa257ec892efc2a8b69ef9a3a2a81a0f6622969ffe2d416d2a076214b",
    registry = "docker.io",
    repository = "drakery/node-puppeteer:latest",
)

However, running the updated nodejs_image "base_image" throws this error:

[link_node_modules.js] An error has been reported: [Error: EACCES: permission denied, symlink '/app/server.runfiles/npm/node_modules' -> 'node_modules'] {
  errno: -13,
  code: 'EACCES',
  syscall: 'symlink',
  path: '/app/server.runfiles/npm/node_modules',
  dest: 'node_modules'
} Error: EACCES: permission denied, symlink '/app/server.runfiles/npm/node_modules' -> 'node_modules'

How can I add the missing dependencies into the nodejs_image?

A minimal reproduction of the issue can be found here: https://github.com/flolu/bazel-node-puppeteer


There are 2 answers

Answer by Florian Ludewig (best answer)

As suggested by @Rohan Singh and @Noam Yizraeli, I changed the custom base image to match my development environment. So I created a Docker image with Ubuntu as its base and with Node.js and Chrome installed:

FROM ubuntu:20.04

# Install Node.js
RUN apt-get update \
  && apt-get install -y curl
RUN curl --silent --location https://deb.nodesource.com/setup_14.x | bash -
RUN apt-get install --yes nodejs
RUN apt-get install --yes build-essential

# Install Chrome
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update \
  && apt-get install -y wget gnupg \
  && wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
  && sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list' \
  && apt-get update \
  && apt-get install -y google-chrome-stable fonts-ipafont-gothic fonts-wqy-zenhei fonts-thai-tlwg fonts-kacst fonts-freefont-ttf libxss1 \
  --no-install-recommends \
  && rm -rf /var/lib/apt/lists/*
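
Before Bazel can pull the image by digest, it has to be built and pushed to a registry. Roughly, assuming the plain docker CLI and the drakery/node-puppeteer repository on Docker Hub:

docker build -t drakery/node-puppeteer .
docker push drakery/node-puppeteer
# `docker push` prints the sha256 digest that goes into container_pull below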

I pulled it in Bazel like this:

load("@io_bazel_rules_docker//container:container.bzl", "container_pull")

container_pull(
    name = "ubuntu",
    digest = "sha256:a1ceb3aac586b6377821ffe6aede35c3646649ee5ac38c3566799cd04745257f",
    registry = "docker.io",
    repository = "drakery/node-puppeteer",
)

And used it like this:

nodejs_image(
    name = "custom_ubuntu",
    base = "@ubuntu//image",
    binary = "server",
)
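
For reference, the image target can be built and loaded into a local Docker daemon via bazel run (the package path is assumed here):

# assuming the BUILD file lives in the workspace root
bazel run //:custom_ubuntu             # build the image, load it into Docker, and run it
bazel run //:custom_ubuntu -- --norun  # only load the image without running it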

Here is the final working repository: https://github.com/flolu/bazel-node-puppeteer/tree/050376d36bccb67a93933882a459f0af3051eabd

Answer by Rohan Singh

This is a long-standing issue with building native dependencies with rules_nodejs.

The native dependency is built and linked on your host machine. The built version is copied into a container image by nodejs_image, but the locations of shared libraries like libgobject-2.0.so.0 are often different in the image than on your development machine.

There is no general solution to this, but here are some possible workarounds:

  1. Run npm rebuild in the entrypoint to your image. This should rebuild native modules and relink them against the shared library locations in the running container (see the sketch after this list). The downside is that it increases your container startup time, and it won't necessarily work in all cases: environment-specific details can still leak in from the host platform.

  2. Don't copy in node_modules from the host at all. Instead, run npm install or yarn on container startup. This could be necessary if just running npm rebuild at container startup doesn't do what you need. However, it's going to increase startup time even further.

  3. Strictly control your development environment so that it matches the base image that you use for nodejs_image. For example, require that all development work happens on Ubuntu 20.04, and use that same distribution as the base image for all your nodejs_image targets. Or go a step further and containerize your development environment using the same base image, and do your development in that container.
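
Here is a minimal sketch of what option 1 could look like with a hand-written entrypoint script (the start command is a placeholder):

#!/bin/sh
# entrypoint.sh (hypothetical): relink native modules against the shared
# libraries that are actually present in the running container, then start the app
set -e
npm rebuild
exec node index.js  # placeholder for the real start command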

That last option (matching the development environment to the image) is what I actually do in practice. We have developer virtual machines that run the exact same OS that we use as the base for all of our built images, so things "just work".
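
For example, a rough sketch of such a containerized development shell, reusing an Ubuntu-based image like the one in the accepted answer:

# open a throwaway development shell inside the same image that the
# nodejs_image targets use as their base
docker run --rm -it \
  -v "$PWD":/workspace \
  -w /workspace \
  drakery/node-puppeteer \
  bash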