How to upload large Docker images to Google Container Registry?


I'm building a LAP (Linux/Apache/PHP) image from this repository (https://github.com/ulsmith/alpine-apache-php7), and my application includes a 2.5 GB folder of images. The image builds successfully, but when I try to push it to Google Container Registry the push fails because of the large size, sometimes with a server timeout. The error only appears after 15-20 minutes or more; before that, the Google Cloud SDK shows no output at all. Here is the command I am using:

gcloud builds submit --tag us.gcr.io/[project-id]/test:v1
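
In case the failure is Cloud Build's own build timeout rather than the registry upload itself, the timeout can be raised on the same command (a sketch; the two-hour value is an arbitrary example):

gcloud builds submit --timeout=2h --tag us.gcr.io/[project-id]/test:v1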

I checked crcmod and it is enabled.
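
(For reference, the crcmod check can be repeated with the command below; "compiled crcmod: True" in the output confirms it is enabled:)

gsutil version -l | grep crcmod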


This is my Dockerfile:

FROM alpine:edge
MAINTAINER Paul Smith <pa.ulsmith.net>

RUN echo "http://dl-cdn.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories

RUN apk update && apk upgrade && apk add \
    bash apache2 php7-apache2 curl ca-certificates openssl openssh git php7 php7-phar php7-json php7-iconv php7-openssl tzdata openntpd nano

RUN curl -sS https://getcomposer.org/installer | php && mv composer.phar /usr/local/bin/composer

RUN apk add \
    php7-ftp \
    php7-xdebug \
    php7-mcrypt \
    php7-mbstring \
    php7-soap \
    php7-gmp \
    php7-pdo_odbc \
    php7-dom \
    php7-pdo \
    php7-zip \
    php7-mysqli \
    php7-sqlite3 \
    php7-pdo_pgsql \
    php7-bcmath \
    php7-gd \
    php7-odbc \
    php7-pdo_mysql \
    php7-pdo_sqlite \
    php7-gettext \
    php7-xml \
    php7-xmlreader \
    php7-xmlwriter \
    php7-tokenizer \
    php7-xmlrpc \
    php7-bz2 \
    php7-pdo_dblib \
    php7-curl \
    php7-ctype \
    php7-session \
    php7-redis \
    php7-exif \
    php7-intl \
    php7-fileinfo \
    php7-ldap \
    php7-apcu

RUN apk add php7-simplexml

RUN cp /usr/bin/php7 /usr/bin/php \
    && rm -f /var/cache/apk/*

RUN sed -i "s/#LoadModule\ rewrite_module/LoadModule\ rewrite_module/" /etc/apache2/httpd.conf \
    && sed -i "s/#LoadModule\ session_module/LoadModule\ session_module/" /etc/apache2/httpd.conf \
    && sed -i "s/#LoadModule\ session_cookie_module/LoadModule\ session_cookie_module/" /etc/apache2/httpd.conf \
    && sed -i "s/#LoadModule\ session_crypto_module/LoadModule\ session_crypto_module/" /etc/apache2/httpd.conf \
    && sed -i "s/#LoadModule\ deflate_module/LoadModule\ deflate_module/" /etc/apache2/httpd.conf \
    && sed -i "s#^DocumentRoot \".*#DocumentRoot \"/app/public\"#g" /etc/apache2/httpd.conf \
    && sed -i "s#/var/www/localhost/htdocs#/app/public#" /etc/apache2/httpd.conf \
    && printf "\n<Directory \"/app/public\">\n\tAllowOverride All\n</Directory>\n" >> /etc/apache2/httpd.conf
ENV PHP_ALLOW_URL_INCLUDE=On
RUN mkdir /app && mkdir /app/public && chown -R apache:apache /app && chmod -R 755 /app && mkdir bootstrap
# The 2.5 GB folder of images is baked into the image layer here
COPY app/ /app/public/
ADD start.sh /bootstrap/
RUN chmod +x /bootstrap/start.sh

EXPOSE 80
ENTRYPOINT ["/bootstrap/start.sh"]

Is there a better way to push large images?


2 Answers

Answer by Gautham

Please contact us with the unredacted image at [email protected].

In general (this may not be the issue in your case), the problem with large images is that the short-lived access tokens obtained through the normal token exchange expire before the upload finishes, which results in failed uploads. To get the very long sessions needed to upload such images to GCR, you will have to use JSON key authentication.
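
A minimal sketch of JSON key authentication, assuming a service account with push access to the project's registry; the account name and key file name are placeholders:

# Create a key for an existing service account (placeholder account name)
gcloud iam service-accounts keys create gcr-key.json \
    --iam-account=pusher@[project-id].iam.gserviceaccount.com

# Log Docker in to GCR with the key; _json_key is the literal username GCR expects
cat gcr-key.json | docker login -u _json_key --password-stdin https://us.gcr.io

# Build and push directly with Docker instead of going through Cloud Build
docker build -t us.gcr.io/[project-id]/test:v1 .
docker push us.gcr.io/[project-id]/test:v1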

Note also that Container Registry does not support Docker chunked uploads. Some container image tools can push large images either in chunks or as a single monolithic upload; when pushing to Container Registry, you must use monolithic uploads.

Answer by Izzy Lazerson

If you are pushing a large image to GCR, a much quicker way is to:

  • Spin up a GCP VM instance
  • Upload your Dockerfile (and any other files needed for the build)
  • Build the image and push it to GCR from there (sketched below)
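
A sketch of that flow, assuming Docker is installed on the VM; the instance name, zone, and paths are placeholders to substitute with your own:

# Spin up a VM (a region close to the registry keeps the push fast)
gcloud compute instances create large-build --zone=us-central1-a

# Copy the Dockerfile and build context to the VM
gcloud compute scp --recurse ./my-app large-build:~/my-app --zone=us-central1-a

# SSH in, then run the remaining commands from inside the VM
gcloud compute ssh large-build --zone=us-central1-a
gcloud auth configure-docker   # once, so Docker can authenticate to GCR
docker build -t us.gcr.io/[project-id]/test:v1 ~/my-app
docker push us.gcr.io/[project-id]/test:v1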