Running "npm ci" when building docker image is much slower - node.js

I tried to run the npm ci command with the same package.json and package-lock.json files in three different environments:
docker host machine - takes ~27s to complete
inside a docker container - takes ~32s to complete
during building a docker image - takes ~163s to complete
I wonder why it takes so much longer to install packages when building an image. What is the difference between running a command during an image build and running the same command manually inside a container? Perhaps it's related to the amount of resources (CPU, memory) Docker allocates when building an image?
I use the same node and npm versions in all three environments. The Docker host is a Windows Server 2019 VM with 2 virtual CPUs and 2 GB of memory. The Docker version is 18.09.2.
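One way to test the resource hypothesis is to print what the build container actually sees and compare it with the same command run manually in a container. This is only a diagnostic sketch using Node's os module; the extra RUN line below is not part of the real Dockerfile and can be removed afterwards:
# hypothetical diagnostic step, added only to inspect the build environment
RUN node -e "const os = require('os'); console.log('cpus:', os.cpus().length, 'mem MB:', Math.round(os.totalmem() / 1048576))"
Running the same node -e one-liner with docker run in the finished image shows what a normal container gets, so the two readings can be compared directly.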

Related

Docker Can't Parallelize NPM Install Despite Running Parallel Stages

I've been working on a fullstack project that's end-to-end JavaScript, and I have a Dockerfile that simplifies to this:
FROM node:18-slim AS backend-builder
WORKDIR /backend-staging
COPY ./backend/package.* .
RUN npm install
# bring in other files
FROM node:18-slim AS frontend-builder
WORKDIR /frontend-staging
COPY ./frontend/package.* .
RUN npm install
# bring in other files, run webpack, etc
FROM node:18-slim AS final
WORKDIR /core
# root package files w/ some non-webpack'd externals and scripts to access after the container builds
COPY ./package.* ./
RUN npm install
COPY --from=backend-builder /backend-staging/build/. .
COPY --from=frontend-builder /frontend-staging/build ./webapp
# setup some postinstall things, set start command, fin
When I run docker build, after pulling the node:18-slim image, I see that BuildKit parallelization takes place across my stages, with all 3 npm install commands showing up in the output at the same time. However, it seems that the actual installation goes one stage at a time, with the output first appearing from the final stage and running to completion, then the backend, then the frontend, before the copying and whatnot resumes. It almost seems like there's a mutex on access to the actual npm program, so while the commands can be scheduled in parallel, the execution is first-come-first-served and single-threaded.
I was reading this question, but I seem to have a different problem, as the build stages are definitely being parallelized; their timers all start simultaneously. It's the commands within the build stages that are serialized across stages, for seemingly no reason.
I'm accessing the Docker engine/daemon via Docker Desktop integration w/ WSL2, with buildkit enabled through my Docker Desktop config. The exact command I'm running to build is docker build . -t 'some-image-tag'.
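One thing worth ruling out first: BuildKit's default tty progress view groups output per step, which can make concurrent steps look serialized even when they are not. Rebuilding with plain progress output interleaves the logs as they are produced, so it is easier to see whether the three npm install steps really overlap (the tag is the placeholder from above):
DOCKER_BUILDKIT=1 docker build --progress=plain -t some-image-tag .
If the timestamps of the three RUN npm install steps still do not overlap in the plain output, the serialization is real rather than a display artifact.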

How do I create a distributable Docker image?

I'd like to build a NodeJS server packaged as an executable, which can then be installed and run on any Linux machine without any pre-requisite dependencies. I was considering packaging it as a Docker image, but that would mean that the user would need Docker to be installed on their system. Is there a way to package a Docker image itself as an executable, so that all the user needs to do is to run an executable file?
With Docker: no
A Docker image itself cannot be turned into a standalone executable. You can create a docker/docker-compose project that is simple to run, but only if the user has Docker installed.
Without Docker: yes
You can still package it without Docker, with the whole Node.js runtime included in the executable.
Have a look at pkg: https://www.npmjs.com/package/pkg
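A minimal sketch of what that looks like with pkg, assuming the server's entry point is a file called server.js (the entry point, target triple, and output name are placeholders, and the exact target string depends on the pkg version):
npm install -g pkg
pkg server.js --targets node18-linux-x64 --output my-server
The resulting my-server binary bundles the Node.js runtime, so the target Linux machine needs neither Node.js nor Docker installed.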

Meteor build is hanging with Docker build but working inside the container

I am trying to create a Dockerfile to build a Windows container with a Meteor app bundle. But my docker build hangs at the "meteor build" step, and sometimes it takes hours to complete.
ONBUILD RUN meteor build --server-only --allow-superuser --directory "c:\tmp\bundle-dir" --architecture os.windows.x86_64
But if I comment out this step, the docker build completes successfully and produces a Windows Server 2019 based image. After starting a container from that image, I can run "meteor build" inside it without problems:
docker run -it <image_name> cmd
I don't know what is going on here. My changes are available on GitHub at https://github.com/singh-ajeet/meterd-windows.
I am using a Google Cloud VM with 8 cores and 30 GB of RAM.
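One difference worth checking is that build-time containers do not necessarily get the same resources as containers you start with docker run; Hyper-V isolated Windows containers in particular default to a fairly small memory limit. A hedged sketch that simply raises the limit for the build (the 4g value and the image tag are arbitrary examples, not taken from the question):
docker build -m 4g -t meteor-windows .
If the build completes quickly with more memory, the hang is resource starvation during the build rather than a Meteor problem.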

Continuous integration: Where to build the project?

I have a Jenkins server that watches a private git repository for changes; a change then triggers a pipeline script (the repository contains a Node.js app). In this pipeline script I need to do the following steps:
Install dependencies (npm install)
Build my application (npm run build, which creates a dist folder)
Build a docker container (docker build) and run the container (which runs a script in the dist folder)
Which of the following two options would be the recommended way to do this, and why?
Option A: Run npm install and npm run build in the Jenkins pipeline and copy the dist folder into the image during docker build. This would allow me to install only runtime dependencies in the image using npm install --only=production, thereby reducing the image size significantly.
Option B: Run npm install and npm run build during docker build (in the Dockerfile). This would allow me to build and run the container outside the CI server if I have to (I don't have a use case for it now, but it seems cleaner because it is more independent). However, the image size would increase significantly and I am not sure if this is the recommended way.
Any suggestions?
I would choose option B.
The reason is that some npm packages run node-gyp, gcc, and other platform-dependent build steps during installation.
Look at the popular bcrypt package as an example.
Going with option A would mean that your Docker image and your Jenkins machine need to provide the same platform and toolchain for such builds, which is not common, to say the least.
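Note that option B does not have to mean a large image: a multi-stage build keeps the compile toolchain in a throwaway stage and installs only runtime dependencies in the final stage. A rough sketch, assuming a dist output folder and an entry point at dist/index.js (both placeholders, not taken from the question):
FROM node:18 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build   # produces /app/dist

FROM node:18-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production   # runtime dependencies only
COPY --from=builder /app/dist ./dist
CMD ["node", "dist/index.js"]
This gives the self-contained build of option B while keeping the image close to the size you would get with option A.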

Only some locally built Docker images fail to work on remote server (error: "No command specified")

I have a perplexing Docker problem. I am running Docker on my Mint laptop and on a Ubuntu VPS. I have been able to build images in the past locally and send them to the server and have them run there. However, for clarity, the ones that work were probably built when I was running Ubuntu locally (more on that later).
I have an example based on Alpine:
FROM alpine:3.5
# Do a system update
RUN apk update
ENTRYPOINT ["sleep", "3"]
I build like so, and send to the remote:
docker build -t alpine-sleep .
docker save alpine-sleep | gzip > alpine-sleep.tgz
rsync --progress alpine-sleep.tgz myserver.example.com:/path/to/images/
I then unpack/import on the remote, and run, thus:
docker import /path/to/images/alpine-sleep.tgz alpine-sleep
docker run -it alpine-sleep
I get this console reply:
docker: Error response from daemon: No command specified.
See 'docker run --help'.
However, if I copy the Dockerfile to the remote, then do this:
docker build -t alpine-sleep-localbuild .
docker run -it alpine-sleep-localbuild
then I get the sleep working fine.
My Docker and kernel versions locally:
jon@jvb ~/alpine_test $ uname -r
4.4.0-79-generic
jon@jvb ~/alpine_test $ docker -v
Docker version 1.12.6, build 78d1802
And remotely:
root@vps:~/alpine-sleep# uname -r
3.13.0-24-generic
root@vps:~/alpine-sleep# docker -v
Docker version 17.05.0-ce, build 89658be
I wonder, does the major difference in the kernel make a difference? I expect 3.13 to 4.4 is quite a big jump. I don't recall what kernel version I was using when I built things while running Ubuntu locally, but it would not surprise me if it was 3.x.
The other thing that strikes me as unexpected is the high variation in Docker version numbers. How do I have version 1.x locally, and 17.x remotely? Has the project been through a version re-numbering?
Update
I've just checked the kernel version when I was running Ubuntu locally, and that was:
4.4.0-75-generic
So, this makes me think that a major kernel discrepancy could not be to blame.
The issue is that Docker won't warn you when you use the wrong combination of save/load and export/import. You save/load an image, and you export/import a tar file from a container. Since you are doing a docker save to save your image, you need to do a docker load to restore it on the other host:
docker load < /path/to/images/alpine-sleep.tgz
I have found this very old issue: https://github.com/moby/moby/issues/1826
An image imported via docker import won't know what command to run. Any image will lose all of its associated metadata on export, so the default command won't be available after importing it somewhere else.
So, run it with the entrypoint:
docker run --entrypoint sleep alpine-sleep 3
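For completeness, the corrected round trip from the question then looks like this (hostname and paths as in the question); docker load reads the gzipped archive directly, so no separate decompression step is needed:
docker save alpine-sleep | gzip > alpine-sleep.tgz
rsync --progress alpine-sleep.tgz myserver.example.com:/path/to/images/
# then on the remote host:
docker load < /path/to/images/alpine-sleep.tgz
docker run -it alpine-sleep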
