What is docker node:9-stretch? - node.js

I'm looking at a Dockerfile we have in one of our projects that essentially builds our UI.
I see it's using FROM node:9-stretch, which ships with npm 5.6.0. I want to use npm ci, which requires npm 5.7.0, so I need to update the Node base image in my Dockerfile.
I do not see a node:9-stretch on Docker Hub: https://hub.docker.com/_/node/
Where is node:9-stretch being pulled from, and what is a -stretch version of a base image?

The Node 9 build was dropped after this commit: https://github.com/nodejs/docker-node/commit/b22fb6c84e3cac83309b083c973649b2bf6b092d. You can find the Dockerfile in the diff.
The node:9-stretch image was built before that commit and persists on Docker Hub, so you can still pull it. The 9-stretch tag still appears on the Tags page as of now: https://hub.docker.com/_/node?tab=tags&page=18.
As for the suffix, -stretch means the image is based on Debian 9, whose release codename is "Stretch".
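If you just want to confirm what a given tag ships before touching the Dockerfile, a quick local check (assuming Docker is installed) is:
docker pull node:9-stretch
docker run --rm node:9-stretch npm --version    # prints the bundled npm version (5.6.0 for this tag, per the question)
docker run --rm node:9-stretch node --version   # prints the Node.js version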

Related

Best way to build a Docker image

I have just started to learn Docker. I have a Node.js web application and built two images of it using two different Dockerfiles. The two images differ greatly in size. Can someone tell me why they are so different in size even though they use the same Alpine, Node, and npm versions, and which one is the recommended way to build a Node.js application image?
First Dockerfile which created a 91.92MB image:
FROM alpine
RUN apk add --update nodejs npm curl
COPY . /src
WORKDIR /src
RUN npm install
EXPOSE 8080
ENTRYPOINT ["node", "./app.js"]
Second Dockerfile which created a 259.57MB image:
FROM node:19-alpine3.16
RUN apk add --update npm curl
COPY . /src
WORKDIR /src
RUN npm install
CMD [ "node", "./app.js" ]
If it meets your technical and non-technical needs, using the prebuilt node image is probably simpler and easier to update, and I'd generally recommend using it.
The node image already includes npm; you do not have to apk add it. Moreover, the node image's Dockerfile doesn't use apk to install Node, but instead installs it either from a tar file or from source. So the apk add npm call winds up installing an entire second copy of Node that you don't need; that's probably the largest difference between the two image sizes.
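A minimal sketch of the second Dockerfile without the redundant apk npm package (curl is kept only because the original installs it; whether you actually need it at runtime is an assumption):
FROM node:19-alpine3.16
RUN apk add --update curl
COPY . /src
WORKDIR /src
RUN npm install
CMD [ "node", "./app.js" ]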
The base image you start from hugely affects your final image size, assuming you add the same things on top of each base image. In your case, you are comparing a base alpine image (about 5.5 MB) to a base node image (about 174 MB). The base alpine image has very little in it, while the node image, although based on Alpine, at least has Node in it, and presumably plenty of extra things too. Whether you do or do not need that extra stuff is not for me to say.
You can examine the Dockerfile used to build any of these public images if you wish to see exactly what was added. You can also use tools like dive to examine a local image's layers.
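For example, assuming dive is installed (docker history needs no extra tools):
docker history node:19-alpine3.16   # lists each layer of the image with its size
dive node:19-alpine3.16             # interactive browser for layer contents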

How would I update Node.js inside a Docker container on Windows

I am running into a problem finding good documentation on how to update and check the version of Node that runs in my Docker container. This is just a regular container for a .NET 2 application. Thank you in advance.
To answer your questions directly:
If you don't have the Image's Dockerfile:
You can probably (!) determine the version of Node.js in your container by getting a shell into the container and then running node --version or perhaps npm --version. You could try docker run ... --entrypoint node your-image --version
You can change a running Container and then use docker commit to create a new Container Image from the result.
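A rough sketch of that flow, assuming a running container named my-app (a hypothetical name) whose image provides a shell:
docker exec -it my-app sh        # open a shell in the running container
node --version                   # check the current Node.js version
# ...upgrade Node.js inside the container by whatever means the image's distro allows, then:
docker commit my-app my-app:node-updated   # capture the changed container as a new image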
If you do have the Image's Dockerfile:
You should be able to review it to determine which version of NodeJS it installs.
You can update the NodeJS version in the Dockerfile and rebuild the Image.
It's considered bad practice to update images and containers directly (i.e. not updating the Dockerfile).
Dockerfiles are used to create Images, and Images are used to create Containers. The preferred approach is to update the Dockerfile to update the Image, and to use the updated Image to create updated Containers.
The main reason is to ensure reproducibility and to provide some form of audit trail, since the change can be inferred from the changed Dockerfile.
In your Dockerfile, pin the Node version you want in the FROM line, e.g.:
FROM node:14-alpine as development
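After changing the tag, rebuild the image and recreate the container from it, for example (the image name my-app is hypothetical):
docker build -t my-app:latest .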

Node Docker Build and Production Container Best Practices

I have a Node project that uses MongoDB. For automated testing, we use Mongo Memory Server.
For Mongo Memory Server, Alpine is not supported by MongoDB, so it can't run on Alpine images.
From the docs:
There isn't currently an official MongoDB release for alpine linux. This means that we can't pull binaries for Alpine (or any other platform that isn't officially supported by MongoDB), but you can use a Docker image that already has mongod built in and then set the MONGOMS_SYSTEM_BINARY variable to point at that binary. This should allow you to use mongodb-memory-server on any system on which you can install mongod.
I can run all of my tests in a Docker container using a Node base image, but for production I want to use an Alpine image to save on memory.
So my Dockerfile looks something like this:
FROM node:x.x.x as test
WORKDIR /app
COPY . /app
RUN npm install
RUN npm run build # we use TypeScript; this runs the transpilation
RUN npm test # runs our automated tests
FROM node:x.x.x-alpine
WORKDIR /app
COPY --from=test /app/src /app/src
COPY --from=test /app/package.json /app/package.json
COPY --from=test /app/package-lock.json /app/package-lock.json
COPY --from=test /app/config /app/config
COPY --from=test /app/scripts /app/scripts
RUN npm install --production
RUN npm run build
Doing smoke testing, the resulting Alpine image seems to work okay. I assume it is safe because I install the modules in the alpine image itself.
I am wondering, is this the best practice? Is there a better way to do something like this? That is, for Node specifically, have a larger test container and a small production container safely.
A few points:
If you are building twice, what is the point of the multi-stage build? I don't do much Node work, but the reason you would want a multi-stage build is to build your application with npm run build, take those artifacts, copy them into the final image, and serve/run them in some way. In the Go world it would be something like compiling in a builder stage and then just running the binary.
You always want the things that change most often in the upper layers of the union file system. What this means is that instead of copying the entire application code and then running npm install, you should copy just the package files and run npm install on them first. That way Docker can cache the result of npm install and skip re-downloading the node modules when nothing has changed in those files. Your application code changes far more often than package.json (see the sketch after these points).
On the second stage, same idea: if you have to install there, copy package.json first and run npm install, then copy the rest of the files.
You can have more stages if you want. The name of the game is to get the leanest and cleanest final-stage image; that is the one that goes to the registry. Everything else can be, and should be, removed.
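Putting points 2 and 3 together, a minimal sketch of the reordered Dockerfile might look like the following; the node:x.x.x placeholders come from the question, and the dist output directory and final CMD path are assumptions:
FROM node:x.x.x as test
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm install                  # cached by Docker unless the package files change
COPY . .
RUN npm run build                # TypeScript transpilation
RUN npm test                     # automated tests

FROM node:x.x.x-alpine
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm install --production     # runtime dependencies only
COPY --from=test /app/dist ./dist
CMD ["node", "./dist/app.js"]    # entry point path is an assumption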
Hope it helps.

Gitlab CI: How to avoid rebuilding of images on each commit

I'm using GitLab CI to store and deploy Docker images, but I have a big issue: GitLab CI rebuilds all images on every commit.
The first step is to build my common image, which takes around 8 minutes. Currently I only modify child images, but the common image is still rebuilt on every commit.
The build is thus a waste of time, because the push has no effect: the image is already in the GitLab registry.
How can I avoid rebuilding an image that is already in the GitLab registry?
Below is my gitlab-ci.yml:
build:
  tags:
    - docker_ci_build
  services:
    - docker:dind
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - >
      docker build
      -t $CI_PROJECT/common:6.0.1 common
Below is a Dockerfile:
FROM ubuntu:bionic
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive \
    apt-get install -y packages && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*
COPY file.conf /etc/common/
Caching appears to be more complicated than I first thought, and the Docker documentation for the various pull options fails to mention any nuances.
If the Docker image may or may not yet exist, you can adjust the docker pull command so your build passes even when the pull fails:
- docker image pull $LATEST_IMAGE || true
This will pull the image $LATEST_IMAGE if it exists, and otherwise carry on with your build. The next step is to make sure that your build can actually make use of it.
The --pull option for docker build seemed like what you wanted, and Docker's documentation says
--pull Always attempt to pull a newer version of the image
but it appears that it doesn't actually do anything for the image you're building. More promising is --cache-from, which lets you name images that Docker can use as a cache source. In my simple test, however, I did not see Docker actually use my cache, probably because the image doesn't have enough layers to leverage it.
- docker image pull $LATEST_IMAGE || true
- docker build --pull -t $TAGGED_IMAGE --cache-from $LATEST_IMAGE .
- docker tag $TAGGED_IMAGE $LATEST_IMAGE
- docker push $LATEST_IMAGE
- docker push $TAGGED_IMAGE
This may be more useful if your image is a multi-stage build and can actually leverage the cache. Optionally, you could adjust your gitlab-ci script to avoid building at all if the image already exists.
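One way to sketch that skip, assuming the per-commit tag is immutable so an existing $TAGGED_IMAGE means this commit was already built:
- docker image pull $TAGGED_IMAGE && echo "image already built, skipping" || docker build --pull --cache-from $LATEST_IMAGE -t $TAGGED_IMAGE .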
You may also want to look at gitlab-ci pipelines and at creating separate stages and triggers for when you build and push common, and when you build your sub-projects.
I define the variables in a variables section at the beginning of the file.
$LATEST_IMAGE is defined with the :latest tag, which is basically the most recent build from any CI branch for the project. $TAGGED_IMAGE is defined with :$CI_COMMIT_SHORT_SHA, which tags the Docker image with the Git SHA and makes Kubernetes deployments to a development environment easier to track.
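A sketch of what that variables section could look like; the variable names come from the snippets above, and the registry path is an assumption:
variables:
  LATEST_IMAGE: $CI_REGISTRY_IMAGE/common:latest
  TAGGED_IMAGE: $CI_REGISTRY_IMAGE/common:$CI_COMMIT_SHORT_SHA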

Continuous integration: Where to build the project?

I have a Jenkins server on which I watch a private Git repository for changes, which then triggers a pipeline script (the repository contains a Node.js app). In this pipeline script I need to do the following steps:
Install dependencies (npm install)
Build my application (npm run build, which creates a dist folder)
Build a docker container (docker build) and run the container (which runs a script in the dist folder)
Which of the following two options would be the recommended way to do this, and why?
Option A: Run npm install and npm run build in the jenkins pipeline and copy the dist folder to the docker container during the docker build. This would allow me to only install runtime dependencies in the docker container using npm install --only=production, therefore reducing the image size significantly.
Option B: Run npm install and npm run build during docker build (In the Dockerfile). This would allow me to run the docker container outside the CI server if I have to (I don't have a use case for it now, but it seems cleaner because it is more independent). However, the image size would significantly increase and I am not sure if this is the recommended way.
Any suggestions?
I would choose option B.
The reason is that some npm packages run node-gyp, gcc, and other platform-dependent builds.
Look at the popular bcrypt package as an example.
Going with option A would mean that your Docker image and your Jenkins machine need to share the same platform and build toolchain for such builds, which is not common, to say the least.
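A minimal sketch of option B, with install and build happening during docker build; the base image tag and the entry point path are assumptions:
FROM node:18
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm install                  # all dependencies, including any that compile native code (e.g. bcrypt via node-gyp)
COPY . .
RUN npm run build                # creates the dist folder
CMD ["node", "dist/index.js"]    # entry point path is an assumption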
