Jest Unit Testing in Docker Container - node.js

I am trying to move my tests into my Docker image build, but the test stage seems to be ignored entirely and simply skipped when I build the image.
What could the problem be?
# Base build
FROM node:16.13 AS build
RUN mkdir /app
WORKDIR /app
ADD package*.json ./
RUN npm i -g npm@6.14.15
RUN npm i --production
COPY . .
RUN npm run build
# Clean
RUN npm prune --production
RUN npm install -g node-prune
RUN node-prune
# Test
FROM build AS test
RUN npm i -g npm@6.14.15
RUN npm i -D jest typescript
RUN npm i -D ts-jest @types/jest
RUN npm i -D @shelf/jest-mongodb
RUN npx jest
# Release
FROM node:16.13-alpine
RUN mkdir /app
WORKDIR /app
COPY --from=build /app/dist ./dist/
COPY --from=build /app/node_modules ./node_modules/
COPY --from=build /app/package.json ./
CMD npm start
Above is my Dockerfile: I prepare the build, then plan to run the tests, and after that I build the release image.
I've been playing around with this for hours (I cleared the cache and tweaked the order of stages in the file), but it didn't help. It keeps ignoring my test stage.
Any hint about this, and about my Dockerfile in general, is welcome.

Docker internally has two different systems for building images. Newer versions of Docker default to the newer system, called BuildKit. One of the major differences is the way the two systems handle multi-stage builds: the older builder simply runs all of the stages through to the end, but BuildKit notices that the final stage only does COPY --from=build and uses nothing from the test stage, so it skips the test stage entirely.
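One quick way to confirm this is to opt out of BuildKit for a single build; the legacy builder runs every stage in order, so the test stage (and its RUN npx jest) will execute. A minimal sketch, with the image tag as a placeholder:
# Force the legacy builder for this one build; all stages run sequentially
DOCKER_BUILDKIT=0 docker build -t myapp .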
I wouldn't run these tests in a Dockerfile (since it doesn't produce a runnable artifact) or in Docker at all. I'd probably run them in my host-based development environment, before building a Docker image:
# On the host
npm install # devDependencies include all test dependencies
npm test    # run the test suite locally (or npx jest)
# repeat until the tests pass (without involving a container)
# Build the image _after_ the tests pass
docker build -t myapp .
docker build also has a --target option to name a specific stage so you could tell Docker to "build the test image", but you'd just immediately delete it. This is where I don't think Docker quite makes sense as a test runner.
docker build -t delete-me --target test .
docker build -t my-app .
docker rmi delete-me
Your Dockerfile also installs a MongoDB test helper. If the tests depend on a live database rather than a mocked one, the other thing to be aware of is that code running during a docker build can never connect to other containers, so in that case you have to run the tests from somewhere else.
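As a sketch of "somewhere else" that still uses Docker, you could run the suite in a throwaway container on the same network as a database container. This assumes jest and the other test dependencies are in package.json's devDependencies, and that the tests read the connection string from an environment variable (MONGO_URL and the network name are just example names):
# Start a disposable MongoDB on a user-defined network
docker network create test-net
docker run -d --rm --name mongo --network test-net mongo:5

# Run the tests in a node container that can reach it, with the source mounted
docker run --rm --network test-net \
  -e MONGO_URL=mongodb://mongo:27017/test \
  -v "$PWD":/app -w /app node:16.13 \
  sh -c 'npm ci && npx jest'

# Clean up
docker stop mongo
docker network rm test-net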

Related

What is the optimal way to exploit Docker multi-stage builds?

I'm using multi-stage builds in my Dockerfile (the first stage is the BUILD, and the second is the RUN).
In my second stage, should I copy the node_modules folder over, or should I run npm i again? What is the optimal way?
Note: all the apk packages that I install in the first stage are required for npm ci to run properly (I had many errors otherwise: node-gyp, etc.).
# Build container stage
FROM node:alpine AS BUILD_IMAGE
RUN apk --no-cache add -u --virtual build-dependencies \
g++ gcc libgcc libstdc++ linux-headers make python3
WORKDIR /app
COPY package*.json ./
RUN npm ci && npm cache clean --force && apk del build-dependencies
COPY . .
RUN npm run lint
RUN npm run tsc
RUN npm prune --production
# Run container stage
FROM node:alpine AS app
WORKDIR /app
COPY /package*.json ./
# Should I copy the `node_modules` folder or
# should I run an `npm i` ? What is the optimal method?
COPY --from=BUILD_IMAGE /app/dist ./dist
COPY --from=BUILD_IMAGE /app/node_modules ./node_modules
# Clean dev packages
EXPOSE 8080
# Run the container with a non-root User
USER node
CMD [ "node", "dist/src/app.js" ]
I always run npm install (or npm ci) inside Docker when I build the image, because if you copy node_modules from your dev environment into the image, then depending on the OS you develop on, many of the binaries inside node_modules will be wrong (Windows/macOS => Linux). Even if you develop on Ubuntu and your image is based on Alpine, you can still run into problems.
The best option is to make a layer just for installing node_modules, and then build the application layer on top of it => you get layer caching and faster builds; see the sketch below.
Note: if you do want to copy node_modules on Windows, use WSL2 (Ubuntu) and base your images on Ubuntu (Debian); then you don't have to worry about those errors.
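A minimal sketch of that layering, assuming npm ci, a build script named build, and the dist/src/app.js entry point from the question (adjust names to your project):
# Dependency layer: rebuilt only when package*.json changes
FROM node:alpine AS deps
WORKDIR /app
COPY package*.json ./
RUN npm ci

# Application layer: reuses the cached node_modules from the deps stage
FROM node:alpine AS build
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build && npm prune --production

# Runtime layer: only the compiled output and pruned production dependencies
FROM node:alpine AS app
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules
USER node
CMD ["node", "dist/src/app.js"]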

Creating React application for production with Docker build?

I am creating a React application using docker build with the following Dockerfile:
# build env
FROM node:13.12.0-alpine as build
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json ./
COPY package-lock.json ./
RUN npm ci
RUN npm install react-scripts -g
RUN npm install --save @fortawesome/fontawesome-free
RUN apk add nano
RUN apk add vim
COPY . ./
RUN npm run build
# production env
FROM nginx:stable-alpine
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
I believe the Dockerfile itself is not of extreme importance here, however. My source code contains a master configuration file that I want to leave out of the Docker image so that I can deploy my React app easily. This causes a compilation error during the RUN npm run build step, since the compiler cannot find a file that is referenced by another file. For development versions this was not an issue, since npm start is not that sensitive.
I would add the configuration file as a Docker volume in the final application, so the code would be able to find it without problems. I am just wondering how to approach a situation like this, since I have not run into it before.
Also feel free to comment on or optimize my Dockerfile; for example, I am unsure whether Nginx is the way to go for production builds of front-end web applications.
If your app currently requires the configuration file at build time, the values are effectively "hard-coded" into the bundle, as you've noticed. If you need to be able to dynamically swap in another configuration file at runtime, you would need to load it with e.g. fetch(), not bundle it (as require does).
If configuring things at build time is fine, then I'd also suggest looking at CRA custom environment variables; you could then inject the suitable values as environment variables at build time.
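A minimal sketch of the build-time route, assuming Create React App's REACT_APP_* convention and a hypothetical REACT_APP_API_URL setting; the fragment goes in the build stage of the Dockerfile, before the build step:
# Accept the value as a build argument and expose it to the CRA build
ARG REACT_APP_API_URL
ENV REACT_APP_API_URL=$REACT_APP_API_URL
RUN npm run build
and pass the value when building the image (URL is just an example):
docker build --build-arg REACT_APP_API_URL=https://api.example.com -t my-app .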
Beyond that, if you're looking for a critique of your Dockerfile, from one Aarni to another:
Your package.json is broken if you need to do anything beyond npm ci or yarn during a build to install stuff. react-scripts should be a dev dependency and Font Awesome should be a regular dependency (see the commands right after this list).
You don't need nano and vim in the temporary container, and even if you did, it'd be better to apk add them in a single step.
You shouldn't need to modify the PATH in the build container.
Using Nginx is absolutely fine.
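For the first point, a small sketch of moving those packages into the right sections of package.json (assuming nothing else relies on them being installed globally):
# Record react-scripts as a dev dependency and Font Awesome as a regular dependency
npm install --save-dev react-scripts
npm install --save @fortawesome/fontawesome-free
With those recorded in package.json, the build stage only needs the plain npm ci shown in the Dockerfile below.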
# build env
FROM node:13.12.0-alpine as build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . ./
RUN npm run build
# production env
FROM nginx:stable-alpine
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Here is a sample of my React Dockerfile; maybe you can use it if you want to optimize.
PS: I am running it in Kubernetes.
# ############################# Stage 0, Build the app #####################
# pull official base image
FROM node:13.12.0-alpine as build-stage
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install app dependencies
COPY package*.json ./
RUN npm install
# add app
COPY . ./
#build for production
RUN npm run-script build
# #### Stage 1, push the compressed built app into nginx ####
FROM nginx:1.17
COPY --from=build-stage /app/build/ /usr/share/nginx/html

npm command not found error while running docker container

I am trying something out with the gitlab-runner image.
FROM gitlab/gitlab-runner:alpine
WORKDIR /app
COPY . /app
RUN apk add yarn && yarn install
RUN yarn --version # this layer prints 1.16.0
RUN ng build --prod
EXPOSE 3000
CMD ["yarn", "run", "start"]
Above is the Dockerfile I have created.
docker build -t runner:1 .
I was able to build the image successfully
docker run -p 3000:3000 runner:1
but when I try to run the container it gives me the error below:
FATAL: Command yarn not found.
I am not sure about this behavior: if it is able to install yarn (apk add yarn) in the base image and install the dependencies using yarn install, then how is it not able to find the yarn command when running the container? Where am I going wrong?
Also, in which directory is yarn installed in the Alpine image?
I know it is not an efficient Dockerfile, but I am trying to get the container running first before optimizing it.
It outputs the version, which means yarn is already installed. You can find its path the same way you found the version:
RUN which yarn
Step 6/10 : RUN which yarn
---> Running in 0f633b81f2ed
/usr/bin/yarn
And we can see that /usr/bin is included in the PATH:
Step 7/11 : RUN echo $PATH
---> Running in fc3f40b6bfd9
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
But I couldn't figure out why it wasn't picking up yarn from the PATH.
So we set the PATH explicitly in the Dockerfile:
ENV PATH=${PATH}
But the issue still persists. So instead we split yarn and its arguments into ENTRYPOINT and CMD respectively in the Dockerfile:
ENTRYPOINT ["yarn"]
CMD ["run", "start"]
Updated Dockerfile
FROM gitlab/gitlab-runner:alpine
ENV PATH=${PATH}
WORKDIR /app
COPY . /app
RUN apk add yarn && yarn install
RUN yarn --version # this layer prints 1.16.0
RUN ng build --prod
EXPOSE 3000
ENTRYPOINT ["yarn"]
CMD ["run", "start"]
---
$ docker run -p 3000:3000 harik8/yarn:latest
yarn run v1.16.0
error Couldn't find a package.json file in "/app"
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
The behaviour of the base image looks unusual; it would be worth digging into it.
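One quick way to dig in is to look at the entrypoint and default command the base image declares, for example:
# Show the ENTRYPOINT and CMD baked into the base image
docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' gitlab/gitlab-runner:alpine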
To build your app you shouldn't use the gitlab-runner image, but a node one.
The gitlab-runner image is for running a GitLab agent, which can be connected to the Docker engine and spawn the node container in which you execute your build, in your case a Docker image build.
To use GitLab you need to prepare a gitlab-ci file where you define the steps and the 'services' your build needs.
TL;DR: change the base image to node:latest, and set up the GitLab runner as an entirely separate piece of work.
However, if your aim really is to extend gitlab-runner with your application, try Docker multi-stage builds:
first use a node:latest image to build your app, and then copy the build output into gitlab-runner.
Runtime images such as gitlab-runner are stripped of build tools like yarn or npm; that's why your image fails. The main goal is to keep runtime images as small as possible, and SDKs are not needed there, and are sometimes dangerous, in production-level work.
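A minimal sketch of that multi-stage idea, assuming the project has a yarn build script that emits a dist/ folder (adjust the build command and output path to your project):
# Stage 1: build the app with a full Node.js toolchain
FROM node:latest AS builder
WORKDIR /app
COPY package.json ./
RUN yarn install
COPY . .
RUN yarn build

# Stage 2: copy only the build output into the runner image
FROM gitlab/gitlab-runner:alpine
COPY --from=builder /app/dist /app/dist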

serve a gridsome website from within the container

I have a static website built with Gridsome. I'm experimenting with Docker containerization and came up with the following Dockerfile:
FROM node:14.2.0-alpine3.10
RUN apk update && apk upgrade
RUN apk --no-cache add git g++ gcc libgcc libstdc++ linux-headers make python yarn
ENV NPM_CONFIG_PREFIX=/home/node/.npm-global
USER node
RUN npm i --global gridsome
RUN yarn global add serve
COPY --chown=node:node gridsome-website/ /home/node/build/
RUN echo && ls /home/node/build/ && echo
WORKDIR /home/node/build
USER node
RUN npm cache clean --force
RUN npm clean-install
# My attempt to serve the website from within, build is successful but serve command doesn't work
CMD ~/.npm-global/bin/gridsome build && serve -d dist/
Gridsome uses port 8080 by default. After running the container via:
docker run --name my-website -d my-website-image
it doesn't fail, but I can't access my website at http://localhost:8080; the container stops right after I run it. I tried copying the dist/ folder from my container via:
$ docker cp my-website:/home/node/build/dist ./dist
then serving it manually from the terminal using:
$ serve -d dist/
It works from the terminal, but why does it fail from within the container?
The point of Gridsome is that it produces a static site that you can host anywhere you can put files, right? You don't need nodejs or anything except for a simple webserver to host it.
If you want to produce a clean production Docker image to serve the static files built by Gridsome, then that's a good use-case for a multistage Dockerfile. In the first stage you build your Gridsome project. In the second and final stage you can go with a clean nginx image that contains your dist folder and nothing else.
It could be as simple as this:
FROM node:current AS builder
WORKDIR /build
COPY gridsome-website ./
RUN npm install && gridsome build
FROM nginx:stable
COPY --from=builder /build/dist /usr/share/nginx/html/
EXPOSE 80
After you build and run the image with 8080->80 port mapping, you can access your site at http://127.0.0.1:8080.
docker build -t my-website-image .
docker run -p 8080:80 my-website-image
It's probably a good idea to set up a .dockerignore file as well that ignores at least node_modules.
$ cat .dockerignore
gridsome-website/node_modules

How to shrink size of Docker image with NodeJs

I created a new Angular 2 app with angular-cli and run it in Docker.
First I initialize the app on my local machine:
ng new project && cd project && "put my Dockerfile there" && docker build -t my-ui . && docker run my-ui
My Dockerfile
FROM node
RUN npm install -g angular-cli@v1.0.0-beta.24 && npm cache clean && rm -rf ~/.npm
RUN mkdir -p /opt/client-ui/src
WORKDIR /opt/client-ui
COPY package.json /opt/client-ui/
COPY angular-cli.json /opt/client-ui/
COPY tslint.json /opt/client-ui/
ADD src/ /opt/client-ui/src
RUN npm install
RUN ng build --prod --aot
EXPOSE 4200
ENV PATH="$PATH:/usr/local/bin/"
CMD ["npm", "start"]
Everything is OK; the problem is the size of the image: 939 MB! I tried to use FROM ubuntu:16.04 and install Node.js on it (it works), but my image is still ~450 MB. I know that node:alpine exists, but I am not able to install angular-cli in it.
How can I shrink the image size? Is it necessary to run "npm install" and "ng build" in the Dockerfile? I would expect to build the app on localhost and copy it into the image. I tried copying the dist dir and the package.json etc. files, but it does not work (the app fails to start). Thanks.
You can certainly use my alpine-ng image if you like.
You can also check out the dockerfile, if you want to try and modify it in some way.
I regret to inform you that even based on alpine, it is still 610MB. An improvement to be sure, but there is no getting around the fact that the angular compiler is grossly huge.
For production, you do not need to distribute an image with Node.js, NPM dependencies, etc. You simply need an image that can be used to start a data volume container providing the compiled sources, release source maps and other assets, effectively no more than what you would redistribute as a package via NPM, which you can then attach to your webserver.
So, for your CI host, you can pick one of the node:alpine distributions, copy the sources in and install the dependencies there, then re-use that image for containers that test the builds, until you finally run a container that performs a production compilation, which you can name:
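As a sketch, the ci-${RELEASE} image referenced below might be built from something like this (the /myapp path matches the later commands; the rest is an assumption about the project layout):
FROM node:alpine
WORKDIR /myapp
COPY package*.json ./
RUN npm install
COPY . .
# No build at image time; containers started from this image run the test and production scripts
docker build -t ci-${RELEASE} .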
docker run --name=compile-${RELEASE} ci-${RELEASE} npm run production
After you have finished compiling the sources within a container, run a container that has the volumes from the compilation container attached, copy the compiled output to a volume on that container, and push the result to your Docker registry:
docker run --name=release-${RELEASE} --volumes-from=compile-${RELEASE} -v /srv/public busybox cp -R /myapp/dist /srv/public
docker commit release-${RELEASE} myapp:${RELEASE}
Try FROM mhart/alpine-node:base-6; maybe it will work.
