How to shrink the size of a Docker image with Node.js

I created a new Angular 2 app with angular-cli and ran it in Docker.
First I initialized the app on my local machine:
ng new project && cd project && "put my Dockerfile there" && docker build -t my-ui . && docker run my-ui
My Dockerfile:
FROM node
RUN npm install -g angular-cli@v1.0.0-beta.24 && npm cache clean && rm -rf ~/.npm
RUN mkdir -p /opt/client-ui/src
WORKDIR /opt/client-ui
COPY package.json /opt/client-ui/
COPY angular-cli.json /opt/client-ui/
COPY tslint.json /opt/client-ui/
ADD src/ /opt/client-ui/src
RUN npm install
RUN ng build --prod --aot
EXPOSE 4200
ENV PATH="$PATH:/usr/local/bin/"
CMD ["npm", "start"]
Everything is OK; the problem is the size of the image: 939 MB! I tried to use FROM ubuntu:16.04 and install Node.js on it (it works), but my image still weighed ~450 MB. I know that node:alpine exists, but I am not able to install angular-cli in it.
How can I shrink the image size? Is it necessary to run "npm install" and "ng build" in the Dockerfile? I would expect to build the app on localhost and copy it into the image. I tried to copy the dist dir and the package.json etc. files, but it does not work (the app fails to start). Thanks.

You can certainly use my alpine-ng image if you like.
You can also check out the Dockerfile if you want to try to modify it in some way.
I regret to inform you that even based on Alpine, it is still 610 MB. An improvement, to be sure, but there is no getting around the fact that the Angular compiler is grossly huge.

For production, you do not need to distribute an image with Node.js, NPM dependencies, etc. You simply need an image that can be used to start a data volume container providing the compiled sources, release source maps and other assets: effectively no more than what you would redistribute as a package via NPM, which you can attach to your webserver.
So, for your CI host, pick one of the node:alpine distributions, copy in the sources and install the dependencies there. You can then re-use that image for containers that test the builds, until you finally run a container that performs the production compilation, which you can name:
docker run --name=compile-${RELEASE} ci-${RELEASE} npm run production
After you have finished compiling the sources in a container, run a container that has the volumes from the compilation container attached, copy the compiled sources to a volume on that container, and push it to your Docker registry:
docker run --name=release-${RELEASE} --volumes-from=compile-${RELEASE} -v /srv/public busybox cp -R /myapp/dist /srv/public
docker commit release-${RELEASE} myapp:${RELEASE}
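On newer Docker versions, a multi-stage build gets you the same separation with far less ceremony than data volume containers. A minimal sketch, assuming angular-cli is in devDependencies and the production build lands in dist/ (both assumptions; adjust to your project):
# Build stage: full Node toolchain
FROM node AS build
WORKDIR /opt/client-ui
COPY package.json angular-cli.json tslint.json ./
COPY src/ ./src
RUN npm install
RUN ./node_modules/.bin/ng build --prod --aot
# Final stage: only the compiled assets, served by a small webserver
FROM nginx:alpine
COPY --from=build /opt/client-ui/dist /usr/share/nginx/html
EXPOSE 80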

Try FROM mhart/alpine-node:base-6; maybe it will work.

Related

Jest Unit Testing in Docker Container

I am trying to move my tests into my Docker image build stage, but the build seems to ignore my test stage entirely and just skips it when I build the image.
What can the problem be?
# Base build
FROM node:16.13 AS build
RUN mkdir /app
WORKDIR /app
ADD package*.json ./
RUN npm i -g npm@6.14.15
RUN npm i --production
COPY . .
RUN npm run build
# Clean
RUN npm prune --production
RUN npm install -g node-prune
RUN node-prune
# Test
FROM build AS test
RUN npm i -g npm@6.14.15
RUN npm i -D jest typescript
RUN npm i -D ts-jest @types/jest
RUN npm i -D @shelf/jest-mongodb
RUN npx jest
# Release
FROM node:16.13-alpine
RUN mkdir /app
WORKDIR /app
COPY --from=build /app/dist ./dist/
COPY --from=build /app/node_modules ./node_modules/
COPY --from=build /app/package.json ./
CMD npm start
Above you can see my Dockerfile, where I prepare the build, then plan to run the tests, and after that build my release image.
I've already been playing around with this for hours (I cleared the cache and tweaked the order of instructions in the file), but it didn't help. It keeps ignoring my test stage.
Any hint on that, and on my Dockerfile in general, is welcome.
Docker internally has two different systems for building images. Newer versions of Docker default to a newer system called BuildKit. One of the major differences is the way the two systems handle multi-stage builds: the older system just runs all of the build stages from top to bottom, but BuildKit notices that the final stage only does COPY --from=build and uses nothing from the test stage, so it simply skips that stage.
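If you just want to watch every stage run, including test, you can disable BuildKit for a single build and fall back to the legacy builder (a diagnostic step, not a recommendation):
# Force the legacy builder, which runs all stages top to bottom
DOCKER_BUILDKIT=0 docker build -t myapp .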
I wouldn't run these tests in a Dockerfile (since it doesn't produce a runnable artifact) or in Docker at all. I'd probably run them in my host-based development environment, before building a Docker image:
# On the host
npm install   # devDependencies include all test dependencies
npm test      # run the tests locally
# repeat until the tests pass (without involving a container)
# Build the image _after_ the tests pass
docker build -t myapp .
docker build also has a --target option to name a specific stage, so you could tell Docker to "build the test image", but you'd just immediately delete it. This is where I don't think Docker quite makes sense as a test runner.
docker build -t delete-me --target test .
docker build -t my-app .
docker rmi delete-me
Your Dockerfile also installs a MongoDB test helper. If the tests depend on a live database rather than a mock implementation, the other thing to be aware of is that code running during a Dockerfile build can never connect to other containers; in that case you have to run the tests from somewhere else.
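If you did want the tests to run inside Docker against a real database, one sketch is to run them at docker run time instead of build time. This assumes you change the test stage to end with CMD npx jest rather than RUN npx jest, and that your tests read the connection string from a MONGO_URL environment variable; both are assumptions, not part of the Dockerfile above:
# Put the database and the test container on a shared network
docker network create test-net
docker run -d --name test-mongo --network test-net mongo:5
# Build only the test stage; jest now runs at container start, not at build
docker build -t myapp-test --target test .
# Run the tests; they can reach the database by the name "test-mongo"
docker run --rm --network test-net -e MONGO_URL=mongodb://test-mongo:27017/test myapp-test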

npm command not found error while running docker container

I am trying something out with the gitlab-runner image.
FROM gitlab/gitlab-runner:alpine
WORKDIR /app
COPY . /app
RUN apk add yarn && yarn install
RUN yarn --version # this layer prints 1.16.0
RUN ng build --prod
EXPOSE 3000
CMD ["yarn", "run", "start"]
Above is the Dockerfile I have created.
docker build -t runner:1 .
I was able to build the image successfully
docker run -p 3000:3000 runner:1
but when I try to run the container it gives me the error below:
FATAL: Command yarn not found.
I am not sure about this behavior: if it is able to install yarn (apk add yarn) in the base image and install the dependencies using yarn install, then how is it not able to find the yarn command while running the container? Where am I going wrong?
Also, in which directory is yarn installed in Alpine?
I know it is not an efficient Dockerfile, but I am trying to get the container running first before optimizing it.
It outputs the version, which means yarn is already installed. You can find its path the same way you found the version:
RUN which yarn
Step 6/10 : RUN which yarn
---> Running in 0f633b81f2ed
/usr/bin/yarn
We can see that /usr/bin has been added to the PATH:
Step 7/11 : RUN echo $PATH
---> Running in fc3f40b6bfd9
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
But I couldn't figure out why it isn't picking yarn up from the PATH.
So, we set the PATH explicitly in our Dockerfile:
ENV PATH=${PATH}
But still the issue persists. Now we separate the yarn command and its arguments into ENTRYPOINT and CMD respectively in the Dockerfile:
ENTRYPOINT ["yarn"]
CMD ["run", "start"]
Updated Dockerfile
FROM gitlab/gitlab-runner:alpine
ENV PATH=${PATH}
WORKDIR /app
COPY . /app
RUN apk add yarn && yarn install
RUN yarn --version # this layer prints 1.16.0
RUN ng build --prod
EXPOSE 3000
ENTRYPOINT ["yarn"]
CMD ["run", "start"]
Running it:
$ docker run -p 3000:3000 harik8/yarn:latest
yarn run v1.16.0
error Couldn't find a package.json file in "/app"
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
The behaviour comes from the base image: gitlab/gitlab-runner sets its own ENTRYPOINT, so the CMD is handed to the gitlab-runner binary as a subcommand rather than run as a shell command, which is why it logs FATAL: Command yarn not found. It'd be better to go through the base image before extending it.
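One quick way to confirm what a base image does with CMD is to inspect its entrypoint:
docker inspect --format '{{.Config.Entrypoint}}' gitlab/gitlab-runner:alpine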
To build your app you shouldn't use the gitlab-runner image, but a node one.
The gitlab-runner image is for running a GitLab agent, which can be connected to the Docker engine and spawn the node container in which you execute your build; in your case, a Docker image build.
To use GitLab you need to prepare a gitlab-ci file defining the steps and the 'services' you need for your build.
TL;DR: change the base image to node:latest and, as an entirely separate piece of work, set up the GitLab runner.
However, if your aim really is to extend gitlab-runner with your application, try Docker multi-stage builds:
first use a node:latest image to build your app, and then copy the build output into gitlab-runner.
Runtime images such as gitlab-runner are stripped of build tools like yarn and npm, which is why your image fails. The main goal is to keep runtime images as small as possible; SDKs are not needed there, and are sometimes dangerous in production-level work.
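A minimal sketch of that multi-stage approach, assuming the app builds into dist/ via a yarn build script (script name and output directory are assumptions):
# Build stage: full Node toolchain
FROM node:latest AS builder
WORKDIR /app
COPY . .
RUN yarn install && yarn build
# Runtime stage: only the build output lands in the runner image
FROM gitlab/gitlab-runner:alpine
COPY --from=builder /app/dist /app/dist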

serve a gridsome website from within the container

I have a static website built with Gridsome. I'm experimenting with Docker containerization and came up with the following Dockerfile:
FROM node:14.2.0-alpine3.10
RUN apk update && apk upgrade
RUN apk --no-cache add git g++ gcc libgcc libstdc++ linux-headers make python yarn
ENV NPM_CONFIG_PREFIX=/home/node/.npm-global
USER node
RUN npm i --global gridsome
RUN yarn global add serve
COPY --chown=node:node gridsome-website/ /home/node/build/
RUN echo && ls /home/node/build/ && echo
WORKDIR /home/node/build
USER node
RUN npm cache clean --force
RUN npm clean-install
# My attempt to serve the website from within, build is successful but serve command doesn't work
CMD ~/.npm-global/bin/gridsome build && serve -d dist/
Gridsome uses port 8080 by default. After running the container via:
docker run --name my-website -d my-website-image
it doesn't fail, but I can't access my website at http://localhost:8080; the container stops right after I run it. I tried copying the dist/ folder out of my container via:
$ docker cp my-website:/home/node/build/dist ./dist
then serving it manually from the terminal using:
$ serve -d dist/
It works from the terminal, but why does it fail from within the container?
The point of Gridsome is that it produces a static site you can host anywhere you can put files; you don't need Node.js or anything beyond a simple webserver to host it.
If you want a clean production Docker image that serves the static files built by Gridsome, that's a good use case for a multi-stage Dockerfile: in the first stage you build your Gridsome project, and in the second and final stage you use a clean nginx image that contains your dist folder and nothing else.
It could be as simple as this:
FROM node:current AS builder
WORKDIR /build
COPY gridsome-website ./
RUN npm install && gridsome build
FROM nginx:stable
COPY --from=builder /build/dist /usr/share/nginx/html/
EXPOSE 80
After you build and run the image with 8080->80 port mapping, you can access your site at http://127.0.0.1:8080.
docker build -t my-website-image .
docker run -p 8080:80 my-website-image
It's probably a good idea to set up a .dockerignore file as well that ignores at least node_modules.
$ cat .dockerignore
gridsome-website/node_modules

How can I copy node_modules folder out of my docker container onto the build machine?

I am moving an application to a new build pipeline. On CI I am not able to install node to complete the NPM install step.
My idea is to move the npm install step into a Docker image that has Node, install the node modules there, and then copy the node modules back to the host so another process can package up the application.
This is my Dockerfile:
FROM node:9
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY ./dashboard-interface/package.json /usr/src/app/
RUN npm install --silent --production
# Bundle app src
COPY node_modules ./dashboard-interface/node_modules #I thought this would copy the new node_modules back to the host
This runs fine and installs the node modules, but when I try to copy the node_modules directory back to the host I see an error saying:
COPY node_modules ./dashboard-interface/node_modules
COPY failed: stat /var/lib/docker/tmp/docker-builder718557240/node_modules: no such file or directory
So it's clear that the copy process cannot find the node_modules directory into which it has just installed the node modules.
According to the documentation, the COPY instruction copies files from the host into the container, not the other way around.
If you want files from the container to be available outside it, you can use volumes. Volumes give your container storage that is independent of the container itself, so you can reuse it for other containers in the future.
Let me try to solve the issue you are having.
Here is the Dockerfile:
# Use alpine for a slimmer image
FROM node:9-alpine
RUN mkdir /app
WORKDIR /app
COPY dashboard-folder/package.json .
RUN npm i --production
This assumes your project structure is like so:
|root
| Dockerfile
|
\---dashboard-folder
package.json
where root is your working directory that will receive node_modules.
Build the image with docker build . -t NAME and subsequently use it like so:
docker run -it --rm -v ${PWD}:/app/root NAME mv node_modules ./root
That should do the trick.
The simple and sure way is to do volume mapping. For example, a docker-compose.yml file would have a volumes section that looks like this:
volumes:
  - ./:/usr/src/app
  - /usr/src/app/node_modules
For the docker run command, use:
-v $(pwd):/usr/src/app
and in the Dockerfile, define:
VOLUME /usr/src/app
VOLUME /usr/src/app/node_modules
But confirm first that running npm install actually created the node_modules directory on the host system.
Whether you hit this problem mainly depends on the OS running on your host. If your host runs Linux, there is no problem; but if it is Mac or Windows, Docker actually runs inside a VM that is hidden from you, so paths cannot be mapped directly to the host system. Instead, you can use a volume.
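A different technique worth knowing (standard Docker CLI, not from the answers above) avoids volumes entirely: build the image, create a container from it without starting it, and docker cp the directory out. The image name and path here are illustrative:
# Build an image whose node_modules we want to extract
docker build -t deps-image .
# Create (but don't start) a container, copy the directory out, clean up
docker create --name deps deps-image
docker cp deps:/usr/src/app/node_modules ./node_modules
docker rm deps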

How do I get node.js "live editing" to work with Docker

My weekend project was to explore Docker and I thought a simple node.js project would be good. By "live edit" I mean I'd like to be able to manipulate files on my host system and (with as little effort as possible) see the Docker container reflect those changes immediately.
The "Dockerizing a Node.js web app" tutorial went smoothly, and then the Googling and thrashing began. I think I now know the following:
If I use the ADD method noted in the Node.js tutorial, then I can't have live edit, because ADD is fulfilled completely at docker build (not at docker run).
If I mount the node project's directory with something like -v `pwd`:/usr/src/app, then it won't run, because either node_modules doesn't exist (the volume is not available at build time to get populated, since -v is a docker run argument) or I need to prepopulate node_modules in the host's project directory, which just doesn't feel right and could have OS compatibility issues.
My mad newbie thrashing can be distilled to three attempts, each with its own drawbacks or apparent failures.
1) The Node.js tutorial approach using ADD. Works perfectly, but no "live edit". I have no expectation this should be what I need, but it at least proved I have the basic wiring in place and running.
FROM node:argon
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install
# Bundle app source
COPY . /usr/src/app
EXPOSE 8080
CMD [ "npm", "start" ]
2) Build the node dependencies as globals from the Dockerfile. This seemed less cool but reasonable (since I wouldn't expect to change dependencies often). However, it simply didn't work, which really surprised me.
FROM node:argon
RUN npm install -g express
WORKDIR /usr/src/app
# which will be added via `docker run -v...`
EXPOSE 8080
3) Upon build, ADD only the package.json to a temporary location, get node_modules set up there, move it into the project directory, then mount the project directory with -v `pwd`:/usr/src/app. If this had worked, I'd have added nodemon and would theoretically have what I want. This seemed to me by far the most clever and commonsense approach, but it simply didn't work. I monkeyed with a few things attempting to fix it, including host directory permissions, but had no joy.
FROM node:argon
WORKDIR /usr/src/app
# Provides cached layer for node_modules
ADD package.json /tmp/package.json
RUN cd /tmp && npm install
RUN cp -a /tmp/node_modules /usr/src/app/
EXPOSE 8080
I suspect there are multiple instances of me not understanding some basic concept, but as I searched around it seemed like there were a lot of different approaches, sometimes complicated by additional project requirements. I was, perhaps foolishly, trying to keep it simple. :)
Stats:
Running on Mac OS 10.10
Docker 1.12.0-rc2-beta17 (latest at time of writing)
Kitematic 0.12.0 (latest at time of writing)
Node.js 4.4.7 (latest pre-6 at time of writing)
UPDATE: Attempting to pull together some of what I've learned, I've done the following and had better luck. It now builds, but docker run -v `pwd`:/usr/src/app -p 49160:8080 -d martink/node-docker3 doesn't stay running. However, I can "Run" and "Exec" in Kitematic, and in a shell I can see that node_modules looks good and has been moved into the right place; I can manually run node server.js and have joy.
FROM node:argon
# Copy over the host's project files
COPY . /usr/src/app
# This provides a starting point, but will later be overridden by `-v`, I hope
# Use this app directory moving forward through this file
WORKDIR /usr/src/app
# PULL TOGETHER `NODE_MODULES`
## Grab the package.json from the host and copy into `tmp`
COPY package.json /tmp/package.json
## Use that to get `node_modules` set up in `tmp`
RUN cd /tmp && npm install
## Copy that resulting node_modules into the WORKDIR
RUN cp -a /tmp/node_modules /usr/src/app/
EXPOSE 8080
I think my questions might now be whittled down to...
How do I make server.js start when I run this?
How do I see "live edits" (possibly start this with nodemon)?
It appears this Dockerfile gets me what I need.
FROM node:argon
# Adding `nodemon` as a global so it's available to the `CMD` instruction below
RUN npm install -g nodemon
# Copy over the host's project files
COPY . /usr/src/app
# This provides a starting point, but will later be overridden by `docker run -v...`
# Use this app directory moving forward through this file
WORKDIR /usr/src/app
# PREPARE `NODE_MODULES`
## Grab the `package.json` from the host and copy into `tmp`
COPY package.json /tmp/package.json
## Use that to get `node_modules` set up
RUN cd /tmp && npm install
## Copy that resulting `node_modules` into the WORKDIR
RUN cp -a /tmp/node_modules /usr/src/app/
EXPOSE 8080
CMD [ "nodemon", "./server.js" ]
I build this like so:
docker build -t martink/node-docker .
And run it like so:
docker run -v `pwd`:/usr/src/app -p 49160:8080 -d martink/node-docker
This...
Stays running
Responds as expected at http://localhost:49160
Immediately picks up changes made to server.js on the host machine
I'm happy that it seems I have this working. If there are any bad practices in there, I'd appreciate feedback. :)
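One refinement worth mentioning, a common pattern rather than something tested in this thread: mounting an anonymous volume over node_modules stops the host bind mount from shadowing the modules baked into the image, since Docker seeds the anonymous volume from the image's contents at that path:
# The second -v creates an anonymous volume over node_modules,
# seeded from the image, so the bind mount can't hide it
docker run -v `pwd`:/usr/src/app -v /usr/src/app/node_modules -p 49160:8080 -d martink/node-docker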
