How should I accomplish a better Docker workflow? - node.js

Every time I change a file in the Node.js app I have to rebuild the Docker image.
This feels redundant and slows down my workflow. Is there a proper way to sync the Node.js app files without rebuilding the whole image again, or is this normal usage?

It sounds like you want to speed up the development process. In that case I would recommend mounting your directory into the container using the docker run -v option: https://docs.docker.com/engine/userguide/dockervolumes/#mount-a-host-directory-as-a-data-volume
Once you are done developing your program, build the image and start the container without the -v option.
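For example, once development is finished (the image name and port are illustrative):
docker build -t my-app .
docker run -p 3000:3000 my-app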

What I ended up doing was:
1) Using volumes with the docker run command, so I could change the code without rebuilding the Docker image every time.
2) I had an issue with node_modules being overwritten, because a volume acts like a mount over the app directory. I fixed it by installing the modules one level up and relying on Node's module resolution, which walks up parent directories until it finds a node_modules folder.
Dockerfile:
FROM node:5.2
# Create our app directories
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
RUN npm install -g nodemon
# Installing one level up (in /usr/src) caches npm install and persists
# node_modules even after the volume overwrites /usr/src/app,
# because Node resolves modules from parent directories
COPY package.json /usr/src/
RUN cd /usr/src && npm install
# Expose node's port
EXPOSE 3000
# Run the app
CMD nodemon server.js
Command-line:
to build:
docker build -t web-image .
to run:
docker run --rm -v $(pwd):/usr/src/app -p 3000:3000 --name web web-image

You could also reorder the instructions: first COPY only the package.json file from the build context into the container's working directory, then RUN npm install, and only afterwards COPY over everything else, like so:
# Specify base image
FROM node:alpine
WORKDIR /usr/app
# Install some dependencies
COPY ./package.json ./
RUN npm install
# Copy over everything else
COPY ./ ./
# Set up default command
CMD ["npm", "start"]
With this ordering, you can make as many changes as you want to your source code and it will not invalidate the cache for any of the steps above the final COPY.
The only time npm install will be executed again is if you make a change to that step or any step above it.
So unless you make a change to the package.json file, npm install will not be executed again.
We can test this by running docker build -t <tagname>/<project-name> . again.
I have made a change to the Dockerfile, so you will see some steps re-run, and eventually the image is successfully built and tagged.
Docker detected the change to that step and re-ran it and every step after it, but not the npm install step.
The lesson here is that the order in which instructions are placed in a Dockerfile really does make a difference.
It's good to segment these operations so that each step copies only the bare minimum it needs.
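To see this for yourself, you could build twice and change only a source file in between (the tag name is illustrative):
docker build -t myuser/myproject .
# edit a source file (not package.json), then rebuild
docker build -t myuser/myproject .
# the FROM, WORKDIR, COPY package.json and RUN npm install steps are
# taken from the cache; only the final COPY and the steps after it re-run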

Related

How to avoid npm build node.js packages every time when creating a Docker image via Circle CI?

I deploy my Node.js app via an AWS ECS Docker container using CircleCI.
However, each time I build a new image it runs npm install (because it's in my Dockerfile) and downloads and builds all the node modules again every time. Then it uploads a new image to the AWS ECS repository.
As my environment stays the same, I don't want it to build those packages every time. So do you think it is possible for Docker to update an existing image rather than building a new one from scratch with all the modules every time? Is this generally good practice?
I was thinking of the following workflow:
Check whether there are any new Node packages compared to the previous image
If yes, run npm install
If not, keep the old node_modules folder, skip the install, and simply update the code
Deploy
What would be the best way to do that?
Here's my Dockerfile
FROM node:12.18.0-alpine
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . .
COPY package.json package-lock.json* ./
RUN npm install
RUN npm install pm2 -g
EXPOSE 3000
CMD [ "pm2-runtime", "ecosystem.config.js"]
My CircleCI workflow (from .circleci/config.yml):
workflows:
  version: 2.1
  test:
    jobs:
      - test
      - aws-ecr/build-and-push-image:
          create-repo: true
          no-output-timeout: 10m
          repo: 'stage-instance'
Move the COPY . . line after the RUN npm install line.
The way Docker's layer caching works, it will skip re-running a RUN line if it knows that it's already run it. So given this Dockerfile fragment:
FROM node:12.18.0-alpine
WORKDIR /usr/src/app
COPY package.json package-lock.json* ./
RUN npm install
Docker keeps track of the hashes of the files it COPYs in. When it gets to the RUN line, if the image up to this point is identical to one it's previously built, it will also skip over the RUN line.
If you have COPY . . first, then if any file in your source tree changes, it will invalidate the layer cache for everything afterwards. If you only copy package.json and the lock file first, then npm install only gets re-run if either of those two files change.
(CircleCI may or may not perform the same layer caching, but "install dependencies, then copy the application in" is a typical Docker-level optimization.)
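Applied to the Dockerfile from the question, the reordered version would look something like this (a sketch: the same instructions with the code copy moved after the installs; the mkdir is dropped because WORKDIR creates the directory):
FROM node:12.18.0-alpine
WORKDIR /usr/src/app
# Copy only the dependency manifests first so the install layers stay cached
COPY package.json package-lock.json* ./
RUN npm install
RUN npm install pm2 -g
# Copy the application code last; code changes no longer invalidate npm install
COPY . .
EXPOSE 3000
CMD [ "pm2-runtime", "ecosystem.config.js"]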

Why can't I install node_modules with the docker-compose run command?

I have an Express application, and I use docker-compose to run it. To run my app I use the command:
docker-compose up
If I run it for the first time and don't have any node_modules, I get an error in the terminal, something like "The module 'express' was not found, please install it and try again...". So I just open one more terminal and run:
docker-compose exec backend npm i
The modules are installed within a few seconds, and my app starts working in the previous terminal. I always use this method, but now I have found the run command for docker-compose. It lets you execute a command in a service's container even when the stack is not up. So I wanted to try this command: I deleted the ./node_modules directory, stopped all containers, closed all terminals, then opened a terminal and ran:
docker-compose run backend npm i
The modules started to install, but after about 10 minutes the installation stopped in the middle. I don't understand why. If I use up plus npm i in a second terminal it works, but with the run command it does not. What am I doing wrong?
You should not install your node modules in a running container. Instead, you should install them in your image via your Dockerfile, and then run it via docker or docker-compose.
Your Dockerfile should look something like this:
# Use the version of Node you run in development
FROM node:10
# Replace this with your app code path
WORKDIR /usr/src/app
COPY package.json /usr/src/app
RUN npm install
# Again, use your own path
COPY app-code/ /usr/src/app/app-code
EXPOSE 3000
CMD ["npm", "start"]
You have to run npm install from your Dockerfile rather than copying in your development node_modules folder, because the environment inside the container may differ from your development environment.
Then you can just run it from your docker-compose file.
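For completeness, a minimal docker-compose.yml for this setup might look like the following (the backend service name comes from the question's commands; the build context and port mapping are assumptions):
version: "3"
services:
  backend:
    build: .          # build the image from the Dockerfile above
    ports:
      - "3000:3000"   # adjust to the port your app listens on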

Docker build from Dockerfile hangs indefinitely and occasionally crashes with error 'failed to start service utility VM'

I am currently using Docker Desktop for Windows and following this tutorial on using Docker and VS Code (https://scotch.io/tutorials/docker-and-visual-studio-code). When I attempt to build the image, the daemon is able to complete the first step of the Dockerfile but then hangs indefinitely on the second step. Sometimes, though very rarely, after an indeterminate amount of time it errors out with this error:
failed to start service utility VM (createreadwrite): CreateComputeSystem 97cb9905dbf6933f563d0337f8321c8cb71e543a242cddb0cb09dbbdbb68b006_svm: The operation could not be started because a required feature is not installed.
(extra info: {"SystemType":"container","Name":"97cb9905dbf6933f563d0337f8321c8cb71e543a242cddb0cb09dbbdbb68b006_svm","Layers":null,"HvPartition":true,"HvRuntime":{"ImagePath":"C:\\Program Files\\Linux Containers","LinuxInitrdFile":"initrd.img","LinuxKernelFile":"kernel"},"ContainerType":"linux","TerminateOnLastHandleClosed":true})
I have made sure that virtualization is enabled on my machine, uninstalled and reinstalled Docker, uninstalled Docker and deleted all files related to it before reinstalling, and made sure that the experimental features are enabled. These are fixes I found on various forums from others who have had the same issue.
Here is the Dockerfile that I am trying to build from. I have double-checked against the tutorial that it is correct, though it's still possible that I missed something (outside of the version number in the FROM line).
FROM node:10.13-alpine
ENV NODE_ENV production
WORKDIR /usr/src/app
COPY ["package.json", "package-lock.json*", "npm-shrinkwrap.json*", "./"]
RUN npm install --production --silent && mv node_modules ../
COPY . .
EXPOSE 3000
CMD npm start
I would expect the image to build correctly, as I have followed the tutorial to a T. I have even fully reset and started the tutorial over again, and I'm still getting this same issue where it hangs indefinitely.
Well, you copy some files twice; I would not do that.
So, as the minimum change to your Dockerfile, I would try:
FROM node:10.13-alpine
ENV NODE_ENV production
WORKDIR /usr/src/app
COPY . .
RUN npm install --production --silent && mv node_modules ../
EXPOSE 3000
CMD npm start
I would also think about whether the && mv node_modules ../ part is really needed.
If you don't have one already, I advise you to write a .dockerignore file right next to your Dockerfile, with at least the following content:
/node_modules
so that your local node_modules directory does not also get copied while building the image (this saves time).
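A slightly fuller .dockerignore for a typical Node.js project might also exclude version-control metadata and logs (the entries beyond /node_modules are suggestions, not requirements):
/node_modules
.git
npm-debug.log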
Hope this helps.

Cannot build Docker image

I have been trying to build a Docker image by using this Dockerfile:
FROM mhart/alpine-node:base-6
MAINTAINER techhadmin
COPY ./package.json src/
RUN cd src && npm install
COPY . /src
WORKDIR /src
EXPOSE 3000
CMD ["npm", "start"]
But I receive this error:
/bin/sh: npm: not found
The command '/bin/sh -c cd src && npm install' returned a non-zero code: 127
Any idea how I can solve this?
Read the docs:
https://hub.docker.com/r/mhart/alpine-node/
There it is written:
# If you need npm, don't use a base tag
# RUN npm install
So don't use the base-6 tag; change the FROM image to a tag that includes npm, such as 7:
FROM mhart/alpine-node:7
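Applied to the Dockerfile from the question, only the FROM line changes:
FROM mhart/alpine-node:7
MAINTAINER techhadmin
COPY ./package.json src/
RUN cd src && npm install
COPY . /src
WORKDIR /src
EXPOSE 3000
CMD ["npm", "start"]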
You are seeing this error message because when you try to run npm install, there is no copy of npm available in the image.
You are using Alpine as the base image.
By default, Alpine is a small image with a limited set of default programs inside of it. What programs are in the alpine image? Not many.
So if you are trying to run Node.js on an Alpine image, you need to do additional work.
To solve it, you have two options:
Find a different base image that already has Node and npm inside of it.
Start from Alpine and add commands that install Node and npm inside of it (see the sketch below).
In other words, use someone else's work or build it up yourself.
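For the second option, a minimal sketch might look like this (package names as in current Alpine repositories; older Alpine releases bundled npm with nodejs):
FROM alpine
# Install Node.js and npm from Alpine's package repository
RUN apk add --no-cache nodejs npm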
I recommend finding an image preconfigured with npm inside of it. You can browse Docker Hub, which is a repository of images.
There is an official Node repository on Docker Hub:
https://hub.docker.com/_/node
So you could do something like this:
# Specify base image
FROM node:alpine
WORKDIR /usr/app
# Install some dependencies
COPY ./package.json ./
RUN npm install
# Set up default command
CMD ["npm", "start"]
The nice thing about node:alpine is that you will not get any unnecessary extra packages, just a stripped-down version of Node.js and nothing else beyond the basics such as the ping command, cat, ls, and so on.

ELF Header or installation issue with bcrypt in Docker container

Kind of a long shot, but has anyone had problems using bcrypt in a Linux container (specifically Docker) and know of an automated workaround? I have the same issue as these two:
Invalid ELF header with node bcrypt on AWSBox
bcrypt invalid elf header when running node app
My Dockerfile
# Pull base image
FROM node:0.12
# Expose port 8080
EXPOSE 8080
# Add current directory into path /data in image
ADD . /data
# Set working directory to /data
WORKDIR /data
# Install dependencies from package.json
RUN npm install --production
# Run index.js
CMD ["npm", "start"]
I get the previously mentioned invalid ELF header error if I already have bcrypt installed in my node_modules, but if I remove it (either just bcrypt or all my packages), for some reason it isn't installed when I build the container. I have to manually enter the container after the build and install it there.
Is there an automated workaround?
Or maybe, just, what would be a good alternative to bcrypt with a Node stack?
Liam's comment is on the money; I'm just expanding on it for future travellers on the internets.
The issue is that you've copied your node_modules folder into your container. The reason that this is a problem is that bcrypt is a native module. It's not just javascript, but also a bunch of C code that gets compiled at the time of installation.
The binaries that come out of that compilation get stored in the node_modules folder and they're customised to the place they were built. Transplanting them out of their OSX home into a strange Linux land causes them to misbehave and complain about ELF headers and fairy feet.
The solution is to echo node_modules >> .dockerignore and run npm install as part of your Dockerfile. This means that the native modules will be compiled inside the container rather than outside it on your laptop.
With this in place, there is no need to run npm install before your start CMD. Just having it in the build phase of the Dockerfile is fine.
protip: the official node images set NODE_ENV=production by default, which npm treats the same as the --production flag. Most of the time this is a good thing. It is not a good thing when your Dockerfile also contains some build steps that rely on dev dependencies (webpack, etc). In that case you want NODE_ENV=null npm install
pro protip: you can take better advantage of Docker's caching by copying in your package.json separately from the rest of your code. Make your Dockerfile look like this:
# Pull base image
FROM node:0.12
# Expose port 8080
EXPOSE 8080
# Set working directory to /data
WORKDIR /data
# Copy package.json into /data
COPY package.json /data
# Install dependencies from package.json
RUN npm install
# Add current directory into path /data in image
ADD . /data
# Run index.js
CMD npm start
And that way Docker will only re-run npm install when you change your package.json, not every time you change a line of code.
Okay, so I have a working automated workaround:
Call npm install --production in the CMD instruction. I'm going to wave my hands at figuring out why I have to install bcrypt at the time the container starts, but it works.
Updated Dockerfile
# Pull base image
FROM node:0.12
# Expose port 8080
EXPOSE 8080
# Add current directory into path /data in image
ADD . /data
# Set working directory to /data
WORKDIR /data
# Install dependencies from package.json
RUN npm install --production
# Run index.js
CMD npm install --production; npm start
Add this instruction before RUN npm install in your Dockerfile:
RUN apk --no-cache add --virtual builds-deps build-base python3
It worked for me. Maybe it will work for you :)
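In context, the relevant part of a Dockerfile using this fix might look like the following sketch (the base image must be Alpine-based, since apk is Alpine's package manager):
FROM node:alpine
WORKDIR /usr/src/app
COPY package.json ./
# Install the toolchain bcrypt needs to compile its native bindings
RUN apk --no-cache add --virtual builds-deps build-base python3
RUN npm install
COPY . .
CMD ["npm", "start"]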
