Workflow for Node.js development with Docker

I'm developing a webapp and I need Node for my development environment.
I don't want a Docker production container, but a development one: I need to share files between the Docker container and my local development machine, and I don't want to re-run Docker each time I change a source file.
Currently my dockerfile is:
#React development
FROM node:4.1.1-wheezy
MAINTAINER xxxxx
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && apt-get -y install sudo locales apt-utils
RUN locale-gen es_ES.UTF-8
RUN dpkg-reconfigure locales
RUN echo Europe/Madrid | sudo tee /etc/timezone && sudo dpkg-reconfigure --frontend noninteractive tzdata
ADD src/package.json /tmp/package.json
RUN cd /tmp && npm install
RUN mkdir -p /src && cp -a /tmp/node_modules /src/
WORKDIR /src
EXPOSE 3000
VOLUME /src
I need a directory for all my source files (shared via a data volume). I also need to execute npm install in my Dockerfile so I get my node_modules directory inside my sources directory (/src/node_modules).
However, when I mount a host directory as a data volume, the /src directory already exists inside the container's image, so its contents are replaced by the contents of the /src directory on the host, and I don't have my /src/node_modules directory anymore:
docker run -it --volumes-from data-container --name node-dev user/dev-node /bin/bash
My host directory doesn't have a node_modules directory because I get the sources through GitHub, and node_modules isn't synced there since it's quite a heavy directory.

My solution is to copy node_modules directory using an ENTRYPOINT directive.
docker-entrypoint.sh:
#!/bin/bash
# On first run, populate the bind-mounted /src with the node_modules
# that was baked into the image at build time.
if ! [ -d node_modules ]; then
  cp -a /tmp/node_modules /src/
fi
exec "$@"
Two lines are added to the Dockerfile:
COPY docker-entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
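For completeness, here's roughly how this behaves at run time (a sketch; the bind-mounted host path is an assumption, the image tag and container name are taken from the question):
docker build -t user/dev-node .
# First run against a fresh host checkout: the entrypoint finds no node_modules
# in the mounted /src and copies in the set installed at build time.
docker run -it -v $(pwd)/src:/src --name node-dev user/dev-node /bin/bash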

Disclaimer: the following has nothing to do with Docker specifically.
I use SSHFS. The Ubuntu Wiki has a pretty good description:
SSHFS is a tool that uses SSH to enable mounting of a remote
filesystem on a local machine; the network is (mostly) transparent to
the user.
Not sure if this is useful in your scenario, but all my hosts run an SSH server anyway, so it was a no-brainer for me. The win-sshfs project doesn't seem to be actively developed anymore, but it still runs fine on Windows 8/10 (though the setup is a little weird). OS X and Linux both have better support for this through FUSE.
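For reference, a typical mount/unmount cycle looks like this (a sketch; the user, host, and paths are placeholders you'd substitute with your own):
mkdir -p ~/mnt/project
# Mount the remote project directory over SSH
sshfs user@devbox:/home/user/project ~/mnt/project
# ... work on the files locally ...
# Unmount when done (Linux; on OS X use: umount ~/mnt/project)
fusermount -u ~/mnt/project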

Related

serve a gridsome website from within the container

I have a static website built with Gridsome. I'm experimenting with Docker containerization, and I came up with the following Dockerfile:
FROM node:14.2.0-alpine3.10
RUN apk update && apk upgrade
RUN apk --no-cache add git g++ gcc libgcc libstdc++ linux-headers make python yarn
ENV NPM_CONFIG_PREFIX=/home/node/.npm-global
USER node
RUN npm i --global gridsome
RUN yarn global add serve
COPY --chown=node:node gridsome-website/ /home/node/build/
RUN echo && ls /home/node/build/ && echo
WORKDIR /home/node/build
USER node
RUN npm cache clean --force
RUN npm clean-install
# My attempt to serve the website from within, build is successful but serve command doesn't work
CMD ~/.npm-global/bin/gridsome build && serve -d dist/
Gridsome uses port 8080 by default. After running the container via:
docker run --name my-website -d my-website-image
it doesn't fail, but I can't access my website at http://localhost:8080; the container stops execution right after I run it. I tried copying the dist/ folder from my container via:
$ docker cp my-website:/home/node/build/dist ./dist
then serving it manually from terminal using:
$ serve -d dist/
It works from terminal, but why does it fail from within the container?
The point of Gridsome is that it produces a static site that you can host anywhere you can put files, right? You don't need Node.js or anything except a simple web server to host it.
If you want to produce a clean production Docker image to serve the static files built by Gridsome, then that's a good use-case for a multistage Dockerfile. In the first stage you build your Gridsome project. In the second and final stage you can go with a clean nginx image that contains your dist folder and nothing else.
It could be as simple as this:
FROM node:current AS builder
WORKDIR /build
COPY gridsome-website ./
RUN npm install && npx gridsome build
FROM nginx:stable
COPY --from=builder /build/dist /usr/share/nginx/html/
EXPOSE 80
After you build and run the image with 8080->80 port mapping, you can access your site at http://127.0.0.1:8080.
docker build -t my-website-image .
docker run -p 8080:80 my-website-image
It's probably a good idea to set up a .dockerignore file as well that ignores at least node_modules.
$ cat .dockerignore
gridsome-website/node_modules

Nodemon inside docker container

I'm trying to use nodemon inside docker container:
Dockerfile
FROM node:carbon
RUN npm install -g nodemon
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD [ "nodemon" ]
Build/Run command
docker build -t tag/apt .
docker run -p 49160:8080 -v /local/path/to/apt:/usr/src/app -d tag/apt
Attaching a local volume to the container to watch for changes in the code results in an override, and nodemon complains that it can't find the node modules (any of them). How can I solve this?
In your Dockerfile, you run npm install after copying your package*.json files. A node_modules directory gets correctly created in /usr/src/app and you're good to go.
When you mount your local directory on /usr/src/app, though, the contents of that directory inside your container are overridden by your local version of the Node project, which apparently lacks the node_modules directory, causing the error you are experiencing.
You need to run npm install in the running container after you've mounted your directory. For example, you could run something like:
docker exec -ti <containername> npm install
Please note that you'll have to temporarily change your CMD instruction to something like:
CMD ["sleep", "3600"]
in order to be able to enter the container.
This will cause a node_modules directory to be created in your local directory, and your container should then run nodemon correctly (after switching back to your current CMD).
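Putting the steps together (a sketch, reusing the build and run commands from the question; the container name apt-dev is an assumption):
# Temporarily set CMD ["sleep", "3600"] in the Dockerfile, then:
docker build -t tag/apt .
docker run -p 49160:8080 -v /local/path/to/apt:/usr/src/app -d --name apt-dev tag/apt
docker exec -ti apt-dev npm install   # populates node_modules in the mounted directory
# Restore CMD [ "nodemon" ], rebuild, and run as before.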
TL;DR: npm install in a sub-folder, while moving the node_modules folder to the root.
Try this configuration; it should help you:
FROM node:carbon
RUN npm install -g nodemon
WORKDIR /usr/src/app
COPY package*.json /usr/src/app/
RUN npm install && mv /usr/src/app/node_modules /node_modules
COPY . /usr/src/app
EXPOSE 8080
CMD [ "nodemon" ]
As the other answer said, even though npm install ran in your WORKDIR at build time, mounting the volume temporarily replaces the contents of the WORKDIR with your mounted folder, in which npm install has never run.
Since Node searches for required packages in several locations, a workaround is to move the 'installed' node_modules folder to the root, which is one of those require paths.
Doing so, you can still update code freely until you require a new package, at which point the image needs another build.
I reference the Dockerfile from this docker sample project.
In a JavaScript or Node.js application, when we bind-mount the source files into a Docker container (whether with the docker command or docker-compose), we end up overriding the node_modules folder. To overcome this, use an anonymous volume. With an anonymous volume we provide only the destination folder path, as opposed to a bind volume where we specify a source:destination pair.
General syntax
--volume <absolute path of the directory inside the container>[:<read/write access>]
An example docker run command
docker container run \
--rm \
--detach \
--publish 3000:3000 \
--name hello-dock-dev \
--volume $(pwd):/home/node/app \
--volume /home/node/app/node_modules \
hello-dock:dev
For further reference, check this handbook by Farhan Hasin Chowdhury.
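If you are using docker-compose instead of docker run, the equivalent looks like this (a sketch, assuming a service named app with the same image and paths as above):
version: '3'
services:
  app:
    image: hello-dock:dev
    ports:
      - "3000:3000"
    volumes:
      - .:/home/node/app            # bind mount for the sources
      - /home/node/app/node_modules # anonymous volume shields node_modules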
Maybe there's no need to mount the whole project. In this case, I would only mount the directory where I put all the source files, e.g. src/.
This way you won't have any problem with the node_modules/ directory.
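For example, with the question's image that might look like this (a sketch; only src/ is bind-mounted, so the node_modules installed in the image stays intact):
docker run -p 49160:8080 -v /local/path/to/apt/src:/usr/src/app/src -d tag/apt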
Also, if you are using Windows you may need to add the -L (--legacy-watch) option to the nodemon command, as you can see in this issue. So it would be nodemon -L.

access data volume of node modules outside docker container

I have a Docker setup for a Node.js web service which creates a data volume for the node modules like this:
volumes:
- ./service:/usr/src/service
- /usr/src/service/node_modules
The solution with the data volume works fine. But with a data volume, the node_modules are not accessible or visible on my host (outside the container), which creates other restrictions. For example, I cannot run tests in WebStorm, as they expect node_modules to be present.
Is there an alternative to a data volume for mounting the node modules that also allows accessing them on the host?
The Dockerfile looks like:
FROM node:6.3.1-slim
ARG NPM_TOKEN
ARG service
ENV NODE_ENV development
RUN apt-get update && apt-get -y install build-essential git-core python-dev vim
# use nodemon for development
RUN npm install --global nodemon
COPY .npmrc /root/.npmrc
RUN mkdir -p /usr/src/$service
RUN mkdir -p /modules/$service/
COPY sources/$service/package.json /modules/$service/package.json
RUN cd /modules/$service/ && npm install && rm -f /root/.npmrc && mv /modules/$service/node_modules /usr/src/$service/node_modules
WORKDIR /usr/src/$service
COPY sources/$service /usr/src/$service
ENV service=${service}
COPY start_services.sh /root/start_services.sh
COPY .env /root/.env
COPY services.sh /root/services.sh
RUN ["chmod", "+x", "/root/start_services.sh"]
RUN ["chmod", "+x", "/root/services.sh"]
CMD /root/start_services.sh
Specify node_modules as follows:
volumes:
- .:/usr/src/service/
- /usr/src/service/node_modules
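If the goal is mainly to make the installed modules visible to the host (e.g. so WebStorm can index them), one workaround beyond this answer is to copy them out of a running container once (a sketch; the container name service-dev is an assumption):
docker cp service-dev:/usr/src/service/node_modules ./service/node_modules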

Docker permission issue after copying files

I have the following Dockerfile:
FROM ubuntu:14.04
#Install Node
RUN apt-get update -y
RUN apt-get upgrade -y
RUN apt-get install nodejs -y
RUN apt-get install nodejs-legacy -y
RUN apt-get install npm -y
RUN update-alternatives --install /usr/bin/node node /usr/bin/nodejs 10
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# COPY distribution
COPY dist dist
COPY package.json package.json
# Substitute dependencies from environment variables
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
EXPOSE 8000
And here is the entrypoint script
#!/bin/sh
cp package.json /usr/src/app/dist/
cd /usr/src/app/dist/
echo "starting server"
exec npm start
When I run the image, it fails with this error:
sh: 1: http-server: not found
npm ERR! weird error 127
npm WARN This failure might be due to the use of legacy binary "node"
npm WARN For further explanations, please read
/usr/share/doc/nodejs/README.Debian
I tried various kinds of installation but still get the same error. I also checked whether node_modules contains the http-server executable, and it does. I tried forcing 777 permissions on all the files, but I still run into the same error.
What could be the problem?
It looks like you're just missing an npm install call somewhere, so neither the node_modules directory nor any of its contents (like http-server) are present in the image. If you add a RUN npm install line after COPY package.json package.json, that might be all you need.
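In other words, the relevant part of the Dockerfile would become (a sketch of the suggested fix):
COPY dist dist
COPY package.json package.json
# Install dependencies (including http-server) into the image
RUN npm install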
There are a few other things that could be simpler too, though. For example, you probably don't need an ENTRYPOINT script to run the app and copy package.json, since that's already done at build time. Here's a simplified version of a Node Docker image I've been running with. I'm using the base Node images which, I believe, are Linux-based, but you could keep the Ubuntu setup if you wanted and it shouldn't be an issue.
FROM node:6.9.5
# Create non-root user to run app with
RUN useradd --user-group --create-home --shell /bin/bash my-app
# Set working directory
WORKDIR /home/my-app
COPY package.json ./
# Change user so that everything that's npm-installed belongs to it
USER my-app
# Install dependencies
RUN npm install --no-optional && npm cache clean
# Switch to root and copy over the rest of our code
# This is here, after the npm install, so that code changes don't trigger an un-caching
# of the npm install line
USER root
COPY .eslintrc index.js ./
COPY app ./app
RUN chown -R my-app:my-app /home/my-app
USER my-app
CMD [ "npm", "start" ]
It's good practice to create a specific user to own and run your code instead of using root, but, as I understand it, you need root to put files onto the image, hence switching users a couple of times here (which is what USER ... does).
I'll also note that I use this image with Docker Compose for local development, which is what the comment about code changes is referring to.
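For completeness, building and running that image would look something like this (a sketch; the published port is an assumption, since it depends on what your npm start actually listens on):
docker build -t my-app .
docker run -p 3000:3000 my-app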

How to Increase the Speed of Docker Builds While Using Mounted Volumes

Docker is slow at writing to the host file system when using volumes. This makes tasks like npm install in Node.js incredibly painful. How can I exclude the node_modules folder from the volume so the build is faster?
It's common knowledge that Docker's mounted-volume support on macOS is pathetically slow (see here for more info). For us Node developers, this means starting up your app is incredibly slow because of the requisite npm install command. Well, here's a quick lil' trick to get around that slowness.
First, a quick look at the project:
[screenshot of the project layout omitted; source: fredlackey.com]
Long story short, I'm mapping everything in my project's root (./) to one of the container's volumes. This allows me to use widgets like gulp.watch() and nodemon to automagically restart the project, or inject any new code, whenever I modify a file.
This is 50% of the actual problem!
Because the root of the project is being mapped to the working directory within the container, calling npm install causes node_modules to be created in the root... which is actually on the host file system. This is where Docker's incredibly slow mounted volumes kick the project in the nads. As is, you could spend as long as five minutes waiting for your project to come up once you issue docker-compose up.
"Your Docker setup must be wrong!"
As you'll see, Docker is quite vanilla for this lil' project.
First, ye 'ole Dockerfile:
FROM ubuntu:16.04
MAINTAINER "Fred Lackey" <fred.lackey#gmail.com>
RUN mkdir -p /var/www \
&& echo '{ "allow_root": true }' > /root/.bowerrc \
&& apt-get update \
&& apt-get install -y curl git \
&& curl -sL https://deb.nodesource.com/setup_6.x | bash - \
&& apt-get install -y nodejs \
&& npm install -g bower gulp gulp-cli jshint nodemon npm-check-updates
VOLUME /var/www
EXPOSE 3000
And, of course, the beloved docker-compose.yml:
version: '2'
services:
  uber-cool-microservice:
    build:
      context: .
    container_name: uber-cool-microservice
    command:
      bash -c "npm install && nodemon"
    volumes:
      - .:/var/www
    working_dir: /var/www
    ports:
      - "3000"
As you can see, as-is this test project is lean, mean, and works as expected... except that the npm install is sloooooooooow.
At this point, calling npm install causes all of the project's dependencies to be installed to the volume which, as we all know, is the host filesystem. This is where the pain comes in.
"So what's the 'trick' you mentioned?"
If only we could benefit from having the root of the project mapped to the volume but somehow exclude node_modules and allow it to be written to Docker's union file system inside of the container.
According to Docker's docs, excluding a folder from a volume mount is not possible. Which makes sense, I guess.
However, it is actually possible!
The trick? Simple! An additional volume mount!
By adding one line to the Dockerfile:
FROM ubuntu:16.04
MAINTAINER "Fred Lackey"
RUN mkdir -p /var/www \
&& echo '{ "allow_root": true }' > /root/.bowerrc \
&& apt-get update \
&& apt-get install -y curl git \
&& curl -sL https://deb.nodesource.com/setup_6.x | bash - \
&& apt-get install -y nodejs \
&& npm install -g bower gulp gulp-cli jshint nodemon npm-check-updates
VOLUME /var/www
VOLUME /var/www/node_modules
EXPOSE 3000
... and one line to the docker-compose.yml file ...
version: '2'
services:
  uber-cool-microservice:
    build:
      context: .
    container_name: uber-cool-microservice
    command:
      bash -c "npm install && nodemon"
    volumes:
      - .:/var/www
      - /var/www/node_modules
    working_dir: /var/www
    ports:
      - "3000"
That's it!
In case you missed it, we added:
VOLUME /var/www/node_modules
and
- /var/www/node_modules
Say what!?!?
In short, the additional volume causes Docker to create the internal hooks within the container (folder, etc.) and wait for it to be mounted. Since we are never mounting the folder, we basically trick Docker into just writing to the folder within the container.
The end result is we are able to mount the root of our project, take advantage of tools like gulp.watch() and nodemon, while writing the contents of node_modules to the much faster union file system.
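If you want to verify that node_modules really lands on the container's file system rather than the bind mount, you can inspect the container's mounts (a sketch; the container name comes from the compose file above):
docker inspect -f '{{ json .Mounts }}' uber-cool-microservice
# The node_modules entry shows up as an anonymous volume, separate from the bind mount.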
Quick Note re: node_modules:
For some reason, while using this technique, Docker will still create the node_modules folder within the root of your project, on the host file system. It simply will not write to it.
The original article is on my blog.
