access data volume of node modules outside docker container - node.js

I have a Docker setup for a Node.js web service which creates a data volume for node modules like this:
volumes:
- ./service:/usr/src/service
- /usr/src/service/node_modules
The solution with the data volume works fine. But with a data volume, node_modules is not accessible or visible on my host (outside the container), which creates other restrictions. For example, I cannot run tests in WebStorm, because they expect node_modules to be present.
Is there any alternative to a data volume for mounting node modules, one that also allows accessing them on the host?
The Dockerfile looks like:
FROM node:6.3.1-slim
ARG NPM_TOKEN
ARG service
ENV NODE_ENV development
RUN apt-get update && apt-get -y install build-essential git-core python-dev vim
# use nodemon for development
RUN npm install --global nodemon
COPY .npmrc /root/.npmrc
RUN mkdir -p /usr/src/$service
RUN mkdir -p /modules/$service/
COPY sources/$service/package.json /modules/$service/package.json
RUN cd /modules/$service/ && npm install && rm -f /root/.npmrc && mv /modules/$service/node_modules /usr/src/$service/node_modules
WORKDIR /usr/src/$service
COPY sources/$service /usr/src/$service
ENV service=${service}
COPY start_services.sh /root/start_services.sh
COPY .env /root/.env
COPY services.sh /root/services.sh
RUN ["chmod", "+x", "/root/start_services.sh"]
RUN ["chmod", "+x", "/root/services.sh"]
CMD /root/start_services.sh

Specify node_modules as follows:
volumes:
- .:/usr/src/service/
- /usr/src/service/node_modules
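For context, a minimal sketch of how those two entries typically sit in a full docker-compose.yml (the service name is a placeholder; the paths are taken from the question). The bind mount shares the sources with the host, while the anonymous volume on /usr/src/service/node_modules keeps the modules installed in the image in place inside the container — which is exactly why they never appear on the host:
version: '3'
services:
  service:                              # placeholder service name
    build: .
    volumes:
      - ./service:/usr/src/service      # bind mount: sources visible on the host
      - /usr/src/service/node_modules   # anonymous volume: image's node_modules, container-only
If you only need the installed modules on the host (e.g. for WebStorm), one workaround outside this answer is to copy them out of a running container, for example: docker cp <container>:/usr/src/service/node_modules ./service/node_modules.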

Related

node_modules not mounted in docker container

I have dockerized a PHP application which requires some npm dependencies, so I've installed Node.js and the required packages in the Docker container using:
FROM php:8.0.2-fpm-alpine
WORKDIR /var/www/html
RUN docker-php-ext-install pdo_mysql
RUN docker-php-ext-install mysqli
RUN apk add icu-dev
RUN docker-php-ext-configure intl && docker-php-ext-install intl
RUN apk add --update libzip-dev curl-dev &&\
docker-php-ext-install curl && \
apk del gcc g++ &&\
rm -rf /var/cache/apk/*
COPY docker/php-fpm/config/php.ini /usr/local/etc/php/
COPY --from=composer:latest /usr/bin/composer /usr/local/bin/composer
RUN apk add --update nodejs nodejs-npm
RUN npm install gulp-cli -g
RUN npm install
COPY src src/
CMD ["php-fpm"]
EXPOSE 9000
This is my docker-compose.yml:
version: '3.7'
services:
  php-fpm:
    container_name: boilerplate_app
    restart: always
    build:
      context: .
      dockerfile: ./docker/php-fpm/Dockerfile
    volumes:
      - ./src:/var/www/html
The problem is that when I enter the container using docker exec -ti boilerplate_app sh
and run ls -la, I can't see any node_modules folder; in fact, if I try to execute the installed dependency gulp I get:
Local modules not found in /var/www/html
Try running: npm install
What did I do wrong?
There are two issues:
First, you are running npm install in a folder that does not contain any package.json listing the node modules to install. If you inspect the build logs you should see something like
no such file or directory, open '/var/www/html/package.json'
Second, when you mount your local src folder, you replace the content of /var/www/html/ with the content of src, which probably does not include any node_modules folder:
volumes:
- ./src:/var/www/html
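A minimal sketch of one way to address both points, assuming package.json lives in the src/ folder of the build context (the paths here are assumptions, not from the original post): copy package.json and run npm install before the sources are copied, and add an anonymous volume so the bind mount does not hide the installed modules.
# Dockerfile (fragment) — install modules where npm can find package.json
WORKDIR /var/www/html
COPY src/package.json ./
RUN npm install
COPY src .
# docker-compose.yml (fragment) — the anonymous volume keeps node_modules
# from being masked by the ./src bind mount
    volumes:
      - ./src:/var/www/html
      - /var/www/html/node_modules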

Files differ between docker container export and a Docker Compose container when transpiling with Babel?

Edit/Solution
The issue below was happening because I was mounting my local files as volumes via the volumes section of the docker-compose file. This was a legacy piece of configuration that I didn't initially notice. Leaving this up in case someone else has a similar issue (see the sketch after the docker-compose.yml below). Thanks for the help.
Original Question
My Dockerfile (below) utilizes Babel to transpile ES6+ code into Node.js 10 code before it's shipped to prod. When I run docker container export myService..., the file structure is exactly as I expect, with my import blah from 'elsewhere' statements transpiled to var blah = require('elsewhere').
However, when I run this service locally with docker-compose and exec in with something like docker exec -it d66e623c8695 /bin/bash, I see the ES6+ syntax rather than the transpiled code. This forces me to start my service locally with babel-node src/index.js when I'd much prefer to run it post-transpile with node src/index.js.
I haven't been able to wrap my head around why these differ, given that starting the container with docker-compose up begins by building the image anew.
Any clarification about how this works would be greatly appreciated.
Dockerfile
FROM node:10.14.1 AS base
LABEL maintainer Pat D "me@me.com"
RUN mkdir /home/app
WORKDIR /home/app
RUN mkdir /procedures
RUN mkdir ./src
COPY procedures ./procedures
COPY src ./src
COPY package.json yarn.lock .sequelizerc jest.integration.json deploy.sh Dockerrun.aws.json.template ./
# Install postgresql so we can run the migrations. -qq and piping is to keep apt-get from spamming with logs
# see https://peteris.rocks/blog/quiet-and-unattended-installation-with-apt-get/
RUN apt-get update &&\
apt-get install -y --no-install-recommends apt-utils
RUN apt-get -qq clean < /dev/null > /dev/null &&\
apt-get -qq update < /dev/null > /dev/null &&\
apt-get -qq install -y curl postgresql libpq-dev build-essential < /dev/null > /dev/null &&\
rm -rf /var/lib/apt/lists/* &&\
rm -rf /var/cache/*
# Install dependencies
FROM base AS dependencies
RUN yarn install --quiet &&\
yarn install --silent && \
cp -R node_modules prod_node_modules
# Build
FROM dependencies AS build
COPY . .
RUN npm run build
RUN rm -rf src
RUN ls
RUN mv dist src
RUN ls
RUN cat src/index.js
# this outputs transpiled code when i build in docker compose
EXPOSE 3030
CMD ["node", "src/index.js"]
docker-compose.yml
version: '3.7'
services:
  pat-service:
    build:
      context: .
      target: build
    working_dir: /home/app
    command: ls src
    volumes:
      - ./:/home/app
      - /home/app/node_modules
    ports:
      - 3030:3000
    environment:
      NODE_ENV: development
      PGUSER: user
      PGHOST: db
      PGPASSWORD: password
      PGDATABASE: db
      PGPORT: 5432
      LOGGLY_TOKEN: logglyuuidv4
      LOGGLY_SUBDOMAIN: lsd1
      STACK_NAME: test
networks:
  default:
    external:
      name: architecture_default
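As the Edit/Solution above says, the ./:/home/app bind mount overlays the image's transpiled /home/app/src with the local, untranspiled sources. A minimal sketch of the fix, keeping everything else in the file above unchanged — drop (or comment out) the legacy mounts so the files baked into the image are what the container actually runs:
    # pat-service (fragment) — legacy mounts removed so the transpiled code
    # produced at build time is what 'node src/index.js' sees
    # volumes:
    #   - ./:/home/app
    #   - /home/app/node_modules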

How to run a Docker image and configure it with nginx

I have made a Docker image for a Node.js app and it runs perfectly locally, but in production I have to configure it with nginx (which I installed on the host machine). We normally do it like this:
location /location_of_app_folder {
    proxy_pass http://api.prv:51967/info;
}
How will I configure this in nginx for the Docker image, and how do I run the Docker image? We use pm2 for Node.js, which I added in the Dockerfile, but the container only runs until I press Ctrl+C.
FROM keymetrics/pm2:latest-alpine
RUN mkdir -p /app
WORKDIR /app
COPY package.json ./
COPY .npmrc ./
RUN npm config set registry http://private.repo/:_authToken=authtoken.
RUN npm install utilities#0.1.9
RUN apk update && apk add yarn python g++ make && rm -rf /var/cache/apk/*
RUN set NODE_ENV=production
RUN npm config set registry https://registry.npmjs.org/
RUN npm install
COPY . /app
RUN ls -al -R
EXPOSE 51967
CMD [ "pm2-runtime", "start", "pm2.json" ]
I am running the container with the command:
sudo docker run -it --network=host docker_repo_name
Expose the container's port and use the same nginx configuration, e.g.:
sudo docker run -it -p 51967:51967 docker_repo_name
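With the port published on the host, the nginx block from the question can proxy to localhost instead of api.prv; a sketch, reusing the placeholder location path and the /info upstream path shown above:
location /location_of_app_folder {
    proxy_pass http://127.0.0.1:51967/info;
}
To keep the container running in the background instead of occupying the terminal until Ctrl+C, it can also be started detached: sudo docker run -d -p 51967:51967 docker_repo_name.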

How to cache the RUN npm install instruction when docker build a Dockerfile

I am currently developing a Node backend for my application.
When dockerizing it (docker build .), the longest phase is RUN npm install. The RUN npm install instruction runs on every small server code change, which hurts productivity through increased build time.
I found that running npm install where the application code lives and adding the node_modules to the container with the ADD instruction solves this issue, but it is far from best practice. It kind of breaks the whole idea of dockerizing it, and it makes the image much heavier.
Any other solutions?
OK, so I found this great article about efficiency when writing a Dockerfile.
This is an example of a bad Dockerfile, adding the application code before running the RUN npm install instruction:
FROM ubuntu
RUN echo "deb http://archive.ubuntu.com/ubuntu precise main universe" > /etc/apt/sources.list
RUN apt-get update
RUN apt-get -y install python-software-properties git build-essential
RUN add-apt-repository -y ppa:chris-lea/node.js
RUN apt-get update
RUN apt-get -y install nodejs
WORKDIR /opt/app
COPY . /opt/app
RUN npm install
EXPOSE 3001
CMD ["node", "server.js"]
By dividing the copy of the application into two COPY instructions (one for the package.json file and the other for the rest of the files) and running the npm install instruction before adding the actual code, ordinary code changes won't trigger the RUN npm install instruction; only changes to package.json will trigger it. A better-practice Dockerfile:
FROM ubuntu
MAINTAINER David Weinstein <david@bitjudo.com>
# install our dependencies and nodejs
RUN echo "deb http://archive.ubuntu.com/ubuntu precise main universe" > /etc/apt/sources.list
RUN apt-get update
RUN apt-get -y install python-software-properties git build-essential
RUN add-apt-repository -y ppa:chris-lea/node.js
RUN apt-get update
RUN apt-get -y install nodejs
# use changes to package.json to force Docker not to use the cache
# when we change our application's nodejs dependencies:
COPY package.json /tmp/package.json
RUN cd /tmp && npm install
RUN mkdir -p /opt/app && cp -a /tmp/node_modules /opt/app/
# From here we load our application's code in, therefore the previous docker
# "layer" thats been cached will be used if possible
WORKDIR /opt/app
COPY . /opt/app
EXPOSE 3000
CMD ["node", "server.js"]
This is where the package.json file is added, its dependencies installed, and the resulting node_modules copied into the container's WORKDIR, where the app lives:
ADD package.json /tmp/package.json
RUN cd /tmp && npm install
RUN mkdir -p /opt/app && cp -a /tmp/node_modules /opt/app/
To avoid the npm install phase on every docker build, just copy those lines and change /opt/app to the location where your app lives inside the container.
Weird! No one mentions multi-stage build.
# ---- Base Node ----
FROM alpine:3.5 AS base
# install node
RUN apk add --no-cache nodejs-current tini
# set working directory
WORKDIR /root/chat
# Set tini as entrypoint
ENTRYPOINT ["/sbin/tini", "--"]
# copy project file
COPY package.json .
#
# ---- Dependencies ----
FROM base AS dependencies
# install node packages
RUN npm set progress=false && npm config set depth 0
RUN npm install --only=production
# copy production node_modules aside
RUN cp -R node_modules prod_node_modules
# install ALL node_modules, including 'devDependencies'
RUN npm install
#
# ---- Test ----
# run linters, setup and tests
FROM dependencies AS test
COPY . .
RUN npm run lint && npm run setup && npm run test
#
# ---- Release ----
FROM base AS release
# copy production node_modules
COPY --from=dependencies /root/chat/prod_node_modules ./node_modules
# copy app sources
COPY . .
# expose port and define CMD
EXPOSE 5000
CMD npm run start
Awesome tutorial here: https://codefresh.io/docker-tutorial/node_docker_multistage/
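A usage sketch for the multi-stage Dockerfile above: docker build's --target flag picks which stage to build, so CI can stop at the test stage while deployments build the release stage (the image tags are placeholders):
# run linters, setup and tests — the build fails if they fail
docker build --target test -t chat:test .
# build the slim production image
docker build --target release -t chat:latest .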
I've found that the simplest approach is to leverage Docker's copy semantics:
The COPY instruction copies new files or directories from <src> and adds them to the filesystem of the container at the path <dest>.
This means that if you first explicitly copy the package.json file and then run the npm install step, that layer can be cached, and only then do you copy the rest of the source directory. If the package.json file has changed, that layer is invalidated, npm install re-runs, and the result is cached for future builds.
A snippet from the end of a Dockerfile would look like:
# install node modules
WORKDIR /usr/app
COPY package.json /usr/app/package.json
RUN npm install
# install application
COPY . /usr/app
I imagine you may already know, but you could include a .dockerignore file in the same folder containing
node_modules
npm-debug.log
to avoid bloating your image when you push it to Docker Hub.
You don't need to use a tmp folder: just copy package.json into your container's application folder, do the install work there, and copy all the files afterwards.
COPY app/package.json /opt/app/package.json
RUN cd /opt/app && npm install
COPY app /opt/app
I wanted to use volumes, not copy, and keep using Docker Compose, and I could do it by chaining the commands at the end:
FROM debian:latest
RUN apt -y update \
&& apt -y install curl \
&& curl -sL https://deb.nodesource.com/setup_12.x | bash - \
&& apt -y install nodejs
RUN apt -y update \
&& apt -y install wget \
build-essential \
net-tools
RUN npm install pm2 -g
RUN mkdir -p /home/services_monitor/ && touch /home/services_monitor/
RUN chown -R root:root /home/services_monitor/
WORKDIR /home/services_monitor/
CMD npm install \
&& pm2-runtime /home/services_monitor/start.json
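The compose side isn't shown in the answer; a minimal sketch of what it might look like, with the service name and host path as assumptions — the bind mount supplies package.json and start.json from the host, and the chained CMD installs the modules at container start before handing off to pm2-runtime:
version: '3'
services:
  services_monitor:                    # hypothetical service name
    build: .
    volumes:
      - ./:/home/services_monitor/     # host sources, incl. package.json and start.json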

workflow for node developing with docker

I'm developing a webapp and I need node for my development environment.
I don't want a Docker production container but a development one: I need to share files between the Docker container and my local development machine, and I don't want to rebuild and rerun Docker each time I change a source file.
Currently my dockerfile is:
#React development
FROM node:4.1.1-wheezy
MAINTAINER xxxxx
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && apt-get -y install sudo locales apt-utils
RUN locale-gen es_ES.UTF-8
RUN dpkg-reconfigure locales
RUN echo Europe/Madrid | sudo tee /etc/timezone && sudo dpkg-reconfigure --frontend noninteractive tzdata
ADD src/package.json /tmp/package.json
RUN cd /tmp && npm install
RUN mkdir -p /src && cp -a /tmp/node_modules /src/
WORKDIR /src
EXPOSE 3000
VOLUME /src
I need a directory to put all my source files in (a directory shared via a data volume). I also need to execute npm install in my Dockerfile so I get my node_modules directory inside my sources directory (/src/node_modules).
However, when I mount a host directory as a data volume, since the /src dir already exists inside the container's image, its contents are replaced by the contents of the /src directory on the host, so I don't have my /src/node_modules directory anymore:
docker run -it --volumes-from data-container --name node-dev user/dev-node /bin/bash
My host directory doesn't have a node_modules directory, because I get the sources through GitHub and node_modules isn't synced since it's quite a heavy directory.
My solution is to copy the node_modules directory using an ENTRYPOINT directive.
docker-entrypoint.sh:
#!/bin/bash
if ! [ -d node_modules ]; then
  cp -a /tmp/node_modules /src/
fi
exec "$@"
Two lines added to the Dockerfile:
COPY docker-entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
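A usage sketch of how this behaves, assuming the host sources are bind-mounted over /src (the path from the Dockerfile above) and the image is the user/dev-node one from the question: on first run the entrypoint finds no node_modules in the mounted directory, copies the ones kept in /tmp inside the image, then exec's whatever command was passed (npm start here is only an example):
# hypothetical run: host sources mounted over /src, no node_modules on the host
docker run -it -v $(pwd)/src:/src user/dev-node npm start
# the entrypoint copies /tmp/node_modules into /src, then runs "npm start"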
Disclaimer: the following has nothing to do with Docker specifically.
I use SSHFS. The Ubuntu Wiki has a pretty good description:
SSHFS is a tool that uses SSH to enable mounting of a remote
filesystem on a local machine; the network is (mostly) transparent to
the user.
Not sure if this is useful in your scenario, but all my hosts run an SSH server anyway, so it was a no-brainer for me. The win-sshfs project doesn't seem to be actively developed anymore, but it still runs fine on Windows 8/10 (though the setup is a little weird). OS X and Linux both have better support for this through FUSE.
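A minimal usage sketch (host name and paths are placeholders): mount the remote project directory, including its node_modules, locally over SSH, then unmount when done:
# mount the remote source tree locally
sshfs user@devbox:/usr/src/service ~/mnt/service
# ... point WebStorm (or anything else) at ~/mnt/service ...
# unmount when finished (Linux; on macOS use: umount ~/mnt/service)
fusermount -u ~/mnt/service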
