Dockerize meteor application - node.js

I have a Meteor application. This app works well on a CentOS 7 VM.
I need to create a Docker container for this app and install or import this container on other virtual machines.
What does the Dockerfile need in order to save and load the container on another VM?
NodeJs?
Mongodb?
MeteorJs?
Shouldn't I store the MongoDB files in the Docker container?
This is my Dockerfile:
# Pull base image.
FROM node:8.11.4
# Install build tools to compile native npm modules
RUN npm install -g node-gyp
RUN apt-get install curl -y
RUN curl https://install.meteor.com/ | sh
# Create app directory
RUN mkdir -p /usr/app
COPY . /usr/app
RUN cd /usr/app/programs/server
RUN npm install
WORKDIR /usr/app
CMD ["node", "main.js"]
EXPOSE 3000

There are many ways to skin this cat ... let's assume you have researched the alternatives for running a Meteor app in containers using tools that automate the setup below - Meteor calls their version of this automation Galaxy.
I suggest you run the meteor commands outside the container intended to run your app, since a Meteor install is huge and slow to install, and some of the libraries you pull in (or the libraries your libraries pull in) may need C or C++ compilers. Meteor and its friends therefore do not need to be installed into your app container every time you want to recompile your app; the app container only needs Node.js and your bundle. When you execute a Meteor app it does not use Meteor at all - by that point your code has been compiled into a bundle which is pure Node.js and is executed with node directly.
Yes, you would do well to put MongoDB into its own container.
No, there is no need to put MeteorJs inside your app container. Just like Meteor itself, those compile-time tools are not needed at execution time, so install MeteorJs and all the other tools needed for a successful Meteor build on your host machine, which is where you execute your meteor build command.
In your Dockerfile above, the EXPOSE 3000 after the CMD is out of place - keep CMD as the last instruction and put EXPOSE before it.
So, outside your container, install Meteor and then issue:
cd /your/webapp/src
meteor build --server https://example.com --verbose --directory /webapp --server-only
The above will compile your Meteor project into a bundle directory living at:
ls -la /webapp/bundle/
Then place your Dockerfile etc. alongside that freshly built bundle dir:
.bashrc
Dockerfile
bundle/
Then create your container image:
docker build --tag localhost:5000/hygge/loudweb-admin --no-cache .
docker push localhost:5000/hygge/loudweb-admin
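The push above targets localhost:5000, which assumes a local Docker registry is already running on that host; purely as a sketch, starting the standard registry image (the container name loudregistry is just an example) looks like:
docker run -d -p 5000:5000 --restart=always --name loudregistry registry:2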
Here is a stripped-down Dockerfile:
cat Dockerfile
# normal mode - raw ubuntu run has finished and base image exists so run in epoc mode
FROM ubuntu:18.04
ENV DEBIAN_FRONTEND noninteractive
ENV TERM linux
ENV NODE_VER=v8.11.4
ENV NODE_NAME=node-${NODE_VER}
ENV OS_ARCH=linux-x64
ENV COMSUFFIX=tar.gz
ENV NODE_PARENT=/${NODE_NAME}-${OS_ARCH}
ENV PATH=${NODE_PARENT}/bin:${PATH}
ENV NODE_PATH=${NODE_PARENT}/lib/node_modules
RUN apt-get update && apt-get install -y wget && \
wget -q https://nodejs.org/download/release/${NODE_VER}/${NODE_NAME}-${OS_ARCH}.${COMSUFFIX} && \
tar -xf ${NODE_NAME}-${OS_ARCH}.${COMSUFFIX}
ENV MONGO_URL='mongodb://$MONGO_SERVICE_HOST:$MONGO_SERVICE_PORT/meteor'
ENV ROOT_URL=https://example.com
ENV PORT 3000
EXPOSE 3000
RUN which node
WORKDIR /tmp
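# note: this stripped-down image does not COPY the bundle in; the docker-compose file below mounts the compiled bundle into /tmp at run time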
# CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf" ]
# I strongly suggest you wrap below using supervisord
CMD ["node", "main.js"]
To launch your container, issue:
docker-compose -f /devopsmicro/docker-compose.yml pull loudmail loud-devops nodejs-enduser
docker-compose -f /devopsmicro/docker-compose.yml up -d
Here is a stripped-down docker-compose YAML file:
version: '3'
services:
  nodejs-enduser:
    image: ${GKE_APP_IMAGE_ENDUSER}
    container_name: loud_enduser
    restart: always
    depends_on:
      - nodejs-admin
      - loudmongo
      - loudmail
    volumes:
      - /cryptdata6/var/log/loudlog-enduser:/loudlog-enduser
      - ${TMPDIR_GRAND_PARENT}/curr/loud-build/${PROJECT_ID}/webapp/enduser/bundle:/tmp
    environment:
      - MONGO_SERVICE_HOST=loudmongo
      - MONGO_SERVICE_PORT=$GKE_MONGO_PORT
      - MONGO_URL=mongodb://loudmongo:$GKE_MONGO_PORT/test
      - METEOR_SETTINGS=${METEOR_SETTINGS}
      - MAIL_URL=smtp://support@${GKE_DOMAIN_NAME}:blah@loudmail:587/
    links:
      - loudmongo
      - loudmail
    ports:
      - 127.0.0.1:3000:3000
    working_dir: /tmp
    command: /usr/bin/supervisord -c /etc/supervisor/conf.d/supervisord.conf
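The compose file references a loudmongo service that is not shown above; purely as a sketch of what that missing service entry could look like (the image tag, container name and data path are assumptions, and the port reuses the same GKE_MONGO_PORT variable used above):
  loudmongo:
    image: mongo:4.0
    container_name: loud_mongo
    restart: always
    command: mongod --port ${GKE_MONGO_PORT}
    volumes:
      - /cryptdata5/var/data/db:/data/db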
Once you have your app executing in containers, you can work on replacing Ubuntu as your container base with a smaller, simpler base image like node, busybox, etc. Using Ubuntu is easier initially, though, since it lets you install packages from inside a running container, which is handy during development.
The machinations surrounding the above are vast ... this is a quick copy-and-paste plucked from the devops side of the house, which has hundreds of helper binaries + scripts, config templates, TLS certs ... it is a tiny glimpse into the world of getting an app to execute.

@Scott Stensland's answer is good, in that it explains how to manually create a Docker container for Meteor.
There is a simpler way: use Meteor Up (mup) - http://meteor-up.com/
EASILY DEPLOY YOUR APP
Meteor Up is a production quality Meteor app deployment tool.
Install with one command:
$ npm install --global mup
You set up a simple config file, and it looks after creating the container, doing npm install, setting up SSL certs, etc. Much less work than doing it by hand.
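As a rough sketch of what that config file (mup.js) looks like - the host address, SSH key, app name and meteord base image below are placeholders, and the exact options are best checked against the mup docs:
module.exports = {
  servers: {
    one: {
      host: '1.2.3.4',        // placeholder VM address
      username: 'root',
      pem: '~/.ssh/id_rsa'    // placeholder SSH key
    }
  },
  app: {
    name: 'my-meteor-app',    // placeholder app name
    path: '../',              // path to the Meteor project
    servers: { one: {} },
    env: {
      ROOT_URL: 'https://example.com',
      MONGO_URL: 'mongodb://mongodb/meteor'
    },
    docker: {
      image: 'abernix/meteord:node-8.4.0-base'   // a commonly used meteord base image
    }
  },
  mongo: {
    version: '3.4.1',
    servers: { one: {} }
  }
};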

Related

Deploying node app using jenkins to a docker container

Using a common CI/CD workflow with Jenkins and Docker. The app is deployed to a server without an external internet connection; only Jenkins has external internet access, so I'm building up the node app with:
npm install
in a Jenkins pipeline, then deploying it to a Docker container.
Dockerfile:
FROM node:12
WORKDIR /var/www/cms
COPY . .
RUN chmod +x ./strapi.sh
EXPOSE 1337
CMD ["./strapi.sh"]
After npm install I'm copying the whole directory into the Docker image, and that step takes approximately 15 minutes to finish. What's the best way to speed it up?
You should add npm install to the Dockerfile.
That way all the package modules are downloaded inside the Docker build, and you will not need to copy them in from the outside.
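A minimal sketch of what that could look like for the Dockerfile above, copying only the package manifests first so the npm install layer is cached between builds (strapi.sh and the paths come from the question; excluding node_modules via a .dockerignore file is assumed so the big directory is not sent in the build context):
FROM node:12
WORKDIR /var/www/cms
# copy only the manifests first so this layer is cached until dependencies change
COPY package*.json ./
RUN npm install
# now copy the rest of the source (node_modules excluded via .dockerignore)
COPY . .
RUN chmod +x ./strapi.sh
EXPOSE 1337
CMD ["./strapi.sh"]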

Access raspistill / pi camera inside a Docker container

I've been trying out my Node.js app on a Raspberry Pi 3 Model B using Docker and it runs without any trouble.
The problem comes when an app dependency (raspicam) requires raspistill to make use of the camera to take a photo. The Raspberry is running Debian Stretch and the Pi camera is configured and tested, but I can't access it when running the app via Docker.
Basically, I build the image with Docker Desktop on a Win10 64-bit machine using this Dockerfile:
FROM arm32v7/node:10.15.1-stretch
ENV PATH /opt/vc/bin:/opt/vc/lib:$PATH
RUN echo "/opt/vc/lib" > /etc/ld.so.conf.d/00-vcms.conf \
&& ldconfig
# Create the app directory
ENV APP_DIR /home/app
RUN mkdir $APP_DIR
WORKDIR $APP_DIR
# Copy both package.json and package-lock.json
COPY package*.json ./
# Install app dependencies
RUN npm install
# Bundle app source
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
Then, on the Raspberry Pi, if I pull the image and run it with:
docker run --privileged --device=/dev/vchiq -p 3000:3000 [my/image:latest]
I get:
Error: spawn /opt/vc/bin/raspistill ENOENT
After some research, I also tried running with:
docker run --privileged -v=/opt/vc/bin:/opt/vc/bin --device=/dev/vchiq -p 3000:3000 [my/image:latest]
And with that command, I get:
stderr: /opt/vc/bin/raspistill: error while loading shared libraries: libmmal_core.so: cannot open shared object file: No such file or directory
Can someone share some thoughts on what changes do I have to make to the Dockerfile so that I'm able to access the pi camera from inside the Docker container? Thanks in advance.
I've had the same problem trying to work with the camera interface from a Docker container. With the suggestions in this thread I've managed to get it working with the Dockerfile below.
FROM node:12.12.0-buster-slim
EXPOSE 3000
ENV PATH="$PATH:/opt/vc/bin"
RUN echo "/opt/vc/lib" > /etc/ld.so.conf.d/00-vcms.conf
COPY "node_modules" "/usr/src/app/node_modules"
COPY "dist" "/usr/src/app"
CMD ldconfig && node /usr/src/app/app.js
There are 3 main points here:
Add /opt/vc/bin to your PATH so that you can call raspistill without referencing the full path.
Add /opt/vc/lib to your config file so that raspistill can find all dependencies it needs.
Reload the config file (ldconfig) at the container's runtime rather than at build time.
The last point is the main reason why Anton's solution didn't work: ldconfig needs to be executed in a running container, so either use a similar approach to mine or go with an entrypoint.sh file instead.
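If you prefer the entrypoint.sh route mentioned above, a minimal sketch (the file name and app path are assumptions) would be:
#!/bin/sh
# entrypoint.sh - rebuild the linker cache at run time, then start the app
ldconfig
exec node /usr/src/app/app.js
with the Dockerfile ending in:
COPY entrypoint.sh /usr/src/app/
RUN chmod +x /usr/src/app/entrypoint.sh
ENTRYPOINT ["/usr/src/app/entrypoint.sh"]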
Try replacing this in the Dockerfile:
RUN echo "/opt/vc/lib" > /etc/ld.so.conf.d/00-vcms.conf \
&& ldconfig
With the following:
ADD 00-vmcs.conf /etc/ld.so.conf.d/
RUN ldconfig
And create the file 00-vmcs.conf:
/opt/vc/lib
Edit:
If it still doesn't work, try loading a Raspbian Docker image instead, for example balenalib/rpi-raspbian:
FROM balenalib/rpi-raspbian

Node.js Deployment through Elastic Beanstalk using Docker

I am trying to deploy a Node.js React-based isomorphic application using a Dockerfile linked up to Elastic Beanstalk.
When I run my Docker build locally I am able to do so successfully. I have noticed, however, that the npm install command takes a fair amount of time to complete.
When trying to deploy the application using the eb deploy command, it pretty much crashes the Amazon service, or I get an error like this:
ERROR: Timed out while waiting for command to Complete
My guess is that this is down to my node_modules folder being 300 MB. I have also tried adding an artifact declaration to the config.yml file and deploying that way, but I get the same error.
Is there a best-practice way of deploying a Node application to AWS Elastic Beanstalk, or is the best way to manually set up an EC2 instance and rely on CodeCommit git hooks?
My Dockerfile is below:
FROM node:argon
ADD package.json /tmp/package.json
RUN npm config set registry https://registry.npmjs.org/
RUN npm set progress=false
RUN cd /tmp && npm install --silent
RUN mkdir -p /usr/src/app && cp -a /tmp/node_modules /usr/src/app
WORKDIR /usr/src/app
ADD . /usr/src/app
EXPOSE 8000
CMD npm run build && npm run start
...and this is my config.yml file:
branch-defaults:
  develop:
    environment: staging
  master:
    environment: production
global:
  application_name: website-2016
  default_ec2_keyname: key-pair
  default_platform: 64bit Amazon Linux 2015.09 v2.0.6 running Docker 1.7.1
  default_region: eu-west-1
  profile: eb-cli
  sc: git
You should change your platform to a more current one (I'm using Docker 1.9.1, and there might be newer versions).
I'm using an image from Docker Hub to deploy my apps into Beanstalk. I build them using our CI servers and then run a deploy command that pulls the image from Docker Hub. This can save you a lot of build errors (and build time) and is actually more in touch with the Docker philosophy of immutable infrastructure.
300 MB of node_modules is not small but should present no problem. We regularly deploy dependencies and code of this size.
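For the "deploy a prebuilt image from Docker Hub" approach, Elastic Beanstalk's single-container Docker platform can be pointed at an existing image with a Dockerrun.aws.json file instead of building from a Dockerfile; a minimal sketch (the image name is a placeholder) looks roughly like this:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "myorg/website-2016:latest",
    "Update": "true"
  },
  "Ports": [
    { "ContainerPort": 8000 }
  ]
}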

How should I Accomplish a Better Docker Workflow?

Every time I change a file in the Node.js app I have to rebuild the Docker image.
This feels redundant and slows down my workflow. Is there a proper way to sync the Node.js app files without rebuilding the whole image again, or is this normal usage?
It sounds like you want to speed up the development process. In that case I would recommend mounting your directory into your container using the docker run -v option: https://docs.docker.com/engine/userguide/dockervolumes/#mount-a-host-directory-as-a-data-volume
Once you are done developing your program, build the image and then start Docker without the -v option.
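As a quick sketch (the image name and container path are placeholders), during development that would be something like:
docker run --rm -v $(pwd):/usr/src/app -p 3000:3000 my-node-image
and for the final image you simply build and run again without the -v flag.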
What I ended up doing was:
1) Using volumes with the docker run command - so I could change the code without rebuilding the Docker image every time.
2) I had an issue with node_modules being overwritten because a volume acts like a mount - I fixed it by keeping node_modules in the parent directory, relying on Node's module-path traversal to find it there.
Dockerfile:
FROM node:5.2
# Create our app directories
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
RUN npm install -g nodemon
# This will cache npm install
# and persist node_modules in /usr/src
# even after the volume is mounted (which overwrites /usr/src/app)
COPY package.json /usr/src/
RUN cd /usr/src && npm install
# Expose node's port
EXPOSE 3000
# Run the app
CMD nodemon server.js
Command line:
To build:
docker build -t web-image .
To run:
docker run --rm -v $(pwd):/usr/src/app -p 3000:3000 --name web web-image
You could also have changed the instructions so that Docker looks in the directory specified by the build-context argument of docker build, finds the package.json file, copies just that into the container's working directory, runs npm install, and only afterwards COPYs over everything else, like so:
# Specify base image
FROM node:alpine
WORKDIR /usr/app
# Install some dependencies
COPY ./package.json ./
RUN npm install
# Copy over everything else
COPY ./ ./
# Setup default command
CMD ["npm", "start"]
You can make as many changes as you want and it will not invalidate the cache for any of these steps here.
The only time that npm install will be executed again is if we make a change to that step or any step above it.
So unless you make a change to the package.json file, the npm install will not be executed again.
So we can test this by running the docker build -t <tagname>/<project-name> .
Now I have made a change to the source files, so you will see some steps re-run and, eventually, our successfully tagged and built image.
Docker detected the change to that step and every step after it, but not the npm install step.
The lesson here is that the order in which these instructions are placed in a Dockerfile really does make a difference.
It's nice to segment out these operations to ensure you are only copying the bare minimum.
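Along the same lines, a .dockerignore file keeps the build context lean so that COPY ./ ./ does not drag the host's node_modules or VCS metadata into the image; a typical minimal example (the entries are suggestions, not part of the original answer):
node_modules
npm-debug.log
.git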

Where should I run my grunt build step when building my Docker image for staging and production environments?

I'm really struggling to figure out where I should put my grunt build step when building my Docker image and deploying to Docker Hub.
My workflow at the moment is as follows:
Push branch to github
CircleCI installs all dependencies, builds project, and runs tests on branch.
Merge branch branch to staging branch
CircleCI installs all dependencies, builds project, and runs tests on branch.
If tests pass, package the built files into the docker image with the source and also run npm install --production. CircleCI then deploys this staging image to dockerhub
Tutum is linked to dockerhub and deploys my image to DigitalOcean whenever a new image is pushed.
I do the same workflow as above, when merging to master, and a production image is created instead.
It feels a bit weird that I'm creating 2 separate Docker images. Is this standard practice?
I've seen quite a lot of people including the grunt/gulp build step in their Dockerfiles, but that doesn't feel right either, as all the devDependencies and bower_components will then be in the image along with the built code.
What's the best practice for running build steps and building Docker images? Is it better to have CI do it, or Docker Hub do it from the Dockerfile? I'm also after the most efficient way to create my Docker images for staging and production.
Below is my circleCI.yml file, followed by my Dockerfile.
circle.yml:
machine:
  node:
    version: 4.2.1
  # Set the timezone - any value from /usr/share/zoneinfo/ is valid here
  timezone:
    Europe/London
  services:
    - docker
  pre:
    - sudo curl -L -o /usr/bin/docker 'http://s3-external-1.amazonaws.com/circle-downloads/docker-1.8.2-circleci'; sudo chmod 0755 /usr/bin/docker; true
dependencies:
  pre:
    - docker --version
    - sudo pip install -U docker-compose==1.4.2
    - sudo pip install tutum
  override:
    - npm install:
        pwd: node
  post:
    - npm run bower_install:
        pwd: node
    - npm run grunt_build:
        pwd: node
test:
  override:
    - cd node && npm run test
deployment:
  staging:
    branch: staging
    commands:
      - docker-compose -f docker-compose.production.yml build node
      # - docker login -e $DOCKER_EMAIL -u $DOCKER_USER -p $DOCKER_PASS
      - tutum login -u $DOCKER_USER -p $DOCKER_PASS -e $DOCKER_EMAIL
      - docker tag dh_node:latest tutum.co/${DOCKER_USER}/dh_stage:latest
      - docker push tutum.co/${DOCKER_USER}/dh_stage:latest
  master:
    branch: master
    commands:
      - docker-compose -f docker-compose.production.yml build node
      # - docker login -e $DOCKER_EMAIL -u $DOCKER_USER -p $DOCKER_PASS
      - tutum login -u $DOCKER_USER -p $DOCKER_PASS -e $DOCKER_EMAIL
      - docker tag dh_node:latest tutum.co/${DOCKER_USER}/dh_prod:latest
      - docker push tutum.co/${DOCKER_USER}/dh_prod:latest
Dockerfile:
FROM node:4.2
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install --production
COPY . /usr/src/app
#
#
# Commented the following steps out, as these
# now run on CircleCI before the image is built.
# (Whether that's right or not, I'm not sure.)
#
# Install bower
# RUN npm install -g bower # grunt-cli
#
# WORKDIR src/app
# RUN bower install --allow-root
#
# Expose port
EXPOSE 3000
# Run app using nodemon
CMD ["npm", "start"]
What's the best practice for running build steps and building docker images? Is it better to have CI do it, or dockerhub do it from the dockerfile?
It's better to run the build steps themselves outside of docker. Thus the same steps work for local development, non-docker deployment, etc. Keep your coupling to docker itself loose when you can. Thus build your artifacts with regular build tools and scripts and simply ADD built files to your docker image via your Dockerfile.
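A rough sketch of what that kind of Dockerfile can shrink down to once CI has already run npm install, bower and grunt (the dist/ directory and the start script are assumptions about where the CI build output lands):
FROM node:4.2
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# everything copied below was built and tested by CI before docker build runs
COPY package.json /usr/src/app/
RUN npm install --production
COPY dist/ /usr/src/app/dist/
EXPOSE 3000
CMD ["npm", "start"]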
It feels a bit weird that i'm created 2 separate docker images. Is this standard practice?
I would recommend instead using exactly the image you have already built and tested on stage in production. Once you rebuild the image, you become vulnerable to discrepancies breaking your production image even though your stage image worked OK. At this point neither docker nor npm can deliver strictly reproducible builds across time, thus once it's built and tested gold, it's gold and goes to production bit-for-bit identical.
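In practice that "promote, don't rebuild" step is just re-tagging the image that passed on staging and pushing it under the production tag, e.g. (the repository names follow the circle.yml above and are otherwise placeholders):
docker pull tutum.co/${DOCKER_USER}/dh_stage:latest
docker tag tutum.co/${DOCKER_USER}/dh_stage:latest tutum.co/${DOCKER_USER}/dh_prod:latest
docker push tutum.co/${DOCKER_USER}/dh_prod:latest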
Your CircleCI build should download all the dependencies and then create the Docker image from those downloaded packages. All the testing passes with the specified dependencies, and those should be carried forward to production. Once the image is pushed to Docker Hub with all the dependencies baked in, Tutum will deploy the same image to your production servers, and as the dependencies are already downloaded it will only take seconds to create the containers.
Answering your second query, about building the same image: I would suggest deploying the same image to production. This guarantees that what worked great on staging is also working the same in production.
