Node.js Deployment through Elastic Beanstalk using Docker

I am trying to deploy a Node.js/React isomorphic application using a Dockerfile linked up to Elastic Beanstalk.
When I run my Docker build locally it completes successfully, although I have noticed that the npm install step takes a fair amount of time.
When I try to deploy the application using the eb deploy command, it pretty much brings down the Amazon service, or I get an error like this:
ERROR: Timed out while waiting for command to Complete
My guess is that this is due to my node_modules folder being 300MB. I have also tried adding an artifact declaration to the config.yml file and deploying that way, but I get the same error.
Is there a best-practice way of deploying a Node application to AWS Elastic Beanstalk, or is the better approach to manually set up an EC2 instance and rely on CodeCommit git hooks?
My Dockerfile is below:
FROM node:argon
# Install dependencies in /tmp first so the npm install layer is cached
# until package.json changes
ADD package.json /tmp/package.json
RUN npm config set registry https://registry.npmjs.org/
RUN npm set progress=false
RUN cd /tmp && npm install --silent
RUN mkdir -p /usr/src/app && cp -a /tmp/node_modules /usr/src/app
# Copy the application source, then build and run it
WORKDIR /usr/src/app
ADD . /usr/src/app
EXPOSE 8000
CMD npm run build && npm run start
...and this is my config.yml file:
branch-defaults:
  develop:
    environment: staging
  master:
    environment: production
global:
  application_name: website-2016
  default_ec2_keyname: key-pair
  default_platform: 64bit Amazon Linux 2015.09 v2.0.6 running Docker 1.7.1
  default_region: eu-west-1
  profile: eb-cli
  sc: git

You should change your platform to a more current one (I'm using Docker 1.9.1, and there may be newer versions available).
I'm using an image from Docker Hub to deploy my apps to Beanstalk: I build the image on our CI servers and then run a deploy command that pulls it from Docker Hub. This can save you a lot of build errors (and build time) and is more in line with the Docker philosophy of immutable infrastructure.
300MB for node_modules is not small, but it should present no problem; we regularly deploy dependencies and code of this size.
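For the pull-from-registry approach, the Elastic Beanstalk single-container Docker platform can read a Dockerrun.aws.json instead of building from your Dockerfile. A minimal sketch (the image name is hypothetical, and a private repository would additionally need an Authentication block pointing at a dockercfg file in S3):
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "myorg/website-2016:latest",
    "Update": "true"
  },
  "Ports": [
    { "ContainerPort": "8000" }
  ]
}
With this file in the repository root, eb deploy only asks the instance to pull the pre-built image rather than running npm install during deployment.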

Related

Deploying node app using jenkins to a docker container

We use a common CI/CD workflow with Jenkins and Docker. The app is deployed to a server without an external internet connection; only Jenkins has internet access, so I build up the Node app with:
npm install
in a Jenkins pipeline, then deploy it to a Docker container.
Dockerfile:
FROM node:12
WORKDIR /var/www/cms
COPY . .
RUN chmod +x ./strapi.sh
EXPOSE 1337
CMD ["./strapi.sh"]
After npm install I copy the whole directory into the Docker container; that step takes approximately 15 minutes to finish. What's the best way to speed it up?
You should add npm install to the Dockerfile.
That way all the package modules are downloaded inside the Docker build and do not need to be copied in from outside.
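A minimal sketch of that, assuming the image is built on the Jenkins host (which has internet access) and that node_modules is excluded via a .dockerignore file so the final COPY stays small:
FROM node:12
WORKDIR /var/www/cms
# Copy only the manifests first so the npm install layer is cached
# until package.json / package-lock.json actually change
COPY package*.json ./
RUN npm install
# Now copy the rest of the source (node_modules excluded via .dockerignore)
COPY . .
RUN chmod +x ./strapi.sh
EXPOSE 1337
CMD ["./strapi.sh"]
The cached npm install layer also means builds that only touch application code skip the dependency download entirely.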

Dockerize meteor application

I have a Meteor application. The app works well on a CentOS 7 VM.
I need to create a Docker container for this app and install or import this container on other virtual machines.
What does the Dockerfile need in order to save and load the container on another VM?
NodeJs?
Mongodb?
MeteorJs?
Shouldn't I store the MongoDB files in the Docker container?
This is my Dockerfile:
# Pull base image.
FROM node:8.11.4
# Install build tools to compile native npm modules
RUN npm install -g node-gyp
RUN apt-get install curl -y
RUN curl https://install.meteor.com/ | sh
# Create app directory
RUN mkdir -p /usr/app
COPY . /usr/app
RUN cd /usr/app/programs/server && npm install
WORKDIR /usr/app
CMD ["node", "main.js"]
EXPOSE 3000
There are many ways to skin this cat... let's assume you have researched the alternatives for executing a Meteor app using containers, using tools that automate the setup below; Meteor calls its version of this automation Galaxy.
I suggest you run the meteor commands outside the container intended to run your app. A Meteor install is huge and slow to install, and some of the libraries you pull in (or the libraries your libraries pull in) may need C or C++ compilers, so Meteor and its friends do not need to be installed into your app container every time you want to recompile your app. Your app container only needs Node.js and your bundle: when you execute a Meteor app it does not use meteor itself, since at that point your code has been compiled into a bundle which is pure Node.js and is run with node directly.
Yes, you would do well to put MongoDB into its own container.
No, there is no need to put MeteorJs inside your app container. Just like meteor itself, those compile-time tools are not needed at execution time, so install MeteorJs, as well as all the other tools needed for a successful meteor build, on your host machine, which is where you execute your meteor build command.
In your Dockerfile above, the last statement EXPOSE 3000 appears after CMD; conventionally it should come before your CMD node line.
So, outside your container, install meteor and then issue:
cd /your/webapp/src
meteor build --server https://example.com --verbose --directory /webapp --server-only
The above will compile your Meteor project into a bundle dir living at:
ls -la /webapp/bundle/
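(The generated bundle still needs its server dependencies installed before node can execute it; the bundle ships a README describing this step. Roughly, and assuming the /webapp/bundle output path above:
cd /webapp/bundle/programs/server
npm install --production
The stripped-down Dockerfile further below assumes this step has already happened on the build host.)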
Then copy your Dockerfile etc. into that freshly cut bundle dir:
.bashrc
Dockerfile
bundle/
Then create and push your container:
docker build --tag localhost:5000/hygge/loudweb-admin --no-cache .
docker push localhost:5000/hygge/loudweb-admin
Here is a stripped-down Dockerfile:
cat Dockerfile
# normal mode - raw ubuntu run has finished and base image exists so run in epoc mode
FROM ubuntu:18.04
ENV DEBIAN_FRONTEND noninteractive
ENV TERM linux
ENV NODE_VER=v8.11.4
ENV NODE_NAME=node-${NODE_VER}
ENV OS_ARCH=linux-x64
ENV COMSUFFIX=tar.gz
ENV NODE_PARENT=/${NODE_NAME}-${OS_ARCH}
ENV PATH=${NODE_PARENT}/bin:${PATH}
ENV NODE_PATH=${NODE_PARENT}/lib/node_modules
RUN apt-get update && apt-get install -y wget && \
wget -q https://nodejs.org/download/release/${NODE_VER}/${NODE_NAME}-${OS_ARCH}.${COMSUFFIX} && \
tar -xf ${NODE_NAME}-${OS_ARCH}.${COMSUFFIX}
ENV MONGO_URL='mongodb://$MONGO_SERVICE_HOST:$MONGO_SERVICE_PORT/meteor'
ENV ROOT_URL=https://example.com
ENV PORT 3000
EXPOSE 3000
RUN which node
WORKDIR /tmp
# CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf" ]
# I strongly suggest you wrap below using supervisord
CMD ["node", "main.js"]
To launch your container, issue:
docker-compose -f /devopsmicro/docker-compose.yml pull loudmail loud-devops nodejs-enduser
docker-compose -f /devopsmicro/docker-compose.yml up -d
Here is a stripped-down docker-compose YAML file:
version: '3'
services:
  nodejs-enduser:
    image: ${GKE_APP_IMAGE_ENDUSER}
    container_name: loud_enduser
    restart: always
    depends_on:
      - nodejs-admin
      - loudmongo
      - loudmail
    volumes:
      - /cryptdata6/var/log/loudlog-enduser:/loudlog-enduser
      - ${TMPDIR_GRAND_PARENT}/curr/loud-build/${PROJECT_ID}/webapp/enduser/bundle:/tmp
    environment:
      - MONGO_SERVICE_HOST=loudmongo
      - MONGO_SERVICE_PORT=$GKE_MONGO_PORT
      - MONGO_URL=mongodb://loudmongo:$GKE_MONGO_PORT/test
      - METEOR_SETTINGS=${METEOR_SETTINGS}
      - MAIL_URL=smtp://support@${GKE_DOMAIN_NAME}:blah@loudmail:587/
    links:
      - loudmongo
      - loudmail
    ports:
      - 127.0.0.1:3000:3000
    working_dir: /tmp
    command: /usr/bin/supervisord -c /etc/supervisor/conf.d/supervisord.conf
Once you have your app executing using containers, you can work toward dropping Ubuntu as your container base in favour of a smaller, simpler base image like node, busybox, etc. However, Ubuntu is easier initially, since it lets you install packages from inside a running container, which is nice during development.
The machinations surrounding the above are vast... the above is a quick copy-and-paste plucked from the devops side of the house, with its hundreds of helper binaries and scripts, config templates, TLS certs... this is a tiny glimpse into the world of getting an app to execute.
@Scott Stensland's answer is good, in that it explains how to manually create a Docker container for Meteor.
There is a simpler way: use Meteor Up (mup), http://meteor-up.com/
EASILY DEPLOY YOUR APP
Meteor Up is a production quality Meteor app deployment tool.
Install with one command:
$ npm install --global mup
You set up a simple config file, and it looks after creating the container, running npm install, setting up SSL certs, etc. Much less work than doing it by hand.
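For reference, a minimal mup config sketch (mup init generates a starter file; the values below are placeholders and assume mup 1.x):
module.exports = {
  servers: {
    one: { host: '1.2.3.4', username: 'root', pem: '~/.ssh/id_rsa' }
  },
  app: {
    name: 'my-app',
    path: '../',               // path to your Meteor project
    servers: { one: {} },
    env: { ROOT_URL: 'https://example.com' },
    // image is a placeholder; pick one matching your Meteor/Node version
    docker: { image: 'abernix/meteord:base' }
  },
  mongo: { version: '3.4.1', servers: { one: {} } }
};
Running mup setup then provisions Docker and MongoDB on the server, and mup deploy builds the bundle and starts the container.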

Unable to deploy React Application to Kubernetes

I am trying to deploy an application created with Create React App to Kubernetes through Docker.
When the Jenkins pipeline builds the Docker image and runs the container, it fails with the error below:
"Starting the development server...
Failed to compile.
./src/index.js
Module not found: Can't resolve './App.js' in '/app/src'
The folder structure is exactly the same as the default Create React App folder structure.
Also below is the Dockerfile:
FROM node:10.6.0-jessie
# set working directory
RUN mkdir /app
WORKDIR /app
COPY . .
# add `/usr/src/app/node_modules/.bin` to $PATH
#ENV PATH /usr/src/app/node_modules/.bin:$PATH
RUN npm install
#RUN npm install react-scripts -g --silent
# start app
CMD ["npm", "start"]
I am unable to understand where I might be going wrong.
Edit 1: I would also like to mention that I am able to run the docker container on my local machine using this config.
So any help would be appreciated.
Update 1:
I was able to kubectl exec -it pod_name -- bash into the container inside the pod. I found that, for some reason, the "App.js" file was being copied into the container as "app.js". Since Linux is case-sensitive, the module could not be resolved. Changing the import statement in index.js fixed the problem, but I still have no idea what caused the file to be copied with a lower-case name, since locally the file exists as "App.js".
The problem you're having will go away when you adjust your deployment process to a more production-ready setup.
What you're doing currently is installing all (development) dependencies on every Kubernetes node, compiling your application, and then starting a development webserver. This makes your deployed builds inconsistent and adds load and bloat to the deployment nodes.
Instead, what you want to do is create a production-ready build by running npm run build on a build machine, which compiles your application and outputs it to the build folder in your project. You then want to transfer this folder to your server in a .zip file; the server needs a production-ready webserver installed (Nginx is highly recommended and industry standard) to serve the static files from your build.
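Since this question deploys through Docker/Kubernetes, one common way to apply that advice inside the image itself is a multi-stage build: compile with npm run build in a node stage, then serve the static output with Nginx. A sketch (not the answerer's exact .zip workflow; the image tags are illustrative):
# build stage: install dependencies and produce the static production build
FROM node:10.6.0-jessie AS build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# serve stage: ship only the compiled assets on a small Nginx image
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE 80
The final image contains no node_modules or source, only the static files and Nginx, which keeps the Kubernetes pods small and consistent.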

How can you get Grunt livereload to work inside Docker?

I'm trying to use Docker as a dev environment in Windows.
The app I'm developing uses Node, NPM and Bower for setting up the dev tools, and Grunt for its task running, and includes a live reload so the app updates when the code changes. Pretty standard. It works fine outside of Docker, but I keep running into the Grunt error "Fatal error: Unable to find local grunt" no matter how I try to do it inside Docker.
My latest effort involves installing all the npm and bower dependencies to an app directory in the image at build time, as well as copying the app's Gruntfile.js to that directory.
Then in Docker-Compose I create a Volume that is linked to the host app, and ask Grunt to watch that volume using Grunt's --base option. It still won't work. I still get the fatal error.
Here are the Docker files in question:
Dockerfile:
# Pull base image.
FROM node:5.1
# Setup environment
ENV NODE_ENV development
# Setup build folder
RUN mkdir /app
WORKDIR /app
# Build apps
#globals
RUN npm install -g bower
RUN echo '{ "allow_root": true }' > /root/.bowerrc
RUN npm install -g grunt
RUN npm install -g grunt-cli
RUN apt-get update
RUN apt-get install ruby-compass -y
#locals
ADD package.json /app/
ADD Gruntfile.js /app/
RUN npm install
ADD bower.json /app/
RUN bower install
docker-compose.yml:
angular:
  build: .
  command: sh /host_app/startup.sh
  volumes:
    - .:/host_app
  net: "host"
startup.sh:
#!/bin/bash
grunt --base /host_app serve
The only way I can actually get the app to run at all in Docker is to copy all the files over to the image at build time, create the dev dependencies there and then, and run Grunt against the copied files. But then I have to run a new build every time I change anything in my app.
There must be a way? My Django app is able to do a live reload in Docker no problems, as per Docker's own Django quick startup instructions. So I know live reload can work with Docker.
PS: I have tried leaving the Gruntfile on the Volume and using Grunt's --gruntfile option but it still crashes. I have also tried creating the dependencies at Docker-Compose time, in the shared Volume, but I run into npm errors to do with unpacking tars. I get the impression that the VM can't cope with the amount of data running over the shared file system and chokes, or maybe that the Windows file system can't store the Linux files properly. Or something.

Where should I run my grunt build step when building my Docker image for staging and production environments?

I'm really struggling to figure out where I should put my grunt build step when building my Docker image and deploying to Docker Hub.
My workflow at the moment is as follows:
Push branch to github
CircleCI installs all dependencies, builds project, and runs tests on branch.
Merge the branch into the staging branch
CircleCI installs all dependencies, builds project, and runs tests on branch.
If tests pass, package the built files into the docker image with the source and also run npm install --production. CircleCI then deploys this staging image to dockerhub
Tutum is linked to dockerhub and deploys my image to DigitalOcean whenever a new image is pushed.
I do the same workflow as above, when merging to master, and a production image is created instead.
It feels a bit weird that I'm creating 2 separate Docker images. Is this standard practice?
I've seen quite a lot of people including the grunt/gulp build step in their Dockerfiles, but that doesn't feel right either, as all the devDependencies and bower_components will then be in the image along with the built code.
What's the best practice for running build steps and building docker images? Is it better to have CI do it, or dockerhub do it from the dockerfile? I'm also after the most efficient way to create my docker image for staging and production.
Below is my circleCI.yml file, followed by my Dockerfile.
circle.yml:
machine:
  node:
    version: 4.2.1
  # Set the timezone - any value from /usr/share/zoneinfo/ is valid here
  timezone:
    Europe/London
  services:
    - docker
  pre:
    - sudo curl -L -o /usr/bin/docker 'http://s3-external-1.amazonaws.com/circle-downloads/docker-1.8.2-circleci'; sudo chmod 0755 /usr/bin/docker; true
dependencies:
  pre:
    - docker --version
    - sudo pip install -U docker-compose==1.4.2
    - sudo pip install tutum
  override:
    - npm install:
        pwd: node
  post:
    - npm run bower_install:
        pwd: node
    - npm run grunt_build:
        pwd: node
test:
  override:
    - cd node && npm run test
deployment:
  staging:
    branch: staging
    commands:
      - docker-compose -f docker-compose.production.yml build node
      # - docker login -e $DOCKER_EMAIL -u $DOCKER_USER -p $DOCKER_PASS
      - tutum login -u $DOCKER_USER -p $DOCKER_PASS -e $DOCKER_EMAIL
      - docker tag dh_node:latest tutum.co/${DOCKER_USER}/dh_stage:latest
      - docker push tutum.co/${DOCKER_USER}/dh_stage:latest
  master:
    branch: master
    commands:
      - docker-compose -f docker-compose.production.yml build node
      # - docker login -e $DOCKER_EMAIL -u $DOCKER_USER -p $DOCKER_PASS
      - tutum login -u $DOCKER_USER -p $DOCKER_PASS -e $DOCKER_EMAIL
      - docker tag dh_node:latest tutum.co/${DOCKER_USER}/dh_prod:latest
      - docker push tutum.co/${DOCKER_USER}/dh_prod:latest
Dockerfile:
FROM node:4.2
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install --production
COPY . /usr/src/app
#
#
# Commented the following steps out, as these
# now run on CircleCI before the image is built.
# (Whether that's right, or not, i'm not sure.)
#
# Install bower
# RUN npm install -g bower # grunt-cli
#
# WORKDIR src/app
# RUN bower install --allow-root
#
# Expose port
EXPOSE 3000
# Run app using nodemon
CMD ["npm", "start"]
What's the best practice for running build steps and building docker images? Is it better to have CI do it, or dockerhub do it from the dockerfile?
It's better to run the build steps themselves outside of docker. Thus the same steps work for local development, non-docker deployment, etc. Keep your coupling to docker itself loose when you can. Thus build your artifacts with regular build tools and scripts and simply ADD built files to your docker image via your Dockerfile.
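A sketch of what that leaner Dockerfile could look like once CircleCI has already run the grunt build (dist/ is a hypothetical stand-in for whatever folder npm run grunt_build outputs, alongside whatever server entry point npm start expects):
FROM node:4.2
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Runtime dependencies only; devDependencies stay on the CI machine
COPY package.json /usr/src/app/
RUN npm install --production
# Copy only the artifacts produced by the CI build, not the whole source tree
COPY dist/ /usr/src/app/dist/
EXPOSE 3000
CMD ["npm", "start"]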
It feels a bit weird that i'm created 2 separate docker images. Is this standard practice?
I would recommend instead using exactly the image you have already built and tested on stage in production. Once you rebuild the image, you become vulnerable to discrepancies breaking your production image even though your stage image worked OK. At this point neither docker nor npm can deliver strictly reproducible builds across time, thus once it's built and tested gold, it's gold and goes to production bit-for-bit identical.
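Promoting the already-tested stage image rather than rebuilding can be done by retagging and pushing the same image; a sketch using the repository names from the circle.yml above:
docker pull tutum.co/${DOCKER_USER}/dh_stage:latest
docker tag tutum.co/${DOCKER_USER}/dh_stage:latest tutum.co/${DOCKER_USER}/dh_prod:latest
docker push tutum.co/${DOCKER_USER}/dh_prod:latest
Tagging with something immutable (for example the git SHA) instead of :latest makes it easier to verify that production is running exactly the bits that passed on staging.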
Your CircleCI build should download all the dependencies and then create the Docker image from those downloaded packages. All tests pass against the specified dependencies, and those exact dependencies should be carried forward to production. Once the image is pushed to Docker Hub with all dependencies included, Tutum will deploy the same image to production, and since the dependencies are already downloaded it only takes seconds to create the containers.
Answering your second query about building the same image: I would suggest deploying the same image to production. This guarantees that what worked great on staging also works the same way on production.
