pm2: command not found when used in Travis CI - node.js

I am using pm2 in my shell and it works fine. But when I add it to .travis.yml, the build shows me
$ pm2 restart index.js
No command 'pm2' found
pm2 is in /usr/local/bin, and when I echo $PATH it includes /usr/local/bin. I don't understand what is going on.
.travis.yml
language: node_js
node_js:
  - 8.9.1
branches:
  only:
    - master
cache:
  apt: true
  directories:
    - node_modules
install:
  - git pull
  - rm -f package-lock.json && npm install
script:
  - echo $PATH
  - pm2 restart index.js
after_success:
  - chmod 600 ~/.ssh/id_rsa
before_install:
  - openssl aes-256-cbc -K $encrypted_a46a360c8512_key -iv $encrypted_a46a360c8512_iv -in id_rsa.enc -out ~/.ssh/id_rsa -d

I think you have mixed up the build container with the actual server where you want the final app to run.
Travis builds projects inside a VM/container that is destroyed when the Travis build ends.
PM2 is supposed to be installed and run on the actual web server that hosts the app.
So, from Travis CI you probably should:
Upload the latest project files to the actual server via SSH.
Run pm2 on the actual server via SSH, as sketched below.
Here's a write-up along these lines: https://oncletom.io/2016/travis-ssh-deploy/
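For example, a minimal sketch of what the deploy half of the .travis.yml could look like, assuming a hypothetical server deploy.example.com, user deploy and app directory /var/www/app (replace these with your own values, and add the server to known_hosts or relax host key checking as you prefer):
after_success:
  - chmod 600 ~/.ssh/id_rsa
  # Copy the project to the real server, then (re)start it there with pm2.
  # deploy@deploy.example.com and /var/www/app are placeholder values.
  - rsync -az --delete -e "ssh -i ~/.ssh/id_rsa -o StrictHostKeyChecking=no" ./ deploy@deploy.example.com:/var/www/app
  - ssh -i ~/.ssh/id_rsa -o StrictHostKeyChecking=no deploy@deploy.example.com "cd /var/www/app && npm install --production && pm2 restart index.js"
That way pm2 runs on the machine that actually hosts the app, while the Travis VM is thrown away after the build.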

Related

AWS CodePipeline cloning issues

I have been trying to create a CI/CD pipeline using CodeDeploy and a Bitbucket repo.
The pipeline succeeds, but I am not seeing any of my code on EC2. I can only see the node_modules directory on EC2.
If anyone has run into the same issue, or could help me solve it, that would be great.
appspec.yml
version: 0.0
os: linux
files:
  - source: /
    destination: /home/ubuntu/gt
hooks:
  ApplicationStart:
    - location: scripts/start_server.sh
      runas: root
start_server.sh
sudo apt-get update
# install the application using npm
# we need to traverse to where the application bundle is copied to
sudo su
rm -rf /home/ubuntu/gt
mkdir /home/ubuntu/gt
echo installing application with npm
cd /home/ubuntu/gt
sudo apt-get install -y npm
echo installing pm2
npm install pm2 -g
sudo yarn
pm2 delete gt
pm2 start npm --name 'gt' -- start
I am not seeing any of my code on EC2
This could be because you are removing all the content of your folder:
rm -rf /home/ubuntu/gt
since your ApplicationStart hook runs after the files section, so whatever you copy in files gets deleted in ApplicationStart. For the order of execution of the lifecycle hooks, please have a look here. One way to restructure the hooks is sketched below.
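For instance, a minimal sketch (not the poster's exact setup) that moves the cleanup and tooling installation into a BeforeInstall hook, so ApplicationStart only starts the app and never deletes what the files section just copied; the scripts/before_install.sh name is hypothetical:
version: 0.0
os: linux
files:
  - source: /
    destination: /home/ubuntu/gt
hooks:
  BeforeInstall:
    # before_install.sh (hypothetical): apt-get update, install npm/pm2/yarn,
    # and remove any old /home/ubuntu/gt so the new revision lands in a clean folder
    - location: scripts/before_install.sh
      runas: root
  ApplicationStart:
    # start_server.sh: only cd /home/ubuntu/gt, yarn, pm2 delete/start -- no rm -rf here
    - location: scripts/start_server.sh
      runas: root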

Travis does not upload files using lftp

This is my configuration:
language: node_js
node_js:
- '12'
cache: npm
script:
- npm test
- npm run build
after_success:
- sudo apt-get -y install lftp
- echo "set dns:order \"inet inet6\"" > ~/.lftprc
- lftp -e "mirror -eR ./app ~/tmp" -u ${USERNAME}:${PASSWORD} ftp://${FTP_SERVER}
Everything works fine except the last command. When I upload files to the server with this command from my own machine, it takes 1-2 minutes, but Travis cannot do it at all: it says the command timed out and raises an error. Even if I increase the timeout to 30 minutes, nothing really changes.
I want to test, build and then deploy my site to the server over FTP. And as I've already said, I can do it from my machine using lftp.
How can I fix it?
I really tried to find the answer, but there was nothing. Instead I found a way around it - GitHub Actions. It works much better than Travis for this; a rough sketch of the equivalent workflow is below.
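A minimal sketch of such a GitHub Actions workflow, assuming the same npm scripts and the USERNAME, PASSWORD and FTP_SERVER values stored as repository secrets (the file name and job layout are illustrative, not the poster's actual setup):
# .github/workflows/deploy.yml (hypothetical)
name: build-and-deploy
on:
  push:
    branches: [master]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: '12'
      - run: npm ci
      - run: npm test
      - run: npm run build
      # same lftp mirror command as before, with credentials taken from secrets
      - run: |
          sudo apt-get -y install lftp
          lftp -e "mirror -eR ./app ~/tmp" -u ${{ secrets.USERNAME }}:${{ secrets.PASSWORD }} ftp://${{ secrets.FTP_SERVER }}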

npm install 4x slower in docker container compared to host machine

I'm trying to provision a project locally that's using NodeJs with NPM.
I'm running npm install on my host machine (MacBook Pro Retina, 15-inch, Mid 2015) using nvm with node version 10.19:
added 2335 packages from 985 contributors and audited 916010 packages in 61.736s
When I run the same setup in Docker, the result is much slower. This is my docker-compose.yml file:
version: '3.4'
services:
  node:
    image: node:10.19-alpine
    container_name: node
    volumes:
      - .:/app/
      - npm-cache:/root/.npm
    working_dir: /app
    command: ["tail", "-f", "/dev/null"]
volumes:
  npm-cache:
    external: false
Then I execute:
docker-compose up -d node; docker exec -t node npm install
And the result is:
added 2265 packages from 975 contributors and audited 916010 packages in 259.895s
(I'm assuming the number of resulting packages is different due to a different platform).
I thought the speedy installation was achieved by having a local cache (that's why there is an extra volume for caching in the docker-compose) but then I ran:
$ npm cache clean --force && rm -rf ~/.npm && rm -rf node_modules
and the result for installation on the host machine is still consistently ~60 seconds.
When it comes to resources allocated to the Docker VM, it shouldn't be a problem. [Screenshot of the Docker VM configuration omitted.]
I don't know where else to look, any help would be greatly appreciated.
Thanks
This slowdown is caused by sharing files between the container and your host machine.
To cope with it, you can give docker-sync a try.
This tool supports different strategies for automatically syncing files between the host machine and containers (including rsync).
However, beware that it has its own issues, like occasional sync freezing. A rough configuration sketch follows.
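As a very rough sketch of the idea (the exact keys vary between docker-sync versions, so treat the field names below as assumptions and check the docker-sync docs): a docker-sync.yml defines a synced volume, and the compose file mounts that volume instead of the plain bind mount.
# docker-sync.yml (illustrative)
version: "2"
syncs:
  app-sync:
    src: './'
    sync_excludes: ['node_modules']
# docker-compose.yml then uses the synced volume instead of `.:/app/`:
#   volumes:
#     - app-sync:/app:nocopy
# and declares it as external:
#   volumes:
#     app-sync:
#       external: true
After that, docker-sync start (or docker-sync-stack start) keeps the host directory and the container volume in sync.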
Here is how I got around the issue.
Create a base docker image using a similar Dockerfile.
FROM node:latest
RUN mkdir -p /node/app
COPY ./package.json /node/app/package.json
WORKDIR "/node/app"
RUN yarn install --network-timeout 100000
Then in your container, make this script the entry point:
#!/bin/bash
mkdir -p /node/app
cp /srv/package.json /node/app
cd /node/app
yarn install
sleep 1
rm -f /srv/node_modules
ln -s /node/app/node_modules /srv/node_modules
cd /srv
sleep 1
yarn serve
This installs the npm modules in another directory that is not synced between container and host, and symlinks that directory into the app. This seems to work just fine until the underlying issue is properly resolved by Docker.

Dockerize meteor application

I have a Meteor application. This app works well on a CentOS 7 VM.
I need to create a Docker container for this app and install or import this container on other virtual machines.
What does the Dockerfile need in order to save and load the container on another VM?
NodeJs?
Mongodb?
MeteorJs?
Should I store the MongoDB data files in the Docker container, or not?
This is my Dockerfile:
# Pull base image.
FROM node:8.11.4
# Install build tools to compile native npm modules
RUN npm install -g node-gyp
RUN apt-get install curl -y
RUN curl https://install.meteor.com/ | sh
# Create app directory
RUN mkdir -p /usr/app
COPY . /usr/app
RUN cd /usr/app/programs/server
RUN npm install
WORKDIR /usr/app
CMD ["node", "main.js"]
EXPOSE 3000
There are many ways to skin this cat ... let's assume you have researched the alternative tools that automate the setup below - Meteor calls its version of this automation Galaxy.
I suggest you run the meteor commands outside the container intended to run your app. A Meteor install is huge and slow, and some of the libraries you pull in (or the libraries your libraries pull in) may need C or C++ compilers, so Meteor and its friends should not have to be installed into your app container every time you want to recompile your app. Your app container only needs nodejs and your bundle. When you execute a Meteor app it does not use Meteor; the app is executed by nodejs directly, since at that point your code has been compiled into a bundle, which is pure nodejs.
Yes, you would do well to put MongoDB into its own container.
No, there is no need to put MeteorJs inside your app container. Just like Meteor itself, those compile-time tools are not needed at execution time, so install MeteorJs, as well as all the other tools needed for a successful Meteor build, on your host machine, which is where you execute your meteor build command.
In your Dockerfile above, the last statement EXPOSE 3000 appears after your CMD; put it before your CMD node line.
So outside your container get meteor installed then issue
cd /your/webapp/src
meteor build --server https://example.com --verbose --directory /webapp --server-only
above will compile your meteor project into a bundle dir living at
ls -la /webapp/bundle/
then copy your Dockerfile etc. alongside that freshly cut bundle dir:
.bashrc
Dockerfile
bundle/
then create your container
docker build --tag localhost:5000/hygge/loudweb-admin --no-cache .
docker push localhost:5000/hygge/loudweb-admin
here is a stripped down Dockerfile
cat Dockerfile
# normal mode - raw ubuntu run has finished and base image exists so run in epoc mode
FROM ubuntu:18.04
ENV DEBIAN_FRONTEND noninteractive
ENV TERM linux
ENV NODE_VER=v8.11.4
ENV NODE_NAME=node-${NODE_VER}
ENV OS_ARCH=linux-x64
ENV COMSUFFIX=tar.gz
ENV NODE_PARENT=/${NODE_NAME}-${OS_ARCH}
ENV PATH=${NODE_PARENT}/bin:${PATH}
ENV NODE_PATH=${NODE_PARENT}/lib/node_modules
RUN apt-get update && apt-get install -y wget && \
wget -q https://nodejs.org/download/release/${NODE_VER}/${NODE_NAME}-${OS_ARCH}.${COMSUFFIX} && \
tar -xf ${NODE_NAME}-${OS_ARCH}.${COMSUFFIX}
ENV MONGO_URL='mongodb://$MONGO_SERVICE_HOST:$MONGO_SERVICE_PORT/meteor'
ENV ROOT_URL=https://example.com
ENV PORT 3000
EXPOSE 3000
RUN which node
WORKDIR /tmp
# CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf" ]
# I strongly suggest you wrap below using supervisord
CMD ["node", "main.js"]
to launch your container issue
docker-compose -f /devopsmicro/docker-compose.yml pull loudmail loud-devops nodejs-enduser
docker-compose -f /devopsmicro/docker-compose.yml up -d
here is a stripped down docker compose yaml file
version: '3'
services:
  nodejs-enduser:
    image: ${GKE_APP_IMAGE_ENDUSER}
    container_name: loud_enduser
    restart: always
    depends_on:
      - nodejs-admin
      - loudmongo
      - loudmail
    volumes:
      - /cryptdata6/var/log/loudlog-enduser:/loudlog-enduser
      - ${TMPDIR_GRAND_PARENT}/curr/loud-build/${PROJECT_ID}/webapp/enduser/bundle:/tmp
    environment:
      - MONGO_SERVICE_HOST=loudmongo
      - MONGO_SERVICE_PORT=$GKE_MONGO_PORT
      - MONGO_URL=mongodb://loudmongo:$GKE_MONGO_PORT/test
      - METEOR_SETTINGS=${METEOR_SETTINGS}
      - MAIL_URL=smtp://support@${GKE_DOMAIN_NAME}:blah@loudmail:587/
    links:
      - loudmongo
      - loudmail
    ports:
      - 127.0.0.1:3000:3000
    working_dir: /tmp
    command: /usr/bin/supervisord -c /etc/supervisor/conf.d/supervisord.conf
Once you have your app executing in containers, you can work on replacing Ubuntu as your container base with a smaller, simpler Docker base image like node, busybox, etc. However, using Ubuntu is easier initially, since it lets you install packages from inside a running container, which is nice during development.
The machinations surrounding the above are vast ... this is a quick copy-and-paste plucked from the devops side of the house, which has hundreds of helper binaries and scripts, config templates, and TLS certs ... it is a tiny glimpse into the world of getting an app to execute.
@Scott Stensland's answer is good, in that it explains how to manually create a Docker container for Meteor.
There is a simpler way: use Meteor Up (mup), http://meteor-up.com/
EASILY DEPLOY YOUR APP
Meteor Up is a production quality Meteor app deployment tool.
Install with one command:
$ npm install --global mup
You set up a simple config file, and it looks after creating the container, doing the npm install, setting up SSL certs, etc. It is much less work than doing it by hand; the basic workflow is sketched below.
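For reference, a minimal sketch of the mup workflow, assuming you keep the deployment config in a .deploy directory inside your project (the directory name is just a convention, and the generated mup.js still needs your server details filled in):
# install mup once
npm install --global mup
# inside your project, create a deployment config and edit the generated mup.js
mkdir .deploy && cd .deploy
mup init
# provision the server (installs Docker, MongoDB, etc. as configured)
mup setup
# build the app and deploy it into a container on the server
mup deploy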

Where should I run my grunt build step when building my docker image for staging and production environments?

I'm really struggling to figure out where I should put my grunt build step when building my docker image and deploying to dockerhub.
My workflow at the moment is as follows:
Push branch to github
CircleCI installs all dependencies, builds project, and runs tests on branch.
Merge the branch into the staging branch
CircleCI installs all dependencies, builds project, and runs tests on branch.
If tests pass, package the built files into the docker image with the source and also run npm install --production. CircleCI then deploys this staging image to dockerhub
Tutum is linked to dockerhub and deploys my image to DigitalOcean whenever a new image is pushed.
I follow the same workflow as above when merging to master, and a production image is created instead.
It feels a bit weird that I'm creating 2 separate docker images. Is this standard practice?
I've seen quite a lot of people include the grunt/gulp build step in their Dockerfiles, but that doesn't feel right either, as all the devDependencies and bower_components will then be in the image along with the built code.
What's the best practice for running build steps and building docker images? Is it better to have CI do it, or to have dockerhub do it from the Dockerfile? I'm also after the most efficient way to create my docker image for staging and production.
Below is my circleCI.yml file, followed by my Dockerfile.
circle.yml:
machine:
  node:
    version: 4.2.1
  # Set the timezone - any value from /usr/share/zoneinfo/ is valid here
  timezone:
    Europe/London
  services:
    - docker
  pre:
    - sudo curl -L -o /usr/bin/docker 'http://s3-external-1.amazonaws.com/circle-downloads/docker-1.8.2-circleci'; sudo chmod 0755 /usr/bin/docker; true
dependencies:
  pre:
    - docker --version
    - sudo pip install -U docker-compose==1.4.2
    - sudo pip install tutum
  override:
    - npm install:
        pwd: node
  post:
    - npm run bower_install:
        pwd: node
    - npm run grunt_build:
        pwd: node
test:
  override:
    - cd node && npm run test
deployment:
  staging:
    branch: staging
    commands:
      - docker-compose -f docker-compose.production.yml build node
      # - docker login -e $DOCKER_EMAIL -u $DOCKER_USER -p $DOCKER_PASS
      - tutum login -u $DOCKER_USER -p $DOCKER_PASS -e $DOCKER_EMAIL
      - docker tag dh_node:latest tutum.co/${DOCKER_USER}/dh_stage:latest
      - docker push tutum.co/${DOCKER_USER}/dh_stage:latest
  master:
    branch: master
    commands:
      - docker-compose -f docker-compose.production.yml build node
      # - docker login -e $DOCKER_EMAIL -u $DOCKER_USER -p $DOCKER_PASS
      - tutum login -u $DOCKER_USER -p $DOCKER_PASS -e $DOCKER_EMAIL
      - docker tag dh_node:latest tutum.co/${DOCKER_USER}/dh_prod:latest
      - docker push tutum.co/${DOCKER_USER}/dh_prod:latest
Dockerfile:
FROM node:4.2
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install --production
COPY . /usr/src/app
#
#
# Commented the following steps out, as these
# now run on CircleCI before the image is built.
# (Whether that's right, or not, I'm not sure.)
#
# Install bower
# RUN npm install -g bower # grunt-cli
#
# WORKDIR src/app
# RUN bower install --allow-root
#
# Expose port
EXPOSE 3000
# Run app using nodemon
CMD ["npm", "start"]
What's the best practice for running build steps and building docker images? Is it better to have CI do it, or dockerhub do it from the dockerfile?
It's better to run the build steps themselves outside of docker, so the same steps work for local development, non-docker deployment, etc. Keep your coupling to docker itself loose when you can: build your artifacts with regular build tools and scripts, and simply ADD the built files to your docker image via your Dockerfile.
It feels a bit weird that i'm created 2 separate docker images. Is this standard practice?
I would recommend instead using, in production, exactly the image you have already built and tested on staging. Once you rebuild the image, you become vulnerable to discrepancies breaking your production image even though your staging image worked OK. At this point neither docker nor npm can deliver strictly reproducible builds across time, so once it's built and tested gold, it's gold and goes to production bit-for-bit identical.
Your CircleCI build should download all the dependencies and then create the docker image from those downloaded packages. All the tests pass against those exact dependencies, so they should be carried forward to production. Once the image is pushed to Docker Hub with all its dependencies, Tutum will deploy the same image to your production environment, and since the dependencies are already downloaded it will take seconds to create the containers.
Answering your second query about building two images: I would suggest deploying the same image to production. This guarantees that what worked great on staging also works the same in production; a sketch of promoting the staging image is below.
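To make that concrete, a minimal sketch of promoting the already-built and tested staging image to production by retagging it instead of rebuilding (the repository and tag names reuse the ones from the circle.yml above; where you run these commands is up to your deployment step):
# pull the exact image that was built and tested for staging
docker pull tutum.co/${DOCKER_USER}/dh_stage:latest
# retag it as the production image without rebuilding anything
docker tag tutum.co/${DOCKER_USER}/dh_stage:latest tutum.co/${DOCKER_USER}/dh_prod:latest
# push it, so production runs bit-for-bit what staging ran
docker push tutum.co/${DOCKER_USER}/dh_prod:latest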
