Use pm2 with CircleCI - node.js

I am using pm2 on my remote Ubuntu server and CircleCI for CI. I've got the following configuration file:
version: 2.1
orbs:
  node: circleci/node@1.1.6
jobs:
  deploy-prod:
    docker:
      # specify the version you desire here (you might not want node)
      - image: circleci/node:7.10
    steps:
      - checkout
      - run: ssh -oStrictHostKeyChecking=no -v $DROPLET_USER@$DROPLET_IP ./deploy_project.sh $MICROSERVICE_NAME
workflows:
  build-and-test:
    jobs:
      - deploy-prod:
          filters:
            branches:
              only:
                - master
In my deploy script I do the following:
cd /var/www/nodejs/$1
git pull git@github.com:DevandScorp/hippocrates_authorizationmicroservice.git
cd ..
pm2 restart ecosystem.config.js --only $1
But I've got the following error:
./deploy_project.sh: line 4: pm2: command not found
Is it possible to run my server's pm2 from the CircleCI config, or can I reload my microservice automatically in another way?

So, if you want to run anything on your server using CircleCI directly, it's just a waste of time. CircleCI provides a virtual environment where you can, for example, run some tests. You can also push changes to your remote server, but CircleCI will not have any access to your server's system. So, speaking of pm2, you can enable watch mode and relaunch your microservice every time CircleCI pushes changes to it.
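For the watch-mode route, a minimal sketch (the entry file and process name below are placeholders, not from the question):

# start the app under pm2 with watch mode, so pm2 restarts it whenever
# files in the project folder change (e.g. after new code is pushed/pulled)
pm2 start app.js --name my-microservice --watch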

Add your deploy SSH keys to GitHub (or whatever your remote source control is) and link them to CircleCI so that CircleCI can SSH into your remote server and run pm2 there.
Adding your SSH keys to GitHub (you'd follow similar steps for Bitbucket, etc.):
https://github.com/settings/ssh/new
Add your SSH fingerprint to your CI steps:
- add_ssh_keys:
    fingerprints:
      - "bb:bb:bb:bb:bb:bb:bb:bb:bb:bb:bb:bb:bb:bb:bb:bb"
CI SSHes into your remote server and runs pm2:
steps:
  - checkout
  - run:
      name: Deploy over SSH
      command: ssh -p your_port_number your_user@your_host "cd ../path/to/your/project; git pull; pm2 start hello_sts";
I followed this guide: https://scribe.rip/@blueish/how-to-setup-ci-cd-with-circleci-and-deploy-your-nodejs-project-to-a-remote-server-275a31d1f6c4
although it's for Bitbucket, not GitHub.
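Putting those pieces together, a complete deploy job could look roughly like this sketch; the image, fingerprint, and port are placeholders, and the environment variables are the ones from the question:

version: 2.1
jobs:
  deploy-prod:
    docker:
      - image: cimg/node:lts   # any image with an ssh client will do
    steps:
      - checkout
      - add_ssh_keys:
          fingerprints:
            - "bb:bb:bb:bb:bb:bb:bb:bb:bb:bb:bb:bb:bb:bb:bb:bb"
      - run:
          name: Deploy over SSH
          command: ssh -o StrictHostKeyChecking=no $DROPLET_USER@$DROPLET_IP "./deploy_project.sh $MICROSERVICE_NAME"
workflows:
  deploy:
    jobs:
      - deploy-prod:
          filters:
            branches:
              only:
                - master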

Related

How can I create a file on the gitlab-runner server from the .gitlab-ci.yml file?

I am learning GitLab CI/CD. I use https://gitlab.com/ as the repository, and I installed an Ubuntu server on a local virtual machine with GitLab Runner running on it.
I want the .gitlab-ci.yml file to create a file (touch test.txt) on the GitLab Runner server (the deploy server).
For example:
build-job:
  stage: build
  script:
    - pwd
    - touch /srv/test.txt
    - ls /srv/
However, this ran on gitlab.com's shared runners, not on the deploy server. All of the configuration was tested and works properly otherwise.
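A common way to make such a job run on the self-hosted runner rather than a gitlab.com shared runner is to register the runner with a tag and reference that tag in the job; a sketch assuming a hypothetical tag named local-deploy:

build-job:
  stage: build
  tags:
    - local-deploy   # hypothetical tag assigned to the self-hosted runner at registration
  script:
    - pwd
    - touch /srv/test.txt
    - ls /srv/

Disabling shared runners for the project (Settings > CI/CD > Runners) also keeps gitlab.com's runners from picking the job up.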

Cache build folder during gitlab CI build

I have a remote server where I serve my project via nginx. I am using GitLab CI to automate my deploy process, and I have run into a problem. When I push my commits to the master branch, the gitlab-runner runs nicely, but it removes my React build folder (which is expected, as I have put it into the .gitignore). Because it always removes my build folder, nginx cannot serve any files until the build finishes and a new build folder is created. Is there any solution for this problem? It would be nice if I could keep my build folder until the build process finishes. I attached my .gitlab-ci.yml. Thanks in advance!
image: docker:latest
services:
  - docker:dind
stages:
  - build
  - deploy
variables:
  GIT_SSL_NO_VERIFY: "1"
build-step:
  stage: build
  tags:
    - shell
  script:
    - docker image prune -f
    - docker-compose -f docker-compose.yml -f docker-compose.prod.yml build
deploy-step:
  stage: deploy
  tags:
    - shell
  script:
    - docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
It should be possible to use git fetch and to disable git clean when your deploy job starts. Here are links for the variables to do this:
https://docs.gitlab.com/ee/ci/yaml/#git-clean-flags
https://docs.gitlab.com/ee/ci/yaml/#git-strategy
It would look something like this:
deploy-step:
  variables:
    GIT_STRATEGY: fetch
    GIT_CLEAN_FLAGS: none
  stage: deploy
  tags:
    - shell
  script:
    - docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
This should make GitLab use git fetch instead of git clone, and to not run any git clean ... commands. The build artifacts from your previous run should then not be removed.
There are some problems with this though. If something goes wrong with a build, you might end up in a situation where you will have to manually log into the server where the runner is to fix it. The reason that GitLab uses git clean is to prevent these types of problems.
A better solution is to use nginx as a sort of double buffer. You can have two different build folders, change the nginx config, and then send a signal to nginx to reload the config. nginx will then gracefully switch to the new version of your application, without any interruptions. Here is a link to someone who has done this:
https://syshero.org/2016-06-09-zero-downtime-deployments-using-nginx/
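As an illustration of that double-buffer idea (a sketch, not taken from the linked article; paths and names are placeholders), the deploy step could build into a versioned folder, flip a symlink that nginx uses as its root, and reload nginx:

# build into a fresh release folder instead of overwriting ./build
RELEASE=/var/www/app/releases/$(date +%s)
mkdir -p "$RELEASE"
cp -r build/. "$RELEASE/"
# atomically repoint the symlink that nginx's root directive serves
ln -sfn "$RELEASE" /var/www/app/current
# graceful reload: in-flight requests finish on the old configuration
sudo nginx -s reload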

gitlab-ci.yml fatal: unable to access http://gitlab-ci-token

I am new to GitLab CI/CD and I'm struggling to figure this out.
All I want is that when I push to the dev branch, my React app is built and the ./build folder is pushed over SSH to my dev server.
Here is what I did so far, including a screenshot of the error message I get.
This is my gitlab-ci.yml
image: node:latest
cache:
  paths:
    - node_modules/
build_dev:
  stage: build
  environment: Development
  only:
    - dev
  script:
    - ls
    - npm install
    - npm run build
  artifacts:
    paths:
      - build/
      - ecosystem.config.js
deploy_dev:
  stage: deploy
  environment: Development
  only:
    - dev
  script:
    - rsync -r -a -v -e ssh --delete "./build" root@dev.teledirectasia.com:/var/www/gitlab/${CI_PROJECT_NAME}
    - rsync -r -a -v -e ssh --delete "./ecosystem.config.js" root@dev.teledirectasia.com:/var/www/gitlab/${CI_PROJECT_NAME}/
    - ssh root@dev.teledirectasia.com "cd /var/www/gitlab/${CI_PROJECT_NAME} && pm2 start ecosystem.config.js"
I don't know why I am getting this output with the job failing.
This is a DNS problem. Your runner cannot resolve the hostname of the GitLab server, gitlab.teledirectgroup.com. Did you set the GitLab hostname in your local workstation's hosts file manually, or did you set it up in a DNS server as a 'proper' hostname?
If you set up the hostname in a DNS server then the solution may be as simple as adding the DNS server to /etc/resolv.conf on the runner. However, if you just set the GitLab hostname in your workstation's hosts file then you'll need to set it in the runner's /etc/hosts file, too. It's hard to say what the exact solution is without knowing how you set up the GitLab hostname in the first place.
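For illustration only: if the GitLab hostname was added to your workstation's hosts file by hand, the runner machine needs an equivalent entry (the IP below is a placeholder):

# on the machine where gitlab-runner executes jobs
echo "192.0.2.10  gitlab.teledirectgroup.com" | sudo tee -a /etc/hosts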
What is the solution that applies to GitLab here?
I clone the repo over SSH, but I have not managed to push changes to a submodule from a shell runner via GitLab CI. The pipeline always fails and prints this error:
ERROR PIPELINE JOB
In the local repo, the project's .git/config file contains the line with that URL, but the pipeline has no login for it.
.git/config
Any help or reference walkthrough to troubleshoot this challenge would be appreciated!
This is my code in the ".gitlab-ci.yml" file:
variables:
  TEST_VAR: "Update Git Submodule in all Etecnic projects."
job1:
  variables: {}
  script:
    - echo "$TEST_VAR"
job2:
  variables: {}
  script:
    - echo "OK" >> exito.txt
    - git add --all
    - git commit -m "Update Submodule"
    - git push origin HEAD:master
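For context, a push like the one in job2 typically fails because the job's default gitlab-ci-token cannot push. A common workaround, shown here only as a hypothetical sketch, is to store a project access token with write_repository scope in a CI/CD variable (called PROJECT_TOKEN below) and rewrite the push URL with it; the host and project path are placeholders:

job2:
  script:
    - echo "OK" >> exito.txt
    - git config user.email "ci@example.com"   # placeholder identity
    - git config user.name "GitLab CI"
    - git remote set-url origin "https://oauth2:${PROJECT_TOKEN}@gitlab.example.com/group/project.git"
    - git add --all
    - git commit -m "Update Submodule"
    - git push origin HEAD:master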
Versions:
GitLab:
gitlab-ce is already the newest version (15.6.0-ce.0).
Runner:
Version: 15.5.1
Git revision: 7178588d
Git branch: 15-5-stable
GO version: go1.18.7
Built: 2022-11-11T09:45:25+0000
OS/Arch: linux/amd64
Thanks so much for your attention.

Self-hosting gitlab autodeploy to aws ec2 server

I'm asking this because I cannot find a running example similar to my case. I have a self-hosted GitLab on an AWS EC2 machine (let's call this "machine 1") and I want to set up auto-deployment to my AWS EC2 remote server, called "machine 2".
My GitLab installation shows (machine 1):
gitlab-ce 10.4.4
gitlab-config-template 10.4.4
gitlab-cookbooks 10.4.4
gitlab-ctl 10.4.4
gitlab-healthcheck
gitlab-monitor
gitlab-pages
gitlab-psql
gitlab-rails
gitlab-scripts
gitlab-selinux
gitlab-shell
gitlab-workhorse
I have followed the GitLab documentation for setting up CI & CD on the project where I want auto-deployment. These are the steps I followed:
1. I have created a runner following the GitLab docs; not much to show here except (machine 2):
url: https://url.to.my.compute.amazonaws.com
Token: token given by GitLab
Executor: shell
Tags: build deploy qa stage
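For reference, registering a shell runner with those values would look roughly like this sketch (the URL and token are the placeholders listed above):

sudo gitlab-runner register \
  --non-interactive \
  --url "https://url.to.my.compute.amazonaws.com" \
  --registration-token "TOKEN_GIVEN_BY_GITLAB" \
  --executor "shell" \
  --description "machine 2 deploy runner" \
  --tag-list "build,deploy,qa,stage"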
2. I have created my .gitlab-ci.yml file (in the project root); I have even tried two versions of the yml file:
yml 2:
stages:
  - build
  - deploy
build:
  stage: build
  script: echo "Building the app"
deploy_staging:
  stage: deploy
  script:
    - echo "Deploy to staging server"
yml 1:
# develop stage
deploy:
  stage: deploy
  before_script:
    # generate ssh key
    - mkdir -p ~/.ssh
    - echo -e "$SSH_PRIVATE_KEY" > ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
  script:
    - bash .gitlab-deploy.sh
  environment:
    name: develop
    url: https://my.domain.com
  when: manual
3. I have set two secret variables,
SSH_PRIVATE_KEY and DEPLOY_SERVERS (with the secret key and the IPs, respectively).
4. I have added a deploy.sh file (in the root of my project):
#!/bin/bash
# Get servers list
set -f
string=$DEPLOY_SERVERS
array=(${string//,/ })
# Iterate over servers, deploy, and pull the last commit
for i in "${!array[@]}"; do
  echo "Deploy project on server ${array[i]}"
  ssh ubuntu@${array[i]} "cd /var/www/html/app && git pull origin develop"
done
My gitlab-runner shows me at this moment:
gitlab-runner verify
WARNING: Running in user-mode.
WARNING: The user-mode requires you to manually start builds processing:
WARNING: $ gitlab-runner run
WARNING: Use sudo for system-mode:
WARNING: $ sudo gitlab-runner...
And running it with sudo as it says, it shows my runner:
Verifying runner... is alive runner=
Verifying runner... is alive runner=
Verifying runner... is alive runner=
But still, in the GitLab UI the job gets a "STUCK" tag and tells me "job is stuck, check runners".
Questions:
Are these all the steps to follow?
Do you see anything (or any process) I am missing in all this configuration?
On the remote GitLab I have "master" permissions; is that what I need to run a runner?
How can I debug at this point (I'm using gitlab-runner --debug verify)? Is that all I can do?
Thanks in advance for you help.
When the runner is "specific", jobs need a matching "tag", like:
stages:
  - build
  - deploy
build:
  stage: build
  script: echo "Building the app"
deploy_staging:
  stage: deploy
  script:
    - echo "Deploy to staging server"
  tags:
    - deploy

How to deploy to a custom server after successful CI in a Docker environment?

I already did CI, but now I want to deploy to my server. My server is the same machine where I do CI, but I run CI with the docker executor, so I don't have access to the server's folders to update production.
There is my script:
image: node:9.11.2
cache:
  paths:
    - node_modules/
before_script:
  - npm install
stages:
  - test
  - deploy
test:
  stage: test
  script:
    - npm run test
deploy:
  stage: deploy
  script:
    # here I want to go to /home/projectFolder and run git pull, npm i, npm start,
    # but I can't, because CI runs in a Docker environment which has no access
    # to my server's directories.
First of all, you should consider using GitLab Auto DevOps (or use it as a base to customize if you don't want to use Kubernetes).
You have multiple ways to do this, but the simplest should be to use an alpine image and (see the sketch after this list):
- install ssh (if necessary)
- load your private SSH key (from pipeline secrets)
- run your npm commands through ssh.
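A minimal sketch of that simpler approach, assuming the private key is stored in a CI/CD variable named SSH_PRIVATE_KEY; the user, host, and path are placeholders:

deploy:
  stage: deploy
  image: alpine:latest
  only:
    - master
  before_script:
    # install an ssh client and load the key from the pipeline secret
    - apk add --no-cache openssh-client
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh && chmod 700 ~/.ssh
    - echo "StrictHostKeyChecking no" >> ~/.ssh/config
  script:
    # a process-manager restart usually replaces a blocking `npm start` here
    - ssh user@your.server.example "cd /home/projectFolder && git pull && npm install && pm2 restart all"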
The cleanest way would be (sketched after this list):
- adding a valid Dockerfile to your project
- adding a docker image build for each commit on master (in your pipeline)
- removing the currently running container (in your pipeline)
- running the newly built image (in your pipeline), sharing your docker volume
- making nginx redirect to your container.
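And a rough sketch of that Docker-based approach; the image name, container name, and port are placeholders, and nginx is assumed to proxy to the published port:

build_image:
  stage: build
  only:
    - master
  script:
    - docker build -t myapp:$CI_COMMIT_SHORT_SHA .

deploy:
  stage: deploy
  only:
    - master
  script:
    # replace the running container with one from the freshly built image
    - docker rm -f myapp || true
    - docker run -d --name myapp -p 3000:3000 myapp:$CI_COMMIT_SHORT_SHA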
I can give more detailed advice depending on what you decide to do.
Hoping I helped.
