GitLab: Make the CI/CD job fail when git pull fails in a bash script on the server

My deployment is an extremely simple git pull: a bash script, kept in my source code, that runs git pull on the server. The issue is that if the git pull fails for any reason, the pipeline still shows a successful deployment.
The .gitlab-ci.yml:
stages:
  - 'deploy'

deploy to staging:
  stage: 'deploy'
  script: '/home/myuser/scripts/deployment/deploy_staging.sh'
  tags:
    - staging
  only:
    - staging
The deploy_staging.sh:
script=$(basename $0)
logger "$script: script executed"
cd $HOME/mydirectory
git fetch
git pull
composer update

The issue lies in the fact that GitLab decides whether a job failed by checking the exit code of the command it executed, in your case:
script: '/home/myuser/scripts/deployment/deploy_staging.sh'
The exit code of a script is the exit code of the last command it executed, which here is 'composer update'. That means the job fails only when 'composer update' fails; the exit codes of the earlier commands, such as 'git pull', are ignored.
To fix this, add "set -e" at the beginning of your script. It makes the script exit with a proper non-zero exit code as soon as any command in it fails.
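Applied to the deploy_staging.sh above, a minimal version of the fix could look like this (the shebang line is an assumption; the rest is the original script):
#!/bin/bash
set -e  # exit immediately when any command fails, so the CI job fails too

script=$(basename "$0")
logger "$script: script executed"

cd "$HOME/mydirectory"
git fetch
git pull
composer update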

Related

Create a folder on a remote server using the Bitbucket branch name via a pipeline

I am developing a Laravel project and I want to have CI for it. I am using Bitbucket Pipelines to do this, and I am deploying to an Ubuntu VPS. I would like to have separate folders per release branch:
for example, if I create a branch release-1.0.0 in Bitbucket, the deployment should create the folder projectname/releases/release-1.0.0 on my remote server. I tried many ways but was not successful.
here is my pipeline script
release-*:
  - step:
      name: Preparing pipeline for release
      script:
        - echo 'Preparing pipeline for releases'
  - step:
      name: Deploying release branches
      trigger: manual
      deployment: test
      script:
        - cat ./deploy.sh | ssh root@X.X.X.X
and here is my deploy.sh
echo 'Deployment started'
cd /home/core-cms/project-root
mkdir /home/core-cms/$BITBUCKET_BRANCH
git stash
git pull origin master
composer install
echo 'Deploy finished'
exit;
If I could pass the $BITBUCKET_BRANCH variable into this deploy.sh, I guess it would work perfectly.
Maybe I am totally wrong and there is another way to accomplish this; if so, can anyone guide me, please? Thank you.
There are two different things that you are attempting to do:
Get the name of the branch from the commit and pass it to the script in the next step:
default:
  - step:
      deployment: test
      script:
        - ssh user@host "bash -s" -- < ./deploy.sh $BITBUCKET_BRANCH
Get the branch name in your script and do the rest:
BITBUCKET_BRANCH=${1}
echo 'Deployment started'
cd /home/core-cms/
git clone -b $BITBUCKET_BRANCH user@repository.com
composer install
echo 'Deploy finished'
exit;
Here is what the code above does, step by step:
$BITBUCKET_BRANCH is a standard Bitbucket variable. Be aware that it is not available for custom or tag triggers.
In the pipeline, we read the script on the local machine, pass the branch name to it, and execute it on the SSH server. The construct ssh user@host "bash -s" -- < ./deploy.sh runs the script on the remote host as if it were local code; the -- stops the remote bash from consuming the remaining arguments for itself.
By adding $BITBUCKET_BRANCH after the script file name, we pass it to the script as an argument.
Inside the script, we read that argument with $1 and assign it to the $BITBUCKET_BRANCH variable.
The script clones the code into a sub-directory; you might need some changes there to avoid git conflicts.
P.S. I haven't tested the code, so it may contain misspellings, but the solution is there; you should be able to adapt it to what you'd like to do.

How to define environment variables via the environment's URL [GitLab CI]

Help, please. I have problems when using the CI tool.
Here's my .gitlab-ci.yaml
stages:
  - test

test:
  stage: test
  environment:
    name: test
    url: https://word.mymusise.com/env_test.txt
  script: echo "Running tests TEST=$TEST"
And I've defined the test environment in EnvDocker > Pipelines > Environments.
But the job didn't export the environment variables from https://word.mymusise.com/env_test.txt.
Running with gitlab-runner 11.4.2 (cf91d5e1)
on gitlab-ci runner a0e18516
Using Docker executor with image ubuntu:16.04 ...
Pulling docker image ubuntu:16.04 ...
Using docker image sha256:2a697363a8709093834e852b26bedb1d85b316c613120720fea9524f0e98e4a2 for ubuntu:16.04 ...
Running on runner-a0e18516-project-123-concurrent-0 via gitlab...
Fetching changes...
HEAD is now at d12c05b Update .gitlab-ci.yml
From https://gitlab.kingdomai.com/mymusise/envdocker
d12c05b..1a3954f master -> origin/master
Checking out 1a3954f8 as master...
Skipping Git submodules setup
$ echo "Running tests TEST=$TEST"
Running tests TEST=
Job succeeded
I defined export TEST="test" in https://word.mymusise.com/env_test.txt, but it doesn't seem to work.
What should I do... Orz
Gitlab version: 11.4.0-ee
You want to run commands that are stored in a text file accessible over HTTP.
With curl you can download the file and print it to curl's standard output. With command substitution $() you can capture that output. Then you can execute the commands themselves (very unsafe; there may be multiple escaping issues).
script:
  - $(curl "$url")
  - echo "Running tests TEST=$TEST"
A safer alternative would be to just download the file and execute/source it.
script:
  - curl "$url" > ./run_this.sh
  # don't forget to make the file executable ;)
  - chmod +x ./run_this.sh
  - source ./run_this.sh
  # clean up after yourself
  - rm ./run_this.sh
  # rest of your script
  - echo "Running tests TEST=$TEST"
Downloading a shell script and executing it is a popular way of automating tasks, usually with curl url | bash. It is not supported natively by GitLab, and I don't think it should be.
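For completeness, that pattern inside a job would be a single line like the one below. Note that a script piped into bash runs in a child process, so any variables it exports will not persist into the job's shell, which is why the source variant above is what actually solves this question:
script:
  - curl -fsSL "$url" | bash  # runs whatever the URL serves; use only with trusted sources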

GitLab Runner does not fail the build even when tests fail

I'm trying to set up GitLab CI for my project.
My gitlab-ci script looks like:
stages:
  - build

before_script:
  - docker info
  - chmod -R a+x scripts

build:
  stage: build
  script:
    - pwd
    - ./scripts/ci-prepare.sh
    - ./scripts/dotnet-build.sh
    - ./scripts/dotnet-tests.sh
    - ./scripts/dotnet-publish.sh
    - ./scripts/docker-publish-ci.sh
  after_script:
    - ./scripts/ci-success.sh
The build log contains this information:
Total tests: 6. Passed: 5. Failed: 1. Omitted: 0.
But even when tests fail, the build process finishes with a success exit code.
Why?
I have not configured the allow_failure option.
Update: I added set -e to the bash scripts and now it works properly.
GitLab CI checks the exit code of a command or script to decide whether it failed or succeeded.
A successful command returns 0; all other exit codes are treated as errors.
I don't know what software you are using for your tests, but your scripts should simply propagate the exit code of the test run.
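As an illustration, a minimal dotnet-tests.sh might look like this (a sketch only; the .csproj path is a placeholder, and it assumes the .NET CLI, whose dotnet test command exits non-zero when any test fails):
#!/bin/bash
set -e  # abort on the first failing command so the CI job reports failure

# the script's exit code now follows the test run's exit code
dotnet test ./tests/MyProject.Tests.csproj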

GitLab CI stuck on running NodeJS server

I'm trying to use GitLab CI to build, test and deploy an Express app on a server (the Runner uses the shell executor). However, the test:async and deploy_staging jobs do not terminate, even though, checking the terminal inside GitLab, the Express server does indeed start. What gives?
stages:
  - build
  - test
  - deploy

### Jobs ###

build:
  stage: build
  script:
    - npm install -q
    - npm run build
    - knex migrate:latest
    - knex seed:run
  artifacts:
    paths:
      - build/
      - node_modules/
  tags:
    - database
    - build

test:lint:
  stage: test
  script:
    - npm run lint
  tags:
    - lint

# Run the Express server
test:async:
  stage: test
  script:
    - npm start &
    - curl http://localhost:3000
  tags:
    - server

deploy_staging:
  stage: deploy
  script:
    - npm start
  environment:
    name: staging
    url: my_url_here
  tags:
    - deployment
The npm start is just node build/bundle.js. The build script is using Webpack.
Note: this solution works fine when using a GitLab Runner with the shell executor.
Generally, in GitLab CI we run ordered jobs with specific tasks that are executed one after another.
So for the build job, the npm install -q command runs and terminates with an exit status (0 if it was successful), then the next command npm run build runs, and so on until the job terminates.
For the test job, the npm start & process keeps running, so the job is never able to terminate.
The problem is that sometimes we need a process to run in the background, or to keep living between tasks. For example, some kinds of tests need the server to keep running, something like this:
test:
  stage: test
  script:
    - npm start
    - npm test
In this case npm test will never start, because npm start keeps running without terminating.
The solution is to use before_script to run a shell script that leaves the npm start process running in the background, and after_script to kill that npm start process afterwards.
So in our .gitlab-ci.yml we write:
test:
  stage: test
  before_script:
    - ./serverstart.sh
  script:
    - npm test
  after_script:
    - kill -9 $(ps aux | grep '\snode\s' | awk '{print $2}')
and in serverstart.sh:
#!/bin/bash
# start the server and send its console and error logs to nodeserver.log
npm start > nodeserver.log 2>&1 &
# wait until the server has started
# (in this case, wait for mongodb://localhost:27017/app-test to be logged)
while ! grep -q "mongodb://localhost:27017/app-test" nodeserver.log
do
    sleep .1
done
echo -e "server has started\n"
exit 0
Thanks to that, serverstart.sh terminates while the npm start process stays alive, which lets the pipeline move on to the script step and run npm test.
npm test terminates, and we pass to after_script, where we kill all Node.js processes.
You are starting a background job during your test phase which never terminates, therefore the job runs forever.
GitLab CI jobs are meant to be short-running tasks, like compiling, executing unit tests or gathering information such as code coverage, executed in a predefined order. In your case the order is build -> test -> deploy; since the test job doesn't finish, deploy isn't even executed.
Depending on your environment, you will have to create a different kind of job for deploying your Node app. For example, you can push the build output to a remote server using a tool like scp, or upload it to AWS; after that, you reference the final URL in the url: field of your .gitlab-ci.yml.
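A sketch of what such a deploy job might look like, assuming SSH access from the runner; the host, user, paths and the pm2 process manager are placeholders and assumptions, not part of the original setup:
deploy_staging:
  stage: deploy
  script:
    # copy the build output (artifacts of the build job) to the server
    - scp -r build/ deploy@staging.example.com:/var/www/my-app/
    # restart the app remotely; assumes pm2 is installed on the server
    - ssh deploy@staging.example.com "pm2 restart my-app || pm2 start /var/www/my-app/build/bundle.js --name my-app"
  environment:
    name: staging
    url: https://staging.example.com
  tags:
    - deployment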

Execute a script before the branch is deleted in GitLab-CI

GitLab-CI executes the stop-environment script of dynamic environments after the branch has been deleted. This effectively forces you to put all the teardown logic into .gitlab-ci.yml itself instead of into a script that .gitlab-ci.yml merely calls.
Does anyone know a workaround for this? I have a shell script that removes the deployment. The script is part of the repository and can also be called locally (i.e. not only in a CI environment). I want GitLab-CI to call this script when removing a dynamic environment, but it's obviously not there anymore once the branch has been deleted. I also cannot put the script into the artifacts, as it is generated before the build by a configure script and contains secrets. It would be great if one could execute the teardown script before the branch is deleted.
Here's a relevant excerpt from the .gitlab-ci.yml
deploy_dynamic_staging:
  stage: deploy
  variables:
    SERVICE_NAME: foo-service-$CI_BUILD_REF_SLUG
  script:
    - ./configure
    - make deploy.staging
  environment:
    name: staging/$CI_BUILD_REF_SLUG
    on_stop: stop_dynamic_staging
  except:
    - master

stop_dynamic_staging:
  stage: deploy
  variables:
    GIT_STRATEGY: none
  script:
    - make teardown # <- this fails
  when: manual
  environment:
    name: staging/$CI_BUILD_REF_SLUG
    action: stop
Probably not ideal, but you can curl the script via the GitLab API before running it:
curl \
  -X GET "https://gitlab.example.com/raw/master/script.sh" \
  -H "PRIVATE-TOKEN: ${GITLAB_TOKEN}" > script.sh
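Built into the stop job, a sketch could look like this (the project ID, file name and token variable are placeholders; the URL uses GitLab's repository files API, which serves a file's raw content for a given ref):
stop_dynamic_staging:
  stage: deploy
  variables:
    GIT_STRATEGY: none
  script:
    # fetch the teardown script from the repository, since the checkout is gone
    - curl -f -H "PRIVATE-TOKEN: ${GITLAB_TOKEN}" "https://gitlab.example.com/api/v4/projects/<project-id>/repository/files/teardown.sh/raw?ref=master" -o teardown.sh
    - chmod +x teardown.sh
    - ./teardown.sh
  when: manual
  environment:
    name: staging/$CI_BUILD_REF_SLUG
    action: stop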
GitLab-CI executes the stop-environment script in dynamic environments after the branch has been deleted.
That includes:
An on_stop action, if defined, is executed.
With GitLab 15.1 (June 2022), you can skip that on_stop action:
Force stop an environment
In 15.1, we added a force option to the stop environment API call.
This allows you to delete an active environment without running the specified on_stop jobs, in cases where running those defined actions is not desired.
See Documentation and Issue.
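For reference, the call would look something like this (the host, IDs and token are placeholders):
curl --request POST \
  --header "PRIVATE-TOKEN: <your_access_token>" \
  "https://gitlab.example.com/api/v4/projects/<project-id>/environments/<environment-id>/stop?force=true"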
