Fail Gitlab pipeline based on output

I am using a docker container (sslyze) in a Gitlab pipeline for some testing. This pipeline always succeeds, but I would like the pipeline to fail if the container ever reported a "FAIL" in its output. Currently if a "FAIL" is reported in the terminal output, an exit code of 0 is still reported (as the scan itself worked) so Gitlab passes the pipeline.
I am new to Gitlab, but familiar with Jenkins, and in Jenkins you could fail the job based on the terminal output using Text Finder. Is there a similar concept in Gitlab?

Thanks to @secustor in the comments for pointing me to a similar question. I was hoping for some native functionality within Gitlab but I couldn't find any.
Instead, I queried the container logs and the exit code, then set an exit code of 1 depending on the outcome.
Within a .gitlab-ci.yml (I had problems splitting the logic across multiple lines, so it's all jammed into one line):
script:
- docker run --name containername nablac0d3/sslyze --regular $URL
- if [[ "$(docker logs containername >& container-logs ; cat container-logs | grep 'FAIL' | wc -l)" -gt 0 ]] || [[ "$(docker container wait containername)" -eq 1 ]]; then exit 1; fi
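The same check can be factored into a small function so the FAIL-grep is readable and testable without docker. A sketch (the container name `containername` comes from the docker run step above; in the CI job you would pass the real logs):

```shell
# fails_found LOGS: succeeds (exit 0) if the given log text contains "FAIL".
# In .gitlab-ci.yml you'd use it as:
#   if fails_found "$(docker logs containername 2>&1)"; then exit 1; fi
fails_found() {
  printf '%s\n' "$1" | grep -q 'FAIL'
}
```

This keeps the grep logic in one place, so the one-liner in the script step only has to combine it with the container's own exit code.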


Finish background process when next process is completed

Hi, all
I am trying to implement automated test runs from a Makefile target. As my tests depend on a running docker container, I need to check that the container is up and running during the whole test execution, and restart it if it's down. I am trying to do this with a bash script running in background mode.
At a glance, the code looks like this:
run-tests:
	./check_container.sh & \
	docker-compose run --rm tests; \
	# Need to finish check_container.sh right here, after tests execution
	RESULT=$$?; \
	docker-compose logs test-container; \
	docker-compose kill; \
	docker-compose rm -fv; \
	exit $$RESULT
Tests have varying execution times (from 20 min to 2 hrs), so I don't know in advance how long a run will take. So I try to poll within the script for longer than the longest test suite. The script looks like:
#!/bin/bash
time=0
while [ $time -le 5000 ]; do
    num=$(docker ps | grep selenium--standalone-chrome -c)
    if [[ "$num" -eq 0 ]]; then
        echo 'selenium--standalone-chrome container is down!'
        echo 'try to recreate'
        docker-compose up -d selenium
    elif [[ "$num" -eq 1 ]]; then
        echo 'selenium--standalone-chrome container is up and running'
    else
        docker ps | grep selenium--standalone-chrome
        echo 'more than one selenium--standalone-chrome container is up and running'
    fi
    time=$(($time + 1))
    sleep 30
done
So, how can I make the script exit exactly when the test run is finished, i.e. after the docker-compose run --rm tests command completes?
P.S. It is also fine if the background process is finished when the Makefile target finishes.
Docker (and Compose) can restart containers automatically when they exit. If your docker-compose.yml file has:
version: '3.8'
services:
  selenium:
    restart: unless-stopped
Then Docker will do everything your shell script does. If it also has
services:
  tests:
    depends_on:
      - selenium
then the docker-compose run tests line will also cause the selenium container to be started, and you don't need a script to start it at all.
When you launch a command in the background, the special parameter $! contains its process ID. You can save this in a variable and later kill(1) it.
In plain shell-script syntax, without Make-related escaping:
./check_container.sh &
CHECK_CONTAINER_PID=$!
docker-compose run --rm tests
RESULT=$?
kill "$CHECK_CONTAINER_PID"
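A variant with a trap guarantees the watcher is killed even if the test command fails partway. A self-contained sketch: `sleep 300` stands in for `./check_container.sh` and `true` for `docker-compose run --rm tests` (remember to double every `$` if you paste this back into a Makefile recipe):

```shell
# Start the watcher in the background and record its PID.
sleep 300 &                # stand-in for: ./check_container.sh &
WATCHER_PID=$!
# The EXIT trap fires however the script ends, so the watcher never leaks.
trap 'kill "$WATCHER_PID" 2>/dev/null' EXIT
true                       # stand-in for: docker-compose run --rm tests
RESULT=$?
```

The advantage over an explicit kill is that the cleanup also runs if a command in the middle of the recipe aborts the shell.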

bash script does not capture exit code 1 properly

I have a bash script in which I start a docker container. The container start fails due to some error, and it clearly reports exit code 1. This is the script I use to run the docker command:
startContainer(){
    echo "change directory to ..."
    cd "..."
    docker-compose -f ./docker-compose.yml up -d
    if [[ $? -eq 0 ]]; then
        echo "Executed docker-compose successfully on ${HOST_APP_HOME}"
    else
        echo "Failed to start container on ${HOST_APP_HOME}. Failed command: docker-compose -f ${DOCKER_CONF_FILE} up -d"
        printErrorFinish
    fi
}
The docker-compose command fails and it clearly prints this message
exited with code 1
But my script does not capture it, and the first condition (-eq 0) gets executed. Why can't it capture this error, and why does it consider the command successful?
I think the exit status of docker-compose doesn't really make sense on its own. It is in charge of running multiple other containers; the exit status you see printed is probably from one of those containers.
Based on what your docker-compose file is doing, you can use the --exit-code-from option to make docker-compose up return the exit code of a selected service. You can also add a health-check mechanism for the desired services in order to know which ones are running and which are not (a service which is deployed successfully doesn't return any value, but can be checked with a health check).
You can read about --exit-code-from here.
Sorry that I don't know a better way.
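For concreteness, a sketch of the invocation (the service name tests is an assumption about your compose file, not something from the question):

```shell
# --exit-code-from makes `docker-compose up` exit with the status of the
# named service; it also implies --abort-on-container-exit.
docker-compose -f ./docker-compose.yml up --exit-code-from tests
echo "exit status of the tests service: $?"
```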

how to check if condition more than once in Shell Script?

I am creating a shell script which checks the status of a service running on another machine and, if it doesn't get any response, performs some operation on the local system. I am using an if clause in the script for this task.
Sometimes, due to the network connection, it falsely assumes that the remote server is not responding and performs the tasks inside the if clause. I want to set up a retry so that the script checks the condition more than once when it doesn't find any response on the first attempt.
Is there any way to set up retry behaviour in a shell script for this purpose?
Below is a sample code
RSI1_STATUS=$(psql -U username -h serverip -d postgres -t -c "select version();" )
if [ -z "$RSI1_STATUS" ] # condition will be true if the remote server is not active
then
    touch /tmp/postgresql.trigger
fi
Now I want to check the condition more than once when it comes out true on the first attempt.
You could add a retry loop with a number of retries using a while loop:
retries=5
while ! check_network_connection && ((--retries)); do
    sleep 1 # or probe the network, etc.
done
if [[ $retries -eq 0 ]]; then
    echo "Error: Connection retries exhausted."
else
    : # connection succeeded.
fi
Whether you want to sleep or do something else depends on your usage and the application.
Note: The "network connection" might have succeeded after checking in the loop. So if retries is 0, it doesn't necessarily mean that the connection is still down.
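The same pattern can be wrapped into a small reusable helper. A sketch (check_network_connection above, and the psql probe from the question, are just commands you would pass in; the helper itself is generic):

```shell
# retry ATTEMPTS DELAY CMD...
# Runs CMD up to ATTEMPTS times, sleeping DELAY seconds between tries.
# Returns 0 on the first success, 1 if every attempt fails.
retry() {
  attempts=$1
  delay=$2
  shift 2
  i=1
  while [ "$i" -le "$attempts" ]; do
    "$@" && return 0
    i=$((i + 1))
    sleep "$delay"
  done
  return 1
}

# Hypothetical usage with the asker's probe (credentials are placeholders):
# retry 3 5 sh -c '[ -n "$(psql -U username -h serverip -d postgres -t -c "select version();")" ]' \
#   || touch /tmp/postgresql.trigger
```

Factoring the loop out this way keeps the retry policy (attempts, delay) separate from the check itself.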

Script for checking if the created AMI is available or not

I am trying to write a script where we take a backup of an AMI (Amazon Machine Image) and, once it's completed and its status shows 'Available', it emails us informing us of the same.
I have got the first part covered, but I am having problems with the second part, i.e. continuously checking for when the image is available and then emailing us. To check for the 'available' status, I am using the following command:
/usr/bin/aws ec2 describe-images --image-ids=$AMI_ID --query "Images[*].{st:State}" | grep -e "available" | wc -l
This will return 1 when the AMI is available, but I am having trouble creating a loop which runs the above command continuously and checks whether the output is equal to 1.
Please help in figuring out this loop.
P.S. Image creation takes anywhere from 10 to 30 minutes, or even more in some cases.
You could use an infinite loop:
while true
do
    if /usr/bin/aws ec2 describe-images --image-ids "$AMI_ID" --query "Images[*].{st:State}" | grep -q "available"; then
        break
    fi
    sleep 30 # avoid hammering the API
done
Note that grep -q is used so the if tests grep's exit status directly, rather than piping through wc -l (which always exits 0, whatever the count).
You can try like below as well (update sleepTime as needed).
Notice I've added the flag --executable-users self to your command to list the images available to you.
sleepTime=60 # sleep time in seconds
while true ; do
    count=$(aws ec2 describe-images --executable-users self --query "Images[*].{st:State}" | grep -e "available" | wc -l)
    if [[ $count == 1 ]] ; then
        echo "Image is ready... Add your emailing code here"
        exit 0
    fi
    sleep $sleepTime
    printf "."
done
From the AWS image-exists docs, you could include the following code in the script:
aws ec2 wait image-exists \
    --image-ids ami-0abcdef1234567890
(replace ami-0abcdef1234567890 with the AMI ID you are looking for)
According to the documentation, image-exists has no output, but once the image is found, the wait command returns and the script continues.
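Since the question is about the 'Available' state specifically (an image can exist while still pending), the CLI's image-available waiter may be the closer fit. A sketch, with a placeholder mail address and AMI_ID taken from the question:

```shell
# Blocks (polling internally) until the AMI's state is "available",
# then sends the notification mail. The address is a placeholder.
aws ec2 wait image-available --image-ids "$AMI_ID" \
  && echo "AMI $AMI_ID is now available" | mailx -s "AMI ready" you@example.com
```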

Notify via email if something goes wrong in the shell script

fileexist=0
mv /data/Finished-HADOOP_EXPORT_&Date#.done /data/clv/daily/archieve-wip/
fileexist=1
--some other script below
Above is the shell script I have, in which, inside the for loop, I am moving some files. I want to notify myself via email if something goes wrong in the moving process. As I am running this script on the Hadoop cluster, it is possible that the cluster goes down while this is running, etc. So how can I have a better error-handling mechanism in this shell script? Any thoughts?
Well, at least you need to know what you expect to go wrong. Based on that, you can do this:
mv ..... 2> err.log
if [ $? -ne 0 ]
then
    cat ./err.log | mailx -s "Error report" admin@abc.com
    rm ./err.log
fi
Or, as William Pursell suggested, use:
trap 'rm -f err.log' 0; mv ... 2> err.log || < err.log mailx ...
mv may return a non-zero return code upon error, and $? holds that code. If the entire server goes down, then unfortunately this script doesn't run either, so that case is better left to more advanced monitoring tools, such as Foglight, running on a different monitoring server. For more basic checks, you can use the method above.
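The check can also be wrapped in a function so every move in the loop gets the same error handling. A sketch: mail_error is a hypothetical stand-in for the mailx invocation above, so the flow is self-contained.

```shell
# mail_error: hypothetical stand-in for `mailx -s "Error report" admin@abc.com`.
mail_error() { echo "would mail: $*"; }

# safe_mv SRC DST: moves a file, capturing stderr; on failure it reports the
# error text and returns non-zero so the caller can react.
safe_mv() {
  err=$(mv "$1" "$2" 2>&1)
  if [ $? -ne 0 ]; then
    mail_error "mv $1 -> $2 failed: $err"
    return 1
  fi
}
```

Calling safe_mv for each file in the loop means one failed move is mailed immediately instead of being lost among later output.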