bash script does not capture exit code 1 properly - linux

I have a bash script in which I start a Docker container. The container fails to start due to an error and clearly reports exit code 1. This is the script I use to run the docker-compose command:
startContainer(){
    echo "change directory to ..."
    cd "..."
    docker-compose -f ./docker-compose.yml up -d
    if [[ $? -eq 0 ]]; then
        echo "Executed docker-compose successfully on ${HOST_APP_HOME}"
    else
        echo "Failed to start container on ${HOST_APP_HOME}. Failed command: docker-compose -f ${DOCKER_CONF_FILE} up -d"
        printErrorFinish
    fi
}
The docker-compose command fails and clearly prints this message:
exited with code 1
But my script does not capture it, and the first branch (-eq 0) is executed. Why can't it capture this error, and why does it treat the command as successful?

I think the exit status of docker-compose doesn't really make sense on its own. It is in charge of running multiple other containers; the exit status you see printed is probably from one of those containers.
Depending on what your docker-compose file is doing, you can use the --exit-code-from option to have docker-compose return the exit code of a chosen service. You can also add a health-check mechanism for the services you care about, so you know which ones are running and which are not (a service that is deployed successfully doesn't return any value, but can be checked with a health check).
You can read about --exit-code-from here.
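For illustration, a rough sketch of how --exit-code-from might be used here (the service name app is a placeholder for whichever service is defined in your docker-compose.yml; note that the flag runs compose in the foreground, so it is not combined with -d):
# Return the exit status of the "app" service instead of docker-compose's own.
docker-compose -f ./docker-compose.yml up --exit-code-from app
echo "app service exited with: $?"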
Sorry that I don't know a better way.

Related

Finish background process when next process is completed

Hi all,
I am trying to implement automated test running from a Makefile target. As my tests depend on a running Docker container, I need to check that the container is up during the whole test execution and restart it if it goes down. I am trying to do this with a bash script running in the background.
At a glance, the code looks like this:
run-tests:
	./check_container.sh & \
	docker-compose run --rm tests; \
	# Need to finish check_container.sh right here, after tests execution
	RESULT=$$?; \
	docker-compose logs test-container; \
	docker-compose kill; \
	docker-compose rm -fv; \
	exit $$RESULT
Test execution time varies (from 20 min to 2 hrs), so I don't know in advance how long it will take. So I poll within the script for longer than the longest test suite. The script looks like:
#!/bin/bash
time=0
while [ $time -le 5000 ]; do
    num=$(docker ps | grep selenium--standalone-chrome -c)
    if [[ "$num" -eq 0 ]]; then
        echo 'selenium--standalone-chrome container is down!'
        echo 'try to recreate'
        docker-compose up -d selenium
    elif [[ "$num" -eq 1 ]]; then
        echo 'selenium--standalone-chrome container is up and running'
    else
        docker ps | grep selenium--standalone-chrome
        echo 'more than one selenium--standalone-chrome container is up and running'
    fi
    time=$(($time + 1))
    sleep 30
done
So I am looking for a way to exit the script exactly when the test run is finished, i.e. after the docker-compose run --rm tests command completes.
P.S. It is also OK if the background process can be finished when the Makefile target finishes.
Docker (and Compose) can restart containers automatically when they exit. If your docker-compose.yml file has:
version: '3.8'
services:
  selenium:
    restart: unless-stopped
Then Docker will do everything your shell script does. If it also has
services:
  tests:
    depends_on:
      - selenium
then the docker-compose run tests line will also cause the selenium container to be started, and you don't need a script to start it at all.
When you launch a command in the background, the special parameter $! contains its process ID. You can save this in a variable and later kill(1) it.
In plain shell-script syntax, without Make-related escaping:
./check_container.sh &
CHECK_CONTAINER_PID=$!
docker-compose run --rm tests
RESULT=$?
kill "$CHECK_CONTAINER_PID"
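Folded back into the Makefile (with Make's $$ escaping and the same backslash continuations the original target uses; recipe lines must start with a tab), a rough sketch might look like this:
run-tests:
	./check_container.sh & \
	CHECK_CONTAINER_PID=$$!; \
	docker-compose run --rm tests; \
	RESULT=$$?; \
	kill "$$CHECK_CONTAINER_PID"; \
	docker-compose logs test-container; \
	docker-compose kill; \
	docker-compose rm -fv; \
	exit $$RESULT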

Fail Gitlab pipeline based on output

I am using a docker container (sslyze) in a GitLab pipeline for some testing. The pipeline always succeeds, but I would like it to fail if the container ever reports a "FAIL" in its output. Currently, if a "FAIL" appears in the terminal output, an exit code of 0 is still returned (since the scan itself worked), so GitLab passes the pipeline.
I am new to GitLab but familiar with Jenkins, and in Jenkins you could fail the job based on the terminal output using Text Finder. Is there a similar concept in GitLab?
Thanks to #secustor in the comments for pointing me to a similar question. I was hoping for some native functionality within GitLab but I couldn't find any.
Instead, I queried the container logs and the exit code, then set an exit code of 1 depending on the outcome.
Within the .gitlab-ci.yml (I had problems splitting the logic across multiple lines, so it's all jammed into one line):
script:
  - docker run --name containername nablac0d3/sslyze --regular $URL
  - if [[ "$(docker logs containername >& container-logs ; cat container-logs | grep 'FAIL' | wc -l)" -gt 0 ]] || [[ "$(docker container wait containername)" -eq 1 ]]; then exit 1; fi
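For reference, the same check can lean on grep's own exit status instead of counting matches, and treat any non-zero container exit code (not just 1) as a failure; a rough shell sketch using the same container and log-file names as above:
# Fail the job if the logs mention FAIL, or if the container itself exited non-zero.
docker logs containername > container-logs 2>&1
if grep -q 'FAIL' container-logs; then exit 1; fi
if [ "$(docker container wait containername)" -ne 0 ]; then exit 1; fi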

Wifi disconnected before init.d script is run

I've set up a simple init.d script, "S3logrotate", to run on shutdown. The script works fine when run manually from the command line, but it does not function correctly on shutdown.
The script uploads logs from my PC to an Amazon S3 bucket and requires wifi to run correctly.
Debugging showed that the script is actually run, but the upload process fails.
The problem seems to be that the script runs after wifi is terminated.
These are the blocks I used to test my internet connection in the script.
if ping -q -c 1 -W 1 8.8.8.8 >/dev/null; then
    echo "IPv4 is up" >> x.txt
else
    echo "IPv4 is down" >> x.txt
fi
if ping -q -c 1 -W 1 google.com >/dev/null; then
    echo "The network is up" >> x.txt
else
    echo "The network is down" >> x.txt
fi
The output for this block is:
IPv4 is down
The network is down
Is there any way to set the priority of an init.d script? As in, can I make my script run before the network connection is terminated? If not, is there any alternative to init.d?
I use Ubuntu 16.04 and have dual booted with Windows 10 if that's significant.
Thanks,
sganesan7
You should place your script in:
/etc/NetworkManager/dispatcher.d/pre-down.d
and change its group and owner to root:
chown root:root S3logrotate
and it should work. If you need to do this for a separate interface, create a script inside
/etc/NetworkManager/dispatcher.d/
and name it (for example):
wlan0-down
and it should work too.
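A rough skeleton of such a dispatcher script is sketched below; the upload command is a placeholder for whatever S3logrotate actually does. NetworkManager invokes dispatcher scripts with the interface name as $1 and the event as $2, and the pre-down event fires while the connection is still available, so the upload can complete.
#!/bin/bash
# Sketch of /etc/NetworkManager/dispatcher.d/pre-down.d/S3logrotate
# $1 = interface name, $2 = event (up, down, pre-down, ...)
IFACE="$1"
EVENT="$2"

case "$EVENT" in
    pre-down|down)
        # Placeholder for the actual S3 upload the original script performs.
        /usr/local/bin/s3-upload-logs.sh >> /var/log/S3logrotate.log 2>&1
        ;;
esac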

Running tests in a container on Travis

While building my application on Travis, I am trying to run the tests within a Docker container. The container starts and the tests are run, and when I log the container output I can see they have passed. It is my understanding that I can grep for this, as seen below. So this is my Travis script:
script:
  - docker-compose up -d
  - docker logs dockertestapp_app_1
  - docker logs 2>&1 dockertestapp_app_1 | grep -q 'npm info ok'
I just want to grep the output of the container logs to see whether or not the tests pass but it always fails. Am I missing something simple?
Thank you in advance!
In order to avoid the 60-second sleep you described in your comment, start your tests manually by doing something like this:
docker exec -it dockertestapp_app_1 bash -c 'tests.py > /proc/1/fd/1'
Note I'm executing a test file (in this example, tests.py) and sending its output to /proc/1/fd/1, the container's main process stdout. This way you can grep the expression that means your tests passed, as you are currently doing.
TIP: you may not need to send the output to /proc/1/fd/1 for docker logs at all, since your test script can return a non-zero exit code to indicate that the tests failed. That way you don't even need the grep line in your script.
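A rough sketch of that tip, keeping the tests.py name from the example above (the actual interpreter and path will depend on your image): run the tests in the foreground and let their exit status fail the build, so neither /proc/1/fd/1 nor the grep line is needed.
docker-compose up -d
# If tests.py exits non-zero, docker exec propagates that code and Travis fails the job.
docker exec dockertestapp_app_1 bash -c 'tests.py'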

linux sftp: file transfer error

I have a small problem regarding sftp.
I have a script which simply transfers a file to a remote SFTP server, but when the script runs, the sftp step fails and the script exits.
I then have to transfer the file manually, using the same command as in the script, and it works fine.
So my problem is that the sftp command runs smoothly when I run it manually but fails when the same command is run from the script.
This is the code I'm using:
sftp -v -b sftp_input.txt UserId#aa.bb.cc.dd
if (($? > 0 ));
then
    echo "sftp error. Exiting.."
    exit
fi
where sftp_input.txt contains the commands to put the file on the remote server.
Please advise.
The script can't work because it's malformed. You forgot to separate the if statement and also forgot the closing fi. Here's the correct form for your script:
sftp -v -b sftp_input.txt UserId#aa.bb.cc.ddd
if (($? > 0 )); then
    echo "sftp error. Exiting.."
    exit
fi
If you want it all in one line, then:
sftp -v -b sftp_input.txt UserId#aa.bb.cc.ddd; if (($? > 0 )); then echo "sftp error. Exiting.."; exit; fi
But as you can see, it's a bad idea. It's better to write readable, well-indented code.
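As a side note, with -b sftp runs in batch mode and aborts with a non-zero exit status as soon as a command such as put fails, which is what the if check relies on. A minimal sketch of the batch file (the paths here are hypothetical placeholders, since the original file was not shown):
# Hypothetical contents of sftp_input.txt:
put /local/path/report.csv /remote/inbox/
bye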
