Is there a way to make a bitbucket pipeline step wait until some condition - bitbucket-pipelines

I have a bitbucket pipeline step that looks something like the following:
- docker run [...]
- sleep 5
- ./bin/main
However, sometimes the docker container takes a little longer to get going, which causes main to fail. I could increase the time in the sleep statement, but there is no guarantee it would be enough, and excessively long times would use up build minutes needlessly. Is there anything like wait until {ping localhost:4444} that I could use to replace that sleep statement?

Consider adding a while loop that curls localhost:4444, so that the step only proceeds once curl succeeds; each attempt gives up after 60 seconds with curl's timeout exit code (28), and the loop retries until the service answers:
- docker run [...]
- while ! curl -s -o /dev/null --connect-timeout 60 localhost:4444; do echo Waiting for Container; sleep 1; done
- ./bin/main
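
If you also want the step to fail fast instead of looping forever when the container never comes up, one variation (a sketch; the 60-second budget is an assumption, adjust it to your service) is to bound the whole loop with timeout:
- docker run [...]
- timeout 60 bash -c 'until curl -s -o /dev/null localhost:4444; do echo "Waiting for Container"; sleep 1; done'
- ./bin/main
If the service never answers, timeout kills the loop after 60 seconds and the step fails with exit code 124.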

Related

How do I set up two curl commands to execute at different times forever?

For example, I want to run one command every 10 seconds and the other command every 5 minutes. I can only get the first one to log properly to a text file. Below is the shell script I am working on:
echo "script Running. Press CTRL-C to stop the process..."
while sleep 10;
do
curl -s -I --http2 https://www.ubuntu.com/ >> new.txt
echo "------------1st command--------------------" >> logs.txt;
done
||
while sleep 300;
do
curl -s -I --http2 https://www.google.com/
echo "-----------------------2nd command---------------------------" >> logs.txt;
done
I would advise you to go with @Marvin Crone's answer, but researching cron jobs and background processes doesn't seem like the kind of hassle I would go through for this little script. Instead, try putting both loops into separate scripts, like so:
script1.sh
echo "job 1 Running. Type fg 1 and press CTRL-C to stop the process..."
while sleep 10;
do
    echo $(curl -s -I --http2 https://www.ubuntu.com/) >> logs.txt;
done
script2.sh
echo "job 2 Running. Type fg 2 and press CTRL-C to stop the process..."
while sleep 300;
do
    echo $(curl -s -I --http2 https://www.google.com/) >> logs.txt;
done
Adding executable permissions:
chmod +x script1.sh
chmod +x script2.sh
And last but not least, running them:
./script1.sh & ./script2.sh &
This creates two separate jobs in the background, which you can bring to the foreground by typing:
fg (1 or 2)
You can stop them with CTRL-C, or suspend them with CTRL-Z and resume them in the background with bg.
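If you would rather start and stop both loops with a single command, a small wrapper script works too; this is just a sketch, assuming script1.sh and script2.sh sit in the current directory (run_both.sh is a made-up name):
#!/bin/bash
# run_both.sh: start both pollers in the background and remember their PIDs
./script1.sh &
PID1=$!
./script2.sh &
PID2=$!
# Forward CTRL-C / termination to both children so one keypress stops everything
trap 'kill "$PID1" "$PID2"' INT TERM
# Block here; the loops are infinite, so this returns only once they are killed
wait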
I think what is happening is that you start the first loop. Your first loop needs to complete before the second loop will start. But the first loop is designed to be infinite.
I suggest you put each curl loop in a separate batch file.
Then, you can run each batch file separately, in the background.
I offer two suggestions for you to investigate for your solution.
One, research the use of crontab and set up a cron job to run the batch files.
Two, research the use of nohup as a means of running the batch files.
I strongly suggest you also research the means of monitoring the jobs and knowing how to terminate the jobs if anything goes wrong. You are setting up infinite loops. A simple Control C will not terminate jobs running in the background. You are treading in areas that can get out of control. You need to know what you are doing.
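As a concrete sketch of the nohup route (script names reused from the answer above; the PID-file convention is my own addition):
# Start the scripts immune to hangups, remembering their PIDs
nohup ./script1.sh >/dev/null 2>&1 &
echo $! > script1.pid
nohup ./script2.sh >/dev/null 2>&1 &
echo $! > script2.pid
# Check whether a job is still alive (kill -0 only tests for existence)
kill -0 "$(cat script1.pid)" && echo "script1 is still running"
# Terminate the jobs when you are done with them
kill "$(cat script1.pid)" "$(cat script2.pid)"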

Stopping a started background service (phantomjs) in gitlab-ci

I'm starting phantomjs with specific arguments as part of my job.
This is running on a custom gitlab/gitlab-ci server. I'm currently not using containers; I guess that would simplify this.
I'm starting phantomjs like this:
- "timeout 300 phantomjs --ssl-protocol=any --ignore-ssl-errors=true vendor/jcalderonzumba/gastonjs/src/Client/main.js 8510 1024 768 2>&1 >> /tmp/gastonjs.log &"
Then I'm running my behat tests, and then I'm stopping that process again:
- "pkill -f 'src/Client/main.js' || true"
The problem is that when a behat test fails, the pkill never executes, and the test run is stuck waiting on phantomjs to finish. I already added the timeout 300, but that means after a failure I'm still waiting 2 minutes or so, and once the tests get slow enough, the timeout will eventually stop phantomjs while they are still running.
I haven't found a way to run some kind of post-run/cleanup command that also runs in case of fails.
Is there a better way to do this? Can I start phantomjs in a way that gitlab-ci doesn't care that it is still running? nohup maybe?
TL;DR: spawn the process in the background with &, but then you have to make sure the process is killed in both successful and failed builds.
I use this (with comments):
'E2E tests':
  before_script:
    - yarn install --force >/dev/null
    # if there is already an instance running, kill it - this is OK in my case, as this is not run very often
    - /bin/bash -c '/usr/bin/killall -q lite-server; exit 0'
    - export DOCKERHOST=$(ifconfig | grep -E "([0-9]{1,3}\\.){3}[0-9]{1,3}" | grep -v 127.0.0.1 | awk '{ print $2 }' | cut -f2 -d ':' | head -n1)
    - export E2E_BASE_URL="http://$DOCKERHOST:8000/#."
    # start the lite-server in a new process
    - lite-server -c bs-config.js >/dev/null &
  script:
    # run the tests
    - node_modules/.bin/protractor ./protractor.conf.js --seleniumAddress="http://localhost:4444/wd/hub" --baseUrl="http://$DOCKERHOST:8000" --browser chrome
    # on a successful run, kill the lite-server
    - killall lite-server >/dev/null
  after_script:
    # when a test fails, try to kill lite-server here as well. This looks rather complicated, but it makes sure the build doesn't fail when the tests succeed and the lite-server is already killed. To get a successful build we ensure a non-error return code (exit 0).
    - /bin/bash -c '/usr/bin/killall -q lite-server; exit 0'
  stage: test
  dependencies:
    - Build
  tags:
    - selenium
https://gist.github.com/rufinus/9ee8f04fc1f9248eeb0c73ad5360a006#file-gitlab-ci-yml-L7
As hinted, my problem wasn't that I couldn't kill the process; it's that a failing test script stopped execution at that point, resulting in a deadlock.
I was already doing something quite similar to the example from @Rufinus, but it just didn't work for me. There could be a few reasons, like a different way of running the tests, or starting phantomjs in before_script, which is not an option for me.
I did find a way to make it work, which was to prevent my test runner from stopping the execution of further tasks. I managed that with a set +e and then storing the exit code (something I had tried before without success).
This is the relevant part from my job:
# Set option to prevent gitlab from stopping if behat fails.
- set +e
- "phantomjs --ssl-protocol=any --ignore-ssl-errors=true vendor/jcalderonzumba/gastonjs/src/Client/main.js 8510 1024 768 2>&1 >> /dev/null &"
# Store the exit code.
- "./vendor/bin/behat -f progress --stop-on-failure; export TEST_BEHAT=${PIPESTATUS[0]}"
- "pkill -f 'src/Client/main.js' || true"
# Exit the build
- if [ $TEST_BEHAT -eq 0 ]; then exit 0; else exit 1; fi
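
An alternative sketch, assuming all script lines of the job run in one bash shell (untested on the asker's runner): register the pkill as an EXIT trap, so the cleanup fires whether behat passes or fails, and no manual exit-code bookkeeping is needed:
# Kill phantomjs on any exit, success or failure; the job keeps behat's exit code
- trap "pkill -f 'src/Client/main.js' || true" EXIT
- "phantomjs --ssl-protocol=any --ignore-ssl-errors=true vendor/jcalderonzumba/gastonjs/src/Client/main.js 8510 1024 768 >/dev/null 2>&1 &"
- ./vendor/bin/behat -f progress --stop-on-failure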
Try the -9 (SIGKILL) signal:
- "pkill -9 -f 'src/Client/main.js' || true"
You can try other signals as well; you can list them by running kill -l.
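If you want to give the process a chance to shut down cleanly first, a common escalation pattern looks like this (a sketch; the 5-second grace period is arbitrary):
- "pkill -TERM -f 'src/Client/main.js' || true"
- "sleep 5"
- "pkill -KILL -f 'src/Client/main.js' || true"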

Linux - Run script after time period expires

I have a small NodeJS script that does some processing. Depending on the amount of data to be processed, this can take anywhere from a couple of seconds to several hours.
What I want to do is schedule this command to run every hour after the previous attempt has completed. I'm wary of using something like cron because I need to ensure that two instances of the script aren't running at the same time.
If you really don't like cron (or at) you can just use a simple bash script:
#!/bin/bash
while true
do
    # Do something
    echo Invoke long-running node.js script
    # Wait an hour
    sleep 3600
done
The (obvious) drawback is that you will have to make it run in the background somehow (i.e. via nohup or screen) and add proper error handling (given that your script might fail, and you still want it to run again in an hour).
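For instance, a sketch of running it unattended (hourly-loop.sh is a hypothetical name for the script above):
# Survive logout and append all output to a log for later inspection
nohup ./hourly-loop.sh >>/var/log/hourly.log 2>&1 &
Inside the loop, invoking the job as node script.js || echo "run failed at $(date)" would at least record failed runs in the same log.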
A bit more elaborate "custom script" solution might look like this:
#!/bin/bash
# Settings
LAST_RUN_FILE=/var/run/lock/hourly.timestamp
FLOCK_LOCK_FILE=/var/run/lock/hourly.lock
FLOCK_FD=100
# Minimum time to wait between two job runs
MIN_DELAY=3600

# Welcome message, parameter check
if [ -z "$1" ]
then
    echo "Please specify the command (job) to run, as follows:"
    echo "./hourly COMMAND"
    exit 1
fi
echo "[$(date)] MIN_DELAY=$MIN_DELAY seconds, JOB=$@"

# Set an exclusive lock, or skip execution if it is already set
eval "exec $FLOCK_FD>$FLOCK_LOCK_FILE"
if ! flock -n $FLOCK_FD
then
    echo "Lock is already set, skipping execution."
    exit 0
fi

# Last run timestamp
if ! [ -e $LAST_RUN_FILE ]
then
    echo "Timestamp file ($LAST_RUN_FILE) is missing, creating a new one."
    echo 0 >$LAST_RUN_FILE
fi

# Compute delay, and wait
let DELAY="$MIN_DELAY-($(date +%s)-$(cat $LAST_RUN_FILE))"
if [ $DELAY -gt 0 ]
then
    echo "Waiting for $DELAY seconds, before proceeding..."
    sleep $DELAY
fi

# Proceed with the actual task
echo "[$(date)] Running the task..."
echo
"$@"

# Update the last run timestamp
echo
echo "Done, going to update the last run timestamp now."
date +%s >$LAST_RUN_FILE
This will do two things:
Set an exclusive execution lock (with flock), so that no two instances of the job will run at the same time, regardless of how you start them (manually, via cron, etc.);
If the last job completed less than MIN_DELAY seconds ago, it will sleep for the remaining time before running the job again.
Now, if you schedule this script to run, say, every 15 minutes with cron, like this:
*/15 * * * * /home/myuser/hourly my_periodic_task and its arguments
it will be guaranteed to execute with a delay of at least MIN_DELAY (one hour) since the last job completed, and any intermediate runs will be skipped.
In the worst case, it will execute in MIN_DELAY + 15 minutes (as the scheduling period is discrete), but never earlier than that.
Other non-cron scheduling methods should work too (e.g. just running this script in a loop, or re-scheduling each run with at).
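For the at variant, a minimal sketch is to let the job re-queue itself at the end of each run (path reused from the cron example above):
# Last line of the job: schedule the next run one hour from now
echo "/home/myuser/hourly my_periodic_task" | at now + 1 hour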
You can use cron and add process.exit(0) to your node script.
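A sketch of how that could look as a crontab entry, combining it with the flock idea from the answer above so overlapping runs are skipped (all paths here are hypothetical):
# Run hourly; flock -n skips this run if the previous one still holds the lock
0 * * * * /usr/bin/flock -n /tmp/node-job.lock /usr/bin/node /home/myuser/process.js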

Linux infinite loop with background

I have a little script called "CheekyScript.sh" that looks something like this:
#!/bin/bash
nohup mvn run_something_pretty_long
This clearly works fine, as it starts a long process in the background that continues running after the session has expired and the user has logged out.
What I wish to achieve is pretty simple: introduce a little infinite loop, so that this process is run over and over again, but only AFTER the previous nohup'd run has completed. Of course, I still want this entire bash script, and the nohup within it, to keep running long after the session has expired and I'm logged out.
I was thinking something similar:
#!/bin/bash
while true
do
nohup mvn run_something_pretty_long
sleep 60
done
Obviously, what this does is start the nohup process every 60 seconds. The desired behaviour would be to wait for the nohup'd command to finish, wait a minute, and then start the loop again.
I was wondering what is the best practice solution for something like this?
Thank you very much in advance.
Use crontab and add an entry like this:
* * * * * /path/to/something
In the something script:
#!/bin/bash
LOCKFILE=/var/lock/mvn.lock
[ -f $LOCKFILE ] && exit 0
# Upon exit, remove lockfile.
trap "{ rm -f $LOCKFILE ; exit 255; }" EXIT
touch $LOCKFILE
mvn run_something_pretty_long
exit 0
This tries to run the script once a minute and mostly fails, because the lockfile exists; but if the previous run has finished, the lockfile is gone and it starts again.
By default, cron emails all output to the user that owns the job.
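If you don't want that mail, the usual approach is to redirect the job's output in the crontab entry itself:
# Discard output (or point it at a log file instead of /dev/null)
* * * * * /path/to/something >/dev/null 2>&1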
You want to run your long-running script either once or repeatedly, and you want both to survive logout via nohup. Since you already have one script that handles the first (run once) case, make two copies of your CheekyScript.sh. The first one runs once, and the second you edit to run repeatedly (and it can optionally check for a done condition).
This one runs once,
#!/bin/bash
#CheekyScriptOnce.sh
nohup mvn run_something_pretty_long
This one runs repeatedly,
#!/bin/bash
#CheekyRepeat.sh
thing="mvn run_something_pretty_long"
delay=60;
nohup bash -c "while true; do $thing; sleep $delay; done" &
But you want some way to signal done. A control file can handle that,
#!/bin/bash
#CheekyRepeatConditional.sh
thing="mvn run_something_pretty_long"
delay=60;
if [ ! -d etc ] ; then mkdir etc; fi
touch etc/Cheeky.run
nohup bash -c "while [ -f etc/Cheeky.run ]; do $thing; sleep $delay; done" &
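Usage sketch for the conditional variant (paths as in the script above):
# Start the looper; the nohup inside keeps it alive after logout
./CheekyRepeatConditional.sh
# Later, to stop it after the current mvn run finishes:
rm etc/Cheeky.run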

Bash script how to sleep in new process then execute a command

So, I was wondering if there was a bash command that lets me fork a process which sleeps for several seconds, then executes a command.
Here's an example:
sleep 30 'echo executing...' &
^This doesn't actually work (because the sleep command only takes the time argument), but is there something that could do something like this? So, basically, a sleep command that takes a time argument and something to execute when the interval is completed? I want to be able to fork it into a different process then continue processing the shell script.
Also, I know I could write a simple script that does this, but due to some constraints of the situation (I'm actually passing this through an ssh call), I'd rather not do that.
You can do
(sleep 30 && command ...)&
Using && is safer than ; because it ensures that command ... will run only if the sleep timer expires.
You can invoke another shell in the background and make it do what you want:
bash -c 'sleep 30; do-whatever-else' &
The default unit for sleep is seconds, so the above would sleep for 30 seconds. You can specify other intervals, like 30m for 30 minutes, 1h for 1 hour, or 3d for 3 days.
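Since the asker mentioned passing this through an ssh call, a sketch of that one-liner (user@host and some-command are placeholders):
# nohup plus the redirections keep ssh from hanging around waiting for the remote command
ssh user@host "nohup bash -c 'sleep 30 && some-command' >/dev/null 2>&1 &"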
