shell script put result of curl in variable followed by sleep command [duplicate] - linux

This question already has answers here:
Assign variable in the background shell
(2 answers)
Closed 1 year ago.
I want to trigger a curl request every 400 ms in a shell script and put each result in a variable, and after the requests finish (e.g. 10 requests) finally write all results to a file. I use the following code for this purpose:
result="$(curl --location --request GET 'http://localhost:8087/say-hello')" & sleep 0.400;
Because & creates a new process, the result never reaches the variable in my script. So what should I do?

You can use the -m curl option instead of the sleep.
-m, --max-time <seconds>
Maximum time in seconds that you allow the whole operation to
take. This is useful for preventing your batch jobs from hanging
for hours due to slow networks or links going down. See also
the --connect-timeout option.
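Applied to the question's command, that might look like this (a sketch; -m 0.4 caps the whole request at 400 ms, matching the desired interval):
result="$(curl -m 0.4 --location --request GET 'http://localhost:8087/say-hello')"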

The difference can be seen in the following sequence of commands:
a=1; a=$(echo 2) ; sleep 1; echo $a
2
and with a background process
a=1; a=$(echo 2) & sleep 1; echo $a
[1] 973
[1]+ Done a=$(echo 2)
1
Why is a not changed in the second case?
Actually it is changed... in a new environment. The & creates a new process with its own a, and that a is assigned the value 2. When the process finishes, the variable a of that subprocess is deleted, and you are left with only the original value of a.
Depending on your requirements, you might want to make a result directory, have every background curl process write to a different temporary file there, wait with wait until all curls are finished, and then collect your results.
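A minimal sketch of that approach, reusing the URL from the question (the directory layout, file names, and request count of 10 are illustrative assumptions):
# create a scratch directory for the per-request result files
resultdir=$(mktemp -d)
for i in $(seq 1 10); do
    # each background curl writes to its own temporary file
    curl --silent --location --request GET 'http://localhost:8087/say-hello' \
        > "$resultdir/result_$i.txt" &
    sleep 0.4
done
# block until every background curl has finished
wait
# collect all results into a single file and clean up
cat "$resultdir"/result_*.txt > all_results.txt
rm -r "$resultdir"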

Related

How do I setup two curl commands to execute at different times forever?

For example, I want to run one command every 10 seconds and the other command every 5 minutes. I can only get the first one to log properly to a text file. Below is the shell script I am working on:
echo "script Running. Press CTRL-C to stop the process..."
while sleep 10;
do
curl -s -I --http2 https://www.ubuntu.com/ >> new.txt
echo "------------1st command--------------------" >> logs.txt;
done
||
while sleep 300;
do
curl -s -I --http2 https://www.google.com/
echo "-----------------------2nd command---------------------------" >> logs.txt;
done
I would advise you to go with @Marvin Crone's answer, but researching cron jobs and background processes doesn't seem like the kind of hassle I would go through for this little script. Instead, try putting both loops into separate scripts, like so:
script1.sh
echo "job 1 Running. Type fg 1 and press CTRL-C to stop the process..."
while sleep 10;
do
echo $(curl -s -I --http2 https://www.ubuntu.com/) >> logs.txt;
done
script2.sh
echo "job 2 Running. Type fg 2 and press CTRL-C to stop the process..."
while sleep 300;
do
echo $(curl -s -I --http2 https://www.google.com/) >> logs.txt;
done
Add executable permissions:
chmod +x script1.sh
chmod +x script2.sh
and, last but not least, run them:
./script1.sh & ./script2.sh &
this creates two separate jobs in the background that you can call by typing:
fg (1 or 2)
and stop them with CTRL-C, or suspend them with CTRL-Z and resume them in the background with bg
I think what is happening is that you start the first loop. Your first loop needs to complete before the second loop will start. But, the first loop is designed to be infinite.
I suggest you put each curl loop in a separate batch file.
Then, you can run each batch file separately, in the background.
I offer two suggestions for you to investigate for your solution.
One, research the use of crontab and set up a cron job to run the batch files.
Two, research the use of nohup as a means of running the batch files.
I strongly suggest you also research the means of monitoring the jobs and knowing how to terminate the jobs if anything goes wrong. You are setting up infinite loops. A simple Control C will not terminate jobs running in the background. You are treading in areas that can get out of control. You need to know what you are doing.
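For illustration, a minimal sketch of both suggestions (the script names follow the answer above; the crontab path is a placeholder, and since cron's granularity is one minute it only fits the 5-minute job):
# nohup keeps the loops running even after you log out
nohup ./script1.sh >/dev/null 2>&1 &
nohup ./script2.sh >/dev/null 2>&1 &
# crontab -e entry: run the 5-minute job with no loop at all
*/5 * * * * /path/to/script2.sh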

Check script double execution [duplicate]

This question already has answers here:
Quick-and-dirty way to ensure only one instance of a shell script is running at a time
(43 answers)
Closed 4 years ago.
I have a bash script, and it sometimes happened that, even though the script was scheduled, it was executed twice. So I added a few lines of code to check whether the script is already running. Initially I had no problem, but in the last three days it has happened again:
PID=`echo $$`
PROCESS=${SL_ROOT_FOLDER}/xxx/xxx/xxx_xxx_$PID.txt
ps auxww | grep $scriptToVerify | grep -v $$ | grep -v grep > $PROCESS
num_proc=`awk -v RS='\n' 'END{print NR}' $PROCESS`
if [ $num_proc -gt 1 ];
then
    sl_log "---------------------------Warning---------------------------"
    sl_log "$scriptToVerify already executed"
    sl_log "num proc $num_proc"
    sl_log "--------"
    sl_log $PROCESS
    sl_log "--------"
    exit 0;
fi
In this way I check how many rows my log has; if the result is more than one, then two processes are running and one will be stopped.
This method doesn't work reliably, though. How can I fix my code to check how many instances of the script are running?
Anything that involves:
read some state information
check results
do action based on results
finish
must do the read, check, and act steps atomically; otherwise there is a "race" condition. For example:
(A) reads state
(A) checks results (ok)
(A) does action (ok)
(A) finishes
(B) reads state
(B) checks results (bad)
(B) does action (bad)
(B) finishes
but if timing is slightly different:
(A) reads state
(A) checks results (ok)
(B) reads state
(B) checks results (ok)
(A) does action (ok)
(B) does action (ok)
(A) finishes
(B) finishes
The usual example people give is updating bank balances.
Using your method, you may be able to reduce the frequency of your code running when it shouldn't, but you can never avoid it entirely.
A better solution is to use locking. This guarantees that only one process can run at a time. For example, using flock, you can wrap all calls to your script:
flock -x /var/lock/myscript-lockfile myscript
Or, inside your script, you can do something like:
exec 300>/var/lock/myscript-lockfile
flock -x 300
# rest of script
flock -u 300
or:
exec 300>/var/lock/myscript-lockfile
if ! flock -nx 300; then
    # another process is running
    exit 1
fi
# rest of script
flock -u 300
The flock(1) man page gives this example:
#!/bin/bash
# Makes sure we exit if flock fails.
set -e
(
# Wait for lock on /var/lock/.myscript.exclusivelock (fd 200) for 10 seconds
flock -x -w 10 200
# Do stuff
) 200>/var/lock/.myscript.exclusivelock
This ensures that the code between "(" and ")" is run by only one process at a time and that the process does not wait too long for the lock.
Credit goes to Alex B.

Shell infinite loop to execute at specific time [duplicate]

This question already has answers here:
Sleep until a specific time/date
(22 answers)
Closed 7 years ago.
I have access to a Linux CentOS box (sadly, I can't use crontab).
When I need to run a task every day, I have just created an infinite loop with a sleep (it runs, sleeps ~24 hours, and then runs again):
#!/bin/sh
while :
do
    /home/sas_api_emailer.sh |& tee first_sas_api
    sleep 1438m
done
Recently I got a task that I need to run at a specific time every day, 6:00 am (again, I can't use crontab).
How can I create an infinite loop that will only execute at 6:00 am?
Check the time in the loop, and then sleep for a minute if it's not the time you want.
while :
do
    if [ $(date '+%H%M') = '0600' ]
    then /home/sas_api_emailer.sh |& tee first_sas_api
    fi
    sleep 60
done
You have (at least!) three choices:
cron
This is hands-down the best choice. Unfortunately, you say it's not an option for you. Drag :(
at
at and batch read commands from standard input or a specified file
which are to be executed at a later time.
For example: at -f myjob noon
Here is more information about at: http://www.thegeekstuff.com/2010/06/at-atq-atrm-batch-command-examples/
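For the question's 6:00 am case, a one-shot sketch using the script path from the question (note that at runs a job once; for a daily run, the job would have to re-queue itself):
# run the script once, at the next 06:00
at -f /home/sas_api_emailer.sh 06:00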
Write a "polling" or "while loop" script. For example:
while true
do
    # Compute wait time
    sleep $wait_time
    # do something
done
Here are some good ideas for "compute wait time": Bash: Sleep until a specific time/date
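For instance, a minimal sketch of "compute wait time" for the 6:00 am case, assuming GNU date (the script invocation is copied from the question):
#!/bin/bash
while true
do
    now=$(date +%s)
    next=$(date -d '06:00' +%s)
    # if 06:00 has already passed today, target tomorrow's 06:00
    if [ "$next" -le "$now" ]; then
        next=$(date -d 'tomorrow 06:00' +%s)
    fi
    sleep $(( next - now ))
    /home/sas_api_emailer.sh |& tee first_sas_api
done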

Is it possible to set time out from bash script? [duplicate]

This question already has answers here:
How do I limit the running time of a BASH script
(5 answers)
Closed 7 years ago.
Sometimes my bash scripts hang and stall for no clear reason,
so they can actually hang forever (the script process runs until I kill it).
Is it possible to build a timeout mechanism into the bash script in order to exit from the program after, for example, half an hour?
This Bash-only approach encapsulates all the timeout code inside your script by running a function as a background job to enforce the timeout:
#!/bin/bash
Timeout=1800 # 30 minutes
function timeout_monitor() {
    sleep "$Timeout"
    kill "$1"
}
# start the timeout monitor in
# background and pass the PID:
timeout_monitor "$$" &
Timeout_monitor_pid=$!
# <your script here>
# kill timeout monitor when terminating:
kill "$Timeout_monitor_pid"
Note that the function will be executed in a separate process. Therefore the PID of the monitored process ($$) must be passed. I left out the usual parameter checking for the sake of brevity.
If you have GNU coreutils, you can use the timeout command:
timeout 1800s ./myscript
To check whether the timeout occurred, check the status code:
timeout 1800s ./myscript
if (($? == 124)); then
    echo "./myscript timed out after 30 minutes" >>/path/to/logfile
    exit 124
fi

Bash script how to sleep in new process then execute a command

So, I was wondering if there was a bash command that lets me fork a process which sleeps for several seconds, then executes a command.
Here's an example:
sleep 30 'echo executing...' &
^This doesn't actually work (because the sleep command only takes the time argument), but is there something that could do something like this? So, basically, a sleep command that takes a time argument and something to execute when the interval is completed? I want to be able to fork it into a different process then continue processing the shell script.
Also, I know I could write a simple script that does this, but due to some constraints of the situation (I'm actually passing this through an ssh call), I'd rather not do that.
You can do
(sleep 30 && command ...)&
Using && is safer than ; because it ensures that command ... will run only if the sleep timer expires.
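Since the question mentions passing this through an ssh call, the same pattern can be wrapped in one (user, host, and the log path are placeholder assumptions; redirecting the subshell's output lets ssh return immediately instead of waiting for the background job):
ssh user@host '(sleep 30 && echo executing... >> /tmp/delayed.log) >/dev/null 2>&1 &'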
You can invoke another shell in the background and make it do what you want:
bash -c 'sleep 30; do-whatever-else' &
The default unit for sleep is seconds, so the above would sleep for 30 seconds. You can specify other units, such as 30m for 30 minutes, 1h for 1 hour, or 3d for 3 days.
