Shell scripting: execute echo statements with a time delay - Linux

Is there any way I can run several echo statements one after the other with a delay?
For example, the first statement would be:
echo Hello1
then, after half a second, the second echo statement would run:
echo Hello2
Likewise, is it possible to run several statements one after the other with a time delay, without all the echoes printing at once?

Perhaps you would like to use sleep <number of seconds>, like sleep 60 to wait for a minute.
E.g., run from the command line:
$ echo 'hello1'; sleep 2; echo 'hello2'
or in a bash script file (myscript.sh)
#!/bin/bash
echo 'hello1'
sleep 2
echo 'hello2 after 2 seconds'
sleep 2
echo 'hello3 after 2 seconds'

echo Hello1
usleep 500000 # sleep 500,000 microseconds
echo Hello2
The usleep(1) command is part of the initscripts package on Fedora.
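usleep isn't universally available, though; GNU coreutils sleep (the default on most Linux systems) also accepts fractional seconds, which covers the half-second delay from the question:

echo Hello1
sleep 0.5   # GNU sleep accepts fractional (floating-point) durations
echo Hello2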

for i in hello1 hello2 hello3; do echo "$i"; sleep 2; done
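The same pattern works for lines read from a file; messages.txt here is just an assumed example file:

while IFS= read -r line; do
    echo "$line"
    sleep 2
done < messages.txt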

Related

How to make multiple commands execute sequentially in atd

echo "echo `date +'%Y%m%d-%H%M%S'`' hello' >> tmp.log;
sleep 3s;
echo `date +'%Y%m%d-%H%M%S'`' world' >> tmp.log" |
at now
I hoped that these three commands would be executed in sequence by atd, but the effect is the opposite: the three commands appear to execute in parallel. How can I execute these three commands in sequence?
atd does run commands sequentially; the problem is that both date commands run at the time you submit the job, because the command substitutions inside the double quotes are expanded immediately. Try:
#!/usr/bin/env bash
echo 'date +"%Y%m%d-%H%M%S hello" > /tmp/at.log;
sleep 3s;
date +"%Y%m%d-%H%M%S world" >> /tmp/at.log' |
at now
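You can confirm the job was queued with atq, and once it has run, the two timestamps in /tmp/at.log should be about three seconds apart. Illustrative output (actual values will differ):

$ atq
42      Tue Jan  2 12:00:00 2024 a user
$ cat /tmp/at.log
20240102-120000 hello
20240102-120003 world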

How to add threading to the bash script?

#!/bin/bash
cat input.txt | while read ips
do
cmd="$(snmpwalk -v2c -c abc#123 $ips sysUpTimeInstance)"
echo "$ips ---> $cmd"
echo "$ips $cmd" >> out_uptime.txt
done
How can I add threading to this bash script? I have around 80,000 inputs and it takes a lot of time.
Simple method. Assuming the order of the output is unimportant, and that snmpwalk's output is of no interest if it fails, put && at the end of each command to be backgrounded, except the last command, which should end with a single &:
#!/bin/bash
while read ips
do
    cmd="$(nice snmpwalk -v2c -c abc#123 $ips sysUpTimeInstance)" &&
    echo "$ips ---> $cmd" &&
    echo "$ips $cmd" >> out_uptime.txt &
done < input.txt
Less simple. If snmpwalk can fail, and its output is needed even then, lose the && and surround the code with curly braces, {}, followed by &. To include standard error in the appended output, use &>>:
#!/bin/bash
while read ips
do {
    cmd="$(nice snmpwalk -v2c -c abc#123 $ips sysUpTimeInstance)"
    echo "$ips ---> $cmd"
    echo "$ips $cmd" &>> out_uptime.txt
} &
done < input.txt
The braces can contain more complex if ... then ... else ... fi statements, all of which would be backgrounded.
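For instance, a sketch of such a backgrounded if/else, using the same assumed snmpwalk command as above:

#!/bin/bash
while read ips
do {
    if out="$(nice snmpwalk -v2c -c abc#123 $ips sysUpTimeInstance)"; then
        echo "$ips $out" >> out_uptime.txt
    else
        echo "$ips snmpwalk FAILED" >> out_uptime.txt
    fi
} &
done < input.txt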
For those who don't have a complex snmpwalk command to test, here's a similar loop that prints one through five, sleeping for a random duration between echo commands:
for f in {1..5}; do
    RANDOM=$f &&
    sleep $((RANDOM/6000)) &&
    echo $f &
done 2> /dev/null | cat
Output will be the same every time (remove the RANDOM=$f && for varying output), and the run takes three seconds:
2
4
1
3
5
Compare that to code without the &&s and &:
for f in {1..5}; do
    RANDOM=$f
    sleep $((RANDOM/6000))
    echo $f
done 2> /dev/null | cat
When run, this code takes seven seconds and produces this output:
1
2
3
4
5
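One caveat for the 80,000-host case: the loops above background every job at once, which can exhaust process limits. A hedged alternative, assuming GNU xargs is available, is to bound the concurrency with -P; the worker body mirrors the loop above:

#!/bin/bash
# Run at most 8 snmpwalk jobs at a time; xargs passes each line of
# input.txt as $1 to the inline worker script.
xargs -a input.txt -n 1 -P 8 sh -c '
    out="$(nice snmpwalk -v2c -c abc#123 "$1" sysUpTimeInstance)" &&
    echo "$1 $out" >> out_uptime.txt
' _

Each appended echo is a single short line, so concurrent appends to out_uptime.txt stay intact.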
You can send tasks to the background with &. If you intend to wait for all of them to finish, you can use the wait command:
process_to_background &
echo Processing ...
wait
echo Done
You can get the PID of a task started in the background from $!, if you want to wait for one (or a few) specific tasks:
important_process_to_background &
important_pid=$!
for i in {1..10}; do
    less_important_process_to_background $i &
done
wait $important_pid
echo Important task finished
wait
echo All tasks finished
One note though: the background processes can mess up the output, since they run asynchronously. You might want to use a named pipe to collect the output from them; a sketch follows.
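A minimal sketch of that idea (the job bodies and file names are illustrative):

#!/bin/bash
fifo=$(mktemp -u)
mkfifo "$fifo"
cat "$fifo" > collected.txt &    # one reader serializes all the lines
reader=$!
exec 3> "$fifo"                  # hold the write end open for the writers
pids=()
for i in {1..5}; do
    { sleep 1; echo "job $i finished" >&3; } &
    pids+=($!)
done
wait "${pids[@]}"                # wait for the writers only
exec 3>&-                        # close the pipe so the reader sees EOF
wait "$reader"
rm "$fifo"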

Bash: run scripts in background

I am trying to run two scripts in the background. However, I would like one script to run first, wait for it to finish, and then run the next script. Will this code snippet do that:
for i in "${studyinstanceuids[#]}"
do
#let count="$count+1"
echo "$i" | ./cmd2&
sleep 5
if job1 is alive then sleep 5
echo "$i" | ./sendExamToRepo.sh&
wait
fi
for i in "${studyinstanceuids[#]}"; do
( echo "$i" | ./cmd2; echo "$1" | ./sendExamToRepo.sh )&
done
wait

Scheduling jobs one after another in Linux

I have written a script, After.sh, to postpone one job so that it starts after another running job has finished:
echo "Waiting job $1 to be finished..."
while ps -p $1 >/dev/null; do sleep 1; done ;
echo "Job $1 finished."
echo "Now running job ${*:2}..."
${*:2}
echo "Job ${*:2} finished."
I run it like After.sh 5327 shutdown 0 to shut down the PC after the job with PID 5327 has finished. I can even run it like After.sh "5327 5778 5935" shutdown 0; this way it waits until jobs 5327, 5778, and 5935 have all finished first.
My problem is when I want to feed a more complex job as argument to the script, for instance:
After.sh 5327 for f in *; do echo $f; done
Now it fails with the error for: command not found. I tried replacing the command ${*:2} in the script with sh ${*:2} or eval ${*:2}, but both failed (sh again fails to find the for command; eval keeps the first value of $f for the whole loop, so it prints the name of the first file every time).
Do you have any idea how to fix it?
The problem is that for f in *; do echo $f; done is not a single command; it is shell code, which needs a shell interpreter to execute.
You can start that interpreter explicitly with eval:
#!/bin/sh
PID="$1"
shift
JOB="$*"
echo "Waiting job ${PID} to be finished..."
while ps -p ${PID} >/dev/null; do sleep 1; done
echo "Job ${PID} finished."
echo "Now running job '${JOB}'"
eval "${JOB}"
echo "Job '${JOB}' finished."
Then use single quotes around commands that contain variables which should be evaluated by After.sh rather than by your interactive shell:
./After.sh 123 'for f in *; do echo $f; done'
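If you would rather not use eval, a variant sketch of the same script hands the job string to a child shell instead; the same single-quoting rules apply:

#!/bin/sh
# Variant: run the job in a child shell instead of eval.
PID="$1"
shift
echo "Waiting job ${PID} to be finished..."
while ps -p ${PID} >/dev/null; do sleep 1; done
sh -c "$*"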

Start a command, count lines of output after 10 seconds, then either restart it or let it run

I have an interesting situation I am trying to script. I have a program that outputs 26,000 lines after 10 seconds when it starts successfully. Otherwise I have to kill it and start it again. I tried doing something like this:
test $(./long_program | wc -l) -eq 26000 && echo "Started successfully"
but that only works once the program finishes running. Is there a clever way to watch the output stream of a command and make decisions accordingly? I'm at a loss, not even sure how to start searching for this. Thanks!
What about
./long_program > mylogfile &
pid=$!
sleep 10
# then test on mylogfile length and kill $pid if needed
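A hedged way to complete that test, assuming the 26,000-line threshold from the question:

if [ "$(wc -l < mylogfile)" -lt 26000 ]; then
    kill "$pid"     # too little output after 10 seconds: kill and retry
else
    echo "Started successfully"
fi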
count=0
until [ $count -eq 26000 ]; do
    killall longrun 2>/dev/null   # killall matches by name, not path
    # start in background
    ./longrun > output.$$ &
    sleep 10
    count=$(wc -l < output.$$)
done
echo "done"
# disown so it continues after the current login quits
disown -h
