How to run a script continuously in the background without using crontab - linux

I have a small script that checks a certain condition continuously, and as soon as that condition is met the program should execute. Can this be done? I thought of using crontab, where the script runs every 5 minutes, but now I want that done without crontab.

You probably want to create an infinite loop first; within that loop, check your condition, and wait a bit if it isn't met. As you did not mention which scripting language you want to use, I'm going to write pseudo code for the example. Give us more info about the scripting language, and perhaps also the conditions.
Example in pseudo code:
# Defining a timeout of 1 sec, so checking the condition every second
timeout = 1
# Running in background infinitely
while true do
    # Let's check the condition
    if condition then
        # I got work to do
        ...
    else
        # Let's wait a specified timeout, not to block the system
        sleep timeout
    endif
endwhile
Example in sh with your input code:
#!/bin/sh
PATH=/bin:/usr/bin:/sbin:/usr/sbin
# Defining a timeout of 1 hour
timeout=3600
while true
do
    # Note: command substitution around df, not literal quotes
    case $(df /tmp) in
    *" "[5-9]?%" "*) # /tmp is 50-99% full, clean it up
        rm -f /tmp/af.*
        ;;
    *)
        sleep $timeout
        ;;
    esac
done
You can then run this script from the shell using nohup, so it keeps running after you log out:
nohup yourscript &
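As a concrete variant of the pseudo code above, here is a minimal bash sketch; the flag file /tmp/trigger and the program path are invented placeholders for whatever your real condition and action are:
#!/bin/bash
# Poll a hypothetical condition (existence of a flag file) once per second
timeout=1
while true
do
    if [ -e /tmp/trigger ]; then
        /path/to/your/program   # placeholder: the program to execute
        rm -f /tmp/trigger      # reset the condition so it only fires once
    else
        sleep "$timeout"
    fi
done
Started with nohup as above, it will keep polling in the background after you log out.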

Related

Display a running clock in terminal using bash without using loop

How to display a running clock in a linux terminal without using any for or while loop? Using these loops in scripts with a 1 sec interval causes a significant load on the system.
How about:
watch -n 1 date
Use watch to run a command periodically.
You can make a function which calls itself after a sleep:
#!/bin/bash
function showdate(){
    printf '\033[;H' # Move the cursor to the top of the screen
    date             # Print the date (change with the format you need for the clock)
    sleep 1          # Sleep (pause) for 1 second
    showdate         # Call itself
}
clear # Clear the screen
showdate # Call the function which will display the clock
If you execute this script it will run indefinitely until you hit CTRL-C.
If you want to add the date to the terminal window title and your terminal supports it, you can do
#! /bin/bash
while :
do
    printf '\033]0;%s\007' "$(date)"
    sleep 1
done &
and it won't affect any other terminal output.
This post is ancient, but for anyone still looking for this kind of thing:
#!/bin/bash
# Home the cursor before each date so the clock overwrites itself
while true; do printf '\033[;H'; date; sleep 1; done
This version avoids recursion (and stack faults!) while accomplishing the goal.
(For the record - I do prefer the watch version, but since the OP did not like that solution, I offered an improvement on the other solution.)

Start programs synchronized in bash

What I simply want to do is wait for a lock to be released.
I have, for example, 4 identical scripts (because I have 4 cores) that each work on a part of a project. Each script looks like this:
#!/bin/bash
./prerenderscript $1
scriptsync step1 4
./renderscript $1
scriptsync step2 4
./postprod $1
When I run the main script that calls the four scripts, I want each script to work individually, but at certain points I want the scripts to wait for each other, because the next part needs all the data from the previous part.
For now I have used logic like counting files, or having each process create a file whose existence is tested by the others.
I also had the idea to use a makefile:
prerender%: source
    ./prerender $@
renderscript%: prerender1 prerender2 prerender3 prerender4
    ./renderscript $@
postprod: renderscript1 renderscript2 renderscript3 renderscript4
    ./postprod $@
But the process is simplified here; the actual script is more complex, and for each step the thread needs to keep its variables.
Is there any way to get the scripts in sync, instead of using the placeholder command scriptsync?
One way to achieve this in Bash is to use inter-process communication to make a task wait for the previous one to finish. Here is an example.
#!/bin/bash
# $1 is received to allow for an example command; it is not required for the mechanism suggested
task_a()
{
    # Do some work
    sleep "$1" # This is just a dummy command as an example
    echo "Task A/$1 completed" >&2
    # Send status to stdout; the pipe delivers it to the next task's stdin
    echo "OK"
}
task_b()
{
    IFS= read status; [[ $status = OK ]] || return 1
    # Do some work
    sleep "$1" # This is just a dummy command as an example
    echo "Task B/$1 completed" >&2
}
task_a 2 | task_b 2 &
task_a 1 | task_b 1 &
wait
You will notice that the read could be anywhere in task B, so you could do some work, then wait (read) for the other task, then continue. You could have many signals sent by task A to task B, and several corresponding read statements.
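For instance, here is a sketch of that multi-signal pattern (the phase names are invented), where task A signals twice and task B does a chunk of work after each signal:
task_a()
{
    sleep 1; echo "A: phase 1 done" >&2
    echo "OK"   # first signal
    sleep 1; echo "A: phase 2 done" >&2
    echo "OK"   # second signal
}
task_b()
{
    IFS= read status; [[ $status = OK ]] || return 1
    echo "B: starting phase 1" >&2
    IFS= read status; [[ $status = OK ]] || return 1
    echo "B: starting phase 2" >&2
}
task_a | task_b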
As shown in the first example, you can launch several pipelines in parallel.
One limit of this approach is that a pipeline establishes a communication channel between one writer and one reader. If a task needs to wait for signals from several tasks, you need FIFOs (named pipes) to allow the task with dependencies to read from multiple sources, as sketched below.
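Here is a minimal sketch of that FIFO variant; the FIFO paths and task names are invented for illustration:
#!/bin/bash
# task_c must wait for an OK from two independent tasks,
# so each writer gets its own named pipe
mkfifo /tmp/sync_a /tmp/sync_b
task_a() { sleep 2; echo "OK" > /tmp/sync_a; }
task_b() { sleep 1; echo "OK" > /tmp/sync_b; }
task_c()
{
    read status_a < /tmp/sync_a   # blocks until task_a has written
    read status_b < /tmp/sync_b   # blocks until task_b has written
    [[ $status_a = OK && $status_b = OK ]] && echo "task_c proceeding" >&2
}
task_a & task_b & task_c &
wait
rm -f /tmp/sync_a /tmp/sync_b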

How to schedule a task automatically with a while loop in bash?

I'm trying to schedule my script to run at two different times, and then keep it running in the background, waiting for the next time the conditions are met.
So far I have not been able to do this, because after one condition is met the script exits:
function schedule {
    while :
    do
        hour=$(date +"%H")
        minute=$(date +"%M")
        if [[ "$hour" = "02" && "$minute" = "31" ]]; then
            # run some script
            exec /home/gfx/Desktop/myscript.sh
            wait
            schedule
        elif [[ "$hour" = "02" && "$minute" = "32" ]]; then
            # run some script
            exec /home/gfx/Desktop/myscript.sh
            wait
            schedule
        fi
    done
}
schedule
The script that I execute is the following :
$ cat myscript.sh
echo "this a message"
Any ideas or comments are welcome, thanks!
When you do:
exec some program
something else
if the exec works, then the "something else" will never happen, because the script is replaced by the program you exec'd.
Maybe instead of:
exec /home/gfx/Desktop/myscript.sh
wait
schedule
you want:
/home/gfx/Desktop/myscript.sh &
wait
# schedule -- don't make a fork bomb
As noted in the comments, & followed by wait is kind of silly; you probably want:
/home/gfx/Desktop/myscript.sh &
# wait
# schedule
or:
/home/gfx/Desktop/myscript.sh
# wait
# schedule
depending on whether or not you want myscript in the background.
ALSO you should put a sleep in the loop (maybe sleep 59) to keep it from looping like a banshee.
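Putting those pieces together, a minimal corrected sketch of the loop (keeping the original times and script path) could look like this:
function schedule {
    while :
    do
        hour=$(date +"%H")
        minute=$(date +"%M")
        if [[ "$hour" = "02" && ( "$minute" = "31" || "$minute" = "32" ) ]]; then
            /home/gfx/Desktop/myscript.sh   # no exec, so the loop survives
        fi
        sleep 59   # check roughly once a minute instead of spinning
    done
}
schedule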
You begin to understand the appeal of cron I think (at least for things you don't want to interact with your terminal session).
A crontab is the best place to put these things.
Edit the crontab using
$ crontab -e
Add the following lines to the file:
31 2 * * * /home/gfx/Desktop/myscript.sh
32 2 * * * /home/gfx/Desktop/myscript.sh
Then save the crontab file. Your script will be executed every day at 2:31 am and 2:32 am.

Bash output happening after prompt, not before, meaning I have to manually press enter

I am having a problem getting bash to do exactly what I want, it's not a major issue, but annoying.
1.) I have third-party software that produces some output on stderr. Some of it is useful, and some of it is stuff I regularly don't care about and don't want dumped to screen; however, I do want the useful parts of the stderr dumped to screen. I figured the best way to achieve this was to pass stderr to a function, then use conditions in that function to either show the stderr or not.
2.) This works fine. However, the solution I have implemented dumps out my errors at the right time, but then returns a bash prompt. I want to summarise the status of the errors at the end of the function, but echoing there prints the text after the prompt, meaning that I have to press enter to get back to a clean prompt. It will become clear with the example below.
My error stream generator:
./TestErrorStream.sh
#!/bin/bash
echo "test1" >&2
My function to process this:
./Function.sh
#!/bin/bash
function ProcessErrors()
{
    while read data
    do
        echo Line was:"$data"
    done
    sleep 5 # This is used simply to simulate the processing work I'm doing on the errors.
    echo "Completed"
}
I source the Function.sh file to make ProcessErrors() available, then I run:
2> >(ProcessErrors) ./TestErrorStream.sh
I expect (and want) to get:
user@user-desktop:~/path$ 2> >(ProcessErrors) ./TestErrorStream.sh
Line was:test1
Completed
user@user-desktop:~/path$
However what I really get is:
user@user-desktop:~/path$ 2> >(ProcessErrors) ./TestErrorStream.sh
Line was:test1
user@user-desktop:~/path$ Completed
And no clean prompt. Of course the prompt is there, but "Completed" is being printed after the prompt; I want it printed before, and then a clean prompt to appear.
NOTE: This is a minimum working example, and it's contrived. While other solutions to my error stream problem are welcome I also want to understand how to make bash run this script the way I want it to.
Thanks for your help
Joey
Your problem is that the while loop stays attached to stdin until the program exits.
stdin is released at the end of "TestErrorStream.sh", so your prompt comes back almost immediately, while the function still has lines left to process.
I suggest you wrap the command inside a script, so you can control how long to wait before your prompt is back (I suggest 1 sec more than the time the function is expected to need to process the remaining lines).
I successfully managed to do this like so:
./Functions.sh
#!/bin/bash
function ProcessErrors()
{
    while read data
    do
        echo Line was:"$data"
    done
    sleep 5 # simulate the time required to finish processing (after TestErrorStream.sh is over and stdin is released)
    echo "Completed"
}
./TestErrorStream.sh
#!/bin/bash
echo "first"
echo "firsterr" >&2
sleep 20 # any number here
./WrapTestErrorStream.sh
#!/bin/bash
source ./Functions.sh
2> >(ProcessErrors) ./TestErrorStream.sh
sleep 6 # <= this one is important
With the above you'll get a nice "Completed" before your prompt after 26 seconds of processing. (Works fine with or without the additional "time" command)
user@host:~/path$ time ./WrapTestErrorStream.sh
first
Line was:firsterr
Completed
real 0m26.014s
user 0m0.000s
sys 0m0.000s
user@host:~/path$
Note: the process substitution ">(ProcessErrors)" is started by the shell but not waited for; when "./TestErrorStream.sh" ends, the subprocess is no longer tied to it, nor to the wrapper. That's why we need that final "sleep 6".
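An alternative from another answer avoids the fixed sleep altogether: open the process substitution on a dedicated file descriptor, close it when the work is done, and poll until the subprocess exits.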
#!/bin/bash
function ProcessErrors {
    while read data; do
        echo Line was:"$data"
    done
    sleep 5
    echo "Completed"
}
# Open the subprocess
exec 60> >(ProcessErrors)
P=$!
# Do the work
2>&60 ./TestErrorStream.sh
# Close the connection, or else the subprocess would keep on reading
exec 60>&-
# Wait for the process to exit (wait "$P" doesn't work). There are many ways
# to do this too, like checking /proc. I prefer the kill method as
# it's more explicit. We'd never know if /proc updates itself quickly
# on all systems. And using an external tool is also a big NO.
while kill -s 0 "$P" &>/dev/null; do
    sleep 1s
done
Off topic side-note: I'd love to see how posturing bash veterans/authors try to own this. Or perhaps they already did way way back from seeing this.

udev rule runs bash script multiple times

I created a udev rule to execute a bash script after the insertion of a usb device
SUBSYSTEMS=="usb", ATTRS{serial}=="00000000", SYMLINK+="Kingston", RUN+="/bin/flashled.sh"
However the script is run several times instead of just once; I assume this is down to the way the hardware is detected? I tried putting a sleep 10 into the script, but it makes no difference.
This is not a solution but a workaround:
One way (simple) is to begin your script "/bin/flashled.sh" like this:
#!/bin/bash
#this file is /bin/flashled.sh
#let's exit if another instance is already running
#(-f makes pgrep match the full command line, so the script's path is found)
if [[ $(pgrep -c -f "$0") -gt 1 ]]; then exit; fi
... ... ...
However, in some border cases this can be a bit prone to race conditions (bash is a bit slow, so there is no way to be sure this will always work), but it might work perfectly in your case.
Another one (more solid but more code) is to begin "/bin/flashled.sh" like this:
#!/bin/bash
#this file is /bin/flashled.sh
#write in your /etc/rc.local: /bin/flashled.sh & ; disown
#or let it start by init.
while :
do
    kill -SIGSTOP $$ # halt and wait for a SIGCONT
    ... ...          # your commands here
    sleep $TIME      # choose your own value here instead of $TIME
done
Start it during boot (via, for example, /etc/rc.local) so it will be waiting for a signal to continue. It doesn't matter how many "continue" signals it gets (they are not queued), as long as they are within $TIME.
Change your udev rule accordingly:
SUBSYSTEMS=="usb", ATTRS{serial}=="00000000", SYMLINK+="Kingston", RUN+="/usr/bin/pkill -SIGCONT flashled.sh"
I believe this may be down to the use of the pluralised SUBSYSTEMS and ATTRS keys. My understanding of udev is that those keys also match parent devices (see the "Device hierarchy" section of http://www.reactivated.net/writing_udev_rules.html#hierarchy), so the rule can fire once for the device and once for each matching parent. Try using SUBSYSTEM and ATTR to match just that device.
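Under that assumption, the rule would become (same serial, symlink, and script as above):
SUBSYSTEM=="usb", ATTR{serial}=="00000000", SYMLINK+="Kingston", RUN+="/bin/flashled.sh"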
