Restart process on file change in Linux

Is there a simple solution (using common shell utilities, a utility provided by most distributions, or a short Python script) to restart a process when certain files change?
It would be nice to simply call something like watch -cmd "./the_process -arg" deps/*.
Update:
A simple shell script together with the proposed inotify-tools (nice!) fits my needs (works for commands without arguments):
#!/bin/sh
while true; do
    $@ &            # run the given command in the background
    PID=$!
    inotifywait $1  # block until the watched file changes
    kill $PID       # then kill the command and loop around to restart it
done

This is an improvement on the script in the question: when the script itself is interrupted, the spawned process is killed too.
#!/bin/sh

sigint_handler()
{
    kill $PID   # kill the running command before exiting
    exit
}

trap sigint_handler SIGINT

while true; do
    $@ &
    PID=$!
    inotifywait -e modify -e move -e create -e delete -e attrib -r `pwd`
    kill $PID
done

Yes, you can watch a directory via the inotify system using inotifywait or inotifywatch from inotify-tools.
inotifywait will exit upon detecting an event. Pass the -r option to watch directories recursively, e.g. inotifywait -r mydirectory.
You can also specify which events to watch for instead of watching all of them; to wait only for file or directory content changes, use -e modify.
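For example, a minimal restart-on-change loop built from these two options might look like this (a sketch: src/ and make are placeholders for your own tree and build step):
# rerun a build whenever something under src/ is modified
while inotifywait -r -e modify src/; do
    make
done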

Check out iWatch:
iWatch is a real-time filesystem monitoring program. It is a tool for detecting changes in the filesystem and reporting them immediately. It uses a simple config file in XML format and is based on inotify, a file change notification system in the Linux kernel.
Then you can watch files easily:
iwatch /path/to/file -c 'run_your_script.sh'
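If I remember iWatch's command-line flags correctly, it can also watch a tree recursively and filter events; treat the exact flags below as an assumption and check the man page:
# hypothetical invocation: watch a directory tree recursively, run a restart script on changes
iwatch -r -e modify,create,delete -c '/path/to/restart_app.sh' /path/to/dir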

I find that this suits the full scenario requested by the OP very well:
ls *.py | entr -r python my_main.py
To run in the background, you'll need the -n non-interactive mode.
ls | entr -nr go run example.go &
See also http://eradman.com/entrproject/, though it is a bit oddly documented. Yes, you need to ls the file pattern you want matched and pipe that into the entr executable; it will run your program and rerun it whenever any of the matched files change.
PS: entr does not diff the piped text, so you don't need anything like ls --time-style.
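If your files live in nested directories, piping find into entr works the same way (a standard entr pattern; my_main.py is the same placeholder as above):
# restart python my_main.py whenever any .py file in the tree changes
find . -name '*.py' | entr -r python my_main.py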

There's a Perl script called lrrr (little restart runner, really) that I'm a contributor on. I use it daily at work.
You can install it with cpanm App::lrrr if you have perl and cpanm installed, and then use it as follows:
lrrr -w dirs,or_files,to-watch your_cmd_here
The -w flag marks the files or directories to watch. Currently it kills the process you ran whenever a file changes, but I plan to add a feature to toggle that.
Please let me know if there's anything to be added!

I use this "one-liner" to restart long-running processes based on file changes:
trap 'kill %1' 1 2 3 6; while : ; do YOUR_EXE & inotifywait -r YOUR_WATCHED_DIRECTORY -e create,delete,modify || break; kill %1; sleep 3; done
This starts the process, keeps its output on the same console, and watches for changes; when one occurs, it shuts the process down, waits three seconds (to absorb further writes within the same second and to allow for process shutdown time), then starts over.
Ctrl-C and SSH disconnects are respected, and the process exits once you're done.
For legibility:
trap 'kill %1' 1 2 3 6
while :
do
    YOUR_EXE &
    inotifywait \
        -r YOUR_WATCHED_DIRECTORY \
        -e create,delete,modify \
    || break
    kill %1
    sleep 3
done
E.g. for a project run via package.json:
"module" : "./dist/server.mjs",
"scripts" : {
    "build" : "cd ./src && rollup -c",
    "watch" : "cd ./src && rollup -c -w",
    "start" : "cd ./dist && node --trace-warnings --enable-source-maps ./server.mjs",
    "test"  : "trap 'kill %1' 1 2 3 6; while : ; do npm run start & inotifywait -r ./dist -e create,delete,modify || break; kill %1; sleep 3; done"
},
"dependencies" : {
Now you can run npm run watch (which compiles from src to dist) in one terminal and npm run test (the server runner and restarter) in another; as you edit files under ./src, the builder will update ./dist and the server will restart for you to test.

I needed a solution for golang's go run command, which spawns a subprocess, so combining the answers above with pidtree gave me this script.
#!/bin/bash

# go file to run
MAIN=cmd/example/main.go
# directories to recursively monitor
DIRS="cmd pkg"

# Print a PID and all of its descendants.
# Based on pidtree from https://superuser.com/a/784102/524394
pidtree() {
    declare -A CHILDS
    while read P PP; do
        CHILDS[$PP]+=" $P"
    done < <(ps -e -o pid= -o ppid=)

    walk() {
        echo $1
        for i in ${CHILDS[$1]}; do
            walk $i
        done
    }

    for i in "$@"; do
        walk $i
    done
}

sigint_handler()
{
    kill $(pidtree $PID)
    exit
}

trap sigint_handler SIGINT

while true; do
    go run $MAIN &
    PID=$!
    inotifywait -e modify -e move -e create -e delete -e attrib -r $DIRS
    PIDS=$(pidtree $PID)
    kill $PIDS
    wait $PID
    sleep 1
done

The -m switch of the inotifywait tool
As no answer here addresses the -m switch of inotifywait, I will share this: think parallel!
How I do this:
If I want to trigger when a file is modified, I use the close_write event.
Instead of a while true loop, I use the -m (monitor) switch of inotifywait.
Since many editors write to a new file and then rename it, I watch the directory instead and wait for an event with the correct filename.
#!/bin/sh

cmdFile="$1"

# Monitor the command file's directory for close_write events,
# writing events into a FIFO that we keep open on fd 5.
tempdir=$(mktemp -d)
notif="$tempdir/notif"
mkfifo "$notif"
inotifywait -me close_write "${cmdFile%/*}" >"$notif" 2>&1 &
notpid=$!
exec 5<"$notif"
rm "$notif"
rmdir "$tempdir"

"$@" & cmdPid=$!
trap "kill $notpid \$cmdPid; exit" 0 1 2 3 6 9 15

while read dir evt file <&5; do
    case $file in
        ${cmdFile##*/} )
            date +"%a %d %b %T file '$file' modified."
            kill $cmdPid
            "$@" & cmdPid=$!
            ;;
    esac
done
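As a usage sketch (my reading of the script, since the answer doesn't show an invocation): the first argument is both the watched file and the executable, so you pass the command line you want restarted. The script name and arguments below are hypothetical:
# rerun ./server.sh with its args whenever the file ./server.sh is saved
./watch-close-write.sh ./server.sh --port 8080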

Related

How to run a script multiple times, waiting after every execution until the device is ready to execute again?

I have this bash script:
#!/bin/bash

rm /etc/stress.txt
cat /dev/smd10 | tee /etc/stress.txt &

for ((i=0; i<1000; i++))
do
    echo -e "\nRun number: $i\n"
    # wait until the module restarts and is ready for the next run
    dmesg | grep ERROR
    echo -e 'AT+CFUN=1,1\r\n' > /dev/smd10
    echo -e "\nADB device booted successfully\n"
done
I want to restart the module 1000 times using this script.
The module is like an Android device with Linux inside it, but I work from Windows.
AT+CFUN=1,1 is the reset command.
When I push the script, I need a command that, after every restart, waits for the module to come back up so the script can continue, 1000 times in total. Then I pull the .txt file and save all the output content.
Which command should I use?
I tried commands like wait, sleep, watch, adb wait-for-device, ps aux | grep ...; nothing works.
Can someone help me with this?
I found the solution. This is how my script actually looks:
#!/bin/bash

cat /dev/smd10 &

TEST=$(cat /etc/output.txt)
RESTART_TIMES=1000

if [[ $TEST != $RESTART_TIMES ]]
then
    echo $((TEST+1)) > /etc/output.txt
    dmesg
    echo -e 'AT+CFUN=1,1\r\n' > /dev/smd10
fi
These are the steps that you need to do:
adb push /path/to/your/script /etc/init.d
cd /etc
echo 0 > output.txt - create the output file with 0 inside (this is the restart counter)
cd init.d
ls - you should see rc5.d
cd .. then cd rc5.d - go inside
ln -s ../init.d/yourscript.sh S99yourscript.sh
ls - you should see S99yourscript.sh
cd .. - return to the init.d directory
chmod +x yourscript.sh - add execute permission to your script
./yourscript.sh
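For reference, the same steps as one shell session (paths taken from the answer above; yourscript.sh stands in for your script's name, and all but the adb push run on the device):
adb push yourscript.sh /etc/init.d/
echo 0 > /etc/output.txt                          # seed the restart counter
ln -s ../init.d/yourscript.sh /etc/rc5.d/S99yourscript.sh
chmod +x /etc/init.d/yourscript.sh
/etc/init.d/yourscript.sh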

Bash: write to a background job's stdin after its launch

This is quite naive, but I'll give it a shot.
I would like to launch gimp from bash with gimp -i -b - &, then read dbus signals in an endless loop and post data obtained from these signals back to the gimp instance I launched. gimp -i -b - starts command-line gimp and waits for user input, like gnuplot etc. But is it possible to access its stdin from bash after the command has been executed?
Ideally I would like something like this to work:
gimp -i -b - &
dbus-monitor --profile "..." --monitor |
while read -r line; do
    gimp -b '(mycommand '$line')' &
done
gimp -b '(gimp-quit 0)' &
where all the gimp ... & commands are sent to the same gimp instance.
It would be even better if I could close the gimp instance when it hasn't been used for long enough, and start it again when it's needed.
Is this possible with bash, without writing some daemon app?
Simple Solution
You could use a simple pipe. Wrap the command-sending part of your script into a function and call that function while piping its output to gimp:
#! /bin/bash

sendCommands() {
    dbus-monitor --profile "..." --monitor |
    while read -r line; do
        echo "(mycommand $line)"
    done
    echo "(gimp-quit 0)"
}

sendCommands | gimp -i &
sendCommands and gimp -i will run in parallel. Each time sendCommands prints something, that something will land in gimp's stdin.
If that's your complete script, you can omit the & after gimp -i.
Killing and Restarting Gimp
Would be even better if I could close gimp instance if it's not used for long enough and start again when it's needed.
This gets a bit more complicated than just using the timeout command because we don't want to kill gimp while it is still processing some image. We also don't want to kill sendCommands between the consumption of an event and the sending of the corresponding command.
Maybe we could start a helper process to send a dbus-event every 60 seconds. Let said event be called tick. The ticks are also read by sendCommands. If there are two ticks without commands in between, gimp should be killed.
We use FIFOs (also called named pipes) to send commands to gimp. Each time a new gimp process starts, we also create a new FIFO. This ensures that commands targeted at the new gimp process are also sent to the new process. In case gimp cannot finish the pending operations in less than 60 seconds, there may be two gimp processes at the same time.
#! /bin/bash

generateTicks() {
    while true; do
        # send tick over dbus
        sleep 60
    done
}
generateTicks &

gimpIsRunning=false
wasActive=false
sleepPID=
fifo=

while read -r line; do
    if eventIsTick; then # TODO replace "eventIsTick" with actual code
        if [[ "$wasActive" = false ]]; then
            echo '(gimp-quit 0)' > "$fifo" # gracefully quit gimp
            gimpIsRunning=false
            [[ "$sleepPID" ]] && kill "$sleepPID" # close the FIFO
            rm -f "$fifo"
        fi
        wasActive=false
    else
        if [[ "$gimpIsRunning" = false ]]; then
            fifo="$(mktemp -u)"
            mkfifo "$fifo"
            sleep infinity > "$fifo" & # keep the FIFO open
            sleepPID="$!"
            gimp -i < "$fifo" &
            gimpIsRunning=true
        fi
        echo "(mycommand $line)" > "$fifo"
        wasActive=true
    fi
done < <(dbus-monitor --profile "..." --monitor)

echo '(gimp-quit 0)' > "$fifo" # gracefully quit gimp
[[ "$sleepPID" ]] && kill "$sleepPID" # close the FIFO
rm -f "$fifo"
Note that dbus-monitor ... | while ... done is now written as while ... done < <(dbus-monitor ...). Both versions loop over the output of dbus-monitor, but the version with the pipe | creates a subshell, which doesn't allow setting global variables inside the loop. For further explanation see SC2031.
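A two-line experiment makes the difference visible (bash; the counter only survives in the process-substitution version):
count=0
printf 'a\nb\n' | while read -r line; do count=$((count+1)); done
echo "$count"   # prints 0: the loop ran in a subshell
count=0
while read -r line; do count=$((count+1)); done < <(printf 'a\nb\n')
echo "$count"   # prints 2: the loop ran in the current shell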

How to check whether a script is already running, from the script itself?

I have the below sample script, sample.sh:
#!/bin/bash

if ps aux | grep -o "sample.sh" >/dev/null
then
    echo "Already script running"
    exit 0
fi

echo "start script"
while true
do
    echo "script running"
    sleep 5
done
In the above script I want to check whether this script is already running; if it is, it should not run again.
The problem is that the check condition always evaluates to true (because checking the condition requires running the script), so it always shows the "Already script running" message.
Any idea how to solve it?
You need a proper lock. I'd do it using flock, like this:
exec 201> /tmp/lock.$(basename $0).file
if ! flock -n 201; then
    echo "another instance of $0 is running"
    exit 1
fi

# cmds

exec 201>&-
rm -rf /tmp/lock.$(basename $0).file
This basically creates a lock for the script using a temporary file. The temporary file has no particular significance other than being used to tell whether your script has acquired the lock.
When there's an instance of this program running, the next run of the same program can't run as the lock will prevent it.
For me it is safer to use a lock file: create it when the process starts and delete it after completion.
Let the script record its own PID in a file. Before doing so, it first checks whether that file currently contains an active PID, in which case it exits.
pid=$(< "${PID_FILE:?}") || exit
kill -0 "$pid" && exit
The next exercise is to prevent race conditions when writing the file.
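One hedged way to close that race in bash is the noclobber option, which makes the redirection fail atomically if the file already exists (PID_FILE is assumed to be set as above; a stale file still has to be handled separately, e.g. with the kill -0 check):
set -o noclobber
if { echo $$ > "$PID_FILE"; } 2>/dev/null; then
    trap 'rm -f "$PID_FILE"' EXIT   # remove our own pid file on exit
else
    echo "already running (PID $(< "$PID_FILE"))" >&2
    exit 1
fi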
Try this; it gives the number of sample.sh instances being run by the user:
ps -aux | awk -v app='sample.sh' '$0 ~ app { print $1 }' | grep $USERNAME | wc -l
Write a temp file to the /tmp directory.
Have your script check to see if the file exists; if it does, don't run.
#!/bin/sh

# our tmpfile
tmpfile="/tmp/mytmpfile"

# check to see if it exists.
# if it does then exit script
if [ -f "${tmpfile}" ]; then
    echo "script already running."
    exit
fi

# it doesn't exist at this point so lets make one
touch "${tmpfile}"

# do whatever now.

# end of script, remove the tmpfile
rm "${tmpfile}"

Launch a process in the background and modify it from a bash script

I'm creating a bash script that will run a process in the background which creates a socket file; the socket file then needs to be chmod'd. The problem I'm having is that the socket file isn't created yet by the time the script tries to chmod it.
Example source:
#!/bin/bash

# first create folder that will hold socket file
mkdir /tmp/myproc

# now run process in background that generates the socket file
node ../main.js &

# finally chmod the thing
chmod 660 /tmp/myproc/*.sock   # use whatever mode you actually need here
How do I delay the execution of the chmod until after the socket file has been created?
The easiest way I know to do this is to busy-wait for the file to appear. Conveniently, ls returns non-zero when the file it is asked to list doesn't exist, so just loop on ls until it returns 0; when it does, you know you have at least one *.sock file to chmod.
#!/bin/sh
echo -n "Waiting for socket to open.."
( while [ ! $(ls /tmp/myproc/*.sock) ]; do
    echo -n "."
    sleep 2
done ) 2> /dev/null
echo ". Found"
If this is something you need to do more than once, wrap it in a function; otherwise, as is, it should do what you need.
EDIT:
As pointed out in the comments, using ls like this is inferior to -e in the test, so the rewritten script below is to be preferred. (I have also corrected the shell invocation, as echo -n is not supported on all platforms in sh emulation mode.)
#!/bin/bash
echo -n "Waiting for socket to open.."
while [ ! -e /tmp/myproc/*.sock ]; do
    echo -n "."
    sleep 2
done
echo ". Found"
Test to see if the file exists before proceeding:
while [[ ! -e filename ]]
do
    sleep 1
done
If you set your umask (try umask 0) you may not have to chmod at all. If you still don't get the right permissions, check whether node has options to change that.
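For instance, you could relax the umask only for the child process, so the socket is created with the permissions you want in the first place (a sketch; 002 is just an example value, and the node invocation is the one from the question):
( umask 002; node ../main.js ) &   # the socket inherits group-writable permissions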

How to make sure only one instance of a Bash script is running at a time?

I want to make an sh script that will run at most one instance at any point.
Say I exec the script, and then go to exec it again: how do I make it so that if the first exec is still working, the second one fails with an error? I.e. I need to check whether the script is running elsewhere before doing anything. How would I go about doing this?
The script I have runs a long-running process (i.e. runs forever). I wanted to use something like cron to call the script every 15 minutes so that, in case the process fails, it will be restarted by the next cron run.
You want a pid file, maybe something like this:
pidfile=/path/to/pidfile
if [ -f "$pidfile" ] && kill -0 `cat $pidfile` 2>/dev/null; then
    echo still running
    exit 1
fi
echo $$ > $pidfile
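One addition I would make (my suggestion, not part of the original answer): remove the pid file when the script ends, so a stale file doesn't linger after a clean exit:
trap 'rm -f "$pidfile"' EXIT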
I think you need to use the lockfile command. See using lockfiles in shell scripts (BASH) or http://www.davidpashley.com/articles/writing-robust-shell-scripts.html.
The second article uses a "hand-made" lock file and shows how to catch script termination to release the lock, although using lockfile -l <timeout seconds> will probably be a good enough alternative for most cases.
Example of usage without timeout:
lockfile script.lock
<do some stuff>
rm -f script.lock
This will ensure that any second script started while this one is running will wait indefinitely for the file to be removed before proceeding.
If we know that the script should not run for more than X seconds, and script.lock is still there, that probably means a previous instance of the script was killed before it could remove script.lock. In that case we can tell lockfile to force-recreate the lock after a timeout (X = 10 below):
lockfile -l 10 /tmp/mylockfile
<do some stuff>
rm -f /tmp/mylockfile
Since lockfile can create multiple lock files, there are parameters controlling how long it waits before retrying to acquire the next file it needs (-<sleep before retry, seconds> and -r <number of retries>). There is also a parameter -s <suspend seconds> for the wait time after a lock has been removed by force (which complements the timeout used to decide when to force-break the lock).
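For example (flags as documented in lockfile(1), to the best of my knowledge; the values are illustrative):
# sleep 8 seconds between attempts, give up after 5 retries
lockfile -8 -r 5 /tmp/mylockfile || exit 1
<do some stuff>
rm -f /tmp/mylockfile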
You can use the run-one package, which provides run-one, run-this-one and keep-one-running.
The package: https://launchpad.net/ubuntu/+source/run-one
The blog introducing it: http://blog.dustinkirkland.com/2011/02/introducing-run-one-and-run-this-one.html
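Usage is a single wrapper prefix (the wrapped command below is an arbitrary example):
# a second identical invocation exits immediately while the first still runs
run-one rsync -az /source /destination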
Write the process ID into a file, and then when a new instance starts, check the file to see whether the old instance is still running.
(
    if ! flock -n 9
    then
        echo 'Not doing the critical operation (lock present).'
        exit
    fi

    # critical section goes here

) 9>'/run/lock/some_lock_file'
rm -f '/run/lock/some_lock_file'
From the example in the flock(1) man page. Very practical for use in shell scripts.
I just wrote a tool that does this:
https://github.com/ORESoftware/quicklock
Writing a good one takes about 15 LOC, so it's not something you want to include in every shell script.
It basically works like this:
$ ql_acquire_lock
The above calls this bash function:
function ql_acquire_lock {
    set -e;
    name="${1:-$PWD}"  # the lock name is the first argument; if that is empty, use $PWD
    mkdir -p "$HOME/.quicklock/locks"
    fle=$(echo "${name}" | tr "/" _)
    qln="$HOME/.quicklock/locks/${fle}.lock"
    mkdir "${qln}" &> /dev/null || { echo "${ql_magenta}quicklock: could not acquire lock with name '${qln}'${ql_no_color}."; exit 1; }
    export quicklock_name="${qln}";  # export the var *only if* the above mkdir succeeds
    trap on_ql_trap EXIT;
}
When the script exits, it automatically releases the lock using the trap:
function on_ql_trap {
    echo "quicklock: process with pid $$ was trapped.";
    ql_release_lock
}
To manually release the lock at will, use ql_release_lock:
function ql_maybe_fail {
    if [[ "$1" == "true" ]]; then
        echo -e "${ql_magenta}quicklock: exiting with 1 since fail flag was set for your 'ql_release_lock' command.${ql_no_color}"
        exit 1;
    fi
}

function ql_release_lock () {
    if [[ -z "${quicklock_name}" ]]; then
        echo -e "quicklock: no lockname was defined. (\$quicklock_name was not set).";
        ql_maybe_fail "$1";
        return 0;
    fi
    if [[ "$HOME" == "${quicklock_name}" ]]; then
        echo -e "quicklock: dangerous value set for \$quicklock_name variable..was equal to user home directory, not good.";
        ql_maybe_fail "$1"
        return 0;
    fi
    rm -r "${quicklock_name}" &> /dev/null &&
        { echo -e "quicklock: lock with name '${quicklock_name}' was released."; } ||
        { echo -e "quicklock: no lock existed for lockname '${quicklock_name}'."; ql_maybe_fail "$1"; }
    trap - EXIT  # clear/unset trap
}
I suggest using flock, but in a different way than suggested by @Josef Kufner. I think this is quite easy, and flock should be available on most systems by default:
flock -n lockfile myscript.sh
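The flock(1) man page also ships a self-locking boilerplate you can paste at the top of the script itself, so callers don't need the wrapper (quoted from memory; verify against your man page):
[ "${FLOCKER}" != "$0" ] && exec env FLOCKER="$0" flock -en "$0" "$0" "$@" || :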
