This is quite naive but I'll give it a shot.
I would like to launch gimp from bash with gimp -i -b - &, then read dbus signals in an endless loop and post data obtained from these signals back to the gimp instance I launched. gimp -i -b - starts command-line gimp and waits for user input, like gnuplot etc. But is it possible to access its stdin from bash after the command has been launched?
Ideally I would like something like that to work:
gimp -i -b - &
dbus-monitor --profile "..." --monitor |
while read -r line; do
gimp -b '(mycommand '$line')' &
done
gimp -b '(gimp-quit 0)' &
where all gimp commands are sent to the same gimp instance.
It would be even better if I could close the gimp instance when it has been idle for long enough and start it again when it's needed.
Is it possible with bash without writing some daemon app?
Simple Solution
You could use a simple pipe. Wrap the command-sending part of your script into a function and call that function while piping its output to gimp:
#! /bin/bash
sendCommands() {
dbus-monitor --profile "..." --monitor |
while read -r line; do
echo "(mycommand $line)"
done
echo "(gimp-quit 0)"
}
sendCommands | gimp -i &
sendCommands and gimp -i will run in parallel. Each time sendCommands prints something, that something will land in gimp's stdin.
If that's your complete script, you can omit the & after gimp -i.
Killing and Restarting Gimp
It would be even better if I could close the gimp instance when it has been idle for long enough and start it again when it's needed.
This gets a bit more complicated than just using the timeout command because we don't want to kill gimp while it is still processing some image. We also don't want to kill sendCommands between the consumption of an event and the sending of the corresponding command.
Maybe we could start a helper process to send a dbus-event every 60 seconds. Let said event be called tick. The ticks are also read by sendCommands. If there are two ticks without commands in between, gimp should be killed.
We use FIFOs (also called named pipes) to send commands to gimp. Each time a new gimp process starts, we also create a new FIFO. This ensures that commands targeted at the new gimp process are also sent to the new process. In case gimp cannot finish the pending operations in less than 60 seconds, there may be two gimp processes at the same time.
#! /bin/bash
generateTicks() {
while true; do
# send tick over dbus
sleep 60
done
}
generateTicks &
gimpIsRunning=false
wasActive=false
sleepPID=
fifo=
while read -r line; do
if eventIsTick; then # TODO replace "eventIsTick" with actual code (one possibility is sketched after the script)
if [[ "$wasActive" = false ]]; then
echo '(gimp-quit 0)' > "$fifo" # gracefully quit gimp
gimpIsRunning=false
[[ "$sleepPID" ]] && kill "$sleepPID" # close the FIFO
rm -f "$fifo"
fi
wasActive=false
else
if [[ "$gimpIsRunning" = false ]]; then
fifo="$(mktemp -u)"
mkfifo "$fifo"
sleep infinity > "$fifo" & # keep the FIFO open
sleepPID="$!"
gimp -i < "$fifo" &
gimpIsRunning=true
fi
echo "(mycommand $line)" > "$fifo"
wasActive=true
fi
done < <(dbus-monitor --profile "..." --monitor)
echo '(gimp-quit 0)' > "$fifo" # gracefully quit gimp
[[ "$sleepPID" ]] && kill "$sleepPID" # close the FIFO
rm -f "$fifo"
Note that dbus-monitor ... | while ... done is now written as while ... done < <(dbus-monitor ...). Both versions loop over the output of dbus-monitor, but the version with the pipe | runs the loop in a subshell, which doesn't allow setting global variables inside the loop. For further explanation see SC2031.
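The tick sending and tick detection are left as TODOs above. One possible way to fill them in, assuming the session bus and a made-up org.example.Watchdog interface (both the object path and the interface name are placeholders, and whatever match rule the elided "..." stands for has to let this signal through as well):

# inside generateTicks: emit a signal on the session bus
dbus-send --session --type=signal /org/example/Watchdog org.example.Watchdog.Tick

# eventIsTick: check whether the line read from dbus-monitor mentions our signal
eventIsTick() {
    [[ "$line" == *org.example.Watchdog*Tick* ]]
}

Anything that matches your real dbus traffic works just as well; the only requirement is that the ticks are distinguishable from the events that trigger commands.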
Related
What I want to do is:
run a process
wait 10 seconds
send a string to the stdin of the process
This should be done in a bash script.
I've tried:
./script&
pid=$!
sleep 10
echo $string > /proc/${pid}/fd/0
It does work in a shell but not when I run it in a script.
( sleep 10; echo "how you doin?" ) | ./script
Your approach might work on Linux if, e.g., your script's stdin is something like a FIFO:
myscript(){ tr a-z A-Z; }
rm -f p
mkfifo p
exec 3<>p
myscript <&3 &
pid=$!
echo :waiting
sleep 0.5
echo :writing
file /proc/$pid/fd/0
echo hi > /proc/$pid/fd/0
exec 3>&-
But this /proc/$pid/fd stuff behaves differently on different Unices.
It doesn't work for your case because your script's stdin is a terminal.
With default terminal settings, the terminal driver will put background processes trying to read from it to sleep (by sending them the SIGTTIN signal), and writes to a terminal file descriptor will just get echoed -- they won't wake up a sleeping background process that was stopped while trying to read from the terminal.
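A quick way to see this from an interactive shell (just a sketch):

cat &                      # background job that tries to read the terminal
sleep 1
jobs                       # reports cat as stopped (tty input) -- it received SIGTTIN
echo hi > /proc/$!/fd/0    # "hi" is merely displayed on the terminal; cat stays stopped
kill -9 %1                 # clean up (SIGKILL works even on a stopped job)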
What about this (as OP requested it to be done in the script):
#! /bin/bash
./script&
pid=$!
sleep 10
echo $string > /proc/${pid}/fd/0
just proposing the missing element not commenting on coding style ;-)
When I do the following, I have to press CTRL-C afterwards or the shell acts weird. Left/right arrow keys, for example, don't move correctly and the text gets messed up.
# read -r pid < <(ssh 10.10.10.46 'sleep 50 & echo $!') ; echo $pid
2135
# Killed by signal 2.
^C
#
I need this for a script, so I'd like to know why CTRL-C is needed and whether it is possible to work around it.
Update
It looks like it opens an extra Bash shell, and that is the one that needs to be exited.
The command I am actually interested in is
read -r pid < <(ssh 10.10.10.46 "mbuffer -4 -v 0 -q -I 8023 > /tmp/mtest & echo $!"); echo $pid
Try this instead:
read -r pid \
< <(ssh 10.10.10.46 'nohup mbuffer >/tmp/mtest </dev/null 2>/tmp/mtest.err & echo $!')
Three important changes:
Use of nohup (you could also get a similar effect with the bash built-in disown)
Redirection of stdin and stderr to files (preventing them from holding handles that connect, eventually, to your terminal).
Use of single quotes for the remote command (with double-quotes, expansions happen before ssh is started, so the $! you get is the PID of the most recently started local background process).
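The quoting difference is easy to check in isolation (the host name is just a placeholder):

read -r pid < <(ssh host "sleep 50 & echo $!")   # $! expands locally, before ssh runs
read -r pid < <(ssh host 'sleep 50 & echo $!')   # $! expands in the remote shell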
For instance, how would I kill tail when wget finishes.
#!/bin/bash
wget http://en.wikipedia.org/wiki/File:Example.jpg &
tail -f example.log
Perhaps this is better - I haven't tested it though:
#!/bin/bash
LOGFILE=example.log
> $LOGFILE # truncate log file so tail begins reading at the beginning
tail -f $LOGFILE &
# launch tail and background it
PID=$!
# record the pid of the last command - in this case tail
wget --output-file=$LOGFILE http://en.wikipedia.org/wiki/File:Example.jpg
kill $PID
#launch wget and when finished kill program (tail) with PID
This counts on the fact that tail, although in the background, will still show its output on the console. That output won't be as easily redirectable, though.
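If your tail is GNU tail, its --pid option is a variant of the same idea that lets tail exit on its own instead of being killed (untested sketch):

#!/bin/bash
LOGFILE=example.log
> $LOGFILE    # make sure the log file exists and starts empty
wget --output-file=$LOGFILE http://en.wikipedia.org/wiki/File:Example.jpg &
tail --pid=$! -f $LOGFILE    # GNU tail exits on its own shortly after wget ($!) dies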
Is there a simple solution (using common shell utils, via a util provided by most distributions, or some simple python/... script) to restart a process when some files change?
It would be nice to simply call something like watch -cmd "./the_process -arg" deps/*.
Update:
A simple shell script together with the proposed inotify-tools (nice!) fits my needs (works for commands w/o arguments):
#!/bin/sh
while true; do
$@ &
PID=$!
inotifywait $1
kill $PID
done
This is an improvement on the script provided in the question's update: when the script is interrupted, the process it runs is killed as well.
#!/bin/sh
sigint_handler()
{
kill $PID
exit
}
trap sigint_handler SIGINT
while true; do
$@ &
PID=$!
inotifywait -e modify -e move -e create -e delete -e attrib -r `pwd`
kill $PID
done
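Assuming the script above is saved as restart-on-change.sh (the name is made up), it wraps the command from the question like this:

./restart-on-change.sh ./the_process -arg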
Yes, you can watch a directory via the inotify system using inotifywait or inotifywatch from the inotify-tools.
inotifywait will exit upon detecting an event. Pass option -r to watch directories recursively. Example: inotifywait -r mydirectory.
You can also specify the event to watch for instead of watching all events. To wait only for file or directory content changes use option -e modify.
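A minimal rebuild-on-change loop built from that (the src directory and the make step are placeholders):

while inotifywait -qq -e modify -r src; do
    make    # or whatever should run after each change
done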
Check out iWatch:
iWatch is a realtime filesystem monitoring program. It is a tool for detecting changes in the filesystem and reporting them immediately. It uses a simple config file in XML format and is based on inotify, a file change notification system in the Linux kernel.
Then you can watch files easily:
iwatch /path/to/file -c 'run_your_script.sh'
I find that this suits the full scenario requested by the OP quite well:
ls *.py | entr -r python my_main.py
To run in the background, you'll need the -n non-interactive mode.
ls | entr -nr go run example.go &
See also http://eradman.com/entrproject/, although a bit oddly documented. Yes, you need to ls the file pattern you want matched, and pipe that into the entr executable. It will run your program and rerun it when any of the matched files change.
PS: It's not doing a diff on the piped text, so you don't need anything like ls --time-style
There's a perl script called lrrr (little restart runner, really) that I'm a contributor on. I use it daily at work.
You can install it with cpanm App::lrrr if you have a perl and cpanm installed, and then use it as follows:
lrrr -w dirs,or_files,to-watch your_cmd_here
The -w flag marks off files or directories to watch. Currently, it kills the process you ran if a file is changed, but I'm going to add a feature soon to toggle that.
Please let me know if there's anything to be added!
I use this "one liner" to restart long-running processes based on file changes
trap 'kill %1' 1 2 3 6; while : ; do YOUR_EXE & inotifywait -r YOUR_WATCHED_DIRECTORY -e create,delete,modify || break; kill %1; sleep 3; done
This will start the process and keep its output on the same console, watch for changes, and if there is one, shut down the process, wait three seconds (for further within-same-second writes or process shutdown time), then do the above again.
ctrl-c & ssh-disconnect will be respected and the process will exit once you're done.
For legibility:
trap 'kill %1' 1 2 3 6
while :
do
YOUR_EXE &
inotifywait \
-r YOUR_WATCHED_DIRECTORY \
-e create,delete,modify \
|| break
kill %1
sleep 3
done
E.g. for a project driven by package.json scripts:
"module" : "./dist/server.mjs",
"scripts" : {
"build" : "cd ./src && rollup -c ",
"watch" : "cd ./src && rollup -c -w",
"start" : "cd ./dist && node --trace-warnings --enable-source-maps ./server.mjs",
"test" : "trap 'kill %1' 1 2 3 6; while : ; do npm run start & inotifywait -r ./dist -e create,delete,modify || break; kill %1; sleep 3; done"
},
"dependencies" : {
Now you can run npm run watch (which compiles from src to dist) in one terminal and npm run test (the server runner & restarter) in another; as you edit ./src files, the builder process will update ./dist and the server will restart for you to test.
I needed a solution for golang's go run command, which spawns a subprocess. Combining the answers above with pidtree gave me this script.
#!/bin/bash
# go file to run
MAIN=cmd/example/main.go
# directories to recursively monitor
DIRS="cmd pkg"
# Based on pidtree from https://superuser.com/a/784102/524394
pidtree() {
declare -A CHILDS
while read P PP; do
CHILDS[$PP]+=" $P"
done < <(ps -e -o pid= -o ppid=)
walk() {
echo $1
for i in ${CHILDS[$1]}; do
walk $i
done
}
for i in "$#"; do
walk $i
done
}
sigint_handler()
{
kill $(pidtree $PID)
exit
}
trap sigint_handler SIGINT
while true; do
go run $MAIN &
PID=$!
inotifywait -e modify -e move -e create -e delete -e attrib -r $DIRS
PIDS=$(pidtree $PID)
kill $PIDS
wait $PID
sleep 1
done
-m switch of inotifywait tool
As no answer here addresses the -m switch of the inotifywait tool, I will share this: think parallel!
How I do this:
If I want to trigger when a file is modified, I use the CLOSE_WRITE event.
Instead of while true, I use the -m switch of the inotifywait command.
As many editors write to a new file and then rename it, I watch the directory instead and wait for an event with the correct filename.
#!/bin/sh
cmdFile="$1"
tempdir=$(mktemp -d)
notif="$tempdir/notif"
mkfifo "$notif"
inotifywait -me close_write "${cmdFile%/*}" >"$notif" 2>&1 &
notpid=$!
exec 5<"$notif"
rm "$notif"
rmdir "$tempdir"
"$#" & cmdPid=$!
trap "kill $notpid \$cmdPid; exit" 0 1 2 3 6 9 15
while read dir evt file <&5;do
case $file in
${cmdFile##*/} )
date +"%a %d %b %T file '$file' modified."
kill $cmdPid
"$#" & cmdPid=$!
;;
esac
done
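Note that cmdFile="$1" and "$@" both start from the first argument, so the first argument is at once the file to watch and the command to run. A hypothetical invocation, assuming the script is saved as restart-on-save.sh:

./restart-on-save.sh ./myscript.sh    # runs ./myscript.sh and restarts it each time the file is saved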
I did some searching online and found simple 'tutorials' on using named pipes. However, when I do anything with background jobs I seem to lose a lot of data.
[[Edit: found a much simpler solution, see reply to post. So the question I put forward is now academic - in case one might want a job server]]
Using Ubuntu 10.04 with Linux 2.6.32-25-generic #45-Ubuntu SMP Sat Oct 16 19:52:42 UTC 2010 x86_64 GNU/Linux
GNU bash, version 4.1.5(1)-release (x86_64-pc-linux-gnu).
My bash function is:
function jqs
{
pipe=/tmp/__job_control_manager__
trap "rm -f $pipe; exit" EXIT SIGKILL
if [[ ! -p "$pipe" ]]; then
mkfifo "$pipe"
fi
while true
do
if read txt <"$pipe"
then
echo "$(date +'%Y'): new text is [[$txt]]"
if [[ "$txt" == 'quit' ]]
then
break
fi
fi
done
}
I run this in the background:
> jqs&
[1] 5336
And now I feed it:
for i in 1 2 3 4 5 6 7 8
do
(echo aaa$i > /tmp/__job_control_manager__ && echo success$i &)
done
The output is inconsistent.
I frequently don't get all success echoes.
I get at most as many "new text" echoes as success echoes, sometimes fewer.
If I remove the '&' from the 'feed', it seems to work, but I am blocked until the output is read. Hence I want sub-processes to be the ones that block, not the main process.
The aim being to write a simple job control script so I can run say 10 jobs in parallel at most and queue the rest for later processing, but reliably know that they do run.
Full job manager below:
function jq_manage
{
export __gn__="$1"
pipe=/tmp/__job_control_manager_"$__gn__"__
trap "rm -f $pipe" EXIT
trap "break" SIGKILL
if [[ ! -p "$pipe" ]]; then
mkfifo "$pipe"
fi
while true
do
date
jobs
if (($(jobs | egrep "Running.*echo '%#_Group_#%_$__gn__'" | wc -l) < $__jN__))
then
echo "Waiting for new job"
if read new_job <"$pipe"
then
echo "new job is [[$new_job]]"
if [[ "$new_job" == 'quit' ]]
then
break
fi
echo "In group $__gn__, starting job $new_job"
eval "(echo '%#_Group_#%_$__gn__' > /dev/null; $new_job) &"
fi
else
sleep 3
fi
done
}
function jq
{
# __gn__ = first parameter to this function, the job group name (the pool within which to allocate __jN__ jobs)
# __jN__ = second parameter to this function, the maximum number of jobs to run concurrently
export __gn__="$1"
shift
export __jN__="$1"
shift
export __jq__=$(jobs | egrep "Running.*echo '%#_GroupQueue_#%_$__gn__'" | wc -l)
if (($__jq__ < 1))
then
eval "(echo '%#_GroupQueue_#%_$__gn__' > /dev/null; jq_manage $__gn__) &"
fi
pipe=/tmp/__job_control_manager_"$__gn__"__
echo $@ >$pipe
}
Calling
jq <name> <max processes> <command>
jq abc 2 sleep 20
will start one process.
That part works fine. Start a second one, fine.
One by one by hand seem to work fine.
But starting 10 in a loop seems to lose some of them, as in the simpler example above.
Any hints as to what I can do to solve this apparent loss of IPC data would be greatly appreciated.
Regards,
Alain.
Your problem is the if statement below:
while true
do
if read txt <"$pipe"
....
done
What is happening is that your job queue server is opening and closing the pipe each time around the loop. This means that some of the clients are getting a "broken pipe" error when they try to write to the pipe - that is, the reader of the pipe goes away after the writer opens it.
To fix this, change your loop in the server to open the pipe once for the entire loop:
while true
do
if read txt
....
done < "$pipe"
Done this way, the pipe is opened once and kept open.
You will need to be careful of what you run inside the loop, as all processing inside the loop will have stdin attached to the named pipe. You will want to make sure you redirect stdin of all your processes inside the loop from somewhere else, otherwise they may consume the data from the pipe.
Edit: With the problem now being that you are getting EOF on your reads when the last client closes the pipe, you can use jilles' method of duping the file descriptors, or you can just make sure you are a client too and keep the write side of the pipe open:
while true
do
if read txt
....
done < "$pipe" 3> "$pipe"
This will hold the write side of the pipe open on fd 3. The same caveat applies to this file descriptor as with stdin: you will need to close it so any child processes don't inherit it. It probably matters less than with stdin, but it would be cleaner.
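A sketch of that inner redirection, with a hypothetical some_job standing in for whatever each line should trigger:

while read txt; do
    # </dev/null keeps the child off the FIFO's read side,
    # 3<&- stops it from inheriting the write side
    some_job "$txt" </dev/null 3<&- &
done < "$pipe" 3> "$pipe"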
As said in other answers, you need to keep the fifo open at all times to avoid losing data.
However, once all writers have gone away after the fifo has been opened (so there was a writer at some point), reads return immediately (and poll() returns POLLHUP). The only way to clear this state is to reopen the fifo.
POSIX does not provide a solution to this but at least Linux and FreeBSD do: if reads start failing, open the fifo again while keeping the original descriptor open. This works because in Linux and FreeBSD the "hangup" state is local to a particular open file description, while in POSIX it is global to the fifo.
This can be done in a shell script like this:
while :; do
exec 3<tmp/testfifo
exec 4<&-
while read x; do
echo "input: $x"
done <&3
exec 4<&3
exec 3<&-
done
Just for those that might be interested, [[re-edited]] following comments by camh and jilles, here are two new versions of the test server script.
Both versions now work exactly as hoped.
camh's version for pipe management:
function jqs # Job queue manager
{
pipe=/tmp/__job_control_manager__
trap "rm -f $pipe; exit" EXIT TERM
if [[ ! -p "$pipe" ]]; then
mkfifo "$pipe"
fi
while true
do
if read -u 3 txt
then
echo "$(date +'%Y'): new text is [[$txt]]"
if [[ "$txt" == 'quit' ]]
then
break
else
sleep 1
# process $txt - remember that if this is to be a spawned job, we should close fd 3 and 4 beforehand
fi
fi
done 3< "$pipe" 4> "$pipe" # 4 is just to keep the pipe opened so any real client does not end up causing read to return EOF
}
jilles' version for pipe management:
function jqs # Job queue manager
{
pipe=/tmp/__job_control_manager__
trap "rm -f $pipe; exit" EXIT TERM
if [[ ! -p "$pipe" ]]; then
mkfifo "$pipe"
fi
exec 3< "$pipe"
exec 4<&-
while true
do
if read -u 3 txt
then
echo "$(date +'%Y'): new text is [[$txt]]"
if [[ "$txt" == 'quit' ]]
then
break
else
sleep 1
# process $txt - remember that if this is to be a spawned job, we should close fd 3 and 4 beforehand
fi
else
# Close the pipe and reconnect it so that the next read does not end up returning EOF
exec 4<&3
exec 3<&-
exec 3< "$pipe"
exec 4<&-
fi
done
}
Thanks to all for your help.
Like camh & Dennis Williamson say, don't break the pipe.
Now I have smaller examples, direct on the command line:
Server:
(
for i in {0,1,2,3,4}{0,1,2,3,4,5,6,7,8,9};
do
if read s;
then echo ">>$i--$s//";
else
echo "<<$i";
fi;
done < tst-fifo
)&
Client:
(
for i in {%a,#b}{1,2}{0,1};
do
echo "Test-$i" > tst-fifo;
done
)&
The key line can be replaced with:
(echo "Test-$i" > tst-fifo&);
All client data sent to the pipe gets read, though with option two of the client one may need to start the server a couple of times before all data is read.
But although the read waits for data in the pipe to start with, once data has been pushed, it reads the empty string forever.
Any way to stop this?
Thanks for any insights again.
On the one hand the problem is worse than I thought:
Now there seems to be a case in my more complex example (jq_manage) where the same data is being read over and over again from the pipe (even though no new data is being written to it).
On the other hand, I found a simple solution (edited following Dennis' comment):
function jqn # compute the number of jobs running in that group
{
__jqty__=$(jobs | egrep "Running.*echo '%#_Group_#%_$__groupn__'" | wc -l)
}
function jq
{
__groupn__="$1"; shift # job group name (the pool within which to allocate $__jmax__ jobs)
__jmax__="$1"; shift # maximum number of jobs to run concurrently
jqn
while (($__jqty__ >= $__jmax__))
do
sleep 1
jqn
done
eval "(echo '%#_Group_#%_$__groupn__' > /dev/null; $#) &"
}
Works like a charm.
No socket or pipe involved.
Simple.
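A hypothetical use of it, throttling ImageMagick conversions to three at a time (the group name and the command are only illustrative):

for f in *.png; do
    jq converts 3 convert "$f" "${f%.png}.jpg"
done
wait    # let the last batch finish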
run say 10 jobs in parallel at most and queue the rest for later processing, but reliably know that they do run
You can do this with GNU Parallel. You will not need to write this scripting yourself.
http://www.gnu.org/software/parallel/man.html#options
You can set --max-procs ("Number of jobslots. Run up to N jobs in parallel"). There is an option to set the number of CPU cores you want to use. You can save the list of executed jobs to a log file, but that is a beta feature.
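A rough sketch of what that looks like (do_one_task and the paths are placeholders):

# at most 10 jobs at a time; each finished job is appended to the joblog
parallel --jobs 10 --joblog /tmp/jobs.log do_one_task {} ::: input1 input2 input3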