Disown a function from a script? [duplicate] - linux

This question already has answers here:
Call a function using nohup
(6 answers)
Closed 4 years ago.
I'm not sure this is really possible, but maybe someone has some suggestion on how I can do something similar to this:
#!/bin/bash
dissed() {
  for i in {1..10}
  do
    sleep 5
    echo "$(date)"
  done
}
/usr/bin/screen -S disowned -d -m dissed & disown -h;
screen -ls
echo "done."
Of course this doesn't work, since I assume the function gets killed when the script ends. I know it works when I substitute an actual .sh file where the "dissed" function gets called, but this is what I want to avoid.
Is there some way to disown a function or subscript located in the parent?

To detach a process from a bash script:
nohup ./process &
If you stop your bash script with SIGINT for instance, the process won't be bothered and will continue to execute normally. stdout & stderr will be redirected to a file: nohup.out.
If you wish to execute a detached command while being able to see the output in the terminal, then use tail:
TEMP_LOG_FILE=tmp.log
> "$TEMP_LOG_FILE"   # create/empty the log file before tailing it
nohup ./process &> "$TEMP_LOG_FILE" & tail -f "$TEMP_LOG_FILE" &
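Applied to the original question about a function rather than an external program, one option (just a sketch, not the only approach) is to hand the function's definition to a fresh bash process via declare -f, so nohup can detach it like any other command. The log file name dissed.log is only an illustration:
#!/bin/bash
dissed() {
  for i in {1..10}
  do
    sleep 5
    echo "$(date)"
  done
}
# Re-create the function inside a new bash process and run it there;
# nohup plus disown keep it running after this script exits.
nohup bash -c "$(declare -f dissed); dissed" > dissed.log 2>&1 &
disown
echo "done."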

Related

Running a process with the TTY detached

I'd like to run a Linux console command from a terminal while preventing it from accessing the TTY by itself (which will, for example, often happen when the command tries to request a password from the user - this should simply fail). The closest I have gotten to a solution is this wrapper:
temp=`mktemp -d`
echo "$@" > $temp/run.sh            # write the command line into a helper script
mkfifo $temp/out $temp/err          # FIFOs carry the command's stdout/stderr back to us
setsid sh -c "sh $temp/run.sh > $temp/out 2> $temp/err" &   # setsid detaches it from the controlling TTY
cat $temp/err 1>&2 &
cat $temp/out
rm -f $temp/out $temp/err $temp/run.sh
rmdir $temp
This runs the command as expected without TTY access, but passing the stdout/stderr output through the FIFO pipes does not work for some reason. I end up with no output at all even though the process wrote to stdout or stderr.
Any ideas?
Well, thank you all for having a look. Turns out that the script already contained a working approach. It just contained a typo which caused it to fail. I corrected it in the question so it may serve for future reference.
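For reference, a sketch of how the corrected wrapper might be used; the file name notty.sh is an assumption for illustration, not part of the original:
chmod +x notty.sh
# The wrapped command runs with no controlling TTY, so anything that tries to
# read a password from the terminal fails instead of hanging, while its normal
# stdout/stderr are still forwarded through the FIFOs:
./notty.sh echo "hello from a detached process"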

'nohup 2>&1 >out' vs 'nohup >out 2>&1' [duplicate]

This question already has answers here:
What's the difference between redirections "1>/dev/null 2>&1" and "2>&1 1>/dev/null"?
(3 answers)
Closed 2 years ago.
For me, the following two bash commands produce different results:
# returns instantly
ssh localhost 'nohup sleep 5 >out 2>&1 &'
# returns after 5 seconds
ssh localhost 'nohup sleep 5 2>&1 >out &'
It's surprising because the following two bash commands produce the same result:
# both return instantly
nohup sleep 5 >out 2>&1 &
nohup sleep 5 2>&1 >out &
Why?
This form:
command >out 2>&1
... redirects standard output to the file first, and then points standard error at the same place standard output now points. Both streams end up in the file.
This form:
command 2>&1 >out
... points standard error at wherever standard output currently points (the terminal, or here the ssh channel), and only then redirects standard output to the file. Standard error does not end up in the file.
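A quick way to see the difference locally, using a command that writes only to standard error (the file name out matches the examples above):
ls /nonexistent >out 2>&1    # stderr follows stdout into the file: nothing on the terminal, 'out' has the error
ls /nonexistent 2>&1 >out    # stderr was copied from the terminal *before* stdout was redirected: the error shows on the terminal, 'out' stays empty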
So, at your terminal, you are backgrounding the task either way and the prompt returns immediately. Over ssh the difference matters because ssh keeps the session open until nothing on the remote side still holds its output streams. In the first case both streams point at the file, so there will never be any output to deliver and ssh can return at once. In the second case the backgrounded sleep still has standard error pointing at the ssh channel, so ssh keeps the connection open (and only returns after the 5 seconds) in case data arrives for you to see.

start process in background, get PID, and write into stdin in Shell

I need to do a buffer overflow for my system security course. For that I have a program (called canary) that I need to start, which asks for an input string via read().
To pull off the overflow I need to calculate a canary (a random canary built from the PID and the time). I already wrote a program (getcanary) which gets me the right canary. The problem:
I start canary in a separate terminal, then get its PID, then calculate the canary, followed by a write to canary's stdin. That last step is where I have a problem.
#!/bin/bash
echo "start canary"
x-terminal-emulator -e ./canary &
sleep 1
PID=$(pgrep canary)
CANARY=$(./getcanary $PID)
How can I write the command to the extra terminal? I already tried several solutions;
echo "cmd" > /proc/$PID/fd/0
is one of them.
I also tried
mkfifo fifo
cat > fifo &
./canary < fifo
echo "cmd" > fifo
Some other solutions are not allowed in my environment, as the script must run on a clean install of Xubuntu, so I can't use screen or tmux.
I hope you can help me,
Thank you! :)
PS.: I'm sorry if I misunderstood any of these solutions I tried, I'm not very familiar with shell scripting.
Write to the terminal, not to the running process!
#!/bin/bash
echo "start canary"
x-terminal-emulator -e ./canary &
termpid=$!     # PID of the terminal emulator, not of ./canary
sleep 1
# Locate the terminal's X window by its PID and type keystrokes into it
xvkbd -window $(xdotool search --sync --pid $termpid) -text "echo Hello world!\n"
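Applied to the original problem, the same idea could type the computed canary into that terminal. A sketch, assuming getcanary prints only characters that xvkbd can send:
#!/bin/bash
echo "start canary"
x-terminal-emulator -e ./canary &
termpid=$!
sleep 1
PID=$(pgrep canary)
CANARY=$(./getcanary "$PID")
# Type the computed canary (followed by Enter) into the terminal running ./canary
xvkbd -window $(xdotool search --sync --pid "$termpid") -text "${CANARY}\n"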

Bash redirection in a script in parallel

I have a bash script with a loop of processes that I want to run in parallel:
for i in {1..5}
do
  echo Running for simulation $i
  python script.py $i > ./outlogs/$i.log 2>&1 &
done
But when I do this the file redirection doesn't work, so $i.log just stays empty. The redirection only works when I do not use the & at the end, but then the script waits for each process to finish before starting the next one, which I don't want.
I tried a solution using script -c, but this does not update in real time, only once the process ends. Does anyone have a better suggestion where the file redirection works in this script and still updates in real time?
You simply need to add the -u option, so it will look like this:
python -u script.py $i > ./outlogs/$i.log 2>&1 &
The -u option forces Python's stdout and stderr to be unbuffered.
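For illustration, the whole loop with unbuffered output; setting the PYTHONUNBUFFERED environment variable would be an equivalent alternative to -u, and the mkdir and wait lines are additions, not part of the original:
#!/bin/bash
mkdir -p ./outlogs
for i in {1..5}
do
  echo "Running for simulation $i"
  # -u disables Python's output buffering, so the log file fills in real time
  python -u script.py "$i" > "./outlogs/$i.log" 2>&1 &
done
wait   # optional: block until all background simulations have finished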

Don't show the output of kill command in a Linux bash script [duplicate]

How can you suppress the Terminated message that comes up after you kill a
process in a bash script?
I tried set +bm, but that doesn't work.
I know another solution involves calling exec 2> /dev/null, but is that
reliable? How do I reset it back so that I can continue to see stderr?
In order to silence the message, you must be redirecting stderr at the time the message is generated. Because the kill command sends a signal and doesn't wait for the target process to respond, redirecting stderr of the kill command does you no good. The bash builtin wait was made specifically for this purpose.
Here is a very simple example that kills the most recent background command. ($! expands to the PID of the most recently started background job.)
kill $!
wait $! 2>/dev/null
Because both kill and wait accept multiple pids, you can also do batch kills. Here is an example that kills all background processes (of the current process/script of course).
kill $(jobs -rp)
wait $(jobs -rp) 2>/dev/null
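A minimal self-contained sketch of the same pattern inside a script:
#!/bin/bash
sleep 30 &      # some background job
bgpid=$!
# ... do other work ...
kill "$bgpid"
wait "$bgpid" 2>/dev/null   # reap the job; the "Terminated" notice goes to /dev/null
echo "no Terminated message was printed"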
I was led here from bash: silently kill background function process.
The short answer is that you can't. Bash always prints the status of foreground jobs. The monitoring flag only applies for background jobs, and only for interactive shells, not scripts.
See notify_of_job_status() in bash's jobs.c.
As you say, you can redirect so standard error is pointing to /dev/null but then you miss any other error messages. You can make it temporary by doing the redirection in a subshell which runs the script. This leaves the original environment alone.
(script 2> /dev/null)
which will lose all error messages, but just from that script, not from anything else run in that shell.
You can save and restore standard error, by redirecting a new filedescriptor to point there:
exec 3>&2 # 3 is now a copy of 2
exec 2> /dev/null # 2 now points to /dev/null
script # run script with redirected stderr
exec 2>&3 # restore stderr to saved
exec 3>&- # close saved version
But I wouldn't recommend this -- its only upside over the first approach is that it saves a sub-shell invocation, while being more complicated and possibly even altering the behavior of the script if the script itself manipulates file descriptors.
EDIT:
For a more appropriate answer, see the answer given by Mark Edgar.
Solution: use SIGINT (works only in non-interactive shells)
Demo:
cat > silent.sh <<"EOF"
sleep 100 &
kill -INT $!
sleep 1
EOF
sh silent.sh
http://thread.gmane.org/gmane.comp.shells.bash.bugs/15798
Maybe detach the process from the current shell process by calling disown?
The Terminated message is logged by the default signal handler of bash 3.x and 4.x. Just trap the TERM signal at the very beginning of the child process:
#!/bin/sh
## assume script name is test.sh
foo() {
  trap 'exit 0' TERM ## here is the key
  while true; do sleep 1; done
}
echo before child
ps aux | grep 'test\.s[h]\|slee[p]'
foo &
pid=$!
sleep 1 # give the trap time to be installed
echo before kill
ps aux | grep 'test\.s[h]\|slee[p]'
kill $pid ## no need to redirect stdin/stderr
sleep 1 # give the kill time to take effect
echo after kill
ps aux | grep 'test\.s[h]\|slee[p]'
Is this what we are all looking for?
Not wanted:
$ sleep 3 &
[1] 234
<pressing enter a few times....>
$
$
[1]+ Done sleep 3
$
Wanted:
$ (set +m; sleep 3 &)
<again, pressing enter several times....>
$
$
$
$
$
As you can see, no job end message. Works for me in bash scripts as well, also for killed background processes.
'set +m' disables job control (see 'help set') for the current shell. So if you enter your command in a subshell (as done here with the parentheses) you will not influence the job-control settings of the current shell. The only disadvantage is that you need to get the PID of your background process back to the current shell if you want to check whether it has terminated, or to evaluate its return code.
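One way to get that PID back out of the subshell is command substitution. A sketch, where sleep 30 stands in for the real command; its output is redirected away from the pipe so the substitution does not wait for it to finish:
pid=$( set +m; sleep 30 >/dev/null 2>&1 & echo $! )
# ... later ...
kill "$pid"   # no job notification, since the process was never a job of this shell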
This also works for killall (for those who prefer it):
killall -s SIGINT (yourprogram)
suppresses the message... I was running mpg123 in background mode.
It could only silently be killed by sending a ctrl-c (SIGINT) instead of a SIGTERM (default).
disown did exactly the right thing for me -- the exec 3>&2 is risky for a lot of reasons -- set +bm didn't seem to work inside a script, only at the command prompt
I had success with adding 'jobs 2>&1 >/dev/null' to the script. I'm not certain it will help anyone else's script, but here is a sample.
while true; do echo $RANDOM; done | while read line
do
  echo Random is $line the last jobid is $(jobs -lp)
  jobs 2>&1 >/dev/null
  sleep 3
done
Another way to disable job notifications is to place your command to be backgrounded in a sh -c 'cmd &' construct.
#!/bin/bash
# ...
pid="`sh -c 'sleep 30 & echo ${!}' | head -1`"
kill "$pid"
# ...
# or put several cmds in sh -c '...' construct
sh -c '
  sleep 30 &
  pid="${!}"
  sleep 5
  kill "${pid}"
'
I found that putting the kill command in a function and then backgrounding the function suppresses the termination output
function killCmd() {
  kill $1
}

killCmd $somePID &
Simple:
{ kill $!; } 2>/dev/null
Advantage? You can use any signal. For example:
{ kill -9 $PID; } 2>/dev/null
