Force kill subprocess with a very big shell script - node.js

I have a big ffmpeg shell script of about 80,000 characters and another smaller one.
I'm executing them with execa, which gives me a PID:
execa(`chmod +x command.sh; chmod +x command2.sh; ./command.sh & ./command2.sh`, {shell: true, detached: true});
After I leave the tab I'm executing exec(`kill -9 ${pid}`) on that subprocess, and it takes 3-4 minutes to die. How can I optimize this? Can I kill it instantly? During those 3-4 minutes while it tries to close, it uses a lot of CPU.
Follow-up information
So I have 2 shell scripts.
One is for the audio stream and one for the video stream.
I'm using & to execute them at the same time, because I need both audio and video simultaneously when playing a video.
audioStream.sh is just an ffmpeg command that outputs HLS audio chunks.
videoStream.sh contains 400 ImageMagick commands that convert images with a border and 85 ffmpeg commands that output video HLS chunks, which go into the m3u8 master playlist that feeds the video player. The commands are delimited with ';'.

I suggest you start debugging by running your code as a simple bash script.
bash-script.sh
#!/bin/bash
chmod +x command.sh;
chmod +x command2.sh;
./command.sh &
./command2.sh
Once your script is running, try to kill it with the pkill command.
Something like:
pkill -9 -f "command.sh"
See how long it takes to kill it.
If it takes too long, run ./command.sh on its own from the command line and try to kill it again with pkill as above.
If it still takes long, note that SIGKILL cannot be trapped or delayed by the receiving process, so the lingering CPU load almost certainly comes from the child processes command.sh has already spawned (the individual ffmpeg and ImageMagick commands), which kill -9 on a single PID never touches. You need to kill those children as well, for example by killing the whole process group.
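Since the scripts were started with execa's detached: true, the child is made the leader of its own process group, so you can signal the whole group at once with a negative PID. A minimal sketch, assuming $pid holds the PID execa returned:

# Kill the entire process group, not just the launching shell.
# '--' ends option parsing so the negative (group) PID isn't read as a flag.
kill -9 -- -"$pid"

From node, process.kill(-pid, 'SIGKILL') does the same thing.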

Related

Starting a FIFO at startup on Linux Fedora

I have been using FIFOs for controlling the mpg123 player, where every time I need to execute these 3 commands:
mkfifo a     # create the fifo
cat > a &    # keep it open indefinitely
mypid=$!     # save the PID of the background cat
I want to put this into some script which would execute at boot, so I wrote a script containing these commands,
but it was not working. After some searching I found I had to execute it like
. test.sh
Manually I can execute it that way, but I am struggling with how to execute it automatically.
EDITED
test.sh
cd /root/work/
Executing this as ./test.sh will not change the directory of the calling terminal, since the script runs in a child process; executing it as . test.sh sources it in the current shell and does change the directory to /root/work.
I want to execute it as . test.sh through some function/script or anything that I can put at startup, so that it runs at every boot.
Since mpg123 provides a feature for FIFO control of the player, instead of executing all the commands mentioned above you can just run
mpg123 -R --fifo /usr/test/FIFO_NAME
and then send commands to the FIFO and it's done.
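A minimal sketch of driving the player through that FIFO; the path is the one from the answer, the song path is illustrative, and LOAD/PAUSE are commands from mpg123's remote-control interface:

# Start mpg123 in remote-control mode, reading commands from the FIFO.
mpg123 -R --fifo /usr/test/FIFO_NAME &

# Later, from any script or shell, write commands into the FIFO:
echo "LOAD /path/to/song.mp3" > /usr/test/FIFO_NAME
echo "PAUSE" > /usr/test/FIFO_NAME   # PAUSE toggles pause/resume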

Why would a loop die?

I'm using a simple loop to restart a process if it dies. Occasionally I've seen the loop stop, which of course it shouldn't do. What could be some causes of this? I'm using a low-end node/VPS running Ubuntu 14. Thanks. :)
This is the loop I use.
#!/bin/bash
period=${1:-60}
while :
do
sleep 20 &
sh restart.sh
wait
done
This is restart.sh. It uses pgrep to check for a running 'ffmpeg' process and, if none is found, re-runs the ffmpeg command which broadcasts my own internet radio station, which I can then listen to anywhere :)
#!/bin/bash
pgrep ffmpeg
if [ $? -ne 0 ]
then
    killall ffmpeg
    killall rtmpdump
    sleep 1
    nohup ffmpeg (a bunch of ffmpeg stuff) &
fi
So you're saying this could be an issue with ffmpeg hanging/freezing, rather than the loop dying out? Are there ways I could improve what I am doing here? TBH I'm only about a month into Linux and a week into bash, so this is the best I could do.
Thanks for your help so far, it was very useful!
Depending on what restart.sh does, the script could wait forever if sh restart.sh hangs. By specifying wait with no PID, it waits until all process IDs known to the invoking shell have terminated (see man 1P wait). If sh restart.sh hangs, wait is never satisfied.
If there is no specific requirement to background sleep, why not run sleep in the foreground and omit wait altogether:
#!/bin/bash
period=${1:-60}
while :
do
sleep 20
sh restart.sh
done
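If the real failure mode is restart.sh itself hanging (for example because a frozen ffmpeg makes some command in it block), another hedged option is to cap each iteration with coreutils timeout, assuming restart.sh is safe to interrupt mid-run:

#!/bin/bash
while :
do
    sleep 20
    # Allow restart.sh at most 60 seconds per iteration; timeout sends
    # SIGTERM if it overruns, so the loop can never get stuck on one pass.
    timeout 60 sh restart.sh
done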

Kill ssh and/or a remote process from a bash script

I am trying to run the following command as part of a bash script which is supposed to open an ssh channel, run a program on the remote machine, save the output to a file for 10 seconds, kill the remote process that was writing to the file, and then give control back to the bash script.
#!/bin/bash
ssh hostname '/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null; sshpid=!$; sleep 10; kill -9 $sshpid 2>/dev/null &'
Unfortunately, what it seems to be doing is starting the program nodes-listener remotely, but it never gets any further and never gives control back to the bash script. So the only way to stop the execution is Ctrl+C.
Killing ssh doesn't help (or rather can't be executed), since control is not with the bash script: it waits for the command within the ssh session to complete, which of course never happens, as the remote process has to be killed for it to stop.
Here's the command line that you're running on the remote system:
/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null
sshpid=!$
sleep 10
kill -9 $sshpid 2>/dev/null &
You should change it to this:
/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null & <-- Ampersand goes here
sshpid=$!
sleep 10
kill -9 $sshpid 2>/dev/null
You want to start nodes-listener and then kill it after ten seconds. To do this, you need to start nodes-listener as a background process, so that the shell which is executing this command line can move on to the next command after starting nodes-listener. The & in your command line is in the wrong place and applies only to the kill command. You need to apply it to the nodes-listener command.
I'll also note that your sshpid=!$ line was incorrect. You want sshpid=$!. $! is the process ID of the last command started in the background.
You need to place the ampersand after the first command, then put the remaining commands onto the next line:
ssh hostname -- '/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null &
sshpid=$!; sleep 10; kill $sshpid 2>/dev/null'
Btw, ssh returns after all commands have been executed, which means it closes the allocated pty as well. If there are still background jobs running in that shell session, they will be killed by SIGHUP. This means you can probably omit the explicit kill command (depending on whether nodes-listener handles SIGHUP and SIGTERM differently). Given that, you could simplify the code to the following:
ssh hostname -- sh -c '/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null &
sleep 10'
I have resolved this by pushing the shell script to the remote machine and executing it there. It is admittedly less tidy and relies on space being available on the remote computer.
Since my remote machine is a small physical device, space usage matters (even for the tiny amount required in this case).
/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null &
sshpid=$!
sleep 20
sync
# killing nodes-listener process and giving control back to the base bash
killall -9 nodes-listener 2>/dev/null && echo "nodes-listener is killed"
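For completeness, the whole background-and-kill dance can be avoided if GNU coreutils timeout is available on the remote machine; a one-line sketch using the hostname and duration from the question:

# Run nodes-listener for 10 seconds, then terminate it; ssh returns
# as soon as timeout does.
ssh hostname 'timeout 10 /root/bin/nodes-listener > /tmp/nodesListener.out </dev/null'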

How can I launch a new process that is NOT a child of the original process?

(OS X 10.7) An application we use lets us assign scripts to be called when certain activities occur within the application. I have assigned a bash script and it's being called; the problem is that I need to execute a few commands, wait 30 seconds, and then execute some more commands. If I have my bash script do a "sleep 30", the entire application freezes for those 30 seconds while waiting for my script to finish.
I tried putting the 30-second wait (and the second set of commands) into a separate script and calling "./secondScript &", but the application still sits there for 30 seconds doing nothing. I assume the application is waiting for the script and all child processes to terminate.
I've tried these variations for calling the second script from within the main script, they all have the same problem:
nohup ./secondScript &
( ( ./secondScript & ) & )
( ./secondScript & )
nohup script -q /dev/null secondScript &
I do not have the ability to change the application and tell it to launch my script and not wait for it to complete.
How can I launch a process (I would prefer the process to be in a scripting language) such that the new process is not a child of the current process?
Thanks,
Chris
p.s. I tried the "disown" command and it didn't help either. My main script looks like this:
[initial commands]
echo Launching second script
./secondScript &
echo Looking for jobs
jobs
echo Sleeping for 1 second
sleep 1
echo Calling disown
disown
echo Looking again for jobs
jobs
echo Main script complete
and what I get for output is this:
Launching second script
Looking for jobs
[1]+ Running ./secondScript &
Sleeping for 1 second
Calling disown
Looking again for jobs
Main script complete
and at this point the calling application sits there for 45 seconds, waiting for secondScript to finish.
p.p.s
If, at the top of the main script, I execute "ps" the only thing it returns is the process ID of the interactive bash session I have open in a separate terminal window.
The value of $SHELL is /bin/bash
If I execute "ps -p $$" it correctly tells me
PID TTY TIME CMD
26884 ?? 0:00.00 mainScript
If I execute "lsof -p $$" it gives me all kinds of results (I didn't paste all the columns here assuming they aren't relevant):
FD TYPE NAME
cwd DIR /private/tmp/blahblahblah
txt REG /bin/bash
txt REG /usr/lib/dyld
txt REG /private/var/db/dyld/dyld_shared_cache_x86_64
0 PIPE
1 PIPE -> 0xffff8041ea2d10
2 PIPE -> 0xffff8017d21cb
3r DIR /private/tmp/blahblah
4r REG /Volumes/DATA/blahblah
255r REG /Volumes/DATA/blahblah
The typical way of doing this in Unix is to double fork. In bash, you can do this with
( sleep 30 & )
( ... ) creates a child process (a subshell), and & inside it creates a grandchild process. When the child process dies, the grandchild process is inherited by init.
If this doesn't work, then your application is not waiting for child processes.
Other things it may be waiting for include the session and open lock files:
To create a new session, Linux has the setsid command. On OS X, you might be able to do it through script, which incidentally also creates a new session:
# Linux:
setsid sleep 30
# OS X:
nohup script -q -c 'sleep 30' /dev/null &
To find a list of inherited file descriptors, you can use lsof -p yourpid, which will output something like:
sleep 22479 user 0u CHR 136,32 0t0 35 /dev/pts/32
sleep 22479 user 1u CHR 136,32 0t0 35 /dev/pts/32
sleep 22479 user 2u CHR 136,32 0t0 35 /dev/pts/32
sleep 22479 user 5w REG 252,0 0 1048806 /tmp/lockfile
In this case, in addition to the standard FDs 0, 1 and 2, you also have a fd 5 open with a lock file that the parent can be waiting for.
To close fd 5, you can use exec 5>&-. If you think the lock file might be stdin/stdout/stderr themselves, you can use nohup to redirect them to something else.
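Putting the double fork and the file-descriptor advice together, a minimal sketch for the script in the question (the redirections replace the inherited stdio pipes the application might otherwise wait on):

# Subshell + & double-forks secondScript so init inherits it, and the
# redirections ensure no pipe shared with the calling application stays open.
( ./secondScript </dev/null >/dev/null 2>&1 & )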
Another way is to abandon the child
#!/bin/bash
yourprocess &
disown
As far as I understand, the application is still waiting for the process to finish even though init should have taken care of this child process.
It could be that the "application" intercepts the orphan handling which is normally done by init.
In that case, only a parallel process with some IPC can offer a solution (see my other answer)
I think it depends on how your parent process tries to detect if your child process has been finished.
In my case (my parent process was GNU make), I succeeded by closing stdout and stderr (slightly based on the answer of that other guy), like this:
sleep 30 >&- 2>&- &
You might also close stdin
sleep 30 <&- >&- 2>&- &
or additionally disown your child process (not for Mac)
sleep 30 <&- >&- 2>&- & disown
Currently tested only in bash on Kubuntu 14.04 and Mac OS X.
If all else fails:
Create a named pipe.
Start the "slow" script independently of the "application"; make sure it executes its task in an endless loop, starting each iteration by reading from the pipe. It will block on that read until something is written.
From the application, start your other script. When it needs to invoke the "slow" script, it just writes some data to the pipe. The slow script runs independently, so your script won't wait for the "slow" script to finish. A sketch follows after the summary below.
So, to answer the question:
bash - how can I launch a new process that is NOT a child of the original process?
Simple: don't launch it yourself, but let an independent entity launch it: init during boot, or on the fly with the at or batch commands.
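A minimal sketch of that named-pipe setup; the pipe path, the worker name and the 30-second delay are illustrative:

#!/bin/bash
# slow-worker.sh: started independently of the application (e.g. at boot).
PIPE=/tmp/slow.pipe
[ -p "$PIPE" ] || mkfifo "$PIPE"
# Open the FIFO read/write on fd 3 so reads block waiting for data
# instead of hitting EOF whenever a writer disconnects.
exec 3<> "$PIPE"
while read -r line <&3
do
    sleep 30
    # ...second set of commands here...
done

The application-side script then just triggers it and returns immediately:

echo go > /tmp/slow.pipe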
Here I have a shell
└─bash(13882)
Where I start a process like this:
$ (urxvt -e ssh somehost&)
I get a process tree (this output snipped from pstree -p):
├─urxvt(14181)───ssh(14182)
where the process is parented beneath pid 1 (systemd in my case).
However, had I instead done this (note where the & is):
$ (urxvt -e ssh somehost)&
then the process would be a child of the shell:
└─bash(13882)───urxvt(14181)───ssh(14182)
In both cases the shell prompt is immediately returned and I can exit
without terminating the process tree that I started above.
For the latter case the process tree is reparented beneath pid 1 when
the shell exits, so it ends up the same as the first example.
├─urxvt(14181)───ssh(14182)
Either way, the result is a process tree that outlives the shell. The
only difference is the initial parenting of that process tree.
For reference, you can also use
nohup urxvt -e ssh somehost &
urxvt -e ssh somehost & disown $!
Both give the same process tree as the second example above.
└─bash(13882)───urxvt(14181)───ssh(14182)
When the shell is terminated the process tree is, like before, reparented
to pid 1.
nohup additionally redirects the process' standard output to a file
nohup.out so, if that is a useful trait, it may be a more useful choice.
Otherwise, with the first form above, you immediately have a completely
detached process tree.

How to stop ffmpeg remotely?

I'm running ffmpeg on another machine for screen capture. I'd like to be able to stop it recording remotely. ffmpeg requires that q be pressed to stop encoding, as it has to do some finalization to finish the file cleanly. I know I could kill it with kill/killall, however this can lead to corrupt videos.
Press [q] to stop encoding
I can't find anything on Google specifically about this, but there is some suggestion that echoing into /proc/<pid>/fd/0 will work.
I've tried this, but it does not stop ffmpeg. The q is, however, shown in the terminal in which ffmpeg is running.
echo -n q > /proc/16837/fd/0
So how can I send a character to another existing process in such a way it is as if it were typed locally? Or is there another way of remotely stopping ffmpeg cleanly.
Here's a neat trick I discovered when I was faced with this problem: Make an empty file (it doesn't have to be a named pipe or anything), then write 'q' to it when it's time to stop recording.
$ touch stop
$ <./stop ffmpeg -i ... output.ext >/dev/null 2>>Capture.log &
$ # ...wait until it is time to stop recording...
$ echo 'q' > stop
FFmpeg stops as though it got 'q' from the terminal STDIN, presumably because it polls stdin while encoding: reads at the end of a regular file simply return nothing until the 'q' is appended, and the next read picks it up.
Newer versions of ffmpeg don't use 'q' anymore, at least on Ubuntu Oneiric; instead they say to press Ctrl+C to stop them. So with a newer version you can simply use 'killall -INT' to send them SIGINT instead of SIGTERM, and they should exit cleanly.
Elaborating on the answer from sashoalm, I have tested both scenarios, and here are the results:
My experiments show that doing
killall --user $USER --ignore-case --signal INT ffmpeg
Produces the following on the console where ffmpeg was running
Exiting normally, received signal 2.
While doing
killall --user $USER --ignore-case --signal SIGTERM ffmpeg
Produces
Exiting normally, received signal 15.
So it looks like ffmpeg is fine with both signals.
System: Debian GNU/Linux 9 (stretch), 2020-02-28
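Since the original question is about a machine you reach over the network, the signal approach combines naturally with ssh; a one-line sketch (capturehost is a placeholder for the recording machine):

# Ask the remote ffmpeg to finalize the file and exit cleanly (SIGINT = Ctrl+C).
ssh capturehost 'killall -INT ffmpeg'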
You can also try to use "expect" to automate the execution and stopping of the program. You would have to start it inside some virtual shell like screen, tmux or byobu, and then start ffmpeg inside of it. This way you can reattach to the virtual shell later and send the "q" keystroke.
Locally or remotely, start a virtual shell session, let's say with "screen". Name the session with the -S option, like screen -S recvideo. Then you can start ffmpeg as you like. You can, optionally, detach from this session with Ctrl+a, d.
Connect to the machine where ffmpeg is running inside the screen (or tmux or whatever), reconnect to it with screen -d -RR recvideo, and then send the "q".
To do that from inside a script you can then use expect, like:
prompt="> "
expect << EOF
set timeout 20
spawn screen -S recvideo
expect "$prompt"
send -- "ffmpeg xxxxx\r"
set timeout 1
expect eof
EOF
Then, in another moment or script point or in another script you recover it:
expect << EOF
set timeout 30
spawn screen -d -RR recvideo
expect "$prompt"
send -- "q"
expect "$prompt"
send -- "exit\r"
expect eof
EOF
You can also automate the whole ssh session with expect, passing a sequence of commands and "expects" to do what you want.
The question has already been answered for Linux, but it came up when I was looking for the Windows equivalent, so I'm going to add that to the answers:
In PowerShell, you start the process like this:
$((Start-Process ffmpeg -passthru -argument "FFMPEG_ARGS").ID)
This sends back the PID of the ffmpeg process, which you can store in a variable or echo, and then you send the Windows equivalent of SIGINT (Ctrl+C) using taskkill:
taskkill /pid FFMPEG_PID
I tried with Stop-Process (which is what comes up when searching how to do this on Google), but it actually kills the process. (And yes, taskkill doesn't kill it, it gently asks the process to stop... good naming :D)
