Keeping Named Pipes Running - Linux

I am running a Raspberry Pi with Jessie Lite. I have created a named pipe:
mkfifo soundpipe
and I send my audio capture to the pipe:
arecord -f S16_LE -c1 -r48000 -t wav -D hw:1,0 > soundpipe
In a new terminal I then consume sound from this pipe:
cat soundpipe | lame -m m -f -b 32 - "/home/pi/arecordings/test.mp3"
When I press Ctrl+C in the terminal running the "cat" command, the arecord process (that's writing to the pipe) stops as well. Why is this process stopping, and how do I keep it always running?
Thanks.

The writer stops because there is no longer a reader on the named pipe: when you kill cat, the pipe's last read end closes, so arecord's next write fails and it receives SIGPIPE, which terminates it by default.
If you need it to run continuously, you'll have to keep a program reading (or at least holding open) the pipe, or use something other than a named pipe.
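One workaround (a minimal sketch, reusing the file names from the question): hold a dummy read-write descriptor on the FIFO in the shell that starts arecord. The kernel then always sees at least one reader, so arecord no longer receives SIGPIPE when the consumer detaches; it merely blocks once the pipe buffer (typically 64 KB) fills, and resumes when a new consumer attaches.
# keep a dummy read-write descriptor open so the FIFO always has a reader
exec 3<>soundpipe
arecord -f S16_LE -c1 -r48000 -t wav -D hw:1,0 > soundpipe &
# consumers can now attach and detach freely without killing arecord
cat soundpipe | lame -m m -f -b 32 - "/home/pi/arecordings/test.mp3"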

Related

Force kill subprocess with a very big shell script

I have a big ffmpeg shell script of 80,000 characters and another smaller one.
I'm executing it with execa and I get a PID:
execa(`chmod +x command.sh; chmod +x command2.sh; ./command.sh & ./command2.sh`, {shell: true, detached: true})
After I leave a tab I'm executing exec(`kill -9 ${pid}`) on that subprocess, and it takes about 3-4 minutes to kill it. How can I optimize this? Can I kill it instantly? During those 3-4 minutes while it tries to close, it uses a lot of CPU.
Follow-up information
So I have two shell scripts.
One for audio stream and one for video stream.
I'm using & to execute them simultaneously, because I need both audio and video at the same time when playing a video.
audioStream.sh is just a ffmpeg command that outputs hls audio chunks.
videoStream.sh contains 400 ImageMagick commands that convert images with borders, and 85 ffmpeg commands that output HLS video chunks, which go into the m3u8 master playlist used by the video player. The commands are delimited with ';'.
I suggest starting by debugging your code as a simple bash script:
bash-script.sh
#!/bin/bash
chmod +x command.sh;
chmod +x command2.sh;
./command.sh &
./command2.sh
Once your script is running, try to kill it with the pkill command.
Something like:
pkill -9 -f "command.sh"
See how long it takes to kill it.
If it takes too long, run ./command.sh directly from the command line and try to kill it again with pkill as above.
If it still takes long, the problem lies in how command.sh and its children handle signals. Note that SIGKILL cannot be trapped or ignored; what usually makes kill -9 look slow here is that it kills only the shell while the already-started ffmpeg/ImageMagick children keep running (and burning CPU), so you need to kill the whole process group rather than a single PID.
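If the slow teardown is really the children outliving the shell, killing the entire process group usually fixes it. A minimal sketch (script name from the question; launching via setsid is an assumption about your setup):
# (inside a script, where job control is off)
setsid ./command.sh &   # run the script as leader of a new process group
PGID=$!                 # the leader's PID doubles as the process-group ID
# later: the leading '-' makes kill target the whole group, so the
# ffmpeg/ImageMagick children die together with the shell
kill -9 -- -"$PGID"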

How to duplicate /dev/ttyUSB0 stream?

Is there a way to duplicate a serial device /dev/ttyUSB0 on Linux (Ubuntu 14.04)?
I made a program which reads the data from the RS232 port, and I would like to execute two instances of my program with different options in parallel. However, I cannot open /dev/ttyUSB0 twice from my program (written in C).
Ideally, I would like the ttyUSB0 driver to create two identical devices (/dev/ttyUSB0 and /dev/ttyUSB1). The driver would make sure that ttyUSB1 is an exact copy of ttyUSB0.
Is there a way to perform this from the command line (bash), or should I make my own special serial port driver?
You can use tee to copy everything read from /dev/ttyUSB0 to two named pipes.
mkfifo copy1 copy2
tee copy1 copy2 < /dev/ttyUSB0 > /dev/null &
program1 < copy1
program2 < copy2
(Strictly speaking, you only need one named pipe; the other program could read from an anonymous pipe connected directly to tee. This symmetrical treatment is a little cleaner, though.)
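A related caveat (an assumption, since the question doesn't show the port setup): configure the serial line before tee starts reading it, for example:
# put the port into raw mode at the desired speed (baud rate is an example value)
stty -F /dev/ttyUSB0 115200 raw -echo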

How to stop ffmpeg remotely?

I'm running ffmpeg on another machine for screen capture. I'd like to be able to stop it recording remotely. ffmpeg requires that q be pressed to stop encoding, as it has to do some finalization to finish the file cleanly. I know I could kill it with kill/killall; however, this can lead to corrupt videos.
Press [q] to stop encoding
I can't find anything on Google specifically for this, but there is some suggestion that echoing into /proc/<pid>/fd/0 will work.
I've tried this, but it does not stop ffmpeg. The q is, however, shown in the terminal in which ffmpeg is running.
echo -n q > /proc/16837/fd/0
So how can I send a character to another existing process in such a way that it is as if it were typed locally? Or is there another way of remotely stopping ffmpeg cleanly?
Here's a neat trick I discovered when I was faced with this problem: make an empty file (it doesn't have to be a named pipe or anything), then write 'q' to it when it's time to stop recording.
$ touch stop
$ <./stop ffmpeg -i ... output.ext >/dev/null 2>>Capture.log &
$ # ... wait until it is time to stop recording ...
$ echo 'q' > stop
FFmpeg stops as though it got 'q' from the terminal STDIN.
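Since stopping now only means writing to a regular file, the remote part becomes trivial (host and path here are placeholders):
$ ssh user@capture-host 'echo q > /path/to/stop'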
Newer versions of ffmpeg don't use 'q' anymore, at least on Ubuntu Oneiric; instead they say to press Ctrl+C to stop them. So with a newer version you can simply use killall -INT to send them SIGINT instead of SIGTERM, and they should exit cleanly.
Elaborating on the answer from sashoalm, I have tested both scenarios, and here are the results.
My experiments show that doing
killall --user $USER --ignore-case --signal INT ffmpeg
Produces the following on the console where ffmpeg was running
Exiting normally, received signal 2.
While doing
killall --user $USER --ignore-case --signal SIGTERM ffmpeg
Produces
Exiting normally, received signal 15.
So it looks like ffmpeg is fine with both signals.
System: Debian GNU/Linux 9 (stretch), tested 2020-02-28.
You can also try to use expect to automate the execution and stopping of the program. You would have to start it inside some virtual terminal like screen, tmux, or byobu, and then start ffmpeg inside of it. This way you would be able to reattach to the virtual terminal later and send the "q".
Locally or remotely, start a virtual terminal session, let's say with screen. Name the session with the -S option, like screen -S recvideo. Then you can start ffmpeg as you like. You can, optionally, detach from this session with Ctrl+a, d.
Connect to the machine where ffmpeg is running inside screen (or tmux or whatever), reconnect to the session with screen -d -RR recvideo, and then send the "q".
To do that from inside a script you can then use expect, like:
prompt="> "
expect << EOF
set timeout 20
spawn screen -S recvideo
expect "$prompt"
send -- "ffmpeg xxxxx\r"
set timeout 1
expect eof
EOF
Then, at another moment, at another point in the script, or in another script, you recover it:
expect << EOF
set timeout 30
spawn screen -d -RR recvideo
expect "$prompt"
send -- "q"
expect "$prompt"
send -- "exit\r"
expect eof
EOF
You can also automate the whole ssh session with expect, passing a sequence of commands and "expects" to do what you want.
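If all you need is to inject that single keystroke, screen can also do it without expect via its stuff command (a minimal sketch, assuming the session is named recvideo as above):
screen -S recvideo -p 0 -X stuff 'q'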
The question has already been answered for Linux, but it came up when I was looking for the Windows equivalent, so I'm going to add that to the answers:
In PowerShell, you start the process like this:
$((Start-Process ffmpeg -PassThru -ArgumentList "FFMPEG_ARGS").ID)
This sends back the PID of the ffmpeg process, which you can store in a variable or echo, and then you send the Windows equivalent of SIGINT (Ctrl+C) using taskkill:
taskkill /pid FFMPEG_PID
I tried with Stop-Process (which is what comes up when you search for how to do this), but it actually kills the process. (And yes, taskkill doesn't kill it, it gently asks the process to stop... good naming :D)

Linux Shell: VLC programming

Is there a way to manipulate VLC with a Linux shell script without the script waiting for VLC to close?
cvlc test.mp3
echo "Now playing!"
sleep 200
cvlc://pause:60
This code keeps running VLC until the file has finished playing, and by then it is evidently too late to pause it.
You need to use the D-Bus interface of VLC.
Nowadays you can use the MPRIS interface of VLC; it's a standard implemented by most players, like Clementine, Banshee, Songbird, Spotify, etc.
So, let's suppose you want to pause the currently playing song:
dbus-send --print-reply --session --dest=org.mpris.vlc /Player org.freedesktop.MediaPlayer.Pause
To play a song:
dbus-send --print-reply --session --dest=org.mpris.vlc /Player org.freedesktop.MediaPlayer.Play
I generally use qdbusviewer to find out which D-Bus interfaces are available to me.
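Applied to the script from the question, that would look something like this (a sketch; it assumes the org.mpris.vlc bus name shown above, which depends on the VLC version):
cvlc test.mp3 &    # background VLC so the script keeps running
echo "Now playing!"
sleep 200
dbus-send --print-reply --session --dest=org.mpris.vlc /Player org.freedesktop.MediaPlayer.Pause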
D-Bus is one way, but D-Bus does not exist on all systems. The more common method would be to use the rc interface:
cvlc -I rc --rc-host localhost:11337 -d
Then one can use netcat to pipe commands into the TCP socket. For example:
vlc -I rc --rc-host localhost:11337 test.mp3 -d &
echo "Now playing!"
sleep 200
echo pause | netcat localhost 11337
EDIT:
After testing with a few other interfaces, I have discovered that the oldrc interface accepts UNIX domain sockets, so the following will work as well, without needing to play around with firewalls or worry about anyone else on the network messing with your VLC instance:
vlc -I oldrc --rc-unix /var/run/vlc.sock -d
echo "Now Playing!"
sleep 200
echo "pause" | netcat -U /var/run/vlc.sock
It looks like you can also redirect from standard input or a named pipe. For more complicated things you could use libvlc:
http://wiki.videolan.org/Uncommon_uses

Example of using named pipes in Linux shell (Bash)

Can someone post a simple example of using named pipes in Bash on Linux?
One of the best examples of a practical use of a named pipe...
From http://en.wikipedia.org/wiki/Netcat:
Another useful behavior is using netcat as a proxy. Both ports and hosts can be redirected. Look at this example:
nc -l 12345 | nc www.google.com 80
Port 12345 represents the incoming request.
This starts an nc server on port 12345, and all connections get redirected to google.com:80. If a web browser makes a request to nc, the request will be sent to Google, but the response will not be sent to the web browser. That is because pipes are unidirectional. This can be worked around with a named pipe to redirect the input and output:
mkfifo backpipe
nc -l 12345 0<backpipe | nc www.google.com 80 1>backpipe
Here are the commands:
mkfifo named_pipe
echo "Hi" > named_pipe &
cat named_pipe
The first command creates the pipe.
The second command writes to the pipe (blocking). The & puts it into the background so you can continue to type commands in the same shell. The echo will exit when the FIFO is emptied by the next command.
The last command reads from the pipe.
Open two different shells, and leave them side by side. In both, go to the /tmp/ directory:
cd /tmp/
In the first one type:
mkfifo myPipe
echo "IPC_example_between_two_shells">myPipe
In the second one, type:
while read line; do echo "What has been passed through the pipe is ${line}"; done<myPipe
The first shell won't give you any prompt back until you execute the second part of the code in the second shell, because FIFO reads and writes block.
You can also have a look at the FIFO file type by doing ls -al myPipe and seeing the details of this specific type of file; an illustrative listing follows below.
The next step would be to embed the code in a script!
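For reference, a FIFO shows up with file type p in a long listing (illustrative output; owner, group, and date will differ):
$ ls -al myPipe
prw-r--r-- 1 user user 0 Jan  1 12:00 myPipe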
Creating a named pipe
$ mkfifo pipe_name
On Unix-like systems, a named pipe (FIFO) is a special type of file with no content. The mkfifo command creates the pipe on a file system (assigns a name to it), but doesn't open it. You need to open and close it separately, like any other file.
Using a named pipe
Named pipes are useful when you need to pipe from/to multiple processes or if you can't connect two processes with an anonymous pipe. They can be used in multiple ways:
In parallel with another process:
$ echo 'Hello pipe!' > pipe_name & # runs the writer in the background
$ cat pipe_name
Hello pipe!
Here the writer runs alongside the reader, allowing real-time communication between processes.
Sequentially with file descriptors:
$ # open the pipe on auxiliary FD #5 in both ways (otherwise it will block),
$ # then open descriptors for writing and reading and close the auxiliary FD
$ exec 5<>pipe_name 3>pipe_name 4<pipe_name 5>&-
$
$ echo 'Hello pipe!' >&3 # write into the pipe through FD #3
...
$ exec 3>&- # close the FD when you're done
$ # (otherwise reading will block)
$ cat <&4
Hello pipe!
...
$ exec 4<&-
In fact, communication through a pipe can be sequential, but it's limited to 64 KB (the pipe buffer size).
It's preferable to use descriptors to transfer multiple blocks of data, in order to reduce overhead.
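To see that limit in action, here is a small sketch (the 64 KB figure is the usual Linux default; cap_test is a throwaway name):
$ mkfifo cap_test
$ exec 3<>cap_test          # read-write open, so nothing blocks on open
$ dd if=/dev/zero bs=1k count=100 >&3 2>/dev/null &
$ sleep 1; kill -0 $! 2>/dev/null && echo 'writer blocked: buffer is full'
writer blocked: buffer is full
$ kill $!                   # release the blocked writer
$ exec 3<&-; rm cap_test    # clean up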
Conditionally with signals:
$ handler() {
> cat <&3
>
> exec 3<&-
> trap - USR1 # unregister signal handler (see below)
> unset -f handler writer # undefine the functions
> }
$
$ exec 4<>pipe_name 3<pipe_name 4>&-
$ trap handler USR1 # register handler for signal USR1
$
$ writer() {
> if <condition>; then
> kill -USR1 $PPID # send the signal USR1 to a specified process
> echo 'Hello pipe!' > pipe_name
> fi
> }
$ export -f writer # pass the function to child shells
$
$ bash -c writer & # can actually be run sequentially as well
$
Hello pipe!
The FD allows the data transfer to start before the shell is ready to receive it; this is required when the pipe is used sequentially.
The signal should be sent before the data, to prevent a deadlock if the pipe buffer fills up.
Destroying a named pipe
The pipe itself (and its content) gets destroyed when all descriptors to it are closed. What's left is just a name.
To make the pipe anonymous and unavailable under the given name (this can be done while the pipe is still open) you can use the rm console command (it's the opposite of the mkfifo command):
$ rm pipe_name
Terminal 1:
$ mknod new_named_pipe p
$ echo 123 > new_named_pipe
Terminal 1 created a named pipe.
It wrote data into it using echo.
The echo blocks, because there is no reading end yet (a pipe, named or unnamed, needs both a reading and a writing end).
Terminal 2:
$ cat new_named_pipe
123
$
From Terminal 2, a reading end for the data is added.
It reads the data using cat.
Since new_named_pipe now has both a reading and a writing end, the data is displayed and the blocked echo in Terminal 1 returns.
Named pipes are used in many places on Linux. (Note, though, that the character and block special files you see with ls -l under /dev are device files, not pipes; a FIFO is listed with file type p.)
FIFOs can be opened in blocking or non-blocking mode, and their main advantage is that they provide one of the simplest forms of IPC, as the final sketch below shows.
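As a final sketch of the non-blocking side (np is a throwaway name): opening a FIFO read-write avoids the usual blocking open, and read -t puts a timeout on reads so an empty pipe can't hang the shell:
$ mkfifo np
$ exec 3<>np                                # a read-write open never blocks on a FIFO
$ echo hello >&3
$ read -t 1 line <&3 && echo "got: $line"   # times out instead of hanging when empty
got: hello
$ exec 3>&-
$ rm np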
