Linux Shell: VLC programming

Is there a way to manipulate VLC with a Linux shell script without the script waiting for VLC to close?
cvlc test.mp3
echo "Now playing!"
sleep 200
cvlc://pause:60
This script blocks on VLC until the file finishes playing, and by then it is evidently too late to pause it.

You need to use the D-Bus interface of VLC.
You can use the MPRIS interface of VLC; it's a standard supported by most players, such as Clementine, Banshee, Songbird and Spotify.
So, let's suppose you want to pause the currently playing song:
dbus-send --print-reply --session --dest=org.mpris.vlc /Player org.freedesktop.MediaPlayer.Pause
To play a song:
dbus-send --print-reply --session --dest=org.mpris.vlc /Player org.freedesktop.MediaPlayer.Play
I generally use qdbusviewer to discover the D-Bus interfaces available to me.
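Applied to the original script, a minimal sketch might look like this (assuming VLC registers on the session bus as org.mpris.vlc, as in the commands above; the sleep lengths just mirror the question):
cvlc test.mp3 &        # start VLC in the background instead of blocking
echo "Now playing!"
sleep 200
# pause the track via the MPRIS D-Bus interface
dbus-send --print-reply --session --dest=org.mpris.vlc /Player org.freedesktop.MediaPlayer.Pause
sleep 60
# resume playback
dbus-send --print-reply --session --dest=org.mpris.vlc /Player org.freedesktop.MediaPlayer.Play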

D-Bus is one way, but D-Bus does not exist on all systems. The more common method would be to use the rc interface:
cvlc -I rc --rc-host localhost:11337 -d
Then one can use netcat to pipe commands into the TCP socket. For example:
vlc -I rc --rc-host localhost:11337 test.mp3 -d &
echo "Now playing!"
sleep 200
echo pause | netcat localhost 11337
EDIT:
After testing a few other interfaces, I have discovered that the oldrc interface accepts UNIX domain sockets, so the following works as well, without needing to play around with firewalls or worry about anyone else on the network messing with your VLC instance.
vlc -I oldrc --rc-unix /var/run/vlc.sock -d
echo "Now Playing!"
sleep 200
echo "pause" | netcat -U /var/run/vlc.sock

It looks like you can redirect commands from standard input or a named pipe (see the sketch below). For more complicated things you could use libvlc.
http://wiki.videolan.org/Uncommon_uses
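A rough sketch of the named-pipe idea (untested; the pipe path /tmp/vlc_cmd is arbitrary). The key point is to keep a write end open so the rc interface does not see EOF after the first command:
mkfifo /tmp/vlc_cmd
cvlc -I rc test.mp3 < /tmp/vlc_cmd &   # rc interface reads commands from stdin
exec 3> /tmp/vlc_cmd                   # hold the pipe open so VLC never sees EOF
echo "Now playing!"
sleep 200
echo pause >&3                         # rc "pause" toggles, so a second one resumes
sleep 60
echo pause >&3
echo quit >&3
exec 3>&-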

Related

How to interact with telnet using empty

I need to replace a very simple expect script that looks like this:
#!/usr/bin/expect
spawn telnet 192.168.1.175
expect {
"assword" {send "lamepassword\r"}
}
interact
With the equivalent bash script using empty, like this:
#!/bin/bash
empty -f -i in -o out telnet 192.168.1.175
empty -w -i out -o in "assword" "lamepassword\n"
After which I need the user to interact with telnet, which I do not know how to do. The closest thing that comes to my mind is binding stdin and stdout with named pipes using something like socat - in. Any suggestions are more than welcome!
I tried cat out & cat /dev/stdin >in; it works, but it adds an extra newline, tab completion does not work, and Ctrl+C terminates cat rather than the running host process. I am trying to persuade socat to act according to those needs.
Using socat for transmitting keyboard input to the telnet process is a good idea. Example:
cat out & socat -u -,raw,echo=0 ./in
To allow Ctrl-C to terminate socat, add escape=3:
cat out & socat -u -,raw,echo=0,escape=3 ./in
But note that this will not terminate the telnet session, since it was started in daemon mode, so you can reconnect to telnet by executing socat again. To end telnet, you could just log out.
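Putting the pieces together, a rough end-to-end sketch (the IP address, password and "assword" pattern are the question's placeholders):
#!/bin/bash
# spawn telnet detached under empty, using the named pipes in/out
empty -f -i in -o out telnet 192.168.1.175
# answer the password prompt automatically
empty -w -i out -o in "assword" "lamepassword\n"
# hand the session to the user: print remote output, forward raw keystrokes;
# escape=3 lets Ctrl-C end socat without killing the detached telnet
cat out & socat -u -,raw,echo=0,escape=3 ./in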

Keeping Named Pipes Running

I am running a Raspberry Pi with Jessie Lite. I have created a named pipe
mkfifo soundpipe
and I send my audio capture to the pipe:
arecord -f S16_LE -c1 -r48000 -t wav -D hw:1,0 > soundpipe
In a new terminal I then consume sound from this pipe
cat soundpipe | lame -m m -f -b 32 - "/home/pi/arecordings/test.mp3"
When I press Ctrl+C in the terminal running the cat command, the arecord process (the one writing to the pipe) stops too. Why does this process stop, and how do I keep it always running?
Thanks.
The writer stops because it loses its reader: once the reading end of the named pipe is closed, the next write fails (arecord receives SIGPIPE) and it exits.
If you need it to run continuously, you'll have to have a program continuously reading the pipe, or use something other than a named pipe.
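One sketch of a workaround is to hold a spare read end of the pipe open so the writer never receives SIGPIPE when the encoder exits; note this is only a sketch, and the capture may still overrun while nothing is draining the pipe:
mkfifo soundpipe
sleep infinity < soundpipe &   # dummy reader: keeps the read end open, never reads
arecord -f S16_LE -c1 -r48000 -t wav -D hw:1,0 > soundpipe &
# encoders can now be started and stopped at will, here or in another terminal:
lame -m m -f -b 32 - "/home/pi/arecordings/test.mp3" < soundpipe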

Pause programmatically video player mpv

I would like to know if there is a way to send a message to a running process on Linux.
For example, is it possible to programmatically pause a video launched with mpv?
To control mpv remotely (e.g. from another terminal session) you can also start it with the option
--input-ipc-server=/tmp/mpvsocket
and control it by issuing commands like this:
echo '{ "command": ["set_property", "pause", true] }' | socat - /tmp/mpvsocket
See man mpv for (many) more details.
Edit: see also mpv --list-properties
Edit 2: The simplest way I've found to toggle pause/play is
{"command": ["cycle", "pause"]}
kill -s STOP $(pidof mpv) and kill -s CONT $(pidof mpv)
or better:
xdotool key --window "$(xdotool search --class mpv)" p
The key "P", is set by default to pause the video.
It's possible to control mpv through IPC. From the manual mpv(1):
--input-ipc-server=<filename>
Enable the IPC support and create the listening socket at the given path.
On Linux and Unix, the given path is a regular filesystem path.
On Windows, named pipes are used, so the path refers to the pipe namespace (\\.\pipe\<name>). If the \\.\pipe\ prefix is missing, mpv will add it automatically before creating the pipe, so --input-ipc-server=/tmp/mpv-socket and --input-ipc-server=\\.\pipe\tmp\mpv-socket are equivalent for IPC on Windows.
See JSON IPC for details.
A couple of examples:
$ echo 'cycle pause' | socat - /tmp/mpv-socket
$ echo 'playlist-prev' | socat - /tmp/mpv-socket
$ echo 'playlist-next' | socat - /tmp/mpv-socket
See mpv(1) to learn more.
See also:
https://alexherbo2.github.io/blog/mpv/command-line-controller/
https://gist.github.com/dwgill/a66769e0edef69c04d3b

How to stop ffmpeg remotely?

I'm running ffmpeg on another machine for screen capture. I'd like to be able to stop it recording remotely. ffmpeg requires that q is pressed to stop encoding, as it has to do some finalization to finish the file cleanly. I know I could kill it with kill/killall; however, this can lead to corrupt videos.
Press [q] to stop encoding
I can't find anything on Google specifically for this, but there is some suggestion that echoing into /proc/<pid>/fd/0 will work.
I've tried this, but it does not stop ffmpeg. The q is, however, shown in the terminal in which ffmpeg is running.
echo -n q > /proc/16837/fd/0
So how can I send a character to another existing process in such a way that it is as if it were typed locally? Or is there another way of remotely stopping ffmpeg cleanly?
Here's a neat trick I discovered when I was faced with this problem: Make an empty file (it doesn't have to be a named pipe or anything), then write 'q' to it when it's time to stop recording.
$ touch stop
$ <./stop ffmpeg -i ... output.ext >/dev/null 2>>Capture.log &
$ # ... wait until it is time to stop recording ...
$ echo 'q' > stop
FFmpeg stops as though it got 'q' from the terminal STDIN.
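Since the recording machine is remote, the stop file also gives you a one-line remote stop. A rough sketch (the host name, file paths and capture arguments are placeholders):
# on the recording machine
touch /tmp/stop
</tmp/stop ffmpeg -f x11grab -i :0.0 output.mkv >/dev/null 2>>Capture.log &
# later, from any machine that can reach it
ssh recorder "echo q > /tmp/stop"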
Newer versions of ffmpeg don't use 'q' anymore, at least on Ubuntu Oneiric; instead they say to press Ctrl+C to stop them. So with a newer version you can simply use killall -INT to send them SIGINT instead of SIGTERM, and they should exit cleanly.
Elaborating on the answer from sashoalm, I have tested both scenarios, and here are the results:
My experiments show that doing
killall --user $USER --ignore-case --signal INT ffmpeg
Produces the following on the console where ffmpeg was running
Exiting normally, received signal 2.
While doing
killall --user $USER --ignore-case --signal SIGTERM ffmpeg
Produces
Exiting normally, received signal 15.
So it looks like ffmpeg is fine with both signals.
System: Debian GNU/Linux 9 (stretch), 2020-02-28
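And since the question was about stopping it remotely, the same thing works over ssh, e.g. (the host name is a placeholder):
ssh capture-box 'killall --user "$USER" --ignore-case --signal INT ffmpeg'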
You can also use expect to automate starting and stopping the program. You would have to start it inside a virtual terminal session such as screen, tmux or byobu, and then start ffmpeg inside of it. This way you can later get the virtual session back and send the "q".
Locally or remotely, start a virtual terminal session, let's say with screen. Name the session with the -S option, like screen -S recvideo. Then you can start ffmpeg as you like. You can, optionally, detach from this session with Ctrl+a, d.
Connect to the machine where ffmpeg is running inside screen (or tmux or whatever), reconnect to the session with screen -d -RR recvideo, and then send the "q".
To do that from inside a script you can then use expect, like:
prompt="> "
expect << EOF
set timeout 20
spawn screen -S recvideo
expect "$prompt"
send -- "ffmpeg xxxxx\r"
set timeout 1
expect eof
EOF
Then, at a later point in the same script, or in another script, you recover it:
expect << EOF
set timeout 30
spawn screen -d -RR recvideo
expect "$prompt"
send -- "q"
expect "$prompt"
send -- "exit\r"
expect eof
EOF
You can also automate the whole ssh session with expect, passing a sequence of commands and "expects" to do what you want.
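For instance, a rough sketch of driving the whole thing over ssh with expect (the host, user and timings are assumptions, and key-based login is presumed):
expect << EOF
set timeout 30
# reattach to the remote screen session; ssh -t provides the tty screen needs
spawn ssh -t user@capture-box screen -d -RR recvideo
sleep 2
send -- "q"        ;# tell ffmpeg to finish the file cleanly
sleep 2
send -- "exit\r"   ;# close the shell inside screen, ending the session
expect eof
EOF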
The question has already been answered for Linux, but it came up when I was looking for the Windows equivalent, so I'm going to add that to the answers:
In PowerShell, you start the process like this:
$((Start-Process ffmpeg -passthru -argument "FFMPEG_ARGS").ID)
This returns the PID of the ffmpeg process, which you can store in a variable or echo; you then send the Windows equivalent of SIGINT (Ctrl+C) using taskkill:
taskkill /pid FFMPEG_PID
I tried Stop-Process (which is what comes up when looking up how to do this on Google), but it actually kills the process. (And yes, taskkill doesn't kill it, it gently asks the process to stop... good naming :D)

SSH: guarding stdout against disconnect

My server deployment script triggers a long-running process through SSH, like so:
ssh host 'install.sh'
Since my internet connection at home is not the best, I can sometimes be disconnected while the install.sh is running. (This is easily simulated by closing the terminal window.) I would really like for the install.sh script to keep running in those cases, so that I don't end up with interrupted apt-get processes and similar nuisances.
The reason why install.sh gets killed seems to be that stdout and stderr are closed when the SSH session is yanked, so writing to them fails. (It's not an issue of SIGHUP, by the way -- using nohup makes no difference.) If I put touch ~/1 && echo this fails && touch ~/2 into install.sh, only ~/1 is created.
So running ssh host 'install.sh &> install.out' solves the problem, but then I lose any "live" progress and error output.
So my question is: What's an easy/idiomatic way to run a process through SSH so that it doesn't crash if SSH dies, but so that I can still see the output as it runs?
Solutions I have tried:
When I run things manually, I use screen for cases like this, but I don't think it will be of much help here because I need to run install.sh automatically from a shell script. Screen seems to be made for interactive use (it complains "Must be connected to a terminal.").
Using install.sh 2>&1 | tee install.out didn't help either (silly of me to think it might).
You can redirect stdout/stderr into install.out and then tail -f it. The following snippet actually works:
touch install.out && # so tail does not bark (race condition)
(install.sh < /dev/null &> install.out &
tail --pid "$!" -F install.out)
But surely there must be a less awkward way to do the same thing?
Try using screen:
screen ./install.sh
If your ssh session gets interrupted, you can simply reattach to the session via another ssh connection:
screen -x
You can provide a terminal to your ssh session using the -t switch:
ssh -t server screen ./install.sh
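If you also want the live output captured on the server so you can review it after reconnecting, one variant (the session name is arbitrary) is screen's logging mode:
ssh -t server screen -L -S deploy ./install.sh   # -L logs output to screenlog.0
ssh -t server screen -x deploy                   # reattach after a disconnect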
install.sh 2>&1 | tee install.out
if the only issue is not getting stderr. You didn't say exactly why the tee wasn't acceptable. You may need the other nohup/stdin tweaks.
