How to access mplayer output/know when mplayer video stopped playing? - linux

I'm running a bash script that will play a video with mplayer depending on an input from an Arduino (on/off).
When the movie ends, I need to write a timestamp to a txt file. My first question is whether there's a command in mplayer slave mode that can tell me that, so I can output a timestamp easily.
If not, here's my strategy so far:
I'm running mplayer in slave mode with a fifo, where I echo "pause" whenever I want it to stop.
So I've been doing this: echo "get_time_pos" to my fifo, which tells mplayer to show the current position in the movie, in seconds, in my Terminal. By "in my Terminal" I mean the same window where I'm running my script.
Now, I need to store this value in a variable to be able to compare with the total length and then output time.
I'm stuck at getting this output into a variable in my bash script.

I recently put together a small bash library that may grow with time. At the moment, it has the functionality you're looking for. I'll explain how to get the info you seek and then point you to my library which simplifies the task.
To get the information you seek, you don't even need to call get_time_pos. You can simply dump the mplayer output (not running in quiet mode) to a file and search it for the last timestamp. The trick is that the timestamps in the dump are not straightforward to search, because mplayer redraws its status line in place using carriage returns instead of printing one line per update. You have to replace those carriage returns with newlines so the file can be searched easily, and then grab the last two lines in case the very last line is not a timestamp.
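A minimal sketch of that approach in bash (the fifo path, log path, and movie name are placeholders, and it assumes the usual "A: ... V: ..." status line of a video with audio):
#!/bin/bash
# Rough sketch of the "dump and search" approach described above.
FIFO=/tmp/mplayer-fifo
LOG=/tmp/mplayer-output.log

[ -p "$FIFO" ] || mkfifo "$FIFO"
mplayer -slave -input file="$FIFO" movie.mp4 > "$LOG" 2>&1 &

# ... send slave commands as before, e.g.: echo "pause" > "$FIFO" ...

# mplayer redraws its status line with carriage returns, so convert them to
# newlines and pull the seconds value out of the last "V:" field reported.
pos=$(tr '\r' '\n' < "$LOG" | sed -n 's/.* V:[ ]*\([0-9.]*\).*/\1/p' | tail -n 1)
echo "$(date '+%Y-%m-%d %H:%M:%S') stopped at ${pos}s" >> timestamps.txt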
Using my bash library:
Now, if you would like to simplify this process, check out this little library I wrote. Follow the usage directions on my GitHub to incorporate it, and then when you play a media file, play it with the playMediaFile function. If you do this, you'll be able to call the getElapsedSeconds or getElapsedTimestamp function to retrieve the current playback position or the playback position after mplayer has stopped. Storing it to a variable from within bash would be as simple as:
pos=$(getElapsedSeconds)
or
pos=$(getElapsedTimestamp)
This library contains other functions as well. The isFinishedPlaying function may or may not also be of use to you.

Related

Printer Queue in MacOS

I have a LiveCode standalone app that needs to know whether there is a job waiting in the MacOS print queue before printing. If user 1 prints the two-page report and only one page comes out (out of paper), and then user 2 comes along and prints the report, the first page out is user 1's report, which is causing mix-ups. I would like to check the MacOS print queue and prevent printing if there is a job already waiting.
It's not something I've ever needed to do, but I suspect that this capability is not included in LiveCode natively. Instead, your best bet will probably be to use LiveCode's shell() function to run a Unix terminal command. For instance, lpstat is a command-line utility that lets you query various things about the printers connected to your Mac. The following command, run in the MacOS Terminal, shows which printers are available and their current status.
lpstat -p
In LiveCode you use the shell() function to call this command line utility, like so:
put shell("lpstat -p") into tPrinterStatus
To find out more about lpstat, open the Terminal and look up the man page:
man lpstat
Lots of options for that utility will appear. There should be one that gives you the information you need.
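For example (my assumption about which option fits best here, so do check the man page), lpstat -o lists the jobs currently queued, and an empty result suggests nothing is waiting:
lpstat -o
Mirroring the earlier call in LiveCode, you could do something like put shell("lpstat -o") into tQueuedJobs and test whether tQueuedJobs is empty before allowing the print (the variable name is just an example).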

How to call external "interactive/TUI" command, interact, and read std output

I am trying to write my first vim script, so I apologize if this question boils down to not understanding the basics.
The main goal is that I want to call out to an external command from inside vim and read the results back into the file.
I know how to do this with simple shell commands, e.g. :r !ls. However, the command I want to interact with is "interactive".
I don't know if that is a meaningful description, but calling this command in the shell opens a TUI; after interacting with the TUI, the command exits and puts things onto standard output. I want to read that standard output back into vim.
Possibly it will help to discuss the specific command, which is papis, a CLI citation manager. If you call, e.g., papis list --format '{doc[title]} {doc[author]}' in the shell, it opens a TUI that lets me filter down and select a document. After selecting the document, it puts the title and author onto standard output. This is what I want to read into vim.
However, my first few attempts have not been successful. Trying the naive :r !papis list results in an error, even though that command is valid in the shell and would result in the TUI being opened. So I'm obviously missing something.
Can anyone recommend a guide or suggest a possible solution for correctly calling out to TUI-based external commands and reading back their standard output?

How to stream log file content from constantly changing file names in Perl?

I have a series of applications on Linux systems whose logs I need to constantly 'stream' out, or even just 'tail' out, but the challenge is that the filenames are constantly rolling and changing.
They are all date-encoded (the dates being in different formats), and each then has a different increment scheme.
Most of them start at one and count up, but one has no extension on its first file and then adds an extension for the files after it, and another increments a number until it hits 99, then increments a letter, resets the number to 01, and counts up again, since it rolls so quickly.
I only have OS-level shell scripting, OS command-line utilities, and Perl available to handle this situation, so that another application can pick up and read these logs.
The new files are always created right when writing to the new file starts, and groups of different logs (some I am reading, some I am not) are written to the same directory, so I cannot just pick up anything that hits the directory.
If I simply 'tail -n 1000000 -f |' them today, this works fine for the reader application I am using until the file changes. I cannot set up file list ranges within the reader application, but I can pre-process the logs so they appear to the reader as a continuous stream rather than having the reader invoke commands to read them directly. A simple Perl log reader like this also works fine for a static filename but not for dynamic ones. It is critical that I don't re-process any log lines and only capture new lines being written to the logs.
I admit I am not any kind of Perl guru, and the best clue I've been able to find so far is the use of Perl's glob function to possibly do this, but the examples I've found basically reprocess all of the files on each run and then seem to stop.
Example file names I am dealing with across the multiple apps I am trying to handle:
appA_YYMMDD.log
appA_YYMMDD_0001.log
appA_YYMMDD_0002.log
WS01APPB_YYMMDD.log
WS02APPB_YYMMDD.log
WS03AppB_YYMMDD.log
APPCMMDD_A01.log
APPCMMDD_B01.log
YYYYMMDD_001_APPD.log
As noted, the files do not have the same inode, and simply monitoring the directory for any change is not workable because a lot of other things are written there. On the dev system, more than 50 logs are being written to the directory, along with thousands of files, and I am only trying to retrieve 5 of them. I am seeing whether multitail can be made available to try that suggestion, but it is not currently installed, and installing any additional RPMs in this environment is generally a multi-month battle.
ls -i
24792 APPA_180901.log
24805 APPA__180902.log
17011 APPA__180903.log
17072 APPA__180904.log
24644 APPA__180905.log
17081 APPA__180906.log
17115 APPA__180907.log
So really, the root of what I am trying to do is get a continuous stream regardless of file name changes, without having to run the extract command repeatedly and without big breaks in the data feed while some script figures out that the file being logged to has changed. I don't need to parse the contents (my other app does that). Is there an easy way of handling these changing file names?
How about monitoring the log directory for changes with Linux inotify, e.g. Linux::Inotify2? Then you could detect when new log files are created, stop reading from the old log file, and start reading from the new one.
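A minimal sketch of that idea, assuming the Linux::Inotify2 module is available; the directory and filename pattern below are placeholders for your real ones:
#!/usr/bin/perl
# Rough sketch: watch a log directory with inotify and switch to each newly
# created file that matches the apps we care about.
use strict;
use warnings;
use Linux::Inotify2;

my $dir     = '/var/log/myapps';           # placeholder directory
my $pattern = qr/^appA_\d{6}.*\.log$/;     # placeholder filename pattern

my $inotify = Linux::Inotify2->new
    or die "unable to create inotify object: $!";

$inotify->watch($dir, IN_CREATE | IN_MOVED_TO, sub {
    my $event = shift;
    return unless $event->name =~ $pattern;
    # A real version would stop tailing the previous file and start
    # following the new one here; printing the switch is enough for a sketch.
    print "switching to new log file: ", $event->fullname, "\n";
});

1 while $inotify->poll;   # block, dispatching events as they arrive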
Try tailswitch. I created this script to tail log files that are rotated daily and have YYYY-MM-DD in their names. To use this script, you just say:
% tailswitch '*.log'
The quoting prevents the shell from interpreting the glob pattern. The script evaluates the glob pattern from time to time and switches to a newer file based on its name.

stdio/piping issues when using vim in child process in node.js

I am using node.js to write a command line interface that generates unit test files. I have been using inquirer to get user input; however, there is one field where the user will very likely want to copy-paste and/or edit large multi-line chunks of JSON data. Therefore, my goal is to:
open vim at a certain point in the CLI -> allow input -> close vim -> write out to a tmp file -> process the result.
The problem is that input to vim is also going to the parent stdin, and when the return key is hit, the program continues on top of vim (mayhem). I'm fairly certain that stdio/in/out/err are not set up properly, but I can't seem to find the exact solution anywhere. Every iteration of my manipulating the streams seems closer, but I know that there is a small missing link.
I have tried a lot of things along the lines of:
var vim = child_process.spawn('vim', [path], {stdin: 'pipe', stdout: 'pipe', stderr: 'pipe'});
var vim = child_process.spawn('vim', [path], {stdio: 'inherit'}); //{stdio: ['pipe','pipe','pipe']}
Finally, I have followed a lot of the stdio manipulation from this example, How do I open a terminal application from node.js?, but there still remains some small missing link that I need help with.
Notes:
I am 99% certain that my async promises are in order.
It doesn't necessarily have to be vim, as I am checking the ENV for an editor first.
I liken this to git commit, where an editor pops up and allows input before closing.
In a small test program I can get perfect functionality, but when trying to do this over another process, it doesn't go well.
tl;dr: I want the parent process to stay out of the way while input goes only to vim (the child process), but I cannot keep them separated, and because of this the program goes haywire.
If there is anything I can clarify, please let me know.
Thanks!
I know this is old, but six days ago, with the 2015-02-06 Version 0.12.0 (Stable) release, they added a function that makes this very easy.
var spawnSync = require("child_process").spawnSync;
spawnSync("vim", [__filename], { stdio: "inherit" });
It will of course block the event loop, but in such a situation you likely want to wait for the user. Otherwise, you may end up with node writing to stdout and reading stdin while you are editing, which is obviously very confusing, and is the issue you were running into.
If you really still need asynchronous stuff while you are editing, it's probably easier to require("child_process").fork so you don't confuse the stdin/stdout. I imagine you could do some fancy stuff to remove all the listeners and add them back later, but it's probably not worth the effort.
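To tie this back to the original goal (open an editor, let the user edit, then read a tmp file and process it), here is a minimal sketch along those lines; the temp-file name and seed content are just placeholders:
// Rough sketch: open $EDITOR (falling back to vim) on a temp file, block
// until the editor exits, then read the edited JSON back in.
var fs = require("fs");
var os = require("os");
var path = require("path");
var spawnSync = require("child_process").spawnSync;

var tmpFile = path.join(os.tmpdir(), "unit-test-input.json");
fs.writeFileSync(tmpFile, "{\n\n}\n");              // optional seed content

var editor = process.env.EDITOR || "vim";
spawnSync(editor, [tmpFile], { stdio: "inherit" }); // the editor owns the terminal

var result = fs.readFileSync(tmpFile, "utf8");      // process the edited JSON here
console.log("got %d bytes of JSON", result.length);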

Getting linux terminal value from my application

I am developing a Qt application on Linux. I wanted to pass Linux commands to a terminal. That worked, but now I also want to get a response from the terminal for this specific command.
For example,
ls -a
As you know this command lists the directories and files of the current working directory. I now want to pass the returned values from the ls call to my application. What is a correct way to do this?
QProcess is the Qt class that will let you spawn a process and read the result. There's an example of usage on that page for reading the result of a command.
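A minimal sketch of that usage (my own example rather than the one from the Qt documentation), running the ls -a from the question and capturing its output:
// Minimal sketch: run "ls -a" via QProcess and read what it printed.
// Error handling and asynchronous signal/slot usage are left out for brevity.
#include <QProcess>
#include <QString>
#include <QStringList>

QString listCurrentDirectory()
{
    QProcess process;
    process.start("ls", QStringList() << "-a");   // program + arguments
    process.waitForFinished();                    // block until the command exits
    return QString::fromLocal8Bit(process.readAllStandardOutput());
}
The returned string then holds the same text you would see in the terminal, ready to be split on newlines or shown in the UI.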
popen(), an API of the Linux C library, returns a FILE * that you can read like a file; it may help you, perhaps.
Parsing ls(1) output is dangerous -- make a few files with funny names in a directory and test it out:
touch "one file"
touch "`printf "\x0a\x0a\x0ahello\x0a world"`"
That creates two files in the current working directory. I expect your attempts to parse ls(1) output won't work. This might be alright if you're showing the results to a human (though a human will be immensely confused if a filename includes output that looks just like ls(1) output!), but if you're trying to present something like an explorer.exe or Finder.app representation of files in the filesystem, this is horribly broken.
Instead, use opendir(3), readdir(3), and closedir(3) to read directory entries yourself. This will be safer, more portable, and (as a side benefit) slightly better performing.
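A bare-bones sketch of reading a directory with the POSIX calls named above (plain C, which also compiles as C++; in a Qt application, QDir offers a similar, more idiomatic wrapper):
#include <dirent.h>
#include <stdio.h>

/* Bare-bones sketch: list the entries of a directory with opendir/readdir/
 * closedir. "." is a placeholder for whatever directory you need. */
int main(void)
{
    DIR *dir = opendir(".");
    if (dir == NULL) {
        perror("opendir");
        return 1;
    }

    struct dirent *entry;
    while ((entry = readdir(dir)) != NULL) {
        printf("%s\n", entry->d_name);   /* every entry, including "." and ".." */
    }

    closedir(dir);
    return 0;
}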
