Using FIFOs and links to control a remote process from a webpage (Linux)

I'm using mpg123 on a Linux server and I'm trying to control it through a webpage (Apache), using links to send different controls.
mpg123 has the -C flag so you can send characters to it: the "s" key = start/stop, the "f" key = next track... and so on.
I think I can send mpg123 those keys through a FIFO, by redirecting mpg123's stdin to the FIFO's output, but I don't know how to use the FIFO in this case (how to make mpg123 "listen" to another stdin).
What I've done is:
mkfifo myfifo
myfifo | mpg123 -C -@ musiclist.txt
Then I created keys.txt, adding some keys to the file:
echo "f" >> keys.txt
echo "s" >> keys.txt
Then:
cat keys.txt >> myfifo.fifo
This does not work.
Can someone explain how I can use FIFOs to pass messages to a running mpg123?
I've googled it but can't find what I want.
This question seems to shed some light, but I still don't know how to make mpg123 listen to the FIFO's output.
Thank you in advance,

What are you using to pipe myfifo?
I'll assume cat myfifo | mpg123 -C -@ musiclist.txt
cat will exit as soon as it gets EOF on the FIFO (i.e. after the first writer closes it). Try using tail -f myfifo | ... instead to keep the pipe open after EOF.

mkfifo myfifo
tail -f myfifo | mpg123 .. &
# send commands
echo .. > myfifo
sleep 5
echo .. > myfifo
tail is only needed if mpg123 exits when it receives EOF on stdin; otherwise you could simply use: <myfifo mpg123 ...
The & at the end (the control operator that runs the command in the background) is optional if you send the echo commands from another terminal.
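For the question's setup specifically, a filled-in version of the above could look like this (an untested sketch, assuming the -@ playlist flag and the "s"/"f" key bindings described in the question):

mkfifo myfifo
tail -f myfifo | mpg123 -C -@ musiclist.txt &
# "s" = start/stop, "f" = next track (per the question)
echo s > myfifo
sleep 5
echo f > myfifo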

mpg123 has its own --fifo option.
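That would avoid the external tail/FIFO plumbing entirely. A hedged sketch of that route, untested here (check mpg123(1) for your version: --fifo is tied to the remote-control interface, which takes line commands such as LOAD and PAUSE rather than the single keys of -C; the track path is just an illustration):

mpg123 -R --fifo /tmp/mpg123.ctl &      # remote-control mode, reading commands from the FIFO
echo "LOAD /path/to/track.mp3" > /tmp/mpg123.ctl
echo "PAUSE" > /tmp/mpg123.ctl          # PAUSE toggles pause/resume in this interface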

Related

Is there a way to perform a "tail -f" from a URL?

I currently use tail -f to monitor a log file: this way I get an autorefreshing console monitoring a web server.
Now, said web server has been moved to another host and I have no shell privileges on it.
Nevertheless, I have a .txt network path, which in the end is a log file that is constantly updated.
So, I'd like to do something like tail -f, but on that URL.
Would it be possible? In the end, "in Linux everything is a file", so...
You can do auto-refresh with the help of watch combined with wget.
It won't keep history the way tail -f does; instead it refreshes the screen, like top.
An example command that shows the contents of file.txt on the screen and updates the output every five seconds:
watch -n 5 wget -qO- http://fake.link/file.txt
Also, you can output the last n lines instead of the whole file:
watch -n 5 "wget -qO- http://fake.link/file.txt | tail"
If you still need behaviour like "tail -f" (keeping the history), I think you need to write a script that downloads the log file every so often, compares it to the previously downloaded version, and then prints the new lines. It should be quite easy.
I wrote a simple bash script to fetch the URL content every 2 seconds, compare it with the local file output.txt, and append the diff to that same file.
I wanted to stream AWS Amplify logs in my Jenkins pipeline.
while true; do comm -13 --output-delimiter="" <(cat output.txt) <(curl -s "$URL") >> output.txt; sleep 2; done
Don't forget to create the empty output.txt file first:
: > output.txt
View the stream:
tail -f output.txt
original comment : https://stackoverflow.com/a/62347827/2073339
UPDATE:
I found a better solution using wget here:
while true; do wget -c -o /dev/null -O output.txt "$URL"; sleep 2; done
https://superuser.com/a/514078/603774
I've made this small function and added it to the .*rc of my shell. This uses wget -c, so it does not re-download the whole page:
# Poll logs continuously over HTTP
logpoll() {
    FILE=$(mktemp)
    echo "———————— LOGPOLLING TO $FILE ————————"
    tail -f $FILE &
    tail_pid=$!
    bg %1
    stop=0
    trap "stop=1" SIGINT SIGTERM
    while [ $stop -ne 1 ]; do wget -co /dev/null -O $FILE "$1"; sleep 2; done
    echo "——————————— LOGPOLL DONE ————————————"
    kill $tail_pid
    rm $FILE
    trap - SIGINT SIGTERM
}
Explanation:
Create a temporary logfile using mktemp and save its path to $FILE
Make tail -f output the logfile continuously in the background
Make ctrl+c set stop to 1 instead of exiting the function
Loop until stop bit is set, i.e. until the user presses ctrl+c
wget given URL in a loop every two seconds:
-c - "continue getting partially downloaded file", so that wget continues instead of truncating the file and downloading again
-o /dev/null - wget's log messages shall be thrown into the void
-O $FILE - output the contents to the temp logfile we've created
Clean up after yourself: kill the tail -f, delete the temporary logfile, unset the signal handlers.
The proposed solutions periodically download the full file.
To avoid that, I've created a package and published it on NPM. It does a HEAD request (to get the size of the file) and then requests only the last bytes.
Check it out and let me know if you need any help.
https://www.npmjs.com/package/@imdt-os/url-tail
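For reference, the same idea can be sketched in plain shell with curl, assuming the server reports Content-Length and honours Range requests (this is only an illustration, not the package's code; the URL is the fake one from above):

URL="http://fake.link/file.txt"
offset=0
while true; do
    # HEAD request to learn the current size of the remote file
    size=$(curl -sI "$URL" | tr -d '\r' | awk 'tolower($1)=="content-length:" {print $2}')
    if [ -n "$size" ] && [ "$size" -gt "$offset" ]; then
        # fetch only the newly appended bytes, then remember where we stopped
        curl -s -r "$offset-$((size-1))" "$URL"
        offset=$size
    fi
    sleep 2
done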

What is meant by 'output to stdout'

New to bash programming. I am not sure what is meant by 'output to stdout'. Does it mean print out to the command line?
If I have a simple bash script:
#!/bin/bash
wget -q http://192.168.0.1/test -O - | grep -m 1 'Hello'
it outputs a string to the terminal. Does this mean it's 'outputting to stdout' ?
Thanks
Yes, stdout is the terminal (unless it's redirected to a file using the > operator or into the stdin of another process using |)
In your specific example, you're actually piping through grep (via | grep ...) and from there to the terminal.
Every process on a Linux system (and most others) has at least 3 open file descriptors:
stdin (0)
stdout (1)
stderr (2)
By default, each of these file descriptors points to the terminal from which the process was started. Like this:
cat file.txt # all file descriptors are pointing to the terminal where you type the command
However, bash allows you to modify this behaviour using input/output redirection:
cat < file.txt # will use file.txt as stdin
cat file.txt > output.txt # redirects stdout to a file (will not appear on terminal anymore)
cat file.txt 2> /dev/null # redirects stderr to /dev/null (will not appear on terminal anymore)
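A quick way to see that stdout and stderr really are two separate streams (the missing path is made up just to force an error):

ls /etc/passwd /no/such/file > out.txt 2> err.txt
cat out.txt   # the listing that went to stdout
cat err.txt   # the "No such file or directory" error that went to stderr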
The same is happening when you are using the pipe symbol like:
wget -q http://192.168.0.1/test -O - | grep -m 1 'Hello'
What is actually happening is that the stdout of the wget process (the process before the | ) is redirected to the stdin of the grep process. So wget's stdout isn't a terminal anymore while grep's output is the current terminal. If you want to redirect grep's output to a file for example, then use this:
wget -q http://192.168.0.1/test -O - | grep -m 1 'Hello' > output.txt
Unless redirected, standard output is the text terminal which initiated the program.
Here's a wikipedia article: http://en.wikipedia.org/wiki/Standard_streams#Standard_output_.28stdout.29

linux pipe with multiple programs asking for user input

I wonder how to create a pipe
program 1 | ... | program N
where multiple of the programs ask for user input. The problem is that | starts the programs in parallel and thus they start reading from the terminal in parallel.
For such cases it would be useful to have a pipe | that starts program (i+1) only after program i has produced some output.
Edit:
Example:
cat /dev/sda | bzip2 | gpg -c | ssh user@host 'cat > backup'
Here both gpg -c as well as ssh ask for a password.
A workaround for this particular example would be the creation of ssh key pairs, but this is not possible on every system, and I was wondering whether there is a general solution.
Also, gpg allows the passphrase to be passed as a command-line argument, but this is not recommended for security reasons.
You can use this construction:
(read a; echo "$a"; cat) > file
For example:
$ (read a; echo "$a"; echo cat is started > /dev/stderr; cat) > file
1
cat is started
2
3
Here 1, 2 and 3 were entered from the keyboard; "cat is started" was written by echo.
Contents of file after execution of the command:
$ cat file
1
2
3
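To spell out why this delays the downstream command, here is the same construct again, just annotated:

(
  read a       # blocks until the first line of input arrives
  echo "$a"    # re-emit that line so it is not lost
  cat          # then pass the rest of the input through unchanged
) > file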
I am now using:
#!/bin/bash
sudo echo "I am root!"
sudo cat /dev/disk0 | bzip2 | gpg -c | (read -n 1 a; (echo -n "$a"; cat) | ssh user@host 'cat > backup')
The first sudo will prevent the second from asking for the password again. As suggested above, the read postpones the start of ssh. I used -n 1 for read since I don't want to wait for a newline, and -n for echo to suppress the newline.
For one, you can give gpg the password with the --passphrase option.
For ssh, the best solution would be to log in by key. But if you need to do it by password, the expect command will be good. Here's a good example: Use expect in bash script to provide password to SSH command
Expect also allows you to have some input - so if you don't want to hardcode your passwords this might be the way to go.
I've needed something similar a few times before, where the first command in the pipeline requires a password to be entered, and the next command doesn't automatically cater for this (like the way that less does).
Similar to Igor's response, I find the use of read inside a subshell useful:
cmd1 | ( read; cat - | cmd2 )

How to log output in bash and see it in the terminal at the same time?

I have some scripts where I need to see the output and log the result to a file, with the simplest example being:
$ update-client > my.log
I want to be able to see the output of the command while it's running, but also have it logged to the file. I also log stderr, so I would want to be able to log the error stream while seeing it as well.
update-client 2>&1 | tee my.log
2>&1 redirects standard error to standard output, and tee sends its standard input to standard output and the file.
Just use tail to watch the file as it's updated. Background your original process by adding & after the command above, then just use:
$ tail -f my.log
It will update continuously. (Note that it won't tell you when the command has finished, so you can output something to the log to mark that it finished. Ctrl-C exits tail.)
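Putting those two steps together (my.log is just the name from the question; 2>&1 is added because the question wants stderr captured too):

update-client > my.log 2>&1 &   # background the command, sending both streams to the log
tail -f my.log                  # watch it live; Ctrl-C stops tail, not update-client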
You can use the tee command for that:
command | tee /path/to/logfile
The equivalent without writing to the shell would be:
command > /path/to/logfile
If you want to append (>>) and show the output in the shell, use the -a option:
command | tee -a /path/to/logfile
Please note that the pipe will catch stdout only, errors to stderr are not processed by the pipe with tee. If you want to log errors (from stderr), use:
command 2>&1 | tee /path/to/logfile
This means: run command and redirect the stderr stream (2) to stdout (1). That will be passed to the pipe with the tee application.
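If you want both behaviours at once, the two options combine:

command 2>&1 | tee -a /path/to/logfile   # append to the logfile and capture stderr as well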
Learn about this at askubuntu site
Another option is to use block-based output capture from within the script (not sure if that is the correct technical term).
Example
#!/bin/bash
{
echo "I will be sent to screen and file"
ls ~
} 2>&1 | tee -a /tmp/logfile.log
echo "I will be sent to just terminal"
I like to have more control and flexibility - so I prefer this way.

Suppress Notice of Forked Command Being Killed

Let's suppose I have a bash script (foo.sh) that in a very simplified form, looks like the following:
echo "hello"
sleep 100 &
ps ax | grep sleep | grep -v grep | awk '{ print $1 } ' | xargs kill -9
echo "bye"
The third line imitates pkill, which I don't have by default on Mac OS X, but you can think of it as the same as pkill. However, when I run this script, I get the following output:
hello
foo: line 4: 54851 Killed sleep 100
bye
How do I suppress the line in the middle so that all I see is hello and bye?
While disown may have the side effect of silencing the message, this is how you start the process in a way that the message is truly silenced, without having to give up job control of the process.
{ command & } 2>/dev/null
If you still want the command's own stderr (just silencing the shell's message on stderr) you'll need to send the process' stderr to the real stderr:
{ command 2>&3 & } 3>&2 2>/dev/null
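Reading the redirections left to right, the same line annotated:

{ command 2>&3 & } 3>&2 2>/dev/null
# 3>&2        saves the original stderr as fd 3 (applied to the group first)
# 2>/dev/null then silences the group's stderr, which is where the shell prints its job messages
# 2>&3        inside the group points the command's own stderr back at the saved fd 3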
To learn about how redirection works:
From the BashGuide: http://mywiki.wooledge.org/BashGuide/TheBasics/InputAndOutput#Redirection
An illustrated tutorial: http://bash-hackers.org/wiki/doku.php/howto/redirection_tutorial
And some more info: http://bash-hackers.org/wiki/doku.php/syntax/redirection
And by the way: don't use kill -9.
I also feel obligated to comment on your:
ps ax | grep sleep | grep -v grep | awk '{ print $1 } ' | xargs kill -9
This will scorch the eyes of any UNIX/Linux user with a clue. Moreover, every time you parse ps, a fairy dies. Do this:
kill $!
Even tools such as pgrep are essentially broken by design. While they do a better job of matching processes, the fundamental flaws are still there:
Race: By the time you get a PID output, parse it back in, and use it for something else, the PID might already have disappeared or even been replaced by a completely unrelated process.
Responsibility: In the UNIX process model, it is the responsibility of a parent to manage its child, nobody else should. A parent should keep its child's PID if it wants to be able to signal it and only the parent can reliably do so. UNIX kernels have been designed with the assumption that user programs will adhere to this pattern, not violate it.
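Putting the two suggestions together ({ ... & } 2>/dev/null plus kill $!), a minimal rewrite of the script from the question might look like this (a sketch, keeping the original sleep placeholder):

echo "hello"
{ sleep 100 & } 2>/dev/null   # background it with the shell's job messages silenced
pid=$!                        # the parent remembers its own child's PID
kill "$pid"                   # signal it by PID; no ps parsing and no -9
echo "bye"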
How about disown? This mostly works for me on Bash on Linux.
echo "hello"
sleep 100 &
disown
ps ax | grep sleep | grep -v grep | awk '{ print $1 } ' | xargs kill -9
echo "bye"
Edit: Matched the poster's code better.
The message is real. The code killed the grep process as well.
Run ps ax | grep sleep and you should see your grep process on the list.
What I usually do in this case is ps ax | grep sleep | grep -v grep
EDIT: This is an answer to older form of question where author omitted the exclusion of grep for the kill sequence. I hope I still get some rep for answering the first half.
Yet another way to disable job termination messages is to put your command to be backgrounded in a sh -c 'cmd &' construct.
And as already pointed out, there is no need to imitate pkill; you may store the value of $! in another variable instead.
echo "hello"
sleep_pid=`sh -c 'sleep 30 & echo ${!}' | head -1`
#sleep_pid=`sh -c '(exec 1>&-; exec sleep 30) & echo ${!}'`
echo kill $sleep_pid
kill $sleep_pid
echo "bye"
Have you tried deactivating job control? It's a non-interactive shell, so I would guess it's off by default, but it doesn't hurt to try. It's controlled by the -m (monitor) shell option.
