Read stdin to run a script on a certain keystroke without interfering with programs/games - linux

My goal is to run a script every time I push the R key on my keyboard:
#!/bin/bash
tail -50 .local/share/binding\ of\ isaac\ afterbirth+/log.txt | grep "on seed" | awk '{print $(NF-1) $NF}' > /home/fawk/seed
I already tried to do this with a program called xbindkeys but it would interfere and pause the game.
Then I tried another bash script that uses the read command to execute the first script, but my knowledge isn't great so far; hours of reading and experimenting didn't lead anywhere.
The purpose is to feed a game seed to OBS. The easiest way seems to be creating a text file containing the seed for OBS to pick up. All I want is for that seed (from the first script) to be written to a file.
I tried
#!/bin/bash
while :
do
read -r 1 k <&1
if [[ $k = r ]] ; then
exec /home/fawk/seedobs.sh
fi
done
and many other variations but didn't get closer to a solution.
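A minimal sketch of the kind of loop the question is aiming for (an assumption: it runs in its own terminal window, since a plain read only sees keys typed into that terminal, never keys delivered to the game window):
#!/bin/bash
# -n1 reads a single character, -s keeps the keypress off the screen;
# the attempt above wrote "read -r 1 k", but the character count must be given with -n
while true; do
    read -rsn1 k
    if [[ $k == r ]]; then
        /home/fawk/seedobs.sh   # not exec: exec would replace this loop after the first keypress
    fi
done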

Related

Multithreading 70 similar commands and receiving output from each

I need to parse 70 identically formatted files (different data), repeatedly, to process some information on demand from each file. For example (simplified)...
find /dir -name "MYDATA.bam" | while read filename; do
dir=$(echo ${filename} | awk -F"/" '{ print $(NF-1)}')
ARRAY[$dir]=$(samtools view ${filename} | head -1)
done
Since it's 70 files, I wanted each samtools view command to run as an independent thread...so I didn't have to wait for each command to finish (each command takes around 1 second.) Something like...
# $filename will = "/dir/DATA<id#>/MYDATA.bam"
# $dir then = "DATA<id#>" as the ARRAY key.
find /dir -name "MYDATA.bam" | while read filename; do
dir=$(echo ${filename} | awk -F"/" '{ print $(NF-1)}')
command="$(samtools view ${filename} | head -1)
ARRAY[$dir]=$command &
done
wait # To get the array loaded
(... do stuff with $ARRAY...)
But I can't seem to find the syntax to run all the commands in the background while still having "result" receive the (correct) output.
I'd be running this on a slurm cluster, so I WOULD actually have 70 cores available to run each command independently (theoretically making that step take 1-2 seconds concurrently, instead of 70 seconds consecutively).
You can do this simply with GNU Parallel like this:
#!/bin/bash
doit() {
    dir=$(echo "$1" | awk -F"/" '{print $(NF-1)}')
    result=$(samtools view "$1" | head -1)
    echo "$dir:$result"
}
# export doit() function for subshells of "parallel" to use
export -f doit
# find the files and pass, null-terminated, to GNU Parallel
find somewhere -name "MYDATA.bam" -print0 | parallel -0 doit {}
It will run one copy of samtools per CPU core you have available, but you can easily change that with parallel -j 8 if you just want 8 at a time, for example.
If you want the outputs in order, use parallel -k ...
I am not familiar with slurm clusters, so you may have to read up on how to tell GNU Parallel about your nodes, or let it just run 8 at a time or however many cores your main node has.
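To load GNU Parallel's "dir:result" lines back into an associative array like the one in the question, something along these lines should work (a sketch, assuming bash 4+ for declare -A and splitting on the first colon):
declare -A ARRAY
while IFS=: read -r dir result; do
    ARRAY[$dir]=$result       # everything after the first colon ends up in result
done < <(find somewhere -name "MYDATA.bam" -print0 | parallel -0 doit {})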
Capturing the output of a process blocks the shell, even when the process is spawned in the background. Here is a small example:
echo "starting to sleep in the background"
sleep 2 &
echo "some printing in the foreground"
wait
echo "done sleeping"
This will produce the following output:
starting to sleep in the background
some printing in the foreground
<2 second wait>
done sleeping
If, however, you capture the output like this:
echo "starting to sleep in the background"
output=$(sleep 2 &)
echo "some printing in the foreground"
wait
echo "done sleeping"
The following happens:
starting to sleep in the background
<2 second wait>
some printing in the foreground
done sleeping
The actual waiting happened on the assignment of the output. By the time the wait statement is reached there is no more background process and thus no waiting.
So one way would be to pipe the output into files and stitch them back together
after the wait. This is a bit awkward.
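A rough, untested sketch of that approach, feeding find through process substitution so that wait in the current shell really does wait for the background jobs:
tmpdir=$(mktemp -d)
while read -r filename; do
    dir=$(awk -F"/" '{print $(NF-1)}' <<< "$filename")
    samtools view "$filename" | head -1 > "$tmpdir/$dir" &
done < <(find /dir -name "MYDATA.bam")
wait   # the jobs belong to this shell, so this blocks until all of them finish
declare -A ARRAY
for f in "$tmpdir"/*; do
    ARRAY[$(basename "$f")]=$(<"$f")   # stitch each per-file result back into the array
done
rm -rf "$tmpdir"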
A simpler solution might be to use GNU Parallel, a tool that deals with
collecting the output of parallel processes. It works particularly well when the output is line based.
You should be able to do this with just Bash. This snippet shows how to run each command in the background and write its results to stdout; the outer while loop then reads those results back in and adds them to your array. You'll probably have to tweak this to make it work.
declare -A ARRAY   # must be declared associative to use directory names as keys

while read -r dir && read -r data; do
    ARRAY[$dir]="$data"
done < <(
    # sub shell level one
    find /dir -name "MYDATA.bam" | while read -r filename; do
        (
            # sub shell level two
            # run each task in parallel, output will be in the following format
            # "directory"
            # "result"
            # ...
            dir=$(awk -F"/" '{ print $(NF-1)}' <<< "$filename")
            printf "%s\n%s\n" \
                "$dir" "$(samtools view "$filename" | head -1)"
        ) &
    done
)
The key is that ( command; command ) & runs those commands in a new subshell in the background, so the top-level shell can continue on to the next task.
The < <(command) construct redirects the stdout of a subprocess to the stdin of another shell command. This is how we can read the results into our array and have them available later.
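As a minimal, self-contained illustration of those two pieces (an assumed example, not taken from the answer above):
# two backgrounded subshells write into the process substitution;
# the while loop in the current shell reads each line as it arrives
while read -r line; do
    echo "received: $line"
done < <( ( sleep 1; echo "slow" ) & ( echo "fast" ) & wait )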

Referencing stdout in a command that has been piped into

I want to make a simple dmenu command that reads a file of names and commands, displays the names using dmenu, then takes dmenu's output and uses the file again to look up and run the associated command.
I got to the point where dmenu displays the names, but I don't really know where to go from there. Learning bash is a really daunting task to me and I don't really know where to start with this seemingly simple script/command.
here is the file:
Pushbullet
google-chrome-stable --app=https://www.pushbullet.com
Steam
steam
Chrome
google-chrome-stable
Libre Office
libreoffice
Transmission
transmission-qt
Audio Control Panel
sudo pavucontrol & bluberry
and here is what I have so far for my command:
awk 'NR % 2 != 0' /home/rocco/programlist | dmenu | ??(grep -l "stdout" /home/rocco/programlist....)
My thinking was that I could somehow pipe the name of the application into grep or awk, get its line number, add one, and pipe that line into sh.
Thanks
I have no experience with dmenu, but if I understand how it works correctly, this should do what you want. Wrapping a command in $(…) captures its output, which we can store in a variable and pass on to another command.
#!/bin/bash
plist="/home/rocco/programlist"
# pipe every second line to dmenu
selected=$(awk 'NR % 2 != 0' "$plist" | dmenu)
# search for the selected item, get the command after it
cmd=$(grep -A1 "$selected" "$plist" | tail -n 1)
# run the command
$cmd
Worth mentioning a mistake in your question: dmenu writes to stdout (standard output), but the next program in the pipeline reads stdin (standard input). In any case, grep can't take its pattern on standard input, which is why I've saved it to a variable instead of trying to pipe it somewhere.
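One caveat with the grep -A1 lookup: it matches substrings anywhere in the file, so a name that also occurs inside one of the command lines could pick the wrong entry. If that ever matters, an exact whole-line lookup is possible with awk (a sketch, not part of the original answer):
# print the line that follows an exact match of the selected name
cmd=$(awk -v sel="$selected" 'hit { print; exit } $0 == sel { hit = 1 }' "$plist")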
Assuming you have programlist.txt in the working directory, you can use:
awk 'NR%2 !=0' programlist.txt |dmenu |awk '{system("grep --no-group-separator -A 1 '"'"'"$0"'"'"' programlist.txt");}' |awk '{if(NR==2){system($0);}}'
Note the quoting of the $0 in the awk invocation that calls grep. This is necessary to handle names with spaces in them, like "Libre Office".

Bash while read loop: output never changes; being cached?

***FINAL ANSWER:
dbus-monitor --profile "interface='net.sacredchao.QuodLibet',member='SongStarted'" |
while read -r line; do
sleep 1
echo -e "$(grep '~filename=' $HOME/.quodlibet/current | sed 's/~filename=//')\n$(python2 -c "import sys, urllib as ul; print ul.unquote_plus(sys.argv[1])" "$(quodlibet --print-queue | sed 's|file://||g')")" > $HOME/Dropbox/Playlists/queue
done
There were issues with race conditions and with more than one write clobbering the file, and I also replaced the echo -e escape trick with a Python command, since quodlibet is made with Python and the text was probably mangled into the form I didn't want by Python in the first place. It all works now. I've finished giving quodlibet functionality that most music players already have: a queue that actually survives the program crashing and doesn't require constant tinkering and babysitting just to play each song exactly once. sigh
------------------------Original Question:
I'm using Arch Linux, and I've made a script that's supposed to save quodlibet's queue every time a song finishes/starts playing. Everything works EXCEPT for one thing (so you don't have to check all those sed commands for errors or anything). If I run all this stuff manually in a terminal, without any while loop or DBus--just manually writing to the files every time I notice a song ending--it works fine. Once I put it in the while loop, it still writes to the file every time a song starts, but it writes the exact same file every single time until I restart the script. I think it's caching the outputs of these commands and not ever updating that cache.
So, let's spell this out: quodlibet has its currently playing song and the queue of what to play next. Here's the script:
while read -r line; do
grep '~filename=' $HOME/.quodlibet/current \
| sed 's/~filename=//' > $HOME/Dropbox/Playlists/queue
echo -e "$(quodlibet --print-queue | sed 's|%|\\\x|g' | sed 's|file://||g')" \
>> $HOME/Dropbox/Playlists/queue
done <<< "$(dbus-monitor --profile "interface='net.sacredchao.QuodLibet',member='SongStarted'")"
Every time a song changes, what's supposed to happen is those two commands write the now-playing song and the queue to a file. What happens instead is every time a song changes, those two commands write what WAS the now-playing song and queue the FIRST time that while loop wrote to the file.
So let's say the now-playing is track 1 and tracks 2 thru 10 are in the queue. The script writes that to the file.
Then, track 1 finishes. Now, now-playing is track 2 and the queue is tracks 3 thru 10. However, even though the loop notices the song change and writes to the file, what it writes instead is track 1 as now-playing and tracks 2 thru 10 as the queue. Again. And on and on, and before you know it, it's playing track 10 and the queue is empty, but the file still has all 10 tracks in it.
I tried running the exact same commands inside the loop manually, myself, in a terminal outside the loop, immediately after a song change, while the loop script was running. The file would reflect what it was supposed to. But then when the song changed again, the loop would catch that and rewrite all 10 tracks, again. In other words, these exact commands only don't do what I want when they're inside the loop. SOMEthing is definitely being cached and never updated, here.
EDIT: It looks like I need to clarify some things.
1) Even though it shouldn't make a difference, my script behaves exactly the same way whether it's written as:
while read -r line; do
grep '~filename=' $HOME/.quodlibet/current \
| sed 's/~filename=//' > $HOME/Dropbox/Playlists/queue
echo -e "$(quodlibet --print-queue | sed 's|%|\\\x|g' | sed 's|file://||g')" \
>> $HOME/Dropbox/Playlists/queue
done <<< "$(dbus-monitor --profile "interface='net.sacredchao.QuodLibet',member='SongStarted'")"
Or:
dbus-monitor --profile "interface='net.sacredchao.QuodLibet',member='SongStarted'" |
while read -r line; do
grep '~filename=' $HOME/.quodlibet/current \
| sed 's/~filename=//' > $HOME/Dropbox/Playlists/queue
echo -e "$(quodlibet --print-queue | sed 's|%|\\\x|g' | sed 's|file://||g')" \
>> $HOME/Dropbox/Playlists/queue
done
2) I tested a simpler version out before bringing the more complex stuff in. Here's a script I made:
while true; do
echo "$(cat /tmp/foo | sed 's/b/n/g')"
done
I tried changing the file /tmp/foo in the middle of that loop running, just as the output of the quodlibet commands changes on its own, and it updated just fine. The output changed like it should.
This wasn't what I started with, but it's the last one I made before moving on to the actual script. You'll see it incorporates everything I'm doing except for 4 things: quodlibet, the dbus command, >saving or >>appending to a file, and the while-read combo. One of those is making the output constantly the same no matter what changes in the environment, and I think we can rule out quodlibet, since, as I said before, running those commands manually works fine.
EDIT 2: Welp, I haven't been scrolling down on the file, but I did just now. This issue just got more complicated but probably easier to solve. It's not writing the exact same output every time at all. It's somehow skipping the line that overwrites the file--the one that starts with grep--and JUST appending the output of the second line, the echo -e.
EDIT 3: And now I'm stumped again. When I copy and paste the exact two lines right out of the while loop and into a bash Terminal, they do what I want. But in the while-loop, the first grep command never actually writes to that file. I thought maybe it was inexplicably eating the first command, so I tried adding an empty echo beforehand:
dbus-monitor --profile "interface='net.sacredchao.QuodLibet',member='SongStarted'" |
while read -r line; do
echo
grep '~filename=' $HOME/.quodlibet/current \
| sed 's/~filename=//' > $HOME/Dropbox/Playlists/queue
echo -e "$(quodlibet --print-queue | sed 's|%|\\\x|g' | sed 's|file://||g')" \
>> $HOME/Dropbox/Playlists/queue
done
but it's still eating that first grep. Why won't it save the file?
EDIT 4: I've confirmed that the grep command is definitely outputting the correct output, but just won't write it to the file.
In an effort to decouple the specific thing you're trying to do from the problem you perceive, I'm going to suggest trying out some independent commands before trying to use anything fancy.
while read -r line ; do
echo $line
done <<< $(yes)
Note that this will never output anything: the command substitution $(yes) never completes, so the loop never even starts. This was pointed out by at least one of the comments to your main post.
Instead of using a capture variable, try piping the output directly to your loop. I can't say this will work for you, but it at least fixes one of the obvious problems of streaming output to a while loop.
yes | while read -r line ; do
echo $line
done
Based on the other debugging efforts here, the problem appears to be a race condition between the signal to dbus about the song transition and the writing of the new "current" song to disk. In an effort to provide a more complete solution (untested of course), here is what I might start with:
function urldecode() {
    python -c "import sys, urllib as ul; print ul.unquote_plus(sys.argv[1])" "$1"
}

dbus-monitor --profile "interface='net.sacredchao.QuodLibet',member='SongStarted'" | \
while read -r line; do
    sleep 1   # Adjust as necessary to prevent race conditions
    CurrentSongFile=$(grep '~filename=' "$HOME/.quodlibet/current" | sed 's/~filename=//')
    SongFileQueue=$(urldecode "$(quodlibet --print-queue)" | sed 's_file://__g')
    printf '%s\n%s\n' "$CurrentSongFile" "$SongFileQueue" > "$HOME/Dropbox/Playlists/queue"
done

Can I avoid using a FIFO file to join the end of a Bash pipeline to be stored in a variable in the current shell?

I have the following functions:
execIn ()
{
    local STORE_INvar="${1}" ; shift
    printf -v "${STORE_INvar}" '%s' "$( eval "$@" ; printf %s x ; )"
    printf -v "${STORE_INvar}" '%s' "${!STORE_INvar%x}"
}
and
getFifo ()
{
    local FIFOfile
    FIFOfile="/tmp/diamondLang-FIFO-$$-${RANDOM}"
    while [ -e "${FIFOfile}" ]
    do
        FIFOfile="/tmp/diamondLang-FIFO-$$-${RANDOM}"
    done
    mkfifo "${FIFOfile}"
    echo "${FIFOfile}"
}
I want to store the output of the end of a pipeline into a variable whose name is given to a function at the end of the pipeline. However, the only way I have found to do this that works in early versions of Bash is to use mkfifo to make a temporary FIFO file. I was hoping to use file descriptors to avoid having to create temporary files. So, this works, but is not ideal:
Set Up: (before I can do this I need to have assigned a FIFO file to a var that can be used by the rest of the process)
$ FIFOfile="$( getFifo )"
The Pipeline I want to persist:
$ printf '\n\n123\n456\n524\n789\n\n\n' | grep 2 # for e.g.
The action: (I can now add) >${FIFOfile} &
$ printf '\n\n123\n456\n524\n789\n\n\n' | grep 2 >${FIFOfile} &
N.B. The need to background it with & - Problem 1: I get [1] <PID_NO> output to the screen.
The actual persist:
$ execIn SOME_VAR cat - <${FIFOfile}
Problem 2: I get more noise to the screen
[1]+ Done printf '\n\n123\n456\n524\n789\n\n\n' | grep 2 > ${FIFOfile}
Problem 3: I lose the blanks at the start of the stream rather than at the end, as I have experienced before.
So, am I doing this the right way? I am sure there must be a way to avoid the need for a FIFO file (and the cleanup afterwards) by using file descriptors, but I cannot seem to do this, as I cannot assign either side of the problem to a file descriptor that is not attached to a file or a FIFO.
I can try and resolve the problems with what I have, although to make this work properly I guess I need to pre-establish a pool of FIFO files that can be pulled in to use or else I have a pre-req of establishing this file before the command. So, for many reasons this is far from ideal. If anyone can advise me of a better way you would make my day/week/month/life :)
Thanks in advance...
Process substitution was available in bash from the ancient days. You absolutely do not have a version so ancient as to be unable to use it. Thus, there's no need to use a FIFO at all:
readToVar() { IFS= read -r -d '' "$1"; }
readToVar targetVar < <(printf '\n\n123\n456\n524\n789\n\n\n')
You'll observe that:
printf '%q\n' "$targetVar"
...correctly preserves the leading newlines as well as the trailing ones.
By contrast, in a use case where you can't afford to lose stdin:
readToVar() { IFS= read -r -d '' "$1" <"$2"; }
readToVar targetVar <(printf '\n\n123\n456\n524\n789\n\n\n')
If you really want to pipe to this command, are willing to require a very modern bash, and don't mind being incompatible with job control:
set +m # disable job control
shopt -s lastpipe # in a pipeline, parent shell becomes right-hand side
readToVar() { IFS= read -r -d '' "$1"; }
printf '\n\n123\n456\n524\n789\n\n\n' | grep 2 | readToVar targetVar
The issues you claim to run into with using a FIFO do not actually exist. Put this in a script, and run it:
#!/bin/bash
trap 'rm -rf "$tempdir"' 0 # cleanup on exit
tempdir=$(mktemp -d -t fifodir.XXXXXX)
mkfifo "$tempdir/fifo"
printf '\n\n123\n456\n524\n789\n\n\n' >"$tempdir/fifo" &
IFS= read -r -d '' content <"$tempdir/fifo"
printf '%q\n' "$content" # print content to console
You'll notice that, when run in a script, there is no "noise" printed to the screen, because all that status is explicitly tied to job control, which is disabled by default in scripts.
You'll also notice that both leading and trailing newlines are correctly represented.
One idea, tell me I am crazy, might be to use the !! notation to grab the line just executed. If there were a command that could terminate a pipeline and stop it actually executing, while the shell still considered it a successful execution (I am thinking of something like the true command), I could then use !! to grab that line and call my existing function to execute it with process substitution or something. I could then wrap this into an alias, something like: alias streamTo=' | true ; LAST_EXEC="!!" ; myNewCommandVariation <<<' which I think could be used something like: $ cmd1 | cmd2 | myNewCommandVariation THE_VAR_NAME_TO_SET, where the <<< from the alias would pass the var name to the command as an arg or stdin; either way, the command would not be at the end of a pipeline. How mad is this idea?
Not a full answer but rather a first point: is there some good reason not to use mktemp for creating a new file with a random name? As far as I can see, your getFifo() function doesn't do much more than that.
mktemp -u
will give you an unused name without creating anything; then you can use mkfifo with this name.
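Putting the two together, a minimal sketch (keeping the name prefix from the question's getFifo):
FIFOfile=$(mktemp -u /tmp/diamondLang-FIFO-XXXXXX)   # generates an unused name, creates nothing
mkfifo "$FIFOfile"                                   # fails loudly if the name got taken in the meantime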

Use I/O redirection between two scripts without waiting for the first to finish

I have two scripts, let's say long.sh and simple.sh: one is very time consuming, the other is very simple. The output of the first script should be used as input of the second one.
As an example, the "long.sh" could be like this:
#!/bin/sh
for line in `cat LONGIFLE.dat` do;
# read line;
# do some complicated processing (time consuming);
echo $line
done;
And the simple one is:
#!/bin/sh
while read a; do
# simple processing;
echo $a + "other stuff"
done;
I want to pipeline the two scripts like this:
sh long.sh | sh simple.sh
Using a pipeline, simple.sh has to wait for the long script to end before it can start.
I would like to know whether, in the bash shell, it is possible to see the output of simple.sh line by line, so I can see at runtime which line is being processed at that moment.
I would prefer not to merge the two scripts together, nor to call the simple.sh inside long.sh.
Thank you very much.
stdout is normally buffered. You want line-buffered. Try
stdbuf -oL sh long.sh | sh simple.sh
Note that this loop
for line in `cat LONGIFLE.dat`; do # see where I put the semi-colon?
reads words from the file. If you only have one word per line, you're OK. Otherwise, to read by lines, use while IFS= read -r line; do ...; done < LONGFILE.dat
Always quote your variables (echo "$line") unless you know specifically when not to.
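Putting both points together, long.sh could be rewritten roughly like this (a sketch based on the advice above, keeping the answer's spelling of the data file):
#!/bin/sh
while IFS= read -r line; do
    # do some complicated processing (time consuming)
    echo "$line"
done < LONGFILE.dat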

Resources