Bash script loses focus when displaying images (Linux)

I am trying to figure out how to keep a shell script from losing focus while simultaneously displaying an image. User input may come at any time, but seeing the photo that was just taken seems important.
To clarify, I have no problem outputting an image: display works fine, as do animate, feh, etc. What I need is for the shell script to keep processing user input (in this example, "t") while displaying the last image taken, for an undefined amount of time.
I'm writing in bash, on Linux.
Here's an example of what I'm trying:
#!/bin/bash
i=0
capture() {
    cd ~/Desktop/ani
    streamer -c /dev/video0 -s 800x600 -o outfile$i.jpeg
    display outfile$i.jpeg &
    let i++
}
while true; do
    #clear
    read -rsn1 input
    if [ "$input" = "t" ]; then
        capture
    else
        exit
    fi
done
In the actual script I may continue to take photos, so I want to continue listening for user input. I can imagine a couple ways to do this, but I cannot figure it out.

To continue listening for user input, you can do something like:
while true; do
    #clear
    read -p "Your input: " input
    if [ "$input" == "t" ]; then
        capture
    fi
done
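Note that read -p blocks until Enter is pressed. If, as in the question, you want a single silent keypress and a loop that keeps cycling even while nothing is typed, read's -t timeout option may help (a sketch, not part of the original answer):
while true; do
    # -t 1 makes read give up after one second, so the loop keeps cycling
    # even when no key is pressed; -rsn1 reads one silent raw keystroke.
    if read -rsn1 -t 1 input && [ "$input" = "t" ]; then
        capture
    fi
done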

A rather ugly way to solve this: install the utility wmctrl (on Debian/Ubuntu, sudo apt-get install wmctrl). Then, after your display command, add:
sleep 1
wmctrl -i -a "$WINDOWID"
This will sleep for one second (to give the display command some time to finish loading; tune this value to whatever feels right to you). Then wmctrl will take the value of the variable WINDOWID (which is hopefully set by your terminal emulator) as a numeric window id (-i), raise that window, and give it focus (-a).
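Put together with the question's capture function, a minimal sketch might look like this (it assumes your terminal emulator sets WINDOWID, which xterm and several others do; the paths and streamer options are copied from the question):
#!/bin/bash
i=0
capture() {
    cd ~/Desktop/ani
    streamer -c /dev/video0 -s 800x600 -o outfile$i.jpeg
    display outfile$i.jpeg &      # the viewer opens and steals focus
    sleep 1                       # give display time to map its window
    wmctrl -i -a "$WINDOWID"      # raise the terminal and refocus it
    let i++
}

while true; do
    read -rsn1 input
    if [ "$input" = "t" ]; then
        capture
    else
        exit
    fi
done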

Related

Make a bash script that refreshes values without scrolling the terminal, like the top command

How can I make a bash script refresh the screen with variable values, like the top command does?
To show the variable value I use:
echo "$var"
But it prints a new line each time; instead, I would like something that refreshes the screen.
How can I achieve this?
The easiest way is to just use watch:
watch date
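The refresh interval defaults to two seconds; -n changes it. For example, to redraw the date and system load every second:
watch -n 1 'date; uptime'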
You can also just call clear before each iteration. Here, all the output from clear and any commands are collected as a form of double buffering to reduce flickering:
show_things() {
    date
    uname -a
    echo "Your lucky number is $RANDOM"
}

while sleep 1
do
    printf '%s\n' "$(clear; show_things)"
done
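If clear still flickers on your terminal, a related trick (a sketch, assuming a terminfo-aware terminal) is to move the cursor home and only clear whatever is left over below the new frame:
while sleep 1
do
    tput cup 0 0    # move the cursor to the top-left corner
    show_things     # redraw over the previous frame
    tput ed         # clear from the cursor to the end of the screen
done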

How can I extend this command to make "notifications" pop up in a new terminal with a message?

I am using this command via a script:
watch "who | egrep -i 'user1|user2|user3'"
I am trying to get this to make a new terminal pop up and say:
"user has logged on"
I would like to run everything in the background with "&", and when one of the users in the command logs in, have this script pop up a new terminal saying that user is logged in.
I want it to happen only when they log in, and again if they log out and log back in.
I understand that if I run the initial command in the foreground I can watch my "custom" list of users log in and off every 2 seconds, BUT I am looking to run it all in the background and have the new terminal pop up naming the specific user who logged in.
I am sorry for repeating myself, but I am trying to be as specific as possible.
watch is good for watching a command's output, but not for acting on it.
I would suggest using a loop which saves the output between iterations and checks for differences. Something like this:
last_output=$(tempfile)
output=$(tempfile)

while true; do
    who | egrep -i 'user1|user2|user3' > "$output"
    # check for newly logged-in users
    new_users=$(diff "$last_output" "$output" | grep '>' | cut -d ' ' -f 2)
    # if there are any, throw a notification
    if [ -n "$new_users" ]; then
        xterm -e "echo -e 'New users logged in:\n$new_users'; read -n 1" &
    fi
    # save the output for the next iteration
    mv "$output" "$last_output"
    sleep 2
done
Here I use xterm to show the notification, but you can use other tools such as libnotify (which provides notify-send). And because xterm exits when the executed command finishes, I added the read -n 1 command, which waits for a keypress; you could use sleep instead to make the notification disappear without user interaction.
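With notify-send, the same branch could look like this (a sketch; it assumes a notification daemon is running in your desktop session):
if [ -n "$new_users" ]; then
    notify-send "New users logged in" "$new_users"
fi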
Edit
To read the list of users to watch from a file (one user per line), you can use something like this:
regex=$(tr '\n' '|' < path/to/file)
regex=${regex%?} # to remove the last '|'
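The who line in the loop then becomes:
who | egrep -i "$regex" > "$output"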

Bash output happening after prompt, not before, meaning I have to manually press enter

I am having a problem getting bash to do exactly what I want. It's not a major issue, but it is annoying.
1.) I have third-party software that I run which produces some output on stderr. Some of it is useful, and some of it is stuff I regularly don't care about and don't want dumped to the screen; however, I do want the useful parts of stderr shown. I figured the best way to achieve this was to pass stderr to a function, then use conditions in that function to either show the stderr or not.
2.) This works fine. However, the solution I implemented dumps my errors at the right time but then returns a bash prompt, and when I summarise the status of the errors at the end of the function, echoing there prints the text after the prompt, meaning I have to press Enter to get back to a clean prompt. It will become clear with the example below.
My error stream generator:
./TestErrorStream.sh
#!/bin/bash
echo "test1" >&2
My function to process this:
./Function.sh
#!/bin/bash
function ProcessErrors()
{
    while read data; do
        echo Line was:"$data"
    done
    sleep 5 # This is used simply to simulate the processing work I'm doing on the errors.
    echo "Completed"
}
I source the Function.sh file to make ProcessErrors() available, then I run:
2> >(ProcessErrors) ./TestErrorStream.sh
I expect (and want) to get:
user@user-desktop:~/path$ 2> >(ProcessErrors) ./TestErrorStream.sh
Line was:test1
Completed
user@user-desktop:~/path$
However what I really get is:
user@user-desktop:~/path$ 2> >(ProcessErrors) ./TestErrorStream.sh
Line was:test1
user@user-desktop:~/path$ Completed
And no clean prompt. Of course the prompt is there, but "Completed" is being printed after the prompt; I want it printed before, with a clean prompt appearing afterwards.
NOTE: This is a minimal working example, and it's contrived. While other solutions to my error-stream problem are welcome, I also want to understand how to make bash run this script the way I want it to.
Thanks for your help
Joey
Your problem is that the while loop stays attached to stdin until the program exits.
Stdin is released at the end of TestErrorStream.sh, so your prompt comes back almost immediately compared to the time the function still needs to finish its processing.
I suggest you wrap the command inside a script so you can control how long to wait before your prompt returns (I suggest one second more than the time the function is expected to need to process its remaining lines).
I managed to do this as follows:
./Functions.sh
#!/bin/bash
function ProcessErrors()
{
    while read data; do
        echo Line was:"$data"
    done
    sleep 5 # simulate the time required to finish the function (after TestErrorStream.sh is over and stdin is released)
    echo "Completed"
}
./TestErrorStream.sh
#!/bin/bash
echo "first"
echo "firsterr" >&2
sleep 20 # any number here
./WrapTestErrorStream.sh
#!/bin/bash
source ./Functions.sh
2> >(ProcessErrors) ./TestErrorStream.sh
sleep 6 # <= this one is important
With the above you'll get a nice "Completed" before your prompt after 26 seconds of processing. (Works fine with or without the additional "time" command)
user@host:~/path$ time ./WrapTestErrorStream.sh
first
Line was:firsterr
Completed
real 0m26.014s
user 0m0.000s
sys 0m0.000s
user@host:~/path$
Note: the process substitution >(ProcessErrors) is a subprocess of the script ./TestErrorStream.sh, so when the script ends, the subprocess is no longer tied to it, nor to the wrapper. That's why we need the final sleep 6.
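An alternative approach avoids the fixed sleep altogether by opening the process substitution on a dedicated file descriptor and polling its PID until it exits: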
#!/bin/bash
function ProcessErrors {
    while read data; do
        echo Line was:"$data"
    done
    sleep 5
    echo "Completed"
}

# Open the subprocess on file descriptor 60
exec 60> >(ProcessErrors)
P=$!  # PID of the process substitution

# Do the work, sending stderr to the subprocess
2>&60 ./TestErrorStream.sh

# Close the connection, or else the subprocess would keep on reading
exec 60>&-

# Wait for the subprocess to exit (wait "$P" doesn't work). There are many
# other ways to do this, like checking /proc. I prefer the kill method as
# it's more explicit; we'd never know if /proc updates itself quickly enough
# on all systems, and using an external tool is also a big NO.
while kill -s 0 "$P" &>/dev/null; do
    sleep 1s
done
Off-topic side note: I'd love to see how posturing bash veterans/authors would try to claim this approach as their own. Or perhaps they already did, way back.

read command is not taking input from the terminal

I don't know if it is weird, but read is not taking input from the terminal.
The configure script, which is used in the source-build process, should ask the user to select the database type, either MySQL or Oracle (code below).
MYSQLLIBPATH="/usr/lib/mysql"
echo "Enter DataBase-Type 1-ORACLE, 2-MySQL (default MySQL):"
read in
echo $? >> /tmp/error.log
if test -z "$in" -o "$in" = "2"
then
    DATABASE=-DDB_MYSQL
    if true; then
        MYSQL_TRUE=
        MYSQL_FALSE='#'
    else
        MYSQL_TRUE='#'
        MYSQL_FALSE=
    fi
    echo "Enter Mysql Library Path: (eg: $MYSQLLIBPATH (default))"
    read in
    echo $? >> /tmp/error.log
    if test -n "$in"
    then
        MYSQLLIBPATH=`echo $in`
    fi
    echo "Mysql Lib path is $MYSQLLIBPATH"
else
    if false; then
        MYSQL_TRUE=
        MYSQL_FALSE='#'
    else
        MYSQL_TRUE='#'
        MYSQL_FALSE=
    fi
    DATABASE=-DDB_ORACLE
    LD_PATH=
fi
But the read command is not asking for user input; it is failing to take input from stdin.
When I checked the exit status of the command in error.log, it showed:
1
1
Could anyone tell me why read is failing to take input from stdin?
Is there any builtin variable that can block read from taking input?
Most likely read executes with standard input redirected from a file that has reached EOF. If the above is not the whole of your configure code, check that there are no input redirections. Could the code above be part of a function that was invoked with input from a pipe or a file? Otherwise, check how configure is executed: are there any redirections?
Otherwise, the universal advice applies: try simplifying and stripping down your code until it is obvious what's happening.
BTW, it is not a good idea to make configure interactive if you want your program packaged for a distribution; it's not easy to control the execution of interactive programs. Consider adding support for supplying parameters through command-line options.
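If the script really must prompt even while stdin is redirected, one common workaround (a sketch, not part of the original answer) is to read from the controlling terminal directly:
echo "Enter DataBase-Type 1-ORACLE, 2-MySQL (default MySQL):"
# /dev/tty is the controlling terminal; this fails if there is none,
# e.g. when the script runs under cron.
read in < /dev/tty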

Why does redirecting output affect the result of a test in bash?

I'm trying to write a script to launch XFCE and XBMC in their own X sessions.
To do this I'm setting the DISPLAY value, running the first one in the background, and waiting until I get a successful return from xset q. Then I change DISPLAY and do the same for the other.
I'm writing this piece by piece to check I've got the syntax right for each part, and the part I'm stuck on is waiting until I get a successful return from xset q:
export DISPLAY=":0.0"
while [[ ! `xset q` ]]
do
    echo -n "."
done
This code seems to work: when XFCE is running it exits immediately, and when it is not, it sits there printing dots interleaved with xset: unable to open display ":0.0".
However, I don't want to see the output of xset, so I'm trying to redirect it.
export DISPLAY=":0.0"
while [[ ! `xset q > /dev/null 2>&1` ]]
do
    echo -n "."
done
Adding this redirection, however, seems to break the detection; regardless of whether XFCE is running or not, it just sits there printing dots.
I've tested the two commands on their own, and in a shell script on their own they both work as I expect, returning 1 when XFCE is not running and 0 when it is.
Can anyone explain why putting that command inside [[ ! `…` ]] breaks the while test, and how I could rewrite this while loop correctly?
(Running on Arch)
The problem is that you're not testing the return code of xset at all; you're processing its output. When you redirect the output to /dev/null, the expression in backticks doesn't return anything, so it's as if you had:
while [[ ! '' ]] ...
which will always run the while body.
What you should be doing is:
while ! xset q > /dev/null 2>&1
do
    ...
done
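Putting it together, the wait loop might look like this (a sketch; the sleep is an addition to keep the loop from spinning at full speed while the X server comes up):
export DISPLAY=":0.0"
until xset q > /dev/null 2>&1
do
    echo -n "."
    sleep 0.5   # avoid busy-waiting
done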
