Any alternatives or ways to make this script faster? - linux

I have this script which displays the terminal type being used. So, for instance, if you were running konsole it would display konsole. The script needs to go into another program that runs when a terminal is opened, so it has to be very fast. Here's what I have so far:
#!/bin/bash
shopt -s extglob
SHELLTTY=$(exec ps -p "$$" -o tty=)
P=$$
while read P < <(exec ps -p "$P" -o ppid=) && [[ $P == +([[:digit:]]) ]]; do
    if read T < <(exec ps -p "$P" -o tty=) && [[ $T != "$SHELLTTY" ]]; then
        ps -p "$P" -o comm=
        break
    fi
done
When the script is saved to a file, this is how long it takes to run:
[~]$ time ./termgrab
konsole
real 0m0.063s
user 0m0.017s
sys 0m0.040s
The whole program itself takes 0.04 seconds, so this slows it down considerably. Does anyone have any suggestions to make the script faster, or any alternative way to achieve the same thing?
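One lower-overhead alternative, sketched here under the assumption of a Linux /proc filesystem (so it will not work on macOS or BSD), is to walk the parent chain by reading /proc directly instead of forking ps at every step; get_stat is a made-up helper, not a standard command:
#!/bin/bash
# Sketch: walk up the parent-process chain via /proc/<pid>/stat and print the
# name of the first ancestor whose controlling terminal differs from this shell's.
get_stat() {                              # sets REPLY_PPID and REPLY_TTY for pid $1
    local data rest
    data=$(< "/proc/$1/stat") || return 1
    rest=${data##*) }                     # strip "pid (comm) ", even if comm contains spaces
    set -- $rest                          # split the remaining stat fields
    REPLY_PPID=$2                         # overall field 4: parent PID
    REPLY_TTY=$5                          # overall field 7: controlling tty number
}
get_stat "$$" || exit 1
shell_tty=$REPLY_TTY
while (( REPLY_PPID > 1 )); do
    pid=$REPLY_PPID
    get_stat "$pid" || break
    if [[ $REPLY_TTY != "$shell_tty" ]]; then
        read -r comm < "/proc/$pid/comm"  # command name, e.g. konsole
        printf '%s\n' "$comm"
        break
    fi
done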

Related

Detect if current script has been changed WITHOUT using subshell? (and start the new script)

Is it possible to make a running script see if it has been changed/updated WITHOUT using a subshell command? And if it has been updated, start the new script and kill the old one.
Previously I used a separate file for this, so when I created the file, the script detected it. But if you're running multiple instances of the script this can get pretty messy:
if [[ -f /mnt/g/update.tt ]]; then script.sh 2 && kill $$ ;fi
This check would be placed inside a loop that takes about 0.8 seconds per iteration, which is why avoiding a subshell is important.
The best and easiest way to auto-update scripts is to compare the script's modification timestamp against a temporary file, which you create at the beginning of the script:
scriptupdate=$(mktemp)
Just make sure you run mktemp only once, and outside any loops; otherwise the script will recreate the control file over and over again, making it newer than the script.
Then you just need to compare the script file with the temporary file to see if the script is newer, and if so restart the script:
if [[ $0 -nt $scriptupdate ]]; then
    exec "$0" "$@"
fi
exec replaces the running script with the new one.
$0 is the full path and name of the current script.
"$@" passes the original arguments along to the new run.
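Put together, a minimal sketch of the whole pattern (the loop body and the 0.8 s sleep here are placeholders, not part of the original script) might look like this:
#!/bin/bash
# Sketch: restart via exec when the script file becomes newer than a reference
# file created once at startup.
scriptupdate=$(mktemp)                    # run once, outside any loop
trap 'rm -f "$scriptupdate"' EXIT
while true; do
    if [[ $0 -nt "$scriptupdate" ]]; then
        rm -f "$scriptupdate"             # EXIT traps do not survive a successful exec
        exec "$0" "$@"
    fi
    # ... the real loop work goes here ...
    sleep 0.8
done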
Perhaps:
age_of_script=$(( $(date +%s) - $(stat -c %Y "$0") ))
running_time=$SECONDS
if (( running_time > age_of_script )); then
    # script has been updated since I started running
    exec "$0" 2
fi
As that other guy commented, use exec to replace the current process.
With bash 4.3+, you may be able to use:
bash_root=${BASH%/bin/bash}
if [[ -d "$bash_root/lib/bash" ]]; then
    enable -f "$bash_root/lib/bash/finfo" finfo
    file_age() { echo $(( $(printf '%(%s)T' -1) - $(finfo -m "$1") )); }
else
    file_age() { echo $(( $(printf '%(%s)T' -1) - $(stat -c %Y "$1") )); }
fi
age_of_script=$(file_age "$0")
This still uses subshells for the command substitutions, but if your bash build has the loadable modules, you may not need any external tools.
For posterity, a quick benchmark of stat vs. the loadable finfo:
$ file_age() { echo $(( $(printf '%(%s)T' -1) - $(stat -c %Y "$1") )); }
$ time for ((i=0; i<1000; i++)); do x=$(file_age /etc/hosts); done
real 0m14.750s
user 0m2.288s
sys 0m4.139s
$ file_age() { echo $(( $(printf '%(%s)T' -1) - $(finfo -m "$1") )); }
$ time for ((i=0; i<1000; i++)); do x=$(file_age /etc/hosts); done
real 0m7.162s
user 0m1.148s
sys 0m2.085s
You can create a reference file, then keep checking if the current script is newer than that reference:
#!/bin/bash
reference=$(mktemp)
thisscript="${BASH_SOURCE[0]}"
trap 'rm "$reference"' EXIT
printf '\nStarting\n'
while true
do
    if [[ "$thisscript" -nt "$reference" ]]
    then
        rm "$reference"
        exec "$thisscript"
    fi
    printf "beep boop "
    read -t 0.8
done

PPSS shell script

I made a simple shell script to process mp3 files with SoX.
for f in ./*.mp3; do sox "$f" "${f%%.mp3}S.mp3" silence 1 0.02 1% -1 0.02 1%; done
The syntax should be like this:
sox in.wav out.wav silence 1 0.1 1% -1 0.1 1%
It will remove silence from the files I have in a folder, and create a new file with an "S" appended to the name (to distinguish it from the original). I saved the script in my /bin folder and it works fine.
However, now I want to use it with PPSS, in order to run 8 instances in parallel. I cannot seem to get it working, though; this is the error I keep getting in the log file:
/usr/local/bin/ppss: line 2283: soxy.sh/Users/marw/Downloads/testfolder//ppss_dir/job_log/_Users_marw_Downloads_testfolder__10_audio_mp3: No such file or directory
Status: FAILURE
Total processing time (hh:mm:ss): 00:00:01
The PPSS syntax should be like this:
|P|P|S|S| Distributed Parallel Processing Shell Script 2.97
usage: /usr/local/bin/ppss [[ -d <sourcedir> | -f <sourcefile> ]] [[ -c '<command> "$ITEM"' ]]
[[ -C <configfile> ]] [[ -j ]] [[ -l <logfile> ]] [[ -p <# jobs> ]]
[[ -q ]] [[ -D <delay> ]] [[ -h ]] [[ --help ]] [[ -r ]] [[ --daemon ]]
Examples:
/usr/local/bin/ppss -d /dir/with/some/files -c 'gzip '
/usr/local/bin/ppss -d /dir/with/some/files -c 'cp "$ITEM" /tmp' -p 2
/usr/local/bin/ppss -f <file> -c 'wget -q -P /destination/directory "$ITEM"' -p 10
I'm new to shell scripting, so forgive me if it's a stupid question. My OS is macOS 10.11.5.
This is what I'm trying with PPSS:
ppss -d /Users/marw/Downloads/testfolder -c 'soxy.sh'
Maybe I have to write my original script differently? It works fine without PPSS, though.
EDIT:
I got a debug log here: http://pastebin.com/wak47rf8
The -c argument has to end with a trailing space. This works:
ppss -d /Users/marw/Downloads/testfolder -c 'soxy.sh '
Whereas this does not work:
ppss -d /Users/marw/Downloads/testfolder -c 'soxy.sh'
I got this from the PPSS wiki on GitHub:
The -c option specifies the command that will be executed by PPSS in
parallel for each file within the directory specified by -d. In this
example the command has a trailing space, which is necessary since the
command will expand to 'gzip example.tar' when executed. If the space
is omitted, an error will occur.
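Note also that PPSS appends each file path to the command (which is exactly why the trailing space matters), so for the parallelism to do any good, soxy.sh should process the single file it is handed rather than looping over every .mp3 in the directory. A rough sketch of such a wrapper, under that assumption:
#!/bin/bash
# Hypothetical soxy.sh for use with PPSS: operate on the one file passed as "$ITEM"
# instead of globbing the whole directory.
f=$1
[[ -f $f ]] || { echo "usage: soxy.sh <file.mp3>" >&2; exit 1; }
sox "$f" "${f%%.mp3}S.mp3" silence 1 0.02 1% -1 0.02 1%
It would then be invoked as, for example: ppss -d /Users/marw/Downloads/testfolder -c 'soxy.sh ' -p 8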

Bash script multiple commands issue

I am currently working on a setup that needs to launch a program automatically whenever it registers that this program is not already running. The program needs superuser rights to launch.
Currently, I have a working Bash script, looking as follows:
#!/bin/bash
while true; do #Continue indefinitely
    if [ $(ps aux | grep '/odroid_detection' | grep -v '<defunct>' -c) -le 4 ]; then #if less than 3 odroid_servers are active (booter opens 3 processes)
        xterm -iconic -e su -c "xterm -iconic -hold /home/odroid/Documents/SUNRISE-Odroid/_odroid_detection/_odroid_detection/bin/Debug/_odroid_detection"
    fi
    sleep 60 #check every minute
done
The program that executes, however, is not working exactly as planned, because it is executed from the root directory instead of the directory it is in. I therefore want to cd to the directory the executable is in (~/Documents/SUNRISE-Odroid/_odroid_detection/_odroid_detection/bin/Debug) but keep the same functionality as above. This is what I came up with:
#!/bin/bash
while true; do #Continue indefinitely
    if [ $(ps aux | grep '/odroid_detection' | grep -v '<defunct>' -c) -le 4 ]; then #if less than 3 odroid_servers are active (booter opens 3 processes)
        xterm -iconic -e "cd ../_odroid_detection/_odroid_detection/bin/Debug/ && su -c "xterm -iconic -hold -e _odroid_detection""
    fi
    sleep 60 #check every minute
done
This does not work, however, and I have tried many alternatives but cannot seem to get it working. It gives the following error in the terminal:
xterm: Can't execvp cd ../_odroid_detection/_odroid_detection/bin/Debug && su -c xterm: No such file or directory
The xterm that gives this error opens in the directory ~/Documents/SUNRISE-Odroid/Bash, and the cd mentioned above does work when I execute it separately, so I do not understand why it cannot find the file or directory.
Any suggestions?
The syntax colouring on Stack Overflow made me realise one mistake I made: the opening quote after su -c gets interpreted as the closing quote of the xterm -e string. The working code is as follows:
#!/bin/bash
while true; do #Continue indefinitely
    if [ $(ps aux | grep '/odroid_detection' | grep -v '<defunct>' -c) -le 2 ]; then #if less than 3 odroid_servers are active (booter opens 3 processes)
        xterm -iconic -e "cd ../_odroid_detection/_odroid_detection/bin/Debug/ && su -c ./_odroid_detection"
    fi
    sleep 60 #check every minute
done
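One caveat with the working version: the relative cd ../_odroid_detection/... only resolves correctly when the script is started from its own directory (~/Documents/SUNRISE-Odroid/Bash). A sketch of a more robust variant, assuming the Debug directory really does live at that path relative to the script, resolves it from the script's own location and hands the whole command to bash -c to sidestep the quoting problem:
#!/bin/bash
# Sketch: resolve the Debug directory relative to this script's location so the
# launcher works no matter which directory it is started from.
script_dir=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
debug_dir="$script_dir/../_odroid_detection/_odroid_detection/bin/Debug"
while true; do
    if [ "$(ps aux | grep '/odroid_detection' | grep -v '<defunct>' -c)" -le 2 ]; then
        xterm -iconic -e bash -c "cd '$debug_dir' && su -c ./_odroid_detection"
    fi
    sleep 60
done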

Using inotifywait to process two files in parallel

I am using:
inotifywait -m -q -e close_write --format %f . | while IFS= read -r file; do
    cp -p "$file" /path/to/other/directory
done
to monitor a folder for file completion, then copy each completed file to another directory.
Files are made in pairs but at separate times, i.e. File1_001.txt is made at 3pm, File1_002.txt is made at 9pm. I want to monitor for the completion of BOTH files, then launch a script:
script.sh File1_001.txt File1_002.txt
So I need another inotifywait command, or a different utility, that can identify when both files are present and complete, and then start the script.
Does anyone know how to solve this problem?
I found a Linux box with inotifywait installed on it, so now I understand what it does and how it works. :)
Is this what you need?
#!/bin/bash
if [ "$1" = "-v" ]; then
    Verbose=true
    shift
else
    Verbose=false
fi
file1="$1"
file2="$2"
$Verbose && printf 'Waiting for %s and %s.\n' "$file1" "$file2"
got1=false
got2=false
while read thisfile; do
    $Verbose && printf ">> $thisfile"
    case "$thisfile" in
        $file1) got1=true; $Verbose && printf "... it's a match!" ;;
        $file2) got2=true; $Verbose && printf "... it's a match!" ;;
    esac
    $Verbose && printf '\n'
    if $got1 && $got2; then
        $Verbose && printf 'Saw both files.\n'
        break
    fi
done < <(inotifywait -m -q -e close_write --format %f .)
This runs a single inotifywait but parses its output in a loop that exits when both files on the command line ($1 and $2) are seen to have been updated.
Note that if one file is closed and then later is reopened while the second file is closed, this script obviously will not detect the open file. But that may not be a concern in your use case.
Note that there are many ways of building a solution -- I've shown you only one.
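For completeness, tying this back to the original goal: if the script above is saved as, say, waitboth.sh (a made-up name), the wait-then-launch step could look like this:
# Hypothetical usage: block until both files have been written, then run script.sh
./waitboth.sh File1_001.txt File1_002.txt && script.sh File1_001.txt File1_002.txt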

bash lsof : get pid from one tty to another one

How do I get, from tty1, the PID of the process launched in tty2?
Context:
I'm trying to write a bash one-liner to kill a process that is generating a file when that file exceeds a predefined maximum size. (The one-liner is not operational yet as-is; it still needs to be embedded in a loop.)
During testing, the problem is that lsof does not return any PID in terminal tty1, even though the PID exists in tty2 where the command is run.
tty1: generating the file and monitoring changes
MAX_SIZE_Ko=10001;file=test_lsof;dd if=/dev/zero of=$file bs=1k count=800;inotifywait $file;SIZE_Ko=$(du -s $file | cut -f1); [[ "$SIZE_Ko" -gt "$MAX_SIZE_Ko" ]] && ( PID=$(lsof $file | tail -n1 | awk -F" " '{ print $2 }') ; [[ ! -z $PID ]] && kill -9 $PID || echo "no running PID modifying $file" )
tty2: increasing the file size
for (( 1; 1; 1));do echo -e "foobar\n" >> test_lsof; echo $(( i++ ))" - pid="$$; done
As mentioned in the other answer, the file is opened only for a short time, so the odds of your lsof catching it are low.
However, you can change that:
exec 5>test_lsof
for (( 1; 1; 1)); do
    echo -e "foobar\n" >&5
    echo $(( i++ ))" - pid="$$
done
This uses advanced shell redirection - the exec line opens a file descriptor, the >&5 redirects output from the command to that file descriptor.
If you do that, the shell will be visible to lsof.
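With the descriptor held open, the monitoring side in tty1 can pick up the PID directly; lsof's -t option prints bare PIDs, which avoids the tail/awk parsing (a sketch, not part of the original answer):
# In tty1: -t prints only the PIDs of processes holding test_lsof open
PID=$(lsof -t test_lsof)
[[ -n $PID ]] && kill $PID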
The problem is that the process in tty2 opens the file only for a split second to append the string. Unless you run lsof in the same split second, you won't catch it.
One way to deal with this is to use inotify-tools. The program inotifywait allows you to wait until the file is opened and then run lsof, e.g. inotifywait $file; lsof $file.
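For the "need to embed this into a loop" part of the question, a rough sketch combining the pieces might look like this (assumes inotify-tools is installed; the file name and size limit are taken from the question):
#!/bin/bash
# Sketch: wait for writes, re-check the size, and kill the writer once the file
# exceeds the limit. lsof -t prints bare PIDs.
MAX_SIZE_Ko=10001
file=test_lsof
while :; do
    inotifywait -qq -e modify "$file"     # block until the file changes
    SIZE_Ko=$(du -sk "$file" | cut -f1)
    if (( SIZE_Ko > MAX_SIZE_Ko )); then
        PID=$(lsof -t "$file")
        if [[ -n $PID ]]; then
            kill $PID
            break
        fi
        echo "no running PID modifying $file"
    fi
done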
