'exit' Command inside a function causing host screen window to close - linux

Edit: If I can use return instead, what about functions that use set -e?
I run screen on my Linux VM in order to have multiple terminals that I can disconnect from and return to when I want.
I'm having issues running bash commands that use functions that either call exit directly or run with set -e: they cause the screen window to close and kick me out to another one of my screen terminals.
Is there a way to get screen to stop responding to exit calls, or perhaps make it aware that the exit is purely for the return value of the function that's using it?
The offending function:
function unpackdb() {
    cd "$DB_LOCATION"
    mkdir -p downloads
    cd downloads
    db_file_name=$(ls -t *.tar.gz 2> /dev/null | head -1)
    if [ -n "$db_file_name" ] ; then
        tar xf "$db_file_name" -C ../data
        echo "Latest file (${db_file_name}) restored into $(pwd)/../data"
    else
        echo "Could not find a .tar.gz in $(pwd), if you don't care use command 'startdb'"
        exit 1
    fi
}
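(For reference, a sketch of a common workaround, not part of the original question: call the function in a subshell, so that an explicit exit, or an abort under set -e, terminates only the subshell, and the interactive shell that screen is hosting survives.)
# Invoke the function in a subshell: 'exit 1' (or a set -e failure)
# terminates only the subshell, not the shell running inside screen.
if ! (unpackdb); then
    echo "unpackdb failed, but this shell is still alive" >&2
fi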

Related

How can we prevent CTRL-C from screen terminating?

I'm currently writing a bash script that would create multiple shell instances (with screen command) and execute a subprogram.
The problem is that when I try to interrupt the subprogram, it interrupts the screen instance too. I have already searched for the trap command and SIGINT on the internet, but I don't really know how to use it in this case.
Here is my code to show you how I create the screens:
#!/bin/bash
#ALL PATHS ARE DECLARED HERE.
declare -A PATHS; declare -a orders;
PATHS["proxy"]=/home/luna/proxy/HydraProxy; orders+=( "proxy" )
PATHS["bot"]=/home/luna/bot; orders+=( "bot" )
#LAUNCH SERVERS
SERVERS=/home/luna/servers
cd "$SERVERS"
for dir in */; do
    d=$(basename "$dir")
    PATHS["$d"]="$(realpath "$dir")"; orders+=( "$d" )
done
for name in "${orders[@]}"; do
    if ! screen -list | grep -q "$name"; then
        path="${PATHS[$name]}"
        cd "$path"
        screen -dmS "$name" ./start.sh
        echo "$name CREATED AT $path"
        sleep 2
    else
        echo "SCREEN $name IS ALREADY CREATED"
    fi
done
Could you help me find a solution, please? Thank you very much for your time.
Each of your screen instances is created to run a single command, start.sh. When that command terminates, for instance when you interrupt it, the screen session has done its job and exits too. This is because screen runs the command directly, rather than spawning a new interactive shell and running the command inside it.
If you wanted to run start.sh inside an interactive shell in each screen, you'd do something like this:
screen -dmS "$name" /bin/bash -i
screen -S "$name" -X stuff "./start.sh^M"
The ^M is needed as it simulates pressing Enter in your shell within screen; it is a literal carriage return, which you can type as Ctrl-V Ctrl-M or write as $'\r' in bash.
If you use this, then when you interrupt a script within screen, you will still be left with an interactive prompt afterward to deal with as you see fit.
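Applied to the loop from the question, the change would look something like this (a sketch based on the answer above; the variables are the ones from the original script):
for name in "${orders[@]}"; do
    if ! screen -list | grep -q "$name"; then
        path="${PATHS[$name]}"
        cd "$path"
        screen -dmS "$name" /bin/bash -i            # detached interactive shell
        screen -S "$name" -X stuff $'./start.sh\r'  # "type" the command and press Enter
        echo "$name CREATED AT $path"
        sleep 2
    else
        echo "SCREEN $name IS ALREADY CREATED"
    fi
done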

Get bash script to open terminal

In Windows when I double-click a Batch script, it will automatically open a terminal window and show me what's happening. If I were to double-click a bash script in Linux, a terminal window does not open to show me what is happening; it runs in the background. I have seen that one can use one script to launch another script in a new terminal window with x-terminal-emulator -e "./script.sh", but is there any bash command I can put into the same (one) script.sh so that it will open a terminal and show me what's happening (or if I need to answer y/n questions)?
You can do something similar to what the Slax developers do in their bootinst.sh:
#!/usr/bin/env sh
#
# If you see this file in a text editor instead of getting it executed,
# then it is missing executable permissions (chmod). You can try to set
# exec permissions for this file by using: chmod a+x bootinst.sh
#
# Scrolling down will reveal the actual code of this script.
#
# if we're running this from X, re-run the script in konsole or xterm
if [ "$DISPLAY" != "" ]; then
if [ "$1" != "--rex" -a "$2" != "--rex" ]; then
konsole --nofork -e /bin/sh $0 --rex 2>/dev/null || xterm -e /bin/sh $0 --rex 2>/dev/null || /bin/sh $0 --rex 2>/dev/null
exit
fi
fi
# put contents of your script here
echo hi
# do not close the terminal immediately, let user look at the results
echo "Press Enter..."
read junk
This script runs correctly both when started in a graphical environment and on a tty. It tries to restart itself inside konsole or xterm, but if it finds neither of them it will simply run without opening a new terminal.
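A more generic variant of the same idea, sketched with the x-terminal-emulator command mentioned in the question instead of hard-coding konsole and xterm:
#!/usr/bin/env sh
# If we're running under X and not already re-executed, open a terminal.
if [ -n "$DISPLAY" ] && [ "$1" != "--rex" ]; then
    x-terminal-emulator -e /bin/sh "$0" --rex 2>/dev/null && exit
fi
# put the contents of your script here
echo hi
# do not close the terminal immediately, let the user look at the results
echo "Press Enter..."
read junk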

How to detect when a bash script is triggered from keybinding

Background
I have a Bash script that requires user input. It can be run by calling it in a terminal or by pressing a keyboard shortcut registered in the i3 (or Sway) config file as follows:
bindsym --release $mod+Shift+t exec /usr/local/bin/myscript
The Problem
I know I can use read -p to prompt in a terminal, but that obviously won't work when the script is triggered through the keybinding. In that case I can use something like Yad to create a GUI, but I'm unable to detect when it's not "in a terminal". Essentially I want to be able to do something like this:
if [ $isInTerminal ]; then
    read -rp "Enter your username: " username
else
    username=$(yad --entry --text "Enter your username:")
fi
How can I (automatically) check if my script was called from the keybinding, or is running in a terminal? Ideally this should be without user-specified command-line arguments. Others may use the script and as such, I'd like to avoid introducing any possibility of user error via "forgotten flags".
Attempted "Solution"
This question suggests checking if stdout is going to a terminal, so I created this test script:
#!/usr/bin/env bash
rm -f /tmp/detect.log
logpass() {
    echo "$1 IS opened on a terminal" >> /tmp/detect.log
}
logfail() {
    echo "$1 IS NOT opened on a terminal" >> /tmp/detect.log
}
if [ -t 0 ]; then logpass stdin; else logfail stdin; fi
if [ -t 1 ]; then logpass stdout; else logfail stdout; fi
if [ -t 2 ]; then logpass stderr; else logfail stderr; fi
However, this doesn't solve my problem. Regardless of whether I run ./detect.sh in a terminal or trigger it via keybind, output is always the same:
$ cat /tmp/detect.log
stdin IS opened on a terminal
stdout IS opened on a terminal
stderr IS opened on a terminal
It seems like the easiest way to solve your actual problem would be to change the i3 bind to
bindsym --release $mod+Shift+t exec /usr/local/bin/myscript fromI3
and do
if [[ -n "$1" ]]; then
    echo "this was from a keybind"
else
    echo "this wasn't from a keybind"
fi
in your script.
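If stray arguments are a concern (the "forgotten flags" worry from the question), a variation on the same idea is to match one specific flag; the --from-keybind name here is made up:
bindsym --release $mod+Shift+t exec /usr/local/bin/myscript --from-keybind
if [ "$1" = "--from-keybind" ]; then
    username=$(yad --entry --text "Enter your username:")
else
    read -rp "Enter your username: " username
fi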
False Positive
As most results on Google suggest, you could use tty, which would normally print "not a tty" when not running in a terminal. However, the output differs for scripts called via bindsym exec in i3/Sway:
/dev/tty1 # From a keybind
/dev/pts/6 # In a terminal
/dev/tty2 # In a console
While tty | grep pts would go part way to answering the question, it cannot distinguish between running in a console and being triggered from a keybinding, which matters if you're trying to decide whether to show a GUI.
"Sort of" Solution
Triggering via keybinding seems to always have systemd as the parent process. With that in mind, something like this could work:
{
    [ "$PPID" = "1" ] && echo "keybind" || echo "terminal"
} > /tmp/detect.log
It is likely a safe assumption that the systemd process will always have 1 as its PID, but there's no guarantee that every system using i3 will also be using systemd, so this is probably best avoided.
Better Solution
A more robust way would be using ps. According to PROCESS STATE CODES in the man page:
For BSD formats and when the stat keyword is used, additional characters may be displayed:
< high-priority (not nice to other users)
N low-priority (nice to other users)
L has pages locked into memory (for real-time and custom IO)
s is a session leader
l is multi-threaded (using CLONE_THREAD, like NPTL pthreads do)
+ is in the foreground process group
The key here is + on the last line: it marks a process that is in the foreground process group of its terminal. If you check with ps -o stat -p <PID>, the STAT value will contain a + (for example S+) when the script is running "interactively" in a terminal, and no + (for example Ss) when it's triggered via the keybinding.
Since you only need the STAT value, you can clean that up with -o stat=, which also removes the header, then pipe through grep to give you the following:
is_interactive() {
    ps -o stat= -p $$ | grep -q '+'
}
if is_interactive; then
    read -rp "Enter your username: " username
else
    username=$(yad --entry --text "Enter your username:")
fi
This will work not only in a terminal emulator and via an i3/Sway keybinding, but even in a raw console window, making it a much more reliable option than tty above.

Run script in a new screen if true

I have a script that checks whether background_logging is true; if it is, I want the rest of the script to run in a new detached screen.
I have tried using this: exec screen -dmS "alt-logging" /bin/bash "$0";. This sometimes creates the screen, but other times nothing happens at all. When it does create a screen, it doesn't run the rest of the script, and when I try to resume the screen it says it's (Dead??).
Here is the entire script; I have added some comments to explain better what I want to do:
#!/bin/bash
# Configuration files
config='config.cfg'
source "$config"
# If this is true, run the rest of the script in a new screen.
# $background_logging comes from the configuration file declared above (config).
if [ $background_logging == "true" ]; then
    exec screen -dmS "alt-logging" /bin/bash "$0";
fi
[ $# -eq 0 ] && { echo -e "\nERROR: You must specify an alt file!"; exit 1; }
# Logging script
y=0
while IFS='' read -r line || [[ -n "$line" ]]; do
    cmd="screen -dmS alt$y bash -c 'exec $line;'"
    eval $cmd
    sleep $logging_speed
    y=$(( $y + 1 ))
done < "$1"
Here are the contents of the configuration file:
# This is the speed at which alts will be logged, set to 0 for fast launch.
logging_speed=5
# This is to make a new screen in which the script will run.
background_logging=true
The purpose of this script is to loop through each line in a text file and execute the line as a command. It works perfectly fine when $background_logging is false so there are no issues with the while loop.
As described, it's not entirely possible. Specifically, here is what is going on in your script: when you exec, you replace your running script with screen itself.
What you could do, though, is start screen, figure out a few details about it, and redirect your script's console input/output to it; you won't, however, be able to reparent your running script to the screen process as if it had been started there. Something like this, for instance:
#!/bin/bash
# Use a temp file to pass cat's parent pid out of screen.
tempfile=$(tempfile)
screen -dmS 'alt-logging' /bin/bash -c "echo \$\$ > \"${tempfile}\" && /bin/cat"
# Wait to receive that information on the outside (it may not be available
# immediately).
while [[ -z "${child_cat_pid}" ]] ; do
    child_cat_pid=$(cat "${tempfile}")
done
# point stdin/out/err of the current shell (rest of the script) to that cat
# child process
exec 0< /proc/${child_cat_pid}/fd/0
exec 1> /proc/${child_cat_pid}/fd/1
exec 2> /proc/${child_cat_pid}/fd/2
# Rest of the script
i=0
while true ; do
    echo $((i++))
    sleep 1
done
Far from perfect, and rather messy. It could probably be improved with a third-party tool like reptyr to grab the script's console from inside the screen. Cleaner and simpler still would be to start the code that should run in the screen session only after that session has been established.
That said, I'd suggest taking a step back and asking what exactly you're trying to achieve and why you want to run your script in screen. Are you planning to attach to and detach from it? If running a long-term process with a detached console is what you're after, nohup might be a simpler route.
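If you do want to keep the question's re-exec approach, note its likely failure mode: the copy re-run inside screen sources the config again, sees background_logging=true, and execs yet another screen while its own window exits, which is consistent with the dead session observed. A guard variable avoids that; a sketch (the IN_SCREEN name is made up):
if [ "$background_logging" = "true" ] && [ -z "$IN_SCREEN" ]; then
    # The exported variable is inherited by the new screen session, so
    # the re-run copy skips this branch; "$@" preserves the alt file argument.
    export IN_SCREEN=1
    exec screen -dmS "alt-logging" /bin/bash "$0" "$@"
fi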

Prompt not printed after redirecting bash script output to syslog

I found an article that explains how to redirect the output of a bash script to syslog. This is exactly what I needed, but there is a problem.
#!/bin/bash
# Don't ignore any error and return when first error occurs.
exec 1> >(logger -s -t $(basename $0)) 2>&1
set -e
# a list of command(s) that can fail:
chown -R user1:user1 /tmp/myappData/*
chown -R user1:user1 /tmp/myappTmp/*
chown -R user1:user1 /tmp/myappLog/*
#...
exit 0
When I execute the above script and an error occurs, sometimes the prompt doesn't return after the script finishes. I can't figure out why this is happening. The prompt doesn't return unless I hit Enter.
I am concerned that if an app uses this script, it may not get the proper exit code back.
If I comment out "set -e", then the prompt always returns properly after the script has executed.
So my question is, what is the proper way to setup a script so that it exits on error, and logs the corresponding message to syslog?
Thank you for your help and suggestions!
The problem here is that the logger pipeline is still running after your script exits, so some of the last content to be logged prints after the parent shell has emitted its prompt. If you scroll up, you'll find the prompt hidden somewhere in that earlier output.
If you have a very, very new bash, you can collect the PID of the process substitution, and wait for it later.
exec {orig_out}>&1 {orig_err}>&2 1> >(logger -s -t "${0##*/}") 2>&1; logger_pid=$!
[[ $logger_pid ]] || { echo "ERROR: Needs a newer bash" >&2; exit 1; }
cleanup() {
    exec >&$orig_out 2>&$orig_err
    wait "$logger_pid"
}
trap cleanup EXIT
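Put together with the script from the question, the newer-bash variant might look like this (a sketch; the chown targets are the ones from the question):
#!/bin/bash
# Duplicate the original stdout/stderr, then send both streams to logger.
exec {orig_out}>&1 {orig_err}>&2 1> >(logger -s -t "${0##*/}") 2>&1; logger_pid=$!
[[ $logger_pid ]] || { echo "ERROR: Needs a newer bash" >&2; exit 1; }
cleanup() {
    exec >&$orig_out 2>&$orig_err   # restore the original stdout/stderr
    wait "$logger_pid"              # let logger drain before the prompt returns
}
trap cleanup EXIT
set -e
chown -R user1:user1 /tmp/myappData/*
chown -R user1:user1 /tmp/myappTmp/*
chown -R user1:user1 /tmp/myappLog/*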
With an older bash, you can consider other tricks. For example, on Linux, you can use the flock command to try to grab exclusive access to a lockfile before exiting, after ensuring that that lock is held for as long as the logger is running:
log_lock=$(mktemp "${TMPDIR:-/tmp}/logger.XXXXXX")
exec > >(flock -x "$log_lock" logger -s -t "${0##*/}") 2>&1
cleanup() {
    exec >/dev/tty 2>&1 || exec >/dev/null 2>&1
    flock -x "$log_lock" true
}
trap cleanup EXIT
