Cross-platform method to detect whether /dev/tty is available & functional - linux

I have a bash script from which I want to access /dev/tty, but only when it's available.
When it's not available (in my case: when the script runs in GitHub Actions), any attempt to access it fails with /dev/tty: No such device or address. I'm trying to detect that case in advance, so I can avoid the error and provide fallback behaviour instead.
To do so I need a bash test that can cleanly detect this case and that works reliably across platforms (i.e. not the tty command, which has issues on Mac).
I'm currently using [[ -e "/dev/tty" ]], but it doesn't work: it returns true even on GitHub Actions, where /dev/tty exists but accessing it fails. What should I use instead?

After testing lots of promising but not quite perfect suggestions (see the other answers), I think I've found my own solution that does exactly fit my needs:
if sh -c ": >/dev/tty" >/dev/null 2>/dev/null; then
# /dev/tty is available and usable
else
# /dev/tty is not available
fi
To explain:
: >/dev/tty does nothing (using the : bash built-in) and redirects its empty output to /dev/tty, thereby checking that it exists and is writable, without actually producing any visible output. If this succeeds, we're good.
If we do that at the top level without a usable /dev/tty, bash itself produces a noisy error in our output. That error can't be silenced by redirecting the command's own streams, because the failed redirection is performed by the shell, not by the : command.
Wrapping it in sh -c "..." >/dev/null 2>/dev/null runs the test in a separate shell process whose stdout and stderr go to /dev/null, which silences all errors and warnings while still passing through the overall exit code.
Suggestions for further improvements welcome. For reference, I'm testing this with setsid <command>, which seems to be a good simulation of the TTY-less environment I'm having trouble with.
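For illustration, here is a minimal sketch of the fallback pattern this check enables (the answer variable and the default value are mine, not from the original script):
if sh -c ": >/dev/tty" >/dev/null 2>/dev/null; then
    # tty is usable: prompt the user directly
    read -r answer </dev/tty
else
    # no usable tty (e.g. CI): fall back to a default
    answer="use-defaults"
fi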

Try this approach:
if test "$(ps -p "$$" -o tty=)" = "?"; then
echo "/dev/tty is not available."
else
echo "/dev/tty is available."
fi
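One portability caveat (my observation, worth verifying on your platforms): BSD-derived ps, including on macOS, prints ?? rather than ? when there is no controlling terminal, and some implementations pad the field with spaces. A sketch that tolerates both:
tty_name=$(ps -p "$$" -o tty=)
case "$tty_name" in
    *"?"* | "") echo "/dev/tty is not available." ;;
    *)          echo "/dev/tty is available on $tty_name." ;;
esac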

Instead of spawning a new shell process to test if /dev/tty can really be opened (test -w lies, you know?), you can try to redirect stdin from /dev/tty in a subshell, like so:
if (exec < /dev/tty) ; then
# /dev/tty is available
else
# no tty is available
fi
This is POSIX syntax and should work in any shell.
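If you care about both directions, the same trick extends naturally; a sketch (the stderr redirect is my addition, to keep the subshell's failure message quiet):
if (exec </dev/tty >/dev/tty) 2>/dev/null; then
    echo "/dev/tty is readable and writable"
else
    echo "no usable tty"
fi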

It seems that adapting this answer to the ServerFault question How can I check in bash if a shell is running in interactive mode? (close to your question, albeit not an exact duplicate) could be a solution for your use case.
So, could you try writing either:
[ -t 0 ] && [ -t 1 ] && echo your code
or [ -t 0 ] && echo your code?
For completeness, here is one link documenting this POSIX flag -t, which is thus portable:
https://pubs.opengroup.org/onlinepubs/9699919799/utilities/test.html
-t file_descriptor
True if file descriptor number file_descriptor is open and is associated with a terminal.
False if file_descriptor is not a valid file descriptor number, or if file descriptor number file_descriptor is not open, or if it is open but is not associated with a terminal.
Furthermore, if you use bash (not just a POSIX-compliant shell), you might want to combine this idea with the special 255 file descriptor number: [ -t 255 ].
Source: on Unix & Linux SE:
That 255 file descriptor is an open handle to the controlling tty and is only used when bash is run in interactive mode.
[…]
− In Bash, what is file descriptor 255 for, can I use it? (by @mosvy)
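A quick way to see -t in action (an illustrative demo, run from an interactive shell):
[ -t 0 ] && echo "stdin is a terminal"               # true when typed at a prompt
echo hi | { [ -t 0 ] || echo "stdin is a pipe"; }    # FD 0 is now a pipe
[ -t 0 ] </dev/null || echo "stdin is redirected"    # FD 0 is /dev/null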

Beyond the other answers mentioned in this thread (and as an alternative to the other idea involving $-, which did not seem to work for you), what about this other idea mentioned in the bash manual?
if [ -z "$PS1" ]; then
echo This shell is not interactive
else
echo This shell is interactive
fi
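For reference, the companion test from the same section of the bash manual checks $- for the i flag (though, as noted above, the asker reported mixed results with this approach):
case "$-" in
    *i*) echo "This shell is interactive" ;;
    *)   echo "This shell is not interactive" ;;
esac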

Related

Test in a Bash script if the user can input data using `read` or equivalent

I'd like to know if the user can input data.
In a script, it is usually possible to call read -r VARIABLE to request input from the user. However, this doesn't work in all environments: for example, in CI scripts, it's not possible for the user to input anything, and I'd like to substitute a default value in that case.
So far, I'm handling this with a timeout, like this:
echo "If you are a human, type 'ENTER' now. Otherwise, automatic installation will start in 10 seconds..."
read -t 10 -r _user_choice || _user_choice="no-user-here"
But honestly, that just looks ugly.
The solution doesn't have to use read, however it needs to be portable to all major distros that have Bash, so it's not possible to use packages that are not installed by default.
$ cat stdin.bash
if [[ -t 0 ]]; then
echo "stdin is a terminal, so the user can input data"
else
echo "stdin is connected to some other redirect or pipeline"
fi
and, demonstrating
$ bash stdin.bash
stdin is a terminal, so the user can input data
$ echo foo | bash stdin.bash
stdin is connected to some other redirect or pipeline
From help test output:
-t FD True if FD is opened on a terminal.
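Putting that together with the question's read fallback, a small sketch (the variable name is taken from the question; the default is illustrative):
if [[ -t 0 ]]; then
    read -r _user_choice
else
    _user_choice="no-user-here"
fi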

A way to specify a command to run if the previous fails

Is it possible to trap an error (e.g. an unknown command) from the CLI, and do something when an error occurs?
To be more precise, I'm looking for a way to do something like this:
if [ previousCommandFails ] ; then
echo lastCommand >> somewhere.txt
fi
echo is just an example; the point is that I need access to this lastCommand.
I want it to be a default behaviour in my computer, so the code must be placed somewhere like ~/.bashrc.
You can try the following solution. I don't guarantee that it's a good solution but it may help with your case.
Create a small script which can test the previous command i.e. test.sh with content:
if [ $? -ne 0 ]
then
history 1 >> /path/to/failed_commands.txt
fi
Then hook it into PROMPT_COMMAND. Two caveats: if PROMPT_COMMAND already has a value you need a separator, and the check should run first so that $? still refers to the last interactive command:
PROMPT_COMMAND="source /path/to/test.sh${PROMPT_COMMAND:+; $PROMPT_COMMAND}"
PROMPT_COMMAND If set, the value is executed as a command prior to
issuing each primary prompt.
It depends on what you call a failure. If it is just returning a non-zero value, I am afraid that you have to explicitly test it after each command, or use a specialized shell.
But trap can be used to execute a specific command when a signal is received:
trap action signal
If this is not enough, you will have to get the source of a shell (POSIX shell or bash) and tweak it to meet your needs...
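Alternatively (my suggestion, not from the answers above), bash's ERR trap fires whenever a command returns non-zero, which includes the 127 status bash reports for unknown commands; a sketch for ~/.bashrc (the log path is illustrative):
# Log every failing interactive command with its exit status
trap 'rc=$?; echo "$(date "+%F %T") exit=$rc cmd=$BASH_COMMAND" >> ~/failed_commands.txt' ERR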

Shell script hangs when I switch to bash - Linux [duplicate]

This question already has answers here:
Pass commands as input to another command (su, ssh, sh, etc)
(3 answers)
Closed 6 years ago.
I'm very, very new to Linux (coming from Windows) and trying to write a script that I can hopefully execute over multiple systems. I tried to use Python for this but found it hard too. Here is what I have so far:
cd /bin
bash
source compilervars.sh intel64
cd ~
exit #exit bash
file= "~/a.out"
if[! -f "$file"]
then
icc code.c
fi
#run some commands here...
The script hangs on the second line (bash). I'm not sure how to fix that, or if I'm doing it wrong. Please advise.
Also, any tips of how to run this script over multiple systems on the same network?
Thanks a lot.
What I believe you'd want to do:
#!/bin/bash
source /bin/compilervars.sh intel64
file="$HOME/a.out"
if [ ! -f "$file" ]; then
icc code.c
fi
You would put this in a file and make it executable with chmod +x myscript. Then you would run it with ./myscript. Alternatively, you could just run it with bash myscript.
Your script makes little sense. The second line will open a new bash session, but it will just sit there until you exit it. Also, changing directories back and forth is very seldom required. To execute a single command in another directory, one usually does
( cd /other/place && mycommand )
The ( ... ) tells the shell that you'd like to do this in a sub-shell. The cd happens within that sub-shell and you don't have to cd back after it's done. If the cd fails, the command will not be run.
For example: You might want to make sure you're in $HOME when you compile the code:
if [ ! -f "$file" ]; then
( cd "$HOME" && icc code.c )
fi
... or even pick out the directory name from the variable file and use that:
if [ -f "$file" ]; then
( cd "$(dirname "$file")" && icc code.c )
fi
Assigning to a variable needs to happen as I wrote it, without spaces around the =.
Likewise, there needs to be spaces after if and inside [ ... ] as I wrote it above.
I also tend to use $HOME rather than ~ in scripts as it's more descriptive.
A shell script isn't a record of key strokes which are typed into a terminal. If you write a script like this:
command1
bash
command2
it does not mean that the script will switch to bash, and then execute command2 in the different shell. It means that bash will be run. If there is a controlling terminal, that bash will show you a prompt and wait for a command to be typed in. You will have to type exit to quit that bash. Only then will the original script then continue with command2.
There is no way to switch a script to a different shell halfway through. There are ways to simulate this. A script can re-execute itself using a different shell. In order to do that, the script has to contain logic to detect that it is being re-executed, so that it can prevent re-executing itself again, and to skip some code that shouldn't be run twice.
In this script, I implemented such a re-execution hack. It consists of these lines:
#
# The #!/bin/sh might be some legacy piece of crap,
# not even up to 1990 POSIX.2 spec. So the first step
# is to look for a better shell in some known places
# and re-execute ourselves with that interpreter.
#
if test x$txr_shell = x ; then
for shell in /bin/bash /usr/bin/bash /usr/xpg4/bin/sh ; do
if test -x $shell ; then
txr_shell=$shell
break
fi
done
if test x$txr_shell = x ; then
echo "No known POSIX shell found: falling back on /bin/sh, which may not work"
txr_shell=/bin/sh
fi
export txr_shell
exec $txr_shell $0 ${@+"$@"}
fi
The txr_shell variable (not a standard variable, my invention) is how this logic detects that it's been re-executed. If the variable doesn't exist then this is the original execution. When we re-execute we export txr_shell so the re-executed instance will then have this environment variable.
The variable also holds the path to the shell; that is used later in the script; it is passed through to a Makefile as the SHELL variable, so that make build recipes use that same shell. In the above logic, the contents of txr_shell don't matter; it's used as Boolean: either it exists or it doesn't.
The programming style in the above code snippet is deliberately coded to work on very old shells. That is why test x$txr_shell = x is used instead of the modern syntax [ -z "$txr_shell" ], and why ${@+"$@"} is used instead of just "$@".
This style is no longer used after this point in the script, because the
rest of the script runs in some good, reasonably modern shell thanks to the re-execution trick.

shell prompt seemingly does not reappear after running a script that uses exec with tee to send stdout output to both the terminal and a file

I have a shell script which writes all output to a logfile and the terminal; this part works fine. But when I execute the script, a new shell prompt only appears if I press Enter. Why is that, and how do I fix it?
#!/bin/bash
exec > >(tee logfile)
echo "output"
First, when I'm testing this, there always is a new shell prompt, it's just that sometimes the string output comes after it, so the prompt isn't last. Did you happen to overlook it? If so, there seems to be a race where the shell prints the prompt before the tee in the background completes.
Unfortunately, that cannot be fixed by waiting in the shell for tee; see this question on unix.stackexchange. Fragile workarounds aside, the easiest way to solve this that I see is to put your whole script inside a group command:
{
your-code-here
} | tee logfile
If I run the following script (suppressing the newline from the echo), I see the prompt, but not "output". The string is still written to the file.
#!/bin/bash
exec > >(tee logfile)
echo -n "output"
What I suspect is this: you have three different file descriptors trying to write to the same file (that is, the terminal): standard output of the shell, standard error of the shell, and the standard output of tee. The shell writes synchronously: first the echo to standard output, then the prompt to standard error, so the terminal is able to sequence them correctly. However, the third file descriptor is written to asynchronously by tee, so there is a race condition. I don't quite understand how my modification affects the race, but it appears to upset some balance, allowing the prompt to be written at a different time and appear on the screen. (I expect output buffering to play a part in this).
You might also try running your script after running the script command, which will log everything written to the terminal; if you wade through all the control characters in the file, you may notice the prompt in the file just prior to the output written by tee. In support of my race condition theory, I'll note that after running the script a few times, it was no longer displaying "abnormal" behavior; my shell prompt was displayed as expected after the string "output", so there is definitely some non-deterministic element to this situation.
@chepner's answer provides great background information.
Here's a workaround - works on Ubuntu 12.04 (Linux 3.2.0) and on OS X 10.9.1:
#!/bin/bash
exec > >(tee logfile)
echo "output"
# WORKAROUND - place LAST in your script.
# Execute an executable (as opposed to a builtin) that outputs *something*
# to make the prompt reappear normally.
# In this case we use the printf *executable* to output an *empty string*.
# Use of `$ec` is to ensure that the script's actual exit code is passed through.
ec=$?; $(which printf) ''; exit $ec
Alternatives:
@user2719058's answer shows a simple alternative: wrapping the entire script body in a group command ({ ... }) and piping it to tee logfile.
An external solution, as @chepner has already hinted at, is to use the script utility to create a "transcript" of your script's output in addition to displaying it:
script -qc yourScript /dev/null > logfile # Linux syntax
This, however, will also capture stderr output; if you wanted to avoid that, use:
script -qc 'yourScript 2>/dev/null' /dev/null > logfile
Note, however, that this will suppress stderr output altogether.
As others have noted, it's not that there's no prompt printed -- it's that the last of the output written by tee can come after the prompt, making the prompt no longer visible.
If you have bash 4.4 or newer, you can wait for your tee process to exit, like so:
#!/usr/bin/env bash
case $BASH_VERSION in ''|[0-3].*|4.[0-3]*) echo "ERROR: Bash 4.4+ needed" >&2; exit 1;; esac
exec {orig_stdout}>&1 {orig_stderr}>&2 # back up the original stdout and stderr
exec > >(tee -a "_install_log"); tee_pid=$! # track PID of tee after starting it
cleanup() { # define a function we'll call during shutdown
retval=$?
exec >&$orig_stdout # Copy your original stdout back to FD 1, overwriting the pipe to tee
exec 2>&$orig_stderr # If something overwrites stderr to also go through tee, fix that too
wait "$tee_pid" # Now, wait until tee exits
exit "$retval" # and complete exit with our original exit status
}
trap cleanup EXIT # configure the function above to be called during cleanup
echo "Writing something to stdout here"

How to restrict SSH users to a predefined set of commands after login?

This is an idea for security. Our employees shall have access to some commands on a Linux server, but not all. For example, they should be able to view a log file (less logfile) or start certain commands (shutdown.sh / run.sh).
Background information:
All employees access the server with the same user name: Our product runs with "normal" user permissions, no "installation" is needed. Just unzip it in your user dir and run it. We manage several servers where our application is "installed". On every machine there is a user johndoe. Our employees sometimes need access to the application on command line to access and check log files or to restart the application by hand. Only some people shall have full command line access.
We are using ppk authentication on the server.
It would be great if employee1 can only access the logfile and employee2 can also do X etc...
Solution:
As a solution I'll use the command option as stated in the accepted answer. I'll make my own little shell script that will be the only file that can be executed for some employees. The script will offer several commands that can be executed, but no others. I'll use the following options in authorized_keys, as stated here:
command="/bin/myscript.sh",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty
ssh-dss AAAAB3....o9M9qz4xqGCqGXoJw= user@host
This is enough security for us. Thanks, community!
You can also restrict keys to permissible commands (in the authorized_keys file).
I.e. the user would not log in via ssh and then have a restricted set of commands but rather would only be allowed to execute those commands via ssh (e.g. "ssh somehost bin/showlogfile")
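For reference, with a forced command sshd puts the client's originally requested command into the SSH_ORIGINAL_COMMAND environment variable, so the wrapper can whitelist it; a sketch (paths and command names are illustrative):
#!/bin/sh
# Forced-command wrapper: allow only a fixed set of requests
case "$SSH_ORIGINAL_COMMAND" in
    "bin/showlogfile")      exec bin/showlogfile ;;
    "run.sh"|"shutdown.sh") exec "./$SSH_ORIGINAL_COMMAND" ;;
    *) echo "command not allowed" >&2; exit 1 ;;
esac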
ssh follows the rsh tradition by using the user's shell program from the password file to execute commands.
This means that we can solve this without involving ssh configuration in any way.
If you don't want the user to be able to have shell access, then simply replace that user's shell with a script. If you look in /etc/passwd you will see that there is a field which assigns a shell command interpreter to each user. The script is used as the shell both for their interactive login ssh user@host as well as for commands ssh user@host command arg ....
Here is an example. I created a user foo whose shell is a script. The script prints the message my arguments are: followed by its arguments (each on a separate line and in angle brackets) and terminates. In the log in case, there are no arguments. Here is what happens:
webserver:~# ssh foo@localhost
foo@localhost's password:
Linux webserver [ snip ]
[ snip ]
my arguments are:
Connection to localhost closed.
If the user tries to run a command, it looks like this:
webserver:~# ssh foo@localhost cat /etc/passwd
foo@localhost's password:
my arguments are:
<-c>
<cat /etc/passwd>
Our "shell" receives a -c style invocation, with the entire command as one argument, just the same way that /bin/sh would receive it.
So as you can see, what we can do now is develop the script further so that it recognizes the case when it has been invoked with a -c argument, and then parses the string (say by pattern matching). Those strings which are allowed can be passed to the real shell by recursively invoking /bin/bash -c <string>. The reject case can print an error message and terminate (including the case when -c is missing).
You have to be careful how you write this. I recommend writing only positive matches which allow only very specific things, and disallow everything else.
Note: if you are root, you can still log into this account by overriding the shell in the su command, like this su -s /bin/bash foo. (Substitute shell of choice.) Non-root cannot do this.
Here is an example script: restrict the user into only using ssh for git access to repositories under /git.
#!/bin/sh
if [ $# -ne 2 ] || [ "$1" != "-c" ] ; then
printf "interactive login not permitted\n"
exit 1
fi
set -- $2
if [ $# != 2 ] ; then
printf "wrong number of arguments\n"
exit 1
fi
case "$1" in
( git-upload-pack | git-receive-pack )
;; # continue execution
( * )
printf "command not allowed\n"
exit 1
;;
esac
# Canonicalize the path name: we don't want escape out of
# git via ../ path components.
gitpath=$(readlink -f "$2") # GNU Coreutils specific
case "$gitpath" in
( /git/* )
;; # continue execution
( * )
printf "access denied outside of /git\n"
exit 1
;;
esac
if ! [ -e "$gitpath" ] ; then
printf "that git repo doesn't exist\n"
exit 1
fi
"$1" "$gitpath"
Of course, we are trusting that these Git programs git-upload-pack and git-receive-pack don't have holes or escape hatches that will give users access to the system.
That is inherent in this kind of restriction scheme. The user is authenticated to execute code in a certain security domain, and we are kludging in a restriction to limit that domain to a subdomain. For instance if you allow a user to run the vim command on a specific file to edit it, the user can just get a shell with :!sh[Enter].
What you are looking for is called a Restricted Shell. Bash provides such a mode (rbash, or bash -r) in which users cannot cd, cannot change PATH, and cannot run commands whose names contain slashes, so they can only run the commands you expose via PATH; this might be good enough for you.
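A minimal setup sketch, assuming rbash is available (usernames and paths are illustrative; note that bash applies the restrictions only after the startup files are read, which is why setting PATH in .bash_profile works):
# Create a user whose login shell is the restricted bash
sudo useradd -m -s /bin/rbash employee1
# Expose only a curated set of commands via a private bin directory
sudo mkdir /home/employee1/bin
sudo ln -s /usr/bin/less /home/employee1/bin/less
echo 'PATH=$HOME/bin' | sudo tee /home/employee1/.bash_profile
# Keep the startup file out of the user's hands
sudo chown root:root /home/employee1/.bash_profile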
I've found this thread to be very illustrative, if a bit dated.
Why don't you write your own login-shell? It would be quite simple to use Bash for this, but you can use any language.
Example in Bash
Use your favorite editor to create the file /root/rbash.sh (this can be any name or path, but it should be owned root:root and chmod 755, so users can execute but not modify it):
#!/bin/bash
commands=("man" "pwd" "ls" "whoami")
timestamp(){ date +'%Y-%m-%d %H:%M:%S'; }
log(){ echo -e "$(timestamp)\t$1\t$(whoami)\t$2" >> /var/log/rbash.log; }
trycmd()
{
# Provide an option to exit the shell
if [[ "$ln" == "exit" ]] || [[ "$ln" == "q" ]]
then
exit
# You can do exact string matching for some alias:
elif [[ "$ln" == "help" ]]
then
echo "Type exit or q to quit."
echo "Commands you can use:"
echo " help"
echo " echo"
echo "${commands[#]}" | tr ' ' '\n' | awk '{print " " $0}'
# You can use custom regular expression matching:
elif [[ "$ln" =~ ^echo\ .*$ ]]
then
ln="${ln:5}"
echo "$ln" # Beware, these double quotes are important to prevent malicious injection
# For example, optionally you can log this command
log COMMAND "echo $ln"
# Or you could even check an array of commands:
else
ok=false
for cmd in "${commands[#]}"
do
if [[ "$cmd" == "$ln" ]]
then
ok=true
fi
done
if $ok
then
$ln
else
log DENIED "$ln"
fi
fi
}
# Optionally show a friendly welcome-message with instructions since it is a custom shell
echo "$(timestamp) Welcome, $(whoami). Type 'help' for information."
# Optionally log the login
log LOGIN "$@"
# Optionally log the logout
trap "trap=\"\";log LOGOUT;exit" EXIT
# Optionally check for '-c custom_command' arguments passed directly to shell
# Then you can also use ssh user@host custom_command, which will execute /root/rbash.sh
if [[ "$1" == "-c" ]]
then
shift
ln="$*" # trycmd reads the command line from $ln
trycmd
else
while echo -n "> " && read ln
do
trycmd "$ln"
done
fi
All you have to do is set this executable as the login shell. For example, edit your /etc/passwd file and replace that user's current login shell (e.g. /bin/bash) with /root/rbash.sh.
This is just a simple example, but you can make it as advanced as you want; the idea is there. Be careful not to lock yourself out by changing the login shell of your own and only user. And always test weird symbols and commands to see if it is actually secure.
You can test it with: su -s /root/rbash.sh.
Beware, make sure to match the whole command, and be careful with wildcards! Better exclude Bash-symbols such as ;, &, &&, ||, $, and backticks to be sure.
Depending on the freedom you give the user, it won't get much safer than this. I've found that often I only needed to make a user that has access to only a few relevant commands, and in that case this is really the better solution.
However, if you wish to give more freedom, a jail and permissions might be more appropriate. Mistakes are easily made, and only noticed when it's already too late.
You should acquire `rssh', the restricted shell
You can follow the restriction guides mentioned above; they're all rather self-explanatory and simple to follow. Understand the terms `chroot jail', and how to effectively implement sshd/terminal configurations, and so on.
Since most of your users access your terminals via sshd, you should also probably look into sshd_config, the SSH daemon configuration file, to apply certain restrictions via SSH. Be careful, however. Understand properly what you try to implement, for the ramifications of incorrect configurations are probably rather dire.
GNU Rush may be the most flexible and secure way to accomplish this:
GNU Rush is a Restricted User Shell, designed for sites that provide limited remote access to their resources, such as svn or git repositories, scp, or the like. Using a sophisticated configuration file, GNU Rush gives you complete control over the command lines that users execute, as well as over the usage of system resources, such as virtual memory, CPU time, etc.
You might want to look at setting up a jail.
[Disclosure: I wrote sshdo which is described below]
If you want the login to be interactive then setting up a restricted shell is probably the right answer. But if there is an actual set of commands that you want to allow (and nothing else) and it's ok for these commands to be executed individually via ssh (e.g. ssh user@host cmd arg blah blah), then a generic command whitelisting control for ssh might be what you need. This is useful when the commands are scripted somehow at the client end and doesn't require the user to actually type in the ssh command.
There's a program called sshdo for doing this. It controls which commands may be executed via incoming ssh connections. It's available for download at:
http://raf.org/sshdo/ (read manual pages here)
https://github.com/raforg/sshdo/
It has a training mode to allow all commands that are attempted, and a --learn option to produce the configuration needed to allow learned commands permanently. Then training mode can be turned off and any other commands will not be executed.
It also has an --unlearn option to stop allowing commands that are no longer in use so as to maintain strict least privilege as requirements change over time.
It is very fussy about what it allows. It won't allow a command with any arguments. Only complete shell commands can be allowed.
But it does support simple patterns to represent similar commands that vary only in the digits that appear on the command line (e.g. sequence numbers or date/time stamps).
It's like a firewall or whitelisting control for ssh commands.
And it supports different commands being allowed for different users.
Another way of looking at this is to use POSIX ACLs. This needs to be supported by your file system, but it gives you fine-grained control over who may run which commands in Linux, much as you have on Windows (just without the nicer UI).
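For instance, a sketch of denying one user access to one binary via an ACL (the user and path are illustrative):
# Remove all access to a script for a specific user, leaving others untouched
setfacl -m u:employee2:--- /usr/local/bin/shutdown.sh
getfacl /usr/local/bin/shutdown.sh # inspect the resulting ACL entries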
Another thing to look into is PolicyKit.
You'll have to do quite a bit of googling to get everything working as this is definitely not a strength of Linux at the moment.
