Is it possible to trap an error (e.g. an unknown command) from the CLI, and do something when an error occurs?
To be more precise, I am looking for a way to do something like this:
if [ previousCommandFails ] ; then
    echo lastCommand >> somewhere.txt
fi
echo is just an example to say that I need to access this lastCommand.
I want this to be the default behaviour on my machine, so the code must be placed somewhere like ~/.bashrc.
You can try the following solution. I don't guarantee that it's a good solution, but it may help with your case.
Create a small script which tests the previous command, i.e. test.sh with this content:
if [ $? -ne 0 ]
then
    history 1 >> /path/to/failed_commands.txt
fi
Then append to this variable (if PROMPT_COMMAND already holds a command, insert a separating semicolon first):
PROMPT_COMMAND+="source /path/to/test.sh"
From the Bash manual:
PROMPT_COMMAND    If set, the value is executed as a command prior to issuing each primary prompt.
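Putting the pieces together, a minimal sketch that could go straight into ~/.bashrc (the log path is just an example):

# Log every failed interactive command to a file.
log_failed_command() {
    local status=$?
    if [ "$status" -ne 0 ]; then
        history 1 >> ~/failed_commands.txt
    fi
}
# Run it first so $? is still the last user command's status;
# keep any pre-existing PROMPT_COMMAND after it.
PROMPT_COMMAND="log_failed_command${PROMPT_COMMAND:+; $PROMPT_COMMAND}"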
It depends on what you call a failure. If it is just returning a non-zero value, I am afraid that you have to explicitly test it after each command, or use a specialized shell (*).
But trap can be used to execute a specific command when a signal is received:
trap action signal
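In bash specifically, trap also accepts the ERR condition in place of a signal name, which fires whenever a command returns non-zero; a small sketch (the log path is an example):

# Append the failing command to a log whenever a command exits non-zero.
# $BASH_COMMAND holds the text of the command that failed.
trap 'echo "$BASH_COMMAND" >> ~/failed_commands.txt' ERR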
If this is not enough, you will have to get the source of a shell (POSIX shell or bash) and tweak it to meet your needs...
Collecting the output of a remote ssh command in a variable:
(I took the format/syntax below from another post.)
I am trying to obtain a list of folder contents and compare them with a list of specific processes. I need this to know whether all required instances of, say, IHS/WAS are running as expected. I cannot use specific IHS or WAS commands because I do not have access to them. I have limited read access to the systems, and I am writing a script to obtain the list of instances installed, running, etc.
Below is my code:
#!/bin/bash
HOST='xyzhostname'
$vari=$(ssh -T $HOST <<'EOF'
printf "getting folders: \n"
instances=$(ls /samplefolder/samplefolder/)
printf "got folders.\n"
printf "${instances}"
EOF
)
More code to get processes (ps -ef .....) and compare them against each folder obtained in instances above will follow...
I get the following error when I run this code:
./test.sh: line 9: =getting: command not found
I would appreciate any help with this.
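For what it's worth, the error message points at the assignment itself: in $vari=$(...), the shell first expands the (empty) variable $vari and the command substitution, so the line becomes =getting folders: ..., which bash then tries to execute as a command. Dropping the $ on the left-hand side should get past this particular error; a sketch:

#!/bin/bash
HOST='xyzhostname'
# No leading $ on the left-hand side of an assignment:
vari=$(ssh -T "$HOST" <<'EOF'
printf "getting folders: \n"
instances=$(ls /samplefolder/samplefolder/)
printf "got folders.\n"
printf "%s" "${instances}"
EOF
)
echo "$vari"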
I've searched a lot on the internet but haven't been able to find any useful info on this yet. Does psftp not allow you to run constructs like IF and WHILE at all?
If it does, please let me know the syntax; I'm tired of banging my head against it. Annoyingly, PuTTY allows these commands but psftp doesn't seem to, even though both are from the same family. I really hope there is a solution to this!
PSFTP isn't a language. It's just an SFTP client. SFTP itself is just a protocol for moving files between computers. If you have SFTP set up on the remote computer then it suggests that you have SSH running (since SFTP generally comes bundled with the SSH server install).
You can do a test in a bash shell script, for instance, to see if the file exists on the remote server, then execute your psftp command based on the result. Something like:
#!/bin/bash
# test if file exists on remote system
fileExists=$(ssh user@yourothercomputer "test -f /tmp/foo && echo 'true' || echo 'false'")
if $fileExists; then
    psftp <whatever>
fi
You can stick that whole mess in a loop or whatevs. What's happening here is that we are sending a command test -f /tmp/foo && echo 'true' || echo 'false' to the remote computer to execute. The stdout of the command is returned and stored in the variable fileExists. Then we just test it.
If you are in windows you could convert this to a batch script and use plink.exe to send the command kind of like they do here. Or maybe just plop cygwin on your computer with an SSH and SFTP client and use what's above.
The big take-away here is that you will need a separate scripting environment to do the loop and run psftp based on a test.
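As a concrete sketch of that idea (hostname and file names invented; psftp reads scripted commands from a file via its -b option):

#!/bin/bash
# Check several remote files and fetch each one that exists,
# driving psftp with a generated batch file.
for f in /tmp/foo /tmp/bar; do
    if ssh user@yourothercomputer "test -f '$f'"; then
        printf 'get %s\nquit\n' "$f" > /tmp/psftp_batch
        psftp user@yourothercomputer -b /tmp/psftp_batch
    fi
done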
I'm trying to write an interactive script on a remote server, whose default shell is zsh. I've been trying two different approaches to get this to work:
Approach 1: ssh -t <user>@<host> "$(<serverStatusReport.sh)"
Approach 2: ssh <user>@<host> "bash -s" < serverStatusReport.sh
I've been using approach 1 just fine up until now, when I ran into the following issue - I have a block of code that runs depending on whether certain files exist in the current directory:
filename="./service_log.*"
if ls $filename 1> /dev/null 2>&1 ; then
echo "$filename found."
##process files
else
echo "$filename not found."
fi
If I ssh into the server and run the command directly, I see "$filename found."
If I run the block of code above using Approach 1, I see "$filename not found".
If I copy this block into a new script (let's call this script2), and run it using Approach 2, then I see "$filename found".
I can't for the life of me figure out where this discrepancy is coming from. I thought the difference might be that script2 is piped into bash, whereas my original script is being run with zsh... but considering that running the same command verbatim on the server, with its default zsh shell, returns correctly... I'm stumped.
:( any help would be greatly appreciated!
I guess that when executing your approach 1 it is the local shell that expands "$(<serverStatusReport.sh)", not the remote one. You can easily check this with:
ssh -t <user>@<host> "$(<hostname)"
Is the serverStatusReport.sh script also in the PATH on the local host?
What I do not understand is why you get this message instead of an error message.
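One quick way to check which shell actually interprets the commands on the remote side is to make it identify itself:

# Single quotes keep $0 from being expanded locally;
# the remote login shell (zsh here) will print its own name.
ssh <user>@<host> 'echo "remote shell: $0"'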
This is an idea for security. Our employees shall have access to some commands on a Linux server, but not all. They should, for example, be able to view a log file (less logfile) or start certain commands (shutdown.sh / run.sh).
Background information:
All employees access the server with the same user name: our product runs with "normal" user permissions; no "installation" is needed. Just unzip it in your user directory and run it. We manage several servers where our application is "installed". On every machine there is a user johndoe. Our employees sometimes need command-line access to the application to check log files or to restart it by hand. Only some people shall have full command-line access.
We are using ppk authentication on the server.
It would be great if employee1 can only access the logfile and employee2 can also do X etc...
Solution:
As a solution I'll use the command option as stated in the accepted answer. I'll make my own little shell script that will be the only file that can be executed for some employees. The script will offer several commands that can be executed, but no others. I'll use the following options in authorized_keys, as described here:
command="/bin/myscript.sh",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty
ssh-dss AAAAB3....o9M9qz4xqGCqGXoJw= user#host
This is enough security for us. Thanks, community!
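A minimal sketch of what /bin/myscript.sh could look like (using the commands mentioned above; SSH_ORIGINAL_COMMAND is set by sshd whenever a command="..." restriction is in force):

#!/bin/sh
# sshd puts the command the client asked to run into SSH_ORIGINAL_COMMAND;
# only honour entries from a known-good list.
case "$SSH_ORIGINAL_COMMAND" in
    "less logfile")
        less logfile
        ;;
    "run.sh")
        ./run.sh
        ;;
    "shutdown.sh")
        ./shutdown.sh
        ;;
    *)
        echo "command not allowed" >&2
        exit 1
        ;;
esac

Note that with no-pty set, screen-oriented tools like less won't behave interactively, so something like cat may be a better fit for log access.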
You can also restrict keys to permissible commands (in the authorized_keys file).
I.e. the user would not log in via ssh and then have a restricted set of commands, but rather would only be allowed to execute those commands via ssh (e.g. "ssh somehost bin/showlogfile").
ssh follows the rsh tradition by using the user's shell program from the password file to execute commands.
This means that we can solve this without involving ssh configuration in any way.
If you don't want the user to be able to have shell access, then simply replace that user's shell with a script. If you look in /etc/passwd you will see that there is a field which assigns a shell command interpreter to each user. The script is used as the shell both for their interactive login ssh user@host as well as for commands ssh user@host command arg ....
Here is an example. I created a user foo whose shell is a script. The script prints the message my arguments are: followed by its arguments (each on a separate line and in angle brackets) and terminates.
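A minimal sketch of such a shell script might be:

#!/bin/sh
# Print each argument on its own line, wrapped in angle brackets.
printf 'my arguments are:\n'
for arg; do
    printf '<%s>\n' "$arg"
done

In the login case, there are no arguments. Here is what happens: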
webserver:~# ssh foo@localhost
foo@localhost's password:
Linux webserver [ snip ]
[ snip ]
my arguments are:
Connection to localhost closed.
If the user tries to run a command, it looks like this:
webserver:~# ssh foo@localhost cat /etc/passwd
foo@localhost's password:
my arguments are:
<-c>
<cat /etc/passwd>
Our "shell" receives a -c style invocation, with the entire command as one argument, just the same way that /bin/sh would receive it.
So as you can see, what we can do now is develop the script further so that it recognizes the case when it has been invoked with a -c argument, and then parses the string (say by pattern matching). Those strings which are allowed can be passed to the real shell by recursively invoking /bin/bash -c <string>. The reject case can print an error message and terminate (including the case when -c is missing).
You have to be careful how you write this. I recommend writing only positive matches which allow only very specific things, and disallow everything else.
Note: if you are root, you can still log into this account by overriding the shell in the su command, like this su -s /bin/bash foo. (Substitute shell of choice.) Non-root cannot do this.
Here is an example script: restrict the user to using ssh only for git access to repositories under /git.
#!/bin/sh
if [ $# -ne 2 ] || [ "$1" != "-c" ] ; then
    printf "interactive login not permitted\n"
    exit 1
fi

# Deliberately unquoted: split the command string into words.
set -- $2

if [ $# != 2 ] ; then
    printf "wrong number of arguments\n"
    exit 1
fi

case "$1" in
    ( git-upload-pack | git-receive-pack )
        ;; # continue execution
    ( * )
        printf "command not allowed\n"
        exit 1
        ;;
esac

# Canonicalize the path name: we don't want users escaping
# out of /git via ../ path components.
gitpath=$(readlink -f "$2") # GNU Coreutils specific

case "$gitpath" in
    ( /git/* )
        ;; # continue execution
    ( * )
        printf "access denied outside of /git\n"
        exit 1
        ;;
esac

if ! [ -e "$gitpath" ] ; then
    printf "that git repo doesn't exist\n"
    exit 1
fi

"$1" "$gitpath"
Of course, we are trusting that these Git programs git-upload-pack and git-receive-pack don't have holes or escape hatches that will give users access to the system.
That is inherent in this kind of restriction scheme. The user is authenticated to execute code in a certain security domain, and we are kludging in a restriction to limit that domain to a subdomain. For instance if you allow a user to run the vim command on a specific file to edit it, the user can just get a shell with :!sh[Enter].
What you are looking for is called Restricted Shell. Bash provides such a mode in which users can only execute commands present in their home directories (and they cannot move to other directories), which might be good enough for you.
I've found this thread to be very illustrative, if a bit dated.
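For example, on distributions that ship rbash (typically a symlink to bash, equivalent to bash -r; the path may differ on your system), you could assign it directly:

# Restricted mode forbids cd, changing PATH/SHELL/ENV,
# redirecting output, and running commands containing a slash.
sudo chsh -s /bin/rbash employee1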
Why don't you write your own login-shell? It would be quite simple to use Bash for this, but you can use any language.
Example in Bash
Use your favorite editor to create the file /root/rbash.sh (this can be any name or path, but it must be readable and executable by the users it is assigned to, e.g. chown root:root and chmod 755):
#!/bin/bash
commands=("man" "pwd" "ls" "whoami")
timestamp(){ date +'%Y-%m-%d %H:%M:%S'; }
log(){ echo -e "$(timestamp)\t$1\t$(whoami)\t$2" >> /var/log/rbash.log; }
trycmd()
{
    # Provide an option to exit the shell
    if [[ "$ln" == "exit" ]] || [[ "$ln" == "q" ]]
    then
        exit
    # You can do exact string matching for some alias:
    elif [[ "$ln" == "help" ]]
    then
        echo "Type exit or q to quit."
        echo "Commands you can use:"
        echo "  help"
        echo "  echo"
        echo "${commands[@]}" | tr ' ' '\n' | awk '{print "  " $0}'
    # You can use custom regular expression matching:
    elif [[ "$ln" =~ ^echo\ .*$ ]]
    then
        ln="${ln:5}"
        echo "$ln" # Beware, these double quotes are important to prevent malicious injection
        # For example, optionally you can log this command
        log COMMAND "echo $ln"
    # Or you could even check an array of commands:
    else
        ok=false
        for cmd in "${commands[@]}"
        do
            if [[ "$cmd" == "$ln" ]]
            then
                ok=true
            fi
        done
        if $ok
        then
            $ln
        else
            log DENIED "$ln"
        fi
    fi
}

# Optionally show a friendly welcome-message with instructions since it is a custom shell
echo "$(timestamp) Welcome, $(whoami). Type 'help' for information."

# Optionally log the login
log LOGIN "$@"

# Optionally log the logout
trap 'log LOGOUT' EXIT

# Optionally check for '-c custom_command' arguments passed directly to the shell
# Then you can also use ssh user@host custom_command, which will execute /root/rbash.sh
if [[ "$1" == "-c" ]]
then
    shift
    ln="$*"
    trycmd
else
    while echo -n "> " && read ln
    do
        trycmd
    done
fi
All you have to do is set this executable as the login shell. For example, edit your /etc/passwd file and replace that user's current login shell /bin/bash with /root/rbash.sh.
This is just a simple example, but you can make it as advanced as you want; the idea is there. Be careful not to lock yourself out by changing the login shell of your own and only user. And always test weird symbols and commands to see whether it is actually secure.
You can test it with: su -s /root/rbash.sh.
Beware: make sure to match the whole command, and be careful with wildcards! It's better to exclude Bash symbols such as ;, &, &&, ||, $, and backticks, just to be sure.
Depending on the freedom you give the user, it won't get much safer than this. I've often found that I only needed a user with access to just a few relevant commands, and in that case this is really the better solution.
However, if you wish to give more freedom, a jail and permissions might be more appropriate. Mistakes are easily made, and only noticed when it's already too late.
You should acquire rssh, the restricted shell.
You can follow the restriction guides mentioned above; they're all rather self-explanatory and simple to follow. Understand the term "chroot jail", and how to effectively implement sshd/terminal configurations, and so on.
Since most of your users access your terminals via sshd, you should also probably look into sshd_config, the SSH daemon configuration file, to apply certain restrictions via SSH. Be careful, however. Understand properly what you try to implement, for the ramifications of incorrect configurations are probably rather dire.
GNU Rush may be the most flexible and secure way to accomplish this:
GNU Rush is a Restricted User Shell, designed for sites that provide limited remote access to their resources, such as svn or git repositories, scp, or the like. Using a sophisticated configuration file, GNU Rush gives you complete control over the command lines that users execute, as well as over the usage of system resources, such as virtual memory, CPU time, etc.
You might want to look at setting up a jail.
[Disclosure: I wrote sshdo which is described below]
If you want the login to be interactive then setting up a restricted shell is probably the right answer. But if there is an actual set of commands that you want to allow (and nothing else) and it's ok for these commands to be executed individually via ssh (e.g. ssh user#host cmd arg blah blah), then a generic command whitelisting control for ssh might be what you need. This is useful when the commands are scripted somehow at the client end and doesn't require the user to actually type in the ssh command.
There's a program called sshdo for doing this. It controls which commands may be executed via incoming ssh connections. It's available for download at:
http://raf.org/sshdo/ (read manual pages here)
https://github.com/raforg/sshdo/
It has a training mode to allow all commands that are attempted, and a --learn option to produce the configuration needed to allow learned commands permanently. Then training mode can be turned off and any other commands will not be executed.
It also has an --unlearn option to stop allowing commands that are no longer in use so as to maintain strict least privilege as requirements change over time.
It is very fussy about what it allows. It won't allow a command with arbitrary arguments; only complete shell commands can be allowed.
But it does support simple patterns to represent similar commands that vary only in the digits that appear on the command line (e.g. sequence numbers or date/time stamps).
It's like a firewall or whitelisting control for ssh commands.
And it supports different commands being allowed for different users.
Another way of looking at this is using POSIX ACLs. This needs to be supported by your file system, but then you have fine-grained control over individual commands in Linux, much like the control you have on Windows (just without the nicer UI).
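A rough sketch of that idea, with an invented helper path and the employee1 user from the question:

# Remove execute permission for "other", then grant it back
# to a single user via an ACL entry.
chmod o-rx /usr/local/bin/showlog
setfacl -m u:employee1:rx /usr/local/bin/showlog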
Another thing to look into is PolicyKit.
You'll have to do quite a bit of googling to get everything working as this is definitely not a strength of Linux at the moment.