Simple commands not found in bash (#!/usr/bin/expect) - Linux

I've recently started using bash to automate a Windows rescue disk with chntpw. I'm trying to use the expect command to listen for certain chntpw dialog questions and supply the right answers without any user input. For some reason, after changing the script's shebang from #!/bin/bash to #!/usr/bin/expect, many standard terminal commands are no longer understood.
I'm running the script by typing this into terminal:
user@kali:~/Desktop/projects/breezee$ bash breezee1.sh
The terminal output is as follows:
BREEZEE 1.0
Welcome to BREEZEE
breezee1.sh: line 9: fdisk: command not found
[Select] /dev/:
Here is my code:
#!/usr/bin/expect
clear
echo "BREEZEE 1.0"
echo "Welcome to BREEZEE"
fdisk -l
#list partitions
echo -n "[Select] /dev/:"
#ask user to choose primary windows partition
read sda
clear
echo /dev/$sda selected
umount /dev/$sda
sudo ntfsfix /dev/$sda
sudo mount -t ntfs-3g -o remove_hiberfile /dev/$sda /mnt/
cd /mnt/Windows/System32/config
clear
chntpw -l SAM #list accounts on windows partition
chntpw -u Administrator SAM
#now supply chntpw with values to run the password clear (this answers the prompts)
expect '> '
send '1\r'
expect '> '
send '2\r'
expect '> '
send '3\r'
expect ': '
send 'y\r'
expect '> '
send 'q\r'
expect ': '
send 'y\r'
clear
echo "Operation Successfull!"
chntpw -l SAM #list accounts on windows partition
In short, I'm trying to use standard bash/terminal commands alongside the expect commands. I'm probably going about this all wrong, so please correct me as I've been troubleshooting this for about three days and haven't gotten far :(

When you specify the application that should run your script, you can only use the scripting language that application will understand.
Clearly, Expect is not bash, and does not understand bash commands.
I suggest you separate those two scripts: write the first part for #!/bin/bash and the second for Expect, then have the first script invoke the second one, which drives chntpw.
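For example, the bash half of that split might look roughly like this (an untested sketch based on the question's script; the breezee2.exp name is just a placeholder for the separate Expect script):
#!/bin/bash
# breezee1.sh - plain bash part: pick the Windows partition and mount it
clear
echo "BREEZEE 1.0"
echo "Welcome to BREEZEE"
fdisk -l
echo -n "[Select] /dev/:"
read sda
umount /dev/$sda
sudo ntfsfix /dev/$sda
sudo mount -t ntfs-3g -o remove_hiberfile /dev/$sda /mnt/
cd /mnt/Windows/System32/config || exit 1
# hand the interactive chntpw session over to the Expect script
expect ~/Desktop/projects/breezee/breezee2.exp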

Expect uses Tcl, not bash, so when you use #!/usr/bin/expect you have to write your script in Tcl.
For example, echo "BREEZEE 1.0" should be written as:
puts "BREEZEE 1.0"
And you should use exp_send instead of send.
From expect manual:
exp_send is an alias for send. If you are using Expectk or some other variant of Expect in the Tk environment, send is defined by Tk for an entirely different purpose. exp_send is provided for compatibility between environments. Similar aliases are provided for Expect's other send commands.
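For illustration, here is a minimal sketch of the chntpw part rewritten as a pure Expect/Tcl script (the prompt patterns and answers are copied from the question and have not been verified against chntpw):
#!/usr/bin/expect
# Note: external programs must be started with spawn (or exec), not as bare shell commands.
puts "BREEZEE 1.0"
spawn chntpw -u Administrator SAM
expect "> "
exp_send "1\r"
expect "> "
exp_send "q\r"
expect ": "
exp_send "y\r"
expect eof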

Related

Linux: Checking if a user has a shell or not

I'm writing a test script in Python where I use subprocess to run various terminal commands and check the result. One of the things I want to check is whether the user "games" has no shell. I don't want to log in as games (which I think is impossible anyway), but I do have the ability to run the command as root. Is there any single bash command I can use to check what shell another user has (or doesn't have)?
I'm able to use the command "cat /etc/shells" to check what shells are available; I wanted to use this to look up another user, but I'm not sure how to do it, or if it's even possible.
You may use "su", and check the return code:
root@shinwey:# su games
This account is currently not available.
root@pifa:/home/kalou/t# echo $?
1
or print the string "NO_SHELL":
root@shinwey:# su games 2>&1 > /dev/null || echo NO_SHELL
NO_SHELL
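If you need the check inside a script rather than interactively, a small sketch built on the same idea might look like this (run as root; the getent line is an alternative that simply reads the shell field from the passwd database):
#!/bin/bash
# Does "games" have a usable shell? su exits non-zero if the account is not available.
if su games -c true >/dev/null 2>&1; then
    echo "games has a shell"
else
    echo "NO_SHELL"
fi
# Alternative: print the 7th field of the passwd entry, which is the login shell.
getent passwd games | cut -d: -f7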

How can I ignore whitespaces inside an echo command when usual methods don't seem to work

Hi, I'm pretty new to Linux/bash in general and I'm having some trouble making a script for my coworker. The idea of this script is to help automate my coworker's iptables entries (I don't use iptables myself, so I have no idea how it works; I'm just following his instructions). The program is going to ask a few questions, form an entry and then add it to a file on a different host. After the file is written it will also run
"systemctl reload iptables.service" and "systemctl status iptables". I tested that pwd was at least working where I was planning to put these.
The code worked fine with a single word in place of the table_entry variable; I was able to write something to a file on my host computer as a different user.
The problem is that the table_entry variable is going to have whitespace in it, and the sudo su -c command treats the second word as the command to run (at least that's what I think is happening), so I get an error sudo: INPUT: command not found, the "INPUT" coming from my case statement.
I tried putting the table_entry variable in the forms "{$table_entry}", "$table_entry" and {$table_entry}, but they didn't work.
I removed some of the code to make it more readable (mostly case statements for variables and the prompts for host and username).
#!/bin/bash
echo -e "Which ports?
1. INPUT
2. OUTPUT
3. Forward"
read opt_ch
case $opt_ch in
1) chain="INPUT" ;;
2) chain="OUTPUT" ;;
3) chain="FORWARD" ;;
*) echo -e "Wrong Option Selected!!!"
esac
table_entry="-A $chain "#-s $ip_source -d $ip_dest
ssh -t user@host "sudo table_entry=$table_entry su -c 'echo $table_entry >> /home/user/y.txt'"
#^ this line will later also include the systemctl commands separated with ";" ^
I tested a few different methods for doing this script overall: heredoc (didn't get input to work very well), Ansible (didn't really seem like a great tool for this job), Python (can't install new modules in the environment). So this is the best solution I came up with given my limited skillset.
Edit: I also realise this is probably not the smartest way to do this script, but it's the only one I have gotten to work so far that can also ask the user for a password when running the su command. I'm not knowledgeable about transferring passwords safely in a Linux environment (or in general), so I like to let Linux handle the passwords for me.
This is a problem of nested quoting, which is really quite an annoying problem to solve.
In this case it seems like you could solve it with quotes inside the string; your example would become
ssh -t user@host "sudo table_entry='$table_entry' su -c 'echo \"$table_entry\" >> /home/user/y.txt'"
It seems to me the table_entry='$table_entry' part is redundant though, so this should work:
ssh -t user@host "sudo su -c 'echo \"$table_entry\" >> /home/user/y.txt'"
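To see why the quoting works, it can help to print the expanded string locally before sending it over ssh (the table_entry value below is made up purely for illustration):
table_entry='-A INPUT -s 10.0.0.1 -d 10.0.0.2'
# Show the exact command line the remote shell would receive:
echo "sudo su -c 'echo \"$table_entry\" >> /home/user/y.txt'"
# prints: sudo su -c 'echo "-A INPUT -s 10.0.0.1 -d 10.0.0.2" >> /home/user/y.txt'
The whole thing stays one single-quoted argument to su -c, and the escaped double quotes keep the whitespace-containing string together for echo on the remote side.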
Your comment (denoted with #) is getting concatenated with the table_entry string you're trying to form. Try adding a space like this:
table_entry="-A $chain " #-s $ip_source -d $ip_dest
Then table_entry gets assigned correctly. I was using KWrite to edit your bash script, and it does text highlighting that quickly showed me the problem.

Piping Text into a bash script

When my user logs in, I need to enter the following manually, so I am trying to create a script to do it for me:
. oraenv
The app asks me for input so I enter "M40" (same text every time)
Then I have to run a linux app to launch my work environment.
So how do I automatically enter M40 followed by the Enter key?
The oraenv script is prompting for a value for ORACLE_SID, so you can set that yourself in a .profile or elsewhere.
export ORACLE_SID=M40
It also has a flag you can set to make it non-interactive:
ORAENV_ASK=NO
Regarding piped input specifically, the script would have to be written to handle it, for example using read or commands such as cat without a filename. See Pipe input into a script for more details. However, this is not how the standard oraenv is coded (assuming that is the script you are using).
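For example, a couple of lines like these in the user's .profile (assuming a Bourne-style login shell) should make the call non-interactive:
# ~/.profile (sketch)
export ORACLE_SID=M40
export ORAENV_ASK=NO
. oraenv    # no longer prompts for ORACLE_SID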
I am not sure whether any of these approaches helps you.
echo M40 | . oraenv
This one uses echo pipe.
printf M40 | . oraenv
This one uses printf for the pipe. Using echo is different from using printf in some situations; however, I don't know the exact difference here.
. oraenv <<< M40
This one uses Here String (Sorry for using ABS as reference), a stripped-down form of Heredoc.
. oraenv < <(echo M40)
This one uses Process Substitution, you may see https://superuser.com/questions/1059781/what-exactly-is-in-bash-and-in-zsh for the difference between this one and the above one.
expect -c "spawn . oraenv; expect \"nput\"; send \"M40\r\n\"; interact"
This one uses expect to do the input automatically; it is more extensible in many situations. Note that you should change the expect \"nput\" part to match your actual prompt.

iLO3: Multiple SSH commands

Is there a way to run multiple commands in HP's Integrated Lights-Out 3 system via SSH? I can log in to iLO and run commands line by line, but I need to create a small shell script that connects to iLO and runs several commands one after another.
This is the line I use, to get information about the iLO-version:
/usr/bin/ssh -i dsa_key administrator@<iLO-IP> "version"
Now, how can I do something like this?
/usr/bin/ssh -i dsa_key administrator@<iLO-IP> "version" "show /map1 license" "start /system1"
This doesn't work, because iLO thinks it's all one command. But I need something to login into iLO, run these commands and then exit from iLO. It takes too much time to run them one after the other because every login into iLO-SSH takes ~5-6 seconds (5 commands = 5*5 seconds...).
I've also tried to separate the commands directly in iLO after a manual login, but there is no way to use multiple commands on one line. It seems a command can only be finished by pressing Return.
iLO-SSH Version is: SM-CLP Version 1.0
The following solutions did NOT work:
/usr/bin/ssh -i dsa_key administrator@<iLO-IP> "version; show /map1 license; start /system1"
/usr/bin/ssh -i dsa_key administrator@<iLO-IP> "version && show /map1 license && start /system1"
This Python module is for HP iLO management. Check it out:
http://pypi.python.org/pypi/python-hpilo/
Try putting your commands in a file (named theFile in this example):
version
show /map1 license
start /system1
Then:
ssh -i dsa_key administrator@iLO-IP < theFile
Semicolons and such won't work because you're using the iLO shell on the other side, not a normal *nix shell. So above I redirect the file, with newlines intact, as if you were typing all that into the session by hand. I hope it works.
You are trying to treat iLO like it's a normal shell, but it's really HP's dopey interface.
That being said, the easiest way is to put all the commands in a file and then pipe it to ssh (sending all of the newline characters):
echo -e "version\nshow /map1 license\nstart /system1" | /usr/bin/ssh -i dsa_key administrator#<iLO-IP>
That's a messy workaround, but maybe you'd fancy using expect? Your script in expect would look something like this:
# Make an ssh connection
spawn ssh -i dsa_key administrator@<iLO-IP>
# Wait for command prompt to appear
expect "$"
# Send your first command
send "version\r"
# Wait for command prompt to appear
expect "$"
# Send your second command
send "show /map1 license\r"
# Etc...
On the bright side, it's guaranteed to work. On the darker side, it's a pretty clumsy workaround, very prone to breaking if something doesn't go the way it should (for example, if the command prompt character appears in the version output, or something like that).
I'm in the same situation and want to avoid running a lot of plink commands. I've seen you can pass a file with the -m option, but apparently it executes just one command at a time :-(
plink -ssh Administrator@AddressIP -pw password -m test.txt
What's the purpose of the file? Is there a special format for it?
My current text file looks like below:
set /map1/oemhp_dircfg1 oemhp_usercntxt1=CN=TEST
set /map1/oemhp_dircfg1 oemhp_usercntxt2=CN=TEST2
...
Is there a way to execute these two commands?
I had similar issues and ended up using the "RIBCL over HTTPS" interface to the iLO. This has advantages in that it is much more responsive than logging in/out over ssh.
Using curl or another command-line HTTP client try:
USERNAME=<YOUR_ILO_USERNAME>
PASSWORD=<YOUR_ILO_PASSWORD>
ILO_URL=https://<YOUR_ILO_IP>/ribcl
curl -k -X POST -d "<RIBCL VERSION=\"2.0\">
<LOGIN USER_LOGIN=\"${USERNAME}\" PASSWORD=\"${PASSWORD}\">
<RIB_INFO MODE=\"READ\">
<GET_FW_VERSION/>
<GET_ALL_LICENSES/>
</RIB_INFO>
<SERVER_INFO MODE=\"write\">
<SET_HOST_POWER HOST_POWER=\"Yes\">
</SERVER_INFO>
</LOGIN>
</RIBCL>" ${ILO_URL}
The formatting isn't exactly the same, but if you have the ability to access the iLO via HTTPS instead of only ssh, this may give you some flexibility.
More details on the various RIBCL commands and options may be found at HP iLO 3 Scripting Guide (PDF).

How to restrict SSH users to a predefined set of commands after login?

This is an idea for security. Our employees shall have access to some commands on a Linux server, but not all of them. They should, for example, be able to view a log file (less logfile) or start certain commands (shutdown.sh / run.sh).
Background information:
All employees access the server with the same user name. Our product runs with "normal" user permissions; no "installation" is needed. Just unzip it in your user dir and run it. We manage several servers where our application is "installed". On every machine there is a user johndoe. Our employees sometimes need command-line access to the application to check log files or to restart the application by hand. Only some people shall have full command-line access.
We are using ppk authentication on the server.
It would be great if employee1 can only access the logfile and employee2 can also do X etc...
Solution:
As a solution I'll use the command option as stated in the accepted answer. I'll make my own little shell script that will be the only file that can be executed for some employees. The script will offer several commands that can be executed, but no others. I'll use the following options in authorized_keys, as stated here:
command="/bin/myscript.sh",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty
ssh-dss AAAAB3....o9M9qz4xqGCqGXoJw= user@host
This is enough security for us. Thanks, community!
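For reference, a sketch of what such a /bin/myscript.sh could look like; the action names and paths are only examples. With command= in force, sshd puts whatever command the client asked for into SSH_ORIGINAL_COMMAND, so the script can dispatch on it:
#!/bin/bash
# /bin/myscript.sh - forced command from authorized_keys (action names and paths are examples)
case "$SSH_ORIGINAL_COMMAND" in
    showlog)
        tail -n 200 /home/johndoe/app/logfile
        ;;
    restart)
        /home/johndoe/app/shutdown.sh && /home/johndoe/app/run.sh
        ;;
    *)
        echo "Command not allowed: $SSH_ORIGINAL_COMMAND" >&2
        exit 1
        ;;
esac
An employee would then run something like ssh johndoe@server showlog from their machine.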
You can also restrict keys to permissible commands (in the authorized_keys file).
I.e. the user would not log in via ssh and then have a restricted set of commands but rather would only be allowed to execute those commands via ssh (e.g. "ssh somehost bin/showlogfile")
ssh follows the rsh tradition by using the user's shell program from the password file to execute commands.
This means that we can solve this without involving ssh configuration in any way.
If you don't want the user to be able to have shell access, then simply replace that user's shell with a script. If you look in /etc/passwd you will see that there is a field which assigns a shell command interpreter to each user. The script is used as the shell both for their interactive login ssh user@host as well as for commands ssh user@host command arg ....
Here is an example. I created a user foo whose shell is a script. The script prints the message my arguments are: followed by its arguments (each on a separate line and in angle brackets) and terminates. In the log in case, there are no arguments. Here is what happens:
webserver:~# ssh foo@localhost
foo@localhost's password:
Linux webserver [ snip ]
[ snip ]
my arguments are:
Connection to localhost closed.
If the user tries to run a command, it looks like this:
webserver:~# ssh foo@localhost cat /etc/passwd
foo@localhost's password:
my arguments are:
<-c>
<cat /etc/passwd>
Our "shell" receives a -c style invocation, with the entire command as one argument, just the same way that /bin/sh would receive it.
So as you can see, what we can do now is develop the script further so that it recognizes the case when it has been invoked with a -c argument, and then parses the string (say by pattern matching). Those strings which are allowed can be passed to the real shell by recursively invoking /bin/bash -c <string>. The reject case can print an error message and terminate (including the case when -c is missing).
You have to be careful how you write this. I recommend writing only positive matches which allow only very specific things, and disallow everything else.
Note: if you are root, you can still log into this account by overriding the shell in the su command, like this su -s /bin/bash foo. (Substitute shell of choice.) Non-root cannot do this.
Here is an example script: it restricts the user to using ssh only for git access to repositories under /git.
#!/bin/sh

if [ $# -ne 2 ] || [ "$1" != "-c" ] ; then
  printf "interactive login not permitted\n"
  exit 1
fi

set -- $2

if [ $# != 2 ] ; then
  printf "wrong number of arguments\n"
  exit 1
fi

case "$1" in
  ( git-upload-pack | git-receive-pack )
    ;; # continue execution
  ( * )
    printf "command not allowed\n"
    exit 1
    ;;
esac

# Canonicalize the path name: we don't want to escape out of
# git via ../ path components.
gitpath=$(readlink -f "$2") # GNU Coreutils specific

case "$gitpath" in
  ( /git/* )
    ;; # continue execution
  ( * )
    printf "access denied outside of /git\n"
    exit 1
    ;;
esac

if ! [ -e "$gitpath" ] ; then
  printf "that git repo doesn't exist\n"
  exit 1
fi

"$1" "$gitpath"
Of course, we are trusting that these Git programs git-upload-pack and git-receive-pack don't have holes or escape hatches that will give users access to the system.
That is inherent in this kind of restriction scheme. The user is authenticated to execute code in a certain security domain, and we are kludging in a restriction to limit that domain to a subdomain. For instance if you allow a user to run the vim command on a specific file to edit it, the user can just get a shell with :!sh[Enter].
What you are looking for is called a restricted shell. Bash provides such a mode (rbash, or bash -r), in which users cannot change directories, cannot modify PATH, and cannot run commands whose names contain a slash, so they can only run what you put on their PATH; this might be good enough for you.
I've found this thread to be very illustrative, if a bit dated.
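For what it's worth, a rough sketch of setting that up with rbash (the user name and whitelisted commands are only examples, and the usual caveat applies: the user must not be able to modify their own startup files or PATH):
# as root (sketch; assumes your distribution provides /bin/rbash)
useradd -m -s /bin/rbash employee1
mkdir /home/employee1/bin
ln -s /usr/bin/less /home/employee1/bin/less      # whitelist individual commands
ln -s /usr/bin/whoami /home/employee1/bin/whoami
# point PATH at the whitelist directory; rbash forbids changing it after startup
echo 'PATH=$HOME/bin' > /home/employee1/.bash_profile
chown root:root /home/employee1/.bash_profile
chmod 644 /home/employee1/.bash_profile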
Why don't you write your own login-shell? It would be quite simple to use Bash for this, but you can use any language.
Example in Bash
Use your favorite editor to create the file /root/rbash.sh (this can be any name or path, but it should be chown root:root and chmod 755 so users can execute it but not modify it):
#!/bin/bash
commands=("man" "pwd" "ls" "whoami")
timestamp(){ date +'%Y-%m-%d %H:%M:%S'; }
log(){ echo -e "$(timestamp)\t$1\t$(whoami)\t$2" >> /var/log/rbash.log; }
trycmd()
{
    ln="$*"
    # Provide an option to exit the shell
    if [[ "$ln" == "exit" ]] || [[ "$ln" == "q" ]]
    then
        exit
    # You can do exact string matching for some alias:
    elif [[ "$ln" == "help" ]]
    then
        echo "Type exit or q to quit."
        echo "Commands you can use:"
        echo " help"
        echo " echo"
        echo "${commands[@]}" | tr ' ' '\n' | awk '{print " " $0}'
    # You can use custom regular expression matching:
    elif [[ "$ln" =~ ^echo\ .*$ ]]
    then
        ln="${ln:5}"
        echo "$ln" # Beware, these double quotes are important to prevent malicious injection
        # For example, optionally you can log this command
        log COMMAND "echo $ln"
    # Or you could even check an array of commands:
    else
        ok=false
        for cmd in "${commands[@]}"
        do
            if [[ "$cmd" == "$ln" ]]
            then
                ok=true
            fi
        done
        if $ok
        then
            $ln
        else
            log DENIED "$ln"
        fi
    fi
}
# Optionally show a friendly welcome message with instructions since it is a custom shell
echo "$(timestamp) Welcome, $(whoami). Type 'help' for information."
# Optionally log the login
log LOGIN "$@"
# Optionally log the logout
trap "trap=\"\";log LOGOUT;exit" EXIT
# Optionally check for '-c custom_command' arguments passed directly to the shell
# Then you can also use ssh user@host custom_command, which will execute /root/rbash.sh
if [[ "$1" == "-c" ]]
then
    shift
    trycmd "$@"
else
    while echo -n "> " && read -r ln
    do
        trycmd "$ln"
    done
fi
All you have to do is set this executable as the user's login shell. For example, edit your /etc/passwd file and replace that user's current login shell, /bin/bash, with /root/rbash.sh.
This is just a simple example, but you can make it as advanced as you want; the idea is there. Be careful not to lock yourself out by changing the login shell of your own (and only) user. And always test weird symbols and commands to see if it is actually secure.
You can test it with: su -s /root/rbash.sh.
Beware, make sure to match the whole command, and be careful with wildcards! Better exclude Bash-symbols such as ;, &, &&, ||, $, and backticks to be sure.
Depending on the freedom you give the user, it won't get much safer than this. I've found that often I only needed to make a user that has access to only a few relevant commands, and in that case this is really the better solution.
However, if you wish to give more freedom, a jail and permissions might be more appropriate. Mistakes are easily made, and often only noticed when it's already too late.
You should acquire `rssh', the restricted shell.
You can follow the restriction guides mentioned above; they're all rather self-explanatory and simple to follow. Understand the term `chroot jail' and how to effectively implement sshd/terminal configurations, and so on.
Since most of your users access your terminals via sshd, you should also probably look into sshd_config, the SSH daemon configuration file, to apply certain restrictions via SSH. Be careful, however: understand properly what you are trying to implement, as the ramifications of incorrect configurations are probably rather dire.
GNU Rush may be the most flexible and secure way to accomplish this:
GNU Rush is a Restricted User Shell, designed for sites that provide limited remote access to their resources, such as svn or git repositories, scp, or the like. Using a sophisticated configuration file, GNU Rush gives you complete control over the command lines that users execute, as well as over the usage of system resources, such as virtual memory, CPU time, etc.
You might want to look at setting up a jail.
[Disclosure: I wrote sshdo which is described below]
If you want the login to be interactive then setting up a restricted shell is probably the right answer. But if there is an actual set of commands that you want to allow (and nothing else) and it's ok for these commands to be executed individually via ssh (e.g. ssh user@host cmd arg blah blah), then a generic command whitelisting control for ssh might be what you need. This is useful when the commands are scripted somehow at the client end and doesn't require the user to actually type in the ssh command.
There's a program called sshdo for doing this. It controls which commands may be executed via incoming ssh connections. It's available for download at:
http://raf.org/sshdo/ (read manual pages here)
https://github.com/raforg/sshdo/
It has a training mode to allow all commands that are attempted, and a --learn option to produce the configuration needed to allow learned commands permanently. Then training mode can be turned off and any other commands will not be executed.
It also has an --unlearn option to stop allowing commands that are no longer in use so as to maintain strict least privilege as requirements change over time.
It is very fussy about what it allows. It won't allow a command with any arguments. Only complete shell commands can be allowed.
But it does support simple patterns to represent similar commands that vary only in the digits that appear on the command line (e.g. sequence numbers or date/time stamps).
It's like a firewall or whitelisting control for ssh commands.
And it supports different commands being allowed for different users.
Another way of looking at this is using POSIX ACLs. They need to be supported by your file system, but then you can have fine-grained control over which commands each user may run in Linux, much the same way you have that control on Windows (just without the nicer UI).
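For example, a named-user ACL entry can take one account's access to a particular binary away while leaving everyone else untouched (a sketch; it assumes the filesystem is mounted with ACL support, the acl tools are installed, and the user and path are made up):
# deny the account "restricted1" any access to a specific command
setfacl -m u:restricted1:--- /usr/local/bin/dangerous-tool
# verify the resulting entries
getfacl /usr/local/bin/dangerous-tool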
Another thing to look into is PolicyKit.
You'll have to do quite a bit of googling to get everything working as this is definitely not a strength of Linux at the moment.
