Running a shell script on instance startup - Linux

I'm trying to run a bash script when my EC2 instances start up. All I want to do is start up GlassFish when the server starts. The command I'm trying to run is:
sudo /glassfish3/bin/asadmin start-domain
It works when I enter it manually.
I have tried adding this command in a couple places with no luck:
at the end of /etc/rc.local
at the end of /etc/rc.d/rc.local
created my own script in /etc/init.d/
I have given every script 777 permissions.
Anyone have any ideas on what I'm doing wrong?

Unless oddly configured, sudo wants authentication when run. It is normally meant to be run interactively.
Assuming that the script /glassfish3/bin/asadmin is owned by root, you can set its file permissions to 6755. This does what you probably meant sudo to do. Of course, it can also be dangerous and may be a security risk.
(@jcomeau_ictx is right, incidentally. You should check logs as he suggests.)
Update for the benefit of archival: The above answer fortunately seems to have solved the OP's immediate problem, so we'll leave it at that. However, since this answer will remain archived and others may look it up later, I should add more to it.
One can change the file permissions of any executable to 6755, but such is not always a good practice. The effect of such permissions is (a) to let anyone run the executable with (b) the full privileges of the executable's owner. Sometimes, this is exactly what you want, but see: in the OP's case, /glassfish3/bin/asadmin with such permissions can now be called by anybody, with any arguments, with full root privileges. If that is not what you want, then you must take some additional care.
Several ways of taking additional care are possible. One is as follows.
Leave the executable with file permissions 755.
Write and compile a small wrapper: a program that uses execv() from unistd.h to launch the executable (a minimal sketch appears after this list).
If practicable, do not let the wrapper take any arguments; otherwise, let its arguments be as restricted and inflexible as they can be. Let the wrapper strictly control the arguments passed to the executable.
Let the wrapper be owned by root, but use chown to assign it a suitable group whose membership includes no users. You may prefer to create a new group for this purpose but, if you scan the /etc/group file on your system, you will probably find an existing group that suits. For reference, you can list commands already belonging to special-purpose groups on your system with ls -l /bin /usr/bin | grep -vE '^([^[:space:]]+[[:space:]]+){2}(root[[:space:]]+){2}' or the like.
Give the wrapper file permissions 6754, thus making it nonexecutable except to the group in question.
Admit the calling script to the group, and give the calling script file permissions 2755.
If the calling script already belongs to a group, you can probably just use the same group throughout.
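For concreteness, here is a minimal sketch of such a wrapper in C. The path /glassfish3/bin/asadmin is taken from the question; the file name glassfish_start.c and the fixed start-domain argument are illustrative assumptions, not part of any standard layout.
#include <stdio.h>
#include <unistd.h>

/* glassfish_start.c - sketch of the wrapper described above. */
int main(void)
{
    /* Fixed argument vector: callers cannot inject arguments of their own. */
    char *const args[] = { "/glassfish3/bin/asadmin", "start-domain", NULL };

    /* asadmin is typically a shell script; make the real UID match the
       effective UID so the shell it starts does not drop privileges. */
    if (setuid(geteuid()) != 0) {
        perror("setuid");
        return 1;
    }

    execv(args[0], args);
    perror("execv");   /* reached only if execv() fails */
    return 1;
}
Compile it (for example, gcc -o glassfish_start glassfish_start.c), then apply the ownership, group, and 6754 permissions described in the steps above.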
Several variations of the technique are possible, and it is unlikely that you will use exactly the one listed above, but if you read the manpage and/or info entry on the chown command and learn the details of file permissions, and if you experiment a little, you should be able to craft a solution that works for you without posing a security risk.

Most probably it's a JAVA_HOME issue; try using sudo -i. Here is my working init script:
#!/bin/bash
# description: Glassfish Start Stop Restart
# processname: glassfish
# chkconfig: - 95 80
DOMAIN=domain555
GF_HOME=/opt/glassfish3
DOMAIN_DIR=/home/glassfish/domains
RUN_AS=glassfish
CMD_START="$GF_HOME/bin/asadmin start-domain --domaindir $DOMAIN_DIR"
CMD_STOP="$GF_HOME/bin/asadmin stop-domain --domaindir $DOMAIN_DIR"
function start() {
  sudo -u $RUN_AS -i $CMD_START $DOMAIN
}
function stop() {
  sudo -u $RUN_AS -i $CMD_STOP $DOMAIN
}
case $1 in
  start)
    start
    ;;
  stop)
    stop
    ;;
  restart)
    stop
    start
    ;;
esac
exit 0
JAVA_HOME and PATH should be set in the user's .bashrc or .bash_profile.

Related

/bin/sh privilege escalation code using a fake ls - why does this work?

I recently came across this snippet on the following site:
https://www.linuxjournal.com/content/writing-secure-shell-scripts
Here's the script:
#!/bin/sh
if [ "$USER" = "root" ] ; then
/bin/cp /bin/sh /tmp/.secretshell
/bin/chown root /tmp/.secretshell
/bin/chmod 4666 root /tmp/.secretshell
fi
exec /bin/ls $*
Let's assume that the person who runs this code has low-level access to the system (i.e. they can write to /tmp/), and that the system is not 'hardened'.
In the link above, the author of the code says that, "This simple little script has created a shell that always grants its user root access to the Linux system."
The idea is that the attacker would write the script above, name it ls, and drop it in the /tmp/ directory on the system. Any user running ls (rather than /bin/ls) in /tmp/ will therefore inadvertently run this script. If the user running ls happens to be root, he/she will trigger the (malicious) code in the enclosing if/fi block. To conceal that anything harmful has happened, the directory listing that the user wants will still execute as expected due to the final exec /bin/ls $* line.
What I don't quite understand is what the final line of the if/fi block is doing. This is how I interpret the first two lines of the if/fi block:
In the line /bin/cp /bin/sh /tmp/.secretshell, the script copies the /bin/sh binary to /tmp/, renaming it to .secretshell, a hidden file. OK fine.
In the line /bin/chown root /tmp/.secretshell, the script changes the owner of .secretshell to root. OK fine.
What I don't quite understand is the line /bin/chmod 4666 root /tmp/.secretshell. As far as I know, I think this line is meant to flip the setuid bit for .secretshell, so that every time .secretshell is run, it is run as its owner (now root). This would (I suppose) give anyone running .secretshell the ability to run sh as root. But there are two things here which seem problematic:
1) How can root be inserted as the second argument to /bin/chmod, when chmod is expecting a directory or file name after the permissions argument?
2) Doesn't the *666 part of the permissions argument make .secretshell non-executable by converting its permissions mask to -rwSrw-rw-? If the intent is to execute .secretshell, how can this be desirable?
Thanks for your help!
The article contains several mistakes and fundamental misunderstandings about shell scripting:
You're right about the extra root; this is probably a copy-paste error.
You're right about the lack of executable permissions. The author did not test their own code.
The whole approach fails on non-Debian-based systems such as CentOS or macOS, where sh is bash, because bash drops setuid privileges (it resets the effective UID to the real UID unless started with -p).
The author claims that ls -l $name where name='. ; /bin/rm -Rf /' will execute the rm. This is false.
The author further appears to claim that ls -l "$name" where name='. `/bin/rm -Rf /`' will execute the command. This is also false.
I would suggest taking the whole article with a grain of salt.

sudo inside of a script with a command that needs input (bash)

I want to make a script that changes screen brightness and, among other things, it needs this command:
echo "$number" | sudo tee /sys/class/backlight/intel_backlight/brightness
The script asks me for my root password, which I think is unnecessary since it only changes the brightness. I tried adding sudo -S and echoing the password, but not only did I confuse myself about what input goes where, the script also writes out the [sudo] password for user: prompt, which is annoying. How do I make the script runnable by everyone (both from inside the script and from outside; I am doing this as an exercise to learn more)?
You could configure your system so that sudo does not ask for any password (by putting ALL=NOPASSWD: at the appropriate place in your /etc/sudoers file), but I don't recommend doing this, since it is a security hole.
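If you do go down the sudoers route despite that warning, a rule limited to the single command involved is less dangerous than a blanket ALL=NOPASSWD:. The line below is only an illustration: the username and the path to tee are assumptions to adapt, and /etc/sudoers should always be edited with visudo.
yourusername ALL=(root) NOPASSWD: /usr/bin/tee /sys/class/backlight/intel_backlight/brightness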
But what you really want is to make a setuid executable (BTW, /usr/bin/sudo is itself a setuid executable). It is tricky to understand, and you can make huge mistakes (opening large security holes). Also read carefully execve(2) and Advanced Linux Programming. Spend several hours understanding the setuid mechanism (if you misunderstand it, you'll have security issues). See also credentials(7) & capabilities(7).
For security reasons, shell scripts cannot be made setuid. So you can code a tiny wrapper in C which runs the script through execve after appropriate calls (e.g. to setresuid(2) and friends), and compile that C program as a setuid executable (so chown root and chmod u+s your executable). In your particular case you don't even need a C program that starts a shell command: you should just fopen the /sys/class/backlight/intel_backlight/brightness pseudo-file, fprintf into it, and fclose it (a sketch follows).
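As a rough sketch of that last idea (the sysfs path comes from the question; the program name and the choice to pass the value as a single argument are assumptions):
#include <stdio.h>
#include <stdlib.h>

/* setbrightness.c - tiny setuid helper sketched above: it writes its
   single numeric argument into the backlight pseudo-file. */
int main(int argc, char *argv[])
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <brightness>\n", argv[0]);
        return 1;
    }

    /* Minimal validation: accept only a non-negative integer.
       You should also compare it against .../max_brightness. */
    char *end;
    long value = strtol(argv[1], &end, 10);
    if (end == argv[1] || *end != '\0' || value < 0) {
        fprintf(stderr, "invalid brightness value\n");
        return 1;
    }

    FILE *f = fopen("/sys/class/backlight/intel_backlight/brightness", "w");
    if (f == NULL) {
        perror("fopen");
        return 1;
    }
    fprintf(f, "%ld\n", value);
    return fclose(f) == 0 ? 0 : 1;
}
Compile it, then chown root and chmod u+s it as described above. Because it opens one fixed file and never starts a shell, the attack surface stays small.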
Actually, I don't believe that doing all that is necessary, because you should be able to configure your system to let your screen brightness be set by non root. I have no idea how to do that precisely (but that is a different question).

Running scripts from Perl CGI programs with root permissions

I have a Perl CGI that is supposed to allow a user to select some files from a filesystem and then send them via rsync to a remote server. All of the HTML is generated by the Perl script, and I am using query strings and temp files to give the illusion of a stateful transaction. The rsync part is a separate shell script that is called with the filename as an argument (the script also sends emails and does a bunch of other stuff, which is why I haven't just moved it into the Perl script). I wanted to use sudo without a password, so I set up sudoers to allow the apache user to run the script without a password and disabled requiretty, but I still get errors in the log about there being no tty. I then tried using su -c scriptname, but that is failing as well.
TL;DR: Is it awful practice to use a Perl CGI script to call a Bash script via sudo, and how are you handling privilege escalation for Perl CGI scripts? Perl 5.10 on a Linux 2.6 kernel.
Relevant Code: (LFILE is a file containing the indexes for the array of all files in the filesystem)
elsif ( $ENV{QUERY_STRING} =~ 'yes' ) {
  my @CMDLINE = qw(/bin/su -c /bin/scriptname.sh);
  print $q->start_html;
  open('TFILE', '<', "/tmp/LFILE");
  print '<ul>';
  foreach (<TFILE>) {
    $FILES[$_] =~ s/\/.*\///g;
    print "Running command @CMDLINE $FILES[$_]";
    print $q->h1("Sending File: $FILES[$_]"); `@CMDLINE $FILES[$_]` or print $q->h1("Problem: $?");
However you end up doing this, you have to be careful. You want to minimise the chance of a privilege escalation attack. Bearing that in mind….
sudo is not the only way that a user (or process) can execute code with increased privileges. For this sort of application, I would make use of a program with the setuid bit set.
Write a program which can be run by an appropriately-privileged user (root, in this case, although see the warning below) to carry out the actions which require that privilege. (This may be the script you already have, and refer to in the question.) Make this program as simple as possible, and spend some time making sure it is well-written and appropriately secure.
Set the "setuid bit" on the program by doing something like:
chmod a+x,u+s transfer_file
This means that anyone can execute the program, but it runs with the privileges of the program's owner rather than those of the user who invoked it.
Call the (privileged) transfer program from the existing (non-privileged) CGI script.
Now, in order to keep required privileges as low as possible, I would strongly avoid carrying out the transfer as root. Instead, create a separate user who has the necessary privileges to do the file transfer, but no more, and make this user the owner of the setuid program. This way, even if the program is open to being exploited, the exploiter can use this user's privileges, not root's.
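To make the setuid step concrete, here is a sketch of such a helper in C; the script path /usr/local/bin/send_file.sh, the allowed file-name characters, and the name transfer_file.c are assumptions standing in for your real transfer script and validation rules.
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* transfer_file.c - sketch of the setuid helper: validate the single
   argument coming from the CGI script, then hand it to the real
   transfer script with the owner's privileges. */
int main(int argc, char *argv[])
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <filename>\n", argv[0]);
        return 1;
    }

    /* Allow only plain file names: no slashes, no shell metacharacters,
       no leading dot. Tighten this to whatever your data looks like. */
    const char *ok = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
                     "abcdefghijklmnopqrstuvwxyz"
                     "0123456789._-";
    if (argv[1][0] == '\0' || argv[1][0] == '.' ||
        strspn(argv[1], ok) != strlen(argv[1])) {
        fprintf(stderr, "rejected file name\n");
        return 1;
    }

    /* The transfer script runs under a shell, so make the real IDs match
       the effective IDs (the dedicated transfer user, not root). */
    if (setregid(getegid(), getegid()) != 0 ||
        setreuid(geteuid(), geteuid()) != 0) {
        perror("setregid/setreuid");
        return 1;
    }

    char *const cmd[] = { "/usr/local/bin/send_file.sh", argv[1], NULL };
    execv(cmd[0], cmd);
    perror("execv");
    return 1;
}
Then, as above, chown it to the dedicated transfer user, chmod a+x,u+s it, and have the Perl CGI call it with a single file name instead of building a shell command string.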
There are some important "gotchas" in setting up something like this. If you have trouble, ask again on this site.

I want tips on filtering the rm command using a bash script

Some weeks ago, a senior team member unexpectedly removed an important Oracle database file (.dbf). Fortunately, we could restore the system from backup files that had been saved a few days earlier.
After seeing that situation, I decided to implement a solution that requires at least a double confirmation when typing the rm command at the prompt (more checking than rm -i).
Even though we alias rm to rm -i by default, super-speedy typists still make mistakes like that one, myself included.
At first, I replaced (by using an alias) the basic rm command with a bash script that prints warnings and asks for confirmation several times if the targets are related to the Oracle database paths or files.
Simply speaking, the script operates as a filter before running rm. If the target is not related to Oracle, rm operates as normal.
While implementing it, I found that most features worked as I expected in the interactive prompt environment, except for one concern:
What if rm is called from within other scripts (provided by Oracle, by other vendors modifying the Oracle paths, by installers, etc.) or from programs (via a system call)?
How can I distinguish that situation?
If such scripts hit my modified rm, their execution would not go ahead any more.
Do you have more sophisticated methods?
I believe most readers can understand my rough explanation.
If the above isn't clear, let me know and I will elaborate.
We read in man bash:
Aliases are not expanded when the shell is not interactive, unless the
expand_aliases shell option is set using shopt.
Then if you use alias to make rm invoke your shell script, other scripts won't use it by default. If it's what you want, then you're already safe.
The problem is if you want your version of rm to be invoked by scripts and do something smart when that happens. An alias is not enough for the former; even putting your rm somewhere under $PATH is not enough for programs that explicitly call /bin/rm. And for programs that aren't shell scripts, the unlink system call is much more likely to be used than something like system("rm ...").
I think that for the whole "safe rm" thing to be useful, it should avoid prompts even when invoked interactively. Every user will develop the habit of saying "yes" to it, and there is no known way around that. What might work is something that moves files to a recycle bin instead of deleting them, making the damage easy to undo (as I seem to recall, there are ready-to-use solutions for this).
The answer is in the alias manpage:
Note aliases are not expanded by default in non-interactive
shell, and it can be enabled by setting the expand_aliases shell
option using shopt.
Check it yourself with man alias ;)
Anyway, I would do it the same way you've chosen.
To distinguish the situation: you can create an environment variable, say APPL, which will be set with export APPL="DATABASE". In your customized rm script, perform the double checks only if APPL is DATABASE (which indicates a database-related script), and not otherwise, which means the rm call came from some other script.
If you're using bash, you can export your shell function, which will make it available in scripts, too.
#!/usr/bin/env bash
# Define a replacement for `rm` and export it.
rm() { echo "PSYCH."; }; export -f rm
Shell functions take precedence over builtins and external utilities, so by using just rm even scripts will invoke the function - unless they explicitly bypass the function by invoking /bin/rm ... or command rm ....
Place the above (with your actual implementation of rm()) either in each user's ~/.bashrc file or in the system-wide bash profile - sadly, its location is not standardized (e.g., Ubuntu: /etc/bash.bashrc; Fedora: /etc/bashrc).

Automatically invoking gksudo like UAC

This is about me being stressed by playing the game "type a command and remember to prepend sudo or your fingers will get slapped".
I am wondering if it is somehow possible to configure my Linux system or shell such that when I forget to type e.g. "sudo apt-get install emacs", instead of just being told that I did something wrong, gksudo would get launched, allowing me to enter my credentials and move on. Just like UAC does on Windows.
Googling hasn't helped me yet..
So is this possible? Did I miss something? Or am I asking for a square circle?
Edit, 2010 July 25th: Thanks everyone for your interest. Unfortunately, Daenyth's and bmargulies' answers and explanations are what I anticipated/feared, since it was impossible for me to google up a solution prior to submitting this question. I hope that some nice person will someday provide an effective solution for this.
BR,
Christian
Linux doesn't allow for this. Unlike Windows, where any program can launch a dialog box, and UAC is in the kernel, Linux programs aren't necessarily GUI-capable, and sudo is not, in this sense, in the kernel. A program cannot make a call to elevate privilege (unless it was launched with privilege to begin with and intentionally setuid'd down). sudo is a separate executable with setuid privilege, which checks for permission. If it likes what it sees, it forks the shell to execute the command line. This can't be turned inside out.
As suggested in other posts, you may be able to come up with some 'shell game' to arrange to run sudo for you for some enumerated list of commands, but that's all you are going to get.
You can do what you want with a preexec hook function, similar to the command-not-found package.
There's no way to do this given the current Linux software stack. Additionally, MS has a patent on this behavior -- present a user interface identifying an account having a right to permit a task in response to the task being prohibited based on a user's current account not having that right.
I don't think this really works in a general way (automatically deciding which application needs admin rights). However you could make aliases like this for every application:
alias apt-get='gksudo apt-get'
If you now enter apt-get install firefox, GNOME asks for the admin password. You can store the commands in ~/.bashrc
You could use a shell script like the following:
#!/bin/bash
"$@"
if [ $? -ne 0 ]; then
  sudo "$@"   # or: gksudo "$@"
fi
This will run a command given in the arguments with a sudo prefix if the command came back with a non-zero return code (i.e. if it failed).
Use it as in "SCRIPT_NAME apt-get install emacs" for example. You may save it somewhere in your $PATH and set it as an alias like this (if you saved it as do_sudo):
alias apt-get='do_sudo apt-get'
Edit: That does not work for programs like synaptic, which do run for non-root users but give them fewer privileges. However, if the application fails when invoked without root privileges (as apt-get does), this works fine.
In the case where you want to always run a command as root but might already be root, you can solve this by wrapping a little bash script around it:
#!/bin/bash
if [ $EUID = 0 ]; then
  "$@"
else
  gksudo "$@"
fi
If you call this something like alwaysroot.bash and place it in the right spot on your PATH, then you can call your other program like this:
alwaysroot.bash otherprogram -arguments...
It even handles arguments containing spaces correctly.
