I'm trying to get Rails 4.1 to receive bounceback emails but it's been really difficult to even get to this point. I can run the command below in an SSH console when logged in as root, but when I put it in my /etc/valiases file, I get a bounceback from the script saying "the following addresses failed".
runuser -l useraccount -c "cd /home/useraccount/rails_deployments/dev.www/current/bin && rails runner -e development 'EBlast.receive(STDIN.read)'"
/etc/valiases/dev.mydomain.com
eblast-bounce#dev.mydomain.com: "|runuser -l useraccount -c "cd /home/useraccount/rails_deployments/dev.www/current/bin && rails runner -e development 'EBlast.receive(STDIN.read)'""
I've also tried escaping the double-quotes to no avail.
I need to run as useraccount because the RVM environment variables don't exist for root. Running the first command in an SSH console when logged in as root works, but not when exim receives an email.
You can't use double quotes inside double quotes without escaping them. Once you start escaping quotes, it gets complicated to know when you also need to escape other characters. Your example doesn't appear to get too complicated, but I suggest a different method.
IMHO you should create a shell script, for example eblast-bounce-script, with the piped commands you want to run. Then set your alias to:
eblast-bounce#dev.mydomain.com: "|/path/to/eblast-bounce-script"
Make sure to make the script executable, and runnable by the user that exim will be calling it as. If you make the script mode 755, owned by root, that should be sufficient.
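For example, a minimal sketch of what eblast-bounce-script might contain (the paths and the runner invocation are taken from the question; adjust them to your layout):
#!/bin/bash
# Hypothetical wrapper so the exim alias needs no nested quoting.
# exec keeps stdin attached, so the piped message reaches the runner.
cd /home/useraccount/rails_deployments/dev.www/current || exit 1
exec bin/rails runner -e development 'EBlast.receive(STDIN.read)'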
There are a few things I had to do to work around the problem:
1) Move the runner script into its own file as Todd suggested; nested quotes were causing the script to fail to run.
2) Make the file executable; the permissions were already set to 755.
3) Even though exim was using my username to execute the script, environment variables such as PATH and HOME were not set at all. This made ruby an unknown command, which cascaded into many other issues because most of the app relies on RVM and its gemsets: I couldn't get ruby to run, much less rails, and even explicitly calling the ruby wrapper broke spring because $HOME wasn't set. I also couldn't just issue su - username -c 'whatever' because the account exim was using didn't have authority to use su.
So the working setup looks like this:
/etc/valiases/dev.mydomain.com
eblast-bounce#dev.mydomain.com: "|/bin/bash -l -c '/home/useraccount/rails_deployments/dev.www/current/script/receive_eblast_bounce'"
*: ":fail: No Such User Here"
/home/useraccount/rails_deployments/dev.www/current/script/receive_eblast_bounce
#!/bin/bash
# Remember the caller's working directory so we can restore it afterwards.
D=$(pwd)
# Export HOME so child processes (rails, spring) see it too.
export HOME=/home/useraccount
if [ -f /etc/bashrc ]; then
  . /etc/bashrc
fi
# Load RVM so the right ruby and gemset are on PATH.
if [[ -s "/home/useraccount/.rvm/scripts/rvm" ]] ; then
  source "/home/useraccount/.rvm/scripts/rvm"
fi
cd /home/useraccount/rails_deployments/dev.www/current
./bin/rails runner -e development 'EBlast.receive(STDIN.read)'
cd "$D"
I'm now having problems with ActionMailer using SSL when it shouldn't, and I don't know if that's related to something I did here, but at least it executes the rails script.
I've been using letsencrypt to generate SSL certificates for my site, more specifically letsencrypt_webfaction. When I run this command in my project, it works
letsencrypt_webfaction --letsencrypt_account_email <Email I use> --domains <domains I use> --public <public_file> --username <username> --password <password>
However, when I run the same command in a bash script, I get the error
generate_certificate.sh: line 2: letsencrypt_webfaction: command not found
I made sure I had all possible permissions on the bash script using chmod 777 generate_certificate.sh, but still nothing. On top of that I have a bash script that runs right before that, which simply restarts Apache, and that works fine.
I read other S.O. articles, such as this one, and tried running dos2unix script.sh, which ran successfully, but the bash script still didn't work afterwards.
Restart Apache Script
#!/bin/bash
../apache2/bin/./restart
#END
Generate SSL Script
#!/bin/bash
letsencrypt_webfaction --letsencrypt_account_email <Email I use> --domains <domains I use> --public <public_file> --username <username> --password <password>
#END
I'm a Python developer and don't have much experience with Ruby, so excuse my ignorance, but the letsencrypt_webfaction command is a function in my bash profile.
~/.bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
function letsencrypt_webfaction {
PATH=$PATH:$GEM_HOME/bin GEM_HOME=$HOME/.letsencrypt_webfaction/gems RUBYLIB=$GEM_HOME/lib ruby2.2 $HOME/.letsencrypt_webfaction/gems/bin/letsencrypt_webfaction $*
}
eval "$(rbenv init -)"
PATH=$PATH:$HOME/bin
export PATH
export PATH="$HOME/.rbenv/bin:$PATH"
export TMPDIR="/home/doc4design/src/tmp"
By default, shell functions are only available in the shell they were defined in; they're not inherited by subprocesses. Your .bash_profile is only run by the login shell, not shells that run as subprocesses (e.g. to run scripts).
Option 1: In bash, you can run export -f letsencrypt_webfaction in the defining shell (i.e. in your .bash_profile), and it'll be inherited by subprocesses (provided they're also running bash).
Option 2: You can define the function in your .bashrc instead of .bash_profile, and since you run .bashrc from .bash_profile it'll get defined in all your bash shells.
Option 3: Just use the full command in the script. This would be my preference, since it makes the script more independent. Having a script depend on a shell function that's defined in a completely different place is fragile (as you're experiencing) and just a bit weird.
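For instance, generate_certificate.sh could carry the full command itself, making it self-contained (a sketch; the paths are copied from the .bash_profile above, and the <...> placeholders are yours to fill in):
#!/bin/bash
# Inline the function body so the script no longer depends on .bash_profile.
gems="$HOME/.letsencrypt_webfaction/gems"
GEM_HOME="$gems" PATH="$PATH:$gems/bin" RUBYLIB="$gems/lib" \
    ruby2.2 "$gems/bin/letsencrypt_webfaction" \
    --letsencrypt_account_email <Email I use> --domains <domains I use> \
    --public <public_file> --username <username> --password <password>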
While I'm at it, here are some general scripting recommendations:
In most contexts, you should put double-quotes around variable references (and strings that contain variable references) to avoid weird effects from word splitting and wildcard expansion. The right side of an assignment is one place it's ok to leave them off (e.g. PATH=$PATH:$HOME/bin and PATH="$PATH:$HOME/bin" are both ok), but I tend to recommend using quotes everywhere as it's hard to keep track of where it's safe to leave them off and where it's dangerous. For the same reason, you should almost always use "$@" instead of $* (as in the letsencrypt_webfaction function).
shellcheck.net is really good at spotting errors like this, so I recommend running your shell scripts through it and acting on its suggestions.
Using the function keyword to define a function is nonstandard; the standard syntax is to use () after the function name, like this:
letsencrypt_webfaction() {
PATH="$PATH:$GEM_HOME/bin" GEM_HOME="$HOME/.letsencrypt_webfaction/gems" RUBYLIB="$GEM_HOME/lib" ruby2.2 "$HOME/.letsencrypt_webfaction/gems/bin/letsencrypt_webfaction" "$#"
}
The function I just gave still may not work right, since it (re)defines GEM_HOME after using it. The entire line gets parsed (and pre-existing variable definitions expanded), then the variables defined as prefixes to the command get included in the environment of the command. This means that the ruby script gets the updated value of GEM_HOME, but the updated values of PATH and RUBYLIB are based on whatever value GEM_HOME had when the function was run. I'm pretty sure this is not what you intended.
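A version that avoids the ordering problem might look like this (still a sketch under the same assumed paths):
letsencrypt_webfaction() {
    # Expand the gems path once, up front, so GEM_HOME, PATH and RUBYLIB
    # all see the same value.
    local gems="$HOME/.letsencrypt_webfaction/gems"
    GEM_HOME="$gems" PATH="$PATH:$gems/bin" RUBYLIB="$gems/lib" \
        ruby2.2 "$gems/bin/letsencrypt_webfaction" "$@"
}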
In the restart apache script, you use a relative path to the restart command. This will be evaluated relative to the working directory of the process that runs the script, not relative to the script's location. This could be anywhere.
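A common fix is to resolve the path relative to the script's own location instead, for example (a sketch):
#!/bin/bash
# Locate this script's directory, then call restart relative to it,
# so the script works no matter where it is invoked from.
script_dir="$(cd "$(dirname "$0")" && pwd)"
"$script_dir/../apache2/bin/restart"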
I am loading a script (whiptail) when the root user logs into their Linux server, which works fine. The thing is, now when I attempt to run other scripts from the command prompt (or crontab), the initial script is loaded instead, and the script I actually want to run apparently is not.
This is what ~/.profile looks like:
if [ "$BASH" ]; then
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
fi
mesg n
source /root/menu.sh
So, when I try to run bash -lc 'ruby some/other/script.rb', I'm taken into the script that runs at the end of ~/.profile, which is menu.sh. How can I keep this from happening?
Here's what I need to have happen in the long run:
The server boots up and takes the user to /root/menu.sh
There are background scripts that run via crontab such as a check in script, job script, etc.
Best-practices: Don't use a login shell unless you need one
When you pass the -l argument to bash, you're telling it to behave as a login shell; this includes running the user's .profile.
If you don't want that behavior, don't pass -l. Thus:
bash -c 'ruby some/other/script.rb'
That said, there's no advantage to doing that over just invoking ruby directly, without any enclosing shell:
ruby some/other/script.rb
If you must use a login shell...
If you want other effects of running the user's .profile, you might set an environment variable to indicate that you want to bypass this behavior:
# in the user's login scripts
[ -n "$skip_menu" ] || source /root/menu.sh
...and then...
skip_menu=1 bash -lc '...your command here...'
...or, if being executed without an enclosing shell...
env skip_menu=1 bash -lc '...your command here...'
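Putting it together with the ~/.profile from the question, the tail end of that file might become (a sketch):
if [ "$BASH" ]; then
    if [ -f ~/.bashrc ]; then
        . ~/.bashrc
    fi
fi
mesg n
# Skip the menu when skip_menu is set in the environment.
[ -n "$skip_menu" ] || source /root/menu.sh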
We have two bash scripts to start up an application. The first one (Start-App.sh) sets up the environment and the second (startup.sh) is from a 3rd party that we are trying not to edit heavily. If someone runs the second script before the first, the application does not come up correctly.
Is there a way to ensure that the startup.sh can only be called from the Start-App.sh script?
They are both in the same directory and run via bash on Red Hat Linux.
Is there a way to ensure that the startup.sh can only be called from the Start-App.sh script?
Ensure? No. And even less so without editing startup.sh at all. But you can get fairly close.
Below are three suggestions; you can either use one of them, or any combination of them.
The simplest, and probably the best, way is to add a single line at the top of startup.sh:
[ -z "$CALLED_FROM_START_APP" ] && { echo "Not called from Start-App.sh"; exit 42; }
And then call it from Start-App.sh like so:
export CALLED_FROM_START_APP=yes
sh startup.sh
Of course, you can set this environment variable from the shell yourself, so it won't actually ensure anything, but I hope your engineering staff is mature enough not to do this.
You can also remove the execute permissions from startup.sh:
$ chmod a-x startup.sh
This will not prevent people from using sh startup.sh, so the guarantee here is very small; but it might prevent auto-completion oopsies, and it will mark the file as "not intended to be executed": if I see a directory with only one executable .sh file, I'll try to run that one, and not one of the others.
Lastly, you could perhaps rename the startup.sh script; for example, you could rename it to do_not_run, or "hide" it by renaming it to .startup. This probably won't interfere with the operation of this script (although I can't check this).
TL;DR:
[ "$(basename "$0")" = "Start-App.sh" ] || exit
Explanation
As with all other solutions presented it's not 100% bulletproof but this covers most common instances I've come across for preventing accidentally running a script directly as opposed to calling it from another script.
Unlike other approaches presented, this approach:
doesn't rely on manually set file names for each included/sourced script (i.e. is resilient to file name changes)
behaves consistently across all major *nix distros that ship with bash
introduces no unnecessary environment variables
isn't tied to a single parent script
prevents running the script through calling bash explicitly (e.g. bash myscript.sh)
The basic idea is having something like this at the top of your script:
[ "$(basename "$0")" = "$(basename "$BASH_SOURCE")" ] && exit
$0 returns the name of the script at the beginning of the execution chain
$BASH_SOURCE will always point to the file the currently executing code resides in (or empty if no file e.g. piping text directly to bash)
basename returns only the main file name without any directory information (e.g. basename "/user/foo/example.sh" will return example.sh). This is important so you don't get false negatives from comparing example.sh and ./example.sh for example.
To adapt this to only allow running when sourced from one specific file as in your question and provide a helpful error message to the end user, you could use:
[ "$(basename "$0")" = "Start-App.sh" ] || { echo "[ERROR] To start MyApplication please run ./Start-App.sh"; exit 1; }
As mentioned from the start of the answer, this is not intended as a serious security measure of any kind, but I'm guessing that's not what you're looking for anyway.
You can make startup.sh non-executable by typing chmod -x startup.sh. That way the user would not be able to run it simply by typing ./startup.sh.
Then from Start-App.sh, call your script by explicitly invoking the shell:
sh ./startup.sh arg1 arg2 ...
or
bash ./startup.sh arg1 arg2 ...
You can check which shell it's supposed to run in by inspecting the first line of startup.sh, it should look like:
#!/bin/bash
You can set an environment variable in your first script and, before running the second script, check that the variable is set properly.
Another alternative is checking the parent process to find the calling script. This also requires adding some code to the second script.
For example, the called script can run the following (with parent replaced by the calling script's name), check its exit status, and terminate on failure:
ps $PPID | tail -1 | awk '$NF!~/parent/{exit 1}'
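Concretely, the top of startup.sh might look like this (a sketch; it assumes the parent process is still alive and that its command line contains the script name):
# Abort unless the parent process command line mentions Start-App.sh.
if ! ps -o args= -p "$PPID" | grep -q 'Start-App\.sh'; then
    echo "startup.sh must be run from Start-App.sh" >&2
    exit 1
fi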
As others have pointed out, the short answer is "no"; you can play with permissions all day, but it is still not bulletproof. Since you said you don't mind editing (just not heavily editing) the second script, the best way to accomplish this would be something along the lines of:
1) In the parent/first script, export an environment variable with its PID. This becomes the parent PID. For example,
# bash store parent pid
export FIRST_SCRIPT_PID=$$
2) Then, very briefly, in the second script, check whether the calling PID matches the known acceptable parent PID. For example,
# confirm calling pid
if [ "$PPID" != "$FIRST_SCRIPT_PID" ] ; then
exit 0
fi
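For completeness, the calling side then looks something like this (names from the question; startup.sh must be started as a direct child of Start-App.sh so that its $PPID matches):
# Start-App.sh (sketch)
export FIRST_SCRIPT_PID=$$
# ... environment setup goes here ...
./startup.sh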
Check out these links here and here for reference.
To recap: the most direct way to do this is adding at least a minimal line or two to the second script, which hopefully doesn't count as "heavily editing".
You can create a script, let's call it check-if-my-env-set, containing
#! /bin/bash
source Start-App.sh   # better: give an absolute path to Start-App.sh here
exec /bin/bash "$@"
and replace the shebang (see this) on startup.sh with that script, so that its first line changes from
#! /bin/bash
to
#! /abs/path/to/check-if-my-env-set
...
Then, every time you run startup.sh, it will ensure the environment is set correctly.
To the best of my knowledge, there is no way to do this in a way that it would be impossible to get around it.
However, you could stop most attempts by using permissions.
Change the owner of the startup.sh file:
sudo chown app_specific_user startup.sh
Make startup.sh executable (and readable) only by the owner:
chmod 700 startup.sh
Run startup.sh as the app_specific_user from Start-App.sh:
sudo -u app_specific_user ./startup.sh
I am trying to create an alias that will execute a script. When I cd into the directory where the script is located (let's say the script is /usr/local/bin/startscript), the script runs as expected and starts the application I want it to.
So I went into my .bashrc file and added an alias:
alias startscript='/usr/local/bin/startscript'
The goal is to be able to run the script by simply typing "startscript" from any directory.
However, when I try to use the alias to run the script, it does not work properly: the application that should start does not.
My script starts with
#!/bin/sh
and then goes from there
Any ideas? Thanks.
SCRIPT:
#!/bin/sh
#- Check for user 'user'
if [[ "`whoami`" != "user" ]]; then
echo "This script can only be executed by user 'user'."; exit
fi
. /usr/local/bin/etctrx/startscriptdirectory/startscriptsetup
#- Kill manager to avoid multiple processes
pkill -f 'JavaApp.jar'
#- Start
nohup java -classpath /usr/local/bin/etctrx/startscriptdirectory/RequiredJars/ojdbc5.jar:/usr/local/bin/etctrx/startscriptdirectory/RequiredJars/activation.jar:/usr/local/bin/etctrx/startscriptdirectory/RequiredJars/mail.jar -jar /usr/local/bin/etctrx/startscriptdirectory/JavaApp.jar > ${JAVAAPPLOGS}/startscript.log 2>&1 &
If the script runs as expected when you are in /usr/local/bin and simply type startscript, but from another directory the script runs (does not return an error) yet doesn't produce the desired results, then the issue is with how you reference the application from within the script.
As others have noted, you shouldn't need an alias for something in /usr/local/bin, and if it runs from that directory, your executable permissions are obviously correct too. If the application you're trying to run is also in /usr/local/bin then your script probably assumes it's in the same directory, which wouldn't be the case elsewhere, so you would need to either add a cd to /usr/local/bin within the script or specify the full application path.
I am able to call the script if I do this, but it still won't give me the results I want (application being started) like it does when I run the script from the directory it lives in
It would appear that the "application" in question is in the same directory as the script, /usr/local/bin, which we have established is already on your PATH. If the script runs correctly but the application doesn't start, you are probably calling the application wrong, for example:
./application
This would fail unless you were calling from /usr/local/bin. The fix would look like this:
application
I would like my root-requiring bash script to be run from IntelliJ/WebStorm, asking me for the root password when I run it. Having my root password hardcoded in the script is a bad idea of course.
IntelliJ/WebStorm actually has a $Prompt$ macro for reasons like this, which prompts you and uses your input as a value.
So I tried using $Prompt$ along with echo YOURPASSWORD | sudo -S yourcommand as described in use-sudo-with-password-as-parameter.
Then I pass the password and the script to run to a sudorun.sh script containing echo -e $1 | sudo -S $2 $3 $4 (since echo can't be the 'program' line), which works on the CLI but fails to read echo's stdin on the IntelliJ console.
Ideally, I would like the solution to be configured solely from within IntelliJ and not require specific OS configuration changes outside of IntelliJ.
Perhaps there are other ways to deal with this, so let's improvise!
I, too, faced the same issue, but I work with sensitive data on my development machine and removing the password requirement for sudoers just isn't an option.
I was able to resolve this issue by launching the actual WebStorm application from the command line using the sudo command as follows:
sudo /Applications/WebStorm.app/Contents/MacOS/webide
Once WebStorm/PhpStorm are launched this way, you can run a script with root access without supplying root credentials.
Use the NOPASSWD feature of sudo. Add a rule like so to sudoers (via visudo or similar):
someuser ALL = NOPASSWD: /usr/bin/interesting_program
%somegroup ALL = NOPASSWD: /usr/bin/interesting_program
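With such a rule in place, the program runs without a password prompt; adding -n makes sudo fail rather than prompt if the rule doesn't match, which is a handy sanity check (interesting_program is of course a stand-in for your own script):
sudo -n /usr/bin/interesting_program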
I find myself automating a lot of my workflow, and running into the same issue. I don't want to punch a hole in my sudoer permissions, and I don't want to run my IDE as root either. A good solution that I've found is gksudo; on Ubuntu and many other Linux variants it is installed by default. gksudo prompts you (the user) for your password with a graphical overlay, much like Ubuntu/KDE/etc. do when you need to be root to perform an operation such as an update.
This will then prompt you to provide your password to escalate privilege, then execute a given command/program as root.
In the Edit Tool Window simply:
Set the Program to /usr/bin/gksudo
gksudo may be located at a different path, try: whereis gksudo to find its path
Set Parameters to all commands you want to execute in quotes
Ex. "mongod --fork --config /etc/mongodb.conf; service elasticsearch start"
Make sure you have the quotes!
Set a working directory (if needed)