Shell script hangs when i switch to bash - Linux [duplicate] - linux

This question already has answers here:
Pass commands as input to another command (su, ssh, sh, etc)
(3 answers)
Closed 6 years ago.
I'm very, very new to Linux (coming from Windows) and trying to write a script that I can hopefully execute over multiple systems. I tried to use Python for this but found it hard too. Here is what I have so far:
cd /bin
bash
source compilervars.sh intel64
cd ~
exit #exit bash
file= "~/a.out"
if[! -f "$file"]
then
icc code.c
fi
#run some commands here...
The script hangs at the second line (bash). I'm not sure how to fix that or if I'm doing it wrong. Please advise.
Also, any tips on how to run this script over multiple systems on the same network?
Thanks a lot.

What I believe you'd want to do:
#!/bin/bash
source /bin/compilervars.sh intel64
file="$HOME/a.out"
if [ ! -f "$file" ]; then
icc code.c
fi
You would put this in a file and make it executable with chmod +x myscript. Then you would run it with ./myscript. Alternatively, you could just run it with bash myscript.
Your script makes little sense. The second line will open a new bash session, but it will just sit there until you exit it. Also, changing directories back and forth is very seldom required. To execute a single command in another directory, one usually does
( cd /other/place && mycommand )
The ( ... ) tells the shell that you'd like to do this in a sub-shell. The cd happens within that sub-shell and you don't have to cd back after it's done. If the cd fails, the command will not be run.
For example: You might want to make sure you're in $HOME when you compile the code:
if [ ! -f "$file" ]; then
( cd $HOME && icc code.c )
fi
... or even pick out the directory name from the variable file and use that:
if [ -f "$file" ]; then
( cd $(dirname "$file") && icc code.c )
fi
Assigning to a variable needs to happen as I wrote it, without spaces around the =.
Likewise, there need to be spaces after if and inside [ ... ], as I wrote above.
I also tend to use $HOME rather than ~ in scripts as it's more descriptive.
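For instance, a small illustration of those rules (the file name is just the one from the question):
file="$HOME/a.out"          # no spaces around =
if [ ! -f "$file" ]; then   # a space after "if" and spaces just inside [ and ]
    echo "a.out is missing; compiling"
fi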

A shell script isn't a record of key strokes which are typed into a terminal. If you write a script like this:
command1
bash
command2
it does not mean that the script will switch to bash and then execute command2 in a different shell. It means that bash will be run. If there is a controlling terminal, that bash will show you a prompt and wait for a command to be typed in. You will have to type exit to quit that bash. Only then will the original script continue with command2.
There is no way to switch a script to a different shell halfway through. There are ways to simulate this. A script can re-execute itself using a different shell. In order to do that, the script has to contain logic to detect that it is being re-executed, so that it can prevent re-executing itself again, and to skip some code that shouldn't be run twice.
In this script, I implemented such a re-execution hack. It consists of these lines:
#
# The #!/bin/sh might be some legacy piece of crap,
# not even up to 1990 POSIX.2 spec. So the first step
# is to look for a better shell in some known places
# and re-execute ourselves with that interpreter.
#
if test x$txr_shell = x ; then
for shell in /bin/bash /usr/bin/bash /usr/xpg4/bin/sh ; do
if test -x $shell ; then
txr_shell=$shell
break
fi
done
if test x$txr_shell = x ; then
echo "No known POSIX shell found: falling back on /bin/sh, which may not work"
txr_shell=/bin/sh
fi
export txr_shell
exec $txr_shell $0 ${@+"$@"}
fi
The txr_shell variable (not a standard variable, my invention) is how this logic detects that it's been re-executed. If the variable doesn't exist then this is the original execution. When we re-execute we export txr_shell so the re-executed instance will then have this environment variable.
The variable also holds the path to the shell; that is used later in the script; it is passed through to a Makefile as the SHELL variable, so that make build recipes use that same shell. In the above logic, the contents of txr_shell don't matter; it's used as a Boolean: either it exists or it doesn't.
The programming style in the above code snippet is deliberately coded to work on very old shells. That is why test x$txr_shell = x is used instead of the modern syntax [ -z "$txr_shell" ], and why ${@+"$@"} is used instead of just "$@".
This style is no longer used after this point in the script, because the rest of the script runs in some good, reasonably modern shell thanks to the re-execution trick.
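For comparison, here is a minimal sketch of the same re-execution trick written with the more modern syntax; PREFERRED_SHELL is an invented variable name, not the one the original script uses:
#!/bin/sh
# Re-execute this script under the first acceptable shell we can find.
if [ -z "$PREFERRED_SHELL" ]; then
    for shell in /bin/bash /usr/bin/bash /usr/xpg4/bin/sh; do
        if [ -x "$shell" ]; then
            PREFERRED_SHELL=$shell
            break
        fi
    done
    : "${PREFERRED_SHELL:=/bin/sh}"   # fall back to /bin/sh if nothing better was found
    export PREFERRED_SHELL
    exec "$PREFERRED_SHELL" "$0" "$@"
fi
# From this point on we are running under $PREFERRED_SHELL.
echo "Now running under: $PREFERRED_SHELL"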


how to get the Unix shell executable name for a script marked as executable and bin/bash shebang [duplicate]

I'm writing a bash script and it throws an error when run with the "sh" command in Ubuntu (it seems the script is not compatible with dash; I'm still learning about this subject). So I would like to detect if dash is being used instead of bash, in order to throw an error.
How can I detect it in a script context? Is it even possible?
You can check for the presence of shell-specific variables:
For instance, bash defines $BASH_VERSION.
Since that variable won't be defined while running in dash, you can use it to make the distinction:
[ -n "$BASH_VERSION" ] && isBash=1
Afterthought: If you wanted to avoid relying on variables (which, conceivably, could be set incorrectly), you could try to obtain the ultimate name of the shell executable running your script, by determining the invoking executable and, if it is a symlink, following it to its (ultimate) target.
The shell function getTrueShellExeName() below does that; for instance, it would return 'dash' on Ubuntu for a script run with sh (whether explicitly or via shebang #!/bin/sh), because sh is symlinked to dash there.
Note that the function's goal is twofold:
Be portable:
Work with all POSIX-compatible (Bourne-like) shells,
across at least most platforms, with respect to what utilities and options are used - see caveats below.
Work in all invocation scenarios:
sourced (whether from a login shell or not)
executed stand-alone, via the shebang line
executed by being passed as a filename argument to a shell executable
executed by having its contents piped via stdin to a shell executable
Caveats:
On at least one platform - macOS - sh is NOT a symlink, even though it is effectively bash. There, the function would return 'sh' in a script run with sh.
The function uses readlink, which, while not mandated by POSIX, is present on most modern platforms - though with differing syntax and features. Therefore, using GNU readlink's -f option to find a symlink's ultimate target is not an option.
(The only modern platform I'm personally aware of that does not have a readlink utility is HP-UX - see https://stackoverflow.com/a/24114056/45375 for a recursive-readlink implementation that should work on all POSIX platforms.)
The function uses the which utility (except in zsh, where it's a builtin), which, while not mandated by POSIX, is present on most modern platforms.
Ideally, ps -p $$ -o comm= would be sufficient to determine the path of the executable underlying the process, but that doesn't work as intended when directly executing shell scripts with shebang lines on Linux, at least when using the ps implementation from the procps-ng package, as found on Ubuntu, for instance: there, such scripts report the script's file name rather than the underlying script engine's. Tip of the hat to ferdymercury for his help.
Therefore, the content of special file /proc/$$/cmdline is parsed on Linux, whose first NUL-separated field contains the true executable path.
Example use of the function:
[ "$(getTrueShellExeName)" = 'bash' ] && isBash=1
Shell function getTrueShellExeName():
getTrueShellExeName() {
local trueExe nextTarget 2>/dev/null # ignore error in shells without `local`
# Determine the shell executable filename.
if [ -r /proc/$$/cmdline ]; then
trueExe=$(cut -d '' -f1 /proc/$$/cmdline) || return 1
else
trueExe=$(ps -p $$ -o comm=) || return 1
fi
# Strip a leading "-", as added e.g. by macOS for login shells.
[ "${trueExe#-}" = "$trueExe" ] || trueExe=${trueExe#-}
# Determine full executable path.
[ "${trueExe#/}" != "$trueExe" ] || trueExe=$([ -n "$ZSH_VERSION" ] && which -p "$trueExe" || which "$trueExe")
# If the executable is a symlink, resolve it to its *ultimate*
# target.
while nextTarget=$(readlink "$trueExe"); do trueExe=$nextTarget; done
# Output the executable name only.
printf '%s\n' "$(basename "$trueExe")"
}
Use $0 (that is, the name of the executable of the shell being called). For example, the command
echo $0
gives
/usr/bin/dash
for the dash and
/bin/bash
for bash. The parameter substitution
${0##*/}
gives just 'dash' or 'bash'. This can be used in a test.
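A sketch of such a test (keep in mind that $0 is the shell's name only when commands are sourced or fed to the shell, e.g. on stdin; when a script file is executed directly, $0 is the script's path instead):
case "${0##*/}" in
    bash) echo "running under bash" ;;
    dash) echo "running under dash" ;;
    *)    echo "running under ${0##*/}" ;;
esac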
An alternative approach might be to test if a shell feature is available, for example to give an idea...
[[ 1 ]] 2>/dev/null && echo could be bash || echo not bash, maybe dash
Both echo $0 and [[ 1 ]] 2>/dev/null && echo could be bash || echo not bash, maybe dash worked for me running Ubuntu 19.
I've done a little Pascal, Fortran and C in school, but need to become fluent in shell script.

Running multiple scripts from bash on parallel without printing output to console

Let's assume I have multiple file paths to run from the terminal.
I want them to run in parallel in the background without printing their output to the console. (Their output should be saved to some other log path, which is defined in the python file itself.)
The paths are in this format:
/home/Dan/workers/1/run.py
/home/Dan/workers/2/run.py etc.
When I try running a single worker in the background, it seems to work.
For example: cd /home/Dan/workers/1/
and python run.py > /dev/null 2>&1 &
And ps -ef | grep python indeed shows the script running in the background without printing to the console, while printing to its predefined log path.
However, when I try to launch them all via a bash script, no python scripts are running after the following code:
#!/bin/bash
for path in /home/Dan/workers/*
do
if [-f path/run.py ]
then
python run.py > /dev/null 2>&1 &
fi
done
Any idea what the difference is?
In the bash script I try to launch many scripts one after another, just like I did for the single script.
#!/bin/bash
for path in /home/Dan/workers/*
do # VV V-added ${}
if [ -f "${path}/run.py" ]
then
python "${path}/run.py" > /dev/null 2>&1 &
# or (cd $path; python run.py > /dev/null 2>&1 &) like Terje said
fi
done
wait
Use ${path} instead of just path. path is the name of the variable, but what you want when you are testing the file is the value that is stored in path. To get that, prefix with $. Note that $path will also work in most situations, but if you use ${path} you will be more clear about exactly which variable you mean. Especially when learning bash, I recommend sticking with the ${...} form.
Edit: Put the whole name in double-quotes in case ${path} contains any spaces.
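A small illustration of why the quotes matter, using a made-up path that contains a space:
path="/home/Dan/workers/my scripts"
[ -f $path/run.py ]      # word-splits into two arguments: /home/Dan/workers/my and scripts/run.py
[ -f "$path/run.py" ]    # stays one argument, as intended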
#!/bin/bash
There is nothing bash-specific about this script; write #! /bin/sh instead. (Don't write bash-specific scripts, ever; if a bash-specific feature appears to be the easiest way to solve a problem, that is your cue to rewrite the entire thing in a better programming language instead.)
for path in /home/Dan/workers/*
do
This bit is correct.
if [-f path/run.py ]
... but this is wrong. Shell variables are not like variables in Python. To use (the jargon term is "expand") a shell variable you have to put a $ in front of it. Also, you need to put double quotation marks around the entire "word" containing the shell variable to be expanded, or "word splitting" will happen, which you don't want. (There are cases where you want word splitting, and then you leave the double quotes out, but only do that when you know you want word splitting.) Also also, there needs to be a space between [ and -f. Putting it all together, this line should read
if [ -f "$path/run.py" ]
.
then
python run.py > /dev/null 2>&1 &
fi
The run.py on this line should also read "$path/run.py". It is possible, depending on what each python script does, that you instead want the entire line to read
( cd "$path" && exec python run.py > /dev/null 2>&1 ) &
I can't say for sure without knowing what the scripts do.
done
There should probably be another line after this reading just
wait
so that the outer script does not terminate until all the workers are done.
One important difference is that your bash script does not cd into the subdirectories before calling run.py
The second last line should be
python ${path}/run.py > /dev/null 2>&1 &
or
(cd $path; python run.py > /dev/null 2>&1 &)
What I would suggest is not to invoke them via a loop. Your paths suggest that you will have multiple scripts. Actually, the loop statement you have included invokes each python script sequentially; it will run the second one only when the first one is over.
Better to add
#!/bin/python at the top of your python scripts (or whichever path your python is installed at)
and then run
/home/Dan/workers/1/run.py && /home/Dan/workers/2/run.py > /dev/null 2>&1 &

How can I write a bash script that sets a variable that's available to the user in the terminal? [duplicate]

This question already has answers here:
Can I export a variable to the environment from a Bash script without sourcing it?
(13 answers)
Closed 3 years ago.
I'm trying to write a shell script that, when run, will set some environment variables that will stay set in the caller's shell.
setenv FOO foo
in csh/tcsh, or
export FOO=foo
in sh/bash only set it during the script's execution.
I already know that
source myscript
will run the commands of the script rather than launching a new shell, and that can result in setting the "caller's" environment.
But here's the rub:
I want this script to be callable from either bash or csh. In other words, I want users of either shell to be able to run my script and have their shell's environment changed. So 'source' won't work for me, since a user running csh can't source a bash script, and a user running bash can't source a csh script.
Is there any reasonable solution that doesn't involve having to write and maintain TWO versions of the script?
Use the "dot space script" calling syntax. For example, here's how to do it using the full path to a script:
. /path/to/set_env_vars.sh
And here's how to do it if you're in the same directory as the script:
. set_env_vars.sh
These execute the script under the current shell instead of loading another one (which is what would happen if you did ./set_env_vars.sh). Because it runs in the same shell, the environment variables you set will be available when it exits.
This is the same thing as calling source set_env_vars.sh, but it's shorter to type and might work in some places where source doesn't.
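A minimal sketch of what such a set_env_vars.sh might contain, with FOO and BAR as placeholder variables:
# set_env_vars.sh -- meant to be sourced, so no shebang is required
export FOO=foo
export BAR=bar

# In the interactive shell:
# . ./set_env_vars.sh
# echo "$FOO"    # prints: foo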
Your shell process has a copy of the parent's environment and no access to the parent process's environment whatsoever. When your shell process terminates, any changes you've made to its environment are lost. Sourcing a script file is the most commonly used method for configuring a shell environment; you may just want to bite the bullet and maintain one for each of the two flavors of shell.
You're not going to be able to modify the caller's shell because it's in a different process context. When child processes inherit your shell's variables, they're inheriting copies themselves.
One thing you can do is write a script that emits the correct commands for tcsh or sh based on how it's invoked. If your script is "setit" then do:
ln -s setit setit-sh
and
ln -s setit setit-csh
Now either directly or in an alias, you do this from sh
eval `setit-sh`
or this from csh
eval `setit-csh`
setit uses $0 to determine its output style.
This is reminiscent of how people used to get the TERM environment variable set.
The advantage here is that setit is just written in whichever shell you like as in:
#!/bin/bash
arg0=$0
arg0=${arg0##*/}
for nv in \
NAME1=VALUE1 \
NAME2=VALUE2
do
if [ x$arg0 = xsetit-sh ]; then
echo 'export '$nv' ;'
elif [ x$arg0 = xsetit-csh ]; then
echo 'setenv '${nv%%=*}' '${nv##*=}' ;'
fi
done
With the symbolic links given above, and the eval of the backquoted expression, this has the desired result.
To simplify invocation for csh, tcsh, or similar shells:
alias dosetit 'eval `setit-csh`'
or for sh, bash, and the like:
alias dosetit='eval `setit-sh`'
One nice thing about this is that you only have to maintain the list in one place.
In theory you could even stick the list in a file and put cat nvpairfilename between "in" and "do".
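A sketch of that variant, assuming a file named nvpairfilename with one NAME=VALUE pair per line (this relies on word splitting, so values must not contain whitespace):
for nv in `cat nvpairfilename`
do
    if [ x$arg0 = xsetit-sh ]; then
        echo 'export '$nv' ;'
    elif [ x$arg0 = xsetit-csh ]; then
        echo 'setenv '${nv%%=*}' '${nv##*=}' ;'
    fi
done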
This is pretty much how login shell terminal settings used to be done: a script would output statements to be executed in the login shell. An alias would generally be used to make invocation simple, as in "tset vt100". As mentioned in another answer, there is also similar functionality in the INN UseNet news server.
In my .bash_profile I have :
# No Proxy
function noproxy
{
/usr/local/sbin/noproxy #turn off proxy server
unset http_proxy HTTP_PROXY https_proxy HTTPS_PROXY
}
# Proxy
function setproxy
{
sh /usr/local/sbin/proxyon #turn on proxy server
http_proxy=http://127.0.0.1:8118/
HTTP_PROXY=$http_proxy
https_proxy=$http_proxy
HTTPS_PROXY=$https_proxy
export http_proxy https_proxy HTTP_PROXY HTTPS_PROXY
}
So when I want to disable the proxy, the functions run in the login shell and set the variables as expected and wanted.
It's "kind of" possible through using gdb and setenv(3), although I have a hard time recommending actually doing this. (Additionally, i.e. the most recent ubuntu won't actually let you do this without telling the kernel to be more permissive about ptrace, and the same may go for other distros as well).
$ cat setfoo
#! /bin/bash
gdb /proc/${PPID}/exe ${PPID} <<END >/dev/null
call setenv("foo", "bar", 0)
END
$ echo $foo
$ ./setfoo
$ echo $foo
bar
This works — it isn't what I'd use, but it 'works'. Let's create a script teredo to set the environment variable TEREDO_WORMS:
#!/bin/ksh
export TEREDO_WORMS=ukelele
exec $SHELL -i
It is interpreted by the Korn shell, exports the environment variable, and then replaces itself with a new interactive shell.
Before running this script, we have SHELL set in the environment to the C shell, and the environment variable TEREDO_WORMS is not set:
% env | grep SHELL
SHELL=/bin/csh
% env | grep TEREDO
%
When the script is run, you are in a new shell, another interactive C shell, but the environment variable is set:
% teredo
% env | grep TEREDO
TEREDO_WORMS=ukelele
%
When you exit from this shell, the original shell takes over:
% exit
% env | grep TEREDO
%
The environment variable is not set in the original shell's environment. If you use exec teredo to run the command, then the original interactive shell is replaced by the Korn shell that sets the environment, and then that in turn is replaced by a new interactive C shell:
% exec teredo
% env | grep TEREDO
TEREDO_WORMS=ukelele
%
If you type exit (or Control-D), then your shell exits, probably logging you out of that window, or taking you back to the previous level of shell from where the experiments started.
The same mechanism works for Bash or Korn shell. You may find that the prompt after the exit commands appears in funny places.
Note the discussion in the comments. This is not a solution I would recommend, but it does achieve the stated purpose of a single script to set the environment that works with all shells (that accept the -i option to make an interactive shell). You could also add "$@" after the option to relay any other arguments, which might then make the shell usable as a general 'set environment and execute command' tool. You might want to omit the -i if there are other arguments, leading to:
#!/bin/ksh
export TEREDO_WORMS=ukelele
exec $SHELL "${@-'-i'}"
The "${@-'-i'}" bit means 'if the argument list contains at least one argument, use the original argument list; otherwise, substitute -i for the non-existent arguments'.
You should use modules, see http://modules.sourceforge.net/
EDIT: The modules package has not been updated since 2012 but still works OK for the basics. All the new features, bells and whistles happen in lmod these days (which I like more): https://www.tacc.utexas.edu/research-development/tacc-projects/lmod
Another workaround that I don't see mentioned is to write the variable value to a file.
I ran into a very similar issue where I wanted to be able to run the last set test (instead of all my tests). My first plan was to write one command for setting the env variable TESTCASE, and then have another command that would use this to run the test. Needless to say that I had the same exact issue as you did.
But then I came up with this simple hack:
First command ( testset ):
#!/bin/bash
if [ $# -eq 1 ]
then
echo $1 > ~/.TESTCASE
echo "TESTCASE has been set to: $1"
else
echo "Come again?"
fi
Second command (testrun ):
#!/bin/bash
TESTCASE=$(cat ~/.TESTCASE)
drush test-run $TESTCASE
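Example usage, assuming the two scripts are executable and on the PATH (the test name here is made up):
$ testset MyModuleTestCase
TESTCASE has been set to: MyModuleTestCase
$ testrun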
You can instruct the child process to print its environment variables (by calling "env"), then loop over the printed environment variables in the parent process and call "export" on those variables.
The following code is based on Capturing output of find . -print0 into a bash array
If the parent shell is bash, you can use
while IFS= read -r -d $'\0' line; do
export "$line"
done < <(bash -s <<< 'export VARNAME=something; env -0')
echo $VARNAME
If the parent shell is dash, then read does not provide the -d flag and the code gets more complicated
TMPDIR=$(mktemp -d)
mkfifo $TMPDIR/fifo
(bash -s << "EOF"
export VARNAME=something
while IFS= read -r -d $'\0' line; do
echo $(printf '%q' "$line")
done < <(env -0)
EOF
) > $TMPDIR/fifo &
while read -r line; do export "$(eval echo $line)"; done < $TMPDIR/fifo
rm -r $TMPDIR
echo $VARNAME
Under OS X bash you can do the following:
Create the bash script file to unset the variable
#!/bin/bash
unset http_proxy
Make the file executable
sudo chmod 744 unsetvar
Create alias
alias unsetvar='source /your/path/to/the/script/unsetvar'
It should be ready to use as long as you have the folder containing your script file appended to the path.
It's not what I would call outstanding, but this also works if you need to call the script from the shell anyway. It's not a good solution, but for a single static environment variable, it works well enough.
1.) Create a script with a condition that exits either 0 (Successful) or 1 (Not successful)
if [[ $foo == "True" ]]; then
exit 0
else
exit 1
fi
2.) Create an alias that is dependent on the exit code.
alias myalias='./myscript.sh && export MyVariable'
You call the alias, which calls the script, which evaluates the condition, which is required to exit zero via the '&&' in order to set the environment variable in the parent shell.
This is flotsam, but it can be useful in a pinch.
You can invoke another Bash instance with a different bash_profile.
Also, you can create a special bash_profile for use in a multi-bash-profile environment.
Remember that you can use functions inside of bash_profile, and those functions will be available globally.
For example, "function user { export USER_NAME=$1; }" can set a variable at runtime, for example: user olegchir && env | grep olegchir
Another option is to use "Environment Modules" (http://modules.sourceforge.net/). This unfortunately introduces a third language into the mix. You define the environment with the language of Tcl, but there are a few handy commands for typical modifications (prepend vs. append vs set). You will also need to have environment modules installed. You can then use module load *XXX* to name the environment you want. The module command is basically a fancy alias for the eval mechanism described above by Thomas Kammeyer. The main advantage here is that you can maintain the environment in one language and rely on "Environment Modules" to translate it to sh, ksh, bash, csh, tcsh, zsh, python (?!?!!), etc.
I created a solution using pipes, eval and signal.
parent() {
if [ -z "$G_EVAL_FD" ]; then
die 1 "Rode primeiro parent_setup no processo pai"
fi
if [ $(ppid) = "$$" ]; then
"$#"
else
kill -SIGUSR1 $$
echo "$#">&$G_EVAL_FD
fi
}
parent_setup() {
G_EVAL_FD=99
tempfile=$(mktemp -u)
mkfifo "$tempfile"
eval "exec $G_EVAL_FD<>'$tempfile'"
rm -f "$tempfile"
trap "read CMD <&$G_EVAL_FD; eval \"\$CMD\"" USR1
}
parent_setup #on parent shell context
( A=1 ); echo $A # prints nothing
( parent A=1 ); echo $A # prints 1
It might work with any command.
I don't see any answer documenting how to work around this problem with cooperating processes. A common pattern with things like ssh-agent is to have the child process print an expression which the parent can eval.
bash$ eval $(ssh-agent)
For example, ssh-agent has options to select Csh or Bourne-compatible output syntax.
bash$ ssh-agent
SSH2_AUTH_SOCK=/tmp/ssh-era/ssh2-10690-agent; export SSH2_AUTH_SOCK;
SSH2_AGENT_PID=10691; export SSH2_AGENT_PID;
echo Agent pid 10691;
(This causes the agent to start running, but doesn't allow you to actually use it, unless you now copy-paste this output to your shell prompt.) Compare:
bash$ ssh-agent -c
setenv SSH2_AUTH_SOCK /tmp/ssh-era/ssh2-10751-agent;
setenv SSH2_AGENT_PID 10752;
echo Agent pid 10752;
(As you can see, csh and tcsh use setenv to set variables.)
Your own program can do this, too.
bash$ foo=$(makefoo)
Your makefoo script would simply calculate and print the value, and let the caller do whatever they want with it -- assigning it to a variable is a common use case, but probably not something you want to hard-code into the tool which produces the value.
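A minimal sketch of such a makefoo; the computed value is invented purely for illustration:
#!/bin/sh
# makefoo: compute a value and print it on stdout; the caller decides what to do with it.
printf '%s\n' "foo-$(date +%Y%m%d)"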
Technically, that is correct -- only 'eval' doesn't fork another shell. However, from the point of view of the application you're trying to run in the modified environment, the difference is nil: the child inherits the environment of its parent, so the (modified) environment is conveyed to all descending processes.
Ipso facto, the changed environment variable 'sticks' -- as long as you are running under the parent program/shell.
If it is absolutely necessary for the environment variable to remain after the parent (Perl or shell) has exited, it is necessary for the parent shell to do the heavy lifting. One method I've seen in the documentation is for the current script to spawn an executable file with the necessary 'export' language, and then trick the parent shell into executing it -- always being cognizant of the fact that you need to preface the command with 'source' if you're trying to leave a non-volatile version of the modified environment behind. A Kluge at best.
The second method is to modify the script that initiates the shell environment (.bashrc or whatever) to contain the modified parameter. This can be dangerous -- if you hose up the initialization script it may make your shell unavailable the next time it tries to launch. There are plenty of tools for modifying the current shell; by affixing the necessary tweaks to the 'launcher' you effectively push those changes forward as well.
Generally not a good idea; if you only need the environment changes for a particular application suite, you'll have to go back and return the shell launch script to its pristine state (using vi or whatever) afterwards.
In short, there are no good (and easy) methods. Presumably this was made difficult to ensure the security of the system was not irrevocably compromised.
The short answer is no, you cannot alter the environment of the parent process, but it seems like what you want is an environment with custom environment variables and the shell that the user has chosen.
So why not simply something like
#!/usr/bin/env bash
FOO=foo $SHELL
Then when you are done with the environment, just exit.
You could always use aliases
alias your_env='source ~/scripts/your_env.sh'
I did this many years ago. If I remember correctly, I included an alias in each of .bashrc and .cshrc, with parameters, aliasing the respective forms of setting the environment to a common form.
Then the script that you will source in either of the two shells has a command in that common form, which is suitably aliased in each shell.
If I find the concrete aliases, I will post them.
Other than writing conditionals depending on what $SHELL/$TERM is set to, no. What's wrong with using Perl? It's pretty ubiquitous (I can't think of a single UNIX variant that doesn't have it), and it'll spare you the trouble.

Bash: Only allow script to run by being called from another script

We have two bash scripts to start up an application. The first one (Start-App.sh) sets up the environment and the second (startup.sh) is from a 3rd party that we are trying not to edit heavily. If someone runs the second script before the first, the application does not come up correctly.
Is there a way to ensure that the startup.sh can only be called from the Start-App.sh script?
They are both in the same directory and run via bash on Red Hat Linux.
Is there a way to ensure that the startup.sh can only be called from the Start-App.sh script?
Ensure? No. And even less so without editing startup.sh at all. But you can get fairly close.
Below are three suggestions; you can use one of them, or any combination of them.
The simplest, and probably the best, way is to add a single line at the top of startup.sh:
[ -z $CALLED_FROM_START_APP ] && { echo "Not called from Start-App.sh"; exit 42; }
And then call it from Start-App.sh like so:
export CALLED_FROM_START_APP=yes
sh startup.sh
Of course, you can set this environment variable from the shell yourself, so it won't actually ensure anything, but I hope your engineering staff is mature enough not to do this.
You can also remove the execute permissions from startup.sh:
$ chmod a-x startup.sh
This will not prevent people from using sh startup.sh, so there is a very small guarantee here; but it might prevent auto-completion oopsies, and it will mark the file as "not intended to be executed": if I see a directory with only one executable .sh file, I'll try and run that one, and not one of the others.
Lastly, you could perhaps rename the startup.sh script; for example, you could rename it to do_not_run, or "hide" it by renaming it to .startup. This probably won't interfere with the operation of this script (although I can't check this).
TL;DR:
[ $(basename "$0") = "Start-App.sh" ] || exit
Explanation
As with all other solutions presented, it's not 100% bulletproof, but this covers the most common instances I've come across for preventing a script from accidentally being run directly, as opposed to being called from another script.
Unlike other approaches presented, this approach:
doesn't rely on manually set file names for each included/sourced script (i.e. is resilient to file name changes)
behaves consistently across all major *nix distros that ship with bash
introduces no unnecessary environment variables
isn't tied to a single parent script
prevents running the script through calling bash explicitly (e.g. bash myscript.sh)
The basic idea is having something like this at the top of your script:
[ $(basename "$0") = $(basename "$BASH_SOURCE") ] && exit
$0 returns the name of the script at the beginning of the execution chain
$BASH_SOURCE will always point to the file the currently executing code resides in (or empty if no file e.g. piping text directly to bash)
basename returns only the main file name without any directory information (e.g. basename "/user/foo/example.sh" will return example.sh). This is important so you don't get false negatives from comparing example.sh and ./example.sh for example.
To adapt this to only allow running when sourced from one specific file as in your question and provide a helpful error message to the end user, you could use:
[ $(basename "$0") = "Start-App.sh" ] || echo "[ERROR] To start MyApplication please run ./Start-App.sh" && exit
As mentioned from the start of the answer, this is not intended as a serious security measure of any kind, but I'm guessing that's not what you're looking for anyway.
You can make startup.sh non-executable by typing chmod -x startup.sh. That way the user would not be able to run it simply by typing ./startup.sh.
Then from Start-App.sh, call your script by explicitly invoking the shell:
sh ./startup.sh arg1 arg2 ...
or
bash ./startup.sh arg1 arg2 ...
You can check which shell it's supposed to run in by inspecting the first line of startup.sh, it should look like:
#!/bin/bash
You can set an environment variable in your first script and, before running the second script, check whether that environment variable is set properly.
Another alternative is checking the parent process to find the calling script. This also requires adding some code to the second script.
For example, in the called script, you can check the exit status of the following command and terminate:
ps $PPID | tail -1 | awk '$NF!~/parent/{exit 1}'
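A sketch of how that check might sit at the top of startup.sh; the pattern Start-App is an assumption about how the caller shows up in the ps output, which varies between systems:
# Abort unless the parent process's command line ends with the expected caller name.
if ! ps $PPID | tail -1 | awk '$NF!~/Start-App/{exit 1}'; then
    echo "startup.sh must be launched from Start-App.sh" >&2
    exit 1
fi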
As others have pointed out, the short answer is "no", although you can play with permissions all day but this is still not bulletproof. Since you said you don't mind editing (just not heavily editing) the second script, the best way to accomplish this would be something along the lines of:
1) in the parent/first script, export an environment variable with its PID. This becomes the parent PID. For example,
# bash store parent pid
export FIRST_SCRIPT_PID=$$
2) then very briefly, in the second script, check to see if the calling PID matches the known acceptable parent PID. For example,
# confirm calling pid
if [ $PPID != $FIRST_SCRIPT_PID ] ; then
exit 0
fi
To recap: the most direct way to do this is adding at least a minimal line or two to the second script, which hopefully doesn't count as "heavily editing".
You can create a script, let's call it check-if-my-env-set containing
#! /bin/bash
source Start-App.sh
exec /bin/bash "$@"
and replace the shebang on startup.sh with that script
#! /abs/path/to/check-if-my-env-set
#! /bin/bash
...
then, every time you run startup.sh it will ensure the environment is set correctly.
To the best of my knowledge, there is no way to do this in a way that it would be impossible to get around it.
However, you could stop most attempts by using permissions.
Change the owner of the startup.sh file:
sudo chown app_specific_user startup.sh
Make startup.sh only executable by the owner:
chmod u+x startup.sh
Run startup.sh as the app_specific_user from Start-App.sh:
sudo -u app_specific_user ./startup.sh

80 chars limits on script executable path over nfs [duplicate]

I'm trying to execute python scripts automatically generated by zc.buildout so I don't have control over them. My problem is that the shebang line (#!) is too long for either bash (80 character limit) or direct execution (some Linux kernel constant I don't know).
This is an example script to help you reproduce my problem:
#!/././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././././bin/bash
echo Hola!
How can bash or the kernel be configured to allow for longer shebang lines?
Limited to 127 chars on 99.9% of systems due to kernel compile time buffer limit.
It's limited in the kernel by BINPRM_BUF_SIZE, set in include/linux/binfmts.h.
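A quick way to see whether a given script's shebang would hit that limit (using the 127-byte figure from above; the path is a placeholder):
# Print the length of the first line of a script and warn if it exceeds 127 bytes.
first_line=$(head -n 1 /path/to/script)
printf 'shebang length: %s\n' "${#first_line}"
[ "${#first_line}" -gt 127 ] && echo "longer than BINPRM_BUF_SIZE allows on most kernels"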
If you don't want to recompile your kernel to get longer shebang lines, you could write a wrapper:
#!/bin/bash
if [[ $# -eq 0 ]]; then
echo "usage: ${0##*/} script [args ...]"
exit
fi
# we're going to expand a variable *unquoted* to use word splitting, but
# we don't want to have path expansion effects, so turn that off
set -f
shebang=$(head -1 "$1")
if [[ $shebang == '#!'* ]]; then
interp=( ${shebang#\#!} ) # use an array in case an argument is there too
else
interp=( /bin/sh )
fi
# now run it
exec "${interp[#]}" "$#"
and then run the script like: wrapper.sh script.sh
Updated @glenn jackman's script to support passing in command line arguments.
Incidentally, I ran into this problem when creating a python virtualenv inside of a very deep directory hierarchy.
In my case, this was a virtualenv created inside a Mesos framework dir.
The extra long shebang rendered calling xxx/.../venv/bin/pip useless.
The wrapper script proved most useful.
#!/usr/bin/env bash
script="$1"
shebang=$(head -1 "$script")
# use an array in case an argument is there too
interp=( ${shebang#\#!} )
# now run it, passing in the remaining command line arguments
shift 1
exec "${interp[#]}" "$script" "${#}"
