I was trying to create a script in the bash shell, and I came to know that the script doesn't run on the ksh or dash shells. So my question is: how do you make a script run on all three (bash, dash & ksh) shells?
In order to write a script that is guaranteed to be portable between the various shells, the script must be POSIX Shell compliant. POSIX defines a minimum set of builtins and commands that all conforming shells must support. Ash, Dash, Zsh, Bash, Ksh, etc. are all shells capable of running scripts that are POSIX compliant.
What shells like Bash do is add nice features that make the shell more capable, like additional parameter expansions for conversion to upper/lower case, substring replacement, and so on, plus new builtins like [[ ... ]] that provide regex matching capabilities. While this makes Bash more capable, it also means scripts written using "Bashisms" are no longer able to run under all other shells. Ash, Dash and other minimal shells have no idea how to handle the features added by Bash, Ksh or Zsh, and therefore fail.
To write truly portable scripts, you must limit the content to that provided by the POSIX command language.
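For example, here is a minimal sketch contrasting a Bash-only construct with a POSIX equivalent that bash, dash and ksh all accept:

#!/bin/sh
# POSIX pattern matching via case works in bash, dash and ksh alike
name="World"
case $name in
    W*) echo "Hello, $name" ;;
esac
# The Bash-only test [[ $name == W* ]] would fail under dash.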
You need a file like one of these, depending on which shell you want to run it under:
#!/bin/bash
echo "hello bash"
#!/bin/sh
echo "hello sh"
#!/bin/ksh
echo "hello ksh"
The #! line isn't a simple comment: it's called a shebang, and it tells the system which program should interpret the script.
Name the file whatever you prefer (e.g. file.bsk), but don't forget to give it execute permission with:
chmod +x file.bsk
then run ./file.bsk
Some commands or utilities are not available in all shells, or they may behave differently in different shells. If you know which command runs in which shell, or which one gives you the desired output, you can write shell-specific commands as below:
bash -c 'echo bash'
ksh -c 'echo ksh'
All other commands that are common to all shells can be written in the normal way.
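If you need a single script to branch on whichever shell is running it, here is a minimal sketch (an assumption: bash sets BASH_VERSION and most ksh variants set KSH_VERSION; dash sets neither):

#!/bin/sh
# Branch on the running shell's version variable (assumed behavior:
# bash sets BASH_VERSION, most ksh variants set KSH_VERSION).
if [ -n "$BASH_VERSION" ]; then
    echo "running under bash"
elif [ -n "$KSH_VERSION" ]; then
    echo "running under ksh"
else
    echo "running under a plain POSIX shell, perhaps dash"
fi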
I have a perl script with this shebang:
#!/usr/bin/env perl
I want this script to print each line as it is executed. So I installed Devel::Trace and changed the script's shebang to
#!/usr/bin/env perl -d:Trace
But this gives an error because it is not valid syntax.
What should I do to use both env functionality and tracing functionality?
This is one of those things that Just Doesn't Work™ on some systems, notably those with a GNU env.
Here's a sneaky workaround mentioned in perlrun that I've (ab)used in the past:
#!/bin/sh
#! -*-perl-*-
eval 'exec perl -x -wS $0 ${1+"$@"}'
if 0;
print "Hello, world!\n";
This will find perl on your PATH and you can add whatever other switches you'd like to the command line. You can even set environment variables, etc. before perl is invoked. The general idea is that sh runs the eval, but perl doesn't, and the extra gnarly bits ensure that Perl finds your program correctly and passes along all the arguments.
#!/bin/sh
FOO=bar; export FOO
#! -*-perl-*-
eval 'exec perl -d:Trace -x -wS $0 ${1+"$@"}'
if 0;
$Devel::Trace::TRACE = 1;
print "Hello, $ENV{FOO}!\n";
If you save the file with a .pl extension, your editor should detect the correct file syntax, but the initial shebang might throw it off. The other caveat is that if the Perl part of the script throws an error, the line number(s) might be off.
The neat thing about this trick is that it works for Ruby too (and possibly some other languages like Python, with additional modifications):
#!/bin/sh
#! -*-ruby-*-
eval 'exec ruby -x -wS $0 ${1+"$@"}' \
if false
puts "Hello, world!"
Hope that helps!
As @hek2mgl comments above, a flexible way of doing that is using a shell wrapper, since the shebang admits only a single argument (which is going to be perl). A simple wrapper would be this one:
#!/bin/bash
env perl -d:Trace "$@"
Which you can then use like this:
#!./perltrace
or you can create similar scripts, and put them wherever perl resides.
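For example, assuming the wrapper above is saved as perltrace and made executable, any script can be run under tracing directly:

chmod +x perltrace
./perltrace myscript.pl    # runs myscript.pl under Devel::Trace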
First, the shebang line is handled differently depending on the OS. I'm talking about GNU/Linux here, the leading operating system. ;)
The shebang line will be split into only two parts: the interpreter (/usr/bin/perl) and a single optional argument, which is placed before the filename argument that is appended automatically when executing the shebanged file. Some interpreters need that, like #!/usr/bin/awk -f for example; the -f is needed in front of the filename argument.
Perl doesn't need a -f to be passed the script's file name, meaning it works like
perl file.pl
instead of
perl -f file.pl
That basically gives you room for one argument switch of your choice, like
#!/usr/bin/perl -w
to enable warnings. Furthermore, since perl uses getopt() to parse its command-line arguments, and getopt() does not require argument switches to be separated by spaces, you can even pass multiple switches as long as you don't separate them, like this:
#!/usr/bin/perl -Xw
But as soon as an option takes a value, like -a foo, that doesn't work any more, and such options can't be passed this way at all. No chance.
A more flexible way is to use a shell wrapper like this:
#!/bin/bash
exec perl -a -b=123 ... filename.pl
PS: Looking at your question again: you were asking how to use perl switches together with /usr/bin/env perl. No chance. If you pass an option to Perl, like /usr/bin/env perl -w, Linux will try to open the interpreter 'perl -w'. No further splitting.
You can use the -S option of env to pass arguments. For example:
#!/usr/bin/env -S perl -w
works as expected.
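Applied to the tracing use case from the question, that would be (assuming an env that supports -S, i.e. GNU coreutils 8.30 or later, or a BSD env):

#!/usr/bin/env -S perl -d:Trace
print "Hello, world!\n";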
I'm trying to write a shell script that, when run, will set some environment variables that will stay set in the caller's shell.
setenv FOO foo
in csh/tcsh, or
export FOO=foo
in sh/bash only set it during the script's execution.
I already know that
source myscript
will run the commands of the script rather than launching a new shell, and that can result in setting the "caller's" environment.
But here's the rub:
I want this script to be callable from either bash or csh. In other words, I want users of either shell to be able to run my script and have their shell's environment changed. So 'source' won't work for me, since a user running csh can't source a bash script, and a user running bash can't source a csh script.
Is there any reasonable solution that doesn't involve having to write and maintain TWO versions of the script?
Use the "dot space script" calling syntax. For example, here's how to do it using the full path to a script:
. /path/to/set_env_vars.sh
And here's how to do it if you're in the same directory as the script:
. set_env_vars.sh
These execute the script under the current shell instead of launching another one (which is what would happen if you did ./set_env_vars.sh). Because it runs in the same shell, the environment variables you set will be available when it exits.
This is the same thing as calling source set_env_vars.sh, but it's shorter to type and might work in some places where source doesn't.
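A quick demonstration of the difference (hypothetical file contents):

$ cat set_env_vars.sh
export FOO=foo
$ ./set_env_vars.sh; echo "$FOO"    # child process: FOO stays unset

$ . ./set_env_vars.sh; echo "$FOO"  # current shell: FOO is now set
foo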
Your shell process has a copy of the parent's environment and no access to the parent process's environment whatsoever. When your shell process terminates, any changes you've made to its environment are lost. Sourcing a script file is the most commonly used method for configuring a shell environment; you may just want to bite the bullet and maintain one for each of the two flavors of shell.
You're not going to be able to modify the caller's shell because it's in a different process context. When child processes inherit your shell's variables, they're inheriting copies themselves.
One thing you can do is write a script that emits the correct commands for tcsh or sh based on how it's invoked. If your script is "setit", then do:
ln -s setit setit-sh
and
ln -s setit setit-csh
Now either directly or in an alias, you do this from sh
eval `setit-sh`
or this from csh
eval `setit-csh`
setit uses $0 to determine its output style.
This is reminiscent of how people used to get the TERM environment variable set.
The advantage here is that setit is just written in whichever shell you like, as in:
#!/bin/bash
arg0=$0
arg0=${arg0##*/}   # strip the path, leaving setit-sh or setit-csh
for nv in \
NAME1=VALUE1 \
NAME2=VALUE2
do
if [ x$arg0 = xsetit-sh ]; then
echo 'export '$nv' ;'
elif [ x$arg0 = xsetit-csh ]; then
echo 'setenv '${nv%%=*}' '${nv##*=}' ;'
fi
done
With the symbolic links given above, and the eval of the backquoted expression, this has the desired result.
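A sample session (assuming setit is saved in the current directory and marked executable):

$ ln -s setit setit-sh
$ eval `./setit-sh`
$ echo $NAME1
VALUE1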
To simplify invocation for csh, tcsh, or similar shells:
alias dosetit 'eval `setit-csh`'
or for sh, bash, and the like:
alias dosetit='eval `setit-sh`'
One nice thing about this is that you only have to maintain the list in one place.
In theory you could even stick the list in a file and put cat nvpairfilename between "in" and "do".
This is pretty much how login shell terminal settings used to be done: a script would output statements to be executed in the login shell. An alias would generally be used to make invocation simple, as in "tset vt100". As mentioned in another answer, there is also similar functionality in the INN Usenet news server.
In my .bash_profile I have:
# No Proxy
function noproxy
{
/usr/local/sbin/noproxy #turn off proxy server
unset http_proxy HTTP_PROXY https_proxy HTTPS_PROXY
}
# Proxy
function setproxy
{
sh /usr/local/sbin/proxyon #turn on proxy server
http_proxy=http://127.0.0.1:8118/
HTTP_PROXY=$http_proxy
https_proxy=$http_proxy
HTTPS_PROXY=$https_proxy
export http_proxy https_proxy HTTP_PROXY HTTPS_PROXY
}
So when I want to enable or disable the proxy, the appropriate function runs in the login shell and sets (or unsets) the variables as expected and wanted.
It's "kind of" possible through using gdb and setenv(3), although I have a hard time recommending actually doing this. (Additionally, i.e. the most recent ubuntu won't actually let you do this without telling the kernel to be more permissive about ptrace, and the same may go for other distros as well).
$ cat setfoo
#! /bin/bash
gdb /proc/${PPID}/exe ${PPID} <<END >/dev/null
call setenv("foo", "bar", 0)
END
$ echo $foo
$ ./setfoo
$ echo $foo
bar
This works — it isn't what I'd use, but it 'works'. Let's create a script teredo to set the environment variable TEREDO_WORMS:
#!/bin/ksh
export TEREDO_WORMS=ukelele
exec $SHELL -i
It will be interpreted by the Korn shell, exports the environment variable, and then replaces itself with a new interactive shell.
Before running this script, we have SHELL set in the environment to the C shell, and the environment variable TEREDO_WORMS is not set:
% env | grep SHELL
SHELL=/bin/csh
% env | grep TEREDO
%
When the script is run, you are in a new shell, another interactive C shell, but the environment variable is set:
% teredo
% env | grep TEREDO
TEREDO_WORMS=ukelele
%
When you exit from this shell, the original shell takes over:
% exit
% env | grep TEREDO
%
The environment variable is not set in the original shell's environment. If you use exec teredo to run the command, then the original interactive shell is replaced by the Korn shell that sets the environment, and then that in turn is replaced by a new interactive C shell:
% exec teredo
% env | grep TEREDO
TEREDO_WORMS=ukelele
%
If you type exit (or Control-D), then your shell exits, probably logging you out of that window, or taking you back to the previous level of shell from where the experiments started.
The same mechanism works for Bash or Korn shell. You may find that the prompt after the exit commands appears in funny places.
Note the discussion in the comments. This is not a solution I would recommend, but it does achieve the stated purpose of a single script to set the environment that works with all shells (that accept the -i option to make an interactive shell). You could also add "$@" after the option to relay any other arguments, which might then make the shell usable as a general 'set environment and execute command' tool. You might want to omit the -i if there are other arguments, leading to:
#!/bin/ksh
export TEREDO_WORMS=ukelele
exec $SHELL "${@-'-i'}"
The "${#-'-i'}" bit means 'if the argument list contains at least one argument, use the original argument list; otherwise, substitute -i for the non-existent arguments'.
You should use modules, see http://modules.sourceforge.net/
EDIT: The modules package has not been updated since 2012 but still works OK for the basics. All the new features, bells and whistles happen in lmod these days (which I like more): https://www.tacc.utexas.edu/research-development/tacc-projects/lmod
Another workaround that I don't see mentioned is to write the variable value to a file.
I ran into a very similar issue where I wanted to be able to run the last-set test (instead of all my tests). My first plan was to write one command to set the env variable TESTCASE, and then have another command that would use it to run the test. Needless to say, I had the exact same issue as you did.
But then I came up with this simple hack:
First command (testset):
#!/bin/bash
if [ $# -eq 1 ]
then
echo "$1" > ~/.TESTCASE
echo "TESTCASE has been set to: $1"
else
echo "Come again?"
fi
Second command (testrun):
#!/bin/bash
TESTCASE=$(cat ~/.TESTCASE)
drush test-run $TESTCASE
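Usage then looks like this (testset and testrun as above, executable and on your PATH; the test name is hypothetical):

$ testset user_login_test
TESTCASE has been set to: user_login_test
$ testrun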
You can instruct the child process to print its environment variables (by calling "env"), then loop over the printed environment variables in the parent process and call "export" on those variables.
The following code is based on Capturing output of find . -print0 into a bash array
If the parent shell is bash, you can use
while IFS= read -r -d $'\0' line; do
export "$line"
done < <(bash -s <<< 'export VARNAME=something; env -0')
echo $VARNAME
If the parent shell is dash, then read does not provide the -d flag and the code gets more complicated:
TMPDIR=$(mktemp -d)
mkfifo $TMPDIR/fifo
(bash -s << "EOF"
export VARNAME=something
while IFS= read -r -d $'\0' line; do
echo $(printf '%q' "$line")
done < <(env -0)
EOF
) > $TMPDIR/fifo &
while read -r line; do export "$(eval echo $line)"; done < $TMPDIR/fifo
rm -r $TMPDIR
echo $VARNAME
Under OS X bash you can do the following:
Create the bash script file to unset the variable
#!/bin/bash
unset http_proxy
Make the file executable
sudo chmod 744 unsetvar
Create an alias:
alias unsetvar='source /your/path/to/the/script/unsetvar'
It should be ready to use as long as you have the folder containing your script file appended to the path.
It's not what I would call outstanding, but this also works if you need to call the script from the shell anyway. It's not a good solution, but for a single static environment variable, it works well enough.
1.) Create a script with a condition that exits either 0 (Successful) or 1 (Not successful)
if [[ $foo == "True" ]]; then
exit 0
else
exit 1
fi
2.) Create an alias that is dependent on the exit code.
alias setmyvar='myscript.sh && export MyVariable'
You call the alias, which calls the script, which evaluates the condition, which must exit zero (via the '&&') in order to set the environment variable in the parent shell.
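A full session might look like this (a sketch; the alias, script and variable names are hypothetical):

$ export foo=True
$ alias setmyvar='./myscript.sh && export MyVariable=success'
$ setmyvar; echo "$MyVariable"
success
$ export foo=False
$ unset MyVariable
$ setmyvar; echo "$MyVariable"    # script exits 1, so nothing is exported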
This is flotsam, but it can be useful in a pinch.
You can invoke another Bash with a different bash_profile.
Also, you can create a special bash_profile for use in a multi-bash_profile environment.
Remember that you can use functions inside of bash_profile, and those functions will be available globally.
For example, function user { export USER_NAME=$1; } can set a variable at runtime, for example: user olegchir && env | grep olegchir
Another option is to use "Environment Modules" (http://modules.sourceforge.net/). This unfortunately introduces a third language into the mix. You define the environment in the Tcl language, but there are a few handy commands for typical modifications (prepend vs. append vs. set). You will also need to have environment modules installed. You can then use module load XXX to name the environment you want. The module command is basically a fancy alias for the eval mechanism described above by Thomas Kammeyer. The main advantage here is that you can maintain the environment in one language and rely on "Environment Modules" to translate it to sh, ksh, bash, csh, tcsh, zsh, python (?!?!!), etc.
I created a solution using pipes, eval and signal.
# Note: die and ppid are helper functions assumed to be defined elsewhere (not shown).
parent() {
if [ -z "$G_EVAL_FD" ]; then
die 1 "Run parent_setup first in the parent process"
fi
if [ $(ppid) = "$$" ]; then
"$@"
else
kill -SIGUSR1 $$
echo "$@" >&$G_EVAL_FD
fi
}
parent_setup() {
G_EVAL_FD=99
tempfile=$(mktemp -u)
mkfifo "$tempfile"
eval "exec $G_EVAL_FD<>'$tempfile'"
rm -f "$tempfile"
trap "read CMD <&$G_EVAL_FD; eval \"\$CMD\"" USR1
}
parent_setup   # run in the parent shell context
( A=1 ); echo $A # prints nothing
( parent A=1 ); echo $A # prints 1
It might work with any command.
I don't see any answer documenting how to work around this problem with cooperating processes. A common pattern with things like ssh-agent is to have the child process print an expression which the parent can eval.
bash$ eval $(ssh-agent)
For example, ssh-agent has options to select Csh or Bourne-compatible output syntax.
bash$ ssh-agent
SSH2_AUTH_SOCK=/tmp/ssh-era/ssh2-10690-agent; export SSH2_AUTH_SOCK;
SSH2_AGENT_PID=10691; export SSH2_AGENT_PID;
echo Agent pid 10691;
(This causes the agent to start running, but doesn't allow you to actually use it, unless you now copy-paste this output to your shell prompt.) Compare:
bash$ ssh-agent -c
setenv SSH2_AUTH_SOCK /tmp/ssh-era/ssh2-10751-agent;
setenv SSH2_AGENT_PID 10752;
echo Agent pid 10752;
(As you can see, csh and tcsh use setenv to set variables.)
Your own program can do this, too.
bash$ foo=$(makefoo)
Your makefoo script would simply calculate and print the value, and let the caller do whatever they want with it -- assigning it to a variable is a common use case, but probably not something you want to hard-code into the tool which produces the value.
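A makefoo along those lines might be as simple as (a sketch; the computed value is a stand-in, and output will vary):

$ cat makefoo
#!/bin/sh
# Calculate and print the value; the caller decides what to do with it.
printf '%s\n' "foo-$(hostname)"
$ foo=$(./makefoo)
$ echo "$foo"
foo-myhost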
Technically, that is correct -- only 'eval' doesn't fork another shell. However, from the point of view of the application you're trying to run in the modified environment, the difference is nil: the child inherits the environment of its parent, so the (modified) environment is conveyed to all descending processes.
Ipso facto, the changed environment variable 'sticks' -- as long as you are running under the parent program/shell.
If it is absolutely necessary for the environment variable to remain after the parent (Perl or shell) has exited, it is necessary for the parent shell to do the heavy lifting. One method I've seen in the documentation is for the current script to spawn an executable file with the necessary 'export' language, and then trick the parent shell into executing it -- always being cognizant of the fact that you need to preface the command with 'source' if you're trying to leave a non-volatile version of the modified environment behind. A Kluge at best.
The second method is to modify the script that initiates the shell environment (.bashrc or whatever) to contain the modified parameter. This can be dangerous -- if you hose up the initialization script it may make your shell unavailable the next time it tries to launch. There are plenty of tools for modifying the current shell; by affixing the necessary tweaks to the 'launcher' you effectively push those changes forward as well.
Generally not a good idea; if you only need the environment changes for a particular application suite, you'll have to go back and return the shell launch script to its pristine state (using vi or whatever) afterwards.
In short, there are no good (and easy) methods. Presumably this was made difficult to ensure the security of the system was not irrevocably compromised.
The short answer is no, you cannot alter the environment of the parent process, but it seems like what you want is an environment with custom environment variables and the shell that the user has chosen.
So why not simply something like
#!/usr/bin/env bash
FOO=foo $SHELL
Then when you are done with the environment, just exit.
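In use it looks like this (a sketch; myenv is a hypothetical name for the script above):

$ ./myenv       # starts a new $SHELL with FOO in its environment
$ echo $FOO
foo
$ exit          # returns to the original, unmodified shell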
You could always use aliases
alias your_env='source ~/scripts/your_env.sh'
I did this many years ago. If I remember correctly, I included an alias in each of .bashrc and .cshrc, with parameters, aliasing the respective forms of setting the environment to a common form.
Then the script that you source in either of the two shells contains a command in that common form, which is suitably aliased in each shell.
If I find the concrete aliases, I will post them.
Other than writing conditionals depending on what $SHELL/$TERM is set to, no. What's wrong with using Perl? It's pretty ubiquitous (I can't think of a single UNIX variant that doesn't have it), and it'll spare you the trouble.
Can someone explain to me why, when I copy and paste the following commands into the terminal, the colored text displays correctly, but when I run them via sh myscript.sh it does not?
blue='\e[1;34m'
NC='\e[0m'
echo -e "${blue}Test${NC}"
EDIT
Sudo is not the problem. If I copy the above and paste it directly into the terminal, everything works. If I run it through the file, with sh myscript.sh, it does not work.
Probably because sh isn't bash on your system.
$ file /bin/sh
/bin/sh: symbolic link to `dash'
Try
bash myscript.sh
Your interactive shell seems to be GNU Bash, while sh is a generic POSIX shell, which actually may be dash, busybox sh or something else. The problem is that neither -e option for echo nor \e are POSIX-compliant.
But you can easily use printf instead of echo -e (do not forget to explicitly specify newline character \n) and \033 instead of \e:
blue='\033[1;34m'
NC='\033[0m'
printf "${blue}%s${NC}\n" 'Test'
Or, of course, you can just use bash (as Elliott Frisch suggested) if you are sure that it will be available on the target system.
Also, I should point out that what you've done is not the right way to run shell scripts at all. If you're writing a standalone script, you'd better use a hashbang and set the execute bit on the file.
$ cat myscript
#!/bin/sh
blue='\033[1;34m'
NC='\033[0m'
printf "${blue}%s${NC}\n" 'Test'
$ chmod +x myscript
$ ./myscript
But if you're writing a command sequence (a macro, if you will) for an interactive shell, there is the source (or simply .) command:
$ source myscript
(Then all of the above about POSIX compliance does not matter, of course.)
I'm trying to declare a function in tcsh and to call it.
#! /bin/tcsh -f
helloWorld () {
echo "a"
}
helloWorld
I'm getting the following error:
< 512 mews2895 ~/tmp/script> 1.sh
Badly placed ()'s.
Does anyone here know what the problem might be?
Thanks
tcsh does not support functions.
Best solution: Use a shell that does, such as bash.
If you must use tcsh for some reason, aliases will solve your immediate problem, but are much weaker than functions.
alias helloWorld 'echo "a"'
Another possible solution is to invoke a separate script. (You'll have to ensure that the invoked script is in your $PATH.)
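tcsh aliases can take arguments via history substitution, which recovers some function-like behavior (a sketch):

alias greet 'echo "Hello, \!:1"'
greet World     # prints: Hello, World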
There are no functions in tcsh, so I see two options:
Use aliases:
https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.2.0/com.ibm.zos.v2r2.bpxa500/alias.htm
Use goto. (People tend to criticize go-to, but it actually depends on the context.)
There is another option: use source if you want to organize your code across multiple files.
To run a shell script in your current environment, without creating a new process, use the source command. You could run the calculate shell script this way: source calculate. Should you want to use a shell script that updates a variable in the current environment, run it with the source command.
src: OS/390 UNIX System Services tcsh (C Shell) Kit Support Guide - IBM
I think that 'use a different shell' should not be a valid response.
Regards,
Pablo
Try the code below for function-like behavior in tcsh:
#! /bin/tcsh -f
goto helloWorld
helloWorld:
echo "a"
Although the C Shell lacks functions, aliases serve as a workaround. However, pipes and I/O redirection don't work well with multi-line aliases unless eval is used. To avoid eval, keep the script in a variable and source it from a FIFO:
setenv qscr 'if -e $1 then\
echo OK\
else\
echo Not OK\
endif'
mkfifo ~/qscr
alias qscr '( echo "$qscr:q" > ~/qscr & ) ; source ~/qscr'
Or have it in an alias alone, with echo:
alias qscr '( echo '\''if -e $1 then\\
echo OK\\
else\\
echo Not OK\\
endif'\'' > ~/qscr & ) ; source ~/qscr'
mkfifo ~/qscr