Hide a bash function's internals - linux

My .bashrc looks something like this...
export PERL5LIB="/tools/perl/Linux/${PLAT}/lib/perl5/5.10.0/${PLAT}-thread-multi"
export PERL5LIB="${PERL5LIB}:/tools/perl/Linux/${PLAT}/lib/perl5/5.10.0"
function dev {
export PERL5LIB="/dev/tools/perl/Linux/${PLAT}/lib/perl5/5.10.0/${PLAT}-thread-multi"
export PERL5LIB="${PERL5LIB}:/dev/tools/perl/Linux/${PLAT}/lib/perl5/5.10.0"
}
The problem is that when I grep for PERL5LIB I see everything.
> env | grep PERL
PERL5LIB=/tools/perl/Linux/x86_64/lib/perl5/5.10.0/x86_64-thread-multi:/tools/perl/Linux/x86_64/lib/perl5/5.10.0
export PERL5LIB="/dev/tools/perl/Linux/${PLAT}/lib/perl5/5.10.0/${PLAT}-thread-multi";
export PERL5LIB="${PERL5LIB}:/dev/tools/perl/Linux/${PLAT}/lib/perl5/5.10.0";
So it's picking up the stuff inside of my "dev" function. Is there a way to hide the contents of a function? Or do I just need to get used to getenv.. Old habits are hard to break..

Run type env at your bash prompt, and provide the output; for me, this indicates that env is /usr/bin/env, a separate executable; such executables have no way to know anything about functions or non-exported variables.
That said, without fixing the underlying problem (the likely cause being a bash built-in, function or alias running in place of /usr/bin/env, which the output of the type command will show), there's a workaround available: env | grep '^PERL'. The caret anchors the match to the start of the line, so only lines beginning with PERL are shown (as opposed to lines containing PERL anywhere), and function contents are indented in the output of set (which appears to be what is running in place of env; again, type env should give a clue to the cause).
One point of clarification: set is a bash builtin which, when run with no arguments, dumps defined variables (environment or otherwise) and functions; when run with arguments, it has some other, completely different (and POSIX-specified) behaviors. env, as an external program, has no access to unexported variables or to functions defined within the shell that calls it.
(set is actually not bash-specific, but rather is specified by POSIX to dump all shell variables; its additional functionality of dumping function definitions is to my knowledge an extension beyond the letter of the standard).
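For instance, a quick way to check what is actually running, and to look at just the variable itself, assuming a bash prompt (declare -p is a standard bash builtin; the commented output is what you would typically see when env is the real external binary):
type env               # e.g. "env is /usr/bin/env" when it is the external program
env | grep '^PERL'     # the ^ anchor matches only lines that begin with PERL
declare -p PERL5LIB    # prints only that variable's own declaration, never function bodies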

Try:
( set -o posix ; set )
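In POSIX mode, set prints shell variables without function definitions, and the surrounding parentheses run everything in a subshell so the posix option does not stick to your interactive shell. Combined with the filter from the question, a minimal sketch would be:
( set -o posix ; set ) | grep '^PERL'    # variables only, no function bodies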

Related

Linux environment variables referencing other variables

I am using a boot script from a network vendor on RedHat 7.2. The start script sets up the environment with several variables; however, I don't think these variables are set up correctly.
I have added the start-up script to /etc/environment and I can see that the variables are defined and available to all users.
This is an example of how the variables are defined in the script:
export V1=/opt/nameofsupplier/sdk/CentOS-RHEL-7-x86_64
export V2=${V1}/lib/cam
There are many more, if I try this from a terminal:
cd $V1
It works fine, however if I try:
cd $V2
I get:
bash: cd $V1/lib/cam: No such file or directory
The path is valid, and if I do this in the shell:
export V2=${V1}/lib/cam
cd $V2
It works without any error, how do I fix the script?
You may be right in suspecting an ill-definition of these variables.
/etc/environment can only contain variable definitions; it is not executed like a normal script (its documentation explicitly notes that variable expansion does not work in /etc/environment), so no expansion of V1 takes place in the definition of V2. Therefore V2 is not correctly defined.
Try sourcing the /etc/environment lines from the system-wide /etc/profile (or its equivalent, depending on the shells of your users), or from specific users' ~/.profile files. As a last resort you can simply copy the respective lines of /etc/environment into the above-mentioned scripts (but this will make them harder to maintain).
You could also correct the definitions in /etc/environment not to rely on expansion, i.e. like this:
export V2=/opt/nameofsupplier/sdk/CentOS-RHEL-7-x86_64/lib/cam
(assuming there are not too many of them to correct). But this will also be hard to maintain.
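As a sketch of the first suggestion, assuming a bash-compatible login shell on RHEL, you could put the expanded definitions into a small file under /etc/profile.d (the file name vendor-sdk.sh is only an example); such files are sourced by the shell at login, so the ${V1} expansion works normally:
# /etc/profile.d/vendor-sdk.sh  (hypothetical file name)
export V1=/opt/nameofsupplier/sdk/CentOS-RHEL-7-x86_64
export V2=${V1}/lib/cam    # expansion works here because the shell sources this file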

Aliases are not executed in Shell script

In my bashrc file I have a number of aliases. But if I execute them via a shell script, I don't get the expected output. Why is this, and is there a way to solve the problem?
Thanks in advance.
Aliases (as set using alias name=value) are only used in an interactive context, i.e. when the user types something on the command line. They are never executed by a script (unless a non-interactive shell is explicitly tweaked to do this using shopt -s expand_aliases):
#!/bin/bash
alias ttt=date
ttt # will fail!
Sourcing a configuration script which defines aliases will not change anything about this. Scripts simply will not execute aliases.
To achieve what you want, rewrite your aliases as shell functions:
#!/bin/bash
ttt() {
date
}
ttt # will succeed!
Shell functions can replace aliases completely but there are some more things to know and consider:
You can even export shell functions so that child shells also have them. Use export -f ttt for this.
Shell functions can override other commands so they can interfere in the behaviour of scripts (unlike aliases which are never executed in scripts). Keep this in mind in case you plan to override things like cd or ls.
An overridden built-in of the shell (e. g. cd) can still be reached by calling it as builtin cd /my/direc/tory.
Argument handling is quite different from aliases (and much more powerful).
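As an illustration of the argument-handling and export -f points above, here is a minimal sketch (the function name and paths are made up for the example): the function receives real positional parameters, which an alias never does, and it can be exported to child bash shells.
#!/bin/bash
grep_logs() {
    # search for a pattern ($1) under a directory ($2, defaulting to /var/log)
    grep -rn -- "$1" "${2:-/var/log}"
}
export -f grep_logs                          # child bash shells see the function too
grep_logs "connection refused" /tmp/mylogs   # hypothetical directory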

executing a linux command from a TCL file

I want to run a command inside a tcl file. According to Unix examples, I wrote:
....
exec export LD_LIBRARY_PATH=/opt/gcc-4.1.2-built/lib64
puts $gofile "#!/bin/bash
....
However I get this error:
couldn't execute "export": no such file or directory
while executing
"exec export LD_LIBRARY_PATH=/opt/gcc-4.1.2-built/lib64"
If i remove that exec line, there is no error.
To set an environment variable, you don't use exec but rather just write to the appropriate element of the global env associative array (the :: is “this is a global variable” and can be omitted if you're writing a top-level script):
set ::env(LD_LIBRARY_PATH) "/opt/gcc-4.1.2-built/lib64"
Then you can just exec and the value will be inherited correctly:
puts $gofile "#!/bin/bash
...."
(I'm a little surprised that you're passing in a multi-line script like that, but if it works for you, that's cool. Still, I find that if I'm doing that it's usually better to split things into multiple files. It reduces the amount of head-scratching and confusion since you don't end up fighting with more levels of quoting than the minimum.)

How can I define a bash alias as a sequence of multiple commands? [duplicate]

This question already has answers here:
Multiple commands in an alias for bash
I know how to configure aliases in bash, but is there a way to configure an alias for a sequence of commands?
I.e., say I want one command to change to a particular directory and then run another command.
In addition, is there a way to setup a command that runs "sudo mycommand", then enters the password? In the MS-DOS days I'd be looking for a .bat file but I'm unsure of the linux (or in this case Mac OSX) equivalent.
For chaining a sequence of commands, try this:
alias x='command1;command2;command3;'
Or you can do this:
alias x='command1 && command2 && command3'
The && makes it execute subsequent commands only if the previous one succeeds.
Also for entering passwords interactively, or interfacing with other programs like that, check out expect. (http://expect.nist.gov/)
You mention BAT files so perhaps what you want is to write a shell script. If so then just enter the commands you want line-by-line into a file like so:
command1
command2
and ask bash to execute the file:
bash myscript.sh
If you want to be able to invoke the script directly without typing "bash" then add the following line as the first line of the file:
#! /bin/bash
command1
command2
Then mark the file as executable:
chmod 755 myscript.sh
Now you can run it just like any other executable:
./myscript.sh
Note that unix doesn't really care about file extensions. You can simply name the file "myscript" without the ".sh" extension if you like. It's that special first line that is important. For example, if you want to write your script in the Perl programming language instead of bash the first line would be:
#! /usr/bin/perl
That first line tells your shell what interpreter to invoke to execute your script.
Also, if you now copy your script into one of the directories listed in the $PATH environment variable then you can call it from anywhere by simply typing its file name:
myscript.sh
Even tab-completion works. Which is why I usually include a ~/bin directory in my $PATH so that I can easily install personal scripts. And best of all, once you have a bunch of personal scripts that you are used to having you can easily port them to any new unix machine by copying your personal ~/bin directory.
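A minimal sketch of that setup, assuming a bash shell (myscript.sh is just the example name from above):
mkdir -p ~/bin
cp myscript.sh ~/bin/
# add ~/bin to PATH if your distribution does not already do so (e.g. in ~/.bashrc)
export PATH="$HOME/bin:$PATH"
myscript.sh    # now runs from anywhere, with tab-completion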
It's probably easier to define functions for these types of things than aliases; it keeps things more readable if you want to do more than a command or two:
In your .bashrc
perform_my_command() {
    pushd /some_dir
    my_command "$@"
    popd
}
Then on the command line you can simply do:
perform_my_command my_parameter my_other_parameter "my quoted parameter"
You could do anything you like in a function, call other functions, etc.
You may want to have a look at the Advanced Bash Scripting Guide for in-depth knowledge.
For the alias you can use this:
alias sequence='command1 -args; command2 -args;'
or if the second command must be executed only if the first one succeeds use:
alias sequence='command1 -args && command2 -args'
Your best bet is probably a shell function instead of an alias if the logic becomes more complex or if you need to add parameters (bash aliases do not take parameters; any arguments are simply appended after the expanded text).
This function can be defined in your .profile or .bashrc. The subshell is to avoid changing your working directory.
function myfunc {
( cd /tmp; command )
}
then from your command prompt
$ myfunc
For your second question you can just add your command to /etc/sudoers (if you are completely sure of what you are doing)
myuser ALL = NOPASSWD: \
/bin/mycommand
Apropos multiple commands in a single alias, you can use one of the logical operators to combine them. Here's one to switch to a directory and do an ls on it
alias x="cd /tmp && ls -al"
Another option is to use a shell function. These work in sh, bash and zsh; I don't know enough about other shells to be sure whether they work there too.
As for the sudo thing, if you want that (although I don't think it's a good idea), the right way to go is to alter the /etc/sudoers file to get what you want.
You can embed a function declaration, followed by a call to that function, in the alias itself, like so:
alias my_alias='f() { do_stuff_with "$@" ...; }; f'
The benefit of this approach over just declaring the function by itself is that you can have peace of mind that your function is not going to be overridden by some other script you're sourcing (or using .), which might use its own helper under the same name.
For example, suppose you have a script init-my-workspace.sh that you call like . init-my-workspace.sh or source init-my-workspace.sh, whose purpose is to set or export a bunch of environment variables (e.g., JAVA_HOME, PYTHONPATH etc.). If that script happens to define a function my_alias as well, then you're out of luck, as the latest function declaration within the same shell instance wins.
Conversely, aliases have a separate namespace, and even in case of a name clash they are looked up first. Therefore, for customization relevant to interactive usage, you should only ever use aliases.
Finally, note that the practice of putting all the aliases in the same place (e.g., ~/.bash_aliases) enables you to easily spot any name clashes.
You can also write a shell function; for example, one that combines cd and ls (sketched below).
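A minimal sketch of such a cd-and-ls combination (the name cdls is just an illustration):
cdls() {
    cd "$1" && ls -al
}
cdls /tmp    # change to /tmp and list it in one step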

Defining a variable with or without export

What is export for?
What is the difference between:
export name=value
and
name=value
export makes the variable available to sub-processes.
That is,
export name=value
means that the variable name is available to any process you run from that shell process. If you want a process to make use of this variable, use export, and run the process from that shell.
name=value
means the variable scope is restricted to the shell, and is not available to any other process. You would use this for (say) loop variables, temporary variables etc.
It's important to note that exporting a variable doesn't make it available to parent processes. That is, specifying and exporting a variable in a spawned process doesn't make it available in the process that launched it.
To illustrate what the other answers are saying:
$ foo="Hello, World"
$ echo $foo
Hello, World
$ bar="Goodbye"
$ export foo
$ bash
bash-3.2$ echo $foo
Hello, World
bash-3.2$ echo $bar
bash-3.2$
It has been said that it's not necessary to export in bash when spawning subshells, while others said the exact opposite. It is important to note the difference between subshells (those that are created by (), ``, $() or loops) and subprocesses (processes that are invoked by name, for example a literal bash appearing in your script).
Subshells will have access to all variables from the parent, regardless of their exported state.
Subprocesses will only see the exported variables.
What is common in these two constructs is that neither can pass variables back to the parent shell.
$ noexport=noexport; export export=export; (echo subshell: $noexport $export; subshell=subshell); bash -c 'echo subprocess: $noexport $export; subprocess=subprocess'; echo parent: $subshell $subprocess
subshell: noexport export
subprocess: export
parent:
There is one more source of confusion: some think that 'forked' subprocesses are the ones that don't see non-exported variables. Usually fork()s are immediately followed by exec()s, and that's why it would seem that the fork() is the thing to look for, while in fact it's the exec(). You can run commands without fork()ing first with the exec command, and processes started by this method will also have no access to unexported variables:
$ noexport=noexport; export export=export; exec bash -c 'echo execd process: $noexport $export; execd=execd'; echo parent: $execd
execd process: export
Note that we don't see the parent: line this time, because we have replaced the parent shell with the exec command, so there's nothing left to execute that command.
This answer is wrong but retained for historical purposes. See 2nd edit below.
Others have answered that export makes the variable available to subshells, and that is correct but merely a side effect. When you export a variable, it puts that variable in the environment of the current shell (ie the shell calls putenv(3) or setenv(3)).
The environment of a process is inherited across exec, making the variable visible in subshells.
Edit (with 5 year's perspective):
This is a silly answer. The purpose of 'export' is to make variables "be in the environment of subsequently executed commands", whether those commands be subshells or subprocesses. A naive implementation would be to simply put the variable in the environment of the shell, but this would make it impossible to implement export -p.
2nd Edit (with another 5 years in passing).
This answer is just bizarre. Perhaps I had some reason at one point to claim that bash puts the exported variable into its own environment, but those reasons were not given here and are now lost to history. See Exporting a function local variable to the environment.
export NAME=value for settings and variables that have meaning to a subprocess.
NAME=value for temporary or loop variables private to the current shell process.
In more detail, export marks the variable name so that it is copied into the environment of subprocesses (and their subprocesses) when they are created. No name or value is ever copied back from the subprocess.
A common error is to place a space around the equal sign:
$ export FOO = "bar"
bash: export: `=': not a valid identifier
Only the exported variable (B) is seen by the subprocess:
$ A="Alice"; export B="Bob"; echo "echo A is \$A. B is \$B" | bash
A is . B is Bob
Changes in the subprocess do not change the main shell:
$ export B="Bob"; echo 'B="Banana"' | bash; echo $B
Bob
Variables marked for export have values copied when the subprocess is created:
$ export B="Bob"; echo '(sleep 30; echo "Subprocess 1 has B=$B")' | bash &
[1] 3306
$ B="Banana"; echo '(sleep 30; echo "Subprocess 2 has B=$B")' | bash
Subprocess 1 has B=Bob
Subprocess 2 has B=Banana
[1]+ Done echo '(sleep 30; echo "Subprocess 1 has B=$B")' | bash
Only exported variables become part of the environment (man environ):
$ ALICE="Alice"; export BOB="Bob"; env | grep "ALICE\|BOB"
BOB=Bob
So, now it should be as clear as is the summer's sun! Thanks to Brian Agnew, alexp, and William Pursell.
It should be noted that you can export a variable and later change the value. The variable's changed value will be available to child processes. Once export has been set for a variable you must do export -n <var> to remove the property.
$ K=1
$ export K
$ K=2
$ bash -c 'echo ${K-unset}'
2
$ export -n K
$ bash -c 'echo ${K-unset}'
unset
export will make the variable available to all shells forked from the current shell.
As you might already know, UNIX allows processes to have a set of environment variables, which are key/value pairs, both key and value being strings.
The operating system is responsible for keeping these pairs for each process separately.
A program can access its environment variables through this UNIX API:
char *getenv(const char *name);
int setenv(const char *name, const char *value, int override);
int unsetenv(const char *name);
Processes also inherit environment variables from their parent processes. The operating system is responsible for creating a copy of all "envars" at the moment the child process is created.
Bash, among other shells, is capable of setting its environment variables on user request. This is what export exists for.
export is a Bash command that sets an environment variable for Bash. All variables set with this command are inherited by all processes that this Bash creates.
More on Environment in Bash
Another kind of variable in Bash is the internal variable. Bash is not just an interactive shell; it is in fact a script interpreter, and like any other interpreter (e.g. Python) it is capable of keeping its own set of variables. It should be mentioned that Bash (unlike Python) supports only string variables.
The notation for defining Bash variables is name=value. These variables stay inside Bash and have nothing to do with the environment variables kept by the operating system.
More on Shell Parameters (including variables)
Also worth noting that, according to Bash reference manual:
The environment for any simple command or function may be augmented
temporarily by prefixing it with parameter assignments, as described
in Shell Parameters. These assignment statements affect only the
environment seen by that command.
To sum things up (a short demonstration follows this list):
export is used to set an environment variable in the operating system. This variable will be available to all child processes created by the current Bash process from then on.
Bash variable notation (name=value) is used to set local variables available only to the current Bash process.
Bash variable notation prefixing another command creates an environment variable only for the scope of that command.
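A minimal demonstration of the three cases, assuming an interactive bash shell:
FOO=local                                         # plain assignment: stays in this shell
bash -c 'echo "child sees FOO=${FOO:-nothing}"'   # prints "child sees FOO=nothing"
export FOO=exported                               # export: copied to every child from now on
bash -c 'echo "child sees FOO=${FOO:-nothing}"'   # prints "child sees FOO=exported"
BAR=temp bash -c 'echo "BAR=${BAR:-nothing}"'     # prefix assignment: only for that one command
echo "parent sees BAR=${BAR:-nothing}"            # prints "parent sees BAR=nothing"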
The accepted answer implies this, but I'd like to make explicit the connection to shell builtins:
As mentioned already, export will make a variable available to both the shell and children. If export is not used, the variable will only be available in the shell, and only shell builtins can access it.
That is,
tango=3
env | grep tango # prints nothing, since env is a child process
set | grep tango # prints tango=3 - "type set" shows `set` is a shell builtin
Two of the creators of UNIX, Brian Kernighan and Rob Pike, explain this in their book "The UNIX Programming Environment". Google for the title and you'll easily find a pdf version.
They address shell variables in section 3.6, and focus on the use of the export command at the end of that section:
When you want to make the value of a variable accessible in sub-shells, the shell's export command should be used. (You might think about why there is no way to export the value of a variable from a sub-shell to its parent).
Here's yet another example:
VARTEST="value of VARTEST"
#export VARTEST="value of VARTEST"
sudo env | grep -i vartest
sudo echo ${SUDO_USER} ${SUDO_UID}:${SUDO_GID} "${VARTEST}"
sudo bash -c 'echo ${SUDO_USER} ${SUDO_UID}:${SUDO_GID} "${VARTEST}"'
Only by using export VARTEST is the value of VARTEST available in sudo bash -c '...'!
For further examples see:
http://mywiki.wooledge.org/SubShell
bash-hackers.org/wiki/doku.php/scripting/processtree
Just to show the difference between an exported variable being in the environment (env) and a non-exported variable not being in the environment:
If I do this:
$ MYNAME=Fred
$ export OURNAME=Jim
then only $OURNAME appears in the env. The variable $MYNAME is not in the env.
$ env | grep NAME
OURNAME=Jim
but the variable $MYNAME does exist in the shell
$ echo $MYNAME
Fred
By default, variables created within a script are only available to the current shell; child processes (sub-shells) will not have access to values that have been set or modified. Allowing child processes to see the values requires use of the export command.
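For instance, a minimal two-script sketch (the file names parent.sh and child.sh are made up for the example):
# parent.sh
MODE=debug            # not exported: child.sh will not see it
export LEVEL=3        # exported: child.sh will see it
bash child.sh

# child.sh
echo "MODE=${MODE:-unset}  LEVEL=${LEVEL:-unset}"    # prints: MODE=unset  LEVEL=3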
As yet another corollary to the existing answers here, let's rephrase the problem statement.
The answer to "should I export" is identical to the answer to the question "Does your subsequent code run a command which implicitly accesses this variable?"
For a properly documented standard utility, the answer to this can be found in the ENVIRONMENT section of the utility's man page. So, for example, the git manual page mentions that GIT_PAGER controls which utility is used to browse multi-page output from git. Hence,
# XXX FIXME: buggy
branch="main"
GIT_PAGER="less"
git log -n 25 --oneline "$branch"
git log "$branch"
will not work correctly, because you did not export GIT_PAGER. (Of course, if your system already declared the variable as exported somewhere else, the bug is not reproducible.)
We are explicitly referring to the variable $branch, and the git program code doesn't refer to a system variable branch anywhere (as also suggested by the fact that its name is written in lower case; but many beginners erroneously use upper case for their private variables, too! See Correct Bash and shell script variable capitalization for a discussion) so there is no reason to export branch.
The correct code would look like
branch="main"
export GIT_PAGER="less"
git log -n 25 --oneline "$branch"
git log -p "$branch"
(or equivalently, you can explicitly prefix each invocation of git with the temporary assignment
branch="main"
GIT_PAGER="less" git log -n 25 --oneline "$branch"
GIT_PAGER="less" git log -p "$branch"
In case it's not obvious, the shell script syntax
var=value command arguments
temporarily sets var to value for the duration of the execution of
command arguments
and exports it to the command subprocess, and then afterwards, reverts it back to the previous value, which could be undefined, or defined with a different - possibly empty - value, and unexported if that's what it was before.)
For internal, ad-hoc or otherwise poorly documented tools, you simply have to know whether they silently inspect their environment. This is rarely important in practice, outside of a few specific use cases, such as passing a password or authentication token or other secret information to a process running in some sort of container or isolated environment.
If you really need to know, and have access to the source code, look for code which uses the getenv library call (or on Windows, with my condolences, variations like getenv_s, w_getenv, etc). For some scripting languages (such as Perl or Ruby), look for ENV. For Python, look for os.environ (but notice also that e.g. from os import environ as foo means that foo is now an alias for os.environ). In Node, look for process.env. For C and related languages, look for envp (but this is just a convention for what to call the optional third argument to main, after argc and argv; the language lets you call them anything you like). For shell scripts (as briefly mentioned above), perhaps look for variables with uppercase or occasionally mixed-case names, or usage of the utility env. Many informal scripts have undocumented but discoverable assignments usually near the beginning of the script; in particular, look for the ${var:=default} default-assignment parameter expansion.
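As a rough sketch of that kind of audit, assuming GNU grep and a source tree under the current directory (the file patterns are only examples):
grep -rn 'getenv' --include='*.c' --include='*.h' .
grep -rn 'os\.environ' --include='*.py' .
grep -rn 'process\.env' --include='*.js' --include='*.ts' .
grep -rn -E '\$\{[A-Za-z_][A-Za-z0-9_]*:?=' --include='*.sh' .   # shell default-assignment expansions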
For a brief demo, here is a shell script which invokes a Python script which looks for $NICKNAME, and falls back to a default value if it's unset.
#!/bin/sh
NICKNAME="Handsome Guy"
demo () {
python3 <<\____
from os import environ as env
print("Hello, %s" % env.get("NICKNAME", "Anonymous Coward"))
____
}
demo
# prints "Hello, Anonymous Coward"
# Fix: forgot export
export NICKNAME
demo
# prints "Hello, Handsome Guy"
As another tangential remark, let me reiterate that you only ever need to export a variable once. Many beginners cargo-cult code like
# XXX FIXME: redundant exports
export PATH="$HOME/bin:$PATH"
export PATH="/opt/acme/bin:$PATH"
but typically, your operating system has already declared PATH as exported, so this is better written
PATH="$HOME/bin:$PATH"
PATH="/opt/acme/bin:$PATH"
or perhaps refactored to something like
for p in "$HOME/bin" "/opt/acme/bin"
do
    case :$PATH: in
        *:"$p":*) ;;
        *) PATH="$p:$PATH";;
    esac
done
# Avoid polluting the variable namespace of your interactive shell
unset p
which avoids adding duplicate entries to your PATH.
Although not explicitly mentioned in the discussion, it is NOT necessary to use export when spawning a subshell from inside bash since all the variables are copied into the child process.
