When creating Bash scripts, I have always had a line right at the start defining the PATH environment variable. I recently discovered that this makes the script less portable, as the PATH value differs between Linux distributions (in my case, I moved the script from Arch Linux to Ubuntu and got errors because various executables weren't in the same places).
Is it possible to copy the PATH environment variable defined by the login shell into the current Bash script?
EDIT:
I see that my question has caused some confusion: some readers think I want to change the PATH environment variable of the login shell with a bash script, which is the exact opposite of what I want.
This is what I currently have at the top of one of my Bash scripts:
#!/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl
# Test if an internet connection is present
wget -O /dev/null google.com
I want to replace that second line with something that copies the value of PATH from the login shell into the script environment:
#!/bin/bash
PATH=$(command that copies value of PATH from login shell)
# Test if an internet connection is present
wget -O /dev/null google.com
EDIT 2: Sorry for the big omission on my part. I forgot to mention that the scripts in question are run on a schedule through cron. Cron creates its own environment for running the scripts, which neither uses the login shell's environment variables nor modifies them. I just tried running the following script from cron:
#!/bin/bash
echo $PATH >> /home/user/output.txt
The result is as follows. As you can see, the PATH variable used by cron is different from the login shell's:
user@ubuntu_router:~$ cat output.txt
/usr/bin:/bin
user@ubuntu_router:~$ echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
Don't touch the user's PATH at all unless you have a specific reason. Not doing anything will (basically) accomplish what you ask.
You don't have to do anything to get the user's normal PATH since every process inherits the PATH and all other environment variables automatically.
If you need to add something nonstandard to the PATH, the usual approach is to prepend (or append) the new directory to the user's existing PATH, like so:
PATH=/opt/your/random/dir:$PATH
The environment of cron jobs is pretty close to the system's "default" (for some definition of "default"), while interactive shells generally run with a less constrained environment. But again, the fix for that is to add any missing directories to the current value at the beginning of the script. Adding directories which don't exist on this particular system is harmless, as is introducing duplicate directories.
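For instance, a cron-run script could start like this (a sketch; the directories are placeholders for whatever your script actually needs):
#!/bin/bash
# Prepend the directories this job needs; entries that don't exist on
# this system, or that duplicate ones already in PATH, do no harm
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:"$PATH"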
I've managed to find the answer to my question:
PATH=$PATH:$(sed -n '/PATH=/s/^.*=// ; s/\"//gp' '/etc/environment')
This command grabs the value assigned to PATH in the system-wide environment file and appends it to the PATH used by cron.
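Putting it together, the script from the question becomes:
#!/bin/bash
# Append the system-wide PATH from /etc/environment to cron's PATH
PATH=$PATH:$(sed -n '/PATH=/s/^.*=// ; s/\"//gp' '/etc/environment')
# Test if an internet connection is present
wget -O /dev/null google.com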
I used the following resources to help find the answer:
How to grep for contents after pattern?
https://help.ubuntu.com/community/EnvironmentVariables#System-wide_environment_variables
Related
Right now I am creating a script that updates the PATH and environment variables of the profile on my Raspberry Pi.
I have created a script at /etc/profile.d/sdk.sh to create an environment variable. It does not update within my env; how can I add/update my environment variable without rebooting or logging out of the system?
My script:
SDK_SH_FILE="/etc/profile.d/sdk.sh"
EXPORT_SDK_HOME="export SDK_HOME=/edit/"
echo -e "$EXPORT_SDK_HOME" > "$SDK_SH_FILE"
It is run using: cat my-script | sudo bash
Currently it is not updating my env unless I logout or reboot the system.
After editing sdk.sh, you need to load it in the current shell with:
source /etc/profile.d/sdk.sh
You have two choices for this job:
source /etc/profile.d/sdk.sh
OR
. /etc/profile.d/sdk.sh
I have just tried out Barmar's suggestion. It works to update the current bash session. If you close the terminal window and open a new one, you need to run source again.
Also, the new values are only appended to the environment variable rather than replacing the old ones. So, it is still better to log out and log in again.
You can update the current shell environment by sourcing a script, because that way it runs in the same shell instance; but you need to acquire privileges to update sdk.sh (so plain shell redirections won't work), right?
The solution is to separate the write operation that requires privileges (calling only that via sudo).
Here the UNIX toolbox comes to the rescue with tee, a program that takes files as parameters, reads from its standard input, and copies to its standard output and to the files specified as parameters. Being a separate program that opens its parameters on its own, it can be called with sudo just fine.
Solution
export SDK_HOME=/edit/
typeset -p SDK_HOME | sudo tee /etc/profile.d/sdk.sh >/dev/null
Now, you need to source this file, instead of calling it, like so:
. ./my-script
Here I used typeset -p to avoid repeating myself; it reproduces the declarations of the specified variables.
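For instance, with the variable exported as above, typeset -p emits a declaration that can be sourced right back in (output from bash; the exact wording may differ in other shells):
$ export SDK_HOME=/edit/
$ typeset -p SDK_HOME
declare -x SDK_HOME="/edit/"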
Variables and startup files
Variables are handed down from process to process in the environment, which is a portion of memory every process gets from its parent process.
Shells behave differently depending on how they're called. A login shell loads a user/system profile file, i.e. /etc/profile (or a similar file in the home directory), and the rest of the session gets its variables from it through the environment (so later updates to the file don't matter). Normally, all other interactive instances of the shell load a secondary file instead; in the case of Bash that's $HOME/.bashrc or /etc/bashrc (which login shells don't load).
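A quick way to see this inheritance in action (a minimal bash demonstration):
export FOO=parent
bash -c 'echo "child sees FOO=$FOO"; FOO=child'   # prints: child sees FOO=parent
echo "parent still sees FOO=$FOO"                 # the child's change is lost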
I've been using letsencrypt to generate SSL certificates for my site, more specifically letsencrypt_webfaction. When I run this command in my project, it works
letsencrypt_webfaction --letsencrypt_account_email <Email I use> --domains <domains I use> --public <public_file> --username <username> --password <password>
However, when I run the same command in a bash script, I get the error
generate_certificate.sh: line 2: letsencrypt_webfaction: command not found
I made sure I had all possible permissions on the bash script using chmod 777 generate_certificate.sh, but still nothing. On top of that I have a bash script that runs right before that, which simply restarts Apache, and that works fine.
I read other S.O. articles, such as this one, and tried running dos2unix script.sh, which did run successfully, but when I tried running the bash script again, it still didn't work.
Restart Apache Script
#!/bin/bash
../apache2/bin/./restart
#END
Generate SSL Script
#!/bin/bash
letsencrypt_webfaction --letsencrypt_account_email <Email I use> --domains <domains I use> --public <public_file> --username <username> --password <password>
#END
I'm a python developer, and don't have much experience with Ruby, so excuse my ignorance, but the letsencrypt_webfaction command is a function in my bash profile.
~/.bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
function letsencrypt_webfaction {
PATH=$PATH:$GEM_HOME/bin GEM_HOME=$HOME/.letsencrypt_webfaction/gems RUBYLIB=$GEM_HOME/lib ruby2.2 $HOME/.letsencrypt_webfaction/gems/bin/letsencrypt_webfaction $*
}
eval "$(rbenv init -)"
PATH=$PATH:$HOME/bin
export PATH
export PATH="$HOME/.rbenv/bin:$PATH"
export TMPDIR="/home/doc4design/src/tmp"
By default, shell functions are only available in the shell they were defined in; they're not inherited by subprocesses. Your .bash_profile is only run by the login shell, not shells that run as subprocesses (e.g. to run scripts).
Option 1: In bash, you can run export -f letsencrypt_webfaction in the defining shell (i.e. in your .bash_profile), and it'll be inherited by subprocesses (provided they're also running bash).
Option 2: You can define the function in your .bashrc instead of .bash_profile, and since you run .bashrc from .bash_profile it'll get defined in all your bash shells.
Option 3: Just use the full command in the script. This would be my preference, since it makes the script more independent. Having a script depend on a shell function that's defined in a completely different place is fragile (as you're experiencing) and just a bit weird.
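For example, Option 1 would look something like this in your .bash_profile (a sketch; export -f is bash-specific):
letsencrypt_webfaction() {
    # ... function body as before ...
}
export -f letsencrypt_webfaction    # bash subprocesses now inherit the function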
While I'm at it, here are some general scripting recommendations:
In most contexts, you should put double-quotes around variable references (and strings that contain variable references) to avoid weird effects from word splitting and wildcard expansion. The right side of an assignment is one place it's ok to leave them off (e.g. PATH=$PATH:$HOME/bin and PATH="$PATH:$HOME/bin" are both ok), but I tend to recommend using quotes everywhere as it's hard to keep track of where it's safe to leave them off and where it's dangerous. For the same reason, you should almost always use "$@" instead of $* (as in the letsencrypt_webfaction function).
shellcheck.net is really good at spotting errors like this, so I recommend running your shell scripts through it and acting on its suggestions.
Using the function keyword to define a function is nonstandard; the standard syntax is to use () after the function name, like this:
letsencrypt_webfaction() {
PATH="$PATH:$GEM_HOME/bin" GEM_HOME="$HOME/.letsencrypt_webfaction/gems" RUBYLIB="$GEM_HOME/lib" ruby2.2 "$HOME/.letsencrypt_webfaction/gems/bin/letsencrypt_webfaction" "$#"
}
The function I just gave still may not work right, since it (re)defines GEM_HOME after using it. The entire line gets parsed (and pre-existing variable definitions expanded), then the variables defined as prefixes to the command get included in the environment of the command. This means that the ruby script gets the updated value of GEM_HOME, but the updated values of PATH and RUBYLIB are based on whatever value GEM_HOME had when the function was run. I'm pretty sure this is not what you intended.
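A version that sidesteps the ordering problem would assign GEM_HOME on its own line first (a sketch, untested):
letsencrypt_webfaction() {
    # Set GEM_HOME before building PATH and RUBYLIB from it
    local GEM_HOME="$HOME/.letsencrypt_webfaction/gems"
    PATH="$PATH:$GEM_HOME/bin" GEM_HOME="$GEM_HOME" RUBYLIB="$GEM_HOME/lib" \
        ruby2.2 "$HOME/.letsencrypt_webfaction/gems/bin/letsencrypt_webfaction" "$@"
}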
In the restart apache script, you use a relative path to the restart command. This will be evaluated relative to the working directory of the process that runs the script, not relative to the script's location. This could be anywhere.
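One way to make that robust is to resolve the path relative to the script itself (a sketch):
#!/bin/bash
# cd to this script's own directory so the relative path always resolves
cd "$(dirname "$0")" || exit 1
../apache2/bin/restart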
I am attempting to write a bash command line tool that is usable immediately after installation, i.e. in the same shell its installation script was called from. Let's say install-script.sh (designed for Ubuntu) looks like:
# Get the script's absolute path:
pushd `dirname $0` > /dev/null
SCRIPTPATH=`pwd`
popd > /dev/null
# Add lines to bash.bashrc to export the environment variable:
echo "SCRIPT_HOME=${SCRIPTPATH}" >> /etc/bash.bashrc
echo "export SCRIPT_HOME" >> /etc/bash.bashrc
# Create a new command:
cp ${SCRIPTPATH}/newcomm /usr/bin
chmod a+x /usr/bin/newcomm
The idea is that the new command newcomm uses the SCRIPT_HOME environment variable to reference the main script - which is also in SCRIPTPATH:
exec "${SCRIPT_HOME}/main-script.sh"
Now, the updated bash.bashrc hasn't been loaded into the parent shell yet. Worse, I cannot source it from within the script - which is running in a child shell. Using export to change SCRIPT_HOME in the parent shell would at best be duct-taping the issue, but even this is impossible. Also note that the installation script needs to be run using sudo so it cannot be called from the parent shell using source.
It should be possible since package managers like apt do it. Is there a robust way to patch up my approach? How is this usually done, and is there a good guide to writing bash installers?
You can't. Neither can apt.
A package manager will instead just write the required data/variables to a file, which is read either by the program itself, by a patch to the program, or by a wrapper.
Good examples can be found in /etc/default/*. These are files with variable definitions, and some even helpfully describe where they're sourced from:
$ cat /etc/default/ssh
# Default settings for openssh-server. This file is sourced by /bin/sh from
# /etc/init.d/ssh.
# Options to pass to sshd
SSHD_OPTS=
You'll notice that none of the options are set in your current shell after installing a package, since programs get them straight from the files in one way or another.
The only way to modify the current shell is to source a script. That's unavoidable, so start there. Write a script that is sourced. That script will in turn call your current script.
Your current script will need to communicate with the sourced one to tell it what to change. A common way is to echo variable assignments that can be directly executed by the caller. For instance:
printf 'export SCRIPT_HOME=%q\n' "$SCRIPTPATH"
Using printf with %q ensures any special characters will be escaped properly.
Then have the sourced script eval the inner script.
eval "$(sudo install-script.sh)"
If you want to hide the sourcing of the top script, you could hide it behind an alias or shell function.
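For instance, a sourced wrapper might look like this (a sketch; the installer path and function name are hypothetical):
# Defined in a file the user sources (e.g. from .bashrc), not executes:
install_mytool() {
    # The privileged installer prints variable assignments on stdout;
    # eval applies them here, in the user's current shell
    eval "$(sudo /opt/mytool/install-script.sh)"
}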
Recently I created a bash script which I am supposed to run in cron.
After preparing the bash script and confirming it worked normally, I put it in cron and found that it was failing. As a second step, I removed all the environment dependencies, i.e. instead of just file.txt I specified /home/blah-blah/file.txt.
I still found the script failing at one step. That step was a data processing tool.
The command I executed was /bin/blah-blah/processing_tool -parameter $INDEX, where $INDEX is a variable calculated within the bash script.
The third step was to source the bash profile at the beginning of the bash script. Voila!!!! The script started executing perfectly from cron.
My question is why this is happening even after I removed all the environment dependencies from my script. Also, I have heard that sourcing a bash profile in a cron script is not recommended. If so, is there any other way I can avoid doing this?
Basically: anything started from cron starts with a totally clean slate.
You can make no assumptions whatsoever about the content of environment variables, or about which folder is the current folder, at the start of any script run from cron.
Easiest solution:
cd to the desired directory to make sure the script runs from the expected location.
source /etc/profile to make sure you get the system-wide environment variable setup.
source ~myuserid/.profile to read your personal environment settings. (~/.profile won't work as that would indicate the cron user.)
Then start executing the actual script.
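A sketch of that approach, assuming bash and a hypothetical user myuserid:
#!/bin/bash
cd /home/myuserid/workdir || exit 1   # whatever directory the script expects
source /etc/profile                   # system-wide environment variables
source ~myuserid/.profile             # your personal environment settings
# ... the actual work of the script starts here ...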
Of course the approach above requires the cron process to have read access to your home dir, and it's probably doing a lot more work than is actually required.
Slightly more complicated: Figure out which environment variables are required by the script and anything that gets called by the script.
Explicitly export these at the beginning of the cron script.
(P.s. replace /etc/profile and ~myuserid/.profile with whatever are the corresponding files for your shell of choice.)
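A sketch of that slightly more complicated approach (the variables shown are placeholders for whatever your script actually needs):
#!/bin/bash
# Export just the variables the script, and anything it calls, actually need
export PATH=/usr/local/bin:/usr/bin:/bin
export LANG=en_US.UTF-8
# ... the actual work of the script starts here ...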
A cron job can be thought of as running as a separate user. So, this "user" may not "see" or "read" the same files as you do. It is thus essential that all path names etc. be given as absolute paths.
Every script runs within its own process. So, when you run a script, you can change $SHELL and any other variable within it, but the change will be lost once you exit. My guess is that the $INDEX variable may have been computed successfully within the script, but its use outside the script failed. Without more information about what the job was, or what you wanted to do, it is hard to tell.
There are two ways to run a cron job:
As root, you can run su - user -c <job> in root's crontab.
Sourcing your profile explicitly, as you have done.
You can also set environment variables within the crontab.
As the user, in the user's crontab, you can run it like so: ". /home/blah/.profile && myScript"
That said, there HAS to be something in your environment variables (apart from file extensions) that is not present when you run the cron job. You will have to execute the script with the -x flag (in bash) and then pore over the output. Using a diff between your environment variables and those of root/cron might be a pointer. Also, check whether some utilities used in your scripts are in locations that are not part of the $PATH variable for cron/root.
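To capture cron's environment for that comparison, a temporary crontab entry along these lines can help (a sketch; the file path is a placeholder):
# temporary crontab entry:  * * * * * env > /tmp/cron-env.txt
# then, from your login shell:
diff <(sort /tmp/cron-env.txt) <(env | sort)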
If I run a script from /home/<user>/<dir>/script.sh, as root, the cron job works pretty well. But if I run the script from /root/<dir>/script.sh (as root, again), the cron job does not seem to work.
Having run afoul of various default $PATHs in the past when using cron, I always spell out in full the absolute path for each executable file and each target file. I always assume that cron has NO $PATH set and NO current working directory.
In other words don't use a command like
"myprocess abc*.txt"
but do it in full like
"/usr/localbin/myprocess /home/jvs/abc*.txt".
Alternatively, create a bash script which does the job, and call that bash script with a full absolute path, such as
"/usr/local/bin/myprocess_abc_txts".
If you need to have some flexibility in the script, use environment variables which are set specifically within the bash script you call with 'cron'.
I think you need to add a little more information. I'd guess it is a permissions thing, though. Post the permissions of the file and the directories, and the relevant line from your crontab, so we can help. Also, if you are putting this in /root, are you running it from root's crontab?
Remember the environment - especially when run by cron rather than by root. When cron runs something, you probably don't have much of your environment set, unlike when you run a command via at. It is also not clear what your current directory will be. So, for commands that will be run by cron, use a script (as you're already doing) and make sure it sets enough of the environment for it to run. And make sure your environment-setting code is not interactive!
On my machines, I have a mechanism such that the cron entry reads (for example):
23 1 * * 1-5 /usr/bin/ksh /work1/jleffler/bin/Cron/weekday
The weekday script in the Cron directory is a link to a standard script that first sets the environment and then runs the command /work1/jleffler/bin/weekday (in this case - it uses the name of the command to determine what to run).
The actual script in the Cron directory is:
: "$Id: runcron.sh,v 2.1 2001/02/27 00:53:22 jleffler Exp $"
#
# Commands to be performed by Cron (no debugging options)
# Set environment -- not done by cron (usually switches HOME)
. $HOME/.cronfile
base=`basename $0`
cmd=${REAL_HOME:-/real/home}/bin/$base
if [ ! -x $cmd ]
then cmd=${HOME}/bin/$base
fi
exec $cmd ${@:+"$@"}
I've been using it a while now - this version since 2001 - and it works a treat for me. I'm using a basic (Sun Solaris 10) implementation of cron; there may be new features in new versions of cron on other platforms to make some of this unnecessary. (The $REAL_HOME stuff is a weirdness of mine; pretend it says $HOME - though that makes some of the script unnecessary for you.) The .cronfile is responsible for the environment setting - it does quite a lot, but that's my problem, not yours.
It could be because the script looks for relative directories/files which are found when you run it from /home/<user>/ but not from /root, since /root is not /home/root, nor does it look like a user's home folder under /home/.
Can you check and see if it is looking for relative files, or post the script?
On another note, why don't you just set it to run from a user's homefolder then?
Another way to run the script is to place your bash script in the /usr/bin directory and simply run bash yourscript.sh without typing the /usr/bin/ directory (when the file name contains no slash, bash searches the PATH for the script).