Where can I add to a $PATH so that it's available to all daemons? So that it's "included" or "sourced" before daemons start?
Many thanks!
One option would be /etc/profile.
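For example, appending a directory at the end of /etc/profile might look like this (the directory is only an example; /etc/profile is read by login shells, so whether a given daemon picks it up depends on how it is started):
PATH="$PATH:/opt/myapp/bin"
export PATH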
I may have misread that; if you want to run something before the daemons start, you could create a cron job or...
The system startup files are located in /etc/rc2.d. You can add a file to this directory with the commands you want to run at system startup. Suppose you want to delete some temp files at system startup: you could put a file named TempFileDel in your /etc/rc2.d with the commands to delete your temporary files, so it'll run every time the system reboots.
Hello.
As shereenmotor says, usually, startup scripts are located in /etc/rc2.d, but this depends on the UNIX/Linux you run and your system's default run level.
But I'm afraid it's not that easy. The script name must follow some rules:
- There are two kinds of scripts, let's say kill scripts and start scripts. Both are stored in /etc/rcX.d.
- Kill scripts are executed first, then the start scripts.
- Kill script names must start with a "K".
- Start script names must start with an "S".
- Following the first letter, there must be a two-digit number. This lets "rc" know the order in which to execute the scripts. rc is the "master" script which calls the others. Have a look at your /etc/inittab.
- Finally, a name of your choice.
when "rc" calls this scripts it adds a parameter: start for "S" scripts and stop for "K" scripts. This allows you to use the same script for both operations, just using links.
Create a file, e.g. /path/to/TempFileDel:
#!/bin/ksh
case "$1" in
start)
    echo "Removing file..."
    rm /tmp/somefile
    ;;
stop)
    echo "bye!"
    ;;
esac
and then
ln -s /path/to/TempFileDel /etc/rc2.d/S10TempFileDel
ln -s /path/to/TempFileDel /etc/rc2.d/K10TempFileDel
Daemons are started many different ways on different varieties of UNIX. Most of them have a way to set up the environment.
Perhaps the most fundamental is to set the environment for the init process, often through /etc/inittab. This will set the starting environment for all processes in the system.
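As a rough sketch of that idea (the id field, run levels, and all paths below are made up), an inittab entry can launch a small wrapper that sets the environment before exec'ing the daemon:
md:2345:respawn:/usr/local/sbin/run-mydaemon
where /usr/local/sbin/run-mydaemon would be something like:
#!/bin/sh
# export the extra PATH entry, then replace this shell with the daemon
PATH="$PATH:/opt/myapp/bin"
export PATH
exec /usr/local/sbin/mydaemon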
If you have a script or a command, you could put it in /bin/ and set the appropriate owner and permissions using chmod and chown.
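For instance (names and modes are only illustrative):
cp myscript.sh /bin/myscript
chown root:root /bin/myscript
chmod 755 /bin/myscript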
Related
We have two bash scripts to start up an application. The first (Start-App.sh) one sets up the environment and the second (startup.sh) is from a 3rd party that we are trying not to heavily edit. If someone runs the second script before the first the application does not come up correctly.
Is there a way to ensure that the startup.sh can only be called from the Start-App.sh script?
They are both in the same directory and run via bash on Red Hat Linux.
Is there a way to ensure that the startup.sh can only be called from the Start-App.sh script?
Ensure? No. And even less so without editing startup.sh at all. But you can get fairly close.
Below are three suggestions; you can use one of them, or any combination of them.
The simplest, and probably the best, way is to add a single line at the top of startup.sh:
[ -z "$CALLED_FROM_START_APP" ] && { echo "Not called from Start-App.sh"; exit 42; }
And then call it from Start-App.sh like so:
export CALLED_FROM_START_APP=yes
sh startup.sh
Of course, you can set this environment variable from the shell yourself, so it won't actually ensure anything, but I hope your engineering staff is mature enough not to do this.
You can also remove the execute permissions from startup.sh:
$ chmod a-x startup.sh
This will not prevent people from using sh startup.sh, so there is a very small guarantee here; but it might prevent auto-completion oopsies, and it will mark the file as "not intended to be executed": if I see a directory with only one executable .sh file, I'll try to run that one, and not one of the others.
Lastly, you could perhaps rename the startup.sh script; for example, you could rename it to do_not_run, or "hide" it by renaming it to .startup. This probably won't interfere with the operation of this script (although I can't check this).
TL;DR:
[ $(basename "$0") = "Start-App.sh" ] || exit
Explanation
As with all the other solutions presented, it's not 100% bulletproof, but this covers the most common instances I've come across of a script accidentally being run directly instead of being called from another script.
Unlike other approaches presented, this approach:
doesn't rely on manually set file names for each included/sourced script (i.e. is resilient to file name changes)
behaves consistently across all major *nix distros that ship with bash
introduces no unnecessary environment variables
isn't tied to a single parent script
prevents running the script through calling bash explicitly (e.g. bash myscript.sh)
The basic idea is having something like this at the top of your script:
[ $(basename "$0") = $(basename "$BASH_SOURCE") ] && exit
$0 returns the name of the script at the beginning of the execution chain
$BASH_SOURCE will always point to the file the currently executing code resides in (or it is empty if there is no file, e.g. when piping text directly to bash)
basename returns only the main file name without any directory information (e.g. basename "/user/foo/example.sh" will return example.sh). This is important so you don't get false negatives from comparing example.sh and ./example.sh for example.
To adapt this to only allow running when sourced from one specific file as in your question and provide a helpful error message to the end user, you could use:
[ $(basename "$0") = "Start-App.sh" ] || echo "[ERROR] To start MyApplication please run ./Start-App.sh" && exit
As mentioned from the start of the answer, this is not intended as a serious security measure of any kind, but I'm guessing that's not what you're looking for anyway.
You can make startup.sh non-executable by typing chmod -x startup.sh. That way the user would not be able to run it simply by typing ./startup.sh.
Then from Start-App.sh, call your script by explicitly invoking the shell:
sh ./startup.sh arg1 arg2 ...
or
bash ./startup.sh arg1 arg2 ...
You can check which shell it's supposed to run in by inspecting the first line of startup.sh; it should look like:
#!/bin/bash
You can set an environment variable in your first script and, before running the second script, check whether that environment variable is set properly.
Another alternative is checking the parent process and finding the calling script. This also needs adding some code to the second script.
For example, in the called script you can check the exit status of a command like the following and terminate if it indicates the wrong parent (the /parent/ pattern stands for the name of the calling script):
ps $PPID | tail -1 | awk '$NF!~/parent/{exit 1}'
As others have pointed out, the short answer is "no", although you can play with permissions all day but this is still not bulletproof. Since you said you don't mind editing (just not heavily editing) the second script, the best way to accomplish this would be something along the lines of:
1) in the parent/first script, export an environment variable with its PID. This becomes the parent PID. For example,
# bash store parent pid
export FIRST_SCRIPT_PID=$$
2) then very briefly, in the second script, check to see if the calling PID matches the known acceptable parent PID. For example,
# confirm calling pid
if [ "$PPID" != "$FIRST_SCRIPT_PID" ] ; then
    exit 0
fi
To recap: the most direct way to do this is adding at least a minimal line or two to the second script, which hopefully doesn't count as "heavily editing".
You can create a script, let's call it check-if-my-env-set, containing:
#! /bin/bash
source Start-App.sh
exec /bin/bash "$@"
and replace the shebang on startup.sh with that script, so that instead of
#! /bin/bash
the first line of startup.sh becomes
#! /abs/path/to/check-if-my-env-set
...
Then, every time you run startup.sh, it will ensure the environment is set correctly.
To the best of my knowledge, there is no way to do this in a way that it would be impossible to get around it.
However, you could stop most attempts by using permissions.
Change the owner of the startup.sh file:
sudo chown app_specific_user startup.sh
Make startup.sh only executable by the owner:
chmod u+x,go-x startup.sh
Run startup.sh as the app_specific_user from Start-App.sh:
sudo -u app_specific_user ./startup.sh
I have a simple script cmakeclean to clean cmake temp files:
#!/bin/bash -f
rm CMakeCache.txt
rm *.cmake
which I call like
$ cmakeclean
And it does remove CMakeCache.txt, but it doesn't remove cmake_install.cmake:
rm: *.cmake: No such file or directory
When I run it like:
$ . cmakeclean
it does remove both.
What is the difference, and can I make this script work like a usual Linux command (without the . in front)?
P.S.
I am sure the same script is executed both times. To check this, I added echo meme to the script and reran it both ways.
Remove the -f from your #!/bin/bash -f line.
-f prevents pathname expansion, which means that *.cmake will not match anything. When you run your script as a script, it interprets the shebang line, and in effect runs /bin/bash -f scriptname. When you run it as . scriptname, the shebang is just seen as a comment line and ignored, so the fact that you do not have -f set in your current environment allows it to work as expected.
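You can see the effect of -f (noglob) for yourself; in a directory that contains cmake_install.cmake, something like this shows the difference:
$ bash -c 'echo *.cmake'
cmake_install.cmake
$ bash -f -c 'echo *.cmake'
*.cmake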
. script is short for source script which means the current shell executes the commands in the script. If there's an exit in there, the current shell will exit (and e. g. the terminal window will close).
This is typically used to modify the environment of the current shell (set variables etc.).
script asks the shell to fork itself, then exec the given script in the child process, and then wait in the parent for the termination of the child. If there's an exit in the script, this will be executed by the child shell and thus only terminate that child. The parent shell stays intact and unaltered by this call.
This is typically used to start other programs from the current shell.
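A tiny demonstration of the difference (the file name and variable are just examples):
#!/bin/bash
# demo.sh
FOO=bar
echo "PID inside: $$"
Running ./demo.sh prints a different PID than the calling shell and leaves $FOO unset afterwards; running . ./demo.sh prints the calling shell's own PID and $FOO stays set in it.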
Is this about ClearCase? What did you do in your poor life where you've been assigned to work in the deepest bowels of hell?
For years, I was a senior ClearCase Administrator. I haven't touched it in over a decade. My life is way better now. The sky is bluer, bird songs are more melodious, and my dread over coming to work every day is now a bit less.
Getting back to your issue: It's hard to say exactly what's going on. ClearCase does some wacky things. In a dynamic view, the ClearCase repository on Unix systems is hidden in the shell's environment. Now you see it, now you don't.
When you run a shell script, it starts up a new environment. If a particular shell variable is not imported, it is invisible to that shell script. When you merely run cmakeclean from the command line, you are spawning a new shell -- one that does not contain your ClearCase environment.
When you run a shell script with a dot prefix like . cmakeclean, you are running that shell script in the current shell which contains your ClearCase environment. Thus, it can see your ClearCase view.
If you're using a snapshot view, it is possible that you have a $HOME/.bashrc that's changing directories on you. When a new shell environment runs in BASH (the default shell in MacOS X and Linux), it first runs $HOME/.bashrc. If this sets a particular directory, then you end up in that directory and not in the directory where you ran your shell script. I used to see this when I too was involved in ClearCase hell. People set up their .kshrc scripts (it was the days before BASH, and most people used Kornshell) to set up their views. Unfortunately, this made running any other shell script almost impossible to do.
Recently I created a bash script which I am supposed to run in cron.
After preparing the bash script and confirming its normal working, I put it in cron and found that it was failing. As a second step, I removed all the environment dependencies, i.e. instead of just file.txt, I specified /home/blah-blah/file.txt.
I still found the script failing at one step. The step was a data processing tool.
The command I executed was /bin/blah-blah/processing_tool -parameter $INDEX, where $INDEX is a variable calculated within the bash script.
Third step was to add the bash profile as source at the beginning of the bash script. Voila!!!! The script started executing perfectly from cron.
My question is why is this happening even after I removed all the environment dependencies from my script. Also I have heard that sourcing a cron job to a bash profile is not recommended. If so, Is there any other way in which I can avoid doing this.
Basically: anything started from cron starts with a totally clean slate.
You can make no assumptions whatsoever about the content of environment variables or whichever folder is the current folder at the start of any script run from cron.
Easiest solution:
cd to the desired directory to make sure your working directory is the one the script expects.
source /etc/profile to make sure you get the system-wide environment variables set up.
source ~myuserid/.profile to read your personal environment settings. (~/.profile won't work as that would indicate the cron user.)
Then start executing the actual script (a minimal wrapper along these lines is sketched below).
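Something like this, with all paths and names as examples only:
#!/bin/bash
# wrapper intended to be called from cron
cd /home/myuserid/myjob || exit 1     # known working directory
. /etc/profile                        # system-wide environment
. /home/myuserid/.profile             # personal environment
/home/myuserid/myjob/myscript.sh      # finally, the actual script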
Of course the approach above requires the cron process to have read access to your home dir, and it's probably doing a lot more work than is actually required.
Slightly more complicated: Figure out which environment variables are required by the script and anything that gets called by the script.
Explicitly export these at the beginning of the cron script.
(P.s. replace /etc/profile and ~myuserid/.profile with whatever are the corresponding files for your shell of choice.)
A cron can be thought of as a separate user. So, this "user" may not "see" or "read" the same files as you do. It is thus essential that all path names etc. be defined in the absolute.
Every script runs within its own process. So, when you run a script, you can change the $SHELL and any other variable within but it will be lost once you get out of it. My guess is that the $INDEX variable computation may have had been computed within the script successfully but its use outside of the script may have failed. Without more information about what job it was, or what you wanted to do, it is hard to tell.
There are two ways to run a cron job:
As root, you can run su - user -c "<job>" in root's crontab.
Sourcing your profile explicitly, as you have done.
You can also set environment variables within the crontab.
As the user, in the user's crontab, you can run it like so: ". /home/blah/.profile && myScript"
That said, there HAS to be something in your environment variables (apart from file extensions) that is not present when you run the cron job. You will have to execute that script with -x flag (in bash) and then pore over the output. Using a diff between your environment variables and that of root/cron might be a pointer. Also, check if there are some utilities that are being used in your scripts whose locations are not part of the $PATH variable for cron/root.
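One concrete way to capture cron's environment for that comparison (the file path is just an example) is a temporary crontab entry like:
* * * * * env > /tmp/cron-env.txt
and then, from your interactive shell:
diff <(sort /tmp/cron-env.txt) <(env | sort)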
I feel silly asking this...
I am not an expert on shell scripting, but I am finally in enough of a sysadmin role that I want to do this correctly.
I have a production server that hosts a webapp. Here is my routine.
1 - ssh to server
2 - cd django_src/django_apps/team_proj
3 - svn update
4 - sudo /etc/init.d/apache2 restart
5 - logout
I want to create a shell script for steps 2,3,4.
I can do this, but it will be a very plain and simple bash script simply containing the actual commands I type at the command line.
My question: What is the best way to script this kind of repetitive procedure in bash (Linux, Ubuntu) for a remote server?
Thanks!
The best way is simply as you suggest. Some things you should do for your script would be:
put set -e at the top of the script (after the shebang). This will cause your script to stop if any of the commands fail. So if it cannot cd to the directory, it will not run svn update or restart apache. You can do this programmatically by putting || exit 1 after each command, but if that's all you're doing, you may as well use set -e
Use full paths in your script. Do not assume the directory that the script is run from. In this specific case, the cd command has a relative path. Use a full (absolute) path, or use an environment variable like $HOME.
You may want to set up sudo so that it can run the command without asking for a password. This makes your script non-interactive which means it can be run in the background and from cron jobs and such.
As time goes by, you may add features and take command line arguments to parameterise the script. But don't bother doing this up front. Just evolve your scripts as you need.
There is nothing wrong with a simple bash script simply containing the actual commands you type at the command line. Don't make it more complicated than necessary.
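Putting those points together, a sketch of such a script might look like this (paths are examples; adjust them to your layout):
#!/bin/bash
set -e   # stop on the first failing command
cd /home/myuser/django_src/django_apps/team_proj   # absolute path, no assumptions about cwd
svn update
sudo /etc/init.d/apache2 restart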
I'd set up a cron job doing that automatically.
Since you're using Python, check out fabric - you can use it to automate these kinds of tasks. First install fabric:
$ sudo easy_install fabric
then write your fabric script:
from __future__ import with_statement
from fabric.api import *

def svnupdate():
    with cd('django_src/django_apps/team_proj'):
        run('svn update')
    sudo('/etc/init.d/apache2 restart')
Save as fabfile.py, then run using the fab command:
$ fab -H hostname svnupdate
Tell me that's not cool! :-)
You can do this with the shell (bash, ksh, zsh + ssh + tools), or with programming languages such as Python, Perl, Ruby, PHP, Java, etc.; basically, any language that supports the SSH protocol and operating system functions. The "best" one is the one that you are most comfortable with and have knowledge of. If you are doing sysadmin work, the shell is the closest thing you can use. Then, after you have written your script, you can use crontab (cron) or the at command to schedule your task. Check their man pages for more information.
You can easily do the above using bash/Bourne etc.
However, I would take the time and effort to learn Perl (or some similarly powerful scripting language). Why?
the language constructs are much more powerful
there are no end of libraries to interface to the systems/features you want to script
because of the library support, you won't have to spawn off different commands to achieve what you want (possibly valuable on a loaded system)
you can decompose frequently-used scripts into your own libraries for later use
I choose Perl particularly because it's been designed (perhaps designed is too strong a word for Perl) for these sorts of tasks. However, you may want to check out Ruby/Python or other suggestions from SO contributors.
For the basic steps look at camh's answer. If you plan to run the script via cron, then implement some simple logging, e.g. by appending start time of each command with exit code to a textfile which you can later analyze for failures of the script.
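A simple sketch of that kind of logging (the log path is an example):
log=/var/log/team_proj_deploy.log
run() {
    echo "$(date '+%F %T') start: $*" >> "$log"
    "$@"; status=$?
    echo "$(date '+%F %T') exit $status: $*" >> "$log"
    return $status
}
run svn update
run sudo /etc/init.d/apache2 restart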
Expect -- scripting interactive applications
Expect is a tool for automating interactive applications such as telnet, ftp, passwd, fsck, rlogin, tip, etc.... Expect can make easy all sorts of tasks that are prohibitively difficult with anything else. You will find that Expect is an absolutely invaluable tool - using it, you will be able to automate tasks that you've never even thought of before - and you'll be able to do this automation quickly and easily.
http://expect.nist.gov
bonus: Your tax dollars at work!
I would probably do something like this...
project_update.sh
#!/bin/bash
#
# $1 - user@host
# $2 - project directory
[[ -z $1 || -z $2 ]] && { echo "usage: $(basename "$0") user@host project_dir"; exit 1; }
declare host=$1 proj_dir=$2
ssh $host "cd $proj_dir;svn update;sudo /etc/init.d/apache2 restart" && echo "Success"
Just to add another tip - you should not give users access to some application in an unknown state. svn up might break during the update, users might see a page that's half-new half-old, etc. If you're deploying the whole application at once, I'd suggest doing svn export instead to a new directory and then either mv current old ; mv new current, or even keeping current as a link to the directory you're using now. Still not perfect and not blocking every possible race condition, but it definitely takes less time than svn up on the live copy.
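A rough sketch of that export-and-swap approach (the repository URL and paths are invented):
svn export http://svn.example.com/repo/trunk /srv/app/new
rm -rf /srv/app/old
mv /srv/app/current /srv/app/old && mv /srv/app/new /srv/app/current
# or, if "current" is kept as a symlink, repoint it in one step:
ln -sfn /srv/app/new /srv/app/current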
If I run a script from /home/<user>/<dir>/script.sh, as root, the cron works pretty well. But If I run the script from /root/<dir>/script.sh (as root, again), the cron does not seem to work.
Having run afoul of various default $PATHs in the past when using 'cron', I always spell out in full the absolute path for each executable file and each target file. I always assume that 'cron' has NO $PATH set and has NO current working directory.
In other words don't use a command like
"myprocess abc*.txt"
but do it in full like
"/usr/localbin/myprocess /home/jvs/abc*.txt".
Alternatively, create a bash script which does the job, and call that bash script with a full absolute path, such as
"/usr/local/bin/myprocess_abc_txts".
If you need to have some flexibility in the script, use environment variables which are set specifically within the bash script you call with 'cron'.
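For example, a crontab entry written in that spirit might be (everything here is illustrative):
15 3 * * * /usr/local/bin/myprocess_abc_txts >> /home/jvs/cron.log 2>&1
with the wrapper script itself setting whatever the job needs explicitly:
#!/bin/bash
# /usr/local/bin/myprocess_abc_txts
export PATH=/usr/local/bin:/usr/bin:/bin
/usr/local/bin/myprocess /home/jvs/abc*.txt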
I think you need to add a little more information. I'd guess it is a permissions thing though. Add the permissions of the file, the directories, and the line in your crontab so we can help. Also, if you are putting this in /root, are you running this in root's crontab?
Remember the environment - especially when run by cron rather than by root. When cron runs something, you probably don't have much of your environment set, unlike when you run a command via at. It is also not clear what your current directory will be. So, for commands that will be run by cron, use a script (as you're already doing) and make sure it sets enough of the environment for it to run. And make sure your environment setting code is not interactive!
On my machines, I have a mechanism such that the cron entry reads (for example):
23 1 * * 1-5 /usr/bin/ksh /work1/jleffler/bin/Cron/weekday
The weekday script in the Cron directory is a link to a standard script that first sets the environment and then runs the command /work1/jleffler/bin/weekday (in this case - it uses the name of the command to determine what to run).
The actual script in the Cron directory is:
: "$Id: runcron.sh,v 2.1 2001/02/27 00:53:22 jleffler Exp $"
#
# Commands to be performed by Cron (no debugging options)
# Set environment -- not done by cron (usually switches HOME)
. $HOME/.cronfile
base=`basename $0`
cmd=${REAL_HOME:-/real/home}/bin/$base
if [ ! -x $cmd ]
then cmd=${HOME}/bin/$base
fi
exec $cmd ${#:+"$#"}
I've been using it a while now - this version since 2001 - and it works a treat for me. I'm using a basic (Sun Solaris 10) implementation of cron; there may be new features in new versions of cron on other platforms to make some of this unnecessary. (The $REAL_HOME stuff is a weirdness of mine; pretend it says $HOME - though that makes some of the script unnecessary for you.) The .cronfile is responsible for the environment setting - it does quite a lot, but that's my problem, not yours.
It could be because the script looks for relative directories/files which are found when running it from /home/<user>/ but not from /root, because /root is not under /home/ nor does it look like a user's home folder in /home/.
Can you check and see if it is looking for relative files, or post the script?
On another note, why don't you just set it to run from a user's home folder then?
Another way to run the script is to place your bash script in the /usr/bin directory and simply run bash yourscript.sh without prefixing the /usr/bin/ directory.