Linux basics - automating script execution

When beginning to work, I have to run several commands:
source work/tools
cd work/tool
source tool
setup_tool
Of course, doing this a few times a day is really annoying, so I tried to make a bash script tool containing these commands and put it in /usr/bin so I could run it with the command
tool
However, there is a problem. When I run the script and then try to work by typing some of the tool-based commands, it does not work.
I figured out that this is expected: when I make a script and run it, it appears to run in the same terminal window, but it behaves as if it created a "hidden window" for its execution, and when the script terminates, the "hidden window" terminates too. So I am asking: is there a way to automate the source command?
I have tried using xterm -hold -e command, but that runs the script in a new window. Obviously, I don't want that. How can I achieve running it in the current window?

Don't put files like that in /usr/bin. As a general rule, you don't want to mess with distribution-owned locations like that. You can use /usr/local/bin if you need a system-wide location, or you can create a directory in your home directory to hold things like this that are for your own use (and add that directory to your $PATH).
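For example, a per-user setup might look like this (a sketch; ~/bin is just a convention, any directory will do):
mkdir -p "$HOME/bin"
mv tool "$HOME/bin/"
# in ~/.bashrc, so new shells find it:
export PATH="$HOME/bin:$PATH"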
What you've noticed is that when run as a script on its own (tool, /path/to/tool, etc.), the script runs in its own shell session (nothing to do with terminal windows as such), and you don't want that, because the changes the script makes don't persist in your current shell session.
What you want to do instead is "source"/run the script in your current session, which you are already doing with that set of commands you listed (source work/tools does exactly that).
So instead of running tool or /path/to/tool, use source /path/to/tool or . /path/to/tool.
As fedorqui correctly points out, you don't even need a script for this: you can just make a shell function instead (in your normal shell startup files, .bashrc, etc.) and then run that function whenever you need to do that setup.
Be careful to use full paths when you do this, though, since you presumably want this to work no matter what directory you happen to be in when you run it; see the sketch below.
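A minimal sketch of such a function for your .bashrc, assuming the files from the question live under $HOME/work (the exact paths are an assumption; adjust them to your setup):
tool_setup() {
    source "$HOME/work/tools"
    cd "$HOME/work/tool" || return
    source tool
    setup_tool
}
Typing tool_setup in any shell then performs the whole sequence inside that shell.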

It doesn't create a new hidden window, nor does it create a terminal. What happens is that a script normally runs in a new shell process. The script you're running is supposed to modify the shell environment, but since it runs in a new shell process, it is that process's environment that gets modified, not your shell's environment.
Scripts that need to modify the current shell environment must usually be run with the source command. What you need to do is run the script in the current shell, so you should do source /path/to/tool.
If you want to be able to source the script with just tool, put this in your alias file or shell startup file (check your distro's documentation for where that is, but it's usually either .bash_aliases or .bashrc):
alias tool="source /path/to/tool"

Related

How to update the /etc/profile within the sudo bash without rebooting or logging out

I am creating a script that updates the PATH and environment variables in the profile on my Raspberry Pi.
I have created a script at /etc/profile.d/sdk.sh to create an environment variable, but it does not update my env. How can I add/update my environment variable without rebooting or logging out of the system?
My script:
SDK_SH_FILE="/etc/profile.d/sdk.sh"
EXPORT_SDK_HOME="export SDK_HOME=/edit/"
echo -e "$EXPORT_SDK_HOME" > "$SDK_SH_FILE"
It is run using: cat my-script | sudo bash
Currently it does not update my env unless I log out or reboot the system.
After editing sdk.sh, you need to load it in the current shell with:
source /etc/profile.d/sdk.sh
You have two choices for this job:
source /etc/profile.d/sdk.sh
OR
. /etc/profile.d/sdk.sh
I have just tried Barmar's suggestion. It works to update the current bash session. If you close the terminal window and open a new one, you need to run source again.
Also, new values are appended to the environment variable rather than replacing the old values, so it is still better to log out and log in again.
You can update the current shell environment by sourcing a script, because that way it runs in the same shell instance, but you need to acquire privileges to update sdk.sh, so shell redirections won't work, right?
The solution is to separate out the write operation that requires privileges, and call only that via sudo.
Here the UNIX toolbox comes to the rescue with tee, a program that takes files as parameters, reads from its standard input, and copies to its standard output and to the files given as parameters. Being a separate program that opens its output files itself, it can be called with sudo just fine.
Solution
export SDK_HOME=/edit/
typeset -p SDK_HOME | sudo tee /etc/profile.d/sdk.sh >/dev/null
Now you need to source this file instead of calling it, like so:
. ./my-script
Here I used typeset -p to avoid repeating myself; it reproduces the declarations of the specified variables.
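For example, in bash:
$ export SDK_HOME=/edit/
$ typeset -p SDK_HOME
declare -x SDK_HOME="/edit/"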
Variables and startup files
Variables are handed down from process to process via the environment, which is a portion of memory every process gets from its parent process.
Shells behave differently depending on how they're invoked. A login shell loads a user/system profile file, i.e. /etc/profile (or a similar file in the home directory), and the rest of the session gets variables from it through the environment (thus later updates to the file don't matter). Normally all other interactive instances of the shell load a secondary file; in the case of Bash that is $HOME/.bashrc or /etc/bashrc (which login shells don't load).
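As an aside, in Bash you can check which kind of shell you are in:
shopt -q login_shell && echo "login shell" || echo "non-login shell"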

Why do we need execution permission although we can run any script without it using "bash script file"?

I am wondering when and why we need execute permission in Linux, given that we can run any script without execute permission using the syntax below:
bash SomeScriptFile
Not all programs are scripts — bash for example isn't. So you need execute permission for executable programs.
Also, when you say bash SomeScriptFile, the script has to be in the current directory. If you have the script executable and in a directory on your PATH (e.g. $HOME/bin), then you can run the script without the unnecessary circumlocution of bash $HOME/bin/SomeScriptFile (or bash ~/bin/SomeScriptFile); you can simply run SomeScriptFile. This economy is worth having.
Execute permission on a directory is somewhat different, of course, but also important. It permits the 'class of user' (owner, group, others) to access files in the directory, subject to per-file permissions also allowing that.
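For example, assuming $HOME/bin is already on your PATH:
chmod +x ~/bin/SomeScriptFile
SomeScriptFile   # runs from any directory, no "bash" prefix needed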
Executing the script by invoking it directly and running the script through bash are two very different things.
When you run bash ~/bin/SomeScriptFile you are really just executing bash, a command interpreter. bash in turn loads the script and runs it.
When you run ~/bin/SomeScriptFile directly, the system is able to tell that this file is a script and finds the interpreter to run it. There is a bit of magic involving the #! on the first line to look up the right interpreter.
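A minimal illustration of that mechanism:
#!/bin/bash
# the kernel reads the #! line above and starts /bin/bash
# with this file's path as its argument
echo "running via $0"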
The reason we run scripts directly is that the user (and the system) shouldn't have to know or care whether the command being run is a script or a compiled executable.
For instance, if I write a nifty shell script called fixAllIlls and later decide to rewrite it in C, then as long as I keep the same interface, the users don't have to do anything differently.
To them, it is just a program to run.
Edit:
The operating system checks permissions first for several reasons:
Checking permissions is faster
In the days of old, you could have SUID scripts, so one needed to check the permission bits.
As a result, it was possible to run scripts that you could not actually read the contents of. (That is still true of binaries.)

Why does calling a script by "scriptName" not work?

I have a simple script cmakeclean to clean cmake temp files:
#!/bin/bash -f
rm CMakeCache.txt
rm *.cmake
which I call like
$ cmakeclean
And it does remove CMakeCache.txt, but it doesn't remove cmake_install.cmake:
rm: *.cmake: No such file or directory
When I run it like:
$ . cmakeclean
it does remove both.
What is the difference, and can I make this script work like a usual Linux command (without the . in front)?
P.S.
I am sure the same script is executed both times. To check this, I added echo meme to the script and reran it both ways.
Remove the -f from your #!/bin/bash -f line.
-f prevents pathname expansion, which means that *.cmake will not match anything. When you run your script as a script, it interprets the shebang line, and in effect runs /bin/bash -f scriptname. When you run it as . scriptname, the shebang is just seen as a comment line and ignored, so the fact that you do not have -f set in your current environment allows it to work as expected.
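With the -f removed, the script behaves the same whether executed or sourced:
#!/bin/bash
rm CMakeCache.txt
rm *.cmake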
. script is short for source script which means the current shell executes the commands in the script. If there's an exit in there, the current shell will exit (and e. g. the terminal window will close).
This is typically used to modify the environment of the current shell (set variables etc.).
script asks the shell to fork itself, then exec the given script in the child process, and then wait in the parent for the termination of the child. If there's an exit in the script, it will be executed by the child shell and thus only terminate the child. The parent shell stays intact and unaltered by this call.
This is typically used to start other programs from the current shell.
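A quick way to see the difference, using a hypothetical one-line script quit.sh that contains only exit 3:
$ bash quit.sh; echo "parent shell survives, status $?"
parent shell survives, status 3
$ . quit.sh   # this would terminate the current shell instead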
Is this about ClearCase? What did you do in your poor life to be assigned to work in the deepest bowels of hell?
For years, I was a senior ClearCase Administrator. I haven't touched it in over a decade. My life is way better now. The sky is bluer, bird songs are more melodious, and my dread over coming to work every day is now a bit less.
Getting back to your issue: It's hard to say exactly what's going on. ClearCase does some wacky things. In a dynamic view, the ClearCase repository on Unix systems is hidden in the shell's environment. Now you see it, now you don't.
When you run a shell script, it starts up a new environment. If a particular shell variable is not exported, it is invisible to that shell script. When you merely run cmakeclean from the command line, you are spawning a new shell, one that does not contain your ClearCase environment.
When you run a shell script with a dot prefix like . cmakeclean, you are running that shell script in the current shell which contains your ClearCase environment. Thus, it can see your ClearCase view.
If you're using a snapshot view, it is possible that you have a $HOME/.bashrc that's changing directories on you. When a new Bash shell starts (Bash is the default shell on macOS and Linux), it first runs $HOME/.bashrc. If this sets a particular directory, then you end up in that directory and not in the directory where you ran your shell script. I used to see this back when I too was involved in ClearCase hell. People set up their .kshrc scripts (it was the days before Bash, and most people used Kornshell) to set up their views. Unfortunately, this made running any other shell script almost impossible.

Crontab Source File

Recently I created a bash script which I am supposed to run in cron.
After preparing the bash script and confirming it worked normally, I put it in cron and found that it was failing. As a second step, I removed all the environment dependencies, i.e. instead of just file.txt, I specified /home/blah-blah/file.txt.
I still found the script failing at one step: the step that runs a data processing tool.
The command I executed was /bin/blah-blah/processing_tool -parameter $INDEX, where $INDEX is a variable calculated within the bash script.
The third step was to source the bash profile at the beginning of the bash script. Voila! The script started executing perfectly from cron.
My question is why this happens even after I removed all the environment dependencies from my script. Also, I have heard that sourcing a bash profile in a cron job is not recommended. If so, is there any other way I can avoid doing this?
Basically: anything started from cron starts with a totally clean slate.
You can make no assumptions whatsoever about the content of environment variables or whichever folder is the current folder at the start of any script run from cron.
Easiest solution:
cd to the desired directory to make sure your script runs from the desired location.
source /etc/profile to make sure you get the system-wide environment variable setup.
source ~myuserid/.profile to read your personal environment settings. (~/.profile won't work, as ~ would expand to the cron user's home.)
Then start executing the actual script.
Of course, the approach above requires the cron process to have read access to your home directory, and it's probably doing a lot more work than is actually required.
Slightly more complicated: Figure out which environment variables are required by the script and anything that gets called by the script.
Explicitly export these at the beginning of the cron script.
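For example, at the top of the cron script (the variable names here are placeholders; export whatever your script and the processing tool actually need):
export PATH=/usr/local/bin:/usr/bin:/bin
export LD_LIBRARY_PATH=/usr/local/lib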
(P.s. replace /etc/profile and ~myuserid/.profile with whatever are the corresponding files for your shell of choice.)
A cron job can be thought of as running as a separate user, so this "user" may not "see" or "read" the same files as you do. It is thus essential that all path names etc. be absolute.
Every script runs within its own process. So, when you run a script, you can change $SHELL and any other variable within it, but the change will be lost once you exit. My guess is that the $INDEX variable may have been computed successfully within the script, but its use outside the script may have failed. Without more information about what job it was, or what you wanted to do, it is hard to tell.
There are two ways to run a cron job:
As root, you can run su - user -c 'job' in root's crontab.
Sourcing your profile explicitly, as you have done.
You can also set environment variables within the crontab.
As the user, in the user's crontab, you can run it like so: . /home/blah/.profile && myScript
That said, there HAS to be something in your environment (apart from file paths) that is not present when you run the cron job. You will have to execute the script with the -x flag (in bash) and then pore over the output. A diff between your environment variables and those of root/cron might be a pointer. Also, check whether some utilities used in your scripts live in locations that are not part of cron's/root's $PATH.
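For example, a hypothetical crontab entry combining these ideas:
PATH=/usr/local/bin:/usr/bin:/bin
0 6 * * * . /home/blah/.profile && /home/blah/myScript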

In Linux shell do all commands that work in a shell script also work at the command prompt?

I'm trying to interactively test code before I put it into a script and was wondering if there are any things that behave differently in a script?
When you execute a script it has its own environment variables which are inherited from the parent process (the shell from which you executed the command). Only exported variables will be visible to the child script.
More information:
http://en.wikipedia.org/wiki/Environment_variable
http://www.kingcomputerservices.com/unix_101/understanding_unix_shells_and_environment_variables.htm
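A quick demonstration that only exported variables reach a child process:
$ FOO=1; export BAR=2
$ bash -c 'echo "FOO=$FOO BAR=$BAR"'
FOO= BAR=2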
By the way, if you want your script to run in the same environment as the shell it is executed from, you can do it with the dot command:
. script.sh
This avoids creating a new process for your shell script.
A script runs in exactly the same way as if you typed the content in at a shell prompt. Even loops and if statements can be typed in at the shell prompt. The shell will keep asking for more until it has a complete statement to execute.
As David rightly pointed out, watch out for environment variables.
Depending on how you intend to launch your script, variables set in .profile and .bashrc may not be available. This is subject to whether the script is launched in interactive mode and whether it was a login shell. See Quick Startup File Reference.
A common problem I see is scripts that work when run from the shell but fail when run from another application (cron, nagios, buildbot, etc.) because $PATH was not set.
To test if a command/script would work in a clean session, you can login using:
ssh -t localhost "/bin/bash --noprofile --norc"
This ensures that we don't inherit any exported variables from the parent shell, and nothing from .profile or .rc.
If it works in a clean session and none of your commands expect to be in interactive mode, then you're good to go!
