Why does calling a script by "scriptName" not work? - linux

I have a simple script cmakeclean to clean cmake temp files:
#!/bin/bash -f
rm CMakeCache.txt
rm *.cmake
which I call like
$ cmakeclean
And it does remove CMakeCache.txt, but it doesn't remove cmake_install.cmake:
rm: *.cmake: No such file or directory
When I run it like:
$ . cmakeclean
it does remove both.
What is the difference, and can I make this script work like a usual Linux command (without the . in front)?
P.S.
I am sure the same script is executed both times. To check this, I added echo meme to the script and reran it both ways.

Remove the -f from your #!/bin/bash -f line.
-f disables pathname expansion, which means *.cmake is passed to rm literally instead of being expanded - and no file is literally named *.cmake. When you run your script as a script, the kernel interprets the shebang line and in effect runs /bin/bash -f scriptname. When you run it as . scriptname, the shebang is just seen as a comment line and ignored, so the fact that you do not have -f set in your current environment allows the glob to expand as expected.
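With the flag removed, pathname expansion works again whether the script is executed or sourced; the corrected script is simply:
#!/bin/bash
rm CMakeCache.txt
rm *.cmake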

. script is short for source script, which means the current shell executes the commands in the script. If there's an exit in there, the current shell will exit (and e.g. the terminal window will close).
This is typically used to modify the environment of the current shell (set variables etc.).
script asks the shell to fork itself, then exec the given script in the child process, and then wait in the parent for the termination of the child. If there's an exit in the script, it will be executed by the child shell and thus only terminate the child. The parent shell stays intact and unaltered by this call.
This is typically used to start other programs from the current shell.
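A quick way to see the fork/source difference is a one-line script containing an exit (a toy example; the file name quit.sh is arbitrary):
$ printf 'echo before exit\nexit 1\n' > quit.sh
$ bash quit.sh     # runs in a child shell; only the child exits
before exit
$ echo $?
1
$ . quit.sh        # sourced: this would terminate your current shell
Running the sourced version from an interactive terminal typically closes that terminal window, which is exactly the behavior described above.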

Is this about ClearCase? What did you do in your poor life where you've been assigned to work in the deepest bowels of hell?
For years, I was a senior ClearCase Administrator. I haven't touched it in over a decade. My life is way better now. The sky is bluer, bird songs are more melodious, and my dread over coming to work every day is now a bit less.
Getting back to your issue: It's hard to say exactly what's going on. ClearCase does some wacky things. In a dynamic view, the ClearCase repository on Unix systems is hidden in the shell's environment. Now you see it, now you don't.
When you run a shell script, it starts up a new environment. If a particular shell variable is not exported, it is invisible to that shell script. When you merely run cmakeclean from the command line, you are spawning a new shell -- one that does not contain your ClearCase environment.
When you run a shell script with a dot prefix like . cmakeclean, you are running that shell script in the current shell which contains your ClearCase environment. Thus, it can see your ClearCase view.
If you're using a snapshot view, it is possible that you have a $HOME/.bashrc that's changing directories on you. When a new shell environment runs in BASH (the default shell in MacOS X and Linux), it first runs $HOME/.bashrc. If this changes to a particular directory, then you end up in that directory and not in the directory where you ran your shell script. I used to see this when I too was involved in ClearCase hell. People set up their .kshrc scripts (it was the days before BASH, and most people used Kornshell) to set up their views. Unfortunately, this made running any other shell script almost impossible to do.

Related

Linux basics - automating a script execution

When beginning to work, I have to run several commands:
source work/tools
cd work/tool
source tool
setup_tool
Of course, doing this a few times a day is really annoying, so I tried to make a bash script tool containing these commands and put it in /usr/bin so I can run it with the command
tool
However, there is a problem. When I run the script and then try to work by typing some of the tool-based commands, it does not work.
I figured out that if I make a script and run it, the script seems to run in the same terminal window, but it behaves as if it created a "hidden window" for its execution, and after the script terminates, the "hidden window" terminates too. So I am asking - is there a way to automate the source command?
I have tried using xterm -hold -e command, but it runs the script in a new window. Obviously, I don't want that. How can I achieve running it in the current window?
Don't put files like that in /usr/bin. As a general rule you don't want to mess with the distribution owned locations like that. You can use /usr/local/bin if you need a system-wide location or you can create a directory in your home directory to hold things like this that are for your own usage (and add that to the $PATH).
What you've noticed is that when run as a script on its own (tool, /path/to/tool, etc.), the script runs in its own shell session (nothing to do with terminal windows as such), and you don't want that, as the changes the script makes don't persist to your current shell session.
What you want to do instead is "source"/run the script in your current session - which you are already doing with the set of commands you listed (source work/tools does exactly that).
So instead of running tool or /path/to/tool instead use source /path/to/tool or . /path/to/tool.
As fedorqui correctly points out, you don't even need a script for this anywhere, as you can just make a shell function instead (in your normal shell startup files, .bashrc, etc.) and then run that function when you need to do that setup - see the sketch after this answer.
Be careful to use full paths for things when you do this though since you, presumably, want this to work no matter what directory you happen to be in when you run it.
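For example, a function along these lines in your ~/.bashrc would replace the script entirely (a sketch built from the commands in the question; the $HOME-anchored paths are an assumption - adjust them to wherever your tool actually lives):
tool() {
    source "$HOME/work/tools"        # the same commands as in the question,
    cd "$HOME/work/tool" || return   # but with full paths, bailing out
    source tool                      # if the directory is missing
    setup_tool
}
After reloading your startup file (source ~/.bashrc), typing tool performs the whole setup in your current session.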
It doesn't create a new hidden window, nor does it create a terminal. What happens is that when you run a script, it normally runs in a new shell process. The script you're running is supposed to modify the shell environment, but since it runs in a new shell process, it is that process's environment that gets modified, not your shell's environment.
Scripts that need to modify the current shell environment must usually be run with the source command. What you need to do is run the script in the current shell. So you should do source /path/to/tool.
If you want to be able to source the script with just tool, put this in your alias file/shell startup (check your distro doc where the file is, but it's usually either .bash_aliases or .bashrc):
alias tool="source /path/to/tool"

Dry-run a potentially dangerous script?

A predecessor of mine installed a crappy piece of software on an old machine (running Linux) which I've inherited. Said crappy piece of software installed flotsam all over the place, and also is sufficiently bloated that I want it off ASAP -- it no longer has any functional purpose since we've moved on to better software.
Vendor provided an uninstall script. Not trusting the crappy piece of software, I opened the uninstall script in an editor (a 200+ line Bash monster), and it starts off something like this:
SWROOT=`cat /etc/vendor/path.conf`
...
rm -rf $SWROOT/bin
...
It turns out that /etc/vendor/path.conf is missing. Don't know why, don't know how, but it is. If I had run this lovely little script, it would have deleted the /bin folder, which would have had rather amusing implications. Of course this script required root to run!
I've dealt with this issue by just manually running all the uninstall commands (guh) where sensible. This kind of sucked because I had to interpolate all the commands manually. In general, is there some sort of way I can "dry run" a script to have it dump out all the commands it would execute, without it actually executing them?
bash does not offer dry-run functionality (and neither do ksh, zsh, or any other shell I know).
It seems to me that offering such a feature in a shell would be next to impossible: state changes would have to be simulated and any command invoked - whether built in or external - would have to be aware of these simulations.
The closest thing that bash, ksh, and zsh offer is the ability to syntax-check a script without executing it, via option -n:
bash -n someScript # syntax-check a script, without executing it.
If there are no syntax errors, there will be no output, and the exit code will be 0.
If there are syntax errors, analysis will stop at the first error, an error message including the line number is written to stderr, and the exit code will be:
2 in bash
3 in ksh
1 in zsh
Separately, bash, ksh, and zsh offer debugging options:
-v to print each raw source code line[1] to stderr before it is executed.
-x to print each expanded simple command to stderr before it is executed (env. var. PS4 allows tweaking the output format).
Combining -n with -v and/or -x offers little benefit:
With -n specified, -x has no effect at all, because nothing is being executed.
With -n specified, -v will effectively simply print the source code.
If there is a syntax error, there may be benefit in the source code getting printed up to the point where the error occurs; keep in mind, though, that the error message produced by -n always includes the offending line number.
[1] Typically, it is individual lines that are printed, but the true unit is however many lines a given command - which may be a compound command such as while or a command list (such as a pipeline) - spans.
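For example (deploy.sh is a hypothetical script name):
bash -n deploy.sh                            # parse only: syntax check, nothing runs
bash -x deploy.sh                            # run it, tracing each expanded command
PS4='+ line ${LINENO}: ' bash -x deploy.sh   # customize the trace prefix via PS4
Note again that -x executes the script, so it is a trace, not a dry run.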
You could try running the script under Kornshell. When you execute a script with ksh -D, it reads the commands and checks them for syntax, but doesn't execute them. Combine that with set -xv, and you'll print out the commands that will be executed.
You can also use set -n for the same effect. Kornshell and BASH are fairly compatible with each other. If it's a pure Bourne shell script, both Kornshell and BASH will execute it pretty much the same.
You can also run ksh -u, which will cause unset shell variables to make the script fail. However, that wouldn't have caught the cat of a nonexistent file. In that case, the shell variable was set - it was set to null (see the guard sketch after this answer for one way to defend against exactly that).
Of course, you could run the script under a restricted shell too, but that's probably not going to uninstall the package.
That's the best you can probably do.
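If you can edit the script before running it, a common defensive rewrite (not a dry run, just a guard, shown here on the snippet from the question) is to make the expansion itself abort when the variable is unset or null:
SWROOT=$(cat /etc/vendor/path.conf)
rm -rf "${SWROOT:?/etc/vendor/path.conf missing or empty}/bin"
With ${SWROOT:?message}, the shell prints the message and aborts instead of expanding to an empty string and deleting /bin.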

What is the difference between these two commands which are used to run a shell script?

Here I have one script which exports some necessary paths in Linux. After running this script, I have to run some other scripts.
I have two scripts
1 import.sh = importing paths
2 main.sh = this script does something with HCI (used for Bluetooth purposes).
When I run ./import.sh and then ./main.sh, it gives an error.
And when I run . ./import.sh and then ./main.sh, it works fine.
So what is the diff between ./import.sh and . ./import.sh?
What happens if I run a script as a super user? Maybe the . ./ prefix is used to run a script as a super user?
The difference between the two invocations is that ./import.sh is executing import.sh as a program, and . ./import.sh is evaluating it in your shell.
If "import.sh" were an ELF program (a compiled binary, not a shell script), . ./import.sh would not work.
If import.sh had a shebang at the top (like #!/bin/perl), you'd be in for a nasty surprise and a huge number of error messages if you tried to do . ./import.sh - unless the shebang happened to match your current shell, in which case it would accidentally work. Or if the Perl code were to somehow be a valid Bash script, which seems unlikely.
. ./import.sh is equivalent to source import.sh, and doesn't require that the file have the execute bit set (since it's interpreted by your already-running shell instead of spawned via exec). I assume this is the source of your error. Another difference is that . ./import.sh runs in the current shell instead of a subshell, so any variables it sets - even non-exported ones - will affect the shell you used for the launch!
So, they're actually rather different. You usually want ./import.sh, unless you know what you're doing and understand the difference.
./import.sh executes the shell script in a new subshell.
. ./import.sh executes the shell script in the current shell.
The extra . is the source command: it tells the current shell to read and execute the script itself.
./import.sh runs the script as a normal script - that is, in a subshell. That means it can't affect your current shell in any way. The paths it's supposed to import won't get set up in your current shell.
The extra ., which is equivalent to source, runs the script in the context of your current shell - meaning it can modify environment variables, etc. (like the paths you're trying to set up) in the current shell. From the bash man page:
. filename [arguments]
source filename [arguments]
Read and execute commands from filename in the current shell environment and return the exit status of the last command executed from filename.
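A quick demonstration of that difference, using a toy import.sh that sets a made-up variable (TOOLPATH is illustrative, not from the question):
$ cat import.sh
#!/bin/bash
export TOOLPATH=/opt/tool
$ chmod +x import.sh
$ ./import.sh; echo "TOOLPATH=$TOOLPATH"     # set in a child shell, then lost
TOOLPATH=
$ . ./import.sh; echo "TOOLPATH=$TOOLPATH"   # set in the current shell, kept
TOOLPATH=/opt/tool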
The . ./import.sh "sources" the script, whereas simply ./import.sh just executes it.
The former allows you to modify the current environment, whereas the latter will only affect the environment of the child process it runs in.
The former is also equivalent to (though mostly Bash-specific):
source ./import.sh
help source yields:
source: source filename [arguments]
Execute commands from a file in the current shell.
Read and execute commands from FILENAME in the current shell. The
entries in $PATH are used to find the directory containing FILENAME.
If any ARGUMENTS are supplied, they become the positional parameters
when FILENAME is executed.
Exit Status:
Returns the status of the last command executed in FILENAME; fails if
FILENAME cannot be read.

Scripting on Linux

I am trying to create a script that will run a program on each file in a list. I have been trying to do this using a .csh file (I have no clue if this is the best way), and I started with something as simple as hello world
echo "hello world"
The problem is that I cannot execute this script, or verify that it works correctly. (I was trying to do ./testscript.csh, which is obviously wrong). I haven't been able to find anything that really explains how to run csh scripts, and I'm guessing there's a better way to do this too. What do I need to change to get this to work?
You need to mark it as executable; Unix doesn't execute things arbitrarily based on extension.
chmod +x testscript.csh
Also, I strongly recommend using sh or bash instead of csh, or you will soon learn about the idiosyncrasies of csh's looping and control flow constructs (some of them only work if written a particular way, and the single-line versions in particular are very limited).
You can use ./testscript.csh. You will however need to make it executable first:
chmod u+x testscript.csh
This means setting testscript to have execute permission for the user (whoever owns the file - which in this case should be yourself!).
Also, to tell the OS that this is a csh script, you will need to put
#! /path/to/csh
on the first line (where /path/to/csh is the full path to csh on your system. You can find that out by issuing the command which csh).
That should give you the behaviour you want.
EDIT As discussed in some of the comments, you may want to choose an alternative shell to C Shell (csh). It is not the friendliest one for scripting.
You have several options.
You can run the script from within your current shell. If you're running csh or tcsh, the syntax is source testscript.csh. If you're running sh, bash, ksh, etc., the syntax is . ./testscript.sh. Note that I've changed the file name suffix; source or . runs the commands in the named file in your current shell. If you have any shell-specific syntax, this won't work unless your interactive shell matches the one used by the script. If the script is very simple (just a sequence of simple commands), that might not matter.
You can make the script an executable program. (I'm going to repeat some of what others have already written.) Add a "shebang" as the first line. For a csh script, use #!/bin/csh -f. The -f avoids running commands in your own personal startup scripts (.cshrc et al), which saves time and makes it more likely that others will be able to use it. Or, for a sh script (recommended), use #!/bin/sh (no -f; there it has a completely different meaning). In either case, run chmod +x the_script, then ./the_script.
There's a trick I often use when I want to perform some moderately complex action. Say I want to delete some, but not all, files in the current directory, but the criterion can't be expressed conveniently in a single command. I might run ls > tmp.sh, then edit tmp.sh with my favorite editor (mine happens to be vim). Then I go through the list of files and delete all the ones that I want to leave alone. Once I've done that, I can replace each file name with a command to remove it; in vim, :%s/.*/rm -f &/. I add a #!/bin/sh at the top, save it, chmod +x tmp.sh, then ./tmp.sh. (If some of the file names might have special characters, I can use :%s/.*/rm -f '&'/.)
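The generated file from that trick ends up looking something like this (the file names are illustrative):
#!/bin/sh
rm -f 'old-build.log'
rm -f 'core.1234'
rm -f 'junk copy.txt'
The quoting from the second substitution protects names containing spaces.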

Cron does not run from /root

If I run a script from /home/<user>/<dir>/script.sh as root, the cron job works fine. But if I run the script from /root/<dir>/script.sh (as root, again), the cron job does not seem to work.
Having run afoul of various default $PATHs in the past when using cron, I always spell out in full the absolute path for each executable file and each target file. I always assume that cron has NO $PATH set and NO current working directory.
In other words don't use a command like
"myprocess abc*.txt"
but do it in full like
"/usr/local/bin/myprocess /home/jvs/abc*.txt".
Alternatively, create a bash script which does the job, and call that bash script with a full absolute path, such as
"/usr/local/bin/myprocess_abc_txts".
If you need to have some flexibility in the script, use environment variables which are set specifically within the bash script you call with 'cron'.
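For example, a crontab entry spelled out in full (the paths echo the ones above and are illustrative):
# run at 02:15 every day; absolute paths for the command, its input, and its log
15 2 * * * /usr/local/bin/myprocess /home/jvs/abc*.txt >> /home/jvs/myprocess.log 2>&1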
I think you need to add a little more information. I'd guess it is a permissions thing though. Add the permissions of the file, the directories, and the line in your crontab so we can help. Also, if you are putting this in /root, are you running this in root's crontab?
Remember the environment - especially when a script is run by cron rather than from your login shell. When cron runs something, you probably don't have much of your environment set, unlike when you run a command via at. It is also not clear what your current directory will be. So, for commands that will be run by cron, use a script (as you're already doing) and make sure it sets enough of the environment to run. And make sure your environment-setting code is not interactive!
On my machines, I have a mechanism such that the cron entry reads (for example):
23 1 * * 1-5 /usr/bin/ksh /work1/jleffler/bin/Cron/weekday
The weekday script in the Cron directory is a link to a standard script that first sets the environment and then runs the command /work1/jleffler/bin/weekday (in this case - it uses the name of the command to determine what to run).
The actual script in the Cron directory is:
: "$Id: runcron.sh,v 2.1 2001/02/27 00:53:22 jleffler Exp $"
#
# Commands to be performed by Cron (no debugging options)
# Set environment -- not done by cron (usually switches HOME)
. $HOME/.cronfile
base=`basename $0`
cmd=${REAL_HOME:-/real/home}/bin/$base
if [ ! -x $cmd ]
then cmd=${HOME}/bin/$base
fi
exec $cmd ${@:+"$@"}
I've been using it a while now - this version since 2001 - and it works a treat for me. I'm using a basic (Sun Solaris 10) implementation of cron; there may be new features in new versions of cron on other platforms to make some of this unnecessary. (The $REAL_HOME stuff is a weirdness of mine; pretend it says $HOME - though that makes some of the script unnecessary for you.) The .cronfile is responsible for the environment setting - it does quite a lot, but that's my problem, not yours.
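A minimal sketch of what such a .cronfile might contain (the contents here are an assumption; the real one, as noted, does quite a lot more):
# $HOME/.cronfile - environment for cron-launched scripts
PATH=/usr/local/bin:/usr/bin:/bin
REAL_HOME=/work1/jleffler        # hypothetical value, matching the crontab above
export PATH REAL_HOME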
It could be that the script refers to relative directories/files which resolve when it runs from /home/<user>/ but not from /root, since root's home is /root, not /home/root, and it doesn't look like a user's home folder under /home/.
Can you check whether it is looking for relative files, or post the script?
On another note, why don't you just set it to run from a user's home folder then?
Another way to run an sh script is to place your bash script in the /usr/bin directory and simply run bash yourscript.sh without the /usr/bin/ prefix; when the script name contains no slash, bash searches the directories in $PATH for it.
