When can I run a shell script with the command ". shellscript" in bash, Ubuntu? - linux

I have a script "shellscript" in the /usr/bin directory (it could also be a script belonging to an installed program). When I run the command "shellscript" in a terminal (from anywhere: home or any other directory), it runs perfectly, but when I use ". shellscript", the file also executes.
I know we can use ". /path/to/script/shellscript" to run it, but if it's in /usr/bin, can we use the direct command without a path?
Is it safe to run it this way?
Can we run programs this way?
I would like an explanation: if yes, why? If not, why not? If we shouldn't, why not?

The Bash shell searches the directories listed in the PATH variable in both shellscript and . shellscript cases. The main difference is that when using . (or equivalently source) to start a script, a new shell process is not created for interpreting the script. This is sometimes useful because it allows the script to define environment variables and functions that will then be available in the caller. For more details, see the Bash manual page (info bash).
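A quick way to see the difference (setvar.sh and MYVAR are made-up names for illustration; with the script in /usr/bin, the plain . shellscript form would be found via PATH just the same):
$ cat > setvar.sh <<'EOF'
#!/bin/bash
MYVAR=hello
EOF
$ chmod +x setvar.sh
$ ./setvar.sh                  # new shell process: the assignment dies with it
$ echo "MYVAR is '${MYVAR}'"
MYVAR is ''
$ . ./setvar.sh                # current shell: the assignment is still here afterwards
$ echo "MYVAR is '${MYVAR}'"
MYVAR is 'hello'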

Related

How do I make my own shell work with .sh files

My teacher gave us an assignment to create our own shell. Our shell is supposed to be called rshell and is supposed to work like the regular shell.
I created my own shell using C++. If you type a command like ls in my shell, it gives you a listing just as if you had typed ls in the regular shell.
The problem I am facing is how to get .sh files (script files) to work with my shell. I noticed that when I run a .sh file using my shell, it does not run through my shell; it runs through the regular shell. How do I make .sh files run through my shell?
Change the hash-bang line of the scripts to point at your shell. For instance,
#!/usr/local/bin/rshell
Or wherever your shell executable is.
As John already said, change the shebang to point to your shell. The kernel will invoke the command in the shebang with the file itself as an argument. To demonstrate, try a file with a shebang of #!/bin/cat.
#!/bin/cat
hello world
It pretty much behaves the same as if you typed /bin/cat /path/to/file.
The shebang does not have PATH lookup capabilities, so #!yourshell would not work as a shebang. However, you can use env to do the PATH lookup as in #!/usr/bin/env yourshell. (This approach is preferred for commands that are at different paths on different systems, like python.)
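For example (test.sh is just an illustrative name; this assumes your rshell binary is somewhere on PATH and can read commands from a file passed as its first argument):
$ cat > test.sh <<'EOF'
#!/usr/bin/env rshell
ls
EOF
$ chmod +x test.sh
$ ./test.sh    # the kernel runs: /usr/bin/env rshell ./test.sh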

Why does calling a script by "scriptName" not work?

I have a simple script cmakeclean to clean cmake temp files:
#!/bin/bash -f
rm CMakeCache.txt
rm *.cmake
which I call like
$ cmakeclean
And it does remove CMakeCache.txt, but it doesn't remove cmake_install.cmake:
rm: *.cmake: No such file or directory
When I run it like:
$ . cmakeclean
it does remove both.
What is the difference, and can I make this script work like a usual Linux command (without the . in front)?
P.S.
I am sure the same script is executed both times. To check this, I added echo meme to the script and reran it both ways.
Remove the -f from your #!/bin/bash -f line.
-f prevents pathname expansion, which means that *.cmake will not match anything. When you run your script as a script, it interprets the shebang line, and in effect runs /bin/bash -f scriptname. When you run it as . scriptname, the shebang is just seen as a comment line and ignored, so the fact that you do not have -f set in your current environment allows it to work as expected.
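With that change the script might look like this (a sketch; adding -f to the rm calls is optional and just silences the error when the files are not there):
#!/bin/bash
rm -f CMakeCache.txt
rm -f *.cmake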
. script is short for source script, which means the current shell executes the commands in the script. If there's an exit in there, the current shell will exit (and, e.g., the terminal window will close).
This is typically used to modify the environment of the current shell (set variables etc.).
script asks the shell to fork itself, then exec the given script in the child process, and then wait in the parent for the child to terminate. If there's an exit in the script, it will be executed by the child shell and thus only terminate the child. The parent shell stays intact and unaltered by the call.
This is typically used to start other programs from the current shell.
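A small demonstration (demo.sh is an illustrative name; be careful with the last command, since it terminates the shell you run it in):
$ cat > demo.sh <<'EOF'
#!/bin/bash
echo "before exit"
exit 1
EOF
$ chmod +x demo.sh
$ ./demo.sh       # exit 1 terminates only the child shell
before exit
$ echo $?         # the parent shell is still here and sees the exit status
1
$ . ./demo.sh     # exit 1 now terminates the current shell (your terminal may close)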
Is this about ClearCase? What did you do in your poor life to be assigned to work in the deepest bowels of hell?
For years, I was a senior ClearCase Administrator. I haven't touched it in over a decade. My life is way better now. The sky is bluer, bird songs are more melodious, and my dread over coming to work every day is now a bit less.
Getting back to your issue: It's hard to say exactly what's going on. ClearCase does some wacky things. In a dynamic view, the ClearCase repository on Unix systems is hidden in the shell's environment. Now you see it, now you don't.
When you run a shell script, it starts up a new environment. If a particular shell variable is not imported, it is invisible to that shell script. When you merely run cmakeclean from the command line, you are spawning a new shell -- one that does not contain your ClearCase environment.
When you run a shell script with a dot prefix like . cmakeclean, you are running that shell script in the current shell which contains your ClearCase environment. Thus, it can see your ClearCase view.
If you're using a snapshot view, it is possible that you have a $HOME/.bashrc that's changing directories on you. When a new shell environment runs in Bash (the default shell in Mac OS X and Linux), it first runs $HOME/.bashrc. If this sets a particular directory, you end up in that directory and not in the directory where you ran your shell script. I used to see this back when I too was involved in ClearCase hell. People set up their .kshrc script (these were the days before Bash, when most people used Kornshell) to set up their views. Unfortunately, this made running any other shell script almost impossible.

In bash, what does dot command ampersand do?

I'm trying to understand a bash script I'm supposed to be maintaining and got stuck. The command is of this form:
. $APP_LOCATION/somescript.sh param1 param2 &
The line is not being called in a loop, nor is any return code being sent back to the calling script from somescript.sh.
I know that the "." will make the process run in the same shell. But "&" will spawn off a different process.
That sounds contradictory. What's really happening here? Any ideas?
The script is running in a background process, but it is a subshell, not a separately-invoked interpreter as it would be without the dot.
That is to say -- the current interpreter forks and then begins running the command (sourcing the script). As such, it inherits shell variables, not just environment variables.
Otherwise the new script's interpreter would be started via an execv() call in the forked child, replacing that copy of the shell with a new interpreter. That's usually the right thing, because it provides more flexibility -- you can't run anything but a script written for the same shell with . or source, after all, whereas starting a new interpreter means that your other script could be rewritten in Python, Perl, a compiled binary, etc. without its callers needing to change.
(This is part of why scripts intended to be exec'd, as opposed to libraries meant to be sourced, should not have filename extensions -- and part of why bash libraries should be .bash, not .sh, so they don't give inaccurate information about which interpreters they can be sourced into.)
TL;DR
. $APP_LOCATION/somescript.sh param1 param2 &
This sources a script as a background job in the current shell.
Sourcing a Script
In Bash, using . is equivalent to the source builtin. The help for the source builtin says (in part):
$ help source
source: source filename [arguments]
Execute commands from a file in the current shell.
In other words, it reads in your Bash script and evaluates it in the current shell rather than in a sub-shell. This is often important to give a script access to unexported variables.
Background Jobs
The ampersand executes the script in the background using job control. In this case, while the sourced script is evaluated in the context of the current shell, it is executed in a separate process (a forked copy of that shell) that can be managed using the job control builtins.
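A rough demonstration of both effects together (bg.sh, UNEXPORTED and RESULT are made-up names; job-number output is omitted):
$ UNEXPORTED=visible               # a plain, unexported shell variable
$ cat > bg.sh <<'EOF'
echo "bg.sh sees: $UNEXPORTED"
RESULT=42
EOF
$ . ./bg.sh param1 param2 &        # sourced into a forked background copy of this shell
bg.sh sees: visible
$ wait                             # the usual job control builtins (jobs, kill, wait) apply
$ echo "${RESULT:-unset}"          # assignments made in the background copy never come back
unset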

What is the difference between these two commands which are used to run shell script?

Here I have one script that exports some necessary paths on Linux. After running this script I have to run some other scripts.
I have two scripts:
1. import.sh = imports the paths
2. main.sh = this script does something with HCI (used for Bluetooth).
When I run ./import.sh and then ./main.sh, it gives an error.
And when I run . ./import.sh and then ./main.sh, it works fine.
So what is the difference between ./import.sh and . ./import.sh?
What happens if I run the script as a super user? Maybe . ./ is used for running a script as a super user?
The difference between the two invocations is that ./import.sh is executing import.sh as a program, and . ./import.sh is evaluating it in your shell.
If "import.sh" were an ELF program (a compiled binary, not a shell script), . ./import.sh would not work.
If import.sh had a shebang at the top (like #!/bin/perl), you'd be in for a nasty surprise and a huge number of error messages if you tried to do . ./import.sh - unless the shebang happened to match your current shell, in which case it would accidentally work. Or if the Perl code were to somehow be a valid Bash script, which seems unlikely.
. ./import.sh is equivalent to source import.sh, and doesn't require that the file have the execute bit set (since it's interpreted by your already-running shell instead of spawned via exec). I assume this is the source of your error. Another difference is that . ./import.sh runs in the current shell instead of a subshell, so any variables it sets, even non-exported ones, will affect the shell you launched it from!
So, they're actually rather different. You usually want to ./import.sh unless you know what you're doing and understand the difference.
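To see the Perl "nasty surprise" mentioned above for yourself (hello.pl and its one-line body are made up for illustration; adjust the perl path for your system):
$ cat > hello.pl <<'EOF'
#!/usr/bin/perl
print("hello from perl\n");
EOF
$ chmod +x hello.pl
$ ./hello.pl      # fine: the kernel hands the file to perl
hello from perl
$ . ./hello.pl    # bash now tries to parse Perl source itself and reports syntax errors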
./import.sh executes the shell script in a new subshell.
. ./import.sh executes the shell script in the current shell.
The extra . is the dot (source) command, which is what makes it run in the current shell.
./import.sh runs the script as a normal script - that is, in a subshell. That means it can't affect your current shell in any way. The paths it's supposed to import won't get set up in your current shell.
The extra ., which is equivalent to source, runs the script in the context of your current shell - meaning it can modify environment variables, etc. (like the paths you're trying to set up) in the current shell. From the bash man page:
. filename [arguments]
source filename [arguments]
Read and execute commands from filename in the current shell environment and return the exit status of the last command executed from filename.
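Applied to the scripts in the question, it might look like this (the contents of import.sh are invented for illustration; the real one presumably exports paths that main.sh needs):
$ cat import.sh
#!/bin/bash
export BLUEZ_TOOLS=/opt/bluez/bin     # made-up path
export PATH="$BLUEZ_TOOLS:$PATH"
$ ./import.sh     # exports happen in a child shell and vanish when it exits
$ ./main.sh       # fails: this shell's PATH was never changed
$ . ./import.sh   # exports happen in this shell
$ ./main.sh       # works: main.sh inherits the updated environment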
The . ./import.sh "sources" the script, where as simply ./import.sh just executes it.
The former allows you to modify the current environment, where the later will only affect the environment within the child execution.
The former is also equivalent to (though mostly Bash-specific):
source ./import.sh
help source yields:
source: source filename [arguments]
Execute commands from a file in the current shell.
Read and execute commands from FILENAME in the current shell. The
entries in $PATH are used to find the directory containing FILENAME.
If any ARGUMENTS are supplied, they become the positional parameters
when FILENAME is executed.
Exit Status:
Returns the status of the last command executed in FILENAME; fails if
FILENAME cannot be read.

In Linux shell do all commands that work in a shell script also work at the command prompt?

I'm trying to interactively test code before I put it into a script and was wondering if there are any things that behave differently in a script?
When you execute a script it has its own environment variables which are inherited from the parent process (the shell from which you executed the command). Only exported variables will be visible to the child script.
More information:
http://en.wikipedia.org/wiki/Environment_variable
http://www.kingcomputerservices.com/unix_101/understanding_unix_shells_and_environment_variables.htm
By the way, if you want your script to run in the same environment as the shell it is executed from, you can do it with the dot command:
. script.sh
This will avoid creating a new process for your shell script.
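A quick illustration of the export rule and the dot command (child.sh, FOO and BAR are made-up names):
$ FOO=1            # plain shell variable, not exported
$ export BAR=2     # exported into the environment
$ cat > child.sh <<'EOF'
#!/bin/bash
echo "FOO='$FOO' BAR='$BAR'"
EOF
$ chmod +x child.sh
$ ./child.sh       # a new process: only the exported variable survives
FOO='' BAR='2'
$ . ./child.sh     # no new process: the current shell sees both
FOO='1' BAR='2'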
A script runs in exactly the same way as if you typed the content in at a shell prompt. Even loops and if statements can be typed in at the shell prompt. The shell will keep asking for more until it has a complete statement to execute.
As David rightly pointed out, watch out for environment variables.
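For example, a multi-line loop typed straight at the prompt (the > is Bash's default PS2 continuation prompt):
$ for i in 1 2 3; do
>   echo "iteration $i"
> done
iteration 1
iteration 2
iteration 3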
Depending on how you intend to launch your script, variables set in .profile and .bashrc may not be available. This is subject to whether the script is launched in interactive mode and whether it was a login shell. See Quick Startup File Reference.
A common problem I see is scripts that work when run from the shell but fail when run from another application (cron, nagios, buildbot, etc.) because $PATH was not set.
To test if a command/script would work in a clean session, you can login using:
ssh -t localhost "/bin/bash --noprofile --norc"
This ensures that we don't inherit any exported variables from the parent shell, and that nothing is read from .profile or the rc files.
If it works in a clean session and none of your commands expect an interactive session, then you're good to go!
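If a script does need to run from cron or similar, a common defensive pattern (a sketch; adjust the directories to your system) is to set PATH explicitly at the top instead of relying on the caller's environment:
#!/bin/bash
# Started from cron/nagios/buildbot, this script gets a minimal environment,
# so define PATH ourselves rather than assuming the caller set it up.
PATH=/usr/local/bin:/usr/bin:/bin
export PATH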
