start another bash script from main bash script - linux

I have a bash script from which I need to start another script with source. This is working fine, but I also need to pass parameters to the second script.
Example:
source /usr/local/scripts/parallel.sh test --gnu
So I need to start parallel.sh with a given data-source file called test, and I also need to assign a parameter --gnu at the end. But it is not recognizing the file and the parameter.

The source command is likely not what you're looking for.
When a script is run using source, it runs within the existing shell, so any variables created or modified by the script remain available after the script completes. In contrast, if the script is run just by its filename, a separate subshell (with a completely separate set of variables) is spawned to run the script.
So, unless you need access to variables or functions inside parallel.sh, just call it directly:
/usr/local/scripts/parallel.sh test --gnu
As long as the script is executable (chmod +x /usr/local/scripts/parallel.sh) and set up to work with the options you're passing ($1 will contain "test" and $2 will contain "--gnu") it should work just fine.
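For reference, here is a minimal sketch of what the receiving end could look like; the variable names and the echo are assumptions for illustration, not the contents of the real parallel.sh:
#!/bin/bash
# Hypothetical parallel.sh: read the two positional parameters passed on the command line
datafile="$1"    # e.g. "test"
option="$2"      # e.g. "--gnu"
echo "Processing '$datafile' with option '$option'"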


When using PowerShell on Linux, is there a way to set the shell $PWD on exit of the PS script?

When running a PowerShell script from bash, is there a way to have PowerShell set the current directory of the calling shell upon exit?
I have tried the following things (independently, not all at once)
#!/usr/bin/env pwsh
# myscript.ps1
$desiredPath = "/the/path/I/Want"
& cd $desiredPath
& Set-Location $desiredPath
& zsh -c 'cd ${clonePath}'
Unfortunately, the end result always ends up being back at the prior $PWD.
I am sure I could return the path from the script and then pipe it to another command, but I am trying to find out if there is a way to accomplish this without having to do that, as I have the scripts folder on my path so I can simply call myscript.ps1 arg1 arg2.
There is no way for a subprocess to modify the environment of its parent process without cooperation from the parent; but if you can get the parent to cooperate, you can pass back an expression for it to evaluate.
(I'm not very conversant in PowerShell so I am putting a simple Python script here instead; but you should easily see how to replace it with PowerShell.)
cd "$(python -c 'print("/tmp/fnord")')"
or more generally
eval $(python -c 'print("cd /tmp/fnord")')
but you really should avoid eval in most circumstances if at all feasible, and if you can't avoid it, make really sure you can completely trust its output.
Needless to say, the subprocess could do something a lot more complex, as long as it prints the expression you want to pass back to the parent (and nothing else) to standard output during its execution.
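Applied to the PowerShell case, the pattern could look roughly like this (a sketch, assuming myscript.ps1 is changed to print only the target directory to standard output):
# myscript.ps1 is assumed to print nothing but the desired path
cd "$(myscript.ps1 arg1 arg2)"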
You can use a wrapper script that sets up the environment for the PowerShell session temporarily, while that session is still open, before calling the other scripts. The wrapper would set the environment variables (including $PWD) for the session and then call each of the scripts in turn; because everything runs in the same session, the $PWD set by the first script is still in effect when the next script runs.

incrementing an environmental variable

I need to increment an environmental variable by these steps:
envar=1
export envar
sh script_incrementation
echo $envar
where script_incrementation contains something like this:
#! /bin/sh
envar=$[envar+1] #I've tried also other methods of incrementation
export envar
Whatever I do, after exiting the script the variable remains with its initial value 1.
Thanks for your time.
A shell script executes in its own shell, so you cannot affect the outer shell unless you source it. See this question for details of that discussion.
Consider the following script, which I will call Foo.sh.
#!/bin/bash
export HELLO=$(($HELLO+1))
Suppose in the outer shell, I define an environmental variable:
export HELLO=1
If I run the script like this, it runs inside its own shell and will not affect the parent.
./Foo.sh
However, if I source it, it will just execute the commands in the current shell, and will achieve the desired effect.
. Foo.sh
echo $HELLO # prints 2
Your script can not change the environment of the calling process (shell), it merely inherits it.
So, if you export foo=bar, and then invoke sh (a new process) with your script, the script will see the value of $foo (which is "bar"), and it will be able to change its own copy of it – but that is not going to affect the environment of the parent process (where you exported the variable).
You can simply source your script in the original shell, i.e. run
source increment_script.sh
or
. increment_script.sh
and that will then change the value of the variable.
This is because sourcing a script avoids spawning a new shell (process).
Another trick is to have your script output the changed environment, and then eval that output, for example:
counter=$((counter+1))
echo "counter=$counter"
and then run that as
eval `increment_script.sh`
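Put together, a complete sketch of that approach might look like this (the file name increment_script.sh is assumed):
#!/bin/sh
# increment_script.sh: print an assignment for the calling shell to eval
counter=$((counter+1))
echo "counter=$counter"
and in the calling shell:
export counter=1
eval "$(./increment_script.sh)"
echo "$counter"   # prints 2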

How to make declare in a Linux shell script?

I want to put the declare below in a shell script called proxy_set:
declare -x https_proxy="https://192.168.220.4:8080/"
And then I execute it like below.
$ ./proxy_set
But "export" shows nothing happened.
And in another way if I execute it like this:
$ source proxy_set
Then "export" shows it works!
My question is how can I make it work without additional "source" cmd?
Thanks!
You can't. Setting variables in the environment only affects the environment of that shell and any future children it spawns; there's no way to affect the parent shell. When you run it without the source (or .), a brand new shell is started up, then the variable is set in that shell's environment, and then that shell exits, taking its environment with it.
The source reads the commands and executes them within the current shell as if you had typed them.
So if you want to set environment variables in a script, you have to source it. Alternatively, you can have a command generate shell commands as output instead of running them, and then the parent can evaluate the output of the command. Things like ssh-agent use this approach.
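For example, a proxy_set written in that style might simply print the command instead of executing it (a sketch of the approach, not the original script):
#!/bin/sh
# proxy_set (sketch): emit the export command for the parent shell to evaluate
echo 'export https_proxy="https://192.168.220.4:8080/"'
The calling shell would then run:
eval "$(./proxy_set)"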
Try just adding:
export https_proxy="https://192.168.220.4:8080/"
Then execute your script normally.

What occurs when a file is `source`-d in Unix/Linux context?

I've seen shell scripts that include a line such as:
source someOtherFile
I know that causes the content of someOtherFile to execute, but what is the significance of source?
Follow-up questions: Can ANY script be sourced, or only certain type of scripts? Are there any side-effects other than environment variables when a script is sourced (as opposed to normally executing it)?
Running the command source on a script executes the script within the context of the current process. This means that environment variables set by the script remain available after it's finished running. This is in contrast to running a script normally, in which case environment variables set within the newly-spawned process will be lost once the script exits.
You can source any runnable shell script. The end effect will be the same as if you had typed the commands in the script into your terminal. For example, if the script changes directories, when it finishes running, your current working directory will have changed.
If you tell the shell, e.g. bash, to read a file and execute the commands in the file, it's called sourcing. The main point is, the current process (shell) does this, not a new child process.
In BASH you can use the source command or simply . to source a file.
source is a Unix command that evaluates the file following the command, as a list of commands, executed in the current context. You can also use . for sourcing the file.
source my-script.sh;
. my-script.sh;
Both commands will have the same effect.
In contrast, passing the script filename to the desired shell will run the script in a subshell, not the current context.
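A quick way to observe the difference (a throwaway example, not from the question):
cat > /tmp/go.sh <<'EOF'
#!/bin/sh
cd /tmp
EOF
chmod +x /tmp/go.sh
/tmp/go.sh      # runs in a subshell; the calling shell's directory is unchanged
. /tmp/go.sh    # runs in the current shell; you are now in /tmp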

Crontab Source File

Recently I created a bash script which I am supposed to run in cron.
After preparing the bash script and confirming it worked normally, I put it in cron and found that it was failing. As a second step, I removed all the environment dependencies, i.e. instead of just file.txt I specified /home/blah-blah/file.txt.
I still found the script failing at one step, which was a data processing tool.
The command I executed was /bin/blah-blah/processing_tool -parameter $INDEX, where $INDEX is a variable calculated within the bash script.
The third step was to source the bash profile at the beginning of the bash script. Voila! The script started executing perfectly from cron.
My question is why this is happening even after I removed all the environment dependencies from my script. Also, I have heard that sourcing a bash profile in a cron job is not recommended. If so, is there any other way in which I can avoid doing this?
Basically: anything started from cron starts with a totally clean slate.
You can make no assumptions whatsoever about the content of environment variables or whichever folder is the current folder at the start of any script run from cron.
Easiest solution:
cd to the desired directory so the script runs from a known location.
source /etc/profile to make sure you get the system-wide environment variable setup.
source ~myuserid/.profile to read your personal environment settings. (~/.profile won't work as that would indicate the cron user.)
Then start executing the actual script.
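A wrapper along those lines might look like this (the paths, user name, and job name are placeholders):
#!/bin/bash
# Cron wrapper sketch; adjust paths and user name for your system
cd /home/myuserid/jobs || exit 1     # start from a known directory
source /etc/profile                  # system-wide environment
source ~myuserid/.profile            # personal environment settings
/home/myuserid/jobs/actual_job.sh    # finally run the real script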
Of course the approach above requires the cron process to have read access to your home dir, and it's probably doing a lot more work than is actually required.
Slightly more complicated: Figure out which environment variables are required by the script and anything that gets called by the script.
Explicitly export these at the beginning of the cron script.
(P.s. replace /etc/profile and ~myuserid/.profile with whatever are the corresponding files for your shell of choice.)
A cron job can be thought of as running as a separate user. So, this "user" may not "see" or "read" the same files as you do. It is thus essential that all path names etc. be specified as absolute paths.
Every script runs within its own process. So, when you run a script, you can change $SHELL and any other variable within it, but the changes are lost once you exit the script. My guess is that the $INDEX variable computation may have succeeded within the script, but its use outside of the script may have failed. Without more information about what the job was, or what you wanted to do, it is hard to tell.
There are two ways to run a cron job:
As root, you can run su - user -c '<job>' in root's crontab.
Sourcing your profile explicitly, as you have done.
You can also set environment variables within the crontab.
As the user, in the user crontab, you can run it like so: . /home/blah/.profile && myScript
That said, there HAS to be something in your environment variables (apart from file paths) that is not present when you run the cron job. You will have to execute that script with the -x flag (in bash) and then pore over the output. Using a diff between your environment variables and those of root/cron might be a pointer. Also, check whether some utilities used in your script live in locations that are not part of the $PATH variable for cron/root.
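For the user-crontab route, the entry could look roughly like this (the schedule, paths, and log file are placeholders):
# crontab -e (sketch)
PATH=/usr/local/bin:/usr/bin:/bin
15 3 * * * . /home/blah/.profile && /home/blah/myScript.sh >> /home/blah/cron.log 2>&1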
