How to run a csh script from a sh script - linux

I was wondering if there is a way to source a csh script from a sh script. Below is an example of what I'm trying to implement:
script1.sh:
#!/bin/sh
source script2
script2:
#!/bin/csh -f
setenv TEST 1234
set path = /home/user/sandbox
When I run sh script1.sh, I get syntax errors from script2 (expected, since the two scripts use different shebangs). Is there a way I can run script2 through script1?

Instead of source script2 run it as:
csh -f script2

Since your use case depends on retaining environment variables set by the csh script, try adding this to the beginning of script1:
#!/bin/sh
if [ "${csh_executed:-0}" -ne 1 ]; then
    csh_executed=1 exec csh -c "source script2;
        exec /bin/sh \"$0\" \"\$argv\"" "$@"
fi
# rest of script1
If the csh_executed variable is not set to 1 in the environment, run a csh script that sources script2 then executes an instance of sh, which will retain the changes to the environment made in script2. exec is used to avoid creating new processes for each shell instance, instead just "switching" from one shell to the next. Setting csh_executed in the environment of the csh command ensures that we don't get stuck in a loop when script1 is re-executed by the csh instance.
Unfortunately, there is one drawback that I don't think can be fixed, at least not with my limited knowledge of csh: the second invocation of script1 receives all the original arguments as a single string, rather than a sequence of distinct arguments.
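For example, if script2 sets TEST as in the question, the rest of script1 (now running under sh after the csh pass) can confirm that the variables survived the re-exec:
echo "TEST is $TEST"    # prints 1234, inherited from the environment set up by script2
echo "PATH is $PATH"    # reflects the path that script2 assigned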

You don't want source there; source runs the given script inside your existing shell, without spawning a subprocess. Obviously, your sh process can't do that with a script that isn't written for sh.
Just call the script directly, assuming it is executable:
script2

The closest you can come to sourcing a script with a different interpreter than your original script is to use exec. exec replaces the running process image with the new program. Unlike source, however, when your exec-ed program ends, the entire process ends. So you can do this:
#!/bin/sh
exec /path/to/csh/script
but you can't do this:
#!/bin/sh
exec /path/to/csh/script
some-other-command
However, are you sure you really want to source the script? Maybe you just want to run it in a subprocess:
#!/bin/sh
csh -f /path/to/csh/script
some-other-command

You want the settings in your csh script to apply to the sh script that invokes it.
Basically, you can't do that, though there are some (rather ugly) ways you could make it work. If you execute your csh script, it will set those variables in the context of the process running the script; they'll vanish as soon as it returns to the caller.
Your best bet is simply to write a new version of your csh script as an sh script, and source or . it from the calling sh script.
You could translate your csh script:
#!/bin/csh -f
setenv TEST 1234
set path = /home/user/sandbox
to this:
export TEST=1234
export PATH=/home/user/sandbox
(csh treats the shell array variable $path specially, tying it to the environment variable $PATH. sh and its derivatives don't do that, they deal with $PATH itself directly.)
Note that a script intended to be sourced should not have a #! line at the top, since it doesn't make sense to execute it in its own process; you need to execute its contents in the context of the caller.
If maintaining two copies of the script, one to be sourced from csh or tcsh scripts and another to be sourced or .ed from sh/ksh/bash/zsh scripts, is not practical, there are other solutions. For example, your script can print a series of sh commands to be executed; you can then do something like
eval `./foo.csh`
(line endings will pose some issues here).
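As a sketch of that approach, foo.csh would only print sh syntax instead of setting anything itself (the values here just mirror the question's script2):
#!/bin/csh -f
# foo.csh: emit sh commands on stdout for the calling sh shell to eval
echo 'export TEST=1234'
echo 'export PATH=$PATH:/home/user/sandbox'
The single quotes keep csh from expanding $PATH; the sh shell that runs eval expands it instead.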
Or you can modify the csh script so it sets the required environment variables and then invokes some specified command, which could be a new interactive shell; this is inconvenient, since it doesn't set those variables in the interactive shell you're running.
If a software package requires some special environment variables to be set, it's common practice to provide scripts called, for example, setup.sh and setup.csh, so that sh/ksh/bash/zsh users can do:
. /path/to/package/setup.sh
and csh/tcsh users can do:
source /path/to/package/setup.csh
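A minimal pair, as a sketch (the package path is only a placeholder), just expresses the same settings twice, once per syntax family:
# setup.sh -- sourced by sh/ksh/bash/zsh users; deliberately no #! line
export TEST=1234
export PATH=$PATH:/path/to/package/bin

# setup.csh -- sourced by csh/tcsh users
setenv TEST 1234
set path = ( $path /path/to/package/bin )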
Incidentally, this command:
set path = /home/user/sandbox
in your sample script is probably not a good idea. It replaces your entire $PATH with just a single directory, which means you won't be able to execute simple commands like ls unless you specify their full paths. You'd usually want something like:
set path = ( $path /home/user/sandbox )
or, in sh:
PATH=$PATH:/home/user/sandbox

Related

When using PowerShell on Linux, is there a way to set the shell $PWD on exit of the PS script?

When running a PowerShell script from bash, is there a way to have PowerShell set the current directory of the calling shell upon exit?
I have tried the following things (independently, not all at once)
#!/usr/bin/env pwsh
# myscript.ps1
$desiredPath = "/the/path/I/Want"
& cd $desiredPath
& Set-Location $desiredPath
& zsh -c 'cd ${clonePath}'
Unfortunately, the end result is always that I'm back at the prior $PWD.
I am sure I could return the path from the script and then pipe it to another command, but I am trying to find out if there is a way to accomplish this without having to do that, as I have the scripts folder on my path so I can simply call myscript.ps1 arg1 arg2.
There is no way for a subprocess to modify the environment of its parent process without cooperation from the parent; but if you can get the parent to cooperate, you can pass back an expression for it to evaluate.
(I'm not very conversant in PowerShell so I am putting a simple Python script here instead; but you should easily see how to replace it with PowerShell.)
cd "$(python -c 'print("/tmp/fnord")')"
or more generally
eval $(python -c 'print("cd /tmp/fnord")')
but you really should avoid eval whenever feasible, and if you can't avoid it, make really sure you can completely trust its output.
Needless to say, the subprocess could do something a lot more complex, as long as it prints the expression you want to pass back to the parent (and nothing else) to standard output during its execution.
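For instance, a sketch of that cooperation in bash or zsh (mycd and myscript.ps1 are made-up names; the script must print only the target directory) is a small wrapper function in your shell startup file:
# in ~/.bashrc or ~/.zshrc: cd to whatever the PowerShell script prints
mycd() {
    cd "$(myscript.ps1 "$@")" || return
}
After that, running mycd arg1 arg2 changes the interactive shell's directory, because the cd happens in the parent shell rather than in the PowerShell subprocess.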
Another approach is a small wrapper script that you run first: it temporarily sets up the environment variables and the $PWD you want for the current PowerShell session, and then calls each of the scripts you want to run. Because everything happens inside that one session, the $PWD established before (or by) the first script is still in effect when the later scripts run, for as long as the session window stays open.

Difference between running a script using source and ./ [duplicate]

csh:
set a=0
echo "a is $a"
When I do ./my_script.csh the output is:
a is
When I do source my_script.csh the output is:
a is 0
Why is that so? As far as I know, ./ execution uses a new shell.
That's right, ./my_script.csh starts a new shell, and uses the #! that you should have at the top of the file to select which shell to run (which should be csh in this case).
source my_script.csh runs the script in the current shell.
If the script is incorrectly run in, for example, the bash shell, set a=0 is not the syntax for setting an environment variable in bash, so the code won't work as you expected, because you're using the wrong shell.
Take a look at the #! at the top of the file. Is it correct?
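For instance, if those same two lines end up being run by bash instead of csh, set a=0 quietly sets the positional parameters rather than a shell variable, which is exactly why $a comes out empty:
#!/bin/bash
set a=0          # in bash this makes $1 the string "a=0"; no variable named a is created
echo "a is $a"   # prints "a is " because $a was never set
echo "1 is $1"   # prints "1 is a=0"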
Check whether the variable "a" is set in your current shell:
set | grep '^a='
Remember that once you source a script into your current shell,
all its global variables are there until unset or until you exit the current shell.
You may want to start a new shell, source the script, and exit that shell to perform valid tests.
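For example, a throwaway test that never touches your interactive shell, assuming the script is the my_script.csh from the question, can run entirely inside a child csh:
csh -c 'source my_script.csh; echo "after sourcing, a is $a"'
Both echoes happen inside the child csh; once it exits, $a does not exist in your own shell, so repeated tests stay clean.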
I don't know the context of your problem, but you may want to export some key variables to have their copies in every subprocess.

incrementing an environment variable

I need to increment an environment variable with these steps:
envar=1
export envar
sh script_incrementation
echo $envar
where script_incrementation contains something like this:
#! /bin/sh
envar=$[envar+1] #I've tried also other methods of incrementation
export envar
Whatever I do, after exiting the script the variable remains with its initial value 1.
Thanks for your time.
A shell script executes in its own shell, so you cannot affect the outer shell unless you source it. See this question for details of that discussion.
Consider the following script, which I will call Foo.sh.
#!/bin/bash
export HELLO=$(($HELLO+1))
Suppose in the outer shell, I define an environmental variable:
export HELLO=1
If I run the script like this, it run inside its own shell and will not affect the parent.
./Foo.sh
However, if I source it, it will just execute the commands in the current shell, and will achieve the desired effect.
. Foo.sh
echo $HELLO # prints 2
Your script can not change the environment of the calling process (shell), it merely inherits it.
So, if you export foo=bar, and then invoke sh (a new process) with your script, the script will see the value of $foo (which is "bar"), and it will be able to change its own copy of it – but that is not going to affect the environment of the parent process (where you exported the variable).
You can simply source your script in the original shell, i.e. run
source increment_script.sh
or
. increment_script.sh
and that will then change the value of the variable.
This is because sourcing a script avoids spawning a new shell (process).
Another trick is to have your script output the changed environment, and then eval that output, for example:
counter=$((counter+1))
echo "counter=$counter"
and then run that as
eval `increment_script.sh`
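Putting the pieces together, a sketch of that trick (reusing the increment_script.sh name from above) looks like this:
#!/bin/sh
# increment_script.sh: print an assignment for the parent shell to eval,
# instead of trying (and failing) to modify the parent's environment directly
counter=$((${counter:-0}+1))
echo "counter=$counter"
and then, in the calling shell (after chmod +x increment_script.sh):
counter=1
export counter
eval `./increment_script.sh`
echo $counter    # prints 2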

Change shell and execute commands within a shell script

I am facing a situation where, from within my script, I have to execute a read-only script which changes the shell and sets some environment variables. Now I need to access these environment variables from my script.
The situation looks like this. script-A:
#!/bin/csh -f
bash
#set some environment variables A,B,C
I do not have write access to script-A and it performs a lot of configurations which are necessary for my Script-B.
I have tried script-B with
#!/bin/csh -f
./script-A
echo $A
However, since the shell has changed, I am unable to access $A. Is there some workaround so that I can do this?
Ideally the commands in my script-B have to run in the new environment set up by script-A. When working interactively this is fine, since I can first execute script-A and then run the required commands. However, I have to automate the whole process.
Rewrite your own script in the same shell language as the one you need to execute so that you can execute it with the shell's source command.
If script-A is a csh script, then
source script-A
This works even if script-A contains exit statements:
$ cat x.csh
#!/bin/csh
source y.csh
echo $A - $B
$ cat y.csh
#!/bin/csh
set A=10
set B=20
exit 1
set B=30
$ ./x.csh
10 - 20
If script-A is in another shell, you need to rewrite script-B to match that shell.
Oh, and by the way, DITCH CSH if at all possible.

Setting Environment Variables Dynamically on Linux

I am currently looking for a way to set environment variables on Linux via a simple shell script. Within the script I am currently using the 'export' command; however, that only has scope within the script, whereas system-wide scope is needed.
Is there any way I can do this via a shell script, or will another method need to be used?
When you run a shell script, it executes in a sub-shell. What you need is to execute it in the context of the current shell, by sourcing it with:
source myshell.sh
or:
. myshell.sh
The latter is my preferred approach since I'm inherently lazy.
If you're talking about system-wide scope inasmuch as you want to affect everybody, you'll need to put your commands in a place where they're sourced at login time (or shell creation time), /etc/profile for example. Where you put your commands depends on the shell being used.
You can find out what scripts get executed by examining the man page for your shell:
man bash
The bash shell, when invoked as a login shell (which includes being invoked as a non-login shell with the --login parameter), will use /etc/profile and the first it finds of ~/.bash_profile, ~/.bash_login or ~/.profile.
Non-login interactive bash shells will use, unless invoked with --norc or --rcfile <filename>, the files /etc/bash.bashrc and ~/.bashrc.
I'm pretty certain it's even more convoluted than that depending on how the shell is run, but that's as far as my memory stretches. The man page should detail it all.
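As a sketch of the truly system-wide route (the file name and variable are placeholders, and writing under /etc needs root), many distributions have /etc/profile source every *.sh file in /etc/profile.d for login shells, so you can drop a snippet there instead of running a script:
# /etc/profile.d/myapp.sh
export MYAPP_HOME=/opt/myapp
export PATH=$PATH:$MYAPP_HOME/bin
New login sessions will pick these up; shells that are already running will not, which is the limitation the answers here keep pointing out.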
You could have your script check for the existence of something like /var/myprog/env-vars-to-load and source it, then unlink it, if it exists, perhaps using trap and a signal. It's hard to say; I'm not familiar with your program.
There is no way to 'inject' environment variables into another process's address space, so you'll have to find some method of IPC that can instruct the process on what to set.
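A rough sketch of that check, reusing the /var/myprog/env-vars-to-load path from above (the surrounding script is hypothetical), might be:
# inside the long-running script, e.g. at the top of its main loop or in a signal handler
env_drop=/var/myprog/env-vars-to-load
if [ -f "$env_drop" ]; then
    . "$env_drop"        # pull the new variables into this process
    rm -f "$env_drop"    # consume the file so it is applied only once
fi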
A fundamental aspect of environment variables is that you cannot affect the environment for any process but your own and child processes that you spawn. You can't create a script that sets "system wide" environment variables that somehow become usable by other processes.
At the shell prompt:
$ source script.sh
And set the env vars in script.sh
test.sh
#!/bin/bash
echo "export MY_VAR=STACK_OVERFLOW" >> $HOME/.bashrc
. $HOME/.bashrc
sh task.sh
task.sh
#!/bin/sh
echo $MY_VAR
Add executable rights:
chmod +x test.sh task.sh
And launch test.sh:
./test.sh
Result:
STACK_OVERFLOW
