Set local environment variable within a bash script - linux

I'm trying to set an environment variable that will persist after the script has finished running. It can go away once I end the ssh session.
Sample bash script:
# User picks an option
1) export dogs=cool
2) export dogs=not_cool
Running the script as source script.sh doesn't work: it kicks me out of my shell when run, and it also requires the interactive menu, so that approach won't work. Basically I want the user to be able to pick an option to switch between environment variables in their shell. Is that even possible?
Source:
#!/bin/bash
set -x
show_menu(){
NORMAL=`echo "\033[m"`
MENU=`echo "\033[36m"` #Blue
NUMBER=`echo "\033[33m"` #yellow
FGRED=`echo "\033[41m"`
RED_TEXT=`echo "\033[31m"`
ENTER_LINE=`echo "\033[33m"`
echo -e "${MENU}*********************************************${NORMAL}"
echo -e "${MENU}**${NUMBER} 1)${MENU} Option 1 ${NORMAL}"
echo -e "${MENU}**${NUMBER} 2)${MENU} Option 2 ${NORMAL}"
echo -e "${MENU}*********************************************${NORMAL}"
echo -e "${ENTER_LINE}Please enter a menu option and enter or ${RED_TEXT}enter to exit. ${NORMAL}"
read opt
}
function option_picked() {
COLOR='\033[01;31m' # bold red
RESET='\033[00;00m' # normal white
MESSAGE=${@:-"${RESET}Error: No message passed"}
echo -e "${COLOR}${MESSAGE}${RESET}"
}
clear
show_menu
while [ "$opt" != '' ]
do
if [[ $opt = "" ]]; then
exit;
else
case $opt in
1) clear;
option_picked "Option 1";
export dogs=cool
show_menu;
;;
2) clear;
option_picked "Option 2";
export dogs=not_cool
show_menu;
;;
x)exit;
;;
\n)exit;
;;
*)clear;
option_picked "Pick an option from the menu";
show_menu;
;;
esac
fi
done

The problem here is that . ./script.sh or source ./script.sh cannot run an interactive menu-style script like this one. There is no way that I'm aware of to set local environment variables from a bash script the way I am trying to do here.

Redirect your normal echoes for user interaction to stderr (>&2)
echo the value that you want to have in your parent's environment to stdout (>&1)
If you change your script that way, you can call it like:
ENV_VAR=$( /path/to/your_script arg1 arg2 arg3 arg_whatever )
and now you have "exported" a variable to the "parent"
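For example, here is a minimal sketch of the menu rewritten this way (the script name pick_dogs.sh is made up, and the menu is trimmed to its two options):
#!/bin/bash
# pick_dogs.sh - all user interaction goes to stderr,
# only the chosen value is written to stdout
echo "1) cool" >&2
echo "2) not_cool" >&2
# read -p writes its prompt to stderr already
read -p "Please enter a menu option: " opt
case $opt in
1) echo "cool" ;;
2) echo "not_cool" ;;
esac
The caller captures only stdout, so the menu still displays normally:
export dogs=$( ./pick_dogs.sh )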

Try running your script with "./myscript.sh", which is supposed to use the current shell without invoking a new one (still, I suspect the hash-bang line might invoke a new shell).
This can also be solved with ~/.bashrc: the required environment variables can be added in that file. If you need a new shell with your own environment, you invoke bash with bash --rcfile <file>.
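A minimal sketch of the --rcfile approach (the rc file path is made up): write the desired export into a file and start a new interactive shell that reads it as its rc file:
cat > /tmp/dogs.rc <<'EOF'
export dogs=cool
EOF
bash --rcfile /tmp/dogs.rc
Note that --rcfile replaces ~/.bashrc for that shell, so source ~/.bashrc from inside the file if you still want your usual settings.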

Related

Bash module load function not working as expected

I have a batch script that I eventually want to execute on a cluster via condor_submit. The script needs to load some modules via "module load matlab/R2020a". However, nothing works.
The script looks like this:
#!/bin/bash
module load cudnn/8.2.0-cu11.x
module load cuda/11.2
module load matlab/R2020a
echo $PATH
echo $SHELL
#Check matlab version
echo_and_run() { echo "$*" ; "$@" ; }
matlab -e | grep "MATLAB="
echo_and_run matlab -e | grep "MATLAB="
...
setup input etc.
...
echo_and_run matlab -nodisplay -batch "......matlab commands"
When I run it from my home shell it gives me:
...
/bin/bash
MATLAB=/is/software/matlab/linux/R2014a
MATLAB=/is/software/matlab/linux/R2014a
...
Both of which are incorrect.
When executing this in my local shell (source ./scriptname.sh), the output is even more confusing:
...
/bin/bash
MATLAB=/is/software/matlab/linux/R2020a
MATLAB=/is/software/matlab/linux/R2014a
...
So the matlab version updates, but only for the non-echo_and_run execution (the first call). In the actual call it is still the default 2014 version.
What in the world is going on? I checked the $PATH variables and they are identical to those of my running shell. I tried sourcing ~/.bashrc at the top of the script; no difference. When I type "type module" I can see that it is a function:
module is a function
module ()
{
_module_raw "$#" 2>&1
}
Some older posts mention that I should either run the script with "source" or "." (for sh), but I cannot do that, since the script is eventually called by condor_submit. Or I should find the file that defines module; however, I do not know what other file (besides ~/.bashrc) that could be.
Old post: https://unix.stackexchange.com/questions/194893/why-cant-i-load-modules-while-executing-my-bash-script-but-only-when-sourcing
Edit
I am currently trying everything locally (in the login shell) which could be different from the execution shell, but even here I get this weird behaviour.
Edit II:
+ type -a _module_raw
_module_raw is a function
_module_raw ()
{
unset _mlshdbg;
if [ "${MODULES_SILENT_SHELL_DEBUG:-0}" = '1' ]; then
case "$-" in
*v*x*)
set +vx;
_mlshdbg='vx'
;;
*v*)
set +v;
_mlshdbg='v'
;;
*x*)
set +x;
_mlshdbg='x'
;;
*)
_mlshdbg=''
;;
esac;
fi;
unset _mlre _mlIFS;
if [ -n "${IFS+x}" ]; then
_mlIFS=$IFS;
fi;
IFS=' ';
for _mlv in ${MODULES_RUN_QUARANTINE:-};
do
if [ "${_mlv}" = "${_mlv##*[!A-Za-z0-9_]}" -a "${_mlv}" = "${_mlv#[0-9]}" ]; then
if [ -n "`eval 'echo ${'$_mlv'+x}'`" ]; then
_mlre="${_mlre:-}${_mlv}_modquar='`eval 'echo ${'$_mlv'}'`' ";
fi;
_mlrv="MODULES_RUNENV_${_mlv}";
_mlre="${_mlre:-}${_mlv}='`eval 'echo ${'$_mlrv':-}'`' ";
fi;
done;
if [ -n "${_mlre:-}" ]; then
eval `eval ${_mlre}/usr/bin/tclsh8.6 /usr/lib/x86_64-linux-gnu/modulecmd.tcl bash '"$@"'`;
else
eval `/usr/bin/tclsh8.6 /usr/lib/x86_64-linux-gnu/modulecmd.tcl bash "$@"`;
fi;
_mlstatus=$?;
if [ -n "${_mlIFS+x}" ]; then
IFS=$_mlIFS;
else
unset IFS;
fi;
unset _mlre _mlv _mlrv _mlIFS;
if [ -n "${_mlshdbg:-}" ]; then
set -$_mlshdbg;
fi;
unset _mlshdbg;
return $_mlstatus
}
The provided script output does not help to determine whether there is an issue at the environment modules level.
Adding a module list command in your script after the 3 module load commands may help to determine whether the module function has properly loaded your environment or not.
In some situations, like when running a script on a cluster through a batch scheduler, it is good to source the module initialization script at the start of such a script to ensure the module function is defined. It seems that you are running on a Debian-like system, so the initialization script may be sourced with:
. /usr/share/modules/init/bash
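A sketch of the batch script with both suggestions applied (the module names are taken from the question):
#!/bin/bash
# ensure the module function is defined in this non-interactive shell
. /usr/share/modules/init/bash
module load cudnn/8.2.0-cu11.x
module load cuda/11.2
module load matlab/R2020a
# verify that the modules were actually loaded
module list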

Shell prompt that is based on location in filesystem

I have to work within three main directories under the root filesystem - home/username, project, and scratch. I want my shell prompt to display which of these top-level directories I am in.
Here is what I am trying to do:
top_level_dir ()
{
if [[ "${PWD}" == *home* ]]
then
echo "home";
elif [[ "${PWD}" == *scratch* ]]
then
echo "scratch";
elif [[ "${PWD}" == *project* ]]
then
echo "project";
fi
}
Then, I export PS1 as:
export PS1='$(top_level_dir) : '
Unfortunately this is not working as I want. I get home : for my prompt when I am in my home directory, but if I switch to scratch or project then the prompt does not change. I do not understand bash scripting very well, so I would appreciate any help to correct my code.
You can hook into cd to change the prompt every time you change the working directory. I've often asked myself how to hook into cd, but I think I have now found a solution. What about adding this to your ~/.bashrc?:
#
# Wrapper function that is called if cd is invoked
# by the current shell
#
function cd {
# call builtin cd. change to the new directory
builtin cd "$@"
# call a hook function that can use the new working directory
# to decide what to do
color_prompt
}
#
# Changes the color of the prompt depending
# on the current working directory
#
function color_prompt {
pwd=$(pwd)
if [[ "$pwd/" =~ ^/home/ ]] ; then
PS1='\[\033[01;32m\]\u#\h:\w\[\033[00m\]\$ '
elif [[ "$pwd/" =~ ^/etc/ ]] ; then
PS1='\[\033[01;34m\]\u#\h:\w\[\033[00m\]\$ '
elif [[ "$pwd/" =~ ^/tmp/ ]] ; then
PS1='\[\033[01;33m\]\u#\h:\w\[\033[00m\]\$ '
else
PS1='\u#\h:\w\\$ '
fi
export PS1
}
# checking directory and setting prompt on shell startup
color_prompt
Please try this method instead and tell us how it works, e.g. how your prompt changes in your home directory, your project or scratch directory, and other directories besides those. Tell us what error messages you see as well, since the problem may lie within them.
Tell me also how you run it: by script, by direct execution, or through a startup script like ~/.bashrc.
top_level_dir ()
{
__DIR=$PWD
case "$__DIR" in
*home*)
echo home
;;
*scratch*)
echo scratch
;;
*project*)
echo project
;;
*)
echo "$__DIR"
;;
esac
}
export PS1='$(top_level_dir) : '
export -f top_level_dir
If it doesn't work, try changing __DIR=$PWD to __DIR=$(pwd) and tell us if that helps. I would also like to confirm that you're really running bash. Note that there are many variants of sh, like bash, zsh, ksh, and dash, and which one is installed and used by default depends on the system. To confirm that you're using bash, run echo "$BASH_VERSION" and see if it prints a version.
You should also make sure that you're running export PS1='$(top_level_dir) : ' with single quotes and not with double quotes (export PS1="$(top_level_dir) : "): with double quotes, the command substitution runs once at assignment time instead of every time the prompt is displayed.
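A quick way to see the difference, assuming top_level_dir is already defined in the shell:
# double quotes: $(top_level_dir) is expanded once, right now,
# and the frozen result is stored in PS1
export PS1="$(top_level_dir) : "
# single quotes: the literal string is stored, and bash re-runs
# top_level_dir each time it displays the prompt
export PS1='$(top_level_dir) : '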

Bash scripting on debian installer not accepting user input on preseeding

I have a very small script that needs to be run on debian installer: (via preseeding, pre installation script)
echo -n -e " # Your option [1] [2] [3]: "
read REPLY
if [ "$REPLY" == "1" ]
The script stops here: whatever I press is just displayed on the screen, but it is not accepting the enter key. Normally, when you press 1 and then enter, read should return 1 into $REPLY. But nothing happens; it keeps accepting user input, and no further action takes place.
Then I switched to tty2 with ALT+F2 and ran the script there; it works as expected and takes the input when I press enter. Why is tty1 not accepting enter as usual?
Use debconf for that kind of configuration; it addresses exactly this kind of need.
Adapted example from the manual
Template file (debian/templates):
Template: your_package/select_option
Type: select
Choices: 1, 2, 3
Description: Which option?
Choose one of the options
Script (debian/config):
#!/bin/sh -e
# Source debconf library.
. /usr/share/debconf/confmodule
db_input medium your_package/select_option || true
db_go
# Check their answer.
db_get your_package/select_option
if [ "$RET" = "1" ]; then
# Do stuff
fi
Had the same problem (read not processing my input) with busybox on an embedded Linux.
Took me some time to realize that busybox's read is not CR-tolerant — my terminal program (used miniterm.py) sent CR/LF line ends by default; switching it to LF only solved my problem!
With the bash interpreter, try replacing read with:
builtin read
With another sh interpreter, specify the variable name:
read REPLY
The following script works fine for me:
#!/bin/sh
echo -n -e " # Your option [1] [2] [3]: "
read
case $REPLY in
1 )
echo "one" ;;
2 )
echo "two" ;;
3 )
echo "three" ;;
*)
echo "invalid" ;;
esac
It prints out one nicely if I choose 1. Any reason why you'd like to stick to if...fi?

How can I run a function from a script in command line?

I have a script that has some functions.
Can I run one of the functions directly from the command line?
Something like this?
myScript.sh func()
Well, while the other answers are right, you can certainly do something else: if you have access to the bash script, you can modify it and simply place at the end the special parameter "$@", which expands to the arguments of the command line you specify; since it stands alone, the shell will try to call them verbatim, and here you could specify the function name as the first argument. Example:
$ cat test.sh
testA() {
echo "TEST A $1";
}
testB() {
echo "TEST B $2";
}
"$#"
$ bash test.sh
$ bash test.sh testA
TEST A
$ bash test.sh testA arg1 arg2
TEST A arg1
$ bash test.sh testB arg1 arg2
TEST B arg2
For polish, you can first verify that the command exists and is a function:
# Check if the function exists (bash specific)
if declare -f "$1" > /dev/null
then
# call arguments verbatim
"$#"
else
# Show a helpful error
echo "'$1' is not a known function name" >&2
exit 1
fi
If the script only defines the functions and does nothing else, you can first execute the script within the context of the current shell using the source or . command and then simply call the function. See help source for more information.
The following command first registers the function in the context, then calls it:
. ./myScript.sh && function_name
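For example, assuming myScript.sh defines a function named func (the names from the question):
$ cat myScript.sh
func() { echo "func was called with: $*"; }
$ . ./myScript.sh && func arg1 arg2
func was called with: arg1 arg2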
Briefly, no.
You can import all of the functions in the script into your environment with source (help source for details), which will then allow you to call them. This also has the effect of executing the script, so take care.
There is no way to call a function from a shell script as if it were a shared library.
Using case
#!/bin/bash
fun1 () {
echo "run function1"
[[ "$#" ]] && echo "options: $#"
}
fun2 () {
echo "run function2"
[[ "$#" ]] && echo "options: $#"
}
case $1 in
fun1) "$#"; exit;;
fun2) "$#"; exit;;
esac
fun1
fun2
This script will run functions fun1 and fun2, but if you start it with the option fun1 or fun2 it'll only run the given function with its args (if provided) and exit.
Usage
$ ./test
run function1
run function2
$ ./test fun2 a b c
run function2
options: a b c
I have a situation where I need a function from a bash script which must not be executed beforehand (e.g. by source), and the problem with the "$@" approach is that myScript.sh is then run twice, it seems... So I've come up with the idea to extract the function with sed:
sed -n "/^func ()/,/^}/p" myScript.sh
And to execute it at the time I need it, I put it in a file and use source:
sed -n "/^func ()/,/^}/p" myScript.sh > func.sh; source func.sh; rm func.sh
Edit: WARNING - seems this doesn't work in all cases, but works well on many public scripts.
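If your shell is bash, a variant of the same idea uses process substitution to avoid the temporary file (func is the function name from the example above):
# extract the function definition and source it in one step
source <(sed -n "/^func ()/,/^}/p" myScript.sh)
func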
If you have a bash script called "control" and inside it you have a function called "build":
function build() {
...
}
Then you can call it like this (from the directory where it is):
./control build
If it's inside another folder, that would make it:
another_folder/control build
If your file is called "control.sh", that would accordingly make the function callable like this:
./control.sh build
The post is solved, but I'd like to mention my preferred solution. Namely, define a generic one-liner script eval_func.sh:
#!/bin/bash
source "$1" && shift && "$@"
Then call any function within any script via:
./eval_func.sh <any script> <any function> <any args>...
An issue I ran into with the accepted solution is that when sourcing my function-containing script within another script, the arguments of the latter would be evaluated by the former, causing an error.
The other answers here are nice, and much appreciated, but often I don't want to source the script in the session (which reads and executes the file in your current shell) or modify it directly.
I find it more convenient to write a one or two line 'bootstrap' file and run that. Makes testing the main script easier, doesn't have side effects on your shell session, and as a bonus you can load things that simulate other environments for testing. Example...
# breakfast.sh
make_donuts() {
echo 'donuts!'
}
make_bagels() {
echo 'bagels!'
}
# bootstrap.sh
source 'breakfast.sh'
make_donuts
Now just run ./bootstrap.sh. The same idea works with your python, ruby, or whatever scripts.
Why useful? Let's say you complicated your life for some reason, and your script may find itself in different environments with different states present. For example, either your terminal session, or a cloud provider's cool new thing. You also want to test cloud things in terminal, using simple methods. No worries, your bootstrap can load elementary state for you.
# breakfast.sh
# Now it has to do slightly different things
# depending on where the script lives!
make_donuts() {
if [[ $AWS_ENV_VAR ]]
then
echo '/donuts'
elif [[ $AZURE_ENV_VAR ]]
then
echo '\donuts'
else
echo '/keto_diet'
fi
}
If you let your bootstrap thing take an argument, you can load different state for your function to chew, still with one line in the shell session:
# bootstrap.sh
source 'breakfast.sh'
case $1 in
AWS)
AWS_ENV_VAR="arn::mumbo:jumbo:12345"
;;
AZURE)
AZURE_ENV_VAR="cloud::woo:_impress"
;;
esac
make_donuts # You could use $2 here to name the function you wanna, but careful if evaluating directly.
In terminal session you're just entering:
./bootstrap.sh AWS
Result:
# /donuts
You can call a function from a command-line argument as below:
function irfan() {
echo "Irfan khan"
date
hostname
}
function config() {
ifconfig
echo "hey"
}
$1
Once you have defined the functions, put $1 at the end to accept as an argument the name of the function you want to call.
Let's say the above code is saved in fun.sh. Now you can call the functions as ./fun.sh irfan and ./fun.sh config on the command line.

Bash script to capture input, run commands, and print to file

I am trying to do a homework assignment and it is very confusing. I am not sure if the professor's example is in Perl or bash, since it has no header. Basically, I just need help with the meat of the problem: capturing the input and outputting it. Here is the assignment:
In the session, provide a command prompt that includes the working directory, e.g.,
$ ./logger
/home/it244/it244/hw8$
Accept user’s commands, execute them, and display the output on the screen.
During the session, create a temporary file “PID.cmd” (PID is the process ID) to store the command history in the following format (index: command):
1: ls
2: ls -l
If the script is aborted by CTRL+C (signal 2), output a message “aborted by ctrl+c”.
When you quit the logging session (either by “exit” or CTRL+C),
a. Delete the temporary file
b. Print out the total number of the commands in the session and the numbers of successful/failed commands (according to the exit status).
Here is my code so far (which did not go well, I would not try to run it):
#!/bin/sh
trap 'exit 1' 2
trap 'ctrl-c' 2
echo $(pwd)
while true
do
read -p command
echo "$command:" $command >> PID.cmd
done
Currently when I run this script I get
command read: 10: arg count
What is causing that?
======UPDATE=========
OK, I made some progress. It's not quite working all the way: it doesn't like my bashtrap or my incremental index.
#!/bin/sh
index=0
trap bashtrap INT
bashtrap(){
echo "CTRL+C aborting bash script"
}
echo "starting to log"
while :
do
read -p "command:" inputline
if [ $inputline="exit" ]
then
echo "Aborting with Exit"
break
else
echo "$index: $inputline" > output
$inputline 2>&1 | tee output
(( index++ ))
fi
done
This can be achieved in bash or perl or others.
Some hints to get you started in bash:
question 1: a command prompt showing the working directory
1) make sure of the prompt format in the user's .bashrc setup file: see the PS1 data for debian-like distros.
2) cd into that directory within your bash script.
question 2: run the user command
1) get the user input
read -p "command : " input_cmd
2) run the user command to STDOUT
bash -c "$input_cmd"
3) Track the user input command exit code
echo $?
Should exit with "0" if everything worked fine (you can also find exit codes in the command man pages).
4) Track the command PID if the exit code is OK
echo $$ >> /tmp/pid_Ok
But take care: the question asks you to keep the user's command input, not the PID itself as shown here.
5) trap on exit
See man trap, as you misunderstood its use: you may create a function that is called on the caught exit or CTRL+C signals.
6) increment the index in your while loop (on the exit code condition)
index=0
while ...
do
...
((index++))
done
I guess you have enough to start your homework.
Since the example posted used sh, I'll use that in my reply. You need to break down each requirement into its specific lines of supporting code. For example, in order to "provide a command prompt that includes the working directory" you need to actually print the current working directory as the prompt string for the read command, not by setting the $PS1 variable. This leads to a read command that looks like:
read -p "`pwd -P`\$ " _command
(I use leading underscores for private variables - just a matter of style.)
Similarly, the requirement to do several things on either a trap or a normal exit suggests a function should be created which could then either be called by the trap or to exit the loop based on user input. If you wanted to pretty-print the exit message, you might also wrap it in echo commands and it might look like this:
_cleanup() {
rm -f $_LOG
echo
echo $0 ended with $_success successful commands and $_fail unsuccessful commands.
echo
exit 0
}
So after analyzing each of the requirements, you'd need a few counters and a little bit of glue code such as a while loop to wrap them in. The result might look like this:
#!/usr/bin/sh
# Define a function to call on exit
_cleanup() {
# Remove the log file as per specification #5a
rm -f $_LOG
# Display success/fail counts as per specification #5b
echo
echo $0 ended with $_success successful commands and $_fail unsuccessful commands.
echo
exit 0
}
# Where are we? Get absolute path of $0
_abs_path=$( cd -P -- "$(dirname -- "$(command -v -- "$0")")" && pwd -P )
# Set the log file name based on the path & PID
# Keep this constant so the log file doesn't wander
# around with the user if they enter a cd command
_LOG=${_abs_path}/$$.cmd
# Print ctrl+c msg per specification #4
# Then run the cleanup function
trap "echo aborted by ctrl+c;_cleanup" 2
# Initialize counters
_line=0
_fail=0
_success=0
while true
do
# Count lines to support required logging format per specification #3
((_line++))
# Set prompt per specification #1 and read command
read -p "`pwd -P`\$ " _command
# Echo command to log file as per specification #3
echo "$_line: $_command" >>$_LOG
# Arrange to exit on user input with value 'exit' as per specification #5
if [[ "$_command" == "exit" ]]
then
_cleanup
fi
# Execute whatever command was entered as per specification #2
eval $_command
# Capture the success/fail counts to support specification #5b
_status=$?
if [ $_status -eq 0 ]
then
((_success++))
else
((_fail++))
fi
done
