Checking cmd line argument in bash script bypasses the source statement - linux

I have a bash script "build.sh" like this:
# load Xilinx environment settings
source $XILINX/../settings32.sh
cp -r "../../../EDK/platform" "hw_platform"
if [ $# -ne 0 ]; then
cp $1/system.xml hw_platform/system.xml
fi
echo "Done"
Normally I run it as "./build.sh" and it executes the "source" statement to set the environment variables correctly. Sometimes I need the script to copy a file from an alternative place, so I run it as "./build.sh ~/alternative_path/". My script checks whether there is a command-line argument by checking $# against 0.
When I do that, the "source" statement at the beginning of the script somehow gets skipped, and the build fails. I have put two "echo" statements before and after the "source", and I see both echo statements get executed.
Currently I circumvent this issue with "source $XILINX/../settings32.sh; build.sh". However, please advise: what have I done wrong in the script? Thanks.

Try storing the values of your positional parameters in an array variable first, then clearing them. "$XILINX/../settings32.sh" may be acting differently when it detects arguments.
# Store arguments.
ARGS=("$@")
# Clear the positional parameters.
set --
# load Xilinx environment settings
source "$XILINX/../settings32.sh"
cp -r "../../../EDK/platform" "hw_platform"
if [[ ${#ARGS[@]} -ne 0 ]]; then
cp "${ARGS[0]}/system.xml" hw_platform/system.xml
fi
echo "Done"
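To see why clearing the positional parameters matters, here is a minimal, self-contained sketch (the /tmp/inner.sh file name is made up for illustration) showing that a sourced file sees the caller's arguments until `set --` clears them:

```shell
#!/bin/bash
# A sourced file runs in the current shell and therefore sees the
# caller's positional parameters.
cat > /tmp/inner.sh <<'EOF'
echo "inner sees $# argument(s): $*"
EOF

set -- foo bar          # simulate running the script with two arguments
source /tmp/inner.sh    # prints: inner sees 2 argument(s): foo bar

set --                  # clear the positional parameters
source /tmp/inner.sh    # prints: inner sees 0 argument(s):
```

This is why a settings script that inspects `$#` or `$1` can take a different code path when your build script was started with an argument.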

Related

I'm missing output from MOTD, MTU and users from docker group

So, I'm writing a bash script that doesn't give me any output.
The script is:
a) going to detect what operating system that is running
b) And know what package managers to use between APT, DNF and Pacman.
Further in the script it is:
a) going to choose the correct package manager to use when installing both Docker and Docker-Compose.
I have written the MOTD function that should show a message on my Ubuntu server.
I'm creating a function that adds users to a docker group.
I'm configuring the Docker daemon to set a specific MTU value of 1442 and to enable logging.
The problem is that I don't get any output, other than the MTU value, which is actually 1442 and seems correct in my script.
Further, I should get a prompt where I can enter a user that will then be added to the docker group.
#!/bin/bash
# This script will install Docker and Docker-Compose, configure the Docker daemon,
# and add specified users to the docker group.
# Define default values
MTU=1442
VERBOSE=false
# Function to detect operating system
detect_os() {
if [ -f /etc/lsb-release ]; then
os="ubuntu"
package_manager="apt"
elif [ -f /etc/redhat-release ]; then
os="centos"
package_manager="dnf"
elif [ -f /etc/arch-release ]; then
os="arch"
package_manager="pacman"
else
echo "Error: Unable to detect operating system."
exit 1
fi
}
# Function to update MOTD
update_motd() {
local motd_file="/etc/motd"
echo "$1" > "$motd_file"
echo "MOTD updated with message: $1"
}
# Function to add users to docker group
add_users() {
local users="$1"
local group="docker"
for user in $users; do
# Check if user exists
if ! id "$user" >/dev/null 2>&1; then
useradd "$user"
echo "User $user created."
fi
# Add user to docker group
usermod -aG "$group" "$user"
echo "User $user added to $group group."
done
}
# Function to install Docker and Docker-Compose
install_docker() {
local package_manager="$1"
local packages="docker docker-compose"
case "$package_manager" in
apt)
sudo apt-get update
sudo apt-get install -y $packages
;;
dnf)
sudo dnf install -y $packages
;;
pacman)
sudo pacman -S --noconfirm $packages
;;
*)
echo "Error: Invalid package manager: $package_manager"
exit 1
;;
esac
}
# Function to configure Docker daemon
configure_docker() {
local mtu="$1"
local config_file="/etc/docker/daemon.json"
# Create config file if it does not exist
if [ ! -f "$config_file" ]; then
sudo touch "$config_file"
sudo chmod 644 "$config_file"
fi
# Update MTU value in config file
sudo sh -c "echo '{\"mtu\": $mtu}' > $config_file"
echo "Docker daemon configured with MTU=$mtu."
}
# Parse command line arguments
while [ "$#" -gt 0 ]; do
case "$1" in
--motd )
MOTD="$2"
shift 2
;;
--users)
USERS="$2"
shift 2
;;
--mtu)
MTU="$2"
shift 2
;;
esac
done
echo "MOTD: $MOTD"
echo "USERS: $USERS"
echo "MTU: $MTU"
echo "Script is finish"
The output doesn't show me anything more than MTU=1442, and is missing the users and MOTD.
I'm not sure if I was clear enough, but from my project I thought my script was correct; I'm probably missing some logic somewhere in my script. The project's tasks are described above, but I'm not sure if I'm on the right track here.
Would appreciate any suggestions for my script :)
This is not a full fix of your script - since I'm sure you are not about to cheat on your project, but want to understand why your script doesn't produce the expected output so you will be able to develop it on your own.
Here I'm pasting a small script that may help you better understand the basic usage of functions in Bash. Hope it will help 🤞.
#!/bin/bash
### Defining function - Functions are reusable code blocks in the script and can accept arguments while calling them.
# So each time we call an individual function later in the script we may pass different arguments to it (if needed).
my_function1(){
echo "this is a function that doesn't expect any arguments."
echo "End of 'my_function1'"
}
my_function2(){
echo "this is a function that does expect an argument."
echo "this function expects one argument to print/echo it."
echo "Hello ${1}" # <-- Numerical variables ($1 $2 .. $N) are reserved variables in Bash whose values are assigned from the relevant argument(s) provided at script runtime and function runtime.
echo "End of 'my_function2'"
}
my_function3(){
echo "this is a function that expects one or more arguments."
echo "this function prints/echoes all arguments passed to it."
echo "Hi ${*}" # <-- ${*} expands to all arguments passed to the function; ${#} would give only their count.
echo "End of 'my_function3'"
}
### Calling the functions to execute their code - we may pass relevant argument(s) to them.
# This is done by using the function name - and any parameter/string added after the function name will be passed to it as the function's argument accordingly.
# Running `my_function1` without providing any arguments - since none are necessary.
my_function1
# Print an empty line to separate outputs
echo ""
# Running the `my_function2` passing it a name as argument. Ex. Vegard
my_function2 Vegard
# Print an empty line to separate outputs
echo ""
# Running the `my_function3` passing it a `name` as first argument and a `LAST_NAME` as second argument. Ex. Vegard YOUR_LASTNAME
my_function3 Vegard YOUR_LASTNAME
# Print an empty line to separate outputs
echo ""
### End of the script.
# Exitting the script with the `0` exit-code.
exit 0
Bonus Update #1
How to provide arguments to a script at run time:
You can provide arguments to the scripts almost in the same way as providing arguments to the functions.
Assuming the script file name is script.sh, it is located in our current working directory, and it is executable:
NAME - as first argument.
LAST_NAME - as second argument.
Run the script as follows:
./script.sh NAME LAST_NAME
Bonus Update #2
How to provide Dynamic arguments to a function from the script run time:
If you need to provide a dynamic argument to a function at runtime instead of hard-coding the argument(s), you may use the same reserved numeric variables principle.
Simple example
Consider you run your script providing some argument that can change on every run.
./script.sh firstarg secondarg "last arg"
Note: If a single argument contains a space character, it should be quoted to avoid it being detected as separate arguments - the same applies to providing arguments to functions.
Sum-up: These arguments can be referenced by the $1 $2 .. $<N> variables accordingly anywhere within the script, outside of the functions' code blocks.
${@} or ${*} will get all the provided arguments - google to find their difference (${#}, by contrast, gives only the argument count).
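As a small illustrative sketch of that difference (the `show` function name is made up here), `${#}` is the count of the arguments, while `"$@"` and `"$*"` expand to the arguments themselves:

```shell
#!/bin/bash
# ${#} is the COUNT of the arguments; "$@" and "$*" expand to the
# arguments themselves, but with different word splitting.
show() {
    echo "count : $#"
    # "$@" keeps each argument as a separate word:
    for arg in "$@"; do echo "word  : $arg"; done
    # "$*" joins all arguments into a single word, separated by the
    # first character of IFS (a space by default):
    for arg in "$*"; do echo "joined: $arg"; done
}

show one "two three"
# count : 2
# word  : one
# word  : two three
# joined: one two three
```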
Consider you defined functions that works with one or more arguments.
#!/bin/bash
my_function(){
# Since this $1 is defined in the function's block itself, it
# will get its value from the argument provided to function
# at run-time, not directly from the arguments provided to the script!
echo "Argument I got is: ${1}"
}
my_other_function(){
# Printing the first three arguments provided to function
# delimited by colons.
echo "Arguments I got are: ${1} : ${2} : ${3}"
}
another_function(){
# ${*} will get all the arguments provided to the function
# at run-time, not directly from the arguments provided to the script!
echo "All arguments got are: ${*}"
}
### Now at calling those functions
# Providing a static argument
my_function STATIC_ARGUMENT
# Passing the First argument provided to the script at run-time to the same function.
my_function "${1}"
# Passing the Three arguments provided to the script at run-time to this function.
my_other_function "${1}" "${2}" "${3}"
# Passing all the provided arguments of the script to this function at run-time.
another_function "${@}"
Summary
The same numeric reserved variables that are used to refer to the arguments passed to the script can be passed on to a function when calling it, and in the same manner the function's arguments can be referred to from within the function block.
Caution
The behavior of a script that deals with any argument containing a space or other special character may vary, since Bash treats them differently.

Scanning the whole code if $? = 0 or 1

I understand that checking the error level in Linux can be done using $?. The thing is that the $? value is reset whenever a command succeeds, even if a previous command failed. The code I use for testing is below:
cd /vobs/test2/test3
if [ "$2" = "R" ]; then
mv missing ~/missing2
echo "Created"
fi
Assuming that mv missing ~/missing2 fails, $? should be equal to 1, but because the last command echo "Created" succeeds, $? will be equal to 0. How can I scan the code above so that the moment $? is 1 it executes the exit 1 command? I could use an if/else for every command executed, but that is not the best way to do it, is it? I need some advice on this, please.
You can exit on the first failing command using set -e. This is part of an unofficial "strict bash mode":
set -e
To exit on all errors.
set -o pipefail
To fail on the first issue in a pipeline rather than only the last one.
set -u
To fail on access to undefined variables.

Bash config file or command line parameters

If I am writing a bash script and I choose to use a config file for parameters, can I still pass in parameters via the command line? I guess I'm asking: can I do both in the same command?
The watered down code:
#!/bin/bash
source builder.conf
function xmitBuildFile {
for IP in "${SERVER_LIST[@]}"
do
echo $1#$IP
done
}
xmitBuildFile
builder.conf:
SERVER_LIST=( 192.168.2.119 10.20.205.67 )
$bash> ./builder.sh myname
My expected output should be myname#192.168.2.119 and myname#10.20.205.67, but when I do an echo $#, I am getting 0, even though I passed 'myname' on the command line.
Assuming the "config file" is just a piece of shell sourced into the main script (usually containing definitions of some variables), like this:
. /etc/script.conf
of course you can use the positional parameters anywhere (before or after ". /etc/..."):
echo "$#"
test -n "$1" && ...
you can even define them in the script or in the very same config file:
test $# = 0 && set -- a b c
Yes, you can. Furthermore, it depends on the architecture of your script. You can overwrite parameters with values from the config and vice versa.
By the way, shflags may be pretty useful in writing such a script.
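As a rough sketch of that overwrite pattern (the config file contents and the PORT variable here are made-up examples, not from the question): sourcing a config file does not disturb the positional parameters, so a command-line argument can still override a configured default afterwards.

```shell
#!/bin/bash
# Defaults come from a config file; a command-line argument, when
# given, overrides the configured value.
cat > /tmp/builder.conf <<'EOF'
PORT=8080
EOF

. /tmp/builder.conf          # sourcing leaves $1, $2, ... untouched

PORT="${1:-$PORT}"           # the command-line argument wins if present
echo "using port $PORT"
```

Running it with no arguments prints "using port 8080"; running it with an argument such as 9090 prints "using port 9090".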

Any way to exit bash script, but not quitting the terminal

When I use the exit command in a shell script, the script will terminate the terminal (the prompt). Is there any way to terminate the script and then stay in the terminal?
My script run.sh is expected to be executed by being sourced directly, or sourced from another script.
EDIT:
To be more specific, there are two scripts run2.sh as
...
. run.sh
echo "place A"
...
and run.sh as
...
exit
...
when I run it by . run2.sh, and it hits the exit line in run.sh, I want it to stop and return to the terminal and stay there. But using exit, the whole terminal gets closed.
PS: I have tried to use return, but the echo line still gets executed....
The "problem" really is that you're sourcing and not executing the script. When you source a file, its contents will be executed in the current shell, instead of spawning a subshell. So everything, including exit, will affect the current shell.
Instead of using exit, you will want to use return.
Yes; you can use return instead of exit. Its main purpose is to return from a shell function, but if you use it within a source-d script, it returns from that script.
As §4.1 "Bourne Shell Builtins" of the Bash Reference Manual puts it:
return [n]
Cause a shell function to exit with the return value n.
If n is not supplied, the return value is the exit status of the
last command executed in the function.
This may also be used to terminate execution of a script being executed
with the . (or source) builtin, returning either n or
the exit status of the last command executed within the script as the exit
status of the script.
Any command associated with the RETURN trap is executed
before execution resumes after the function or script.
The return status is non-zero if return is used outside a function
and not during the execution of a script by . or source.
You can add an extra exit command after the return statement/command so that it works both when executing the script from the command line and when sourcing it from the terminal.
Example exit code in the script:
if [ $# -lt 2 ]; then
echo "Needs at least two arguments"
return 1 2>/dev/null
exit 1
fi
The line with the exit command will not be reached when you source the script, because the return command returns first.
When you execute the script, the return command gives an error, so we suppress the error message by redirecting it to /dev/null.
Instead of running the script using . run2.sh, you can run it using sh run2.sh or bash run2.sh.
A new sub-shell will be started to run the script; it will be closed at the end of the script, leaving the other shell open.
Actually, I think you might be confused by how you should run a script.
If you use sh to run a script, say, sh ./run2.sh, even if the embedded script ends with exit, your terminal window will still remain.
However, if you use . or source, your terminal window will exit/close as well when the subscript ends.
for more detail, please refer to What is the difference between using sh and source?
This is just like putting a run function inside your script run2.sh.
You use exit inside run while sourcing your run2.sh file in the bash tty.
If you give the run function the power to exit your script, and give run2.sh
the power to exit the terminal,
then of course the run function has the power to exit your terminal.
#! /bin/sh
# use . run2.sh
run()
{
echo "this is run"
#return 0
exit 0
}
echo "this is begin"
run
echo "this is end"
Anyway, I agree with Kaz that it's a design problem.
I had the same problem; from the answers above and from what I understood, what ultimately worked for me was:
Have a shebang line that invokes the intended script, for example,
#!/bin/bash uses bash to execute the script
I have scripts with both kinds of shebangs. Because of this, using sh or . was not reliable, as it led to a mis-execution (like when the script bails out having run incompletely)
The answer therefore, was
Make sure the script has a shebang, so that there is no doubt about its intended handler.
chmod the .sh file so that it can be executed. (chmod +x file.sh)
Invoke it directly without any sh or .
(./myscript.sh)
Hope this helps someone with similar question or problem.
To write a script that can safely be run as a shell script or sourced as an rc file, the script can compare $0 and $BASH_SOURCE to determine whether exit can be safely used.
Here is a short code snippet for that
name_src="$(basename "$BASH_SOURCE")"
[ "X$(basename "$0")" = "X$name_src" ] && \
echo "***** executing $name_src as a shell script *****" || \
echo "..... sourcing $name_src ....."
I think this happens because you are running it in source mode
(with the dot):
. myscript.sh
You should run that in a subshell:
/full/path/to/script/myscript.sh
'source' http://ss64.com/bash/source.html
It's correct that sourced vs. executed scripts use return vs. exit to keep the same session open, as others have noted.
Here's a related tip, if you ever want a script that should keep the session open, regardless of whether or not it's sourced.
The following example can be run directly like foo.sh or sourced like . foo.sh/source foo.sh. Either way it will keep the session open after "exiting". The "$@" string is passed so that the function has access to the outer script's arguments.
#!/bin/sh
foo(){
read -p "Would you like to XYZ? (Y/N): " response;
[ "$response" != 'y' ] && return 1;
echo "XYZ complete (args: $*).";
return 0;
echo "This line will never execute.";
}
foo "$@";
Terminal result:
$ foo.sh
$ Would you like to XYZ? (Y/N): n
$ . foo.sh
$ Would you like to XYZ? (Y/N): n
$ |
(terminal window stays open and accepts additional input)
This can be useful for quickly testing script changes in a single terminal while keeping a bunch of scrap code underneath the main exit/return while you work. It could also make code more portable in a sense (if you have tons of scripts that may or may not be called in different ways), though it's much less clunky to just use return and exit where appropriate.
Also make sure to return with the expected return value. Otherwise, if you use exit, when the exit is encountered it will exit your base shell, since source does not create another process (instance).
Improved on Tzunghsing's answer, with clearer results and error redirection for silent usage:
#!/usr/bin/env bash
echo -e "Testing..."
if [ "X$(basename $0 2>/dev/null)" = "X$(basename $BASH_SOURCE)" ]; then
echo "***** You are Executing $0 in a sub-shell."
exit 0
else
echo "..... You are Sourcing $BASH_SOURCE in this terminal shell."
return 0
fi
echo "This should never be seen!"
Or if you want to put this into a silent function:
function sExit() {
# Safe Exit from script, not closing shell.
[ "X$(basename $0 2>/dev/null)" = "X$(basename $BASH_SOURCE)" ] && exit 0 || return 0
}
...
# ..it has to be called with an error check, like this:
sExit && return 0
echo "This should never be seen!"
Please note that:
if you have enabled errexit in your script (set -e) and you return N with N != 0, your entire script will exit instantly. To see all your shell settings, use set -o.
when used in a function, the 1st return 0 is exiting the function, and the 2nd return 0 is exiting the script.
if your terminal emulator doesn't have -hold, you can sanitize a sourced script and hold the terminal with:
#!/bin/sh
sed "s/exit/return/g" script >/tmp/script
. /tmp/script
read
otherwise you can use $TERM -hold -e script
If a command succeeds, its return value will be 0. We can check the return value afterwards.
Is there a “goto” statement in bash?
Here is a dirty workaround using trap, which jumps only backwards.
#!/bin/bash
set -eu
trap 'echo "E: failed with exitcode $?" 1>&2' ERR
my_function () {
if git rev-parse --is-inside-work-tree > /dev/null 2>&1; then
echo "this is run"
return 0
else
echo "fatal: not a git repository (or any of the parent directories): .git"
goto trap 2> /dev/null # "goto" is not a real command; its failure fires the ERR trap and, with set -e, exits
fi
}
my_function
echo "Command succeeded" # If my_function failed this line is not printed
Related:
https://stackoverflow.com/a/19091823/2402577
How to use $? and test to check function?
I couldn't find a solution, so for those who want to leave the nested script without leaving the terminal window:
# this is just a script which goes to a directory if the path satisfies a regex
wpr(){
leave=false
pwd=$(pwd)
if [[ "$pwd" =~ ddev.*web ]]; then
# echo "you're in a wordpress installation"
wpDir=$(echo "$pwd" | grep -o '.*\/web')
cd "$wpDir"
return
fi
echo 'please be in wordpress directory'
# to leave from outside the scope
leave=true
return
}
wpt(){
# nested function which sets the $leave variable
wpr
# interrupts the script if $leave is true
if $leave; then
return;
fi
echo 'here is the rest of the script, which executes if $leave is not true'
}
I have no idea whether this is useful for you or not, but in zsh, you can exit a script, but only to the prompt if there is one, by using parameter expansion on a variable that does not exist, as follows.
${missing_variable_ejector:?}
Though this does create an error message in your script, you can prevent it with something like the following.
{ ${missing_variable_ejector:?} } 2>/dev/null
1) exit 0 will exit the script indicating success.
2) exit 1 will exit the script indicating failure.
You can try the above two based on your requirements.
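For completeness, a small sketch (the child script and its "ok" argument are invented for illustration) of how a caller observes those exit codes through $?:

```shell
#!/bin/bash
# The child script exits 0 on success and 1 on failure; the caller
# reads that status from $? immediately after each run.
cat > /tmp/child.sh <<'EOF'
#!/bin/sh
[ "$1" = "ok" ] && exit 0
exit 1
EOF
chmod +x /tmp/child.sh

/tmp/child.sh ok
echo "first run exited with $?"    # first run exited with 0

/tmp/child.sh bad
echo "second run exited with $?"   # second run exited with 1
```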

Bash script to capture input, run commands, and print to file

I am trying to do a homework assignment and it is very confusing. I am not sure if the professor's example is in Perl or bash, since it has no header. Basically, I just need help with the meat of the problem: capturing the input and outputting it. Here is the assignment:
In the session, provide a command prompt that includes the working directory, e.g.,
$./logger/home/it244/it244/hw8$
Accept user’s commands, execute them, and display the output on the screen.
During the session, create a temporary file “PID.cmd” (PID is the process ID) to store the command history in the following format (index: command):
1: ls
2: ls -l
If the script is aborted by CTRL+C (signal 2), output a message “aborted by ctrl+c”.
When you quit the logging session (either by “exit” or CTRL+C),
a. Delete the temporary file
b. Print out the total number of the commands in the session and the numbers of successful/failed commands (according to the exit status).
Here is my code so far (which did not go well, I would not try to run it):
#!/bin/sh
trap 'exit 1' 2
trap 'ctrl-c' 2
echo $(pwd)
while true
do
read -p command
echo "$command:" $command >> PID.cmd
done
Currently when I run this script I get
command read: 10: arg count
What is causing that?
======UPDATE=========
OK, I made some progress. It's not quite working all the way; it doesn't like my bashtrap or incremental index:
#!/bin/sh
index=0
trap bashtrap INT
bashtrap(){
echo "CTRL+C aborting bash script"
}
echo "starting to log"
while :
do
read -p "command:" inputline
if [ $inputline="exit" ]
then
echo "Aborting with Exit"
break
else
echo "$index: $inputline" > output
$inputline 2>&1 | tee output
(( index++ ))
fi
done
This can be achieved in bash or perl or others.
Some hints to get you started in bash :
question 1 : command prompt /logger/home/it244/it244/hw8
1) make sure of the prompt format in the user .bashrc setup file: see PS1 data for debian-like distros.
2) cd into that directory within your bash script.
question 2 : run the user command
1) get the user input
read -p "command : " input_cmd
2) run the user command to STDOUT
bash -c "$input_cmd"
3) Track the user input command exit code
echo $?
Should exit with "0" if everything worked fine (you can also find exit codes in the command man pages).
4) Track the command PID if the exit code is OK
echo $$ >> /tmp/pid_Ok
But take care the question is to keep the user command input, not the PID itself as shown here.
5) trap on exit
see man trap, as you misunderstood its use: you may create a function that is called on the caught exit or CTRL+C signals.
6) increment the index in your while loop (on the exit code condition)
index=0
while ...
do
...
((index++))
done
I guess you have enough to start your homework.
Since the example posted used sh, I'll use that in my reply. You need to break down each requirement into its specific lines of supporting code. For example, in order to "provide a command prompt that includes the working directory" you need to actually print the current working directory as the prompt string for the read command, not set the $PS1 variable. This leads to a read command that looks like:
read -p "`pwd -P`\$ " _command
(I use leading underscores for private variables - just a matter of style.)
Similarly, the requirement to do several things on either a trap or a normal exit suggests a function should be created which could then either be called by the trap or to exit the loop based on user input. If you wanted to pretty-print the exit message, you might also wrap it in echo commands and it might look like this:
_cleanup() {
rm -f $_LOG
echo
echo $0 ended with $_success successful commands and $_fail unsuccessful commands.
echo
exit 0
}
So after analyzing each of the requirements, you'd need a few counters and a little bit of glue code such as a while loop to wrap them in. The result might look like this:
#!/bin/sh
# Define a function to call on exit
_cleanup() {
# Remove the log file as per specification #5a
rm -f $_LOG
# Display success/fail counts as per specification #5b
echo
echo $0 ended with $_success successful commands and $_fail unsuccessful commands.
echo
exit 0
}
# Where are we? Get absolute path of $0
_abs_path=$( cd -P -- "$(dirname -- "$(command -v -- "$0")")" && pwd -P )
# Set the log file name based on the path & PID
# Keep this constant so the log file doesn't wander
# around with the user if they enter a cd command
_LOG=${_abs_path}/$$.cmd
# Print ctrl+c msg per specification #4
# Then run the cleanup function
trap "echo aborted by ctrl+c;_cleanup" 2
# Initialize counters
_line=0
_fail=0
_success=0
while true
do
# Count lines to support required logging format per specification #3
((_line++))
# Set prompt per specification #1 and read command
read -p "`pwd -P`\$ " _command
# Echo command to log file as per specification #3
echo "$_line: $_command" >>$_LOG
# Arrange to exit on user input with value 'exit' as per specification #5
if [[ "$_command" == "exit" ]]
then
_cleanup
fi
# Execute whatever command was entered as per specification #2
eval $_command
# Capture the success/fail counts to support specification #5b
_status=$?
if [ $_status -eq 0 ]
then
((_success++))
else
((_fail++))
fi
done