I've always developed my shell scripts using parameters, on a daily basis and even when developing automation scripts. However, recently I've tried a different approach: exporting environment variables to my scripts.
#!/bin/bash
: ${USER?"Requires USER"}
: ${FIRST_NAME?"Requires FIRST_NAME"}
: ${LAST_NAME?"Requires LAST_NAME"}
: ${EMAIL?"Requires EMAIL"}
set -x
setup_git_account(){
su - "${USER}" -c "git config --global user.name '${FIRST_NAME} ${LAST_NAME}'"
su - "${USER}" -c "git config --global user.email '${EMAIL}'"
}
setup_git_account
This keeps the code smaller, makes it easy to check that all the required variables are initialized, and also gives a better understanding of what the script is doing, since all the variables are declared outside.
export USER='john' && export FIRST_NAME='John' && export LAST_NAME='Doe' && export EMAIL='john.doe@email.com' && setup_git_account.sh
Which could be represented like this if implemented with receiving parameters:
setup_git_account.sh --user 'john' --firstname 'John' --lastname 'Doe' --email 'john.doe@email.com'
However, the latter would need many more lines of code to implement the getopts case statement, validate the passed parameter values, etc.
Anyway, I know we're used to the second approach, but I think the first one also has several benefits. I would like to hear from you whether there are any downsides to either of the presented approaches, and which one I should be using.
Thanks!
A bit off-topic: the invocation syntax with environment variables in bash can be shorter, with no need for exports:
USER='john' FIRST_NAME='John' LAST_NAME='Doe' EMAIL='john.doe@email.com' setup_git_account.sh
None of your values is optional; I would just use positional parameters.
: ${1?"Requires USER"}
: ${2?"Requires FIRST_NAME"}
: ${3?"Requires LAST_NAME"}
: ${4?"Requires EMAIL"}
sudo -u "$1" git config --global user.name "$2 $3" user.email "$4"
Providing the way for the user to specify values in an arbitrary order is just an unnecessary complication.
You would simply call the script with
setup_git_account.sh 'john' 'John' 'Doe' 'john.doe@email.com'
Reconsider whether the first and last names need to be separate arguments. They are combined into a single argument to git config by the script anyway; just take the name as a single argument as well.
setup_git_account.sh 'john' 'John Doe' 'john.doe@email.com'
(with the appropriate changes to the script as necessary).
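A minimal sketch of that three-argument version, assuming the same git config calls as in the original script, could be:
#!/bin/bash
# Sketch only: setup_git_account.sh USER 'FULL NAME' EMAIL
: "${1?Requires USER}"
: "${2?Requires FULL_NAME}"
: "${3?Requires EMAIL}"
sudo -u "$1" git config --global user.name "$2"
sudo -u "$1" git config --global user.email "$3"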
I never use your approach. I see no drawbacks to using parameters. Passing parameters is the common way, and if you use long options they are self-descriptive.
In my opinion, environment variables are a solution if you need the same data in different scripts.
You may also have problems running such a script on systems where you are not allowed to change the environment.
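As a small illustration of that use case (the file and variable names here are made up): several scripts can source one shared environment file instead of each taking the same parameters:
# settings.env - hypothetical shared file
export GIT_USER='john'
export GIT_EMAIL='john.doe@email.com'
# in any script that needs the data:
. /path/to/settings.env
echo "Configuring git for ${GIT_USER} <${GIT_EMAIL}>"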
I've parameterized your variables using a guide I wrote a while back and even added --help.
This solution accepts environment variables as well as options (which will trump the variables):
while getopts e:f:hl:u:-: arg; do
case "$arg" in
e ) EMAIL="$OPTARG" ;;
f ) FIRST_NAME="$OPTARG" ;;
h ) do_help ;;
l ) LAST_NAME="$OPTARG" ;;
u ) USER_NAME="$OPTARG" ;;
- ) LONG_OPTARG="${OPTARG#*=}"
case $OPTARG in
email=?* ) EMAIL="$LONG_OPTARG" ;;
first*=?* ) FIRST_NAME="$LONG_OPTARG" ;;
help* ) do_help ;;
last*=?* ) LAST_NAME="$LONG_OPTARG" ;;
user=?* ) USER_NAME="$LONG_OPTARG" ;;
* ) echo "Illegal option/missing argument: --$OPTARG" >&2; exit 2 ;;
esac ;;
* ) exit 2 ;; # error messages for short options already given by getopts
esac
done
shift $((OPTIND-1))
HELP=" - see ${0##*/} --help"
: ${USER_NAME?"Requires USER_NAME$HELP"}
: ${FIRST_NAME?"Requires FIRST_NAME$HELP"}
: ${LAST_NAME?"Requires LAST_NAME$HELP"}
: ${EMAIL?"Requires EMAIL$HELP"}
su - "$USER_NAME" -c "git config --global user.name '$FIRST_NAME $LAST_NAME'"
su - "$USER_NAME" -c "git config --global user.email '$EMAIL'"
Note that I changed $USER to $USER_NAME to avoid conflicts with your local environment ($USER is your user name on your local Linux system!)
You can also extract the user's full name from the system:
FULL_NAME="$(getent passwd |awk -v u="$USER_NAME" -F: '$1 == u { print $5 }')"
(I see no reason to separate FIRST_NAME and LAST_NAME; what do you do for Jean Claude Van Damme? They're only used together anyway. Also note that not all users will have full names in the passwd file.)
This uses do_help to show the --help output. Here's an example of how that could look (I'd put this at the very top of the script so somebody just reading it can get the synopsis; it's not in the above code block because I wanted to prevent the block from getting a scroll bar):
do_help() { cat <</help
Usage: ${0##*/} [OPTIONS]
-u USER_NAME, --user=USER_NAME
-f FIRST_NAME, --firstname=FIRST_NAME
-l LAST_NAME, --lastname=LAST_NAME
-e EMAIL, --email=EMAIL
Each option may also be passed through the environment as e.g. \$EMAIL
Code taken from https://stackoverflow.com/a/41515444/519360
/help
}
So, I'm writing a bash script that doesn't give me any output.
The script is:
a) going to detect which operating system is running
b) and know which package manager to use among APT, DNF and Pacman.
Further in the script it is:
a) going to choose the correct package manager to use when installing both Docker and Docker-Compose.
I have written down the MOTD function that should show a message on my Ubuntu server.
I'm creating a function that adds users to a docker group.
I'm configuring the Docker daemon to set a specific MTU value of 1442 and to set up logging.
The problem is that I don't get any output, other than the MTU value, which is actually 1442 and seems correct in my script.
Further, I should get a prompt where I can input a user that will be added to the docker group.
#!/bin/bash
# This script will install Docker and Docker-Compose, configure the Docker daemon,
# and add specified users to the docker group.
# Define default values
MTU=1442
VERBOSE=false
# Function to detect operating system
detect_os() {
if [ -f /etc/lsb-release ]; then
os="ubuntu"
package_manager="apt"
elif [ -f /etc/redhat-release ]; then
os="centos"
package_manager="dnf"
elif [ -f /etc/arch-release ]; then
os="arch"
package_manager="pacman"
else
echo "Error: Unable to detect operating system."
exit 1
fi
}
# Function to update MOTD
update_motd() {
local motd_file="/etc/motd"
echo "$1" > "$motd_file"
echo "MOTD updated with message: $1"
}
# Function to add users to docker group
add_users() {
local users="$1"
local group="docker"
for user in $users; do
# Check if user exists
if ! id "$user" >/dev/null 2>&1; then
useradd "$user"
echo "User $user created."
fi
# Add user to docker group
usermod -aG "$group" "$user"
echo "User $user added to $group group."
done
}
# Function to install Docker and Docker-Compose
install_docker() {
local package_manager="$1"
local packages="docker docker-compose"
case "$package_manager" in
apt)
sudo apt-get update
sudo apt-get install -y $packages
;;
dnf)
sudo dnf install -y $packages
;;
pacman)
sudo pacman -S --noconfirm $packages
;;
*)
echo "Error: Invalid package manager: $package_manager"
exit 1
;;
esac
}
# Function to configure Docker daemon
configure_docker() {
local mtu="$1"
local config_file="/etc/docker/daemon.json"
# Create config file if it does not exist
if [ ! -f "$config_file" ]; then
sudo touch "$config_file"
sudo chmod 644 "$config_file"
fi
# Update MTU value in config file
sudo sh -c "echo '{\"mtu\": $mtu}' > $config_file"
echo "Docker daemon configured with MTU=$mtu."
}
# Parse command line arguments
while [ "$#" -gt 0 ]; do
case "$1" in
--motd )
MOTD="$2"
shift 2
;;
--users)
USERS="$2"
shift 2
;;
--mtu)
MTU="$2"
shift 2
;;
esac
done
echo "MOTD: $MOTD"
echo "USERS: $USERS"
echo "MTU: $MTU"
echo "Script is finish"
The output doesn't show me anything more than MTU=1442; the users and MOTD are missing.
I'm not sure if I was clear enough, but from my project description I thought my script was correct; probably I'm missing some logic in some places in my script. The project's tasks are described above, but I'm not sure if I'm on the right track here.
Would appreciate any suggestions for improving my script :)
This is not a full fix of your script, since I'm sure you don't want to cheat on your project but rather to understand why your script doesn't produce the expected output, so that you can develop it on your own.
Here I'm pasting a small script that may help you better understand the basic usage of functions in Bash. Hope it will help 🤞.
#!/bin/bash
### Defining function - Functions are reusable code blocks in the script and can accept arguments while calling them.
# So each time we call an individual function later in the script we may pass different arguments to it (if needed).
my_function1(){
echo "this is a function that doesn't expect any arguments."
echo "End of 'my_function1'"
}
my_function2(){
echo "this is a function that do expect an argument."
echo "this function expects one argument to print/echo it."
echo "Hello ${1}" # <-- Numerical Variables ($1 $2 .. $N) are reserved variables in 'BASH' which values are assigned from the relevant argument(s) provided to them on script runtime and function runtime.
echo "End of 'my_function2'"
}
my_function3(){
echo "this is a function that expect one or more arguments."
echo "this function print/echo all arguments passed to it."
echo "Hi ${#}"
echo "End of 'my_function3'"
}
### Calling the functions to execute their code - we may pass relevant argument(s) to them.
# This is done by using the function name - and any parameter/string added after the function name will be passed to it as the function's argument accordingly.
# Running the `my_function1` without providing any arguments - since it is not necessary.
my_function1
# Print an empty line to separate outputs
echo ""
# Running the `my_function2` passing it a name as argument. Ex. Vegard
my_function2 Vegard
# Print an empty line to separate outputs
echo ""
# Running the `my_function3` passing it a `name` as first argument and a `LAST_NAME` as second argument. Ex. Vegard YOUR_LASTNAME
my_function3 Vegard YOUR_LASTNAME
# Print an empty line to separate outputs
echo ""
### End of the script.
# Exiting the script with the `0` exit-code.
exit 0
Bonus Update #1
How to provide arguments to a script at run time:
You can provide arguments to the scripts almost in the same way as providing arguments to the functions.
Assuming the script file name is script.sh, it is located in our current working directory, and it is executable:
NAME - as first argument.
LAST_NAME - as second argument.
Run the script as follows:
./script.sh NAME LAST_NAME
Bonus Update #2
How to provide dynamic arguments to a function from the script at run time:
If you need to provide a dynamic argument to a function at runtime instead of hard-coding argument(s) for that function, you may use the same reserved numeric variables principle.
Simple example
Consider you run your script providing some arguments that can change on every run.
./script.sh firstarg secondarg "last arg"
Note: if a single argument contains a space character, it should be quoted so it is not detected as separate arguments. The same applies to providing arguments to functions.
Summing up: these arguments can be referenced via the $1 $2 .. $N variables anywhere in the script outside the functions' code blocks.
${@} or ${*} will get all the provided arguments (google to find their difference), and ${#} gives the number of arguments.
Consider you have defined functions that work with one or more arguments.
#!/bin/bash
my_function(){
# Since this $1 is defined in the function's block itself, it
# will get its value from the argument provided to the function
# at run-time, not directly from the arguments provided to the script!
echo "Argument I got is: ${1}"
}
my_other_function(){
# Printing the first three arguments provided to the function,
# delimited by colons.
echo "Arguments I got are: ${1} : ${2} : ${3}"
}
another_function(){
# $* will expand to all the arguments provided to the function
# at run-time, not directly from the arguments provided to the script!
echo "All arguments got are: ${*}"
}
### Now at calling those functions
# Providing a static argument
my_function STATIC_ARGUMENT
# Passing the First argument provided to the script at run-time to the same function.
my_function "${1}"
# Passing the Three arguments provided to the script at run-time to this function.
my_other_function "${1}" "${2}" "${3}"
# Passing all the provided arguments of the script to this function at run-time.
another_function "${@}"
Summary
The same reserved numeric variables that are used to refer to the arguments passed to the script can be passed to a function when calling it, and in the same manner the function's arguments can be referenced from within the function block.
Caution
The behavior of a script that deals with arguments containing spaces or other special characters may vary, since Bash treats them differently.
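A tiny demonstration of that caution (a sketch; the script name is made up):
#!/bin/bash
count_args(){
echo "I received ${#} argument(s)"
}
count_args $1      # unquoted: an argument containing spaces is split into several words
count_args "$1"    # quoted: the argument is kept as a single word
# Running ./demo.sh "Jean Claude" prints 2 for the first call and 1 for the second.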
I have a global npm package, provided by a third party, that generates a report and sends it to a server.
in_report generate -date 20221211
And I want to let a group of users check whether the report has already been generated, in order to prevent duplication. Therefore, I want to run a shell script before executing the in_report command.
sh check.sh && in_report generate -date 20221211
But the problem is that I don't want to change the command they use to generate the report. I can apply a patch on their PCs (I'm able to change the env, PATH, etc.).
Is it possible to run sh check.sh && in_report generate -date 20221211 by running in_report generate -date 20221211?
If this "in_report" is only used for this exact purpose, you can create an alias by putting the following line at the end of the ".bashrc" or ".bash_aliases" file that is used by the people who will need to run in_report :
alias in_report='sh check.sh && in_report'
See https://doc.ubuntu-fr.org/alias for details.
If in_report is to be used in other ways too, this is not the solution. In that case, you may want to call it directly inside check.sh if a certain set of conditions on the parameters is matched. To do that:
alias in_report='bash check.sh'
The content of check.sh:
#!/bin/bash
if [[ $# -eq 3 && "$1" == "generate" && "$2" == "-date" && "$3" == "20"* ]] # Assuming that all your dates must be in the 21st century
then
if [[ some test to check that the report has not been generated yet ]]
then
/full/path/to/the/actual/in_report "$@" # WARNING: be sure that nobody will move the actual in_report to another path
else
echo "This report already exists"
fi
else
/full/path/to/the/actual/in_report "$@"
fi
This sure isn't ideal, but it should work. By far the easiest and most reliable solution, if applicable, would be to skip the aliasing altogether and tell those who will use in_report to run your check.sh instead (with the same parameters they would pass to in_report); then you can directly call in_report instead of /full/path/to/the/actual/in_report.
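A sketch of that simpler check.sh (the duplicate-report test and path are placeholders; with the call shown above, $3 would be the date):
#!/bin/bash
# check.sh - run the duplicate check, then call in_report with the same arguments
# Placeholder test: replace the path with however you detect an already-generated report.
if [ -e "/path/to/reports/report-$3.txt" ]; then
echo "This report already exists" >&2
exit 1
fi
in_report "$@"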
Sorry if this was not very clear. In that case, feel free to ask.
On most modern Linux distros the easiest would be to place a shell script that defines a function in /etc/profile.d, e.g. /etc/profile.d/my_report, with the following content:
function in_report() { sh check.sh && /path/to/in_report "$@"; }
That way it gets automatically placed in people's environment when they log in.
The /path/to is important so the function doesn't call itself recursively.
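If you would rather not hard-code the path, bash's command builtin can be used instead to bypass the function lookup (a sketch):
function in_report() { sh check.sh && command in_report "$@"; }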
A cursory glance through the documentation for the Mac suggests that you may want to edit /etc/bashrc or /etc/zshrc respectively.
There are two releases:
1. Dev, available at https://example.com/foo/new-package.txt
2. GA, available at https://example.com/bar/new-package.txt
I want the user to enter their choice of Dev or GA and, based on that, I need to download the file. Is there a better way to do this in a shell script?
There is a file which has environment variables that I'm sourcing inside another script.
env_var.sh
#!/bin/bash
echo "Enter your release"
export release='' #either Dev or GA
This file will be sourced from another script as
download.sh
#!/bin/bash
. ./env_var.sh #sourcing an environment var file
wget https://<Dev or GA URL>/new-package.txt
My main problem is how to set the wget URL based on the release set in the env_var file.
Any help is appreciated.
Have you considered using read to get the user input?
read -p 'Selection: ' choice
You could then pass ${choice} to a function that has case statements for the urls:
get_url() {
case $1 in
'dev' ) wget https://example.com/foo/new-package.txt ;;
'ga' ) wget https://example.com/bar/new-package.txt ;;
* ) echo "Invalid choice" ;;
esac
}
For more information on read, a good reference is TLDP's guide on user input.
Edit: To source a config file, run the command source ${PATH_TO_FILE}. You would then be able to pass the variable to the get_url() function for the same result.
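Putting it together, download.sh might look something like this (a sketch; it assumes env_var.sh sets $release to dev or ga, e.g. via read):
#!/bin/bash
. ./env_var.sh   # expected to set $release to 'dev' or 'ga'
get_url() {
case $1 in
'dev' ) wget https://example.com/foo/new-package.txt ;;
'ga' ) wget https://example.com/bar/new-package.txt ;;
* ) echo "Invalid choice: $1" >&2; exit 1 ;;
esac
}
get_url "$release"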
In a Unix shell, git clone <url> will prompt the user for a username, then a password.
I defined $username and $password variables.
How can I pass the two variables to the command, in order?
I have tried
echo $password | echo $username | git clone <url>
, which did not work.
There are several ways you can do this. What you probably should do, because it's more secure, is use a configuration where the script doesn't have to contain and pass the username and password. For example, you could set up ssh, or you could use a credential helper. (Details depend on your environment, so I'd recommend searching for existing questions and answers re: how to set those up.)
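For instance, one common credential-helper setup (shown only as an example; whether it fits depends on your environment) is git's built-in cache helper, which remembers credentials in memory for a while after you have entered them once:
# keep entered credentials cached in memory for one hour
git config --global credential.helper 'cache --timeout=3600'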
If you still want to have the script pass the values, you basically have two choices: You can use a form of the git command that takes the values on the command line (see brokenfoot's answer), or you can pass the values on STDIN (which is the approach you're attempting, but doesn't work quite the way you're attempting it).
When you use |, you're sending the "standard output" of the command on the left to the "standard input" of the command on the right. So when you chain commands like you show, the first echo is sending output to the second echo - which ignores it. That's not what you want.
You would need a single command that outputs the username, an end-of-line character, the password, and another end-of-line character. That's not easy to do with echo (at least, not portably). You could do something like
git clone <url> <<EOF
$username
$password
EOF
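If you want that as a single command rather than a here-document, printf can emit both lines (again, whether git actually reads these prompt answers from standard input depends on your setup):
printf '%s\n' "$username" "$password" | git clone <url>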
Let me pretend the question is neither git-related nor security-related, and my answer to the literal question "How to pass two variables to a program" is:
( echo "$username"; echo "$password" ) | git clone 'url'
That is, just output two strings separated by a newline (echo adds the newline); or do it in one
call to echo:
echo "$username
$password" | git clone 'url'
You can pass the variables like so:
username="xyz"
password="123"
echo "git clone https://$username:$password#github.com/$username/repository.git"
Output:
git clone https://xyz:123@github.com/xyz/repository.git
I'm writing a cPanel postwwwact script; if you're not familiar with it, it's run after a new account is created. It relies on the user account variable being passed to the script, which I then use for various things (creating databases, etc.). However, I can't seem to find the right way to access the variable I want. I'm not that good with shell scripts, so I'd appreciate some advice. I had read somewhere that the value I wanted would be included in $ARGV{'user'}, but this simply gives "root" as opposed to the value I need. I've tried looping through all the arguments (list of arguments here) like this:
#!/bin/sh
for var
do
touch /root/testvars/$var
done
and the value I want is in there, I'm just not sure how to accurately target it. There's info here on doing this with PHP or Perl, but I have to do this as a shell script.
EDIT Ideally I would like to be able to refer to the variable by something other than $1 or $2, etc., as this would create issues if an argument is added or removed
...for example, in the PHP code here:
function argv2array ($argv) {
$opts = array();
$argv0 = array_shift($argv);
while(count($argv)) {
$key = array_shift($argv);
$value = array_shift($argv);
$opts[$key] = $value;
}
return $opts;
}
// allows you to do the following:
$opts = argv2array($argv);
echo $opts['user'];
Any ideas?
The parameters are passed to your script as a hash:
/scripts/$hookname user $user password $password
You can use associative arrays in Bash 4, or in earlier versions of Bash you can use built-up variable names.
#!/bin/bash
# Bash >= 4
declare -A argv
for ((i=1;i<=$#;i+=2))
do
argv[${@:i:1}]="${@:$((i+1)):1}"
done
echo ${argv['user']}
Or
#!/bin/bash
# Bash < 4
for ((i=1;i<=$#;i+=2))
do
declare ARGV${@:i:1}="${@:$((i+1)):1}"
done
echo ${!ARGV*} # outputs all variable names that begin with ARGV
echo $ARGVuser
Running either:
$ ./argvtest user dennis password secret
dennis
Note: you can also use shift to step through the arguments, but it's destructive and the methods above leave $@ ($1, $2, etc.) in place.
#!/bin/bash
# Bash < 4
# using shift (can use in Bash 4, also)
for ((i=1;i<=$#+2;i++))
do
declare ARGV$1="$2"
# Bash 4: argv[$1]="$2"
shift 2
done
echo ${!ARGV*}
echo $ARGVuser
If it's passed as a command-line parameter to the script, it's available as $1 if it's the first parameter, $2 for the second, and so on.
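A tiny illustration (hypothetical script name):
#!/bin/sh
# show_args.sh (hypothetical) - run as: ./show_args.sh alice secret
echo "first parameter: $1"    # prints: first parameter: alice
echo "second parameter: $2"   # prints: second parameter: secret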
Why not start off your script with something like
ARG_USER=$1
ARG_FOO=$2
ARG_BAR=$3
And then later in your script refer to $ARG_USER, $ARG_FOO and $ARG_BAR instead of $1, $2, and $3. That way, if you decide to change the order of arguments, or insert a new argument somewhere other than at the end, there is only one place in your code that you need to update the association between argument order and argument meaning.
You could even do more complex processing of $* to set your $ARG_WHATEVER variables, if it's not always going to be the case that all of them are specified in the same order every time.
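If the arguments arrive as key/value pairs (as the hook seems to pass them), a sketch of such processing, with made-up variable names, could be:
# Walk the "key value key value ..." argument list and map known keys to named variables.
while [ $# -gt 1 ]; do
case "$1" in
user) ARG_USER=$2 ;;
password) ARG_PASSWORD=$2 ;;
esac
shift 2
done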
You can do the following:
#!/bin/bash
for var in "$@"; do
<do whatever you want with $var>
done
And then, invoke the script as:
$ /path/to/script param1 arg2 item3 item4 etc