Missing output for MOTD, MTU, and users in the docker group - Linux
So, I'm writing a bash script that doesn't give me any output.
The script is supposed to:
a) detect which operating system is running
b) know which package manager to use between APT, DNF, and Pacman.
Further on, the script should:
a) choose the correct package manager to use when installing both Docker and Docker-Compose.
I have written a MOTD function that should show a message on my Ubuntu server.
I'm creating a function that adds users to a docker group.
I'm also configuring the Docker daemon to set a specific MTU value of 1442 and to handle logging.
The problem is that I don't get any output, apart from the MTU value, which is actually 1442 and seems correct in my script.
Furthermore, I should get a prompt where I can enter a user that will then be added to the docker group.
#!/bin/bash
# This script will install Docker and Docker-Compose, configure the Docker daemon,
# and add specified users to the docker group.

# Define default values
MTU=1442
VERBOSE=false

# Function to detect operating system
detect_os() {
    if [ -f /etc/lsb-release ]; then
        os="ubuntu"
        package_manager="apt"
    elif [ -f /etc/redhat-release ]; then
        os="centos"
        package_manager="dnf"
    elif [ -f /etc/arch-release ]; then
        os="arch"
        package_manager="pacman"
    else
        echo "Error: Unable to detect operating system."
        exit 1
    fi
}

# Function to update MOTD
update_motd() {
    local motd_file="/etc/motd"
    echo "$1" > "$motd_file"
    echo "MOTD updated with message: $1"
}

# Function to add users to docker group
add_users() {
    local users="$1"
    local group="docker"
    for user in $users; do
        # Check if user exists
        if ! id "$user" >/dev/null 2>&1; then
            useradd "$user"
            echo "User $user created."
        fi
        # Add user to docker group
        usermod -aG "$group" "$user"
        echo "User $user added to $group group."
    done
}

# Function to install Docker and Docker-Compose
install_docker() {
    local package_manager="$1"
    local packages="docker docker-compose"
    case "$package_manager" in
        apt)
            sudo apt-get update
            sudo apt-get install -y $packages
            ;;
        dnf)
            sudo dnf install -y $packages
            ;;
        pacman)
            sudo pacman -S --noconfirm $packages
            ;;
        *)
            echo "Error: Invalid package manager: $package_manager"
            exit 1
            ;;
    esac
}

# Function to configure Docker daemon
configure_docker() {
    local mtu="$1"
    local config_file="/etc/docker/daemon.json"
    # Create config file if it does not exist
    if [ ! -f "$config_file" ]; then
        sudo touch "$config_file"
        sudo chmod 644 "$config_file"
    fi
    # Update MTU value in config file
    sudo sh -c "echo '{\"mtu\": $mtu}' > $config_file"
    echo "Docker daemon configured with MTU=$mtu."
}

# Parse command line arguments
while [ "$#" -gt 0 ]; do
    case "$1" in
        --motd)
            MOTD="$2"
            shift 2
            ;;
        --users)
            USERS="$2"
            shift 2
            ;;
        --mtu)
            MTU="$2"
            shift 2
            ;;
    esac
done

echo "MOTD: $MOTD"
echo "USERS: $USERS"
echo "MTU: $MTU"
echo "Script is finished"
The output doesn't show me anything more than MTU=1442; the users and the MOTD are missing.
I'm not sure if I was clear enough, but I thought my script was correct for my project. I'm probably missing some logic somewhere in the script. The project's tasks are described above, but I'm not sure if I'm on the right track here.
Would appreciate any suggestions for my script :)
This is not a full fix of your script, since I'm sure you don't want to cheat on your project, but rather to understand why your script doesn't produce the expected output, so that you can develop it on your own.
Here I'm pasting a small script that may help you better understand the basic usage of functions in Bash. Hope it helps 🤞.
#!/bin/bash
### Defining functions - functions are reusable code blocks in the script and can accept arguments when called.
# Each time we call a function later in the script, we may pass different arguments to it (if needed).

my_function1(){
    echo "this is a function that doesn't expect any arguments."
    echo "End of 'my_function1'"
}

my_function2(){
    echo "this is a function that does expect an argument."
    echo "this function expects one argument to print/echo."
    echo "Hello ${1}" # <-- Positional variables ($1 $2 .. $N) are reserved variables in Bash whose values are assigned from the corresponding argument(s) provided at script runtime and function runtime.
    echo "End of 'my_function2'"
}

my_function3(){
    echo "this is a function that expects one or more arguments."
    echo "this function prints/echoes all arguments passed to it."
    echo "Hi ${*}" # <-- ${*} expands to all arguments; ${#} would give only the argument count.
    echo "End of 'my_function3'"
}

### Calling the functions to execute their code - we may pass relevant argument(s) to them.
# This is done by using the function name - any parameter/string added after the function name is passed to it as the function's argument accordingly.

# Running `my_function1` without providing any arguments - since none are necessary.
my_function1

# Print an empty line to separate outputs
echo ""

# Running `my_function2`, passing it a name as argument. Ex. Vegard
my_function2 Vegard

# Print an empty line to separate outputs
echo ""

# Running `my_function3`, passing it a `NAME` as first argument and a `LAST_NAME` as second argument. Ex. Vegard YOUR_LASTNAME
my_function3 Vegard YOUR_LASTNAME

# Print an empty line to separate outputs
echo ""

### End of the script.
# Exiting the script with the `0` exit code.
exit 0
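Relating this back to your own script: defining a function produces no output by itself; the function body only runs when the function is called by name. A minimal sketch (using a made-up `greet_motd` function, not your real `update_motd`):

```shell
#!/bin/bash
# Defining a function prints nothing on its own:
greet_motd() {
    echo "MOTD: $1"
}

# Nothing has been printed so far. This explicit call is what
# actually executes the function body:
greet_motd "Welcome to the server"
```

If a script defines functions but never calls them, it will simply exit silently.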
Bonus Update #1
How to provide arguments to a script at run time:
You can provide arguments to a script in almost the same way as you provide arguments to functions.
Assuming the script file name is script.sh, it is located in our current working directory, and it is executable:
NAME - as first argument.
LAST_NAME - as second argument.
Run the script as follows:
./script.sh NAME LAST_NAME
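Inside script.sh, those two arguments can then be read from $1 and $2. A small self-contained sketch (here `set --` is used only to simulate the command-line arguments, so you can paste and run the snippet directly):

```shell
#!/bin/bash
# Simulate running:  ./script.sh NAME LAST_NAME
set -- NAME LAST_NAME

echo "First argument:  $1"
echo "Second argument: $2"
echo "Argument count:  $#"
```

In a real script you would drop the `set --` line and pass the arguments on the command line instead.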
Bonus Update #2
How to provide dynamic arguments to a function at script run time:
If you need to provide a dynamic argument to a function at runtime, instead of hard-coding its argument(s), you can use the same reserved positional-variables principle.
Simple example
Consider running your script with arguments that may change on every run:
./script.sh firstarg secondarg "last arg"
Note: if a single argument contains a space character, it should be quoted to avoid it being detected as separate arguments - the same applies to providing arguments to functions.
Sum-up: these arguments can be referenced by the $1 $2 .. $<N> variables accordingly anywhere in the script outside of function bodies.
${@} or ${*} will get all the provided arguments - google to find their difference; ${#} gives the number of arguments.
Consider you have defined functions that work with one or more arguments.
#!/bin/bash
my_function(){
    # Since this $1 is used inside the function's block, it
    # gets its value from the argument provided to the function
    # at call time, not directly from the arguments provided to the script!
    echo "Argument I got is: ${1}"
}

my_other_function(){
    # Printing the first three arguments provided to the function,
    # delimited by colons.
    echo "Arguments I got are: ${1} : ${2} : ${3}"
}

another_function(){
    # ${*} expands to all the arguments provided to the function
    # at call time, not directly from the arguments provided to the script!
    echo "All arguments got are: ${*}"
}

### Now calling those functions
# Providing a static argument
my_function STATIC_ARGUMENT
# Passing the first argument provided to the script at run time to the same function.
my_function "${1}"
# Passing the three arguments provided to the script at run time to this function.
my_other_function "${1}" "${2}" "${3}"
# Passing all the provided arguments of the script to this function at run time.
another_function "${@}"
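Since ${#}, ${*}, and ${@} are easy to mix up, here is a short sketch of the difference (`set --` is used only to simulate script arguments, and the two functions are made up for demonstration):

```shell
#!/bin/bash
count_args() { echo "count: $#"; }   # $# is the number of arguments
list_args()  { echo "all: $*"; }     # $* joins all arguments into one string

# Simulate running:  ./script.sh one two "three four"
set -- one two "three four"

count_args "$@"   # "$@" forwards each argument intact, so the quoted
list_args  "$@"   # "three four" still counts as a single argument
```

Quoting matters here: forwarding with an unquoted $@ or $* would split "three four" into two separate words.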
Summary
The same reserved positional variables that are used to refer to arguments passed to the script can be passed to a function when calling it, and in the same manner the function's arguments can be referred to from within the function block.
Caution
The behavior of a script that deals with arguments containing spaces or other special characters may vary, since Bash treats them differently.
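To see that caution in practice, a small sketch comparing quoted and unquoted expansion (the `argcount` helper is made up for demonstration):

```shell
#!/bin/bash
argcount() { echo "$#"; }

value="a b c"
argcount $value    # unquoted: word splitting produces 3 separate arguments
argcount "$value"  # quoted: the whole string is 1 single argument
```

This is exactly why arguments that may contain spaces should always be quoted when passed along.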
Related
- Aliasing with variables in bash profile
- Checking cmd line argument in bash script bypass the source statement
- Bash config file or command line parameters
- How can I run a function from a script in command line?
- Bash script to capture input, run commands, and print to file