I have a Linux machine.
I need to create a bash file that calls itself multiple times and, when a certain condition is met, does some actions and exits.
For this I should execute the file itself with a param, for example:
./mybash -t 50
-t stands for the number of times I want this file to call itself; when the if statement is true, then: print, wait and exit.
So to create the bash script I wrote the following (which is not currently working):
#!/bin/bash
while getopts t: option
do
case "${option}"
in
t) TIMES=${OPTARG};;
esac
done
echo "this is the run number: $TIMES"
if [ $TIMES = 0 ]; then
#pseudo here
echo "Hello"
wait 5 seconds
echo "Done"
else
sh ./myBash.bash -t ($TIMES - 1)
fi
What seems to be the issue here?
You need to use $((...)) for shell arithmetic; ($TIMES - 1) is a syntax error.
Use sleep, not wait: wait waits for background jobs, while sleep pauses for a number of seconds.
Suggest you use [[ ... ]] instead of [ ... ] in bash.
Recurse with bash "$0" rather than hard-coding sh ./myBash.bash, since sh may not be bash.
You may use:
#!/usr/bin/env bash
while getopts t: option
do
case "${option}" in
t) TIMES=${OPTARG};;
esac
done
echo "this is the run number: $TIMES"
if [[ $TIMES -eq 0 ]]; then
#pseudo here
echo "Hello"
sleep .1
echo "Done"
else
bash "$0" -t $((TIMES - 1))
fi
You may then invoke your script as:
bash myBash.bash -t 5
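For illustration, a run with a small count should behave like this (output sketched by hand, not captured from a real shell):
bash myBash.bash -t 2
this is the run number: 2
this is the run number: 1
this is the run number: 0
Hello
Done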
Related
I have a series of 250 bash scripts which need to be executed in parallel. Each script utilizes approximately 1 core, so I do not want to execute them all at the same time. I would like to run 10 scripts at the same time and, whenever one finishes, execute another script.
I recommend parallel, but I'm going to post this monstrosity for the benefit of having people pick it apart and tune it. :)
#!/usr/bin/env bash
## TODO: this documentation block needs to be expanded.
use="
$0 <#procs> <cmdfile>
Pass the number of desired processes to prespawn as the 1st argument.
Pass the command file with the list of tasks you need done.
Command file format:
KEYSTRING:cmdlist
where KEYSTRING will be used as a unique logfile name
and cmdlist is the base command string to be run
KEYSTRING may not contain whitespace of any sort.
Other lines are not allowed, including blanks or comments.
"
die() { echo "$@ $use" >&2; exit 1; }
case $# in
2) case "$1" in
*[^0-9]*) die "INVALID #procs '$1'" ;;
esac
declare -i primer="$1" # a countdown of how many processes to pre-spawn
cmdfile="$2"
[[ -r "$cmdfile" ]] || { die "$cmdfile not readable"; }
grep -v : "$cmdfile" && { die "$cmdfile has invalid lines"; }
declare -i lines=$( grep -c : $cmdfile)
if (( lines < primer ))
then die "Note - command lines in $cmdfile ($lines) fewer than requested process chains ($primer)"
fi ;;
*) die ;;
esac >&2
trap 'echo abort $0#$LINENO; use; exit 1' ERR # make sure any error is fatal
trap ': no-op to ignore' HUP # ignore hangups (built-in nohup without explicit i/o redirection)
spawn() {
IFS="$IFS:" read key cmd && [[ "${cmd:-}" ]] || return
echo "$(date) executing '$cmd'; c.f. $key.log" | tee "$key.log"
echo "# autogenerated by $0 $(date)
{ $cmd
spawn
} >> $key.log 2>&1 &
" >| $key.sh
. $key.sh
rm -f $key.sh
return 0
}
# create a command list based on those designators
declare chains=0
while (( primer-- )) # until we've filled the requested quota
do spawn # create a child process
done < $cmdfile
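For illustration, a hypothetical cmdfile (names invented here) following the KEYSTRING:cmdlist format could be:
job1:sleep 2; echo one
job2:sleep 1; echo two
job3:echo three
Saved as, say, prespawn.sh, the script would then be invoked to keep 2 process chains busy as:
bash prespawn.sh 2 cmdfile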
I would like to put my getopt call into a function so I can make my script a bit more tidy. I've read a few guides (e.g. "Using getopts inside a Bash function"), but they seem to be for getopts, not getopt, and I cannot get my head round it.
I have the following getopt call at the start of my script
#-------------------------------------------------------------------------------
# Main
#-------------------------------------------------------------------------------
getopt_results=$( getopt -s bash -o e:h --long ENVIRONMENT:,HELP:: -- "$@" )
if test $? != 0
then
echo "Failed to parse command line unrecognized option" >&2
Usage
exit 1
fi
eval set -- "$getopt_results"
while true
do
case "$1" in
-e | --ENVIRONMENT)
ENVIRONMENT="$2"
if [ ! -f "../properties/static/build_static.${ENVIRONMENT}.properties" -o ! -f "../properties/dynamic/build_dynamic.${ENVIRONMENT}.properties" ]; then
echo "ERROR: Unable to open properties file for ${ENVIRONMENT}"
echo "Please check they exist or supply a Correct Environment name"
Usage
exit 1
else
declare -A props
readpropsfile "../properties/dynamic/dynamic.${ENVIRONMENT}.properties"
readpropsfile "../properties/static/static.${ENVIRONMENT}.properties"
fi
shift 2
;;
-h | --HELP)
Usage
exit 1
;;
--)
shift
break
;;
*)
echo "$0: unparseable option $1"
Usage
exit 1
;;
esac
done
When I put the whole lot in a function, say called parse_command_line (),
and call it with parse_command_line "$@",
my script dies because it cannot work out the parameters it was called with. I have tried making OPTIND local as per some of the guides. Any advice? Thanks.
Traditional getopt shouldn't be used at all, but the bash-aware GNU version works fine inside a function, as demonstrated below:
#!/usr/bin/env bash
main() {
local getopt_results
getopt_results=$(getopt -s bash -o e:h --long ENVIRONMENT:,HELP:: -- "$@")
eval "set -- $getopt_results" # this is less misleading than the original form
echo "Positional arguments remaining:"
if (( $# )); then
printf ' - %q\n' "$@"
else
echo " (none)"
fi
}
main "$#"
...when saved as getopt-test and run as:
./getopt-test -e foo=bar hello cruel world
...properly emits:
Positional arguments remaining:
- -e
- foo=bar
- --
- hello
- cruel
- world
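To actually consume the parsed options rather than just echoing them, the usual while/case loop can live inside the function too. A minimal sketch (with HELP simplified to take no argument):
#!/usr/bin/env bash
main() {
    local getopt_results ENVIRONMENT=""
    getopt_results=$(getopt -s bash -o e:h --long ENVIRONMENT:,HELP -- "$@") || return 1
    eval "set -- $getopt_results"
    while true; do
        case "$1" in
            -e|--ENVIRONMENT) ENVIRONMENT="$2"; shift 2 ;;
            -h|--HELP) echo "usage: $0 [-e ENV] [args...]"; return 0 ;;
            --) shift; break ;;
        esac
    done
    echo "ENVIRONMENT=$ENVIRONMENT, remaining args: $*"
}
main "$@"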
I've implemented a way to have concurrent jobs in bash, as seen here.
I'm looping through a file with around 13000 lines. I'm just testing and printing each line, as such:
#!/bin/bash
max_bg_procs(){
if [[ $# -eq 0 ]] ; then
echo "Usage: max_bg_procs NUM_PROCS. Will wait until the number of background (&)"
echo " bash processes (as determined by 'jobs -pr') falls below NUM_PROCS"
return
fi
local max_number=$((0 + ${1:-0}))
while true; do
local current_number=$(jobs -pr | wc -l)
if [[ $current_number -lt $max_number ]]; then
echo "success in if"
break
fi
echo "has to wait"
sleep 4
done
}
download_data(){
echo "link #" $2 "["$1"]"
}
mapfile -t myArray < $1
i=1
for url in "${myArray[@]}"
do
max_bg_procs 6
download_data $url $i &
((i++))
done
echo "finito!"
I've also tried other solutions such as this and this, but my issue is persistent:
At a "random" given step, usually between the 2000th and the 5000th iteration, it simply gets stuck. I've put those various echo in the middle of the code to see where it would get stuck but it the last thing it prints is the $url $i.
I've done the simple test to remove any parallelism and just loop the file contents: all went fine and it looped till the end.
So it makes me think I'm missing some limitation on the parallelism, and I wonder if anyone could help me out figuring it out.
Many thanks!
Here, we have up to 6 parallel bash processes calling download_data, each of which is passed up to 16 URLs per invocation. Adjust per your own tuning.
Note that this expects both bash (for exported function support) and GNU xargs.
#!/usr/bin/env bash
# ^^^^- not /bin/sh
download_data() {
echo "link #$2 [$1]" # TODO: replace this with a job that actually takes some time
}
export -f download_data
<input.txt xargs -d $'\n' -P 6 -n 16 -- bash -c 'for arg; do download_data "$arg"; done' _
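If the echo eventually becomes a real download, the exported function might look something like this (curl and the output-file naming are assumptions here, not part of the question):
download_data() {
    # $1 is one URL; name the output file after the URL's last path component
    curl -fsSL -o "${1##*/}" "$1" || echo "failed: $1" >&2
}
export -f download_data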
Using GNU Parallel it looks like this
cat input.txt | parallel echo link '\#{#} [{}]'
{#} = the job number
{} = the argument
It will spawn one process per CPU. If you instead want 6 in parallel use -j:
cat input.txt | parallel -j6 echo link '\#{#} [{}]'
If you prefer running a function:
download_data(){
echo "link #" $2 "["$1"]"
}
export -f download_data
cat input.txt | parallel -j6 download_data {} {#}
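As a side note, parallel can also read the input file itself, which avoids the cat:
parallel -j6 download_data {} {#} :::: input.txt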
Suppose I created my own bash script with a command called customcmd
I want it so that if I type customcmd into the terminal, every subsequent command following it will also execute customcmd
so suppose I do
>customcmd
>param1
>param2
>param3
I want this to be the equivalent of
>customcmd
>customcmd param1
>customcmd param2
>customcmd param3
i.e. I want it to be so that by executing customcmd once, I won't have to type customcmd again; the command line should parse every command I type afterwards as parameters to customcmd...
How do I go about achieving this when writing the bash script?
If I understand your question correctly, I'd do the following:
Create a script, e.g. mycommand.sh:
#!/bin/bash
while true; do
read -r _INPUT
echo "$_INPUT"
done
initialize an infinite loop
for each iteration, get the user input (whatever it is) and run it through the command you specify in the while loop (if your script needs to parse multiple arguments, you can swap out echo for a function that can handle that, as sketched below)
Hope that helps!
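That swap might look like this (a minimal sketch; customcmd here is a stand-in for your real command):
#!/bin/bash
customcmd() {
    echo "customcmd got: $*"    # replace with the real work
}
while read -r _INPUT; do
    customcmd $_INPUT           # deliberately unquoted: each word becomes a separate parameter
done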
This could be one of your forms.
#!/bin/bash
shopt -s extglob
function customcmd {
# Do something with "$#".
echo "$*"
}
while read -erp ">" INPUT && [[ $INPUT != *([[:blank:]]) ]]; do
if [[ $INPUT == customcmd ]]; then
customcmd
while read -erp ">" INPUT && [[ $INPUT != *([[:blank:]]) ]]; do
customcmd "$INPUT"
done
fi
done
Or this:
#!/bin/bash
shopt -s extglob
function customcmd {
if [[ $# -gt 0 ]]; then
# Do something with "$#".
echo "$*"
else
local INPUT
while read -erp ">" INPUT && [[ $INPUT != *([[:blank:]]) ]]; do
customcmd "$INPUT"
done
fi
}
while read -erp ">" -a INPUT && [[ $INPUT != *([[:blank:]]) ]]; do
case "$INPUT" in
customcmd)
customcmd "${INPUT[#]:2}"
;;
# ...
esac
done
** In arrays, $INPUT is equivalent to ${INPUT[0]}. Some people would discourage the former since it's less self-documenting, but every tool has its own traditionally accepted idioms, much like those hacks in Awk; no wiki, nor anyone claiming to be a more veteran Bash user, gets to dictate which should be standard.
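With either form, a session would go roughly like this (the transcript is illustrative, not captured):
>customcmd
>param1
param1
>param2
param2
>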
I have a bash script where I have an echo before every command showing what is happening.
But I need to disable echo when setting it up as a cron job, and then enable it again when doing some testing.
I find it very hard to go to each line and add/remove a comment.
Is there anything I can include at the top, something like
enable echo or disable echo
so that I don't have to waste time?
The absolute easiest would be to insert the following line after the hashbang line:
echo() { :; }
When you want to re-enable, either delete the line or comment it out:
#echo() { :; }
If you're not using echo but printf, same strategy, i.e.:
printf() { :; }
If you absolutely need to actually echo/printf something, prepend the builtin statement, e.g.:
builtin echo "This 'echo' will not be suppressed."
This means that you can do conditional output, e.g.:
echo () {
[[ "$SOME_KIND_OF_FLAG" ]] && builtin echo "$@"
}
Set the SOME_KIND_OF_FLAG variable to something non-null, and the overridden echo function will behave like normal echo.
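Since the flag is just a shell variable, you can also supply it from the environment per invocation:
SOME_KIND_OF_FLAG=yes ./script.sh    # echo works normally
./script.sh                          # echo is suppressed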
EDIT: another alternative would be to use echo for instrumenting (debugging), and printf for the outputs (e.g., for piping purposes). That way, no need for any FLAG. Just disable/enable the echo() { :; } line according to whether you want to instrument or not, respectively.
Enable/Disable via CLI Parameter
Put these lines right after the hashbang line:
if [[ debug == "$1" ]]; then
INSTRUMENTING=yes # any non-null will do
shift
fi
echo () {
[[ "$INSTRUMENTING" ]] && builtin echo $#
}
Now, invoking the script like this: script.sh debug will turn on instrumenting. And because there's the shift command, you can still feed parameters. E.g.:
Without instrumenting: script.sh param1 param2
With instrumenting: script.sh debug param1 param2
The above can be simplified to:
if [[ debug != "$1" ]]; then
echo () { :; }
shift
fi
If you need the instrumenting flag (e.g. to record the output of a command to a temp file only if debugging), use an else-block:
if [[ debug != "$1" ]]; then
echo () { :; }
else
INSTRUMENTING=yes
shift
fi
REMEMBER: in non-debug mode, all echo commands are disabled; you have to either use builtin echo or printf. I recommend the latter.
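For example, a message that must survive non-debug mode (the filename is illustrative):
printf '%s\n' "results written to out.txt"    # always printed
echo "done writing results"                   # suppressed unless debugging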
Several things:
Don't use echo at all
Instead use set -xv to set debug mode, which will echo each and every command. You can set PS4 to the desired prompt: for example, PS4='$LINENO: ' will print out the line number on each line (this works in ksh, and in bash I believe it's the same). Then you don't have to clean up your script. To shut it off, use set +xv.
Example:
foo=7
bar=7
PS4='$LINENO: '
set -xv #Begin debugging
if [ $foo = $bar ]
then
echo "foo certainly does equal bar"
fi
set +xv #Debugging is off
if [ $bar = $foo ]
then
echo "And bar also equals foo"
fi
Results:
$ myprog.sh
if [ $foo = $bar ]
then
echo "foo certainly does equal bar"
fi
5: [ 7 = 7 ]
7: echo 'foo certainly does equal bar'
foo certainly does equal bar
set +xv #Debugging is off
And bar also equals foo
Use a function
Define a function instead of using echo:
Example:
function myecho {
if [ ! -z "$DEBUG" ]
then
echo "$*"
fi
}
DEBUG="TRUE"
my echo "Will print out this line"
unset DEBUG
myecho "But won't print out this line"
Use the nop command
The colon (:) is the nop command in BASH. It doesn't do anything. Use an environment variable and define it as either echo or :. When set to a colon, nothing happens. When set to echo, the line prints.
Example:
echo=":"
$echo "This line won't print"
echo="echo"
$echo "But this line will."
Building on Matthew's answer, how about something like this:
myEcho = "/bin/true"
if [ ! "$CRON" ]: then
myEcho = "/bin/echo"
fi
and then use $myEcho instead of echo in your script?
You can do one better. If you set up your crontab as detailed in another answer, you can then check whether you are running under cron and only print if you are not. This way you don't need to modify your script at all between different runs.
You should then be able to use something like this (probably doesn't quite work, I'm not proficient in bash):
if [ ! "$CRON" ]; then
echo "Blah blah"
fi
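The crontab setup referred to above amounts to defining the variable in the crontab itself, so the script sees it only when run by cron (schedule and path are placeholders):
CRON=true
0 2 * * * /path/to/script.sh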
Try set -v at the top to echo each command. To stop echoing change it to set +v.
Not sure if I missed it below, but another solution is to use a variable (e.g. debug) at the start of the bash script.
Once you set debug=true, any conditional if will enable or disable multiple echo statements in the script.
typeset debug=false # set to true if need to debug
...
if [ "$debug" == "true" ]; then
echo
echo "Filter"
read
fi
...
if [ "$debug" == "true" ]; then
echo
echo "to run awk"
fi
Couldn't post a code block in a comment, so I'll post this as an answer.
If you're a perfectionist (like I am) and don't want the last set +x line to be printed, and would rather print Success or FAIL instead, this works:
(
set -e # Stop at first error
set -x # Print commands
set -v # Print shell input lines as they are read
git pull
# ...other commands...
) && echo Success || echo FAIL
It will create a subprocess, though, which may be overkill.
If you're running it in cron, why not just dump the output? Change your crontab entry so that it has > /dev/null at the end of the command, and all output will be ignored.
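For example (schedule and path are placeholders), with stderr silenced as well:
0 2 * * * /path/to/script.sh > /dev/null 2>&1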