I'm new to bash scripting and I ran into an issue when I tried to improve my script. My script splits a text file, and each part of the file is processed in a function ... Everything works fine, but my problem occurs when I fork (with &) the processing function! My args are not as expected (a line number and text with whitespace and backspaces) and I suppose it's because of global variables ... I tried to fork, then sleep 1 second in the parent process and then continue, so that the args would be copied into local variables for the function's execution, but that doesn't work either ... Can you give me a hint about how to do it? What I want is to be able to pass args to my function and still be allowed to modify them after the fork call in my parent process ... is it possible?
Thanks in advance :)
Here is my script :
#!/bin/bash
#Parameters#
FILE='tasks'
LINE_BY_THREAD='500'
#Function definition
function checkPart() {
    local NUMBER="$1"
    local TXT="$2"
    echo "$TXT" | { while IFS= read -r line ; do
        IFS=' ' read -ra ADDR <<< "$line"
        # If the countdown is set to 0, launch the task and reset it to its init value
        if [ ${ADDR[0]} == '0' ]; then
            # task launching
            # to replace by $()
            echo `./${ADDR[1]}.sh ${ADDR[2]} &`
            # countdown set to init value
            sed -i "$NUMBER c ${ADDR[3]} ${ADDR[1]} ${ADDR[2]} ${ADDR[3]}" $FILE
        else
            sed -i "$NUMBER c $((ADDR-1)) ${ADDR[1]} ${ADDR[2]} ${ADDR[3]}" $FILE
        fi
        ((NUMBER++))
    done }
}
#Init processes number#
LINE_NUMBER=$(wc -l < $FILE)
NB_PROCESSES=$(($LINE_NUMBER / $LINE_BY_THREAD))
if [ $(($LINE_NUMBER % $LINE_BY_THREAD)) -ne '0' ]; then
    ((NB_PROCESSES++))
fi
echo "Number of thread to be run : $NB_PROCESSES"
#Start the split sequence#
for (( i = 2; i <= $LINE_NUMBER; i += $LINE_BY_THREAD ))
do
    PARAM=$(sed "$i,$(($i + $LINE_BY_THREAD - 1))!d" "$FILE")
    (checkPart "$i" "$PARAM") &
    sleep 1
done
My job is to create a scheduler for tasks described in the following file:
#MinutesBeforeLaunch#TypeOfProcess#Argument#Frequency#
2 agr_m 42 5
5 agr_m_s 26 5
0 agr_m 42 5
3 agr_m_s 26 5
0 agr_m 42 5
5 agr_m_s 26 5
4 agr_m 42 5
5 agr_m_s 26 5
4 agr_m 42 5
4 agr_m_s 26 5
2 agr_m 42 5
4 agr_m_s 26 5
When I read a number > 0 in the first column, I just decrement it; when it's 0, I have to launch the task and reset the first number to the frequency from the last column ...
My first code is the one above, using sed for the text replacement, but is it possible to do better?
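To answer the forking part: yes, it is possible. The arguments passed to checkPart are expanded before the background job is forked, so the child gets its own copy and the parent can change its variables afterwards without affecting the running job. Here is a minimal sketch (a hypothetical demo, not part of the scheduler above) showing that behaviour:

#!/bin/bash
# Demo: a backgrounded function keeps a private copy of its arguments.
work() {
    local number="$1"
    local txt="$2"
    sleep 2                          # pretend to do something slow
    echo "child saw: number=$number txt='$txt'"
}

PARAM="original value"
work 1 "$PARAM" &                    # "$PARAM" is expanded here, before the fork
PARAM="changed by the parent"        # does not affect the already-running child
wait                                 # wait for the background job
echo "parent has: PARAM='$PARAM'"

The child prints "original value" while the parent ends up with "changed by the parent", which is exactly the behaviour asked about.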
I have an application that takes from 1 to 20 variables. The names of each of the variables are identical EXCEPT they end with a sequential number. From _1 to _20.
For example:
Value_1, Value_2, Value_3...Value_20
What I'd like to do is:
See if the variables exist first. There may be one variable or 20 of them. Furthermore, they are not guaranteed to be in a particular order. For example, only Value_15 might exist.
For the variables beginning with Value_ I'd like to check and see if they are null or not.
For those variables that are NOT NULL or EMPTY I want to get their values and feed them into another process.
I've heard using arrays might be the way to go, but I'd have to convert the variables found into an array... Or
I could just iterate over them from 1 to 20, see if they exist, and if they have values assigned to them, pull the value from each one.
What is the best way of doing this?
Bash provides variable indirection: given names like var_$i (where $i is a number), you can assign the constructed name to a variable (e.g. tmp="var_$i") and then get the value of the resulting var_x through ${!tmp}, e.g.
#!/bin/bash
var_1=1
var_2=2
var_4=4
var_5=5
var_6=6
var_8=8
var_9=9
var_10=10
var_11=11
var_13=13
var_14=14
var_15=15
var_16=16
var_18=18
var_19=19
for ((i = 1; i <= 20; i++)); do
    tmp="var_$i"
    if [ -n "${!tmp}" ]; then
        printf "var_%d exists: %s\n" "$i" "${!tmp}"
    fi
done
Since bash 4.3 this has been complemented by the nameref declaration. You have the option of declare -n tmp=var_x to declare a nameref, e.g.
for ((i = 1; i <= 20; i++)); do
    declare -n tmp="var_$i"
    if [ -n "$tmp" ]; then
        printf "var_%d exists: %s\n" "$i" "$tmp"
    fi
done
Give both a try and let me know if you have questions.
Example Use/Output
In both cases you will receive:
$ bash nameref.sh
var_1 exists: 1
var_2 exists: 2
var_4 exists: 4
var_5 exists: 5
var_6 exists: 6
var_8 exists: 8
var_9 exists: 9
var_10 exists: 10
var_11 exists: 11
var_13 exists: 13
var_14 exists: 14
var_15 exists: 15
var_16 exists: 16
var_18 exists: 18
var_19 exists: 19
I have a DB error log file, and it will grow continuously.
Now I want to set up some error monitoring on that file, run every 5 minutes.
The problem is that I don't want to scan the whole file every 5 minutes (whenever the monitoring cron runs), because it may grow very big in the future. Scanning through the whole (big) file every 5 minutes would consume more resources than necessary.
So I just want to scan only the lines which were inserted/written to the log during the last 5-minute interval.
Each error recorded in the log has a timestamp prepended to it like below:
180418 23:45:00 [ERROR] mysql got signal 11.
So I want to search with the pattern [ERROR] only in the lines which were added in the last 5 minutes (not the whole file) and write the output to another file.
Please help me here.
Feel free to ask if you need more clarification on my question.
I'm using RHEL 7 and I'm trying to implement the above monitoring with a bash shell script.
Serializing the Byte Offset
This picks up where the last instance left off. If you run it every 5 minutes, it'll scan roughly 5 minutes of data.
Note that this implementation can knowingly scan data added during an invocation's run twice. This is a little sloppy, but it's much safer to scan overlapping data twice than never to read it at all, which is a risk you run when relying on cron to run your program on schedule (likewise, sleeps can run over the requested time if the system is busy).
#!/usr/bin/env bash
file=$1; shift # first input: filename
grep_opts=( "$@" ) # remaining inputs: grep options
dir=$(dirname -- "$file") # extract directory name to use for offset storage
basename=${file##*/} # pick up file name w/o directory
size_file="$dir/.$basename.size" # generate filename to use to store offset
if [[ -s $size_file ]]; then      # ...if we already have a file with an offset...
    old_size=$(<"$size_file")     # ...read it from that file
else
    old_size=0                    # ...otherwise start at the front.
fi
new_size=$(stat --format=%s -- "$file") || exit # Figure out current size
if (( new_size < old_size )); then
    old_size=0                    # file was truncated, so we can't trust old_size
elif (( new_size == old_size )); then
    exit 0                        # no new contents, so no point in trying to search
fi
# read starting at old_size and grep only that content
dd iflag=skip_bytes skip="$old_size" if="$file" | grep "${grep_opts[@]}"
pipe_status=( "${PIPESTATUS[@]}" ); grep_retval=${pipe_status[1]}
# if the read failed, don't store an updated offset
(( pipe_status[0] != 0 )) && exit 1
# create a new tempfile to store offset in
tempfile=$(mktemp -- "${size_file}.XXXXXX") || exit
# write to that temporary file...
printf '%s\n' "$new_size" > "$tempfile" || { rm -f "$tempfile"; exit 1; }
# ...and if that write succeeded, overwrite the last place where we serialized output.
mv -- "$tempfile" "$new_size" || exit
exit "$grep_retval"
Alternate Mode: Bisect For The Timestamp
Note that this can miss content if you're relying on, say, cron to invoke your code every 5 minutes on-the-dot; storing byte offsets can thus be more accurate.
Using the bsearch tool by Ole Tange:
#!/usr/bin/env bash
file=$1; shift
start_date=$(date -d 'now - 5 minutes' '+%y%m%d %H:%M:%S')
byte_offset=$(bsearch --byte-offset "$file" "$start_date")
dd iflag=skip_bytes skip="$byte_offset" if="$file" | grep "$@"
Another approach could be something like this:
DB_FILE="FULL_PATH_TO_YOUR_DB_FILE"
current_db_size=$(du -b "$DB_FILE" | cut -f 1)
if [[ ! -a SOME_PATH_OF_YOUR_CHOICE/last_size_db_file ]] ; then
    tail --bytes $current_db_size $DB_FILE > SOME_PATH_OF_YOUR_CHOICE/log-file_$(date +%Y-%m-%d_%H-%M-%S)
else
    if [[ $(cat SOME_PATH_OF_YOUR_CHOICE/last_size_db_file) -gt $current_db_size ]] ; then
        previously_readed_bytes=0
    else
        previously_readed_bytes=$(cat SOME_PATH_OF_YOUR_CHOICE/last_size_db_file)
    fi
    new_bytes=$(($current_db_size - $previously_readed_bytes))
    tail --bytes $new_bytes $DB_FILE > SOME_PATH_OF_YOUR_CHOICE/log-file_$(date +%Y-%m-%d_%H-%M-%S)
fi
printf '%s' "$current_db_size" > SOME_PATH_OF_YOUR_CHOICE/last_size_db_file
This prints all bytes of DB_FILE not previously printed to SOME_PATH_OF_YOUR_CHOICE/log-file_$(date +%Y-%m-%d_%H-%M-%S).
Note that $(date +%Y-%m-%d_%H-%M-%S) will be the current 'full' date at the time the log file is created.
You can make this a script and use cron to execute it every five minutes; something like this:
*/5 * * * * PATH_TO_YOUR_SCRIPT
Here is my approach:
First, read the whole log as written so far.
Once you reach the end, collect new lines for a timespan (9 seconds in my example for faster testing, while my dummy server appends to the logfile every 3 seconds).
After the timespan, echo the cache (an array arr), clear it, then loop and sleep for some time so that this process doesn't consume all the CPU time.
First, my dummy logfile writer:
#!/bin/bash
#
# dummy logfile writer
#
while true
do
    s=$(( $(date +%s) % 3600))
    echo $s server msg
    sleep 3
done >> seconds.log
Started via ./seconds-out.sh &.
Now the more complicated part:
#!/bin/bash
#
# consume a logfile as written so far. Then, collect every new line
# and show it in an interval of $interval
#
interval=9 # 9 seconds
#
printf -v secnow '%(%s)T' -1
start=$(( secnow % (3600*24*365) ))
declare -a arr
init=false
while true
do
    read line
    printf -v secnow '%(%s)T' -1
    now=$(( secnow % (3600*24*365) ))
    # consume every line created in the past
    if (( ! init ))
    then
        # assume reading a line might not take longer than a second (rounded to whole seconds)
        while (( ${#line} > 0 && (now - start) < 2 ))
        do
            read line
            start=$now
            echo -n "."                      # for debugging purpose, remove
            printf -v secnow '%(%s)T' -1
            now=$(( secnow % (3600*24*365) ))
        done
        init=1
        echo "init=$init"                    # for debugging purpose, remove
    # collect new lines, display them every $interval seconds
    else
        if (( ${#line} > 0 ))
        then
            echo -n "-"                      # for debugging purpose, remove
            arr+=("read: $line \n")
        fi
        if (( (now - start) > interval ))
        then
            echo -e "${arr[@]}"
            arr=()
            start=$now
        fi
    fi
    sleep .1
done < seconds.log
Output with logfile generator in 3 seconds, running for some time, then starting the read-seconds.sh script, with debugging output activated:
./read-seconds.sh
.......................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................init=1
---read: 1688 server msg
read: 1691 server msg
read: 1694 server msg
---read: 1697 server msg
read: 1700 server msg
read: 1703 server msg
----read: 1706 server msg
read: 1709 server msg
read: 1712 server msg
read: 1715 server msg
^C
Every dot represents a logfile line from the past that was therefore skipped.
Every dash represents a logfile line collected.
I have a file named countdown on my computer and I am trying to get it to count down from whatever the user puts in.
For example, ./countdown 5 would start a 5 second timer that outputs a "." every second and prints done after 5 seconds.
./countdown 10 would start a 10 second timer that outputs a "." every second and prints done after 10 seconds.
Here is my code; how can I read what the user inputs?
t=$((5))
while [ $t -gt 0 ]; do
    echo -ne "."
    sleep 1
    : $((t--))
done
echo "done"
Just change t=$((5)) to t=$1, which will assign the first argument to t.
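Putting that together, the whole countdown script could look like this (a sketch based on the code above; the quoting around $t is just a small defensive addition):

#!/bin/bash
# countdown: print one "." per second for the requested number of seconds.
t=$1                     # first command line argument, e.g. 5 for ./countdown 5
while [ "$t" -gt 0 ]; do
    echo -ne "."
    sleep 1
    : $((t--))
done
echo "done"

After chmod +x countdown, running ./countdown 5 prints a dot each second for five seconds and then done.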
I'm looking for a bash script that can parse a time duration.
If three arguments are given, they represent hours, minutes, and seconds. If two arguments are given, they represent minutes and seconds, with the hours zero.
What about the following:
#!/bin/bash
h=0
if [ "$#" -ge 3 ]
then
h=$1
shift
fi
sec=$((3600*$h+60*$1+$2))
echo "The total number of seconds is $sec"
Since the question does not specify what you aim to do with the given time, the program calculates the total number of seconds. Furthermore, it might be useful to check that at least two arguments are given.
The script uses the shift operation: shift makes $1 := $2, $2 := $3, etc. In other words, the first argument is processed, and then you "pretend" it never existed.
By default h is set to zero, and only if the number of arguments is greater than or equal to 3 is it overwritten with the first argument.
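For example, assuming the script above is saved as duration.sh and made executable, it behaves like this:

$ ./duration.sh 1 2 3
The total number of seconds is 3723
$ ./duration.sh 2 30
The total number of seconds is 150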
This is a more or less general solution for that type of task. Sorry if it's monkey code, but I think it is sufficient:
gettime() {
    params=(
        years months weeks days hours minutes seconds
    )
    for i in $(seq ${#params[@]}); do
        param_i=$(( ${#params[@]} - i ))  # reversed params index
        [ $i -le $# ] && {
            eval "local ${params[$param_i]}=\$$(($# - i + 1))"
        } || {
            eval "local ${params[$param_i]}=0"
        }
        eval "echo ${params[$param_i]} '==' \$${params[$param_i]}" # debug output
    done
}
Here's the sample output:
$ gettime 3 4 5 6 7
seconds == 7
minutes == 6
hours == 5
days == 4
weeks == 3
months == 0
years == 0
Note that the shell you are using must not only support the POSIX standard, but also arrays.
First Argument: $1
Second Argument: $2
Third Argument: $3
and so on...
Example:
bash-2.05a$ ./parseDuration.sh 13 25 25
13 hours and 25 minutes and 25 seconds
bash-2.05a$ cat ./parseDuration.sh
#!/bin/bash
echo "$1 hours and $2 minutes and $3 seconds"
I have a bash program which extracts marks from a file that looks like this:
Jack ex1=5 ex2=3 quiz1=9 quiz2=10 exam=50
I want the code to execute such that when I input into terminal:
./program -ex1 -ex2 -ex3
Jack does not have an ex3 in his data, so an output of 0 will be returned:
Jack 5 3 0
How do I code my program to output 0 for each unrecognized argument?
If I understand what you are trying to do, it isn't that difficult. What you need to do is read each line into a name and the remainder into marks. (input is read from stdin)
Then for each argument given on the command line, check if it (minus the leading -) matches the name of any grade in marks (the left side of the = sign). If it does, save the grade (right side of the = sign) and set the found flag to 1.
After checking all marks against the first argument, output the grade if the found flag is 1, otherwise output 0. Repeat for all command line arguments (and then for all students in the file). Let me know if you have questions:
#!/bin/bash
declare -i found=0 # initialize variables
declare -i grade=0
while read -r name marks; do       # read each line into name & marks
    printf "%s" "$name"            # print student name
    for i in "$@"; do              # for each command line argument
        found=0                    # reset found (flag) to 0
        for j in $marks; do        # for each set of marks check for match
            [ "$i" = "-${j%=*}" ] && { found=1; grade=${j#*=}; }   # if match save grade
        done
        [ $found -eq 1 ] && printf " %d" $grade || printf " 0"     # print grade or 0
    done
    printf "\n"                    # print newline
done
exit 0
Output
$ bash marks_check.sh -ex1 -ex2 -ex3 < dat/marks.txt
Jack 5 3 0