I've created a script that gets the HTTP status code for each URL.
I'm trying to append the status code to an array if it's greater than 399.
Elements get appended to the array correctly outside the for loop, but not inside it.
(Here that array is TP.)
ARR=('http://google.com' 'https://example.org/olol' 'https://example.org' 'https://steamcommunity.com')
TP=()
TP+=("77")
ERROR=true
#for item in "${ARR[@]}";do curl -I $item | grep HTTP | awk '{print $2}'; sleep 3; echo $item; done;
for item in "${ARR[@]}";
do curl -I $item | grep HTTP | awk '{print $2}'| { read message;
echo "hi $message";
TP+=("57")
if [ $message -gt 399 ]; then
#TP+=("57");
ERROR=false;
echo "$message is greater";
fi
};
sleep 2;
echo $item;
done;
echo "${TP[#]}"
Please help, I'm a noob.
When you pipe results (eg, from the curl call) to another command (eg, grep/awk/read) you are spawning a subshell; any variables set within the subshell are 'lost' when the subshell exits.
One idea for fixing the current code:
for item in "${ARR[@]}"
do
    read -r message < <(curl -I "$item" | awk '/HTTP/ {print $2}')
    echo "hi $message"
    TP+=("57")
    if [ "$message" -gt 399 ]
    then
        #TP+=("57")
        ERROR=false
        echo "$message is greater"
    fi
    sleep 2
    echo "$item"
done
Where:
while the curl|awk pipeline is still invoked as a subshell, its result is fed to the read in the current/parent shell, so the contents of the message variable remain available within the scope of the current/parent shell
double quotes were added around a few variable references as good practice (eg, in case a variable contains whitespace, or in case it is empty)
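As a minimal illustration of the difference (a standalone sketch, not part of the original script; the literal 200 just stands in for a status code):
# Pipe form: read runs in a subshell, so msg is empty afterwards
msg=""
echo "200" | read -r msg
echo "after pipe: '$msg'"        # prints nothing between the quotes
# Process-substitution form: read runs in the current shell, so msg keeps its value
read -r msg < <(echo "200")
echo "after < <(...): '$msg'"    # prints 200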
Related
The related stub looks like this:
tag=('*' '#')
i=0
function output()
{
    ifs="$IFS"
    IFS=$'\n'
    for line in $@
    do
        echo $'\t' "${tag[$i]}" $line
    done
    IFS="$ifs"
    echo $i
    i=$((i+1))
    echo $i
    i=$((i%2))
    echo $i
}
output a | tee README
output b
What I want to do is:
Every time output is executed to print a message block, a different prefix (${tag[$i]}) should be used to distinguish it from the surrounding context. Also, part of the message can be redirected to a file.
The result is:
* a
0
1
1
* b
0
1
1
With the pipe | tee README, the variable $i gets reset to 0.
Why does this happen, and can I implement the function along these lines?
Thanks.
It happens because, as stated in the Bash manual, each command in a pipeline is executed in a separate process (i.e., in a subshell).
In order to preserve the value of the i variable, I suggest you enclose the two output calls in a single shell process as follows:
#!/bin/bash
tag=('*' '#')
i=0
function output()
{
    ifs="$IFS"
    IFS=$'\n'
    for line in $@
    do
        echo $'\t' "${tag[$i]}" $line
    done
    IFS="$ifs"
    echo $i
    i=$((i+1))
    echo $i
    i=$((i%2))
    echo $i
}
(
    output a
    output b
) | tee README
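Traced by hand (under the assumption the script is run exactly as shown), the grouped version should now print the second block with the # prefix, since i keeps its value across the two calls:
* a
0
1
1
# b
1
2
0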
I'm trying to write a simple function to debug my script easily and make my code simpler. (Still stuck after 3 hours.)
I want to pass 3 arguments to this function:
A command
A success string
And an error string
The function is supposed to execute the command and print the proper string depending on whether it's a success or not.
What I mean by successful is that the command prints something to the output.
Here is what I've tried (on CentOS 7):
#!/bin/bash
CMD=$(yum list installed | egrep "yum.utils.\w+" | cut -d " " -f1)
SUCCESS="YES"
ERROR="NO"
foo() {
if ["$1" != ""]; then
echo -e "$2"
else
echo -e "$3"
fi
}
foo $CMD $SUCCESS $ERROR
Unfortunately, I'm encountering 2 problems:
Firstly, when $CMD is empty, the first parameter becomes $SUCCESS instead of an empty string (which is the behaviour I want).
Secondly, I want to suppress the console output (> /dev/null 2>&1 ???).
Do you think it's possible? Do you have any idea how to do it?
Otherwise, is there an easier way with the eval command?
Thanks for reading and have a nice day,
Valentin M.
------------------ Correction ------------------
#!/bin/bash
CMD=$(yum list installed | grep -E "yum.utils.\w+" | cut -d " " -f1)
SUCCESS="YES"
ERROR="NO"
foo() {
if [ "$1" != "" ]; then
echo -e "$2"
else
echo -e "$3"
fi
}
foo "$CMD" "$SUCCESS" "$ERROR"
I found a similar topic here: Stack Overflow: How to write a Bash function that can generically test the output of executed commands?
Unfortunately, I'm encountering 2 problems:
Firstly, when $CMD is empty, the first parameter becomes $SUCCESS instead of an empty string (which is the behaviour I want).
If you follow the suggestion in William Pursell's comment above, this problem is solved, since an empty first parameter is then passed.
Secondly, I want to suppress the console output (> /dev/null 2>&1 ???).
I assume by console output you mean the output to STDERR, since STDOUT is assigned to CMD. Your > /dev/null 2>&1 is unsuitable, as it also redirects STDOUT to /dev/null; just do this for STDERR:
CMD=$(yum list installed 2>/dev/null | egrep "yum.utils.\w+" | cut -d " " -f1)
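Putting both fixes together (a sketch assembled from the corrected script in the question plus the redirection above; nothing else is changed):
#!/bin/bash
# Discard yum's STDERR; STDOUT still feeds the pipeline and ends up in CMD
CMD=$(yum list installed 2>/dev/null | grep -E "yum.utils.\w+" | cut -d " " -f1)
SUCCESS="YES"
ERROR="NO"
foo() {
    if [ "$1" != "" ]; then
        echo -e "$2"
    else
        echo -e "$3"
    fi
}
# Quoting $CMD keeps the first argument in place even when it is empty
foo "$CMD" "$SUCCESS" "$ERROR"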
I've implemented a way to have concurrent jobs in bash, as seen here.
I'm looping through a file with around 13000 lines. I'm just testing and printing each line, as such:
#!/bin/bash
max_bg_procs(){
if [[ $# -eq 0 ]] ; then
echo "Usage: max_bg_procs NUM_PROCS. Will wait until the number of background (&)"
echo " bash processes (as determined by 'jobs -pr') falls below NUM_PROCS"
return
fi
local max_number=$((0 + ${1:-0}))
while true; do
local current_number=$(jobs -pr | wc -l)
if [[ $current_number -lt $max_number ]]; then
echo "success in if"
break
fi
echo "has to wait"
sleep 4
done
}
download_data(){
echo "link #" $2 "["$1"]"
}
mapfile -t myArray < $1
i=1
for url in "${myArray[@]}"
do
max_bg_procs 6
download_data $url $i &
((i++))
done
echo "finito!"
I've also tried other solutions such as this and this, but my issue is persistent:
At a "random" given step, usually between the 2000th and the 5000th iteration, it simply gets stuck. I've put those various echo in the middle of the code to see where it would get stuck but it the last thing it prints is the $url $i.
I've done the simple test to remove any parallelism and just loop the file contents: all went fine and it looped till the end.
So it makes me think I'm missing some limitation on the parallelism, and I wonder if anyone could help me out figuring it out.
Many thanks!
Here, we have up to 6 parallel bash processes calling download_data, each of which is passed up to 16 URLs per invocation. Adjust per your own tuning.
Note that this expects both bash (for exported function support) and GNU xargs.
#!/usr/bin/env bash
# ^^^^- not /bin/sh
download_data() {
echo "link #$2 [$1]" # TODO: replace this with a job that actually takes some time
}
export -f download_data
<input.txt xargs -d $'\n' -P 6 -n 16 -- bash -c 'for arg; do download_data "$arg"; done' _
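One detail worth noting: the trailing _ becomes $0 inside the bash -c script, so the URLs supplied by xargs land in $1, $2, ..., which is exactly what the bare for arg loop iterates over. A quick way to see the batching on toy input (a hypothetical check, not part of the answer; with -P 2 the two batches may print in either order):
printf '%s\n' url1 url2 url3 | xargs -d $'\n' -P 2 -n 2 -- bash -c 'echo "batch: $*"' _
# batch: url1 url2
# batch: url3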
Using GNU Parallel it looks like this
cat input.txt | parallel echo link '\#{#} [{}]'
{#} = the job number
{} = the argument
It will spawn one process per CPU. If you instead want 6 in parallel use -j:
cat input.txt | parallel -j6 echo link '\#{#} [{}]'
If you prefer running a function:
download_data(){
echo "link #" $2 "["$1"]"
}
export -f download_data
cat input.txt | parallel -j6 download_data {} {#}
Hi, I'm making a script to run an rsync process. The sys admin has created the rsync script; when it runs, it asks you to select options, so I want to create a wrapper script that passes those answers to it and runs it from cron.
The list of directories to rsync is taken from a file.
filelist=$(cat filelist.txt)
for i in $filelist; do
echo -e "3\nY" | ./rsync.sh $i
#This will create a rsync log file
Then I check a value in the log file; if it is empty I move on to the next file. If it is not empty, I have to start the real rsync process as below, which takes more than 2 hours.
if [ a != 0 ];then
echo -e "3\nN" | ./rsync.sh $i
The rsync process above needs to be sent to the background so the loop can take the next file. I checked the screen command, but screen is not working on the server. I also need to get the duration of the process and write it to the log; when I use the time command I am unable to pass the echoed input. Again, this needs to go to the background while the loop takes the next file. I'd appreciate any suggestions to get this task done.
Questions:
1. How do I send the argument with the time command?
echo -e "3\nY" | time ./rsync.sh $i
The above is not working.
2. How do I send this to the background and take the next file to rsync while the previous rsync process is still running?
Full Code
#!/bin/bash
filelist=$(cat filelist.txt)
Lpath=/opt/sas/sas_control/scripts/Logs/rsync_logs
date=$(date +"%m-%d-%Y")
timelog="time_result/rsync_time.log-$date"
for i in $filelist;do
#echo $i
b_i=$(basename $i)
echo $b_i
echo -e "3\nY" | ./rsync.sh $i
f=$(cat $Lpath/$(ls -tr $Lpath| grep rsync-dry-run-$b_i | tail -1) | grep 'transferred:' | cut -d':' -f2)
echo $f
if [ $f != 0 ]; then
#date=$(date +"%D : %r")
start_time=`date +%s`
echo "$b_i-start:$start_time" >> $timelog
#time ./rsync.sh $i < echo -e "3\nY" 2> "./time_result/$b_i-$date" &
time { echo -e "3\nY" | ./rsync.sh $i; } 2> "./time_result/$b_i-$date"
end_time=`date +%s`
s_time=$(cat $timelog|grep "$b_i-start" |cut -d ':' -f2)
duration=$(($end_time-$s_time))
echo "$b_i duration:$duration" >> $timelog
fi
done
Your question is not very clear, but I'll try:
(1) If I understand you correctly, you want to time the rsync.
My first attempt would be to use echo xxxx | time rsync. On my bash, however, this was broken (or not supposed to work?). I normally use Zsh instead of bash, and on zsh this indeed runs fine.
If it is important for you to use bash, an alternative (since the time for the echo can likely be neglected) would be to time the whole pipe, i.e. time ( echo xxxx | rsync ), or even simpler time rsync <(echo xxxx).
(2) To send a process to the background, add an & to the line. However, the time command of course produces output (that's its purpose), and you don't want to receive output from a program running in the background. The solution is to redirect the output:
(time rsync <(echo xxxx) >output.txt 2>error.txt) &
If you want to time something, you can use:
time sleep 3
If you want to time two things, you can do a compound statement like this (note semicolon after second sleep):
time { sleep 3; sleep 4; }
So, you can do this to time your echo (which will take no time at all) and your rsync:
time { echo "something" | rsync something ; }
If you want to do that in the background:
time { echo "something" | rsync something ; } &
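One caveat when doing that: the time keyword writes its report to the shell's standard error, so to capture the measurement in a file you have to put the time call inside a group (or subshell) whose stderr is redirected. A sketch, where rsync_time.log and the placeholder rsync command are just stand-ins:
{ time { echo "something" | rsync something ; } ; } 2> rsync_time.log &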
I need to do the following things to make sure my application server is up:
Tail a log file for a specific string
Remain blocked until that string is printed
However, if the string is not printed within about 20 minutes, quit and throw an exception message like "Server took more than 20 mins to be up"
If the string is printed in the log file, quit the loop and proceed.
Is there a way to include timeouts in a while loop?
#!/bin/bash
tail -f logfile | grep 'certain_word' | read -t 1200 dummy_var
[ $? -eq 0 ] && echo 'ok' || echo 'server not up'
This reads anything written to logfile, searches for certain_word, and echoes ok if all is good; otherwise, after waiting 1200 seconds (20 minutes), it complains.
You can do it like this:
start_time=$(date +"%s")
while true
do
elapsed_time=$(($(date +"%s") - $start_time))
if [[ "$elapsed_time" -gt 1200 ]]; then
break
fi
sleep 1
if [[ $(grep -c "specific string" /path/to/log/file.log) -ge 1 ]]; then
break
fi
done
You can use signal handlers from shell scripts (see http://www.ibm.com/developerworks/aix/library/au-usingtraps/index.html).
Basically, you'd define a handler to be called on, say, SIGUSR1, then put a sub-script in the background that will send that signal at some later time:
timeout() {
    # $1 is the PID to signal once the timeout expires
    sleep 1200
    kill -SIGUSR1 "$1"
}

watch_for_input() {
    tail -f file | grep item
}

trap 'echo "Not found"; exit' SIGUSR1

timeout $$ &
watch_for_input
Then, if you reach 1200 seconds, your handler is called and you can choose what to do (such as signalling the tail/grep combo that is watching for your pattern in order to kill it).
time=0
found=0
while [ $time -lt 1200 ]; do
out=$(tail logfile)
if [[ $out =~ specificString ]]; then
found=1
break;
fi
let time++
sleep 1
done
echo $found
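A possible follow-up (not in the original answer) to turn that flag into the message the question asks for:
if [ $found -eq 1 ]; then
    echo "ok"
else
    echo "Server took more than 20 mins to be up"
fi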
The accepted answer doesn't work and will never exit (because although read -t exits, the earlier commands in the pipe (tail -f | grep) will only be notified of read -t's exit when they try to write to their output, which never happens until the string matches).
A one-liner is probably feasible, but here are scripted (working) approaches.
The logic is the same for each one: they use kill to terminate the current script after the timeout.
Perl is probably more widely available than gawk or read -t.
#!/bin/bash
FILE="$1"
MATCH="$2"
# Uses read -t, kill after timeout
#tail -f "$FILE" | grep "$MATCH" | (read -t 1 a ; kill $$)
# Uses gawk read timeout ability (not available in awk)
#tail -f "$FILE" | grep "$MATCH" | gawk "BEGIN {PROCINFO[\"/dev/stdin\", \"READ_TIMEOUT\"] = 1000;getline < \"/dev/stdin\"; system(\"kill $$\")}"
# Uses perl & alarm signal
#tail -f "$FILE" | grep "$MATCH" | perl -e "\$SIG{ALRM} = sub { `kill $$`;exit; };alarm(1);<>;"
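Assuming the script above is saved as wait_for_match.sh and one of the three approaches is uncommented (the script name, log path and match string below are all placeholders), an invocation would look like:
./wait_for_match.sh /var/log/server.log "Server started"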