I run a bash script that loops over every line in a text file and cURLs the site listed on each line.
Here is my script:
SECRET_KEY='zuhahaha'
FILE_NAME=""
case "$1" in
"sma")
FILE_NAME="sma.txt"
;;
"smk")
FILE_NAME="smk.txt"
;;
"smp")
FILE_NAME="smp.txt"
;;
"sd")
FILE_NAME="sd.txt"
;;
*)
echo "not in case !"
;;
esac
function save_log()
{
printf '%s\n' \
"Header Code : $1" \
"Executed at : $(date)" \
"Response Body : $2" \
"====================================================================================================="$'\r\n\n' >> output.log
}
while IFS= read -r line;
do
HTTP_RESPONSE=$(curl -L -s -w "HTTPSTATUS:%{http_code}\\n" -H "X-Gitlab-Event: Push Hook" -H 'X-Gitlab-Token: '$SECRET_KEY --insecure $line 2>&1) &
HTTP_BODY=$(echo $HTTP_RESPONSE | sed -e 's/HTTPSTATUS\:.*//g') &
HTTP_STATUS=$(echo $HTTP_RESPONSE | tr -d '\n' | sed -e 's/.*HTTPSTATUS://') &
save_log "$HTTP_STATUS" "$HTTP_BODY" &
done < $FILE_NAME
How can I run this with threads, or otherwise make the loop faster, in bash?
You should be able to do this relatively easily. Don't try to background each command; instead, put the body of your while loop into a subshell and background that. That way, your commands (which clearly depend on each other) run sequentially, but all the lines in the file can be processed in parallel.
while IFS= read -r line;
do
(
HTTP_RESPONSE=$(curl -L -s -w "HTTPSTATUS:%{http_code}\\n" -H "X-Gitlab-Event: Push Hook" -H 'X-Gitlab-Token: '$SECRET_KEY --insecure $line 2>&1)
HTTP_BODY=$(echo $HTTP_RESPONSE | sed -e 's/HTTPSTATUS\:.*//g')
HTTP_STATUS=$(echo $HTTP_RESPONSE | tr -d '\n' | sed -e 's/.*HTTPSTATUS://')
save_log "$HTTP_STATUS" "$HTTP_BODY" ) &
done < $FILE_NAME
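If the URL list is long, you may also want to wait for the background jobs before the script exits and cap how many run at once. A rough sketch of that idea follows; the 30-job cap, the wait -n throttle (bash 4.3+), and the parameter expansions that replace the sed/tr post-processing are assumptions of mine, not part of the original script:

MAX_JOBS=30
while IFS= read -r line; do
    (
        HTTP_RESPONSE=$(curl -L -s -w "HTTPSTATUS:%{http_code}" -H "X-Gitlab-Event: Push Hook" -H "X-Gitlab-Token: $SECRET_KEY" --insecure "$line" 2>&1)
        HTTP_BODY=${HTTP_RESPONSE%HTTPSTATUS:*}      # everything before the status marker
        HTTP_STATUS=${HTTP_RESPONSE##*HTTPSTATUS:}   # everything after the status marker
        save_log "$HTTP_STATUS" "$HTTP_BODY"
    ) &
    # Throttle: once MAX_JOBS requests are in flight, wait for one to finish.
    while [ "$(jobs -rp | wc -l)" -ge "$MAX_JOBS" ]; do
        wait -n
    done
done < "$FILE_NAME"
wait   # let the remaining requests finish before the script exits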
My favourite way to do this is to generate a file that lists all the commands you wish to perform. If you have a script that performs your operations, create a file like:
$ cat commands.txt
echo 1
echo 2
echo $[12+3]
....
For example, this could be hundreds of commands long.
To execute the lines in parallel, use the parallel command with, say, at most 3 jobs running in parallel at any time.
$ cat commands.txt | parallel -j 3
1
2
15
For your curl example, you could generate thousands of curl commands and execute them, say, 30 in parallel at any one time.
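Applied to the script above, a rough sketch (the commands.txt file name and the 30-job limit are just examples; the HTTPSTATUS/save_log post-processing from the original script is omitted for brevity) could look like:

# Build one curl command per URL from the question's $FILE_NAME, then run 30 at a time.
while IFS= read -r url; do
    printf 'curl -L -s -H "X-Gitlab-Event: Push Hook" -H "X-Gitlab-Token: %s" --insecure %q\n' "$SECRET_KEY" "$url"
done < "$FILE_NAME" > commands.txt
parallel -j 30 < commands.txt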
Related
I need to limit concurrent SSH/Dropbear Tunnel connections to 1 login per user.
I have a script that takes care of that.
But it doesn't work for me because when there are many users it becomes saturated and it takes a long time to kick the users.
Another problem with this script is that if the user logs out and logs back in it is detected as multilogin.
Maxlogins and MaxSessions do not work on Dropbear.
Below is the script I am using:
#!/bin/bash
# This script locates all users who have multiple active dropbear
# processes and kills processes in excess of one for each user.
if [ "$EUID" -ne 0 ]; then
printf "Please run as root.\n"
exit
fi
IFS=+
while true; do
PIDFILE=$(mktemp)
AUTHFILE=$(mktemp)
USERS=$(mktemp)
ps aux | grep dropbear | grep -v grep | awk 'BEGIN{} {print $2}' > $PIDFILE
journalctl -r | grep dropbear | grep auth > $AUTHFILE
while read LINE; do
USER=$(printf "%s" $LINE | sed "s/^.* '//" | sed "s/'.*$//" -)
PID=$(printf "%s" $LINE | sed "s/^.*\[//" | sed "s/].*$//" -)
if grep -Fxq $(printf "%s" $USER) $USERS; then
:
else
printf "%s\n" $USER >> $USERS
fi
USERFILE=$(printf "/tmp/%s" $USER)
if [ ! -f $USERFILE ]; then
touch $USERFILE
fi
if grep -Fxq $(printf "%s" $PID) $PIDFILE; then
printf "%s\n" $PID >> $USERFILE
else
:
fi
done < $AUTHFILE
while read USER; do
i=1
while read PID; do
if [ $i -gt 1 ]; then
printf "Kill PID %s of user %s\n" $PID $USER
kill -9 $(printf "%s" $PID)
curl -k "https://redesprivadasvirtuales.com/modules/servers/openvpn/vega.php?secret=DD8sPD&user=$USER"
else
:
fi
((i++))
done < $(printf "/tmp/%s" $USER)
rm $(printf "/tmp/%s" $USER)
done < $USERS
rm $PIDFILE
rm $AUTHFILE
rm $USERS
done
Suggestions:
journalctl -r is very expensive. Limit journalctl to the time since the last search.
In the lines with USER=$(...) and PID=$(...), replace the printf and sed commands with a single awk command.
Research the pgrep and pkill commands.
Replace the files $PIDFILE, $AUTHFILE and $USERS with array variables (research the readarray command); see the sketch after these suggestions.
The while loop over $AUTHFILE could be implemented as a loop over a bash array.
The while loop over $USERS (including the internal loop) could be implemented as a loop over a bash array.
The curl command might be very expensive, and you do not check the response from each curl request. Run curl in the background and, if possible, in parallel for all users.
Kind SO members could assist more if you put sample lines from $AUTHFILE in the question as sample input.
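A minimal sketch of the journalctl and readarray suggestions (the 2-minute window is an assumption, and the sed extraction is reused unchanged from your script, since no sample $AUTHFILE lines were posted):

# Read only recent journal entries and the current dropbear PIDs into arrays.
readarray -t AUTH_LINES < <(journalctl --since "2 minutes ago" | grep dropbear | grep auth)
readarray -t PIDS < <(pgrep dropbear)
for LINE in "${AUTH_LINES[@]}"; do
    USER=$(printf '%s' "$LINE" | sed "s/^.* '//; s/'.*$//")   # user name between the quotes
    PID=$(printf '%s' "$LINE" | sed "s/^.*\[//; s/].*$//")    # PID between the brackets
    if printf '%s\n' "${PIDS[@]}" | grep -Fxq "$PID"; then
        printf 'active session: user=%s pid=%s\n' "$USER" "$PID"
    fi
done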
I am not familiar with this platform, so if this is in the frozen section my apologies :P
I am working on upgrading a Raspberry Pi script project. It decodes NOAA APT satellite images, and it runs from the at scheduler (I think) and a set of scripts. The scripts are used to start recordings and do automatic processing.
I have been having some problems and am trying to get a log of what is processed by the scripts; there are 3 of them. I have tried adding something like ...) >> log.txt to the files, but the logs are always empty.
I can't call them as sh -x script.sh >> log.txt because they are scheduled to trigger at different times and it would be a pain to replace all the calls.
Ideally I would like something I could add at the end of each script to log all the things it processes and put them in their own log files (script1.log, script2.log, script3.log).
Thanks!!
Jake
Edit: I was advised to post the scripts. These are not "mine"; I got them off of an Instructable and made some changes to fit my needs, and I would rather not screw them up more than I have. Ideally I would like something I could put after the #!/bin/bash line that would log all of the commands processed by the script.
Thanks!
Script 1, the main scheduling script. Some lines have been commented out because I don't use NOAA 15 or Meteor M2.
#!/bin/bash
# Update Satellite Information
wget -qr https://www.celestrak.com/NORAD/elements/weather.txt -O /home/pi/weather/predict/weather.txt
#grep "NOAA 15" /home/pi/weather/predict/weather.txt -A 2 > /home/pi/weather/predict/weather.tle
grep "NOAA 18" /home/pi/weather/predict/weather.txt -A 2 >> /home/pi/weather/predict/weather.tle
grep "NOAA 19" /home/pi/weather/predict/weather.txt -A 2 >> /home/pi/weather/predict/weather.tle
#grep "METEOR-M 2" /home/pi/weather/predict/weather.txt -A 2 >> /home/pi/weather/predict/weather.tle
#Remove all AT jobs
for i in `atq | awk '{print $1}'`;do atrm $i;done
#Schedule Satellite Passes:
/home/pi/weather/predict/schedule_satellite.sh "NOAA 19" 137.1000
/home/pi/weather/predict/schedule_satellite.sh "NOAA 18" 137.9125
#/home/pi/weather/predict/schedule_satellite.sh "NOAA 15" 137.6200
Script 2, the individual satellite scheduler. It uses information from the first script to find times when the satellite is passing overhead.
#!/bin/bash
PREDICTION_START=`/usr/bin/predict -t /home/pi/weather/predict/weather.tle -p "${1}" | head -1`
PREDICTION_END=`/usr/bin/predict -t /home/pi/weather/predict/weather.tle -p "${1}" | tail -1`
var2=`echo $PREDICTION_END | cut -d " " -f 1`
MAXELEV=`/usr/bin/predict -t /home/pi/weather/predict/weather.tle -p "${1}" | awk -v max=0 '{if($5>max){max=$5}}END{print max}'`
while [ `date --date="TZ=\"UTC\" #${var2}" +%D` == `date +%D` ]; do
START_TIME=`echo $PREDICTION_START | cut -d " " -f 3-4`
var1=`echo $PREDICTION_START | cut -d " " -f 1`
var3=`echo $START_TIME | cut -d " " -f 2 | cut -d ":" -f 3`
TIMER=`expr $var2 - $var1 + $var3`
OUTDATE=`date --date="TZ=\"UTC\" $START_TIME" +%Y%m%d-%H%M%S`
if [ $MAXELEV -gt 28 ]
then
echo ${1//" "}${OUTDATE} $MAXELEV
echo "/home/pi/weather/predict/receive_and_process_satellite.sh \"${1}\" $2 /home/pi/weather/${1//" "}${OUTDATE} /home/pi/weather/predict/weather.tle $var1 $TIMER" | at `date --date="TZ=\"UTC\" $START_TIME" +"%H:%M %D"`
fi
nextpredict=`expr $var2 + 60`
PREDICTION_START=`/usr/bin/predict -t /home/pi/weather/predict/weather.tle -p "${1}" $nextpredict | head -1`
PREDICTION_END=`/usr/bin/predict -t /home/pi/weather/predict/weather.tle -p "${1}" $nextpredict | tail -1`
MAXELEV=`/usr/bin/predict -t /home/pi/weather/predict/weather.tle -p "${1}" $nextpredict | awk -v max=0 '{if($5>max){max=$5}}END{print max}'`
var2=`echo $PREDICTION_END | cut -d " " -f 1`
done
The final script takes care of recording the audio from the satellite at the specified frequency (corrected for Doppler shift), auto-decodes/processes it, and posts it to my archive and web server.
#!/bin/bash
# $1 = Satellite Name
# $2 = Frequency
# $3 = FileName base
# $4 = TLE File
# $5 = EPOC start time
# $6 = Time to capture
sudo timeout $6 rtl_fm -f ${2}M -s 60k -g 45 -p 55 -E wav -E deemp -F 9 - | sox -t wav - $3.wav rate 11025
#pass start 150 was 90
PassStart=`expr $5 + 150`
if [ -e $3.wav ]
then
/usr/local/bin/wxmap -T "${1}" -H $4 -p 0 -l 0 -o $PassStart ${3}-map.png
/usr/local/bin/wxtoimg -m ${3}-map.png -e ZA $3.wav ${3}.png
/usr/local/bin/wxtoimg -m ${3}-map.png -e NO $3.wav ${3}.NO.png
/usr/local/bin/wxtoimg -m ${3}-map.png -e MSA $3.wav ${3}.MSA.png
/usr/local/bin/wxtoimg -m ${3}-map.png -e MCIR $3.wav ${3}.MCIR.png
/usr/local/bin/wxtoimg -m ${3}-map.png -e MSA-PRECIP $3.wav ${3}.MSA-PRECIP.png
/usr/local/bin/wxtoimg -m ${3}-map.png -e EC $3.wav ${3}.EC.png
/usr/local/bin/wxtoimg -m ${3}-map.png -e HVCT $3.wav ${3}.HVCT.png
/usr/local/bin/wxtoimg -m ${3}-map.png -e CC $3.wav ${3}.CC.png
/usr/local/bin/wxtoimg -m ${3}-map.png -e SEA $3.wav ${3}.SEA.png
fi
NOW=$(date +%m-%d-%Y_%H-%M)
mkdir /home/pi/weather/Pictures/${NOW}
sudo cp /home/pi/weather/*.png /home/pi/weather/Pictures/${NOW}/ #move pictures to date folder in pi/pictures
sudo mv /var/www/html/APT_Pictures/PREVIOUS/* /var/www/html/APT_Pictures/ARCHIVE #move previous to archive
sudo mv /var/www/html/APT_Pictures/LATEST/* /var/www/html/APT_Pictures/PREVIOUS #move latest pictures to previous folder
sudo cp /home/pi/weather/Pictures/${NOW} /var/www/html/APT_Pictures/LATEST -r #copys date folder to latest
sudo cp /home/pi/weather/*-map.png home/pi/weather/Pictures/${NOW}/ #copys map to archive folder
##sudo mv /home/pi/weather/Pictures/${NOW}/*-map.png /home/pi/weather/maps #moves map from /pi/pictures date to maps folder
sudo rm /home/pi/weather/*.png #removes pictures from weather folder
sudo mv /home/pi/weather/*.wav /home/pi/weather/audio #moves audio to audio folder
Perhaps the scripts are outputting their status messages to stderr instead of stdout (which is all your ...) >> log.txt method would have captured)?
Here's how I'd capture stdout and stderr for debugging purposes.
$ /bin/bash script1.sh 1>>script1_stdout.log 2>>script1_stderr.log
$ /bin/bash script2.sh 1>>script2_stdout.log 2>>script2_stderr.log
$ /bin/bash script3.sh 1>>script3_stdout.log 2>>script3_stderr.log
Or combine the two streams into a single log file:
$ /bin/bash script1.sh 1>>script1.log 2>&1
$ /bin/bash script2.sh 1>>script2.log 2>&1
$ /bin/bash script3.sh 1>>script3.log 2>&1
The "1" in 1>> refers to stdout and the "2" in 2>> refers to stderr.
Edit: If you want to continue to see the stdout/stderr messages and still write them to a file, use tee as described here. tee prints the stdin it receives and writes a copy to the file path provided.
$ /bin/bash script1.sh 2>&1 | tee script1.log
$ /bin/bash script2.sh 2>&1 | tee script2.log
$ /bin/bash script3.sh 2>&1 | tee script3.log
Reference about stdout and stderr.
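Since you mentioned you would ideally like something you could put right after the #!/bin/bash line, one common pattern (a sketch only; the log path is a made-up example, so try it on a copy of the script first) is to redirect the script's own output with exec and optionally enable command tracing:

#!/bin/bash
# Append everything this script prints (stdout and stderr) to its own log file.
exec >> /home/pi/weather/script1.log 2>&1
# Optional: also log each command as it runs, similar to sh -x.
set -x

Use a different log file name in each of the three scripts.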
I have a Unix shell script running every hour (via crontab on CentOS 7).
Inside that script, a loop reads and processes every new file found in a defined folder.
At the end of each file's processing, a curl command is sent with some parameters, for example:
curl https://aaaaaa.com/website -d param1=value1 -d param2=value2 ....
Each time the script is run by crontab, the first curl is correctly converted to a true URL and received by Apache/Tomcat, but all the others are bad. In fact, the second and following curls do not seem to be converted into the correct format, like
https://aaaaaa.com/website?param1=value1&param2=value2
but they are sent like
https://aaaaaa.com/website -d param1=value1 -d param2=value2
So the website is unable to handle the parameters properly.
Why is the first command correctly converted to a proper URL format but not the following ones?
EDIT - EDIT
The relevant part of the script:
#!/bin/bash
...
#======================================================
# FUNCTIONS
#======================================================
UpdateStatus () {
CMD_CURL="${URL_WEBSITE} -d client=CLIENT -d site=TEST -d produit=MEDIASFILES -d action=update"
CMD_CURL="${CMD_CURL} -d codecmd=UPDATE_MEDIA_STATUS"
CMD_CURL="${CMD_CURL} -d idmedia=$4"
CMD_CURL="${CMD_CURL} -d idbatch=$3"
CMD_CURL="${CMD_CURL} -d statusmedia=$2"
if [[ ! -z "$5" ]]; then
CMD_CURL="${CMD_CURL} -d filename=$5"
fi
echo " ${CMD_CURL}" >> $1
CURL_RESULT=`curl -k ${CMD_CURL}`
CURL_RESULT=`echo ${CURL_RESULT} | tr -d ' '`
echo " Result CURL = ${CURL_RESULT}" >> $1
if [ "${CURL_RESULT}" = "OK" ]; then
return 0
fi
return 1
}
#======================================================
# MAIN PROGRAM
#======================================================
echo "----- Batch in progress : `date '+%d/%m/%y - %H:%M:%S'` -----"
for file in $( ls ${DIR_FACTORY_BATCHFILES}/*.batch )
do
...
old_IFS=$IFS
while IFS=';' read <&3 F_STATUS F_FILEIN F_TYPE F_CODE F_ID F_IDPARENT F_TAGID3 F_PROF F_YEARMEDIA F_DATECOURS F_TIMEBEGINCOURS F_LANG || [[ -n "$F_STATUS $F_FILEIN $F_TYPE $F_CODE $F_ID $F_IDPARENT $F_TAGID3 $F_PROF $F_YEARMEDIA $F_DATECOURS $F_TIMEBEGINCOURS $F_LANG" && $F_STATUS ]];
do
...
UpdateStatus ${LOG_FILENAME} ${STATUS_ERROR} ${F_ID} ${F_IDPARENT}
...
done 3< $file
IFS=$Old_IFS
...
done
You need to provide the "-d" flags and values before the URL so:
curl -d param1=value1 -d param2=value2 https://aaaaaa.com/website
Moreover, this command is going to send the parameters/values as POST parameters, not query parameters. You can use the "-G" flag, possibly combined with "--data-urlencode", to send them as query parameters; see:
https://unix.stackexchange.com/questions/86729/any-way-to-encode-the-url-in-curl-command
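A minimal sketch of that suggestion (the URL and parameter names are placeholders taken from the question):

# -G appends the values as ?param1=value1&param2=value2 instead of sending a POST body.
curl -k -G "https://aaaaaa.com/website" \
     --data-urlencode "param1=value1" \
     --data-urlencode "param2=value2"

Note that -G sends the values as a query string rather than as a POST body.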
Hi, I'm making a script to run an rsync process. The sysadmin has created the rsync script; when it runs, it asks you to select options, so I want to create a wrapper script that passes those answers in and runs it from cron.
The list of directories to rsync is taken from a file.
filelist=$(cat filelist.txt)
for i in $filelist;do
echo -e "3\nY" | ./rsync.sh $i
#This will create a rsync log file
So I check a value in the log file; if it is empty I move on to the next file. If the file is not empty, I have to start the rsync process as below, which will take more than 2 hours.
if [ a != 0 ];then
echo -e "3\nN" | ./rsync.sh $i
The above rsync process needs to be sent to the background so the loop can move on to the next file. I checked the screen command, but screen is not working on the server. I also need to get the duration the process takes and write it to the log; when I use the time command I am unable to pipe in the echo output. I appreciate any suggestions to accomplish this task.
Questions
1. How can I send piped input through the time command?
echo -e "3\nY" | time ./rsync.sh $i
The above does not work.
2. How can I send the rsync to the background and move on to the next file while the previous rsync process is still running?
Full Code
#!/bin/bash
filelist=$(cat filelist.txt)
Lpath=/opt/sas/sas_control/scripts/Logs/rsync_logs
date=$(date +"%m-%d-%Y")
timelog="time_result/rsync_time.log-$date"
for i in $filelist;do
#echo $i
b_i=$(basename $i)
echo $b_i
echo -e "3\nY" | ./rsync.sh $i
f=$(cat $Lpath/$(ls -tr $Lpath| grep rsync-dry-run-$b_i | tail -1) | grep 'transferred:' | cut -d':' -f2)
echo $f
if [ $f != 0 ]; then
#date=$(date +"%D : %r")
start_time=`date +%s`
echo "$b_i-start:$start_time" >> $timelog
#time ./rsync.sh $i < echo -e "3\nY" 2> "./time_result/$b_i-$date" &
time { echo -e "3\nY" | ./rsync.sh $i; } 2> "./time_result/$b_i-$date"
end_time=`date +%s`
s_time=$(cat $timelog|grep "$b_i-start" |cut -d ':' -f2)
duration=$(($end_time-$s_time))
echo "$b_i duration:$duration" >> $timelog
fi
done
Your question is not very clear, but I'll try:
(1) If I understand you correctly, you want to time the rsync.
My first attempt would be to use echo xxxx | time rsync. In my bash, however, this was broken (or not supposed to work?). I normally use Zsh instead of bash, and in zsh this indeed runs fine.
If it is important for you to use bash, an alternative (since the time for the echo can likely be neglected) would be to time the whole pipe, i.e. time (echo xxxx | rsync), or even simpler time rsync < <(echo xxxx).
(2) To send a process to the background, add an & to the line. However, the time command of course produces output (that's its purpose), and you don't want to receive output from a program running in the background. The solution is to redirect the output:
(time rsync < <(echo xxxx) > output.txt 2> error.txt) &
If you want to time something, you can use:
time sleep 3
If you want to time two things, you can do a compound statement like this (note semicolon after second sleep):
time { sleep 3; sleep 4; }
So, you can do this to time your echo (which will take no time at all) and your rsync:
time { echo "something" | rsync something ; }
If you want to do that in the background:
time { echo "something" | rsync something ; } &
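Putting that together with the loop from your full code, a rough sketch (the subshell around time, the &, and the final wait are my additions; everything else is taken from your script):

for i in $filelist; do
    b_i=$(basename $i)
    # Time each rsync, capture the timing output in its own file, and run it in the
    # background so the loop immediately moves on to the next file.
    ( time { echo -e "3\nY" | ./rsync.sh "$i"; } ) 2> "./time_result/$b_i-$date" &
done
wait   # block here until every background rsync has finished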
I have this
#! /bin/bash
cd ~
hostname=`hostname`
cat /opt/ip.txt | while read line;
do
# do something with $line here
RES=`ping -c 2 -q $line | grep "packet loss"`
echo "---" >> /opt/os-$hostname.txt
echo "---"
echo "$line $RES" >> /opt/os-$hostname.txt
echo "$line $RES"
done
How can I make the script multi-threaded? I would like to speed up its performance.
You can use the <(...) notation for starting a subprocess and then cat all the outputs together:
myping() {
ping -c 2 -q "$1" | grep "packet loss"
}
cat <(myping hostname1) <(myping hostname2) ...
To use a loop for this, you will need to build the command first:
cat /opt/ip.txt | {
command='cat'
while read line
do
command="$command "'<'"(myping $line)"
done
eval "$command"
}
If you really want the delimiting --- of your original, I propose adding an echo "---" to the myping function.
If you want to append the output to a file as well, use tee:
eval "$command" | tee -a /opt/os-$hostname.txt
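A complete usage sketch tying those pieces together with the original script (the only change of mine is that myping also echoes the host in front of the ping summary, to match the original "$line $RES" output):

#!/bin/bash
hostname=$(hostname)

myping() {
    echo "---"
    echo "$1 $(ping -c 2 -q "$1" | grep 'packet loss')"
}

{
    command='cat'
    while read -r line; do
        command="$command "'<'"(myping $line)"
    done
    eval "$command"
} < /opt/ip.txt | tee -a "/opt/os-$hostname.txt"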