curl: stop sending HTTP requests when they reach a specific amount? - linux

target=${1:-http://web1.com}
while true # loop forever, until Ctrl+C is pressed.
do
    for i in $(seq 10) # perform the inner command 10 times.
    do
        curl "$target" > /dev/null & # send out a curl request; the & means don't wait for the response.
    done
    wait # after the 10 requests are sent out, wait for their processes to finish before the next iteration.
done
I want to test my HTTP load balancing by sending multiple HTTP requests at one time. I found this code, which sends 10 HTTP requests to web1.com at a time. However, I want the code to stop when it reaches 15000 requests, so in total the outer loop will run 1500 times.
Thank you for helping me.
Update
I decided to use hey:
https://serverfault.com/a/1082007/861352
I use this command to run the requests:
hey -n 100 -q 2 -c 1 http://web1.com -A
where:
-n: stop after 100 requests have been sent
-q: rate limit of 2 queries per second (per worker), to avoid flooding the server with requests all at once
-c: concurrency of 1, i.e. send one request at a time
http://web1.com = my website
-A = Accept header; I get an error when removing -A
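For reference, the original goal of 15000 total requests sent 10 at a time can also be expressed directly in hey; a sketch (using the same placeholder host):
hey -n 15000 -c 10 http://web1.com
Here -n is the total number of requests to send and -c the number of concurrent workers.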

This works:
target=${1:-http://web1.com}
limit=1500 # stop after this many requests have been sent
count=0
while true # loop until the limit is reached.
do
    for i in $(seq 10) # perform the inner command 10 times.
    do
        count=$((count + 1)) # increment the request counter
        curl -sk "$target" > /dev/null & # send out a curl request; the & means don't wait for the response
    done
    wait # after the 10 requests are sent out, wait for their processes to finish before the next iteration.
    if [ "$count" -ge "$limit" ]; then
        exit 0
    fi
done
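An equivalent, slightly simpler structure drops the counter and just runs the outer loop a fixed number of times; a sketch (1500 batches of 10 requests = 15000 requests, adjust the numbers to taste):
target=${1:-http://web1.com}
for batch in $(seq 1500)   # 1500 batches
do
    for i in $(seq 10)     # 10 requests per batch
    do
        curl -sk "$target" > /dev/null &
    done
    wait                   # let each batch finish before starting the next
done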

Maybe what you're really looking for is a basic command-line stress-test tool.
For instance siege is particularly simple and easy to use:
siege --delay 1 --concurrent 10 --reps 1500 http://web1.com
will do pretty much what you're expecting...
See also: an introductory article about siege.

Related

shell script put result of curl in variable followed by sleep command [duplicate]

This question already has answers here: Assign variable in the background shell (2 answers). Closed 1 year ago.
I want to trigger curl requests every 400ms in a shell script, put the results in a variable, and after the curl requests finish (e.g. 10 requests) finally write all results to a file. I use the following code for this purpose:
result="$(curl --location --request GET 'http://localhost:8087/say-hello')" & sleep 0.400;
Because & creates a new process, the result cannot be captured. So what should I do?
You can use the -m curl option instead of the sleep.
-m, --max-time <seconds>
    Maximum time in seconds that you allow the whole operation to take. This is useful for preventing your batch jobs from hanging for hours due to slow networks or links going down. See also the --connect-timeout option.
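For example, a sketch using the URL from the question (-m 0.4 caps each request at 400ms while keeping curl in the foreground, so the assignment works):
result="$(curl -m 0.4 --location --request GET 'http://localhost:8087/say-hello')"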
The difference can be seen in the following sequence of commands:
a=1; a=$(echo 2) ; sleep 1; echo $a
2
and with a background process
a=1; a=$(echo 2) & sleep 1; echo $a
[1] 973
[1]+ Done a=$(echo 2)
1
Why is a not changed in the second case?
Actually it is changed... in a new environment. The & creates a new process with its own a, and that a is assigned the value 2. When that process finishes, the subprocess's variable a is gone and you only see the original value of a in the parent shell.
Depending on your requirements you might want to make a result directory, have every background curl process write to a different temp file, wait with wait until all the curls are finished, and then collect your results.
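A minimal sketch of that approach (the URL, interval and file names are placeholders taken from the question):
resultdir=$(mktemp -d)
for i in $(seq 10)
do
    # each background curl writes to its own temp file
    curl --location --request GET 'http://localhost:8087/say-hello' > "$resultdir/result_$i" 2>/dev/null &
    sleep 0.400
done
wait                                      # wait for all background curls to finish
cat "$resultdir"/result_* > results.txt   # collect everything into one file
rm -r "$resultdir"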

Need alert when the system is going down

I need a small alarm (an HTTP request or anything) when the server goes down. I checked many applications like Nagios, ServerCheck and so on... All these applications monitor only remote servers. I have only two servers to monitor, so if my server (10.172.65.124) goes down, can't it send an alarm itself? I don't want to maintain one more server just to monitor this. I am using RHEL 6 and CentOS 7. Any suggestions?
Here's a Python script that will serve the purpose. It uses sendmail to send your email, which requires running it from a Linux server that has sendmail enabled. Change the url to point to the URL you are monitoring; as written, the script checks stackoverflow.com.
It uses urllib to check the status code it receives when trying to load your URL. If it gets a status other than 200 from the HTTP request, it assumes the site is down.
To monitor your server you should run the script on a server or desktop that is independent of your web host, otherwise you won't be alerted when your server crashes for any number of reasons.
# Import time to sleep the script, urllib to load the site, and subprocess
# to run a process outside of the script (in this instance sendmail).
import time
import urllib
from email.mime.text import MIMEText
from subprocess import Popen, PIPE

# The url being monitored.
url = "http://www.stackoverflow.com"

# The contents of the email.
msg = MIMEText(url + " is not responding. Please investigate.")
msg["From"] = "me@youremail.com"
msg["To"] = "me@youremail.com"
msg["Subject"] = url + " is not responding"

# This loops while the script is running.
# It gets the status returned from the urllib call; if it's not 200 it emails the contents above.
while True:
    status = urllib.urlopen(url).getcode()  # Python 2 urllib
    if status != 200:
        # This is what sends the email. If you don't have sendmail then update this.
        p = Popen(["/usr/sbin/sendmail", "-t", "-oi"], stdin=PIPE)
        p.communicate(msg.as_string())
    # The number of seconds the loop will pause for before checking again. I set it to 60.
    time.sleep(60)
I would recommend creating a simple script to ping the machine (they can monitor each other) and, if the ping times out, send an email.
Something like this:
#!/bin/bash
SERVERIP="IP ADDRESS"         # the server to monitor
NOTIFYEMAIL=test@example.com
ping -c 3 $SERVERIP > /dev/null 2>&1
if [ $? -ne 0 ]
then
    # Use your favorite mailer here:
    mailx -s "Server $SERVERIP is down" -t "$NOTIFYEMAIL" < /dev/null
fi
As with the script above, you can write a plain bash script that monitors an HTTP request on the server (or any other service request), so that if it gets no reply you get a mail; a sketch of that idea follows below.
There are also web-service monitoring applications, free for a limited number of sites per user, that you can use instead, for example:
http://uptimerobot.com/
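A minimal sketch of such an HTTP check (the URL is a placeholder; mailx is used for the notification as in the ping example):
#!/bin/bash
URL="http://10.172.65.124/"       # the page to check (placeholder)
NOTIFYEMAIL=test@example.com
# -s: silent, -f: fail on HTTP errors, --max-time: don't hang forever
if ! curl -sf --max-time 10 "$URL" > /dev/null
then
    mailx -s "HTTP check for $URL failed" "$NOTIFYEMAIL" < /dev/null
fi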
The script below checks the operational state of a network interface; add more interfaces and whatever alerting you like.
#!/bin/bash
while true
do
    if [ "$(cat /sys/class/net/eth0/operstate)" != "up" ]; then
        : # send mail / log here
    fi
    sleep 1 # pause between checks so the loop doesn't spin at 100% CPU
done

Launch the same program with different arguments in parallel via bash

I have a program with very long computation times. I need to call it with different arguments. I want to run these on a server with a lot of processors, so I'd like to launch them in parallel in order to save time. (One program instance uses only one processor.)
I have tried my best to write a bash script which looks like this:
#!/bin/bash
# set maximal number of parallel jobs
MAXPAR=5

# fill the PID array with nonsense pid numbers
for (( PAR=1; PAR<=MAXPAR; PAR++ ))
do
    PID[$PAR]=-18
done

# loop over the arguments
for ARG in 50 60 70 90
do
    # endless loop that checks if one of the parallel jobs has finished
    while true
    do
        # check if PID[PAR] is still running, suppress error output of kill
        if ! kill -0 ${PID[PAR]} 2> /dev/null
        then
            # if PID[PAR] is not running, the next job
            # can run as parallel job number PAR
            break
        fi
        # if it is still running, check the next parallel job
        if [ $PAR -eq $MAXPAR ]
        then
            PAR=1
        else
            PAR=$[$PAR+1]
        fi
        # but sleep 10 seconds before going on
        sleep 10
    done
    # call to the actual program (here sleep for example)
    #./complicated_program $ARG &
    sleep $ARG &
    # get the pid of the process we just started and save it as PID[PAR]
    PID[$PAR]=$!
    # give some output, so we know where we are
    echo ARG=$ARG, par=$PAR, pid=${PID[PAR]}
done
Now, this script works, but I don't quite like it.
Is there any better way to deal with the beginning? (Setting PID[*]=-18 looks wrong to me)
How do I wait for the first job to finish without the ugly infinite loop and sleeping some seconds? I know there is wait, but I'm not sure how to use it here.
I'd be grateful for any comments on how to improve style and conciseness.
I have much more complicated code that, more or less, does the same thing.
The things you need to consider:
Does the user need to approve the spawning of a new thread?
Does the user need to approve the killing of an old thread?
Does the thread terminate on its own, or does it need to be killed?
Does the user want the script to run endlessly, as long as it has MAXPAR threads?
If so, does the user need an escape sequence to stop further spawning?
Here is some code for you:
spawn()                              #function that spawns a thread
{                                    #usage: spawn 1 ls -l
    i=$1                             #save the thread index
    shift 1                          #shift arguments to the left
    [ ${thread[$i]} -ne 0 ] &&       #if the thread is not already running
    [ ${#thread[@]} -lt $threads ] && #and if we didn't reach the maximum number of threads,
    "$@" &                           #run the thread in the background, with all the arguments
    thread[$i]=$!                    #associate the thread id with the thread index
}

terminate()                          #function that terminates threads
{                                    #usage: terminate 1
    [ your condition ] &&            #if your condition is met,
    kill ${thread[$1]} &&            #kill the thread and if so,
    thread[$1]=0                     #mark the thread as terminated
}
Now, the rest of the code depends on your needs (the things to consider above): you will either loop through the input arguments and call spawn, and then after some time loop through the thread indexes and call terminate. Or, if the threads end on their own, loop through the input arguments and call both spawn and terminate, but the condition for terminate is then:
ps aux 2>/dev/null | grep " ${thread[$i]} " &>/dev/null
#look for the thread id in the process list (note the spaces around the id)
Or something along those lines; you get the point.
Using the tips @theotherguy gave in the comments, I rewrote the script in a better way using the sem command that comes with GNU Parallel:
#!/bin/bash
# set maximal number of parallel jobs
MAXPAR=5

# loop over the arguments
for ARG in 50 60 70 90
do
    # call to the actual program (here sleep for example)
    # prefixed by sem -j $MAXPAR
    #sem -j $MAXPAR ./complicated_program $ARG
    sem -j $MAXPAR sleep $ARG
    # give some output, so we know where we are
    echo ARG=$ARG
done
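For completeness: when the whole argument list is known up front, GNU Parallel itself can do the same thing in a single invocation instead of a loop, e.g.:
# run at most 5 jobs at a time, one argument per job
parallel -j 5 sleep ::: 50 60 70 90
# or, for the real program:
# parallel -j 5 ./complicated_program ::: 50 60 70 90
GNU Parallel also provides sem --wait to block until all jobs started with sem have finished.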

Curl Command to Repeat URL Request

What's the syntax for a Linux command that hits a URL repeatedly, x number of times? I don't need to do anything with the data; I just need to replicate hitting refresh 20 times in a browser.
You could use URL sequence substitution with a dummy query string (if you want to use CURL and save a few keystrokes):
curl http://www.myurl.com/?[1-20]
If you have other query strings in your URL, assign the sequence to a throwaway variable:
curl "http://www.myurl.com/?myVar=111&fakeVar=[1-20]"
Check out the URL section on the man page: https://curl.haxx.se/docs/manpage.html
for i in `seq 1 20`; do curl http://url; done
Or if you want to get timing information back, use ab:
ab -n 20 http://url/
You might be interested in the Apache Bench (ab) tool, which is basically used to do simple load testing.
Example:
ab -n 500 -c 20 http://www.example.com/
-n = total number of requests, -c = number of concurrent requests
You can use any of the bash looping constructs, like for, which are compatible with Linux and Mac.
https://tiswww.case.edu/php/chet/bash/bashref.html#Looping-Constructs
In your specific case you can define N iterations, where N is the number of curl executions you want.
for n in {1..N}; do curl <arguments>; done
ex:
for n in {1..20}; do curl -d #notification.json -H 'Content-Type: application/json' localhost:3000/dispatcher/notify; done
If you want to add an interval before executing the next request, you can add a sleep:
for i in {1..100}; do echo $i && curl "http://URL" >> /tmp/output.log && sleep 120; done
If you want to add a bit of delay before each request, you could use the watch command in Linux:
watch curl https://yourdomain.com/page
This will call your URL every two seconds. Alter the interval by adding the -n parameter with the number of seconds to wait. For instance:
watch -n0.5 curl https://yourdomain.com/page
This will now call the URL every half second.
Press Ctrl+C to exit watch.

how to re-run the "curl" command automatically when the error occurs

Sometimes when I execute a bash script with the curl command to upload some files to my FTP server, it returns an error like:
56 response reading failed
and I have to find the failed line and re-run it manually, after which it is OK.
I'm wondering whether it could be re-run automatically when the error occurs.
My script is like this:
#there are some files(A,B,C,D,E) in my to_upload directory,
# which I'm trying to upload to my ftp server with curl command
for files in `ls` ;
do curl -T $files ftp.myserver.com --user ID:pw ;
done
But sometimes A, B, C are uploaded successfully and only D is left with an "error 56", so I have to rerun the curl command manually. Besides, as Will Bickford said, I prefer that no confirmation be required, because I'm always asleep at the time the script is running. :)
Here's a bash snippet I use to perform exponential back-off:
# Retries a command a configurable number of times with backoff.
#
# The retry count is given by ATTEMPTS (default 5), the initial backoff
# timeout is given by TIMEOUT in seconds (default 1.)
#
# Successive backoffs double the timeout.
function with_backoff {
    local max_attempts=${ATTEMPTS-5}
    local timeout=${TIMEOUT-1}
    local attempt=1
    local exitCode=0

    while (( $attempt < $max_attempts ))
    do
        if "$@"
        then
            return 0
        else
            exitCode=$?
        fi

        echo "Failure! Retrying in $timeout.." 1>&2
        sleep $timeout
        attempt=$(( attempt + 1 ))
        timeout=$(( timeout * 2 ))
    done

    if [[ $exitCode != 0 ]]
    then
        echo "You've failed me for the last time! ($@)" 1>&2
    fi

    return $exitCode
}
Then use it in conjunction with any command that properly sets a failing exit code:
with_backoff curl 'http://monkeyfeathers.example.com/'
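Applied to the upload loop from the question, that might look like this (a sketch; curl already returns a non-zero exit code, such as 56, when an FTP transfer fails, which is what with_backoff checks for):
for files in *
do
    with_backoff curl -T "$files" ftp.myserver.com --user ID:pw
done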
Perhaps this will help. It will try the command, and if it fails, it will tell you and pause, giving you a chance to fix run-my-script.
COMMAND=./run-my-script.sh
until $COMMAND; do
    read -p "command failed, fix and hit enter to try again."
done
I have faced a similar problem where I need to make contact with servers using curl that are in the process of starting up and haven't started up yet, or services that are temporarily unavailable for whatever reason. The scripting was getting out of hand, so I made a dedicated retry tool that will retry a command until it succeeds:
#there are some files(A,B,C,D,E) in my to_upload directory,
# which I'm trying to upload to my ftp server with curl command
for files in `ls` ;
do retry curl -f -T $files ftp.myserver.com --user ID:pw ;
done
The curl command has the -f option, which makes curl return exit code 22 when the request fails.
By default the retry tool will run the curl command over and over, forever, until the command returns status zero, backing off for 10 seconds between retries. In addition, retry reads from stdin once and only once, writes to stdout once and only once, and writes all stdout to stderr if the command fails.
Retry is available from here: https://github.com/minfrin/retry
