Curl Command to Repeat URL Request - linux

What's the syntax for a Linux command that hits a URL repeatedly, x number of times? I don't need to do anything with the data; I just need to replicate hitting refresh 20 times in a browser.

You could use URL sequence substitution with a dummy query string (if you want to use curl and save a few keystrokes). Quote the URL so the shell doesn't interpret the brackets:
curl "http://www.myurl.com/?[1-20]"
If you have other query strings in your URL, assign the sequence to a throwaway variable (the quotes also keep the shell from treating & as a background operator):
curl "http://www.myurl.com/?myVar=111&fakeVar=[1-20]"
Check out the URL section on the man page: https://curl.haxx.se/docs/manpage.html
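If you only care that the requests go out, a small sketch (same placeholder URL) that discards each response and prints just the HTTP status code and total time per request:
for i in $(seq 1 20); do curl -s -o /dev/null -w "%{http_code} %{time_total}\n" "http://www.myurl.com/"; done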

for i in `seq 1 20`; do curl http://url; done
Or if you want to get timing information back, use ab:
ab -n 20 http://url/

You might be interested in the Apache Bench tool (ab), which is basically used for simple load testing.
Example:
ab -n 500 -c 20 http://www.example.com/
-n = total number of requests, -c = number of concurrent requests

You can use any of the bash looping constructs, such as for, which work on both Linux and macOS.
https://tiswww.case.edu/php/chet/bash/bashref.html#Looping-Constructs
In your specific case you can define N iterations, where N is the number of curl executions you want (replace N with a literal number; brace expansion does not expand variables).
for n in {1..N}; do curl <arguments>; done
Example:
for n in {1..20}; do curl -d @notification.json -H 'Content-Type: application/json' localhost:3000/dispatcher/notify; done

If you want to add an interval before the next curl executes, you can add a sleep:
for i in {1..100}; do echo $i && curl "http://URL" >> /tmp/output.log && sleep 120; done

If you want to add a bit of delay between requests, you could use the watch command in Linux:
watch curl https://yourdomain.com/page
This will call your URL every two seconds (watch's default interval). Alter the interval by adding the -n parameter with the number of seconds. For instance:
watch -n0.5 curl https://yourdomain.com/page
This will now call the URL every half second.
Press Ctrl+C to exit watch.

Related

curl code: stop sending HTTP requests when a specific amount is reached?

target=${1:-http://web1.com}
while true                      # loop forever, until Ctrl+C is pressed
do
  for i in $(seq 10)            # perform the inner command 10 times
  do
    curl "$target" > /dev/null &   # send out a curl request; the & means don't wait for the response
  done
  wait                          # after the 10 requests are sent out, wait for them to finish before the next iteration
done
I want to load test my HTTP load balancer by sending multiple HTTP requests at once. I found this code, which sends 10 HTTP requests to web1.com at a time. However, I want the code to stop when it reaches 15000 requests, so in total the loop will run 1500 times.
Thank you for helping me.
Update:
I decided to use hey.
https://serverfault.com/a/1082007/861352
I use this command to run the requests:
hey -n 100 -q 2 -c 1 http://web1.com -A
where:
-n: stop after 100 requests
-q: limit the rate to 2 queries per second, to avoid flooding the server with requests all at once
-c: number of requests to send concurrently (1 here)
http://web1.com = my web server
-A = the Accept header; I got an error when removing -A
This works:
target=${1:-http://web1.com}
limit=1500
count=0
while true                      # loop forever, until Ctrl+C is pressed
do
  for i in $(seq 10)            # perform the inner command 10 times
  do
    count=$(expr $count + 1)    # increment the request counter
    curl -sk "$target" > /dev/null &   # send out a curl request; the & means don't wait for the response
  done
  wait                          # after the 10 requests are sent out, wait for them to finish before the next iteration
  if [ $count -ge $limit ]; then
    exit 0
  fi
done
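A simpler variant under the same assumptions (same placeholder target), using a bounded outer loop instead of a counter; adjust the iteration count to whatever total you need:
target=${1:-http://web1.com}
for n in $(seq 1 1500)          # 1500 batches of 10 requests = 15000 requests in total
do
  for i in $(seq 10)
  do
    curl -sk "$target" > /dev/null &   # fire each request in the background
  done
  wait                          # wait for the batch to finish before starting the next one
done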
Maybe what you're really looking for is a basic command-line stress-test tool.
For instance siege is particularly simple and easy to use:
siege --delay 1 --concurrent 10 --reps 1500 http://web1.com
will do pretty much what you're expecting...
introduction article about siege

Shell script: put the result of curl in a variable, followed by a sleep command [duplicate]

I want to trigger curl requests every 400 ms in a shell script, put the results in a variable, and after the curl requests finish (e.g. 10 requests) write all the results to a file. I use the following code for this purpose:
result="$(curl --location --request GET 'http://localhost:8087/say-hello')" & sleep 0.400;
Because & creates a new process, the result never reaches the variable. So what should I do?
You can use the -m curl option instead of the sleep.
-m, --max-time <seconds>
Maximum time in seconds that you allow the whole operation to take. This is useful for preventing your batch jobs from hanging for hours due to slow networks or links going down. See also the --connect-timeout option.
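A minimal sketch applying that, assuming the endpoint from the question and a results.txt output file (recent curl versions accept fractional seconds for --max-time):
for i in $(seq 1 10)
do
  result="$(curl -s -m 0.4 --location --request GET 'http://localhost:8087/say-hello')"
  echo "$result" >> results.txt   # collect each response in the output file
done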
The difference can be seen in the following sequence of commands:
a=1; a=$(echo 2) ; sleep 1; echo $a
2
and with a background process
a=1; a=$(echo 2) & sleep 1; echo $a
[1] 973
[1]+ Done a=$(echo 2)
1
Why is a not changed in the second case?
Actually it is changed... in a new environment. The & creates a new process with its own a, and that a is assigned the value 2. When the process finishes, the variable a of that subprocess is discarded and you are left with only the original value of a.
Depending on your requirements you might want to make a result directory, have every background curl process write to a different temp file, use wait until all the curls are finished, and then collect your results.
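A rough sketch of that approach, under the same assumptions (the endpoint and the 400 ms spacing come from the question; file names are placeholders):
resultdir=$(mktemp -d)            # directory for the per-request temp files
for i in $(seq 1 10)
do
  curl -s --location --request GET 'http://localhost:8087/say-hello' > "$resultdir/result.$i" &
  sleep 0.4                       # space the requests roughly 400 ms apart
done
wait                              # wait for all background curls to finish
cat "$resultdir"/result.* > results.txt   # collect everything into one file
rm -r "$resultdir"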

WGET - Simultaneous connections are SLOW

I use the following command to append the responses from a list of URLs to one output file:
wget -i /Applications/MAMP/htdocs/data/urls.txt -O - \
>> /Applications/MAMP/htdocs/data/export.txt
This works fine and when finished it says:
Total wall clock time: 1h 49m 32s
Downloaded: 9999 files, 3.5M in 0.3s (28.5 MB/s)
In order to speed this up I used:
cat /Applications/MAMP/htdocs/data/urls.txt | \
tr -d '\r' | \
xargs -P 10 $(which wget) -i - -O - \
>> /Applications/MAMP/htdocs/data/export.txt
Which opens simultaneous connections making it a little faster:
Total wall clock time: 1h 40m 10s
Downloaded: 3943 files, 8.5M in 0.3s (28.5 MB/s)
As you can see, it somehow omits more than half of the files and takes approximately the same time to finish. I cannot guess why. What I want to do here is download 10 files at once (parallel processing) using xargs, and jump to the next URL when STDOUT is finished. Am I missing something, or can this be done some other way?
On the other hand, can someone tell me what limit can be set for the number of connections? It would really help to know how many connections my machine can handle without slowing my system down too much or even causing some kind of system failure.
My API Rate-Limiting is as follows:
Number of requests per minute 100
Number of mapping jobs in a single request 100
Total number of mapping jobs per minute 10,000
Have you tried GNU Parallel? It will be something like this:
parallel -a /Applications/MAMP/htdocs/data/urls.txt wget -O - > result.txt
You can use this to see what it will do without actually doing anything:
parallel --dry-run ...
And either of these to see progress:
parallel --progress ...
parallel --bar ...
As your input file seems to be a bit of a mess, you can strip carriage returns like this:
tr -d '\r' < /Applications/MAMP/htdocs/data/urls.txt | parallel wget {} -O - > result.txt
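If you also need to stay under the 100-requests-per-minute limit mentioned above, GNU Parallel's --delay option spaces out job starts; a sketch under the same assumptions (same urls.txt, everything appended to result.txt):
tr -d '\r' < /Applications/MAMP/htdocs/data/urls.txt | parallel --delay 0.6 -j 10 wget -q {} -O - > result.txt
With --delay 0.6 at most one new wget starts every 0.6 seconds, i.e. roughly 100 per minute, while -j 10 caps how many run at once.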
A few things:
I don't think you need the tr, unless there's something weird about your input file. xargs expects one item per line.
man xargs advises you to "Use the -n option with -P; otherwise chances are that only one exec will be done."
You are using wget -i - telling wget to read URLs from stdin. But xargs will be supplying the URLs as parameters to wget.
To debug, substitute echo for wget and check how it's batching the parameters (see the example after the command below).
So this should work:
cat urls.txt | \
xargs --max-procs=10 --max-args=100 wget --output-document=-
(I've preferred long params - --max-procs is -P. --max-args is -n)
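As suggested above, a quick way to check the batching (with the same urls.txt) is to substitute echo for wget:
cat urls.txt | xargs --max-procs=10 --max-args=100 echo   # prints each batch of up to 100 URLs on one line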
See wget download with multiple simultaneous connections for alternative ways of doing the same thing, including GNU parallel and some dedicated multi-threading HTTP clients.
However, in most circumstances I would not expect parallelising to significantly increase your download rate.
In a typical use case, the bottleneck is likely to be your network link to the server. During a single-threaded download, you would expect to saturate the slowest link in that route. You may get very slight gains with two threads, because one thread can be downloading while the other is sending requests. But this will be a marginal gain.
So this approach is only likely to be worthwhile if you're fetching from multiple servers, and the slowest link in the route to some servers is not at the client end.

Run cURL command every 5 seconds

This is the command that I want to run:
curl --request POST --data-binary @payload.txt --header "carriots.apiKey:XXXXXXXXXXXXXXXXXXXX" --verbose http://api.carriots.com/streams
This basically sends a data stream to a server.
I want to run this command every 5 seconds. How do I achieve this?
You can run it in a while loop:
while sleep 5; do cmd; done
Edit:
If you don't want to use a while loop, you can use the watch command:
watch -n 5 cmd
Another simple way to accomplish the same task is:
watch -n {{seconds}} {{your-command}}
For example, every 900 milliseconds (note the double quotes around "your-command"):
watch -n 0.9 "curl 'https://public-cloud.abstratium.dev/user/site'"

How to run multithreading wget to make load test on REST API

I'd like to write a Linux bash script to load test a URL serving a REST API, specifically measuring the time spent. I'd like to run multiple wget threads and then evaluate the time spent once ALL the threads have terminated. But the following sh code doesn't calculate the time properly; it returns control without waiting for the threads to end. Could you help me? Thanks.
date > temps
for i in $(seq 1 10)
do
  wget -q --header "Content-Type: application/json" --post-file json.txt server &
done
date >> temps
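The timing is off because the script never waits for the background wget jobs before taking the second timestamp; a minimal sketch of a fix, keeping the same placeholder server and json.txt:
date > temps
for i in $(seq 1 10)
do
  wget -q --header "Content-Type: application/json" --post-file json.txt server &
done
wait                              # block until every background wget has finished
date >> temps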
I think you are looking for a benchmark utility. You may try these commands:
ab (Apache Benchmark)
siege
Both have the option for "concurrent connections" and detailed stats.
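For example, to post the same JSON body with ab (placeholders reused from the question; -p supplies the POST file and -T the content type):
ab -n 100 -c 10 -p json.txt -T 'application/json' http://server/
This sends 100 requests, 10 at a time, and reports per-request timing statistics.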
