How to run multithreaded wget to load test a REST API

I'd like to write a Linux bash script to load test a URL serving a REST API, specifically measuring the time spent. I'd like to run several wget processes in parallel, then evaluate the elapsed time once ALL of them have terminated. But the following sh code doesn't calculate the time properly; it returns control without waiting for the background processes to finish. Could you help me? Thanks.
date > temps
for i in $(seq 1 10)
do
wget -q --header "Content-Type: application/json" --post-file=json.txt server &
done
date >> temps

I think you are looking for a benchmark utility. You may try these commands:
ab (Apache Benchmark)
siege
Both have the option for "concurrent connections" and detailed stats.
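For what it's worth, the script in the question returns immediately because every wget is sent to the background with & and nothing waits for them; a minimal sketch of the fix (same placeholder json.txt payload and server URL as in the question):

date > temps
for i in $(seq 1 10)
do
wget -q --header "Content-Type: application/json" --post-file=json.txt server &
done
wait   # block until ALL background wget processes have finished
date >> temps

With ab, the equivalent POST load test would be something along these lines (assuming the same JSON payload):

ab -n 10 -c 10 -p json.txt -T application/json http://server/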

Related

Bash script results in different output when running from a cron job

I'm puzzled by this problem I'm having on Ubuntu 20.04, where cron is able to run a bash script but the overall outcome is different than when running it from the shell.
I've looked through all the questions I could find here and on Google, but couldn't find anyone who had the same problem.
Background:
I'm using Pushgateway to store metrics I'm generating through a bash script, and afterwards it's being imported automatically to Prometheus.
The end goal is to export a list of running processes, their CPU%, Mem% etc, similar to top command.
This is the bash script:
#!/bin/bash
z=$(top -n 1 -bi)
while read -r z
do
var=$var$(awk 'FNR>7{print "cpu_usage{process=\""$12"\", pid=\""$1"\"}", $9z} FNR>7{print "memory_usage{process=\""$12"\", pid=\""$1"\"}", $10z}')
done <<< "$z"
curl -X POST -H "Content-Type: text/plain" --data "$var
" http://localhost:9091/metrics/job/top/instance/machine
I used to have a version that used ps aux, but then I found out that it only shows the average CPU% per process.
As you can see, the command I'm running is top -n 1 -bi, which gives me a snapshot of active processes and their metrics.
I'm using awk to format the data, and FNR>7 because I need to ignore the first 7 lines, which are the summary presented by top.
The bash script is registered in /bin, /usr/bin and /usr/local/bin.
When checking http://localhost:9091/metrics, which is supposed to show me the information gathered, I'm getting this kind of information when running the script from the shell:
cpu_usage{instance="machine",job="top",pid="114468",process="php-fpm74"} 17.6
cpu_usage{instance="machine",job="top",pid="114483",process="php-fpm74"} 11.8
cpu_usage{instance="machine",job="top",pid="126305",process="ffmpeg"} 64.7
And this is the same information when cron is running the same script:
cpu_usage{instance="machine",job="top",pid="114483",process="php-fpm+"} 5
cpu_usage{instance="machine",job="top",pid="126305",process="ffmpeg"} 60
cpu_usage{instance="machine",job="top",pid="128777",process="php"} 15
So, for some reason, when I run it from cron it cuts the process name after 7 characters.
I initially thought it was related to the FNR>7, but even after changing it to 8 or 9 (and using exec bash to re-register the command) it gives the same results; when I run it manually it works just fine.
Any help would be appreciated!!
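One likely explanation (not confirmed in this thread): in batch mode, top truncates each output line to the available width, and under cron there is no terminal, so it falls back to 80 columns and cuts the COMMAND field, marking the truncation with a trailing "+" (as in php-fpm+ above). If this is procps-ng top, which supports -w (output width override), forcing a wider output in the script may help; a sketch:

# assumes procps-ng top; -w widens the output so COMMAND is not truncated
z=$(top -n 1 -bi -w 512)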

Cron / wget jobs intermittently not running - not getting into access log

I've a number of accounts running cron-started php jobs hourly.
The generic structure of the command is this:
wget -q -O - http://some.site.com/cron.php
Now, this used to be running just fine.
Lately, though, on a number of accounts it has started playing up - but only on this one server. Once or twice a day the php file is not run.
The access log is missing the relevant entry, while the cron log shows that the job was run.
We've added logging to the command (-o /tmp/logfile), but it shows nothing.
I'm at a loss, really. I'm looking for ideas what can be wrong, or how to sidestep this issue as it has started taking up way too much of my time.
Has anyone seen anything remotely like this?
Thanks in advance!
Try this command:
wget -d -a /tmp/logfile -O - http://some.site.com/cron.php
With -q you turn off wget's output. With -d you turn on debug output (maybe -v for verbose output is already enough). With -a you append logging messages to /tmp/logfile instead of always creating a new file.
You can also use curl:
curl http://some.site.com/cron.php
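If you switch to curl and still want diagnostics in a file when a request misbehaves, a sketch (same placeholder URL and logfile as above):

# -sS: no progress bar but still print errors; -v: verbose diagnostics to stderr
curl -sSv http://some.site.com/cron.php 2>>/tmp/logfile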

Curl Command to Repeat URL Request

What's the syntax for a Linux command that hits a URL repeatedly, x number of times? I don't need to do anything with the data; I just need to replicate hitting refresh 20 times in a browser.
You could use URL sequence substitution with a dummy query string (if you want to use CURL and save a few keystrokes):
curl "http://www.myurl.com/?[1-20]"
If you have other query strings in your URL, assign the sequence to a throwaway variable:
curl "http://www.myurl.com/?myVar=111&fakeVar=[1-20]"
Check out the URL section on the man page: https://curl.haxx.se/docs/manpage.html
for i in `seq 1 20`; do curl http://url; done
Or if you want to get timing information back, use ab:
ab -n 20 http://url/
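If you only need a rough wall-clock total for the plain shell loop, you can also wrap it in time (a sketch; http://url is a placeholder as above):

time (for i in $(seq 1 20); do curl -s -o /dev/null http://url; done)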
You might be interested in the Apache Bench tool, which is basically used for simple load testing.
example:
ab -n 500 -c 20 http://www.example.com/
-n = total number of requests, -c = number of concurrent requests
You can use any bash looping construct, such as for, which works on both Linux and Mac.
https://tiswww.case.edu/php/chet/bash/bashref.html#Looping-Constructs
In your specific case you can run N iterations, where N is the number of curl executions you want (note that N must be a literal number, since brace expansion happens before variable expansion):
for n in {1..N}; do curl <arguments>; done
ex:
for n in {1..20}; do curl -d @notification.json -H 'Content-Type: application/json' localhost:3000/dispatcher/notify; done
If you want to add an interval before the next execution you can add a sleep:
for i in {1..100}; do echo $i && curl "http://URL" >> /tmp/output.log && sleep 120; done
If you want to add a bit of delay before each request you could use the watch command in Linux:
watch curl https://yourdomain.com/page
This will call your URL every two seconds. Alter the interval by adding the -n parameter followed by the number of seconds. For instance:
watch -n0.5 curl https://yourdomain.com/page
This will now call the URL every half second.
Press CTRL+C to exit watch.

Get website's status code in linux

I have a small VPS where I host a web app that I developed, and it's starting to receive a lot of visits.
I need to check, somehow, every X minutes whether the web app is up and running (status code 200) or down (e.g. code 500), and if it's down, run a script that I made to restart some services.
Any idea how to check that in Linux? curl, Lynx?
curl --head --max-time 10 -s -o /dev/null \
-w "%{http_code} %{time_total} %{url_effective}\n" \
http://localhost
This times out after 10 seconds and reports the response code, total time, and effective URL.
Curl will exit with an error code of 28 if the request times out (check $?)
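Putting that together for the every-X-minutes check, a minimal sketch you could call from cron (restart-services.sh stands in for your own restart script):

#!/bin/bash
# fetch only the status code; give up after 10 seconds
status=$(curl -s -o /dev/null --max-time 10 -w "%{http_code}" http://localhost)
if [ "$status" != "200" ]; then
    /path/to/restart-services.sh
fi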
Found this on a sister website (serverfault)
https://serverfault.com/questions/124952/testing-a-website-from-linux-command-line
wget -p http://site.com
This seems to do the trick
For questions like this, the man pages of the tools normally provide a pretty good list of all possible options.
For curl you can also find it here.
The option you seem to be looking for is -w with the http_code variable.
EDIT:
Please see @Ken's answer for how to use -w.
Ok, I created two scripts:
site-statush.sh http://yoursite.com => checks the site status and, if it's 200, does nothing; otherwise invokes services-action.sh restart
services-action.sh restart => restarts all services indicated in $services
Check it out at https://gist.github.com/2421072

Is there a variable in Linux that shows me the last time the machine was turned on?

I want to create a script that does something once my machine has been turned on for at least 7 hours.
Is this possible? Is there a system variable or something like that that shows me the last time the machine was turned on?
The following command placed in /etc/rc.local:
echo 'touch /tmp/test' | at -t $(date -d "+7 hours" +%m%d%H%M)
will create a job that will run touch /tmp/test in seven hours.
To protect against frequent reboots and prevent adding multiple jobs, you could use one at queue exclusively for this type of job (e.g. the c queue). Adding -q c to the list of at parameters will place the job in the c queue. Before adding a new job you can delete all jobs from the c queue:
for job in $(atq -q c | sed 's/[ \t].*//'); do atrm $job; done
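Combined, the /etc/rc.local fragment might look like this (a sketch; /path/to/job is a placeholder):

# clear any previously queued jobs in queue c, then schedule a fresh one
for job in $(atq -q c | sed 's/[ \t].*//'); do atrm "$job"; done
echo '/path/to/job' | at -q c -t $(date -d "+7 hours" +%m%d%H%M)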
You can parse the output of uptime I suppose.
As Pavel and thkala point out below, this is not a robust solution. See their comments!
The uptime command shows you how long the system has been running.
To accomplish your task, you can make a script that first does sleep 25200 (25200 seconds = 7 hours), and then does something useful. Have this script run at startup, for example by adding it to /etc/rc.local. This is a better idea than polling the uptime command to see if the machine has been up for 7 hours (which is comparable to a kid in the backseat of a car asking "are we there yet?" :-))
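For example, a startup script along these lines (a sketch; /path/to/job is a placeholder):

#!/bin/bash
# launched from /etc/rc.local: wait 7 hours, then do the work
sleep 25200   # 7 hours = 25200 seconds
/path/to/job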
Just wait for uptime to equal seven hours.
http://linux.die.net/man/1/uptime
I don't know if this is what you are looking for, but the uptime command will tell you how long the computer has been running since the last reboot.
$ cut -d ' ' -f 1 </proc/uptime
This will give you the current system uptime in seconds, in floating point format.
The following could be used in a bash script:
if [[ "$(cut -d . -f 1 </proc/uptime)" -gt "$(($HOURS * 3600))" ]]; then
...
fi
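For example, a self-contained version (a sketch; the echo stands in for the real action):

#!/bin/bash
HOURS=7
# /proc/uptime holds seconds since boot; strip the fractional part
if [[ "$(cut -d . -f 1 </proc/uptime)" -gt "$((HOURS * 3600))" ]]; then
    echo "machine has been up for more than ${HOURS}h"
fi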
Add the following to your crontab:
@reboot sleep 7h; /path/to/job
Either /etc/crontab, /etc/cron.d/, or your user's crontab, depending on whether you want to run it as root or as the user -- don't forget to put "root" after "@reboot" if you put it in /etc/crontab or cron.d.
This has the benefit that if you reboot multiple times, the jobs get cancelled at shutdown, so you won't get a bunch of them stacking up if you reboot several times within 7 hours. The "@reboot" time specification triggers the job to be run once when the system is rebooted. "sleep 7h;" waits for 7 hours before running "/path/to/job".
