Website Benchmarking using ab - linux

I am trying my hand at various benchmarking tools for the website I am working on and have found Apache Bench (ab) to be an excellent tool for load testing. It is a command-line tool and, apparently, very easy to use. However, I have a question about two of its basic flags. The site I was reading says:
Suppose we want to see how fast Yahoo can handle 100 requests, with a maximum of 10 requests running concurrently:
ab -n 100 -c 10 http://www.yahoo.com/
and the explanation for the flags states:
Usage: ab [options] [http[s]://]hostname[:port]/path
Options are:
-n requests Number of requests to perform
-c concurrency Number of multiple requests to make
I guess I am just not able to wrap my head around "number of requests to perform" versus "number of multiple requests to make". What happens when I specify both together, as in the example above?
Can anyone give me a simpler explanation of what these two flags do together?

In your example, ab will create 10 connections to yahoo.com and request a page over each of them simultaneously.
If you omit -c 10, ab will create only one connection and will issue the next request only when the previous one completes (i.e., when the whole main page has been downloaded).
Assuming the server's response time does not depend on the number of requests it is handling simultaneously, your example will complete roughly 10 times faster than it would without -c 10.
Also: What is concurrent request (-c) in Apache Benchmark?

-n 100 -c 10 means "issue 100 requests, 10 at a time."
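As a concrete comparison (example.com is just a placeholder host):

# 100 requests issued one at a time (concurrency defaults to 1):
ab -n 100 http://www.example.com/
# the same 100 requests, but with up to 10 in flight at any moment:
ab -n 100 -c 10 http://www.example.com/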

Related

Baseline Scan ZAP (OWASP) on a defined list of urls

Is it possible to define a list of URLs that the ZAP baseline (https://www.zaproxy.org/docs/docker/baseline-scan/) scan should scan? The default behaviour is that it runs for one minute. I only want 20 defined URLs to be scanned.
At the moment I use the Docker container with the following parameters:
docker run -t owasp/zap2docker-stable zap-baseline.py -t https://www.example.com
It will run for up to one minute (by default). If your app has only 20 URLs, then it will hopefully find them much faster than that. If it takes 2 seconds to find them, then that's how long it will take. The passive scanning will take a bit longer, but hopefully not too long.
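If the goal is just to bound the run time rather than to pin down the exact URL list, the baseline script also takes a spider-time option (per the linked documentation, -m sets the spider duration in minutes):

docker run -t owasp/zap2docker-stable zap-baseline.py -t https://www.example.com -m 1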

Aria2 max-connection not working

I installed aria2 (1.18.1) on my Ubuntu machine, but the problem is that it will not increase the number of download connections.
aria2c -x 10 http://mirror.sg.leaseweb.net/speedtest/100mb.bin
[#fe34b4 2.1MiB/95MiB(2%) CN:4 DL:233KiB ETA:6m49s]
By default it downloads over only 4 connections, not the 10 I specified.
I even tried other sites, but the download still used only the default 4 connections.
Solved:
There are multiple options that influence the behavior:
--split -s Maximum number of concurrent splits (connections) per download. Defaults to 5, so unless you change it, you will get at most 5 connections for a single download, no matter what you pass to -x.
--min-split-size -k A split should only be initiated when the split would be bigger than this. Defaults to 20M, meaning that for a 100M file, by the time the download could be split further some data has already been retrieved, leaving slightly less than 100MB; that allows only 4 splits, since 5 splits would each be slightly smaller than the 20MB minimum.
There you have it. Please check out the manual for more information on the various options.
$ aria2c -k 1M -s 10 -x 10 http://mirror.sg.leaseweb.net/speedtest/100mb.bin
[#1325f1 7.4MiB/95MiB(7%) CN:10 DL:1.2MiB ETA:1m8s]
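If you want these settings to apply to every download, they can also go in aria2's configuration file (a sketch; ~/.aria2/aria2.conf is the default location, and the option names are the long forms of -x, -s and -k from the manual):

# ~/.aria2/aria2.conf
max-connection-per-server=10
split=10
min-split-size=1M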

Net::SNMP caching results for extend OIDs

The scope of this work is to query two machines' high-resolution timers at the "same time" and obtain the clock inaccuracy between both systems. This is done by having a 3rd machine send an SNMP GET for a custom OID, where the SNMP agent is configured to invoke a Perl script that returns the high-resolution timer. All works fine, in that the snmpget manages to return the expected result. However, it appears that regardless of the frequency of the snmpget queries, the SNMP agent only performs a fresh query to the script at roughly 5-second intervals. I am running NET-SNMP version 5.4.3. After some research I've seen that it is typical of NET-SNMP to cache results, and that this is done on a per-MIB-tree basis. There is a MIB table (nsCacheTable) holding the respective intervals, visible by walking 1.3.6.1.4.1.8072.1.5.3. Apparently the values can be changed to 0 to remove caching, although some of them are read-only: I've only managed to set a few of them to 0 using snmpset, as most return a "Bad object type" error.
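For illustration, a hedged sketch of that snmpset attempt (the row index in nsCacheTable is the cached subtree's OID as seen in the snmpwalk output; <index-oid> is a placeholder, and whether a given row is writable varies, as described above):

# walk the cache table to see the cached subtrees and their timeouts:
snmpwalk -v2c -c public localhost .1.3.6.1.4.1.8072.1.5.3
# then try to zero the timeout of the relevant row:
snmpset -v2c -c private localhost .1.3.6.1.4.1.8072.1.5.3.1.2.<index-oid> i 0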
I know very basic SNMP so I followed a guide online and mapped the below custom OID to the perl script with this line in the snmpd.conf.
extend .1.3.6.1.4.1.27654.3 return_date /usr/bin/perl [directory]/[perl script name].pl
Then the actual OID containing the output (time in epoch) is:
iso.3.6.1.4.1.27654.3.3.1.1.11.114.101.116.117.114.110.95.100.97.116.101
Does anyone have any ideas how I can disable the caching for this OID?
Thanks in advance.
---EDIT---
According to this blog post, in order to avoid disabling the caching, one can instead use pass_persist scripts, which look more complex to implement at first glance. The Perl script I currently call is below:
#!/usr/bin/perl
# THIS SCRIPT RETURNS THE EPOCH TIME OF DAY IN MICROSECONDS
use Time::HiRes qw(gettimeofday);
($s, $usec) = gettimeofday();
# zero-pad the microseconds, otherwise e.g. 5 usec would print as "...5"
# instead of "...000005" and corrupt the timestamp
$newtime = sprintf("%d%06d", $s, $usec);
print($newtime);
Can anyone help me convert this script to pass_persist, and show what the snmpd.conf entry should look like?
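A minimal sketch of what the pass_persist version might look like, assuming the same registered base OID (the $base.1 scalar layout is my own choice, since with pass_persist you define the subtree yourself). The protocol, per the snmpd.conf man page, is line-based: snmpd writes PING, or get/getnext followed by an OID, and expects PONG or an OID/type/value triple in reply. The snmpd.conf entry would change from extend to:

pass_persist .1.3.6.1.4.1.27654.3 /usr/bin/perl [directory]/[perl script name].pl

And the script itself:

#!/usr/bin/perl
# Hedged pass_persist sketch: answers snmpd's PING/get/getnext commands
# on stdin with the epoch time in microseconds, computed fresh per request.
use strict;
use warnings;
use Time::HiRes qw(gettimeofday);

$| = 1;                              # unbuffered stdout is required
my $base = ".1.3.6.1.4.1.27654.3";   # registered base OID
my $oid  = "$base.1";                # scalar holding the timestamp (assumed layout)

while (defined(my $cmd = <STDIN>)) {
    chomp $cmd;
    if ($cmd eq "PING") {
        print "PONG\n";
    }
    elsif ($cmd eq "get" or $cmd eq "getnext") {
        chomp(my $req = <STDIN>);
        # answer an exact get on our scalar, or a getnext from below it
        if (($cmd eq "get" and $req eq $oid)
            or ($cmd eq "getnext" and $req ne $oid)) {
            my ($s, $usec) = gettimeofday();
            printf "%s\nstring\n%d%06d\n", $oid, $s, $usec;
        } else {
            print "NONE\n";          # nothing to return for this OID
        }
    }
    elsif ($cmd eq "set") {
        <STDIN>; <STDIN>;            # consume the OID and value lines
        print "not-writable\n";      # our scalar is read-only
    }
    else {
        last;                        # unknown command or EOF: exit
    }
}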

Benchmarking Node.JS server

I've written a Node.JS server which I would like to benchmark. It has the following components that I would like to benchmark separately:
- socket.io: how many continuous connections can it accept and process (where is the saturation point)
- redis: the same as above
- express: don't want to benchmark it
I know there is some (though not a lot of) documentation about this on the internet, but I don't want to reinvent the wheel, and I also don't want to spend countless hours trying a solution that turns out to be wrong for the job.
This is why I'm asking you guys here: what should I use to get a number/graph (whatever) of the number of simultaneous connections this server can handle without being bogged down? It would also be nice to monitor the CPU, memory, and swap of the process (yes, I know I can use countless techniques or write my own script, but maybe something like that already exists).
I'm not looking for an answer that just pastes a link to some solution I already know exists; I would like an answer from someone with actual experience who can make a point or two and point me in the right direction.
Thank you
You can use ApacheBench (ab) to test the load your server can take - man page
Some nice tutorials :
nixcraft/howto-performance-benchmarks-a-web-server
petefreitag/Using Apache Bench for Simple Load Testing
Usage :
$ ab -k -n 1000 -c 100 www.yourserver.com
-k - enable HTTP keep-alive
-n N - send N requests to the server
-c X - issue X requests concurrently
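A hedged sketch of combining the load test with basic resource monitoring (the port and the process-matching pattern are assumptions about your setup):

# drive the HTTP endpoint with keep-alive:
ab -k -n 10000 -c 100 http://localhost:3000/
# in a second terminal, watch CPU and memory of the newest node process:
top -p "$(pgrep -n node)"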

Linux display average CPU load for last week

On a Linux box, I need to display the average CPU utilisation per hour for the last week. Is that information logged somewhere? Or do I need to write a script that wakes up every 15 minutes to copy /proc/loadavg to a logfile?
EDIT: I'm not allowed to use any tools other than those that come with Linux.
You might want to check out sar (man page); it fits your use case nicely.
System Activity Reporter (SAR) - capture important system performance metrics at
periodic intervals.
Example from IBM Developer Works Article:
Add an entry to your root crontab
# Collect measurements at 10-minute intervals
0,10,20,30,40,50 * * * * /usr/lib/sa/sa1
# Create daily reports and purge old files
0 0 * * * /usr/lib/sa/sa2 -A
Then you can simply query this information using a sar command (display all of today's info):
root ~ # sar -A
Or just for a certain day's log file:
root ~ # sar -f /var/log/sa/sa16
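Since the question asks for per-hour figures, note that sar can also restrict a report to a time window within a day's file (-u selects CPU utilisation; the file name is just an example):

root ~ # sar -u -s 08:00:00 -e 09:00:00 -f /var/log/sa/sa16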
You can usually find it in the sysstat package for your Linux distro.
As far as I know it's not stored anywhere... It's a trivial thing to write, anyway. Just add something like
cat /proc/loadavg >> /var/log/loads
to your crontab.
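For example, a complete crontab entry sampling every 15 minutes with a timestamp might look like this (the log path is just an example):

*/15 * * * * (date; cat /proc/loadavg) >> /var/log/loads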
Note that there are monitoring tools (like Munin) which can do this kind of thing for you, and generate pretty graphs of it to boot... they might be overkill for your situation though.
I would recommend looking at Multi Router Traffic Grapher (MRTG).
Using snmpd to read the load average, it will automatically calculate averages at any time interval and length, along with nice charts for analysis.
Someone has already posted a CPU usage example.
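For load average specifically, a minimal mrtg.cfg fragment might look like the sketch below (the OIDs are the UCD-SNMP laLoadInt entries for the 1- and 5-minute averages, reported as load × 100; the community string and host are assumptions):

Target[load]: 1.3.6.1.4.1.2021.10.1.5.1&1.3.6.1.4.1.2021.10.1.5.2:public@localhost
MaxBytes[load]: 1000
Title[load]: Load average
PageTop[load]: <h1>Load average</h1>
Options[load]: gauge,growright,nopercent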
