Siege aborted due to excessive socket failure - linux

I encountered this problem while trying to run the following siege command on Mac OS X 10.8.3:
siege -d1 -c 20 -t2m -i -f -r10 urls.txt
The output from Siege is the following:
** SIEGE 2.74
** Preparing 20 concurrent users for battle.
The server is now under siege...
done.
siege aborted due to excessive socket failure; you
can change the failure threshold in $HOME/.siegerc
Transactions: 0 hits
Availability: 0.00 %
Elapsed time: 27.04 secs
Data transferred: 0.00 MB
Response time: 0.00 secs
Transaction rate: 0.00 trans/sec
Throughput: 0.00 MB/sec
Concurrency: 0.00
Successful transactions: 0
Failed transactions: 1043
Longest transaction: 0.00
Shortest transaction: 0.00
FILE: /usr/local/var/siege.log
You can disable this annoying message by editing
the .siegerc file in your home directory; change
the directive 'show-logfile' to false.
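As the abort message says, the failure threshold itself is configurable in $HOME/.siegerc. If you just want siege to tolerate more socket errors before giving up, a minimal sketch (the directive name, failures, comes from the stock siegerc template; verify it against the comments in your version's file):
# in $HOME/.siegerc: abort only after this many failed connections
failures = 2048
That only raises the ceiling, though; the answers below address why so many sockets fail in the first place.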

The problem may be that you are running out of ephemeral ports. To remedy that, expand the range of usable ports, reduce the time that closed connections linger in TIME_WAIT, or both.
Expand the usable ports:
Check your current setting:
$ sudo sysctl net.inet.ip.portrange.hifirst
net.inet.ip.portrange.hifirst: 49152
Set it lower to expand your window:
$ sudo sysctl -w net.inet.ip.portrange.hifirst=32768
net.inet.ip.portrange.hifirst: 49152 -> 32768
(hilast should already be at the max, 65535.)
Reduce the maximum segment lifetime:
$ sudo sysctl -w net.inet.tcp.msl=1000
net.inet.tcp.msl: 15000 -> 1000
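Note that sysctl -w settings are lost on reboot. If you want them to survive, a hedged sketch, assuming your OS X version still reads /etc/sysctl.conf at boot:
# /etc/sysctl.conf
net.inet.ip.portrange.hifirst=32768
net.inet.tcp.msl=1000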

I had this error too. It turned out my URIs were faulty: most of them returned a 404 or 500 status. Once I fixed the URIs, everything ran fine.
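If you suspect the same, here is a hedged pre-flight sketch that checks every URL in urls.txt with curl before running siege (it assumes one plain URL per line, with no siege-specific POST directives in the file):
# print the HTTP status code next to each URL; anything outside 2xx/3xx is suspect
while read -r url; do
  code=$(curl -s -o /dev/null -w '%{http_code}' "$url")
  echo "$code $url"
done < urls.txt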

Related

How do I get 'stress --hdd' to stop at end of '--timeout' period?

I am using stress --hdd 1 --timeout 10s -v to stress the IO/disk on an Ubuntu system. Sometimes when running this command, I get something like:
stress: info: [16420] dispatching hogs: 0 cpu, 0 io, 0 vm, 1 hdd
...
stress: info: [16420] successful run completed in 13s
and other times I get something like:
...
stress: info: [16390] successful run completed in 31s
With iostat -yx 1 open in a separate terminal, I can indeed see that the %util and aqu-sz columns stay high for roughly that longer-than-specified amount of time. I also see the mmcqd/0 process at the top of top.
How can I get the IO stress to stop within a second or two of the --timeout value given in the stress command?
I have tried things like timeout 10 stress --hdd 1 --timeout 10s -v; pkill stress; pkill mmcqd;
without much success. Is there a way to purge the IO queue?
Thanks!
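One hedged guess, not a verified answer: the lingering %util is typically the kernel writing back dirty pages that stress left in the page cache when it exited, so there is no stress process left to kill. Two things worth trying:
# make the end deterministic: block until outstanding writeback completes
stress --hdd 1 --timeout 10s -v; sync
# or leave less dirty data queued when the timeout fires
# (--hdd-bytes is a standard stress option; the default is 1GB per worker)
stress --hdd 1 --hdd-bytes 64M --timeout 10s -v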

Siege to capture only final statistics

I have a requirement to capture only the final statistics from the siege benchmark tool.
What I have tried:
siege -c2 -t10s http://127.0.0.1:3000/ > siege.log 2>&1
but siege.log also contains many per-request lines like HTTP/1.1 200 0.00 secs: 16 bytes ==> GET /.
All I want is the final statistics, like below:
Lifting the server siege...
Transactions: 496 hits
Availability: 100.00 %
Elapsed time: 0.97 secs
Data transferred: 0.01 MB
Response time: 0.00 secs
Transaction rate: 511.34 trans/sec
Throughput: 0.01 MB/sec
Concurrency: 1.82
Successful transactions: 496
Failed transactions: 0
Longest transaction: 0.04
Shortest transaction: 0.00
Please suggest. Thanks in advance
I found the solution the hard way.
By default, $HOME/.siege/siege.conf has verbose set to true.
Option 1:
Override it to false in $HOME/.siege/siege.conf.
Option 2:
If you prefer not to change the global configuration, create a custom siege configuration file and run as below:
siege -c2 -t10s http://127.0.0.1:3000/ --rc=siege.conf
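For Option 2, the custom file only needs the directives you want to override; a minimal sketch:
# siege.conf, passed via --rc: override just the verbosity
verbose = false
Recent siege builds also accept -q/--quiet on the command line, which suppresses the per-request lines without any config file; check siege --help for yours.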

Does iperf have a bandwidth ceiling?

I am trying to run iperf at a throughput of 1 Gbit/s. I'm using UDP, so I expect the overhead to be pretty much minimal. Still, I see it capped at about 600 Mbit/s despite my attempts.
I have been running:
iperf -c 172.31.1.1 -u -b 500M -l 1100
iperf -c 172.31.1.1 -u -b 1000M -l 1100
iperf -c 172.31.1.1 -u -b 1500M -l 1100
Yet anything above 600M seems to hit a limit of about 600 Mbit/s. For example, the output for 1000M is:
[ 3] Server Report:
[ 3] 0.0-10.0 sec 716 MBytes 601 Mbits/sec 0.002 ms 6544/689154 (0.95%)
[ 3] 0.0-10.0 sec 1 datagrams received out-of-order
I'm running this on a server with a 10Gig port and even sending it right back to itself, so there should be no interface bottlenecks.
I'm unsure whether I am running up against an iperf limit or whether there is another way to get a true 1 Gbit/s test.
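One hedged way to narrow it down: a single iperf UDP sender is often CPU-bound at around this rate, so splitting the load across parallel client threads with -P shows whether the ceiling belongs to iperf or to the path:
iperf -c 172.31.1.1 -u -b 500M -l 1100 -P 2
If the two 500M streams together clear 600M, the limit is the single sender thread, not the network.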

Calculating correct memory utilization from invalid output of pidstat

I used the command pidstat -r -p <pid> <interval> >> log_Path/MemStat.CSV & to collect memory statistics.
After running this command, I found that the RSS, VSZ, and %MEM values increased continuously, which was not expected, since pidstat reports values per interval.
After searching the net, I found that there was a bug in pidstat and that I needed to update the sysstat package.
(Please refer to the last few statements by the pidstat author at this link: http://sebastien.godard.pagesperso-orange.fr/tutorial.html)
Now, my question is: how do I calculate the correct %MEM utilization from the current output, as we cannot run the test again?
sample output :
Time PID minflt/s majflt/s VSZ RSS %MEM
9:55:22 AM 18236 1280 0 26071488 119136 0.36
9:55:23 AM 18236 4273 0 27402768 126276 0.38
9:55:24 AM 18236 9831 0 27402800 162468 0.49
9:55:25 AM 18236 161 0 27402800 169092 0.51
9:55:26 AM 18236 51 0 27402800 175416 0.53
9:55:27 AM 18236 6859 0 27402800 198340 0.6
9:55:28 AM 18236 1440 0 27402800 203608 0.62
On the sysstat tutorial page you refer to, it says:
I noticed that pidstat had a memory footprint (VSZ and RSS fields)
that was constantly increasing as the time went by. I quickly found
that I had forgotten to close a file descriptor in a function of my
code and that was responsible for the memory leak...!
So, the output of pidstat has never been invalid; on the contrary, the author wrote
pidstat helped me to detect a memory leak in the pidstat command
itself.
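If you still want to sanity-check the logged numbers, %MEM is simply RSS as a share of total memory, so it can be recomputed from the RSS column. A hedged sketch, assuming the log's columns match the sample above (time in fields 1-2, RSS in KB in field 7) and that it runs on the same machine the log was collected on:
# recompute %MEM = 100 * RSS / MemTotal for every sample in the log
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
awk -v t="$total_kb" 'NR > 1 { printf "%s %s  %%MEM = %.2f\n", $1, $2, 100 * $7 / t }' MemStat.CSV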

Getting CPU utilization information

How can I get the CPU utilization of a process over time on Linux? Basically, I want to let my application run overnight and monitor its CPU utilization for the whole period.
I tried top | grep appName >& log, but it does not seem to write anything to the log. Could someone help me with this?
Thanks.
vmstat and iostat can both give you periodic information of this nature; I would suggest either setting the sample count manually, or putting a single poll into a cron job, and then redirecting the output to a file:
vmstat 20 4320 >> cpu_log_file
This would give you a snapshot of usage every 20 seconds for 24 hours (4320 samples × 20 s = 86400 s = 24 h).
Install the sysstat package and run sar:
nohup sar -o output.file 12 8 >/dev/null 2>&1 &
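Here 12 8 means 8 samples at 12-second intervals; scale the count up to cover the night (e.g. 60 720 for 12 hours at one-minute intervals). The file written with -o is binary; read it back afterwards with -f:
sar -f output.file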
Use the top or watch command:
PID COMMAND %CPU TIME #TH #WQ #PORT #MREG RPRVT RSHRD RSIZE VPRVT VSIZE PGRP PPID STATE UID FAULTS COW MSGSENT MSGRECV SYSBSD SYSMACH CSW PAGEINS USER
10764 top 8.4 00:01.04 1/1 0 24 33 2000K 244K 2576K 17M 2378M 10764 10719 running 0 9908+ 54 564790+ 282365+ 3381+ 283412+ 838+ 27 root
10763 taskgated 0.0 00:00.00 2 0 25 27 432K 244K 1004K 27M 2387M 10763 1 sleeping 0 376 60 140 60 160 109 11 0 root
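Incidentally, the reason top | grep appName > log stayed empty is that interactive top expects a terminal, and either refuses to run or emits screen-control sequences rather than plain lines when piped. Batch mode fixes that; a minimal sketch for Linux procps top (-d sets the refresh interval in seconds):
# append one line per refresh for the matching process
top -b -d 10 | grep --line-buffered appName >> log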
Write a program that invokes your process and then calls getrusage(2) and reports statistics for its children.
You can monitor the time used by your program with top while it is running.
Alternatively, you can launch your application with the time command, which will print the total amount of CPU time used by your program at the end of its execution. Just type time ./my_app instead of ./my_app.
For more info, see man 1 time.
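If you want the getrusage-style numbers without writing any code, GNU time reports them directly. A hedged sketch, assuming /usr/bin/time is GNU time (the external binary, not the shell builtin; common on Linux):
# -v prints user/system CPU time, maximum RSS, page faults, context switches, ...
/usr/bin/time -v ./my_app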
