I am trying to run iperf and reach a throughput of 1 Gbit/s. I'm using UDP, so I expect the overhead to be minimal. Still, I see it capped at about 600 Mbit/s despite my attempts.
I have been running:
iperf -c 172.31.1.1 -u -b 500M -l 1100
iperf -c 172.31.1.1 -u -b 1000M -l 1100
iperf -c 172.31.1.1 -u -b 1500M -l 1100
Yet anything above 600M seems to hit a limit of about 600 Mbit/s. For example, the output for 1000M is:
[ 3] Server Report:
[ 3] 0.0-10.0 sec 716 MBytes 601 Mbits/sec 0.002 ms 6544/689154 (0.95%)
[ 3] 0.0-10.0 sec 1 datagrams received out-of-order
I'm running this on a server with a 10 Gbit port, and I'm even sending the traffic right back to itself, so there should be no interface bottleneck.
I'm unsure whether I'm running up against an iperf limit or whether there is another way to get a true 1 Gbit/s test.
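A hedged sketch of something that might be worth trying (not a confirmed fix): a single UDP stream in iperf 2 is often limited by one CPU core and the default socket buffer, so larger buffers on both ends plus a couple of parallel streams can push past ~600 Mbit/s. The buffer size and stream count below are only illustrative.
$ iperf -s -u -w 4M                                   # server: larger UDP socket buffer
$ iperf -c 172.31.1.1 -u -b 500M -l 1100 -w 4M -P 2   # client: two parallel streams, each targeting 500M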
I'm trying to get a memory metric from a client machine. I installed NRPE on the client machine and it works well for the default checks like load, users, and so on.
Manual output from the client machine:
root@Nginx:~# /usr/lib/nagios/plugins/check_mem -w 50 -c 40
OK - 7199 MB (96%) Free Memory
But when I try from the server, the other metrics work but the memory metric does not:
[ec2-user@ip-10-0-2-179 ~]$ /usr/lib64/nagios/plugins/check_nrpe -H 107.XX.XX.XX -c check_mem
NRPE: Unable to read output
The other metrics work fine:
[ec2-user@ip-10-0-2-179 ~]$ /usr/lib64/nagios/plugins/check_nrpe -H 107.XX.XX.XX -c check_load
OK - load average: 0.00, 0.01, 0.05|load1=0.000;15.000;30.000;0; load5=0.010;10.000;25.000;0; load15=0.050;5.000;20.000;0;
I made sure the check_mem command has execute permission for all users:
root@Nginx:~# ll /usr/lib/nagios/plugins/check_mem
-rwxr-xr-x 1 root root 2394 Sep 6 00:00 /usr/lib/nagios/plugins/check_mem*
Here are the command definitions from my client-side NRPE config:
command[check_users]=/usr/lib/nagios/plugins/check_users -w 5 -c 10
command[check_load]=/usr/lib/nagios/plugins/check_load -w 15,10,5 -c 30,25,20
command[check_disk]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /dev/sda1
command[check_zombie_procs]=/usr/lib/nagios/plugins/check_procs -w 5 -c 10 -s Z
command[check_procs]=/usr/lib/nagios/plugins/check_procs -w 200 -c 250
command[check_http]=/usr/lib/nagios/plugins/check_http -I 127.0.0.1
command[check_swap]=/usr/lib/nagios/plugins/check_swap -w 30 -c 20
command[check_mem]=/usr/lib/nagios/plugins/check_mem -w 30 -c 20
Can anyone help me to fix the issue?
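One hedged thing to check (assuming the NRPE daemon runs as the nagios user): "NRPE: Unable to read output" often means the plugin produces no output when run as the daemon's user, even though it works as root, for example because it is a script whose interpreter, PATH, or temp files are not available to that user. Running it as that user on the client shows whether that is the case:
root@Nginx:~# sudo -u nagios /usr/lib/nagios/plugins/check_mem -w 50 -c 40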
When I run an iperf UDP test with multiple threads, it simply hangs and never returns. The same test always completes successfully with a single stream. Here are my iperf version and details:
$ iperf --v
iperf version 2.0.5 (08 Jul 2010) pthreads
The client (10.20.32.50) command: $ iperf -c 10.20.32.52 -P 2 -t 10 -u -b 1g
The server (10.20.32.52) command: $ iperf -s -u
The client gives the following output and never finishes:
$ iperf -c 10.20.32.52 -P 2 -t 10 -u -b 1g
------------------------------------------------------------
Client connecting to 10.20.32.52, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: 208 KByte (default)
------------------------------------------------------------
[ 4] local 10.20.32.50 port 33635 connected with 10.20.32.52 port 5001
[ 3] local 10.20.32.50 port 56336 connected with 10.20.32.52 port 5001
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 483 MBytes 406 Mbits/sec
[ 4] Sent 344820 datagrams
[ 4] Server Report:
[ 4] 0.0-696.8 sec 483 MBytes 5.82 Mbits/sec 0.020 ms 229/344819 (0.066%)
[ 4] 0.0-696.8 sec 478 datagrams received out-of-order
The server output is the following:
$ iperf -s -u
------------------------------------------------------------
Server listening on UDP port 5001
Receiving 1470 byte datagrams
UDP buffer size: 208 KByte (default)
------------------------------------------------------------
[ 12] local 10.20.32.52 port 5001 connected with 10.20.32.50 port 60971
[ 10] local 10.20.32.52 port 5001 connected with 10.20.32.50 port 34388
[ 10] 0.0-823.4 sec 483 MBytes 4.92 Mbits/sec 0.018 ms 420/344819 (0.12%)
[ 10] 0.0-823.4 sec 365 datagrams received out-of-order
Both my client and server machines have 32 cores and 10 Gbps NICs. Note that the client runs fine with a single thread/stream, i.e., $ iperf -c 10.20.32.52 -P 1 -t 10 -u -b 1g always completes. Any help is appreciated!
This question was originally asked as a response to a similar question [1] about iperf3. I made it a separate question after receiving suggestions to do so.
Nodir
[1] https://stackoverflow.com/questions/31836985/iperf3-parallel-udp-not-running/32728777
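A workaround that might be worth trying (a sketch, not a confirmed fix for the 2.0.5 threading behaviour): run two independent single-stream client/server pairs on separate ports instead of using -P inside one process. The port numbers below are arbitrary.
On 10.20.32.52 (server):
$ iperf -s -u -p 5001 &
$ iperf -s -u -p 5002 &
On 10.20.32.50 (client):
$ iperf -c 10.20.32.52 -u -b 1g -t 10 -p 5001 &
$ iperf -c 10.20.32.52 -u -b 1g -t 10 -p 5002 &
$ wait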
Server: Intel® Core™ i7-3930K 6 core, 64 GB DDR3 RAM, 2 x 3 TB 6 Gb/s HDD SATA3
OS (uname -a): Linux *** 3.2.0-4-amd64 #1 SMP Debian 3.2.65-1+deb7u1 x86_64 GNU/Linux
Redis server 2.8.19
The server runs redis-server, whose task is to serve requests from two PHP servers.
Problem: the server cannot cope with peak loads and either stops processing incoming requests or processes them very slowly.
Here are the optimizations I have made to the server:
cat /etc/rc.local
echo never > /sys/kernel/mm/transparent_hugepage/enabled
ulimit -SHn 100032
echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse
cat /etc/sysctl.conf
vm.overcommit_memory=1
net.ipv4.tcp_max_syn_backlog=65536
net.core.somaxconn=32768
fs.file-max=200000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
cat /etc/redis/redis.conf
tcp-backlog 32768
maxclients 100000
These are the settings I found on redis.io and in various blogs.
Tests
redis-benchmark -c 1000 -q -n 10 -t get,set
SET: 714.29 requests per second
GET: 714.29 requests per second
redis-benchmark -c 3000 -q -n 10 -t get,set
SET: 294.12 requests per second
GET: 285.71 requests per second
redis-benchmark -c 6000 -q -n 10 -t get,set
SET: 175.44 requests per second
GET: 192.31 requests per second
As the number of clients increases, query performance drops, and worst of all, redis-server stops processing incoming requests and the PHP servers throw dozens of exceptions like:
Uncaught exception 'RedisException' with message 'Connection closed' in [no active file]:0\n
Stack trace:\n
#0 {main}\n thrown in [no active file] on line 0
What should I do? What else can I optimize? How many clients should a machine like this be able to handle?
Thank you!
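One note on the benchmark numbers above, in case it matters: with -n 10 the test issues only ten requests in total, so with 1000+ clients the results mostly measure connection setup rather than sustained Redis throughput. A sketch of a run that is more likely to show real capacity (the request count and pipeline depth are illustrative):
redis-benchmark -c 1000 -n 100000 -q -t get,set
redis-benchmark -c 1000 -n 100000 -q -t get,set -P 16   # same test with pipelining, 16 commands per batch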
I'm running iperf multiple times via the following command
iperf -c 1.1.1.1 -t 60 -w 6400 -f m >> iperf.log
sometimes with different arguments. The resulting iperf.log may look like this:
[ 3] local 2.2.2.2 port 51129 connected with 1.1.1.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-20.0 sec 1869 MBytes 784 Mbits/sec
[ 3] local 2.2.2.2 port 51130 connected with 1.1.1.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-15.0 sec 1445 MBytes 808 Mbits/sec
What I'd like to be able to do is, once it has completed, output the average transfer rate, i.e.:
average ....... XXX Mbits/sec
awk is the way to go; you can try something like this:
iperf -c 1.1.1.1 -t 60 -w 6400 -f m | awk -F 'MBytes' '{print $2}' >> iperf.log
You just need to remove the empty lines now; that I will leave to you. :)
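If the goal is the "average ....... XXX Mbits/sec" line from the question, a small sketch over the existing iperf.log could also work (this assumes the summary lines always end in "<n> MBytes <rate> Mbits/sec" as shown above, and it averages the per-run rates without weighting by duration):
awk '/Mbits\/sec/ {sum += $(NF-1); n++} END {if (n) printf "average ....... %.1f Mbits/sec\n", sum/n}' iperf.log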
Do you need to start and stop it? You might just want to use interval reporting (-i). You can set -i to 15 and set -t to the desired number of samples * 15, as in the example below.
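For example (a sketch assuming you want 20 samples of 15 seconds each, so -t becomes 300):
iperf -c 1.1.1.1 -t 300 -i 15 -w 6400 -f m >> iperf.log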
I have encountered this problem while trying to run the following siege command on Mac OS X 10.8.3.
siege -d1 -c 20 -t2m -i -f -r10 urls.txt
The output from Siege is the following:
** SIEGE 2.74
** Preparing 20 concurrent users for battle.
The server is now under siege...
done.
siege aborted due to excessive socket failure; you
can change the failure threshold in $HOME/.siegerc
Transactions: 0 hits
Availability: 0.00 %
Elapsed time: 27.04 secs
Data transferred: 0.00 MB
Response time: 0.00 secs
Transaction rate: 0.00 trans/sec
Throughput: 0.00 MB/sec
Concurrency: 0.00
Successful transactions: 0
Failed transactions: 1043
Longest transaction: 0.00
Shortest transaction: 0.00
FILE: /usr/local/var/siege.log
You can disable this annoying message by editing
the .siegerc file in your home directory; change
the directive 'show-logfile' to false.
The problem may be that you are running out of ephemeral ports. To remedy that, either expand the range of usable ports, or reduce how long ports stay in TIME_WAIT, or both.
Expand the usable port range:
Check your current setting:
$ sudo sysctl net.inet.ip.portrange.hifirst
net.inet.ip.portrange.hifirst: 49152
Set it lower to expand your window:
$ sudo sysctl -w net.inet.ip.portrange.hifirst=32768
net.inet.ip.portrange.hifirst: 49152 -> 32768
(hilast should already be at the max, 65535)
Reduce the maximum segment lifetime:
$ sudo sysctl -w net.inet.tcp.msl=1000
net.inet.tcp.msl: 15000 -> 1000
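To check whether ephemeral port exhaustion is really what siege is hitting, it can help to watch the TIME_WAIT count while the test runs and to confirm the port range settings took effect (a rough sketch):
$ netstat -an | grep -c TIME_WAIT                      # sockets currently stuck in TIME_WAIT
$ sudo sysctl net.inet.ip.portrange.hifirst net.inet.ip.portrange.hilast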
I had this error too. It turned out my URIs were faulty; most of them returned a 404 or 500 status. When I fixed the URIs, all went well.