Why does Ubuntu terminal shut down while running load tests? - linux

I'm facing a peculiar problem when load testing on my laptop with 2000 concurrent users using CometD, following all the steps in http://cometd.org/documentation/2.x/howtos/loadtesting.
These tests run fine for about 1000 concurrent clients.
But when I increase the load to about 2000 CCUs, the terminal just shuts down.
Any idea what's happening here?
BTW, I have applied all the OS-level settings given on that page, i.e.:
# ulimit -n 65536
# ifconfig eth0 txqueuelen 8192 # replace eth0 with the ethernet interface you are using
# /sbin/sysctl -w net.core.somaxconn=4096
# /sbin/sysctl -w net.core.netdev_max_backlog=16384
# /sbin/sysctl -w net.core.rmem_max=16777216
# /sbin/sysctl -w net.core.wmem_max=16777216
# /sbin/sysctl -w net.ipv4.tcp_max_syn_backlog=8192
# /sbin/sysctl -w net.ipv4.tcp_syncookies=1
Also, I have noticed this happens even when I run load tests on other platforms. I know it has to be something related to the OS, but I cannot figure out what it could be.

Has the ulimit command been executed correctly? I read something about that in the Ubuntu forum archive and in an Ubuntu Apache problem thread.
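One thing worth verifying (a guess, but it would explain the symptom): ulimit -n only applies to the current shell and its children, so a fresh terminal falls back to the default (often 1024). A minimal check, run in the same terminal that launches the load test:

```shell
# Show the open-files limit in effect for this shell -- it must be raised
# in the SAME shell that starts the load test, or it silently stays at the default.
ulimit -n

# Count file descriptors actually open in a process; /proc/self is this shell.
# Substitute the load tester's PID to watch it approach the limit during a run.
ls /proc/self/fd | wc -l
```

If the limit resets between terminals, the usual fix is to persist it in /etc/security/limits.conf rather than re-running ulimit by hand.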

Related

Nodetool describecluster list all nodes unreachable

I am deploying Cassandra on two public networks. When the nodes are started I can see that every node has joined the ring, and nodetool describecluster shows all nodes as reachable.
After some time the nodes are no longer able to connect to each other, and nodetool describecluster shows all nodes in the unreachable list.
FYI, I have used the public IP as broadcast_address and rpc_address; listen_address is the private IP.
One reason this can happen is that firewalls are sometimes configured to find and kill idle connections. The Linux kernel has default TCP "keepalive" settings that it can use to refresh long-running connections. The default values for these settings can be seen using sysctl:
$ sudo sysctl -a | grep keepalive
net.ipv4.tcp_keepalive_intvl = 75
net.ipv4.tcp_keepalive_probes = 9
net.ipv4.tcp_keepalive_time = 7200
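To see why idle connections can die unnoticed under those defaults: the kernel waits tcp_keepalive_time seconds before probing an idle connection, then sends tcp_keepalive_probes probes spaced tcp_keepalive_intvl seconds apart before giving up.

```shell
# Worst case before the kernel declares an idle peer dead, with the defaults above:
# tcp_keepalive_time + tcp_keepalive_probes * tcp_keepalive_intvl
echo $((7200 + 9 * 75))   # 7875 seconds, roughly 2 hours 11 minutes
```

A firewall with, say, a one-hour idle timeout will have dropped the connection long before the first keepalive probe is ever sent.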
In an effort to combat this problem, DataStax recommends adjusting these values in production deployments:
$ sudo sysctl -w \
net.ipv4.tcp_keepalive_time=60 \
net.ipv4.tcp_keepalive_probes=3 \
net.ipv4.tcp_keepalive_intvl=10
You can also add each of those values to your system's equivalent of the /etc/sysctl.conf file (minus the backslashes) and apply them via sysctl as well:
sudo sysctl -p /etc/sysctl.conf
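For reference, the corresponding /etc/sysctl.conf entries (one per line, no backslashes) would look like this:

```
net.ipv4.tcp_keepalive_time = 60
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 10
```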

Find out how much data is sent and received via a terminal command

I'm working on a project where my client is billed exorbitant rates for data transfer on a boat. When they are in port they use 3G, and when they are out at sea they use satellite.
Every 30 minutes I need to check which network I am attached to (it's a moving vessel), but I also need to give them specific information on how much data is actually used to make these checks.
I was wondering if anyone knew of a way to get the exact number of bytes sent and received for a terminal command.
Right now I am running this command to get the IP address that my ISP has assigned me:
dig +short myip.opendns.com @resolver1.opendns.com
To identify which network is in use right now, you can check the routing table:
netstat -r | grep default
You will see the default interface used for connections.
There are multiple commands that will show you statistics for an interface, e.g.:
ip -s link show dev eth0
where eth0 is the interface identified by the command above,
or:
ethtool -S eth0
If you want the data independently of the interface (all data since boot), you can use the IpExt section of:
netstat -s
All those metrics are system-wide counters. For inspecting a specific app you can use iptables stats; the owner module in iptables-extensions may help. Here are example commands:
# sudo su
# iptables -A OUTPUT -m owner --uid-owner 1000 -j CONNMARK --set-mark 1
# iptables -A INPUT -m connmark --mark 1
# iptables -A OUTPUT -m connmark --mark 1
# iptables -nvL | grep -e Chain -e "connmark match 0x1"
iptables lets you clear the counters whenever needed. The owner module also lets you match packets by user, group, process id, or socket.
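For the every-30-minutes use case above, one lightweight approach (a sketch, assuming Linux; lo is used here as a stand-in for the real interface found via `netstat -r`) is to sample the kernel's per-interface byte counters before and after the work:

```shell
#!/bin/sh
# Sample the per-interface byte counters the kernel exposes in /sys.
# IFACE=lo is a placeholder -- substitute the default interface from 'netstat -r'.
IFACE=lo

rx_before=$(cat /sys/class/net/$IFACE/statistics/rx_bytes)
tx_before=$(cat /sys/class/net/$IFACE/statistics/tx_bytes)

# ... run the periodic check here, e.g. the dig command above ...

rx_after=$(cat /sys/class/net/$IFACE/statistics/rx_bytes)
tx_after=$(cat /sys/class/net/$IFACE/statistics/tx_bytes)

echo "received: $((rx_after - rx_before)) bytes"
echo "sent:     $((tx_after - tx_before)) bytes"
```

These are the same counters that `ip -s link` prints, so the numbers will agree with the commands above; note they count all traffic on the interface, not just your own process's.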

Jetty Websocket Scaling

What is the maximum number of websockets anyone has opened using the Jetty websocket server? I recently load tested this and was able to open 200k concurrent connections with an 8-core Linux VM as the server and 16 clients with 4 cores each. Each client was able to make 12,500 concurrent connections, after which they started to get socket timeout exceptions. I had also tweaked the number of open files as well as the TCP settings of both client and server as follows:
sudo sysctl -w net.core.rmem_max=16777216
sudo sysctl -w net.core.wmem_max=16777216
sudo sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sudo sysctl -w net.ipv4.tcp_wmem="4096 16384 16777216"
sudo sysctl -w net.core.somaxconn=8192
sudo sysctl -w net.core.netdev_max_backlog=16384
sudo sysctl -w net.ipv4.tcp_max_syn_backlog=8192
sudo sysctl -w net.ipv4.tcp_syncookies=1
sudo sysctl -w net.ipv4.ip_local_port_range="1024 65535"
sudo sysctl -w net.ipv4.tcp_tw_recycle=1
sudo sysctl -w net.ipv4.tcp_congestion_control=cubic
By contrast, one 2-core machine running Node.js was able to scale up to 90k connections.
My questions are as follows:
Can we increase the throughput of the Jetty VM any further?
What is the reason for Node.js's higher performance over Jetty?

Improve Ethernet throughput for jumbo frames

We are running a throughput test on the GigE port of a Macnica Helio board with 1 GB of DDR3. We are currently achieving 60% throughput (jumbo frames), but we expect higher throughput in our application.
Throughput is calculated as follows:
(100 MB * 8 bits per byte / time taken / 1 Gbps) * 100%
What we did:
- Transfer 100 MB using server and client code
Server(Cyclone V)
- change eth0 MTU to 7500 (only achievable if we turn off tx checksumming with "ethtool -K eth0 tx off"; otherwise we can only raise the MTU to 3500), then execute the server code
Client (laptop running Ubuntu)
- change eth0 MTU to 9000, then execute the client code and test the throughput performance using Wireshark
We did try changing the IPv4 settings using the commands below, but the throughput result is still the same:
-sysctl -w net.core.rmem_max=18388608
-sysctl -w net.core.wmem_max=18388608
-sysctl -w net.core.rmem_default=1065536
-sysctl -w net.core.wmem_default=1065536
-sysctl -w net.ipv4.tcp_rmem="4096 87380 18388608"
-sysctl -w net.ipv4.tcp_wmem="4096 87380 18388608"
-sysctl -w net.ipv4.tcp_mem="18388608 18388608 18388608"
-sysctl -w net.ipv4.route.flush=1
-sysctl -w net.ipv4.tcp_mtu_probing=1
Question
Is there any method or solution to achieve higher throughput?
Is there any effect if we turn off the tx checksum?
What is the difference in tcp_congestion_control between cubic and bic, and will it affect throughput performance?
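One sanity check before digging deeper: confirm the jumbo MTU actually holds end to end, since any hop or setting that silently drops or fragments large frames will cap throughput regardless of buffer tuning. A quick check (lo is shown as a stand-in interface; the ping target is a placeholder):

```shell
# Show the MTU currently configured on an interface (substitute eth0 for lo):
ip link show dev lo | grep -o 'mtu [0-9]*'

# To verify jumbo frames end to end, ping with the don't-fragment bit set and a
# payload of MTU - 28 bytes (20 IP + 8 ICMP). For a 9000-byte MTU:
#   ping -c 3 -M do -s 8972 <server-ip>
# "Message too long" means something on the path still has a smaller MTU.
```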
Use ntop.org's PF_RING sockets instead of PF_INET sockets. We have been able to get up to 75% throughput with the GigE Vision protocol (UDP) using Intel (e1000) NICs, without using the NIC-specific PF_RING drivers.
AFAIK the congestion control algorithm mainly matters during slow start and after packet loss; on a clean LAN with little loss, cubic vs. bic should make little difference to steady-state throughput once the session is established.
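To experiment with this yourself, the kernel exposes both the algorithm in use and the ones available for switching (bic typically only appears if its module is loaded):

```shell
# Congestion control algorithm currently in use:
cat /proc/sys/net/ipv4/tcp_congestion_control

# Algorithms this kernel can switch to without loading extra modules:
cat /proc/sys/net/ipv4/tcp_available_congestion_control
```

Switching is then just `sysctl -w net.ipv4.tcp_congestion_control=<name>` as root, as in the commands earlier in this page.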

How to Capture Remote System network traffic?

I have been using Wireshark to analyse the packets of socket programs. Now I want to see the traffic of other hosts. I found that I need to use monitor mode, which is only supported on Linux, so I tried it, but I couldn't capture any packets transferred on my network; it lists 0 packets captured.
Scenario:
I have a network consisting of 50+ hosts (all running Windows except mine). My IP address is 192.168.1.10; when I initiate communication with any 192.168.1.xx host, Wireshark shows the captured traffic.
But my requirement is to monitor the traffic between 192.168.1.21 and 192.168.1.22 from my host, i.e. from 192.168.1.10.
1: Is it possible to capture the traffic as I described?
2: If it is possible, is Wireshark the right tool for it (or should I use a different one)?
3: If it is not possible, then why?
Just adapt this a bit with your own filters and IPs (run on the local host):
ssh -l root <REMOTE HOST> tshark -w - not tcp port 22 | wireshark -k -i -
or using bash:
wireshark -k -i <(ssh -l root <REMOTE HOST> tshark -w - not tcp port 22)
You can use tcpdump instead of tshark if needed:
ssh -l root <REMOTE HOST> tcpdump -U -s0 -w - -i eth0 'port 22' |
wireshark -k -i -
You are connected to a switch which is "switching" traffic: it decides which traffic to send you based on your MAC address, and it will NOT send you traffic that is not destined for your MAC address. If you want to monitor all the traffic, you need to configure your switch with a "port mirror" and plug your sniffer into that port. There is no software you can install on your machine that will circumvent the way network switching works.
http://en.wikipedia.org/wiki/Port_mirroring
