Jetty WebSocket scaling vs. node.js

What is the maximum number of WebSocket connections anyone has opened using the Jetty WebSocket server? I recently load tested this and was able to open 200k concurrent connections on an 8-core Linux VM as the server, with 16 clients of 4 cores each. Each client was able to make 12,500 concurrent connections, after which it started getting socket timeout exceptions. I had also tweaked the number of open files as well as the TCP settings of both client and server, as follows:
sudo sysctl -w net.core.rmem_max=16777216
sudo sysctl -w net.core.wmem_max=16777216
sudo sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sudo sysctl -w net.ipv4.tcp_wmem="4096 16384 16777216"
sudo sysctl -w net.core.somaxconn=8192
sudo sysctl -w net.core.netdev_max_backlog=16384
sudo sysctl -w net.ipv4.tcp_max_syn_backlog=8192
sudo sysctl -w net.ipv4.tcp_syncookies=1
sudo sysctl -w net.ipv4.ip_local_port_range="1024 65535"
sudo sysctl -w net.ipv4.tcp_tw_recycle=1
sudo sysctl -w net.ipv4.tcp_congestion_control=cubic
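The question mentions raising the open-file limit but does not show how; a minimal sketch of what that typically looks like (the value 1048576 is an assumption, not taken from the question):
ulimit -n 1048576                     # per-shell/per-process descriptor limit
sudo sysctl -w fs.file-max=1048576    # system-wide ceiling
To make the per-user limit persistent, equivalent entries would go in /etc/security/limits.conf (e.g. "* soft nofile 1048576" and "* hard nofile 1048576").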
By contrast, a single 2-core machine running node.js was able to scale up to 90k connections.
My questions are as follows:
Can we increase the throughput of the Jetty VM any further?
What is the reason for node.js's higher performance over Jetty?

Related

Nodetool describecluster lists all nodes as unreachable

I am deploying Cassandra across two public networks. When the nodes are started, I can see that all nodes have joined the ring, and nodetool describecluster shows all nodes as reachable.
After some time, the nodes are no longer able to connect to each other, and nodetool describecluster shows all nodes in the unreachable list.
FYI, I have used the public_ip as BROADCAST_ADDRESS and RPC_ADDRESS; the listen address is the private_ip.
One reason this can happen is that firewalls are sometimes configured to find and kill idle connections. The Linux kernel has default TCP "keepalive" settings that it can use to refresh long-running connections. The default values for these settings can be seen using sysctl:
$ sudo sysctl -a | grep keepalive
net.ipv4.tcp_keepalive_intvl = 75
net.ipv4.tcp_keepalive_probes = 9
net.ipv4.tcp_keepalive_time = 7200
With the defaults above, an idle connection is not probed until it has been quiet for two hours, and can then take up to nine more probes at 75-second intervals before being declared dead, which is far longer than most firewall idle timeouts. To combat this problem, DataStax recommends adjusting these values in production deployments:
$ sudo sysctl -w \
net.ipv4.tcp_keepalive_time=60 \
net.ipv4.tcp_keepalive_probes=3 \
net.ipv4.tcp_keepalive_intvl=10
You can also add each of those values to your system's equivalent of the /etc/sysctl.conf file (minus the backslashes) and apply them via sysctl as well:
sudo sysctl -p /etc/sysctl.conf
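For reference, a sketch of what the persistent entries would look like in /etc/sysctl.conf (or a drop-in file under /etc/sysctl.d/), using the values recommended above:
net.ipv4.tcp_keepalive_time=60
net.ipv4.tcp_keepalive_probes=3
net.ipv4.tcp_keepalive_intvl=10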

Tcpdump write pcap to remote server with file rotation

I'm trying to run tcpdump on a Linux machine, writing the pcap to a remote server with file rotation every 10 seconds.
tcpdump -s0 -i eth0 -G 10 -w - | ssh {remote_ip} "cat > capture_%d-%m_%Y__%H_%M.pcap"
The file gets written on the remote server for the first cycle (10 seconds), and then I get the following error:
tcpdump: listening on ens224, link-type EN10MB (Ethernet), capture size 262144 bytes
tcpdump: Can't write to standard output: Bad file descriptor
I'm using -G for time-based rotation; if I remove -G, I'm able to write to the remote server continuously.
My remote server is configured for password-less login from this host.
With -G, tcpdump closes and re-opens its output file at every rotation interval, which fails when the output is stdout, hence the "Bad file descriptor" error after the first cycle. Instead, you can pipe tcpdump to another tcpdump and do the rotation on the remote side, so in your case:
tcpdump -i eth0 -w - not port 22 | \
ssh my.remote.host tcpdump -r - -w /tmp/capture_%d-%m_%Y__%H_%M_%S.pcap -G 2 -C 100
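Here -G 2 rotates every 2 seconds and -C 100 additionally rotates when a file reaches roughly 100 MB. For the original 10-second requirement, the same idea would presumably look like this (an untested sketch, reusing the question's {remote_ip} placeholder):
tcpdump -s0 -i eth0 -w - | \
ssh {remote_ip} tcpdump -r - -w capture_%d-%m_%Y__%H_%M_%S.pcap -G 10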

How to optimize redis-server for high load?

Server: Intel® Core™ i7-3930K 6 core, 64 GB DDR3 RAM, 2 x 3 TB 6 Gb/s HDD SATA3
OS (uname -a): Linux *** 3.2.0-4-amd64 #1 SMP Debian 3.2.65-1+deb7u1 x86_64 GNU/Linux
Redis server 2.8.19
This server runs redis-server, whose task is to serve requests from two PHP servers.
Problem: the server is unable to cope with peak loads; it stops processing incoming requests or processes them very slowly.
The optimizations I have attempted so far:
cat /etc/rc.local
echo never > /sys/kernel/mm/transparent_hugepage/enabled
ulimit -SHn 100032
echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse
cat /etc/sysctl.conf
vm.overcommit_memory=1
net.ipv4.tcp_max_syn_backlog=65536
net.core.somaxconn=32768
fs.file-max=200000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
cat /etc/redis/redis.conf
tcp-backlog 32768
maxclients 100000
These are the settings I found on redis.io and in various blog posts.
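A quick sanity check worth doing is to confirm that the running redis-server process actually picked up these limits (a sketch; it assumes redis-cli can reach the instance on the default port):
redis-cli config get maxclients          # should report 100000
redis-cli info clients                   # shows the current connected_clients count
cat /proc/$(pidof redis-server)/limits   # check the "Max open files" row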
Tests
redis-benchmark -c 1000 -q -n 10 -t get,set
SET: 714.29 requests per second
GET: 714.29 requests per second
redis-benchmark -c 3000 -q -n 10 -t get,set
SET: 294.12 requests per second
GET: 285.71 requests per second
redis-benchmark -c 6000 -q -n 10 -t get,set
SET: 175.44 requests per second
GET: 192.31 requests per second
As the number of clients increases, query performance drops, and worse, redis-server stops processing incoming requests altogether and the PHP servers throw dozens of exceptions such as:
Uncaught exception 'RedisException' with message 'Connection closed' in [no active file]:0\n
Stack trace:\n
#0 {main}\n thrown in [no active file] on line 0
What should I do? What else can be optimized? How many clients should a machine like this be able to handle?
Thank you!

Improve Ethernet throughput for jumbo frames

We are running a throughput test on the GigE interface of a Macnica Helio board with 1 GB of DDR3. We are currently achieving 60% throughput (with jumbo frames); however, we expect higher throughput in our application.
The method of calculation is as follows:
(100 MB * 8 bits/byte / time taken) / 1 Gbps * 100%
(so, for example, 60% corresponds to transferring 100 MB in roughly 1.33 s).
What we did:
- Transferred 100 MB using server and client code.
Server (Cyclone V):
- Changed eth0 MTU to 7500 (only achievable if we turn off TX checksum offload with "ethtool -K eth0 tx off"; otherwise we can only raise the MTU to 3500), then executed the server code.
Client (laptop running Ubuntu):
- Changed eth0 MTU to 9000, then executed the client code and measured throughput with Wireshark.
We also tried changing the IPv4 settings using the commands below, but the throughput is still the same:
sysctl -w net.core.rmem_max=18388608
sysctl -w net.core.wmem_max=18388608
sysctl -w net.core.rmem_default=1065536
sysctl -w net.core.wmem_default=1065536
sysctl -w net.ipv4.tcp_rmem="4096 87380 18388608"
sysctl -w net.ipv4.tcp_wmem="4096 87380 18388608"
sysctl -w net.ipv4.tcp_mem="18388608 18388608 18388608"
sysctl -w net.ipv4.route.flush=1
sysctl -w net.ipv4.tcp_mtu_probing=1
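Two quick checks that may help here (a sketch; the interface name and <server_ip> are placeholders): first, verify that jumbo frames actually survive the whole path by sending a non-fragmentable ICMP payload sized to the smaller MTU (7500 minus 28 bytes of IP/ICMP headers = 7472), and second, confirm which offloads are currently disabled:
ping -M do -s 7472 <server_ip>     # must succeed without "message too long"
ethtool -k eth0 | grep checksum    # show current checksum offload state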
Questions:
Is there any method or solution to achieve higher throughput?
Is there any effect if we turn off the TX checksum offload?
What is the difference between the cubic and bic tcp_congestion_control algorithms, and will the choice affect throughput performance?
Use ntop.org's PF_RING sockets instead of PF_INET sockets. We have been able to get up to 75% throughput with the GigE Vision protocol (UDP) using Intel (e1000) NICs, without using the NIC-specific PF_RING drivers.
AFAIK the tcp_congestion_control will only help you at the start of the TCP session and has no effect once the session is established.

Why does Ubuntu terminal shut down while running load tests?

I'm facing a peculiar problem when doing load testing on my laptop with 2000 concurrent users using CometD, following all the steps in http://cometd.org/documentation/2.x/howtos/loadtesting.
These tests run fine for about 1000 concurrent clients.
But when I increase the load to about 2000 CCUs, the terminal just shuts down.
Any idea what's happening here?
BTW, I have applied all the OS-level settings as per the site, i.e.:
# ulimit -n 65536
# ifconfig eth0 txqueuelen 8192 # replace eth0 with the ethernet interface you are using
# /sbin/sysctl -w net.core.somaxconn=4096
# /sbin/sysctl -w net.core.netdev_max_backlog=16384
# /sbin/sysctl -w net.core.rmem_max=16777216
# /sbin/sysctl -w net.core.wmem_max=16777216
# /sbin/sysctl -w net.ipv4.tcp_max_syn_backlog=8192
# /sbin/sysctl -w net.ipv4.tcp_syncookies=1
Also, I have noticed this happens even when I run load tests for other platforms. I know it has to be something related to the OS, but I cannot figure out what it could be.
Has the ulimit command been executed correctly? I read something about this in the Ubuntu forum archive and in an Ubuntu Apache problem thread.
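One thing worth verifying: ulimit -n only affects the shell in which it was run and that shell's children, so it has to be executed in the same terminal that launches the load test (and raising it requires root or a matching hard limit). A minimal check, where <pid> is a placeholder for the running test process:
ulimit -n                              # limit in effect for the current shell
grep 'open files' /proc/<pid>/limits   # limit the test process actually got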
