Netty server shutdown takes a long time on Linux

I have a Netty server that mobile clients connect to through an AWS load balancer, i.e. all the open TCP sockets have the AWS load balancer on the other end. There are typically about 500k concurrent client connections to each instance.
To deploy fixes and react quickly to production issues, we want rolling out a new code update to be fairly fast, so we have set the timeouts for server restarts quite low. However, deployments often fail because restarting the server can take many minutes, even though CPU usage is typically only 10-15% and memory usage about 50%.
Are there kernel timeouts that could be the cause here? Are there any kernel or Netty parameters that could be tuned to speed up tearing the server down? We use systemctl restart during deployment.
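Before hunting for kernel tunables, it is worth ruling out Netty's own graceful-shutdown defaults: shutdownGracefully() without arguments waits a 2-second quiet period and up to 15 seconds per EventLoopGroup, and closing hundreds of thousands of channels in an orderly way takes time on top of that. A minimal sketch of a faster teardown (class and field names here are illustrative, not from the original server):

```java
import io.netty.channel.Channel;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;

import java.util.concurrent.TimeUnit;

public final class FastShutdown {
    private final EventLoopGroup bossGroup = new NioEventLoopGroup(1);
    private final EventLoopGroup workerGroup = new NioEventLoopGroup();
    private volatile Channel serverChannel; // bound by ServerBootstrap at startup

    /** Called from a JVM shutdown hook when systemd delivers SIGTERM. */
    public void stop() throws InterruptedException {
        // 1. Stop accepting new connections first.
        if (serverChannel != null) {
            serverChannel.close().sync();
        }
        // 2. Shut down the event loops with a zero quiet period and short
        //    hard deadlines instead of the default 2s quiet period / 15s timeout.
        bossGroup.shutdownGracefully(0, 2, TimeUnit.SECONDS).sync();
        workerGroup.shutdownGracefully(0, 5, TimeUnit.SECONDS).sync();
    }
}
```

If waiting out orderly FIN handshakes on ~500k sockets turns out to be the bottleneck, setting childOption(ChannelOption.SO_LINGER, 0) on the ServerBootstrap makes close() send an RST and release each socket immediately, at the cost of clients seeing a hard reset. It is also worth checking what stop timeout systemd itself applies (systemctl show <unit> -p TimeoutStopSec) so the process is not killed mid-shutdown.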

Related

ASP.NET Core 2.2 experiencing high CPU usage

I have an ASP.NET Core 2.2 web service hosted on Azure (S2 plan). The problem is that my application sometimes hits very high CPU usage (almost 99%). What I have done so far: checked Process Explorer on Azure, where I see a lot of processes consuming CPU. Does anyone know whether it is normal for these processes to consume CPU?
I currently have no idea where they come from; maybe it is normal for them to be there.
Briefly, about my application:
There is currently not much traffic, 500-600 requests a day. Most requests communicate with MS SQL, querying records, inserting, and so on.
I am also using MS WebSocket, but the high CPU occurs even when no WebSocket client is connected to the service, so I doubt that is the cause. I tried Apache ab for load testing, but there is no clear pattern: a load test does not reliably trigger high CPU afterwards. Sometimes it happens during load testing, sometimes it does not.
I have just updated the screenshot of the processes; I can see that a lot of threads are locked/in use while FluentMigrator is running its logging.
Update: I will remove the FluentMigrator logging middleware from the Configure method and see how the situation develops.
Update 2: I removed the FluentMigrator logging. Since then I have not noticed any CPU usage over 90%.
I am still confused, though: my CPU usage keeps oscillating. Is that a healthy CPU usage graph or not?
I also ran a load test against the WebSocket server.
I made a script that calls WebSocket functions every 100 ms from 6-7 clients, so every 100 ms there are about 7 calls to the WebSocket server from different clients; each function queries or inserts some data (approximately 3-4 queries per WebSocket function).
What I noticed is that on Azure S1 (DTU 20), after 2 minutes I run out of SQL pool connections. If I increase to DTU 100, it handles the 7 clients properly without any 'no connection pool' errors.
So the first question: is this CPU oscillation normal?
Second: should I be getting 'no SQL connection free' errors under this kind of load test on DTU 10 Azure SQL? I am afraid that by creating a scoped service inside my singleton WebSocket service I am leaking connections.
This topic is getting rather long; maybe I should move it to a new question?
At this stage I would say you need to profile your application and figure out which areas of your code are CPU intensive. In the past I have used dotTrace, which highlights the most expensive methods in a call tree.
Once you know which areas of your code base are the least efficient, you can begin to refactor them. This could be as simple as changing some small operations, adding caching for queries, or introducing distributed locking, for example.
I believe the other DLLs are showing CPU usage because your code is calling methods inside those DLLs.

Diagnosing Sporadic Lockups in Website Running on IIS

Goal
Determine the cause of the sporadic lock-ups of our web application running on IIS.
Problem
An application we run on IIS sporadically locks up throughout the day. When it locks up, it does so on all workers and on all load-balanced instances.
Environment and Application
The application runs on 4 different Windows Server 2016 machines, load balanced by HAProxy using a round-robin scheme. The IIS application pools hosting this website are configured with 4 workers each, and the hosted application is 32-bit. The IIS instances do not use a shared configuration file, but the application pools for this application are all configured the same.
This application is the only one in its IIS application pool. It is an ASP.NET Web API targeting .NET 4.6.1 and does not create threads of its own.
Theory
My theory is that requests are coming in that take ~5-30 minutes to complete. Every machine gets tied up servicing these requests, so they look "locked up". The company rolled its own logging mechanism, and from it I can confirm such long-running requests. The team responsible for the application has cleaned up many of them, but I am still seeing ~5 minute requests in the log.
I do not have access to the machines personally, so our systems team has captured memory dumps of the application when this happens. In the dumps I generally see ~50 running threads, all in our code; they are spread across the application and do not seem to be stopped on any common piece of code. When the application is running correctly, the dumps show 3-4 running threads. I have also looked at performance counters such as ASP.NET\Requests Queued, but it never shows any queued requests. During these periods, CPU, memory, disk, and network usage look normal. In WinDbg, none of the threads show high CPU time other than the finalizer thread, which as far as I know should live for the entire lifetime of the process.
Conclusion
I am looking for a means to prove or disprove my theory as to why we are locking up, as well as suggestions for metrics or tools I should look at.
This issue came down to our application running a query that stitched a table with 2,000,000 records together with another table. Memory became so fragmented that the garbage collector was spending more time finding places to put objects, and moving them around, than it was running our code. This is why the application appeared to still be working and why there were no exceptions. Oddly, IIS would time out the requests but would continue processing the threads.

How can I safely restart a Node application which receives a high volume of traffic?

For example, in the Python world you would use uWSGI or Gunicorn to restart your Python web app if it stops running for any reason, e.g. memory leaks, unexpected runtime errors, etc. However, this is done in such a way that connections aren't dropped (so no 502s).
Looking at the options for Node it seems PM2 is a popular choice but I have two concerns:
Can it make the same guarantees regarding connection draining (no 502s, please)?
When I looked at PM2 before, it seemed to cause significant performance degradation (hundreds of added milliseconds) in my application, where every millisecond of latency counts.
So my question is: where performance is a serious consideration and we cannot drop connections while restarting, what are Node's equivalents of uWSGI and Gunicorn?
Here are some strategies:
Use node.js clustering with N worker processes. You can then restart any single worker process and not affect overall availability.
Use a load balancer in front of multiple clusters. Then temporarily configure the load balancer to send traffic to only one cluster; once the cluster taken out of rotation has finished with all of its open connections, you can restart all the processes in that cluster.
For even more flexibility, use multiple clusters on separate machines. That allows you to take a server machine down for hardware maintenance without disrupting overall availability.
If you have resources shared among the clustered processes, such as databases, you will also need redundancy for them in order to restart them without interruption.
Now, of course, you have to make sure that taking part of your system out of service for a reboot or maintenance still leaves you with enough capacity, so you would typically do this when overall service load is low (e.g. 4 am for your largest user base).
PM2 is one tool that supports parts of what is recommended here (such as clustering and seamlessly restarting part of a cluster). There are other tools.

JMeter never fails

I'm trying to stress test a server with JMeter. I followed the manual and successfully created the tests (the tests run OK and the responses are correct).
However, even as I keep increasing the number of threads, nothing ever fails, yet I keep reading that there must be limits somewhere. What am I doing wrong?
My CPU runs at around 5% when I'm not running JMeter. With 3000 threads, I see the thread count increase by 3000 and CPU usage rise to around 15%. JMeter also never complains that something went wrong.
My JMeter configuration is:
Number of threads: 3000
Ramp-Up Period: 30
Loop Count: Forever (I let it run for over an hour and still nothing went wrong)
The bottleneck now is my internet connection, which simply can't handle this load and maxes out at 2.1 Mbps. Is this causing the problem? It increases my latency from 10 ms per thread to over 5000 ms per thread, but the threads keep running.
Assuming you have confirmed that you definitely aren't getting back any errors (e.g. using a results table listener, or logging/displaying only errors using a results graph listener) and your internet connection is running at capacity then yes, it does sound like your internet connection is the bottleneck. It doesn't sound like your server is being stressed at all.
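As a back-of-envelope check (assuming each request/response exchange is on the order of 450 bytes; that size is an assumption, not something from your test plan):

2.1 Mbps ≈ 262,000 bytes/s of line capacity
262,000 bytes/s ÷ 3000 threads ≈ 87 bytes/s per thread
450 bytes ÷ 87 bytes/s ≈ 5 s per exchange

That lines up with the ~5000 ms latency you are seeing: the threads are queuing on your uplink, and the server is barely being exercised.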
If you can easily make use of other machines (e.g. servers in the same location as the server you are testing), you could try using JMeter remote (distributed) testing to sidestep the limitations of your internet connection. See http://jmeter.apache.org/usermanual/remote-test.html for details.
Alternatively, if it's easy (e.g. if you're using VMs in a cloud and can easily spin one up with your software on it), you could try using the least powerful server available and stress testing that, to see whether you can make it struggle even over your internet connection (just as a sanity check).
If this doesn't help, more details on your server (hardware specification, web server software and thread pool settings, language) and the site/pages you are testing (mostly static or dynamic? large requests/responses?) would be useful. I've certainly managed to make lower-powered machines (e.g. an EC2 m1.small) struggle using JMeter over a 2 Mbps connection, but it depends on the site you're testing.

Socket.io: How many concurrent connections can WebSockets handle?

I am wondering if you have any data on concurrent connections to WebSockets. I am using Socket.io on a Node.js server. How many clients can connect to the socket and receive data without bringing my server down? 1,000? 1,000,000?
Thanks!
This depends heavily on your hardware configuration, on what exactly you are doing/processing on the server side, and on whether your system is optimized for many concurrent connections. For example, on a Linux machine you would by default probably first hit the maximum number of open files, or other limits (which can be raised), before running into hardware resource exhaustion or similar scalability issues. The key resource may be the amount of RAM your node.js program can allocate to keep concurrent connections open while still being able to accept new ones.
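As a rough illustration of the RAM point (the per-connection cost below is an assumed ballpark; the real figure depends on your socket buffer sizes and per-client state): if each idle connection costs on the order of 20-40 KB between kernel socket buffers and the objects your application keeps per client, then 100,000 connections already need roughly 2-4 GB of memory before the server does any real work.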
Check this blog: http://blog.caustik.com/2012/08/19/node-js-w1m-concurrent-connections/
We use the same principle. Previously our node.js server would crash after about 100 concurrent connections due to hardware constraints, but after we moved to Amazon EC2 it is now highly scalable.
