What can cause ExpressJS to timeout all requests? - node.js

I have an ExpressJS app running on EC2 in a docker container which suddenly stopped responding to any requests after 6 days of normal operation with similar requests. The CPU and network traffic looked normal, but I don't have memory metrics because AWS doesn't automatically collect those.
Once I restarted the container, it resumed responding to requests as normal.
Under what circumstances would an Express app stop responding to requests?
Possible causes I can think of:
Code running stuck in an infinite loop (but this would max out the CPU)
Memory full
What else could cause this?

Related

Why does my Node express pm2 primary process in cluster mode never handle incoming requests?

I am running a node express app in pm2 cluster mode. Everything is working fine; however, I have noticed that incoming connections to my express routes only ever hit the forked worker app instances and never the primary (master) process.
In the pm2 documentation (https://pm2.keymetrics.io/docs/usage/cluster-mode/) on cluster mode they say
Under the hood, this uses the Node.js cluster module
In the "how it works" section on the Node.js website (https://nodejs.org/api/cluster.html#cluster_how_it_works) it says
The cluster module supports two methods of distributing incoming
connections. The first one (and the default one on all platforms
except Windows) is the round-robin approach, where the primary process
listens on a port, accepts new connections and distributes them across
the workers in a round-robin fashion, with some built-in smarts to
avoid overloading a worker process.
Does this mean the primary process will never actually handle any incoming requests? That can't be!! That would make the entire primary process a glorified load balancer and essentially a dead weight with a bunch of code and a full CPU never really getting used.
If the above IS accurate does that mean that the primary process is a bottleneck for all incoming express connections?
What am I understanding incorrectly or doing wrong that the primary (master) process never actually handles any requests please?
After I completely removed and reinstalled pm2 and then re-added all my node apps back in cluster mode via the CLI, the first instance (app 0) started receiving messages. I didn't change any code, so I'm not exactly sure what the issue was. Thank you to #JonePolvora for your time with comments that led me to troubleshoot more.
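For reference, here is a simplified sketch (not pm2's actual implementation) of the cluster pattern the quoted docs describe: the primary process only forks workers and, by default, round-robins incoming connections across them, while the Express app itself runs in the workers.
const cluster = require('cluster');
const os = require('os');
if (cluster.isPrimary) { // cluster.isMaster on older Node versions
  // The primary never runs Express; it forks workers and distributes
  // incoming connections to them.
  for (let i = 0; i < os.cpus().length; i++) cluster.fork();
} else {
  const express = require('express');
  const app = express();
  app.get('/', (req, res) => res.send(`handled by worker ${process.pid}`));
  app.listen(3000);
}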

Google App Engine NodeJS app stops after 30 min

I have a very basic NodeJS application hosted on Google App Engine that executes an async function on 15 second intervals. The deployment is successful and the app starts and runs fine, but stops after about 30 minutes with the following error logs. This runs fine locally, though.
Quitting on terminated signal
Start program failed: user application failed with exit code -1 (refer to stdout/stderr logs for more detail): signal: terminated
I have used App Engine before with no issues, so I'm not sure why this is happening. I used https://github.com/GoogleCloudPlatform/nodejs-docs-samples/tree/main/appengine/typescript as a reference and am still not able to resolve this issue. Any ideas?
Quitting on terminated signal
You may receive this error if your App Engine instance is scaling down or shutting down, possibly because:
Your application runs out of Instance Hours quota.
Your instance is moved to a different machine, either because the current machine that is running the instance is restarted, or App Engine moved your instance to improve load distribution.
There are good strategies to avoid instance downtime; here are a few (see the example app.yaml fragment below):
Keep a minimum number of idle instances.
Use manual scaling, which lets you specify a number of instances that will run continuously regardless of the load level.
Increase the maximum number of instances.
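As an illustration only (the values are placeholders, adjust them to your own traffic), these app.yaml fragments correspond to the strategies above:
automatic_scaling:
  min_idle_instances: 1   # keep a warm instance ready for traffic spikes
  max_instances: 10       # raise the ceiling if you regularly scale out
# ...or, instead of automatic scaling, pin a fixed number of instances:
# manual_scaling:
#   instances: 2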
Asynchronous background work is not recommended in App Engine. It can result in higher billing, and users may also experience increased latency because of high pushback or request queuing. Google recommends using Cloud Tasks instead. With Cloud Tasks, HTTP requests are long-lived and return a response only after any asynchronous work ends.

AppEngine nodejs app sporadically sends 502s and restarts

We have a nodejs app that gets successfully deployed to a standard environment. Something happens after about two hours (or sooner depending on traffic): our downstream clients start receiving a bunch of 502 responses and then the service stabilizes. We think this has been happening for at least a few months.
When investigating the cause of the 502s, I see that:
There are no unhandled exception/promise rejection logs to indicate that the node app has crashed
I console.log when receiving SIGTERM and that, too, does not appear in the logs
The logs of the nginx sidecar include the following:
2020/06/16 23:11:11 [error] 35#35: *1149 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 169.254.1.1, server: _, request: "POST /api/redacted HTTP/1.1", upstream: "http://127.0.0.1:8081/api/redacted", host: "redacted.appspot.com"
I'm assuming that the 502s are coming from nginx because the upstream has disappeared. Are there other explanations I should explore?
If GAE is replacing my app containers intentionally, shouldn't that process prevent these types of 502s?
Should I expect something other than SIGTERM to be sent by the environment when the application/container is getting replaced?
Update #1 (2020-06-22)
I investigated and found evidence that we might be exceeding the memory quota, so I changed our instance_class from F1 to F2. As I write this, our instances are sitting at ~200M of memory usage (F2s have 512M available). Additionally, I use the --max-old-space-size switch to set node's memory usage to 496M.
The 502s are still happening.
I suspect that the 502s are happening as a result of the autoscaler terminating instances. Our app never receives SIGTERM (even during deployments), which means I can't close HTTP keep-alive connections gracefully and might explain why nginx raises Connection reset by peer.
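For what it's worth, if SIGTERM did arrive, a minimal graceful-shutdown sketch would look something like this (the port is just an example):
const express = require('express');
const app = express();
const server = app.listen(process.env.PORT || 8081);
process.on('SIGTERM', () => {
  console.log('SIGTERM received, draining connections');
  // Stop accepting new connections; the callback fires once in-flight
  // requests have finished.
  server.close(() => process.exit(0));
  // On newer Node versions, also drop idle keep-alive sockets:
  if (server.closeIdleConnections) server.closeIdleConnections();
});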
Update #2 (2020-06-24)
Our service is just standard REST type stuff, no heavy loops.
I'll post another update with some memory graphs but I don't see any spikes. Perhaps a small memory leak.
Here's our app.yaml:
service: redacted
runtime: nodejs12
instance_class: F2
handlers:
- url: /.*
secure: always
redirect_http_response_code: 301
script: auto
We had a very similar problem with our Node.js app deployed on App Engine Flexible.
In our case, we ultimately determined that we had memory pressure that was causing the Node.js garbage collector to sometimes delay the processing of a request for hundreds of milliseconds (sometimes more). This caused our health check URLs to sporadically time out, prompting GAE to remove the instance from the active pool.
Because we typically had just two instances handling the steady traffic, removing one instance quickly overloaded the remaining instance, and it would soon suffer the same fate.
We were surprised to find that it could take two minutes or longer before App Engine assigned traffic to a newly-created instance. Between the time our original instances were declared unhealthy, and when new instance(s) were online, 502s would be returned (presumably by GAE's nginx) to the client.
We were able to stabilize the environment simply by adding:
automatic_scaling:
  min_num_instances: 4
To our app.yaml. Because two instances were generally sufficient for the traffic, ensuring we always had four running apparently kept our memory usage low enough to prevent the GC from stalling request handling, and even if it did, we had enough excess capacity to handle one instance being removed.
The scaling settings for GAE standard are slightly different.
In retrospect, we could see that our latency/response times would get a little "jittery" before the real problems started. Most responses had typical response times ~30ms, but increasingly we would see outlier requests in the x00ms range. You may want to check your request logs to see if you see something similar.
New Relic's Node.js VM data was helpful in detecting that garbage collection was taking an increasing amount of time.
Usually, 502 messages are errors on the nginx side, as you have mentioned. The detailed logs related to these errors are not surfaced to Cloud Logging yet.
Based on the behavior you describe, it seems workload-related, so this case may come down to running out of resources.
There are some things that are well worth taking a look at:
Check your metrics. The memory and CPU usage should be under healthy limits.
Check whether your scaling settings are sufficient for your workload.
Is there a chance you could share these metrics from around the restart event?
Also, it would be good if you could share the resources and scaling configuration from your app.yaml.

Azure Http connection gets interrupted after 5 minutes

We have a setup with several RESTful APIs on the same VM in Azure.
The websites run in Kestrel on IIS.
They are protected by the Azure Application Gateway with a firewall.
We now have requests that would run for at least 20 minutes.
The requests run the full length uninterrupted in Kestrel (visible in the logs), but the sender either gets "socket hang up" after exactly 5 minutes or waits forever even if the request finished in Kestrel. The request continues in Kestrel even if the connection was interrupted for the sender.
What I have done:
Wrote a small example application that returns after a set number of seconds, to rule out our websites being the problem.
Ran the request in the VM (to localhost): No problems, response was received.
Ran the request within Azure from one to another VM: Request ran forever.
Ran the request from outside of Azure: Request terminates after 5 minutes
with "socket hang up".
Checked configured timeouts: Kestrel: 50m, IIS: 4000s, ApplicationGateway-HttpSettings: 3600
Requests were tested with Postman.
Is there another request or connection timeout hidden somewhere in Azure?
We now have requests that would run for at least 20 minutes.
This is a horrible architecture and it should be rewritten to be async. Don't take this personally, it is what it is. Consider returning a 202 Accepted with a Location header to poll for the result.
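A rough sketch of that 202 Accepted pattern (the in-memory jobs map and the setTimeout stand-in for the long-running work are illustrative placeholders, not a real implementation):
const express = require('express');
const crypto = require('crypto');
const app = express();
const jobs = new Map(); // in-memory store, illustrative only
app.post('/reports', (req, res) => {
  const id = crypto.randomUUID();
  jobs.set(id, { status: 'pending' });
  // Placeholder for the real 20-minute job (a queue/worker in practice):
  setTimeout(() => jobs.set(id, { status: 'done', result: 42 }), 20 * 60 * 1000);
  res.status(202).location(`/reports/${id}`).end();
});
// Clients poll this URL until the job reports it is done.
app.get('/reports/:id', (req, res) => {
  const job = jobs.get(req.params.id);
  if (!job) return res.sendStatus(404);
  res.json(job);
});
app.listen(3000);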
You're most probably hitting the Azure SNAT layer timeout.
Change it under the Configuration blade for the Public IP.
So I ran into something like this a little while back:
For us the issue was probably the timeout, as the other answer suggests, but instead of increasing the timeout the solution was to add PgBouncer in front of our Postgres database to manage the connections and make sure a new one is started before the timeout fires.
Not sure what your backend connection looks like but something similar (backend db proxy) could work to give you more ability to tune connection / reconnection on your side.
We were running AKS (Azure Kubernetes Service), but all Azure public IPs obey the same rules that cause issues similar to this one.
While it isn't an answer, I know there are also two types of public IP addresses; one of them is considered 'basic' and doesn't have the same configurability. Could this be related to the difference between basic and standard public IPs / load balancers?

Node.js Server Timeout Problems (EC2 + Express + PM2)

I'm relatively new to running production node.js apps and I've recently been having problems with my server timing out.
Basically after a certain amount of usage & time my node.js app stops responding to requests. I don't even see routes being fired on my console anymore - it's like the whole thing just comes to a halt and the HTTP calls from my client (iPhone running AFNetworking) don't reach the server anymore. But if I restart my node.js app server everything starts working again, until things inevitably stop again. The app never crashes, it just stops responding to requests.
I'm not getting any errors, and I've made sure to handle and log all DB connection errors so I'm not sure where to start. I thought it might have something to do with memory leaks so I installed node-memwatch and set up a listener for memory leaks but that doesn't get called before my server stops responding to requests.
Any clue as to what might be happening and how I can solve this problem?
Here's my stack:
Node.js on AWS EC2 Micro Instance (using Express 4.0 + PM2)
Database on AWS RDS volume running MySQL (using node-mysql)
Sessions stored w/ Redis on same EC2 instance as the node.js app
Clients are iPhones accessing the server via AFNetworking
Once again no errors are firing with any of the modules mentioned above.
First of all you need to be a bit more specific about timeouts.
TCP timeouts: TCP divides a message into packets which are sent one by one. The receiver needs to acknowledge having received each packet. If the receiver does not acknowledge having received the packet within a certain period of time, a TCP retransmission occurs, which is sending the same packet again. If this happens a couple more times, the sender gives up and kills the connection.
HTTP timeout: An HTTP client like a browser, or your server while acting as a client (e.g: sending requests to other HTTP servers), can set an arbitrary timeout. If a response is not received within that period of time, it will disconnect and call it a timeout.
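For example, a Node server acting as an HTTP client might set such a timeout like this (a hedged sketch; the URL is a stand-in for a real upstream):
const http = require('http');
// Abort the outgoing request if the socket is idle for 5 seconds.
const req = http.get('http://example.com/slow', { timeout: 5000 }, (res) => {
  res.resume(); // consume the response
});
req.on('timeout', () => {
  req.destroy(new Error('HTTP timeout')); // node does not abort automatically
});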
Now, there are many, many possible causes for this... from more trivial to less trivial:
Wrong Content-Length calculation: If you send a request with a Content-Length: 20 header, that means "I am going to send you 20 bytes". If you send 19, the other end will wait for the remaining 1. If that takes too long... timeout.
Not enough infrastructure: Maybe you should assign more machines to your application. If (total load / # of CPU cores) is over 1, or your memory usage is high, your system may be over capacity. However keep reading...
Silent exception: An error was thrown but not logged anywhere. The request never finished processing, leading to the next item.
Resource leaks: Every request needs to be handled to completion. If you don't do this, the connection will remain open. In addition, the IncomingMessage object (aka: usually called req in express code) will remain referenced by other objects (e.g: express itself). Each one of those objects can use a lot of memory. (See the sketch after this list.)
Node event loop starvation: I will get to that at the end.
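To illustrate the silent-exception and resource-leak items above, here is a hedged sketch (db.query is a hypothetical helper, not a real API) of a handler that swallows an error and never finishes the response:
const express = require('express');
const app = express();
app.get('/report', async (req, res) => {
  try {
    const rows = await db.query('SELECT ...'); // hypothetical db helper
    res.json(rows);
  } catch (err) {
    // Bug: the error is logged but the response is never finished, so the
    // client waits until it times out and req/res stay referenced.
    console.error(err);
    // Fix: always end the response, e.g. res.status(500).end();
  }
});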
For memory leaks, the symptoms would be:
the node process would be using an increasing amount of memory.
To make things worse, if available memory is low and your server is misconfigured to use swapping, Linux will start moving memory to disk (swapping), which is very I/O and CPU intensive. Servers should not have swapping enabled.
cat /proc/sys/vm/swappiness
will return you the level of swappiness configured in your system (goes from 0 to 100). You can modify it in a persistent way via /etc/sysctl.conf (requires restart) or in a volatile way using: sysctl vm.swappiness=10
Once you've established you have a memory leak, you need to get a core dump and download it for analysis. A way to do that can be found in this other Stackoverflow response: Tools to analyze core dump from Node.js
For connection leaks (you leaked a connection by not handling a request to completion), you would see an increasing number of established connections to your server. netstat -a -p tcp | grep ESTABLISHED | wc -l can be used to count them.
Now, the event loop starvation is the worst problem. If all your code runs in short-lived chunks, node works very well. But if you do CPU intensive stuff and have a function that keeps the CPU busy for an excessive amount of time... like 50 ms (50 ms of solid, blocking, synchronous CPU time, not asynchronous code taking 50 ms), operations being handled by the event loop such as processing HTTP requests start falling behind and eventually time out.
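A minimal illustration of what starves the loop (any route on the same process stalls while the synchronous loop below runs):
const express = require('express');
const app = express();
app.get('/block', (req, res) => {
  const until = Date.now() + 5000;
  while (Date.now() < until) {} // 5 s of solid, blocking, synchronous CPU time
  res.send('done');
});
// While /block is busy, even this trivial route cannot respond:
app.get('/ping', (req, res) => res.send('pong'));
app.listen(3000);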
The way to find a CPU bottleneck is using a performance profiler. nodegrind/qcachegrind are my preferred profiling tools but others prefer flamegraphs and such. However it can be hard to run a profiler in production. Just take a development server and slam it with requests. aka: a load test. There are many tools for this.
Finally, another way to debug the problem is:
env NODE_DEBUG=tls,net node <...arguments for your app>
node has optional debug statements that are enabled through the NODE_DEBUG environment variable. Setting NODE_DEBUG to tls,net will make node emit debugging information for the tls and net modules... so basically everything being sent or received. If there's a timeout you will see where it's coming from.
Source: Experience of maintaining large deployments of node services for years.
