Node.js app on Heroku experiences a lot of H12 errors - node.js

I can't seem to figure out why I get H12 errors throughout the day. I see the console showing 30000ms for a response, and Heroku flags the request with an H12. If I hit the exact same route in Postman, it responds in under 300ms.
Please, any advice is appreciated.

This generally occurs when you have a long-running async process. Heroku imposes a 30-second timeout on every request; if the app hasn't sent any response data within that window, the router returns the H12 error.
I faced this issue while scraping a lot of social media data for a website. I installed an Express timeout middleware so that timeout errors are caught at the dyno level, which stops the server from crashing.
However, the timeout error itself would still remain. To get rid of it, investigate your code to find which route is causing the timeout. Heroku has given a few tips here: https://help.heroku.com/AXOSFIXN/why-am-i-getting-h12-request-timeout-errors-in-nodejs
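The answer above doesn't name the exact timeout package; connect-timeout is one common choice for Express. A minimal sketch of that approach, where the /scrape route and someLongRunningJob() are hypothetical placeholders for the real work:

    // Minimal sketch using the connect-timeout middleware (one common choice;
    // the answer above does not name the exact package it used).
    const express = require('express');
    const timeout = require('connect-timeout');

    const app = express();

    // Abort requests that take longer than 25 seconds, safely under Heroku's 30s cap.
    app.use(timeout('25s'));

    // '/scrape' and someLongRunningJob() are hypothetical, for illustration only.
    app.get('/scrape', async (req, res, next) => {
      try {
        const data = await someLongRunningJob();
        if (req.timedout) return; // connect-timeout sets this flag once it fires
        res.json(data);
      } catch (err) {
        next(err);
      }
    });

    // If the middleware fires, send a clear 503 instead of letting Heroku raise an H12.
    app.use((err, req, res, next) => {
      if (req.timedout && !res.headersSent) return res.status(503).send('Request timed out');
      next(err);
    });

    // Hypothetical stand-in for the real work (e.g. scraping a social network).
    function someLongRunningJob() {
      return new Promise((resolve) => setTimeout(() => resolve({ ok: true }), 1000));
    }

    app.listen(process.env.PORT || 3000);

The point is simply that the app answers before the Heroku router gives up, so slow work shows up as a clean 503 in your own logs rather than as an H12.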
Explain your exact problem here, i.e. what you are trying to accomplish with your Node.js app, so that more specific help can be provided.

Related

REST API In Node Deployed as Azure App Service 500 Internal Server Errors

I have looked at the request trace for several requests that resulted in the same outcome.
What happens is I'll get HttpModule="iisnode", Notification="EXECUTE_REQUEST_HANDLER", HttpStatus=500, HttpReason="Internal Server Error", HttpSubstatus=1013, ErrorCode="The pipe has been ended. (0x6d)"
This is a production API. Fewer than 1% of requests get this result but it's not the requests themselves - I can reissue the same request and it'll work.
I log telemetry for every API request - basics on the way in, things like http status and execution time as the response is on its way out.
None of the requests that get this error show up in the telemetry, which makes me think something is happening somewhere between IIS and iisnode.
If anyone has resolved this or has solid thoughts on how to pin down what the root issue is I'd appreciate it.
Well, for me, what's described here covered the bulk of the issue: github.com/Azure/iisnode/issues/57. Setting keepAliveTimeout to 0 on the Express server reduced the 500s significantly.
Once the majority of the "noise" was eliminated, it was much easier to correlate the remaining 500s with things I could see in my logs. For example, I'm using a 3rd-party node package to resize images, and a couple of the "images" that were loaded into the system weren't images at all. Instead of gracefully throwing an exception, the package seems to exit the running node process. True story. So on Azure it would get restarted, but while that was happening, requests would get a 500 Internal Server Error.
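For reference, the keepAliveTimeout workaround is a one-line change, since app.listen() returns the underlying Node http.Server. A minimal sketch, assuming an ordinary Express app:

    const express = require('express');
    const app = express();

    // app.listen() returns the underlying http.Server instance.
    const server = app.listen(process.env.PORT || 3000, () => {
      console.log('listening');
    });

    // Workaround from github.com/Azure/iisnode/issues/57: a value of 0 disables
    // Node's keep-alive timeout, so idle kept-alive connections are not torn down
    // by Node on its own timer.
    server.keepAliveTimeout = 0;

Whether that trade-off is right depends on your setup; it simply stops Node from closing kept-alive connections out from under the front end.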

Error: socket hang up when sending a POST request using Postman + nodemon

I'm writing an application using the MVC model. Each time I try to send a POST request through Postman, I get no response, just an 'Error: socket hang up' message. It happened suddenly; seconds before this error occurred for the first time, I had sent requests without any errors. None of the answers on SO seem to solve this problem, although I know from them that nodemon can have an influence on this behaviour. I'm running the app on port 8000. I don't even know what piece of code would be useful for explaining this question, because everything worked before.

Heroku auto restart dyno on H12 Request timeout errors

We have a Node dyno processing small API requests, ~10/second. All requests complete in under 0.5s.
Once every few days, the dyno starts giving H12 Request Timeout errors on all requests. We couldn't discover the cause. Restarting fixes it.
How can we make Heroku automatically restart the dyno when H12 Request Timeout errors cross a threshold, e.g. more than 5/second?
As Ryan said, an H12 Request Timeout means that Heroku's load balancers are sending a request to your app but not getting a response in time (Heroku has a maximum response time of 30 seconds). Sometimes a request is just expensive to compute, or an inefficient DB query is delaying the response.
Yet the root of the problem is not necessarily an application error on your side.
In our case we have multiple web dynos handling requests in parallel. Now and then one of those dynos produces H12 timeouts while all the others run flawlessly, so we can rule out application problems. A restart of the affected dyno usually helps, as your application most likely lands on a different physical server whenever it is restarted.
So Heroku has "bad servers" in their rotation, and now and then your code will land on one of them. I cannot say whether it is a "noisy neighbor" problem. I also asked Heroku how to prevent this, and the only response I got was to pay for dedicated performance dynos, which is quite unsatisfying...
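Heroku has no built-in "restart the dyno when H12s cross a threshold" feature, so this behaviour has to be scripted externally. A rough sketch of the plumbing, using the Platform API's dyno restart endpoint (DELETE /apps/{app}/dynos/{dyno}); the countRecentH12s() helper, the app name, and the threshold are placeholders you would wire up to your own log drain or metrics:

    const https = require('https');

    // Restart a single dyno via the Heroku Platform API ("Dyno Restart").
    function restartDyno(app, dyno, token) {
      const req = https.request({
        method: 'DELETE',
        host: 'api.heroku.com',
        path: `/apps/${app}/dynos/${dyno}`,
        headers: {
          Accept: 'application/vnd.heroku+json; version=3',
          Authorization: `Bearer ${token}`,
        },
      }, (res) => console.log(`restart ${dyno}: HTTP ${res.statusCode}`));
      req.on('error', console.error);
      req.end();
    }

    // Placeholder: in a real monitor this would count recent H12 lines coming in
    // from a log drain; it returns 0 here so the sketch is inert.
    function countRecentH12s() {
      return 0;
    }

    // Hypothetical polling loop using the threshold from the question.
    setInterval(() => {
      if (countRecentH12s() > 5) {
        restartDyno('my-app', 'web.1', process.env.HEROKU_API_TOKEN);
      }
    }, 1000);

This only sketches the restart call itself; deciding when a restart is actually warranted (and avoiding restart loops) is the harder part.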
H12 Request timeout means that Heroku's load balancers are sending a request to your app but not getting a response.
This can happen for lots of reasons. Since the app is already working, you can likely rule out configuration issues, so now you are looking at the application code and will have to inspect the logs to understand what's happening. I'd suggest using one of their logging add-ons like Papertrail so you have a history of the logs when this happens.
Some things it could be, but not limited to:
Application crashing and not restarting
Application generating an error, but no response being sent (see the sketch below)
Application getting stuck in the event loop, preventing new requests from being handled
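For the first two causes in that list, the fix is mostly failure hygiene: crash loudly so the process gets replaced, and make sure every error path still sends a response. A minimal Express-style sketch, purely illustrative since the question doesn't say which framework is in use:

    const express = require('express');
    const app = express();

    // Cause 1: crash visibly and exit so Heroku restarts the dyno, rather than
    // leaving a wedged process that never answers the router.
    process.on('uncaughtException', (err) => {
      console.error('uncaught exception', err);
      process.exit(1);
    });
    process.on('unhandledRejection', (err) => {
      console.error('unhandled rejection', err);
      process.exit(1);
    });

    // Cause 2: a catch-all error handler so a thrown error still produces an HTTP
    // response instead of leaving the router waiting for the full 30 seconds.
    app.use((err, req, res, next) => {
      console.error(err);
      if (!res.headersSent) res.status(500).json({ error: 'internal error' });
    });

    app.listen(process.env.PORT || 3000);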
Heroku provides some documentation around the issue that might help in debugging your situation:
https://devcenter.heroku.com/articles/request-timeout
https://help.heroku.com/AXOSFIXN/why-am-i-getting-h12-request-timeout-errors-in-nodejs

How to debug a NodeJS blocked event loop?

We have a NodeJS/Express server running in production, and occasionally all requests get blocked. The web requests are received, but not processed (and they eventually all time out). After a few minutes it will start responding to requests again, but then almost immediately begins blocking like before.
We've been trying to reproduce the issue locally, but can't, so we haven't been able to determine the cause. My guess is the event loop is getting blocked by a synchronous operation that either takes too long to complete or never completes at all.
Are there any ways to debug a live production system and figure out what's causing the block? I've searched, but could only find solutions for local development. Is my best option to look back at the logs, find the last request that completed before the blocking started, and debug that?
Using Node 6.2.2, Express 4.13.4, and running on Heroku.
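One low-tech way to instrument a live system for this, which works even on Node 6.x, is to measure how late a setInterval callback fires: if the event loop is blocked by synchronous work, the drift spikes and shows up in your production logs, narrowing down when (and after which request) the blockage starts. A minimal sketch; the interval and threshold values are arbitrary:

    // Log whenever the event loop was blocked long enough to delay this timer.
    const CHECK_INTERVAL_MS = 500;
    const LAG_THRESHOLD_MS = 200;

    let last = Date.now();
    setInterval(() => {
      const now = Date.now();
      const lag = now - last - CHECK_INTERVAL_MS;
      if (lag > LAG_THRESHOLD_MS) {
        console.error('event loop blocked for ~' + lag + 'ms');
      }
      last = now;
    }, CHECK_INTERVAL_MS);

There are npm packages built around the same idea (e.g. blocked), but the hand-rolled version has no dependencies and is easy to drop into an app whose logs you are already shipping to Papertrail or similar.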

Heroku H12 Timeout Error with Node.js

Currently, I am working on a REST API using the Node hapijs framework. The API is deployed on Heroku.
There is a GET endpoint in the API that makes a request to retrieve data from a third party and processes the data before sending a reply. This particular endpoint times out from time to time. When it does, Heroku returns an H12 error, and once it has timed out, subsequent requests to that endpoint also result in H12 errors. I have to restart the application on Heroku to get the endpoint working again. No other endpoints in the API are affected in any way and continue to work just fine even after the error has occurred.
In my debugging process and looking through the logs, it seems that there are times when a response is not returned from the third party API, causing the error.
I've tried the following solutions to try and solve the problem:
I am using the request library to make requests. Hence, I've tried setting a timeout to 5000 ms as part of the options passed in to the request. It has worked at times... the timeout is triggered and the endpoint sends the timeout error associated with request. This is the kind of behavior that I would like, since subsequent requests to the endpoint work. However, there are times when the request timeout is not triggered but Heroku still returns an H12 error (always after 30 seconds, the Heroku default). After that, subsequent requests to that endpoint return the H12 error (also after 30 seconds). It seems that some sort of process gets "stuck" on Heroku and is not terminated until I restart the app.
I've tried adding a timeout to the hapi.js route config object. I get the same results as above.
I've continued doing research and suspect that the issue has to do with the descriptions given here and here. It seems like setting a timeout at the app server level that can send a SIGKILL to the Heroku worker might do the trick. It seems fairly straightforward in Ruby, but I cannot find much information on how to do this in Node.
Any insight is greatly appreciated. I am aware that a timeout might occur when making a request to a third party. That is not the issue. The issue is that the endpoint seems to get "stuck" on Heroku after a timeout and it becomes unresponsive.
Thanks for the help!
I had a similar issue and after giving up for a day, I came back to it and found my error. I wasn't sending a response to the client when an error occurred on the server side. Make sure you are returning a response no matter what the result of your server side algorithm is. If there is an error, return that. If the request was successful, return that response. I hope that helps.
If that doesn't help, check Heroku's guides on handling request timeouts; the Debugging request timeouts section in particular could be of help.
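Combining the two ideas above (always answer, and kill a truly wedged worker, which is what the rack-timeout references in the question amount to), a rough Node sketch might look like the following. It uses a plain http server purely for illustration since the question itself is on hapi; handleRequest() is a hypothetical stand-in for the real route logic, and the two timeout values are arbitrary:

    const http = require('http');

    const SOFT_TIMEOUT_MS = 25000; // respond before Heroku's 30-second H12 cutoff
    const HARD_TIMEOUT_MS = 29000; // assume the handler is wedged; let Heroku restart us

    const server = http.createServer((req, res) => {
      // Soft timeout: make sure *some* response goes out for a slow upstream.
      const soft = setTimeout(() => {
        if (!res.headersSent) {
          res.statusCode = 503;
          res.end('upstream timed out');
        }
      }, SOFT_TIMEOUT_MS);

      // Hard timeout: if the handler never finishes, exit so the dyno gets replaced
      // instead of serving H12s until someone restarts the app manually.
      const hard = setTimeout(() => {
        console.error('hard timeout on ' + req.url + ', exiting');
        process.exit(1);
      }, HARD_TIMEOUT_MS);

      handleRequest(req, res)
        .catch((err) => {
          console.error(err);
          if (!res.headersSent) {
            res.statusCode = 500;
            res.end('internal error');
          }
        })
        .then(() => {
          clearTimeout(soft);
          clearTimeout(hard);
        });
    });

    // Hypothetical stand-in for the real route logic (e.g. the third-party GET).
    function handleRequest(req, res) {
      return Promise.resolve().then(() => {
        if (!res.headersSent) res.end('ok');
      });
    }

    server.listen(process.env.PORT || 3000);

Exiting the process is aggressive, but it is the rough Node equivalent of the Ruby rack-timeout behaviour the question mentions, and on Heroku the dyno will simply be restarted.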

Resources