How can I solve an AWS load balancer 502 error? - node.js

I am trying to solve an ELB 502 error.
I am using an Express Node.js server, deployed with ECS (instance deployment type).
The ELB keeps returning 502 errors.
When I check the ELB access logs, many entries contain
request_processing_time, target_processing_time >= 0
response_processing_time = -1
I found documentation saying this occurs when the server's keepAliveTimeout is shorter than the ALB's idle timeout.
However, I set the server's timeouts to 65 seconds in
app.js
server.keepAliveTimeout = 65000;
server.headersTimeout = 66000;
and the ELB idle timeout is 60 seconds.
What could the problem be?

This issue occurs because your server closes the connection while the load balancer still considers it open, so the ALB returns a 502 (Bad Gateway). The server's keepAliveTimeout must be strictly greater than the ALB's idle timeout, not equal to it:
const http = require('http');

const server = http.createServer((req, res) => {
  res.end('Hello World!');
});

// The ALB idle timeout defaults to 60 seconds, so stay a few seconds above it.
server.keepAliveTimeout = 65 * 1000;
// headersTimeout must exceed keepAliveTimeout: https://github.com/nodejs/node/issues/27363
server.headersTimeout = 66 * 1000;

server.listen(8080, () => {
  console.log('Server listening on port 8080');
});
In this example, keepAliveTimeout is set to 65 seconds (65,000 milliseconds), a few seconds above the default 60-second ALB idle timeout, so the ALB always closes idle connections before Node does. Adjust this value based on the idle timeout configured on your ALB.
After making this change, you'll need to restart your Node.js application for the changes to take effect.
NOTE: Also check whether your target is reported healthy by the target group's health checks.

Related

Why does a NodeJS http server close socket on timeout without response?

Given a NodeJS http server with a timeout of 10s:
const httpServer = require('http').createServer(app);
httpServer.timeout = 10 * 1000;
On timeout, Postman shows this without any response code:
Error: socket hang up
Warning: This request did not get sent completely and might not have all the required system headers
If the NodeJS server is behind an nginx reverse proxy, nginx returns a 502 response (upstream prematurely closed connection while reading response header from upstream). But here it is just NodeJS/express running on localhost. Still one would expect a proper http response.
According to this answer, this is expected behavior, the socket is simply destroyed.
In an architecture with an nginx reverse proxy, is it usual that the server just destroys the socket without sending a timeout response to the proxy?
You're setting the socket timeout when you set the http server timeout. The socket timeout prevents abuse from clients that might hang on to your connection to DoS you. It has other benefits, like ensuring a certain level of service (though these are often more important when you're the client).
The reason it uses a socket timeout instead of sending a 408 status code (Request Timeout) is because the status code might have already been sent for a successful message.
If you want to implement a response timeout on your backend and handle it gracefully, you can timeout the response yourself. Note, you should likely respond with a 408 instead. 502 is for gateways like http proxies (nginx) to indicate that a downstream connection failed.
Here's a simple strawman implementation of handling that:
const httpServer = require('http').createServer((req, res) => {
  // Simulated slow handler: would respond after 10 seconds.
  setTimeout(() => {
    if (res.writableEnded) return; // the 408 below has already been sent
    res.statusCode = 200;
    res.statusMessage = 'Ok';
    res.end('Done');
  }, 10000);
});

// A second 'request' listener acts as the response timeout.
httpServer.on('request', (req, res) => {
  setTimeout(() => {
    if (res.writableEnded) return;
    res.statusCode = 408;
    res.statusMessage = 'Request Timeout';
    res.end();
  }, 1000);
});

httpServer.listen(8080);

How to serve a nodeJS app via HTTPS on AWS EC2?

I'm trying to run a hello world express app on an EC2 instance and serve it via HTTPS.
Here is the server code:
var express = require('express');
var app = express();

app.get('/', function (req, res) {
  res.send('Hello World!\n');
});

const server = app.listen(3000, function () {
  console.log('Example app listening on port 3000!');
});
server.keepAliveTimeout = 65000; // Ensure all inactive connections are terminated by the ALB, by setting this a few seconds higher than the ALB idle timeout
server.headersTimeout = 66000; // Ensure the headersTimeout is set higher than the keepAliveTimeout due to this nodejs regression bug: https://github.com/nodejs/node/issues/27363
I created an EC2 instance and let it run there. Additionally to get HTTPS, I fired up an Application Load Balancer with an SSL certificate. I created a listener on port 443 and forwarded it to port 3000 on my EC2. Lastly I set up a Route53 entry to point to that ALB.
All I get 24/7 is 502 Bad Gateway. Am I missing something basic here?
How to run the most basic express server via HTTPS?
For anyone who might stumble upon this some time later:
If you wish to terminate HTTPS on the load balancer and speak plain HTTP to your app behind it, you need to select HTTP as the protocol and the port of your Node app when creating the target group in the console.
For some reason I assumed for hours it should be HTTPS and 443, since I wanted to accept HTTPS traffic.

Node app on AWS returning 503

I have an Hello World node app:
var express = require('express');
var app = express();

app.get('/', function (req, res) {
  console.log('in the get /');
  res.send('Hello World!');
});

app.listen(8080, function () {
  console.log('Example app listening on port 8080!');
});
I've pushed this to my EC2 instance. When I go to my URL the page is blank, and I see a 503 coming back. I'm watching the live Node logs, and the app is reaching app.get because I see 'in the get /' repeatedly.
I have 2 instances. The first is running Nginx, and requests to example.net get redirected to https://www.example.net. I also have a Load Balancer listening, which takes requests to www.example.net and directs them to my Node instance.
Incidentally, every few seconds I see a new 'in the get /' line, so my app is getting hit repeatedly from God knows where. Could this have something to do with the 503 (which indicates the server is busy)? Note: this worked fine yesterday.
EDIT
The app suddenly started returning "Hello World". I then restarted the app - making no code changes - and I'm back to getting 503s again.
The ELB is detecting that your instance is unhealthy because it is down during a restart (Unhealthy Threshold). It then has to pass the health check a certain number of times before it is considered healthy again (Healthy Threshold). You can configure all of this in the ELB settings.
I would decrease the Unhealthy Threshold if you only have one instance in your pool, and possibly decrease the Health Check Interval as well.

How to set the HTTP Keep-Alive timeout in a nodejs server

I'm currently doing some load testing against an Express server, and I noticed that the response sent by the server includes a "Connection: Keep-Alive" header. As far as I understand, the connection will remain open until the server or the client sends a "Connection: Close" header.
In some implementations, the "Connection: Keep-Alive" header comes with a "Keep-Alive" header setting the connection timeout and the maximum number of consecutive requests sent over that connection.
For example : "Keep-Alive: timeout=15, max=100"
Is there a way (and is it relevant) to set these parameters on an Express server ?
If not, do you know how ExpressJS handles this ?
Edit:
After some investigations, I found out that the default timeout is set in the node standard http library:
socket.setTimeout(2 * 60 * 1000); // 2 minute timeout
In order to change this:
var http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end("Hello World");
}).on('connection', function(socket) {
  socket.setTimeout(10000);
}).listen(3000);
Anyway, it still seems a little odd to me that the server doesn't send the client any hint about its timeout.
Edit2:
Thanks to josh3736 for his comment.
setSocketKeepAlive is not related to HTTP keep-alive. It is a TCP-level option that allows you to detect that the other end of the connection has disappeared.
For Express 3:
var express = require('express');
var app = express();
var server = app.listen(5001);
server.on('connection', function(socket) {
  console.log("A new connection was made by a client.");
  socket.setTimeout(30 * 1000);
  // 30 second timeout. Change this as you see fit.
});
To set keepAliveTimeout on the express server do:
var express = require('express');
var app = express();
var server = app.listen(5001);
server.keepAliveTimeout = 30000;
For Node.js 10.15.2 and newer with Express, setting only server.keepAliveTimeout was not enough; you also need to configure server.headersTimeout to be longer than server.keepAliveTimeout.
server.keepAliveTimeout = 30000;
// Ensure headersTimeout is set higher than keepAliveTimeout due to this nodejs regression bug: https://github.com/nodejs/node/issues/27363
server.headersTimeout = 31000;
Update
Since the regression issue with keep-alive connections has been closed, on the latest Node.js versions setting keepAliveTimeout alone is enough.
One more thing: if your Node.js server is deployed behind an AWS ELB and occasionally returns 502 errors:
Clients -> AWS ELB -> Node Server
AWS ELB has a 60-second connection idle timeout by default, and per the docs:
"We also recommend that you configure the idle timeout of your application to be larger than the idle timeout configured for the load balancer."
Configuring keepAliveTimeout to a value greater than 60 seconds is one way to eliminate this issue.

Long polling getting timed out from browser

I'm trying to serve long polling requests for 60 secs using node.js. The problem I'm facing is, the browser is getting timed out. The same setup is working for 30 secs. Can anybody suggest how to achieve this? Using JQuery as JS framework.
Thanks...
By default, Node.js applies a 2 minute timeout to idle socket connections. You can get around this by explicitly setting the timeout. Here's a quick example:
http.createServer(function (req, res) {
  // Connection now times out after 120 seconds
  req.connection.setTimeout(120000);
  // ... TODO: server logic ...
}).listen(8000);
You can tell Node to hold the connection open indefinitely by setting the timeout to 0. Also note that this idle timeout applies at the socket level, so it affects every connection to the server.
