(Perl -> Node.js) 500: Server closed connection without sending any data back - node.js

My Perl script talks to a Node.js server, sending it the commands that the Node.js server needs to execute. Some commands take little time; others take a lot longer. While a command is executing on the server, there is silence on the connection, and after a while I receive the error: 500: Server closed connection without sending any data back
When this error occurs, the command is still executing on the server and the desired results are produced (the server logs confirm this). My problem is that I don't want the connection to drop, because there are follow-on commands that need to run after these long-running ones. Some commands can take 20 minutes.
Perl Side Code:
my $ua = LWP::UserAgent->new;
$ua->timeout(12000);    # LWP timeout is in seconds
my $uri = URI->new('http://server');
my $json = JSON->new;
my $data_to_json = {DATA};
my $json_content = $json->encode($data_to_json);
# set custom HTTP request header fields and attach the JSON body
my $req = HTTP::Request->new(POST => $uri);
$req->header('Content-Type' => 'application/json');
$req->content($json_content);
my $resp = $ua->request($req);
my $message = $resp->decoded_content;
Node.js code:
var express = require('express');
var http = require('http');
var app = express();
app.use(express.json());
var port = process.env.PORT || 8080;
app.get('<API URL>', function (req, res) {
<get all the passed arguments>
<send output to the console>
});
app.post('<API URL>', function(req, res) {
.
.
.
req.connection.setTimeout(0);
var exec = require('child_process').exec;
var child = exec(command);
});
// start the server
const server = app.listen(port);
server.timeout = 12000;
console.log('Server started! Listening on port ' + port);
I have tried adding a timeout for the server using server.timeout and req.connection.setTimeout(0).
How do I make sure that the connection is not broken?

It's generally a bad idea to have a web server worker carry out long-running tasks. It ties up the worker, and you end up having problems like this.
Instead, have the web worker add a job to some sort of job queue (Perl's Minion is really nice). The queue operates independently of the web server. Clients can poll the server to check on the status of a job and get the output or artifacts when it's complete.
Another advantage of a proper job queue is that you can restart jobs if they fail. The queue knows the job was there. As you've seen, a broken web connection means that it fails and you've probably lost track of those inputs.

Related

HAProxy locks up simple express server in 5 minutes?

I have a really strange one I just cannot work out.
I have been building node/express apps for years now and usually run a dev server just at home for quick debugging/testing. I frontend it with a haproxy instance to make it "production like" and to perform the ssl part.
In any case, just recently ALL my dev servers (different projects) started misbehaving and stopped responding to requests almost exactly 5 minutes after being started. That is ALL of the 3 or 4 I sometimes run on this machine, yet the exact same haproxy instance front-ends the production version of the code, and that has no issues; it's still rock solid. And, infuriatingly, I wrote a really basic express server example: if it's front-ended by the same haproxy it also locks up, but if I switch ports it runs fine forever, as expected!
So in summary:
1x haproxy instance frontending a bunch of prod/dev instances with the same rule sets, all with ssl.
2x production instances working fine
4x dev instances (and a simple test program), ALL locking up after around 5 min when behind haproxy
and if I run the simple test program on a different port so it's local network only, it works perfectly.
I do also have uptime robot liveness checks hitting the haproxy as well to monitor the instances.
So this example:
const express = require('express')
const request = require('request');
const app = express()
const port = 1234
var counter = 0;
var received = 0;
process.on('warning', e => console.warn(e.stack));
const started = new Date();
if (process.pid) {
console.log('Starting as pid ' + process.pid);
}
app.get('/', (req, res) => {
res.send('Hello World!').end();
})
app.get('/livenessCheck', (req, res) => {
res.send('ok').end();
})
app.use((req, res, next) => {
console.log('unknown', { host: req.headers.host, url: req.url });
res.send('ok').end();
})
const server = app.listen(port, () => {
console.log(`Example app listening on port ${port}`)
})
// these timeouts belong on the HTTP server object; setting them on the
// Express app has no effect
server.keepAliveTimeout = (5 * 1000) + 1000;
server.headersTimeout = (6 * 1000) + 2000;
setInterval(() => {
server.getConnections(function(error, count) {
console.log('connections', count);
});
//console.log('tick', new Date())
}, 500);
setInterval(() => {
console.log('request', new Date())
request('http://localhost:' + port, function (error, response, body) {
if (error) {
const ended = new Date();
console.error('request error:', ended, error); // Print the error if one occurred
counter = counter - 1;
if (counter < 0) {
console.error('started ', started); // Print the error if one occurred
const diff = Math.floor((ended - started) / 1000)
const min = Math.floor(diff / 60);
console.error('elapsed ', min, 'min ', diff - min*60, 'sec');
process.exit(1); // forced crash for testing
}
return;
}
received = received + 1;
console.log('request ', received, 'statusCode:', new Date(), response && response.statusCode); // Print the response status code if a response was received
//console.log('body:', body); // Print the HTML for the Google homepage.
});
}, 1000);
works perfectly and runs forever on a non-haproxy port, but only runs for approx 5 min on a port behind haproxy, it usually gets to 277 request responses each time before hanging up and timing out.
The process.exit() call is just a forced crash for testing.
I've tried adjusting some timeouts on haproxy, but to no avail. And each one has no impact on the production instances that just keep working fine.
I'm running these dev versions on a mac pro 2013 with latest OS. and tried various versions of node.
Any thoughts what it could be or how to debug further?
Oh, and they all serve web sockets as well as HTTP requests.
Here is one example of a haproxy config that I am trying (relevant sections):
global
    log 127.0.0.1 local2
    ...
    nbproc 1
    daemon

defaults
    mode http
    log global
    option httplog
    option dontlognull
    retries 3
    timeout http-request 10s
    timeout queue 1m
    timeout connect 10s
    timeout client 4s
    timeout server 5s
    timeout http-keep-alive 4s
    timeout check 4s
    timeout tunnel 1h
    maxconn 3000

frontend wwws
    bind *:443 ssl crt /etc/haproxy/certs/ no-sslv3
    option http-server-close
    option forwardfor
    reqadd X-Forwarded-Proto:\ https
    reqadd X-Forwarded-Port:\ 443
    http-request set-header X-Client-IP %[src]
    # set HTTP Strict Transport Security (HSTS) header
    rspadd Strict-Transport-Security:\ max-age=15768000
    acl host_working hdr_beg(host) -i working.
    use_backend Working if host_working
    default_backend BrokenOnMac

backend Working
    balance roundrobin
    server working_1 1.2.3.4:8456 check

backend BrokenOnMac
    balance roundrobin
    server broken_1 2.3.4.5:8456 check
So if you go to https://working.blahblah.blah it works forever, but the backend for https://broken.blahblah.blah locks up and stops responding after 5 minutes (including direct curl requests bypassing haproxy).
BUT if I run the EXACT same code on a different port, it responds forever to any direct curl request.
The "production" servers that are working are on various OSes like Centos. On my Mac Pro, I run the tests. The test code works on the Mac on a port NOT front-ended by haproxy. The same test code hangs up after 5 minutes on the Mac when it has haproxy in front.
So the precise configuration that fails is:
Mac Pro + any node express app + frontended by haproxy.
If I change anything, like run the code on Centos or make sure there is no haproxy, then the code works perfectly.
So, given that it only stopped working recently: could the latest patch for macOS Monterey (12.6) somehow be interfering with the app's socket when it gets a certain condition from haproxy? It seems highly unlikely, but it's the most logical explanation I can come up with.

Connecting to Websocket in OpenShift Online Next Gen Starter

I'm in the process of trying to get an application which I'd built on the old OpenShift Online 2 free service up and running on the new OpenShift Online 3 Starter, and I'm having a bit of trouble.
The application uses websocket, and in the old system all that was required was for the client to connect to my server on port 8443 (which was automatically routed to my server). That doesn't seem to work in the new setup however - the connection just times out - and I haven't been able to find any documentation about using websocket in the new system.
My first thought was that I needed an additional route, but 8080 is the only port option available for routing as far as I can see.
The app lives here, and the connection is made on line 21 of this script with the line:
this.socket = new WebSocket( 'wss://' + this.server + ':' + this.port, 'tabletop-protocol' );
Which becomes, in practice:
this.socket = new WebSocket( 'wss://production-instanttabletop.7e14.starter-us-west-2.openshiftapps.com:8443/', 'tabletop-protocol' );
On the back end, the server setup is unchanged from what I had on OpenShift 2, aside from updating the IP and port lookup from env as needed, and adding logging to help diagnose the issues I've been having.
For reference, here's the node.js server code (with the logic trimmed out):
var http = require( "http" );
var ws = require( "websocket" ).server;
// Trimmed some others used by the logic...
var ip = process.env.IP || process.env.OPENSHIFT_NODEJS_IP || '0.0.0.0';
var port = process.env.PORT || process.env.OPENSHIFT_NODEJS_PORT || 8080;
/* FILE SERVER */
// Create a static file server for the client page
var pageHost = http.createServer( function( request, response ){
// Simple file server that seems to be working, if a bit slowly
// ...
} ).listen( port, ip );
/* WEBSOCKET */
// Create a websocket server for ongoing communications
var wsConnections = [];
var wsServer = new ws( { httpServer: pageHost } );
// Start listening for events on the server
wsServer.on( 'request', function( request ){
// Server logic for the app, but nothing in here ever gets hit
// ...
} );
In another question it was suggested that nearly anything, including this, could be related to the ongoing general issues with US West 2, but the other related problems I was experiencing seem to have cleared, and that issue has been posted for a week with no update, so I figured I'd dig deeper into this on the assumption that it's something I'm doing wrong rather than them.
Anyone know more about this and what I need to do to make it work?

NodeJs application behind Amazon ELB throws 502

We have a node application running behind Amazon Elastic Load Balancer (ELB), which randomly throws 502 errors when there are multiple concurrent requests and when each request takes time to process. Initially, we tried to increase the idle timeout of ELB to 5 minutes, but still we were getting 502 responses.
When we contacted amazon team, they said this was happening because the back-end is closing the connection with ELB after 5s.
ELB will send HTTP-502 back to your clients for the following reasons:
The load balancer received a TCP RST from the target when attempting to establish a connection.
The target closed the connection with a TCP RST or a TCP FIN while the load balancer had an outstanding request to the target.
The target response is malformed or contains HTTP headers that are not valid.
A new target group was used but no targets have passed an initial health check yet. A target must pass one health check to be considered healthy.
We tried to set our application's keep-alive/timeouts greater than ELB idle timeout (5 min), so the ELB can be responsible for opening and closing the connections. But still, we are facing 502 errors.
Node.js code:
var http = require( 'http' );
var express = require( 'express' );
var url = require('url');
var timeout = require('connect-timeout')
const app = express();
app.get( '/health', (req, res, next) => {
res.send( "healthy" );
});
app.get( '/api/test', (req, res, next) => {
var query = url.parse( req.url, true ).query;
var wait = query.wait ? parseInt(query.wait) : 1;
setTimeout(function() {
res.send( "Hello!" );
}, wait );
});
var server = http.createServer(app);
server.setTimeout(10*60*1000); // 10 * 60 seconds * 1000 msecs
server.listen(80, function () {
console.log('**** STARTING SERVER ****');
});
Try setting server.keepAliveTimeout to something other than the default 5s. See: https://nodejs.org/api/http.html#http_server_keepalivetimeout. Per AWS docs, you'd want this to be greater than the load balancer's idle timeout.
Note: this was added in Node v8.0.0
Also, if you're still on the Classic ELB, consider moving to the new Application Load Balancer as based on current experience this seems to have improved things for us a lot. You'll also save a few bucks if you have a lot of separate ELBs for each service. The downside could be that there's 1 point of failure for all your services. But in AWS we trust :)
In my case, I needed to upgrade the Node.js version:
https://github.com/nodejs/node/issues/27363
After that the problem was fixed.
Change your server.listen() to this:
const port = process.env.PORT || 3000;
const server = app.listen(port, function() {
console.log("Server running at http://127.0.0.1:" + port + "/");
});
You can read more about this here.

node.js http server: how to get pending socket connections?

on a basic node http server like this:
var http = require('http');
var server = http.createServer(function(req, res){
    // getConnections is asynchronous; it hands the count to a callback
    this.getConnections(function (err, count) {
        console.log("number of concurrent connections: " + count);
    });
    //var pendingConnections = ???
});
server.maxConnections = 500;
server.listen(4000);
If I send 500 requests at one time to the server, the number of concurrent connections is around 350, with the hard limit set to 500 (and net.Server's backlog too). I want to know how to access the number of pending connections (max. 150 in this example) whenever a new request starts.
So I think I have to access the underlying socket listening on port 4000 to get this info, but until now I have not been able to get it.
EDIT
Looking at node-http there is an event called connection, so I think the round trip of a request is as follows:
client connects to server socket --> 3-way handshake, the socket ends up in the ESTABLISHED state, then node emits the connection event.
the node http server accepts this pending connection and starts processing the request by emitting request
so the number of connections has to be at least as big as the number of requests, but with following example i could not confirm this:
var http = require('http');
var activeRequests = 0;
var activeConnections = 0;
var server = http.createServer(function(req, res){
activeRequests++;
res.end("foo"); // plain http has no res.send; that's an Express method
});
server.on('connection', function (socket) {
socket.setKeepAlive(false);
activeConnections++;
});
setInterval(function(){
console.log("activeConns: " + activeConnections + " activeRequests: " + activeRequests);
activeRequests = 0;
activeConnections = 0;
}, 500);
server.maxConnections = 1024;
server.listen(4000, '127.0.0.1');
Even if I stress the server with 1000 concurrent connections and add a delay to the response, activeRequests is mostly as high as activeConnections. Even worse, activeRequests is often higher than activeConnections; how can this be?
IIRC you can just count how many connections have a status of SYN_RECV for the particular IP and port that you're listening on. Whether you use a child process to execute netstat and grep (or similar utilities) for that information, or write a binding to get it via the *nix C API, is up to you.

Change port without losing data

I'm building a settings manager for my http server. I want to be able to change settings without having to kill the whole process. One of the settings I'd like to be able to change is the port number, and I've come up with a variety of solutions:
Kill the process and restart it
Call server.close() and then do the first approach
Call server.close() and initialize a new server in the same process
The problem is, I'm not sure what the repercussions of each approach is. I know that the first will work, but I'd really like to accomplish these things:
Respond to existing requests without accepting new ones
Maintain data in memory on the new server
Lose as little uptime as possible
Is there any way to get everything I want? The API for server.close() gives me hope:
server.close(): Stops the server from accepting new connections.
My server will only be accessible by clients I create and by a very limited number of clients connecting through a browser, so I will be able to notify them of a port change. I understand that changing ports is generally a bad idea, but I want to allow for the edge-case where it is convenient or possibly necessary.
P.S. I'm using connect if that changes anything.
P.P.S. Relatively unrelated, but what would change if I were to use UNIX server sockets or change the host name? This might be a more relevant use-case.
P.P.P.S. This code illustrates the problem of using server.close(). None of the previous servers are killed, but more are created with access to the same resources...
var http = require("http");
var server = false,
curPort = 8888;
function OnRequest(req,res){
res.end("You are on port " + curPort);
CreateServer(curPort + 1);
}
function CreateServer(port){
if(server){
server.close();
server = false;
}
curPort = port;
server = http.createServer(OnRequest);
server.listen(curPort);
}
CreateServer(curPort);
Resources:
http://nodejs.org/docs/v0.4.4/api/http.html#server.close
I tested the close() function. It seems to do absolutely nothing; the server still accepts connections on its port. Restarting the process was the only way for me.
I used the following code:
var http = require("http");
var server = false;
function OnRequest(req,res){
res.end("server now listens on port "+8889);
CreateServer(8889);
}
function CreateServer(port){
if(server){
server.close();
server = false;
}
server = http.createServer(OnRequest);
server.listen(port);
}
CreateServer(8888);
I was about to file an issue on the node github page when I decided to test my code thoroughly to see if it really was a bug (I hate filing bug reports when it's user error). I realized that the problem only manifests itself in the browser: apparently browsers do some kind of HTTP keep-alive thing, so they can still reach dead ports because there's still an open connection to the server.
What I've learned is this:
Browsers hold keep-alive connections open, which keeps old ports reachable unless the process on the server is killed
Utilities that do not hold persistent connections by default (curl, wget, etc.) work as expected
HTTP requests in node also don't hold the same kind of persistent connection that browsers do
For example, I used this code to prove that node http clients don't have access to old ports:
Client-side code:
var http = require('http');

function createClient (port) {
    // http.createClient was removed from Node long ago; http.get is the
    // modern equivalent
    http.get({ host: 'localhost', port: port, path: '/create' }, function (response) {
        response.resume(); // drain the body so the 'end' event fires
        response.on('end', function () {
            console.log("Request ended on port " + port);
            setTimeout(function () {
                createClient(port);
            }, 5000);
        });
    });
}
createClient(8888);
And server-side code:
var http = require("http");
var server,
curPort = 8888;
function CreateServer(port){
if(server){
server.close();
server = undefined;
}
curPort = port;
server = http.createServer(function (req, res) {
res.end("You are on port " + curPort);
if (req.url === "/create") {
CreateServer(curPort);
}
});
server.listen(curPort);
console.log("Server listening on port " + curPort);
}
CreateServer(curPort);
Thanks everyone for the responses.
What about using cluster?
http://learnboost.github.com/cluster/docs/reload.html
It looks interesting!