I have run into a very interesting problem that I cannot seem to solve. It may not actually be a problem at all, but rather something built into node.js. I am having an issue with file descriptors staying around longer than expected after a response has been sent. They seem to persist in the ESTABLISHED state for up to 2 minutes after the last data has been sent. This is causing issues on our production servers: even though we are only serving static files, the number of file descriptors open at one time can be very high. They frequently hit our system limit (Linux) and cause EMFILE errors. I realize we could raise our hard ulimit above 1024, but that feels like a hacky fix for a different issue. It would seem to me that as soon as the socket is closed, the file descriptor should be released.
I am running node.js version 0.10.24.
I have been able to replicate this issue with a small amount of code.
http server:
var http = require("http"),
    url = require("url"),
    path = require("path"),
    fs = require("fs"),
    port = process.argv[2] || 8888;

http.createServer(function(request, response) {
  var uri = url.parse(request.url).pathname
    , filename = path.join(process.cwd(), uri);

  var contentTypesByExtension = {
    '.html': "text/html",
    '.css':  "text/css",
    '.js':   "text/javascript"
  };

  // path.exists was deprecated in favour of fs.exists
  fs.exists(filename, function(exists) {
    fs.readFile(filename, "binary", function(err, file) {
      if (err) {
        response.writeHead(500, {"Content-Type": "text/plain"});
        response.write(err + "\n");
        response.end();
        return;
      }

      var headers = {};
      var contentType = contentTypesByExtension[path.extname(filename)];
      if (contentType) headers["Content-Type"] = contentType;
      response.writeHead(200, headers);
      response.write(file, "binary");
      response.end();
    });
  });
}).listen(parseInt(port, 10));

console.log("Static file server running...");
Make a request to the server. I made a simple request for a static js file. Monitor the file descriptors before and after making the request.
$ lsof -i -n -P | grep node
My results:
After starting server and before request:
node 16787 ncswenson 11u IPv4 0x884c32306a467c65 0t0 TCP *:8888 (LISTEN)
After request completes (up to 2 minutes after request):
node 16787 ncswenson 11u IPv4 0x884c32306a467c65 0t0 TCP *:8888 (LISTEN)
node 16787 ncswenson 12u IPv4 0x884c32306a2773b5 0t0 TCP 127.0.0.1:8888->127.0.0.1:49399 (ESTABLISHED)
Is this how node is supposed to behave, leaving the file descriptor open for minutes after a request? Is there a way to circumvent this? Is there a way to investigate it further?
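For reference, the two-minute lifetime matches the default idle timeout of node's HTTP server, which keeps keep-alive sockets open after a response. A minimal sketch (my own, not from the setup above) of shortening that idle period, assuming node >= 0.9.12 where server.setTimeout() exists:

var http = require("http");

var server = http.createServer(function (request, response) {
    // Alternatively, opt out of keep-alive for a single response:
    // response.setHeader("Connection", "close");
    response.end("ok\n");
});

// The default is 2 minutes; destroy sockets that sit idle for 5 seconds instead.
server.setTimeout(5000);

server.listen(8888);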
I'm trying to create a print server written with Electron and Node.js.
My goal is to catch the body of a print job sent from a POS to an Epson thermal printer.
As far as I understand from Epson's documentation, the printer communicates on TCP port 9100 and UDP port 3289 by default.
So I created a socket server listening on the TCP port with the "net" module.
The socket is established successfully and I also receive some Buffer data.
My question for now is: how can I decode this buffer? It doesn't seem possible with the default encodings Node.js supports.
Or would you recommend using a virtual printer that prints to a file, and then trying to read the data from that file?
Which modules or virtual printers are recommended?
I've been searching for quite a while now without any positive results.
Here is my current code from the net server:
var net = require('net');

var server = net.createServer(function(socket) {
    // note: with setEncoding('utf8') the 'data' callback receives strings, not Buffers
    socket.setEncoding('utf8');
    socket.on('data', function(buffer) {
        var decoded = buffer;
        console.log(decoded);
    });
    socket.on('end', function() {
        socket.end();
    });
});

// this runs in addition to the listener passed to createServer above
server.on('connection', handleConnection);

server.listen(9100, function() {
    console.log('server listening to %j', server.address());
});

function handleConnection(conn) {
    var remoteAddress = conn.remoteAddress + ':' + conn.remotePort;
    console.log('new client connection from %s', remoteAddress);
    // minimal stand-ins for the handlers not shown in the original snippet
    conn.on('data', function onConnData(d) { console.log('[data] from %s: %s', remoteAddress, d); });
    conn.once('close', function onConnClose() { console.log('connection from %s closed', remoteAddress); });
    conn.on('error', function onConnError(err) { console.log('connection error from %s: %s', remoteAddress, err.message); });
}
OK, I've got this running.
The problem was that the cash register system first makes a request for the printer status ("DLE EOT n").
So I responded to the cash register with the corresponding status byte (0x16).
Afterwards the POS sent the print job, which I decoded from CP437 to UTF-8 so my script could capture and read the incoming print request.
Hope this post helps anyone developing anything similar, like kitchen monitors, print servers etc., as I found very little information on the web about this topic.
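In case it is useful to anyone else, here is a minimal sketch of those two steps (answering the status query and decoding the print data). It assumes the third-party iconv-lite module for the CP437 decoding; the original post does not name the library it used.

var net = require('net');
var iconv = require('iconv-lite'); // npm install iconv-lite (assumed, not named in the post)

net.createServer(function (socket) {
    socket.on('data', function (buffer) {
        // DLE EOT n (0x10 0x04 n) is the ESC/POS real-time status request.
        if (buffer.length >= 2 && buffer[0] === 0x10 && buffer[1] === 0x04) {
            socket.write(new Buffer([0x16])); // the status byte used in the answer above
            return;
        }
        // Everything else is treated as print data encoded in CP437.
        console.log(iconv.decode(buffer, 'cp437'));
    });
}).listen(9100);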
I want to serve socket connections from a Flash browser client, and therefore I need to add support for the policy file request protocol. I can't run the policy-file-request service on the default port 843 because of firewalls etc. The only option I have is to serve the protocol on port 80, alongside my HTTP server.
My app is written in node.js and the following code works:
var net = require('net');
var http = require('http');

var httpServer = http.createServer();
net.createServer(function(socket){
    httpServer.emit('connection', socket);
}).listen(80);
I open a socket server on port 80 and for now just emit the connection event on the httpServer; no problem so far. Now I want to check whether the new socket is a policy file request, which simply sends the plain string <policy-file-request /> over the TCP connection. When I see this string I know it isn't HTTP, so I can return the crossdomain file and close the socket. So what I'm trying now is this:
net.createServer(function(socket){
    socket.once('readable', function(){
        var chunk = socket.read(1);
        // chunk[0] === 60 corresponds to the opening bracket '<'
        if (chunk !== null && chunk[0] === 60) {
            // crossdomain contains the policy XML string (defined elsewhere)
            socket.end(crossdomain);
        } else {
            socket.unshift(chunk);
            httpServer.emit('connection', socket);
        }
    });
}).listen(80);
I check whether the first byte is the opening bracket '<' and, if so, write the crossdomain file to the socket. Otherwise I unshift the chunk back onto the stream and emit the socket as a connection on the HTTP server. The problem is that the HTTP server doesn't emit a request event anymore, so my regular HTTP requests aren't handled.
I also tried this solution but with no success either:
httpServer.on('connection', function(socket){
    socket.once('data', function(chunk){
        if (chunk[0] === 60) {
            socket.end(crossdomain);
        }
    });
});
When the socket emits the data event, the readyState of the socket is already 'closed' and a clientError event has already been emitted by the httpServer. I searched everywhere and didn't find a solution. I also don't want to pipe the data through another socket to another port where my HTTP server is listening locally, because I think that adds too much unnecessary overhead. Is there a clean way to do this in node.js? I tested all this on node.js version 0.10.26.
On a basic node http server like this:
var http = require('http');

var server = http.createServer(function(req, res){
    // getConnections is asynchronous and reports the count via its callback
    this.getConnections(function(err, count){
        console.log("number of concurrent connections: " + count);
    });
    //var pendingConnections = ???
});

server.maxConnections = 500;
server.listen(4000);
If I send 500 requests to the server at once, the number of concurrent connections is around 350. With the hard limit set to 500 (and net.Server's backlog too), I want to know how to access the number of pending connections (at most 150 in this example) whenever a new request starts.
So I think I have to access the underlying socket listening on port 4000 to get this information, but so far I have not been able to.
EDIT
Looking at node's http module there is an event called connection, so I think the round trip of a request is as follows:
The client connects to the server socket --> 3-way handshake, the socket ends up in state CONNECTED (or ESTABLISHED?!), then node emits the connection event.
The node http server accepts this pending connection and starts processing the request by emitting request.
So the number of connections has to be at least as big as the number of requests, but with the following example I could not confirm this:
var http = require('http');

var activeRequests = 0;
var activeConnections = 0;

var server = http.createServer(function(req, res){
    activeRequests++;
    res.end("foo"); // plain http has no res.send(); that's Express
});

server.on('connection', function (socket) {
    socket.setKeepAlive(false);
    activeConnections++;
});

setInterval(function(){
    console.log("activeConns: " + activeConnections + " activeRequests: " + activeRequests);
    activeRequests = 0;
    activeConnections = 0;
}, 500);

server.maxConnections = 1024;
server.listen(4000, '127.0.0.1');
Even if I stress the server with 1000 concurrent connections and add a delay to the response, activeRequests is mostly as high as activeConnections. Even worse, activeRequests is often higher than activeConnections. How can this be?
IIRC you can just count how many connections have a status of SYN_RECV for the particular IP and port that you're listening on. Whether you use a child process to execute netstat and grep (or similar utilities) for that information, or write a binding to get it via the *nix C API, is up to you.
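A rough sketch of that approach, shelling out to netstat from node (this assumes a Linux box; netstat output and state names differ between platforms):

var exec = require('child_process').exec;

// Count connections on the given local port that are still in SYN_RECV,
// i.e. the handshake has not completed and node has not accepted them yet.
function countPending(port, cb) {
    exec("netstat -tan | grep ':" + port + " ' | grep -c SYN_RECV",
        function (err, stdout) {
            // grep -c exits non-zero when the count is 0, so ignore err here
            cb(null, parseInt(stdout, 10) || 0);
        });
}

countPending(4000, function (err, pending) {
    console.log('pending connections: ' + pending);
});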
I am trying to enable TCP, HTTP and websocket.io communication on the same port. I started out with the TCP server (the part above the //// line) and it worked. Then I ran the echo server example found on websocket.io (the part below the //// line), and it also worked. But when I try to merge them together, TCP doesn't work anymore.
So, is it possible to enable TCP, HTTP and websockets all on the same port? Or do I have to listen on another port for TCP connections?
var net = require('net');
var http = require('http');
var wsio = require('websocket.io');

var conn = [];

var server = net.createServer(function(client) { // 'connection' listener
    var info = {
        remote : client.remoteAddress + ':' + client.remotePort
    };
    var i = conn.push(info) - 1;
    console.log('[conn] ' + conn[i].remote);

    client.on('end', function() {
        console.log('[disc] ' + conn[i].remote);
    });

    client.on('data', function(msg) {
        console.log('[data] ' + conn[i].remote + ' ' + msg.toString());
    });

    client.write('hello\r\n');
});

server.listen(8080);
server.listen(8080);
///////////////////////////////////////////////////////////
var hs = http.createServer(function(req, res) {
    res.writeHead(200, {
        'Content-Type' : 'text/html'
    });
    res.end(['<script>',
             "var ws = new WebSocket('ws://127.0.0.1:8080');",
             'ws.onmessage = function (data) { ws.send(data); };',
             '</script>'].join(''));
});

hs.listen(server);

var ws = wsio.attach(hs);
var i = 0, last;

ws.on('connection', function(client) {
    var id = ++i, last;
    console.log('Client %d connected', id);

    function ping() {
        client.send('ping!');
        if (last) {
            console.log('Latency for client %d: %d ', id, Date.now() - last);
        }
        last = Date.now();
    }

    ping();
    client.on('message', ping);
});
You can have multiple different protocols handled by the same port but there are some caveats:
There must be some way for the server to detect (or negotiate) the protocol that the client wishes to speak. You can think of separate ports as the normal way of detecting the protocol the client wishes to speak.
Only one server process can be actually listening on the port. This server might only serve the purpose of detecting the type of protocol and then forwarding to multiple other servers, but each port is owned by a single server process.
You can't support multiple protocols where the server speaks first (because there is no way to detect the protocol of the client). You can support a single server-first protocol with multiple client-first protocols (by adding a short delay after accept to see if the client will send data), but that's a bit wonky.
An explicit design goal of the WebSocket protocol was to allow WebSocket and HTTP protocols to share the same server port. The initial WebSocket handshake is an HTTP compatible upgrade request.
The websockify server/bridge is an example of a server that can speak 5 different protocols on the same port: HTTP, HTTPS (encrypted HTTP), WS (WebSockets), WSS (encrypted WebSockets), and Flash policy response. The server peeks at the first character of the incoming request to determine if it is TLS encrypted (HTTPS, or WSS) or whether it begins with "<" (Flash policy request). If it is a Flash policy request, then it reads the request, responds and closes the connection. Otherwise, it reads the HTTP handshake (either encrypted or not) and the Connection and Upgrade headers determine whether it is a WebSocket request or a plain HTTP request.
Disclaimer: I made websockify
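Not websockify's actual code, but here is a minimal node.js sketch of the first-byte peeking described above; the comments mark where a real server would hand the socket on to the appropriate protocol handler.

var net = require('net');

var POLICY_RESPONSE = '<?xml version="1.0"?><cross-domain-policy>' +
    '<allow-access-from domain="*" to-ports="*"/></cross-domain-policy>\0';

net.createServer(function (socket) {
    socket.once('data', function (firstChunk) {
        if (firstChunk[0] === 0x16 || firstChunk[0] === 0x80) {
            console.log('TLS handshake record or SSLv2 hello -> HTTPS/WSS');
            socket.destroy(); // hand off to a TLS-capable handler here instead
        } else if (firstChunk[0] === 0x3c) { // '<'
            console.log('Flash policy request');
            socket.end(POLICY_RESPONSE);
        } else {
            console.log('plain HTTP (possibly a WebSocket upgrade): ' +
                firstChunk.toString('utf8').split('\r\n')[0]);
            socket.destroy(); // hand off to an HTTP server here instead
        }
    });
}).listen(8080);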
Short answer - NO, you can't have different TCP/HTTP/Websocket servers running on the same port.
Longish answer -
Both websockets and HTTP work on top of TCP, so you can think of an HTTP server or websocket server as a custom TCP server (with some state management and protocol-specific encoding/decoding). It is not possible to have multiple sockets bind to the same port/protocol pair on a machine, so the first one wins and the following ones get socket bind exceptions.
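To illustrate the bind exception, a tiny sketch: the second listen() on an already-owned port fails with EADDRINUSE.

var net = require('net');

var first = net.createServer();
first.listen(8080, function () {
    // the port is now owned by `first`, so a second bind attempt fails
    var second = net.createServer();
    second.on('error', function (err) {
        console.log(err.code); // EADDRINUSE
    });
    second.listen(8080);
});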
nginx allows you to run HTTP and websockets on the same port, and it forwards to the correct application:
https://medium.com/localhost-run/using-nginx-to-host-websockets-and-http-on-the-same-domain-port-d9beefbfa95d
I'm building a settings manager for my http server. I want to be able to change settings without having to kill the whole process. One of the settings I would like to be able to change is the port number, and I've come up with a variety of solutions:
Kill the process and restart it
Call server.close() and then do the first approach
Call server.close() and initialize a new server in the same process
The problem is, I'm not sure what the repercussions of each approach are. I know that the first will work, but I'd really like to accomplish these things:
Respond to existing requests without accepting new ones
Maintain data in memory on the new server
Lose as little uptime as possible
Is there any way to get everything I want? The API for server.close() gives me hope:
server.close(): Stops the server from accepting new connections.
My server will only be accessible by clients I create and by a very limited number of clients connecting through a browser, so I will be able to notify them of a port change. I understand that changing ports is generally a bad idea, but I want to allow for the edge-case where it is convenient or possibly necessary.
P.S. I'm using connect if that changes anything.
P.P.S. Relatively unrelated, but what would change if I were to use UNIX server sockets or change the host name? This might be a more relevant use-case.
P.P.P.S. This code illustrates the problem of using server.close(). None of the previous servers are killed, but more are created with access to the same resources...
var http = require("http");

var server = false,
    curPort = 8888;

function OnRequest(req, res){
    res.end("You are on port " + curPort);
    CreateServer(curPort + 1);
}

function CreateServer(port){
    if (server){
        server.close();
        server = false;
    }
    curPort = port;
    server = http.createServer(OnRequest);
    server.listen(curPort);
}

CreateServer(curPort);
Resources:
http://nodejs.org/docs/v0.4.4/api/http.html#server.close
I tested the close() function. It seems to do absolutely nothing. The server still accepts connections on its port. Restarting the process was the only way for me.
I used the following code:
var http = require("http");

var server = false;

function OnRequest(req, res){
    res.end("server now listens on port " + 8889);
    CreateServer(8889);
}

function CreateServer(port){
    if (server){
        server.close();
        server = false;
    }
    server = http.createServer(OnRequest);
    server.listen(port);
}

CreateServer(8888);
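An aside on why close() can look like it does nothing: it only stops new connections, and sockets that are already open (for example a browser's keep-alive connections, as the next answer explains) stay open until they end on their own. A sketch of forcing them shut by tracking the open sockets yourself:

var http = require('http');

var sockets = [];

var server = http.createServer(function (req, res) {
    res.end('hello');
});

// remember every connection so it can be destroyed on shutdown
server.on('connection', function (socket) {
    sockets.push(socket);
    socket.on('close', function () {
        sockets.splice(sockets.indexOf(socket), 1);
    });
});

server.listen(8888);

function shutdown() {
    server.close(function () {
        console.log('all connections gone, port released');
    });
    // without this, idle keep-alive sockets keep the old port occupied
    // until they time out on their own
    sockets.forEach(function (socket) {
        socket.destroy();
    });
}

process.on('SIGINT', shutdown);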
I was about to file an issue on the node GitHub page when I decided to test my code thoroughly to see if it really is a bug (I hate filing bug reports when it's user error). I realized that the problem only manifests itself in the browser, because browsers apparently do some kind of HTTP keep-alive where they can still reach dead ports as long as there's still an open connection to the server.
What I've learned is this:
Browsers keep connections alive (and therefore old ports reachable) unless the server process is killed
Utilities that do not reuse connections by default (curl, wget, etc.) work as expected
HTTP requests in node also don't keep the same kind of persistent connections that browsers do
For example, I used this code to prove that node http clients don't have access to old ports:
Client-side code:
var http = require('http'),
    client,
    request;

function createClient (port) {
    client = http.createClient(port, 'localhost');
    request = client.request('GET', '/create');
    request.end();

    request.on('response', function (response) {
        response.on('end', function () {
            console.log("Request ended on port " + port);
            setTimeout(function () {
                createClient(port);
            }, 5000);
        });
    });
}

createClient(8888);
And server-side code:
var http = require("http");

var server,
    curPort = 8888;

function CreateServer(port){
    if (server){
        server.close();
        server = undefined;
    }
    curPort = port;
    server = http.createServer(function (req, res) {
        res.end("You are on port " + curPort);
        if (req.url === "/create") {
            CreateServer(curPort);
        }
    });
    server.listen(curPort);
    console.log("Server listening on port " + curPort);
}

CreateServer(curPort);
Thanks everyone for the responses.
What about using cluster?
http://learnboost.github.com/cluster/docs/reload.html
It looks interesting!
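(The link above is to the old third-party Learnboost cluster module. As a loose sketch of the same zero-downtime idea using node's built-in cluster module instead, which is an adaptation on my part and not what the linked docs describe:)

var cluster = require('cluster');
var http = require('http');

if (cluster.isMaster) {
    cluster.fork();

    // On SIGHUP, start a fresh worker and gracefully retire the old ones,
    // so updated code takes effect without dropping in-flight requests.
    process.on('SIGHUP', function () {
        var oldWorkers = Object.keys(cluster.workers);
        cluster.fork();
        oldWorkers.forEach(function (id) {
            cluster.workers[id].disconnect(); // stop accepting, finish current requests
        });
    });
} else {
    http.createServer(function (req, res) {
        res.end('handled by worker ' + cluster.worker.id + '\n');
    }).listen(8888);
}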