I have a homework assignment to build an HTTP server using only Node's native modules.
I am trying to protect the server from overloading, so each request is hashed and stored.
If a certain request's count reaches a high number, say 500, I call socket.destroy().
Every interval (one minute) I reset the hash table. The problem is that when I do, a socket that was previously dead starts working again. The only thing I do each interval is requests = {}; I don't touch the connections at all.
Any ideas why the connection is live again? Is there a better function to use than destroy()?
Thanks
Destroying the socket won't necessarily stop the client from retrying the request with a new socket.
You might instead try responding minimally with just a non-OK status code:
if (requests[path] >= 500) {
  res.statusCode = 503;
  res.end();
}
And, on the 503 status code:
The server is currently unable to handle the request due to a temporary overloading or maintenance of the server.
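Putting it together, here is a minimal sketch of this approach; the port, the path-based counting and the threshold are illustrative assumptions based on the question, not code from the original server:

var http = require('http');

var requests = {};

// reset the counters every minute; no need to touch the sockets
setInterval(function () {
  requests = {};
}, 60 * 1000);

http.createServer(function (req, res) {
  var path = req.url;
  requests[path] = (requests[path] || 0) + 1;

  if (requests[path] >= 500) {
    res.statusCode = 503; // Service Unavailable
    res.end();
    return;
  }

  res.end('OK');
}).listen(8080);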
I have developed an application with ReactJS, ExpressJS, MongoDB and SocketIO.
I have two servers: Server A and Server B.
The socket server is hosted on Server A and the application is hosted on Server B.
I am using Server A's socket on Server B as a client.
The main job of Server A's socket is to emit data after fetching it from Server A's MongoDB database.
Everything works as expected, but after 4-6 hours it stops emitting data, even though the socket connection still works.
I have checked using:
socket.on('connection', function () {
  console.log("Connected");
});
I can't figure out what's wrong with the code.
My code: https://jsfiddle.net/ymqxo31d/
Can anyone help me out with this?
It turned out I had some programming errors.
I was getting data from MongoDB inside setInterval(), so after a while it exhausted resources and the database connection started failing every time.
First, I created a single MongoDB connection and reused it everywhere I needed it.
Second, I removed setInterval and used setTimeout as below. (NOTE: setInterval fires on the defined interval regardless of whether the previous data was actually emitted, which also caused the heavy resource usage; I only want to schedule the next emit after the data has been fetched successfully.)
setTimeout(emitData, 1000);

function emitData() {
  // toArray's callback receives (err, data), not just (data)
  db.collection.find({}).toArray(function (err, data) {
    if (err) return setTimeout(emitData, 1000); // retry later on error
    socket.emit('updateData', data);
    // schedule the next fetch only after this one has completed
    setTimeout(emitData, 1000);
  });
}
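For the first point, a rough sketch of what sharing a single MongoDB connection can look like; the connection URL and the mongodb 2.x driver callback style are assumptions, not the actual code from the fiddle:

var MongoClient = require('mongodb').MongoClient;

var db = null; // one shared connection handle, reused by emitData() above

MongoClient.connect('mongodb://localhost:27017/mydb', function (err, database) {
  if (err) throw err;
  db = database; // connect once at startup instead of inside the timer
});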
Recently I've been trying to create a simple file server with Node.js, and it looks like I've run into some problems that I can't seem to overcome.
In short:
I configured iisnode to run 4 worker processes (there is a setting in web.config for this called nodeProcessCountPerApplication="4"), and it balances the load between these workers. When an unhandled exception occurs while one worker is processing a request, another request that is queued on that same worker also fails, even though it was never processed. Setting maxConcurrentRequestsPerProcess="1" doesn't help either: the extra requests are rejected with 503 Service Unavailable instead of being queued (the queue limit is 1000, the IIS default).
The Question
These requests don't have anything to do with each other, so one failing should not take down the other.
Is there a setting in IIS that enables the behavior that I'm after? Or is this even possible to do with node and IIS?
In Long
Why?
I'm using node because I have some other requirements (like logging, etc.) that I can handle in JavaScript fairly easily.
Since I have an ASP.NET MVC background and I'm running Windows, after a few searches I found the iisnode module, which can be used to host a node app with IIS. This makes it easy for me to manage and deploy the application. I also read on many sites that node servers have good performance because of their async nature.
How?
I started with very basic exception-handling logic that catches exceptions using node's domain object:
var http = require('http');
var domain = require('domain');

var server = http.createServer(function (request, response) {
  var d = domain.create();
  d.on('error', function (err) {
    try {
      // stop taking new requests
      serverShutdown();
      // send an error to the request that triggered the problem
      response.statusCode = 500;
      response.end('Oops, there was a problem! ;) \n');
    }
    catch (er2) {
      // oh well, not much we can do at this point
      console.error('Error sending 500!', er2.stack);
      process.exit(1);
    }
  });
  d.add(request);
  d.add(response);
  d.run(function () {
    router.route(request, response);
  });
}).listen(process.env.PORT);
Since I could not find any best practices for gracefully shutting down the server when there is an unhandled exception, I decided to write my own logic. So after server.close() is called, I go through the open sockets and wake them so the server can shut down:
function serverShutdown() {
  // stop accepting new connections
  server.close();
  // nudge every open socket so idle keep-alive connections close quickly
  for (var s in sockets) {
    sockets[s].setTimeout(1, function () { });
  }
}
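The `sockets` collection itself isn't shown above; one way to maintain it is from the server's 'connection' event, roughly like this (a sketch, not the exact original code):

var sockets = {};
var nextSocketId = 0;

server.on('connection', function (socket) {
  var id = nextSocketId++;
  sockets[id] = socket;
  socket.on('close', function () {
    delete sockets[id];
  });
});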
This is also great!
What?
The problem comes when I try to stress-test this. For some reason the cluster module is not supported by iisnode, but it has a similar feature. I configured iisnode to have 4 worker processes (there is a setting in web.config for this called nodeProcessCountPerApplication="4"), and it balances the load between these workers.
I'm not entirely sure how this works, but here's what I figured out from testing:
When there are 8 requests coming in, each worker has 2 requests to process, but when an exception happens in one of the requests that is being processed, the other one that is waiting also fails.
For example:
worker 1 handling request 1, request 5 waiting
worker 2 handling request 2, request 6 waiting
worker 3 handling request 3, request 7 waiting
worker 4 handling request 4, request 8 waiting
If an exception happens while handling request 3, the server responds with my custom error code, shuts down and is restarted by iisnode. But the problem is that request 7 also fails, even though it hasn't been processed.
I tried setting maxConcurrentRequestsPerProcess="1" so that only 1 request goes to a worker at a time, but it does not work the way I want. Requests 5, 6, 7 and 8 are rejected with a 503 Service Unavailable response even though the maximum number of requests that will queue is set to 1000 (the IIS default).
The Question Again
These requests don't have anything to do with each other, so one failing should not take down the other.
Is there a setting in IIS that enables the behavior that I'm after? Or is this even possible to do with node and IIS?
Any help is appreciated!
Update
I managed to rule out iisnode and made the same server using cluster and worker processes.
The problem still persists, and requests that are queued to the worker that had the exception are returned with 502 Bad Gateway.
Again, I don't know what's happening to the requests that are coming in to the server, or at what level they are at the time of the exception. And I can't seem to find any info about this either...
Could anyone point me in the right direction? Or at least tell me where to search for the solution?
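For reference, a simplified sketch of that kind of cluster setup (the actual code differs; the request handler below is just a placeholder):

var cluster = require('cluster');
var http = require('http');

if (cluster.isMaster) {
  // fork 4 workers and respawn any worker that dies after an exception
  for (var i = 0; i < 4; i++) {
    cluster.fork();
  }
  cluster.on('exit', function (worker) {
    console.log('worker ' + worker.process.pid + ' died, restarting');
    cluster.fork();
  });
} else {
  http.createServer(function (request, response) {
    // an unhandled exception in here takes down this worker
    response.end('Hello from worker ' + process.pid + '\n');
  }).listen(process.env.PORT || 8000);
}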
Here is a simple script
var http = require("http");
http.get( WEBSITE, function(res) {
console.log("Does not return");
return;
});
If the WEBSITE variable is 'http://google.com' or 'http://facebook.com', the script does not return to the console,
but if the WEBSITE variable is 'http://yahoo.com' or 'http://wikipedia.org', it returns to the console. What is the difference?
By "return to console" I'm assuming you mean that node exits and drops you back at a shell prompt.
In fact, node does eventually exit for all of those domains you listed. (You were just impatient.)
What you are seeing is the result of HTTP keep-alives. By default, node keeps the TCP connection open after an HTTP request completes. This makes subsequent requests to the same server faster. As long as a TCP connection is still open, node will not exit.
Eventually, either node or the server will close the idle connection (and thus node will exit). It's likely that Google and Facebook allow idle connections to live for longer amounts of time than Yahoo and Wikipedia.
If you want your script to make a request and exit as soon as it completes, you need to disable HTTP keep-alives. You can do this by disabling Agent support.
http.get({ host: 'google.com', port: 80, path: '/', agent: false }, function (res) {
  // ...
});
Only disable the Agent if you need this specific functionality. In a normal, long-running app, disabling the Agent can cause many problems.
There are also some other approaches you can take to avoid keep-alives keeping node running.
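For example, one such approach (a sketch, assuming a node version where socket.unref() is available) is to keep the Agent enabled but unref the response's socket once the request has finished, so the idle keep-alive connection no longer keeps the process alive:

var http = require('http');

http.get('http://google.com/', function (res) {
  res.resume(); // drain the response so 'end' fires
  res.on('end', function () {
    // the keep-alive socket stays open, but no longer keeps node running
    res.socket.unref();
  });
});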
In production, I have a game which uses connection-local variables to hold game state. However, I notice that if I idle on the connection for a certain time, it disconnects and reconnects, which loses the current state. During my tests on localhost, I never noticed this behavior. Is this the normal behavior for socket connections, or is something else causing the connections to drop?
If it is normal behavior, how is this typically handled? Should connection values be stored globally so they can be restored if a user drops and reconnects?
Your problem is around socket timeouts. If there's no activity on a certain socket, socket.io will close it automatically.
An easy (and hackish) fix is to send a heartbeat to the connected client to create activity and stop the socket from timing out.
Server:
function sendHeartbeat() {
  setTimeout(sendHeartbeat, 8000);
  io.sockets.emit('ping', { beat: 1 });
}

io.sockets.on('connection', function (socket) {
  socket.on('pong', function (data) {
    console.log("Pong received from client");
  });
});

// start the heartbeat loop
setTimeout(sendHeartbeat, 8000);
Client:
socket.on('ping', function (data) {
  socket.emit('pong', { beat: 1 });
});
More Information:
You can get more information on configuring socket.io here.
EDIT: Mark commented that if the user does lose the connection (connection drops on his end because of internet troubles), you should be able to restore the user to his last state.
To do that, the best way would be to use an already widely used method for storing user data: cookies and sessions.
An extremely well done tutorial on how to do this is located here. Although he uses express to set cookies, you can do this using anything (I do it using rails). Using this method, you can store the user data in a cookie and fetch it during the handshake. From there you can just access the data using socket.handshake.data.
What you need to do is create or identify the session per (re-)connection. You may reduce the number of reconnections per Moox's answer above, but it is still not failsafe, e.g. a user loses their wifi connection for a bit. In other words, maintain user metadata per session and not per socket, and expect occasional disconnects and reconnects.
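A sketch of that idea: key the game state by a session id sent by the client rather than by the socket, so a reconnect picks up the old state (the event names and the id source are illustrative, and `io` is the socket.io server instance set up elsewhere):

var gameStates = {}; // sessionId -> state, survives disconnects

io.sockets.on('connection', function (socket) {
  // assume the client connects with something like
  // io.connect(url, { query: 'session=' + sessionId })
  var sessionId = socket.handshake.query.session;
  var state = gameStates[sessionId] || (gameStates[sessionId] = { score: 0 });

  socket.on('move', function (data) {
    state.score += 1;            // mutate the per-session state
    socket.emit('state', state); // the socket may change; the state does not
  });
});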
I'm trying to make a simple HTTP server that can be paused and resumed. I've looked at the Node.js API here: http://nodejs.org/docs/v0.6.5/api/http.html
but that couldn't help me. I've tried removing the listener on the 'request' event and adding it back; that worked well, but the listen callback gets called one more time every time I pause and resume. Here is some code I did:
var httpServer = require('http').Server();
var resumed = 0;

function ListenerHandler() {
  console.log('[-] HTTP Server running at 127.0.0.1:2525');
}

function RequestHandler(req, res) {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello, World');
}

function pauseHTTP() {
  if (resumed) {
    httpServer.removeAllListeners('request');
    httpServer.close();
    resumed = 0;
    console.log('[-] HTTP Server Paused');
  }
}

function resumeHTTP() {
  resumed = 1;
  httpServer.on('request', RequestHandler);
  httpServer.listen(2525, '127.0.0.1', ListenerHandler);
  console.log('[-] HTTP Server Resumed');
}
I don't know quite what you're trying to do, but I think you're working at the wrong level to do what you want.
If you want incoming connection requests to your web server to block until the server is prepared to handle them, you need to stop calling the accept(2) system call on the socket. (I cannot imagine that node.js, or indeed any web server, would make this task very easy. The request callback is doubtless called only when an entire well-formed request has been received, well after session initiation.) Your operating system kernel would continue accepting connections up until the maximum backlog given to the listen(2) system call. On slow sites, that might be sufficient. On busy sites, that's less than a blink of an eye.
If you want incoming connection requests to your web server to be rejected until the server is prepared to handle them, you need to close(2) the listening socket. node.js makes this available via the close() method, but that will tear down the state of the server. You'll have to re-install the callbacks when you want to run again.
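A rough sketch of that close()/re-listen approach, re-creating the server on resume so the callbacks are installed exactly once per run (ports and handlers follow the question's code, but this is only an illustration):

var http = require('http');

var server = null;

function start() {
  server = http.createServer(function (req, res) {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('Hello, World');
  });
  server.listen(2525, '127.0.0.1', function () {
    console.log('[-] HTTP Server running at 127.0.0.1:2525');
  });
}

function pauseHTTP() {
  if (server) {
    server.close(); // stop accepting new connections
    server = null;
    console.log('[-] HTTP Server Paused');
  }
}

function resumeHTTP() {
  if (!server) {
    start(); // re-install the callbacks and listen again
    console.log('[-] HTTP Server Resumed');
  }
}

start();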